Five Ways Criminals Are Exploiting AI

Artificial intelligence has revolutionized productivity across many sectors, including the criminal underworld. Generative AI gives malicious actors powerful tools to operate more efficiently and at a global scale, according to Vincenzo Ciancaglini, a senior threat researcher at Trend Micro. Here are five ways criminals are leveraging AI today:

1. Phishing

Generative AI’s most significant criminal application is phishing: tricking individuals into revealing sensitive information. Mislav Balunović, an AI security researcher at ETH Zurich, notes a surge in phishing emails coinciding with the rise of AI models like ChatGPT. Tools like GoMail Pro integrate ChatGPT, enabling criminals to craft convincing phishing messages in multiple languages, bypassing traditional language barriers and detection mechanisms. Despite OpenAI’s efforts to curb misuse through policies and monitoring, enforcing these restrictions remains challenging.
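
On the defensive side, one classic filter heuristic remains useful even against fluently written phishing mail: flagging links whose visible text names one domain while the underlying href points somewhere else. Below is a minimal, standard-library-only sketch of that check; the HTML snippet and domains are invented for illustration, not taken from any real campaign.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchDetector(HTMLParser):
    """Flags <a> tags whose visible text names a different domain than the href."""

    def __init__(self):
        super().__init__()
        self.in_link = False
        self.current_href = ""
        self.suspicious = []  # (visible_text, actual_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.current_href = dict(attrs).get("href", "")

    def handle_data(self, data):
        # If the anchor text looks like a URL, compare its domain to the real target.
        if self.in_link and "." in data:
            text = data.strip()
            shown = urlparse(text if "//" in text else "//" + text).hostname
            actual = urlparse(self.current_href).hostname
            if shown and actual and shown.lower() != actual.lower():
                self.suspicious.append((text, self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

# Illustrative phishing-style snippet: the text names one site, the link goes elsewhere.
email_html = '<p>Verify now: <a href="http://login.example-scam.net/verify">www.mybank.com</a></p>'
detector = LinkMismatchDetector()
detector.feed(email_html)
print(detector.suspicious)  # [('www.mybank.com', 'http://login.example-scam.net/verify')]
```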

2. Deepfake Audio Scams

The advancement of generative AI has made deepfake audio and video remarkably realistic. In early 2024, a finance worker in Hong Kong was tricked into paying out $25 million to fraudsters after a video call featuring a deepfake of the company’s CFO. Criminals now market deepfake services on platforms like Telegram, and convincing audio deepfakes are particularly problematic because they are cheap to produce and highly believable. High-profile scams in the US have likewise involved fake kidnapping calls built on deepfaked voices.
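
Because voice clones are cheap to produce, the usual defense is procedural rather than technical: confirm any urgent payment request over a second channel that the requester did not supply. The sketch below illustrates that idea in code; `out_of_band_confirm` and `approve_wire` are hypothetical helpers written for this example, not part of any real payments library.

```python
import secrets

def out_of_band_confirm(request_id: str, directory_phone: str) -> bool:
    """Hypothetical second-channel check: call the requester back on a number
    taken from the company directory (never one supplied in the request itself)
    and have them read back a one-time code."""
    code = f"{secrets.randbelow(10**6):06d}"
    print(f"Call {directory_phone} (from the directory) and ask for code {code}.")
    entered = input("Code read back by the caller: ").strip()
    return secrets.compare_digest(entered, code)

def approve_wire(request_id: str, amount: float, directory_phone: str) -> None:
    # Large or unusual transfers never proceed on a single call or video chat alone.
    if amount > 10_000 and not out_of_band_confirm(request_id, directory_phone):
        raise PermissionError(f"Transfer {request_id} blocked: confirmation failed.")
    print(f"Transfer {request_id} for ${amount:,.2f} released.")
```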

3. Bypassing Identity Checks

Criminals are using deepfakes to circumvent “know your customer” (KYC) verification systems used by banks and cryptocurrency exchanges. These systems typically require a photo of the user holding an ID, but deepfake technology can superimpose a fake or stolen ID onto a real person’s face. Such services are being sold on platforms like Telegram, with prices as low as $70 for bypassing identity checks on sites like Binance. Although currently basic, these techniques are expected to evolve, allowing for more sophisticated fraud.
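
Part of why a static “photo holding an ID” check is spoofable is that the attacker knows in advance exactly what image to fabricate. A common hardening step is a randomized liveness challenge generated at session time, which a pre-rendered deepfake cannot anticipate (though real-time face-swapping tools are eroding even this defense). Here is a minimal sketch of the challenge-generation step only; the face-matching and video-analysis it would feed are out of scope:

```python
import secrets

ACTIONS = ["turn your head left", "turn your head right", "blink twice", "smile"]

def make_liveness_challenge() -> dict:
    """Build an unpredictable challenge the user must perform on live video.
    A deepfake rendered before the session cannot know these values."""
    return {
        "session_id": secrets.token_hex(8),
        "actions": [secrets.choice(ACTIONS) for _ in range(2)],
        "spoken_digits": f"{secrets.randbelow(10**4):04d}",  # digits to read aloud
    }

challenge = make_liveness_challenge()
print(f"Session {challenge['session_id']}: please {challenge['actions'][0]}, "
      f"then {challenge['actions'][1]}, and read the digits {challenge['spoken_digits']}.")
```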

4. Jailbreak-as-a-Service

Rather than developing their own AI models, which is costly and risky, criminals are turning to “jailbreak-as-a-service” to bypass safety mechanisms on existing models. Services like EscapeGPT and BlackhatGPT provide anonymized access to language-model APIs and regularly updated jailbreaking prompts, allowing users to generate harmful content. AI companies like OpenAI and Google are engaged in a continuous battle to plug these security gaps, but the dynamic nature of jailbreaking remains a significant challenge.
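
On the provider side, one layer of that battle is screening prompts and outputs with a separate moderation model before they reach, or leave, the main model. Below is a minimal sketch using OpenAI’s moderation endpoint via the openai Python SDK; it assumes an OPENAI_API_KEY is set in the environment, and the exact model name may vary by SDK version.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen(text: str) -> bool:
    """Return True if the moderation model flags the text as policy-violating."""
    result = client.moderations.create(
        model="omni-moderation-latest",  # assumption: current moderation model name
        input=text,
    )
    return result.results[0].flagged

prompt = "user-submitted prompt goes here"
if screen(prompt):
    print("Prompt rejected before reaching the main model.")
else:
    print("Prompt passed moderation; forwarding to the model.")
```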

5. Doxxing and Surveillance

AI language models are increasingly used for doxxing: revealing private, identifying information about someone online. Trained on vast amounts of internet data, these models can infer sensitive details about individuals from seemingly mundane conversations, deducing a person’s location or age from textual clues alone. Researchers including Balunović have demonstrated that models like GPT-4 can make such inferences about real people, posing significant privacy risks, and this capability has already spawned services that exploit AI for surveillance and doxxing.
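
On the user side, a partial mitigation is scrubbing explicit identifiers from text before posting it or pasting it into a third-party model. Pattern matching cannot catch the subtle contextual clues that Balunović’s work targets, but it raises the bar. A minimal sketch:

```python
import re

# Deliberately simple patterns: explicit identifiers only. Contextual clues
# ("my tram stop is next to the old brewery") are what LLM inference exploits
# and cannot be caught by pattern matching.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace explicit identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 012-3456."))
# Reach me at [EMAIL] or [PHONE].
```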

Conclusion

As AI technology advances, so do its applications in the criminal world. Companies should invest in robust data protection and security measures, while individuals should be cautious about the personal information they share online and stay alert to AI-driven scams. Awareness, stringent security practices, and proactive defense remain the best protection against the evolving landscape of AI-enabled crime.
