The Rise of AI Scams: How Generative AI Is Changing Online Fraud
Published on January 2, 2025
Generative AI is helping scammers execute fraud on a massive scale. See how they’re doing it and what you need to watch out for.
AI Scams Are Here: Are You Ready?
- Your phone rings, and you immediately recognize the voice of a loved one. They’re in trouble and need urgent help.
- You watch a video clip of a well-known billionaire strongly promoting a new cryptocurrency as a fantastic investment.
- Your social media feed features heartbreaking photos of people affected by a natural disaster. You’re invited to help by making a charitable donation.
As convincing as each of these scenarios may appear, all of them are fake. These scams are designed to fool you completely and get to your wallet. But these are no ordinary scams: they are supercharged by AI!
These scams have become so commonplace, so quickly, that the FBI has issued a Public Service Announcement (PSA) to educate, inform, and protect citizens from being the next victim.
In this blog, we’ll explore how generative AI is being misused by criminals, the types of scams it enables, and why these scams are so dangerous. By understanding these tactics, you’ll be better equipped to recognize and avoid falling victim to them.
Let’s dive into the world of AI-powered scams and discover how criminals are exploiting this groundbreaking technology.
How Is Generative AI Used By Scammers?
Generative AI is a groundbreaking technology with many legitimate applications, from automating tasks to creating art and music. However, its power also makes it a valuable tool for criminals, enabling them to commit fraud on a larger and more convincing scale.
The FBI highlights several ways scammers exploit generative AI:
- Convincing Messages: AI crafts extremely polished phishing emails and social engineering messages.
- Fake Profiles: Criminals mass-produce convincing social media accounts with realistic bios and images to execute romance or investment scams.
- Vocal and Visual Imitation: AI replicates voices and generates lifelike images or videos to impersonate loved ones, authority figures, or public personalities.
- Fraudulent Websites: AI builds professional-looking scam websites complete with chatbots and persuasive content to deceive victims.
Why Generative AI Makes Scams More Dangerous
Generative AI eliminates the usual red flags that have helped us detect fraud in the past, such as poor grammar or crude visuals. It also dramatically increases the volume of output, allowing scammers to reach more victims faster and with greater believability.
Understanding these tactics is the first step in protecting yourself. In the next section, we’ll dive into specific examples of AI-powered scams and how they operate.
4 Types of Fraud Super-charged by AI
1. AI-Generated Text
If you have played around with Gemini or ChatGPT, then you know they instantly produce polished, persuasive, error-free text. While these useful tools have ethical boundaries and limitations built in, other products available to criminals have no such protections. Here are some of the ways scammers use AI-generated text:
- Business Email Compromise (BEC): Scammers are using AI-generated text to impersonate company executives or trusted partners. The goal is to trick employees into transferring money or sharing sensitive information.
- Phishing Campaigns: AI generates convincing emails designed to steal login credentials, payment information, or other sensitive data. These emails often mimic trusted brands or organizations with near-perfect accuracy.
- Fraudulent Websites: AI tools generate professional-looking content for scam websites, making fake cryptocurrency platforms or investment schemes appear legitimate and trustworthy.
2. AI-Generated Images
Just like text, realistic and compelling images can now be created with minimal skill. Scammers leverage such images in many ways, including:
- Fake Profile Photos: AI-generated images are used to create convincing social media profiles used in romance scams or investment fraud. Victims are more likely to trust and engage with profiles that have realistic photos.
- Fraudulent Identification Documents: AI is utilized to forge realistic identification documents, such as employee badges, which are then used in identity theft or impersonation scams.
- Charity Fraud: Scammers solicit donations for fake charities by creating images depicting human suffering caused by natural disasters or other tragic events. Exploiting the natural desire to help others, they direct victims to fake websites where victims enter financial information to make a “donation.”
- Manipulative “Endorsements”: AI-generated images of celebrities or influencers are used to promote counterfeit products, scams, or political agendas. According to CNN, roughly 1 in 10 viral posts analyzed by the News Literacy Project contained fake endorsements by celebrities such as Morgan Freeman, Michelle Obama, and Bruce Springsteen, among others.
3. AI-Generated Audio (Vocal Cloning)
From a voice clip only a few seconds long, AI voice cloning software can replicate a person’s voice with astonishing accuracy, and the longer the sample recording, the better the results. Fraudsters use this technology in many damaging ways:
- Emergency Impersonation: Scammers use AI-generated audio to mimic the voice of a loved one, fabricating crisis scenarios like kidnappings or medical emergencies. For example, an Ontario man was scammed out of $8,000 after receiving a call from what sounded like his fishing buddy, who claimed he had been arrested for texting while driving. It turned out not to be his friend at all.
- Bank Fraud: Criminals clone the voices of account holders to bypass voice authentication systems used by banks and financial institutions. A reporter for BBC News had her voice cloned as part of an investigative report. She used the mimicked voice to successfully bypass voice ID security for two different banks.
- Corporate Espionage and Fraud: Vocal cloning is being used to impersonate executives or other key employees. For instance, a scammer might call an employee while sounding like the company’s CEO, instructing them to transfer money or share sensitive information. Lifelike audio makes these requests highly convincing, lowering defenses and increasing the success rate of attacks.
4. AI-Generated Video
AI-generated videos, often referred to as deepfakes, add a new level of believability to scams by creating highly realistic visuals. These videos are used to manipulate trust and deceive victims in various ways:
- Fake Video Chats: Scammers use AI-powered tools to impersonate executives, law enforcement officers, or personal contacts during real-time video calls. In a widely reported case, criminals in Hong Kong used deepfake technology to impersonate a company executive in a video conference, convincing an employee to authorize transfers totaling roughly $25 million.
- Promotional Fraud: Fraudulent businesses use deepfake videos featuring public figures or influencers endorsing fake products, services, or investment schemes. Victims are persuaded by the apparent credibility of endorsements and are more likely to invest or purchase.
- Personal Impersonation: AI-generated videos are used to “prove” the identity of scammers posing as romantic partners or business contacts. These videos create a false sense of security, making it easier for scammers to execute romance scams or financial fraud.
Make Sure Your Team is Prepared for AI-Powered Attacks
Generative AI has allowed criminals to take a quantum leap forward in how they execute scams. Hyper-realistic fake messages, images, voices, and videos that are difficult to detect create a world of new opportunities for exploitation. These sophisticated tactics are more believable, exploit emotions, and lower defenses, making some types of fraud more dangerous than ever before.
Businesses now face a major challenge in preparing for and defending against these advanced scams. Understanding how AI-driven schemes work better prepares you to detect them and avoid being deceived. Ongoing cyber awareness training is an investment every business owner should prioritize, since an alert, prepared team is your first line of defense against increasingly sophisticated attacks.
So... What's Next?
Knowing about these scams is only half the battle. In the next blog post, we’ll dive into actionable strategies to safeguard yourself, your loved ones, and your business. From creating verification systems to spotting subtle red flags in AI-generated content, we’ll explore the practical steps needed to stay one step ahead of scammers.
Stay informed, stay vigilant, and empower your team to navigate this new era of digital deception with confidence.