The rapid development of artificial intelligence (AI) is enabling increasingly sophisticated scams. Cybersecurity experts are urging internet users to be more vigilant as attackers turn to AI-generated text, images, and video, convincing fake content that now powers a wide range of cyberattacks.
The Rise of AI-Powered Scams and Cyberattacks
High-Profile Scams Highlight Growing Risk
In recent weeks, high-profile scams have underscored the risks posed by AI-driven cyberattacks. A romance scam in France saw a woman lose €830,000, while fake donation drives targeted victims of the wildfires in Los Angeles. These incidents show how exposed both individuals and businesses have become to cybercrime.
Phishing and Pretexting: Top Cyberattack Methods
Phishing is one of the most common cyberattack methods: scammers use deceptive emails, texts, or other messages to trick users into sharing sensitive information. Pretexting, a related tactic, involves inventing a plausible backstory to gain a target's trust. Together, the two methods accounted for over 20% of the nearly 10,000 global data breaches recorded last year, as reported in Verizon's 2024 Data Breach Investigations Report.
The Role of AI in Enhancing Phishing Scams
AI Chatbots Make Phishing Scams More Convincing
AI chatbots built on large language models have made phishing scams far more convincing. Attackers now use these tools to craft realistic fake messages quickly, eliminating many of the telltale errors that once made a scam easy to detect.
Eliminating Language Red Flags with AI Technology
AI also helps attackers scrub the language red flags, such as awkward phrasing and grammatical slips, that used to give scams away. With advanced AI tools, scammers can produce emails that mimic a native speaker's writing, making fraudulent messages much harder to spot.
Personalized Scams Powered by AI
Leveraging Stolen Data for Tailored Attacks
AI can analyze vast amounts of stolen data to create personalized scams. This enables attackers to target individuals more effectively, building trust over long periods before exploiting their victims.
The Shift from Human Labor to AI in Scamming
What once required a team of human scammers can now be executed by AI, making personalized scams far more efficient and scalable. The shift is a game-changer for cybercriminals, sharply reducing the time and resources needed to craft convincing attacks.
AI-Generated Deepfakes: A New Threat to Businesses
Deepfakes Deceive Even the Most Vigilant Employees
AI-generated deepfakes have become so sophisticated that it is increasingly difficult for employees to distinguish real video from fake. In one recent case, scammers used deepfakes to trick a finance employee at a multinational company in Hong Kong into transferring $26 million; the employee believed they were on a video call with their CEO and other company leaders.
The Importance of Verifying Videos
Cybersecurity experts are now advising people to approach videos with the same skepticism they apply to images. Verifying videos through trusted channels can help protect against deepfake scams.
How to Protect Yourself from AI-Driven Scams
Use a “Safe Word” Strategy for Personal Communications
Experts recommend agreeing on a "safe word" or shared question in advance to verify identities in personal communications. For example, if a suspicious email or message claims to come from a CEO, asking a question that only that individual could answer helps confirm whether the sender is genuine.
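As a rough illustration only, and not part of the experts' advice, the short Python sketch below shows how a pre-agreed safe word might be checked before acting on a sensitive request. The function names and the choice to store only a salted hash of the safe word are assumptions made for this example.

```python
import hashlib
import hmac
import secrets

def enroll_safe_word(safe_word: str) -> tuple[bytes, bytes]:
    """At setup time, store a salted hash of the agreed safe word
    so the secret itself is never kept in plain text."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", safe_word.encode(), salt, 100_000)
    return salt, digest

def verify_safe_word(response: str, salt: bytes, expected_digest: bytes) -> bool:
    """Check a caller's response against the stored hash using a
    constant-time comparison."""
    candidate = hashlib.pbkdf2_hmac("sha256", response.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, expected_digest)

# Example: agree on a safe word in person, then challenge any urgent request.
salt, digest = enroll_safe_word("blue heron 1987")
print(verify_safe_word("blue heron 1987", salt, digest))   # True  -> proceed
print(verify_safe_word("just send the money", salt, digest))  # False -> stop and verify
```

The real protection here is procedural rather than technical: the safe word is agreed in advance through a trusted channel, and any request that cannot supply it is treated as suspect. The code merely shows where such a check would slot in.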
Asking Video Callers to Perform Simple Tasks
Another useful tactic is to ask video callers to perform tasks that AI has difficulty replicating, such as panning the camera around the room. These small actions can help determine if the person on the other side is legitimate or a deepfake.
Conclusion: Stay Vigilant in the Age of AI
As AI continues to evolve, so do the tactics of cybercriminals. Staying alert, verifying communications, and using proactive measures such as safe words can help protect against the growing threat of AI-powered scams. While AI is a powerful tool for attackers, human awareness remains the best defense against these evolving risks.