AI helps us get information faster, but it’s also making it easier for cybercriminals to get their hands on your personal data. If you’ve been the target of any type of cybercrime, fraud or attempted fraud in the past 12 months, chances are AI played a role in it.
AI-generated content contributed to more than $12 billion in fraud losses in 2023, according to a Deloitte digital fraud study. That number could triple to more than $40 billion in the US by 2027.
Artificial intelligence is already woven into our daily lives. Need a recipe? Ask ChatGPT. Run a Google search? You’ve probably been shown an AI-generated summary at the top of your results. Even financial apps are starting to use AI to help you stick to your budget or find ways to save.
With some of the world’s most advanced AI technologies now in the hands of cybercriminals, we’re witnessing a whole new world of digital fraud… one we all need to better prepare for.
Protect your data and keep your identity safe with Aura.
The details
Artificial intelligence helps hackers to overcome their limits
Cybercriminals have only so many hours in the day and only so many people they can hire to help carry out their schemes. AI helps to solve both of these problems.
Now, a handful of coded instructions is often all it takes to create a global phishing campaign, one that can be translated into multiple languages and scrubbed of many of the clues that the message you’re reading is a scam. AI can fix bad grammar, correct spelling mistakes, and rewrite awkward greetings to make phishing messages seem more legitimate.
AI can also help cybercriminals better orchestrate phishing attacks around a specific industry or company, or around a specific event, such as a conference, trade show or national holiday.
Researchers at the University of Illinois Urbana-Champaign recently used voice-activated AI bots to carry out some of the most common scams reported to the federal government, safely returning the money to their victims afterward.
In some of the scams, the bots not only had a success rate of more than 60%, but they were also able to pull off the scam within seconds.
How scammers use AI to steal from you
AI helps criminals wade through trillions of data points quickly; previously, they had a much harder time working through the reams of data (think billions of personal records) stolen in data breaches or purchased on the dark web.
Fraudsters can now use AI to spot exploitable patterns and other valuable information in those large data sets, and to help orchestrate attacks. AI is also strengthening other forms of fraud.
Synthetic identity fraud
Synthetic identity theft involves stealing a Social Security number (usually from a child, an elderly person, or a homeless person) and combining it with other stolen or forged information, such as names and dates of birth, to create a new, fake identity.
Hackers then use this fake identity to apply for loans, leaving the original owner of the SSN with the bill.
Artificial intelligence helps facilitate this popular form of fraud by making it much easier to create highly realistic fake identity documents and synthetic images that mimic real faces and can bypass biometric verification systems like those found on an iPhone.
Deep deceptions
An AI-assisted deepfake scam occurred roughly every five minutes in 2024, according to an estimate by security firm Entrust.
There are countless stories of fraudsters using AI to successfully scam businesses and ordinary people out of millions of dollars. Bad actors use very realistic but completely fake videos and voices of people the victims know, which can fool even the most cautious among us.
Less than a year ago, an employee at Arup, a British design and engineering firm, was duped into transferring $25 million to fraudsters who used a deepfake video call to impersonate the company’s CFO.
Artificial intelligence isn’t just cloning voices and faces; it’s also capable of duplicating human personalities, according to a recent study by researchers at Stanford University and Google DeepMind.
With little information about their subjects, the AI models were able to mimic political beliefs, personality traits, and likely answers to questions, the study found, all details that could be used to trick victims.
These results, coupled with advances in deepfake video and voice cloning already in use by cybercriminals, can make it even harder to tell whether the person you’re talking to online or on the phone is real or an AI-generated fake.
AI can copy your important documents
Despite the world’s reliance on technology, physical documents are still the primary way to verify your identity.
Artificial intelligence has become adept at creating convincing forgeries of passports, driver’s licenses, birth certificates, and more, pushing businesses and governments to find better ways to confirm identities in the future.
How to protect yourself from AI-assisted scams
The same precautions that protect you from human-run scams can also protect you from AI-assisted ones. That means staying vigilant, protecting your bank accounts with multiple layers of security, using multi-factor authentication, freezing your credit and monitoring your credit reports, and enrolling in identity theft protection.
Here are other tips to stay safe:
- As AI becomes more powerful and more widely used, it’s even more important to critically examine what you see and hear.
- Always verify any correspondence you receive directly with the company it claims to come from before acting on it.
- Confirm that what you see online is true before sharing it, so you don’t unintentionally spread misinformation.
- For an additional layer of protection against phishing attacks, use a hardware security key, such as a YubiKey or Google Titan key. These keys cost around $30.
- If you’re not already using a password manager, it might be time. 1Password, Dashlane, and LastPass are among the most popular, and they’ll help you create a unique password for every online account.
- Watch for signs of a fake. AI-generated voices can sound unnaturally smooth or monotone, lacking the emotion of a typical human conversation. With deepfake videos, watch for unusual eye, mouth, or lip movements, facial distortions, and pixelation.
AI-assisted scams will only become more convincing as the technology advances. Staying aware of common scam tactics and applying common sense and caution remain your best defense against these attempts.