How artificial intelligence may boost cybercrime

Artificial intelligence (AI) is more than a program trying to behave like a human being. The "intelligence" produced by machine learning and deep learning can handle tasks that would be too complex for a person, yet it also fails disastrously - and even amusingly - in other areas.

That's why the potential of this technology still surprises everyone, for better or for worse. Often it is only after training an AI for a specific purpose that we find out whether it can excel at that task.

Even so, we already have a good idea of how cybercriminals might use artificial intelligence in Internet scams, particularly to improve social engineering.

Deepfakes and conversations with AI

AI is generally very good at imitation. Voices, faces, and even artistic styles can all be copied. That's the origin of deepfakes, i.e. fake content driven by deep learning.

In 2022, US authorities warned that criminals were using deepfakes of executives' voices to arrange virtual meetings with company employees. During these meetings, the criminals requested fraudulent payments and transfers.

In 2024, Hong Kong police reported that a company employee had transferred USD 25 million to criminals. According to the police report, the sum was requested during a video conference - which means that the fraud evolved from voice to video in two years.

Not all scams in this category involve millions of dollars. In fact, any Internet user can end up talking to a scam chatbot without realizing it. There are plenty of scams in which a cybercriminal tries to convince the victim to download dangerous software or transfer money, whether to take advantage of an "opportunity", to help a relative, or to avoid an embarrassing situation - in other words, blackmail.

Bots can conduct these conversations almost autonomously, multiplying the number of victims a single criminal can approach at once.
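To see why automation changes the economics here, consider how little code a generic auto-reply loop requires. The sketch below is a hypothetical illustration, not any real scam kit: it assumes the official openai Python package for text generation, and the get_incoming_messages and send_reply helpers are invented stand-ins for whatever messaging platform would be abused.

```python
# Hypothetical sketch of an automated conversation loop.
# `get_incoming_messages` and `send_reply` are invented placeholders
# for a messaging platform; only the OpenAI client calls are real.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = "You are a friendly assistant."  # a criminal would swap in a convincing persona

def get_incoming_messages() -> list[dict]:
    # Placeholder: a real bot would poll a chat platform's API here.
    return [{"user_id": "user-123", "text": "Hi, who is this?"}]

def send_reply(user_id: str, text: str) -> None:
    # Placeholder: a real bot would call the platform's send API here.
    print(f"-> {user_id}: {text}")

def run_once() -> None:
    for msg in get_incoming_messages():
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": PERSONA},
                {"role": "user", "content": msg["text"]},
            ],
        )
        send_reply(msg["user_id"], response.choices[0].message.content)

if __name__ == "__main__":
    run_once()
```

The takeaway for defenders is the economics: once a loop like this exists, each additional conversation costs the criminal almost nothing, so fluent and patient replies are no longer evidence that a human is on the other end.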

Personalized phishing

Just as bots can hold conversations with humans, AI can be used to tailor a message to each phishing victim. In the past, every victim of a phishing campaign received the same message.

Nowadays, however, even large-scale phishing messages can be personalized by AI. Every interaction or post on a victim's social network can be analyzed automatically to list their interests, identify their profession, and even map their network of friends.

With this data, AI can create a unique message for each target, taking "spear phishing" - fraudulent messages crafted for a specific recipient - to a much larger scale.
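To make the scale argument concrete, here is a minimal, hedged sketch of the underlying mail-merge mechanics. Every profile field and the template itself are invented for illustration; a real campaign would feed scraped social-media data into a language model rather than a fixed template, but the economics are the same - one template or prompt, thousands of unique messages.

```python
# Hypothetical sketch: per-target personalization is just a mail merge.
# All names, fields, and the template are invented for illustration.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    profession: str
    interest: str
    friend: str

# Data like this can be gathered automatically from public social posts.
profiles = [
    Profile("Alice", "nurse", "trail running", "Bob"),
    Profile("Carol", "accountant", "chess", "Dave"),
]

TEMPLATE = (
    "Hi {name}, {friend} mentioned you at the last {interest} meetup. "
    "We're looking for a {profession} and thought of you."
)

for p in profiles:
    # Each recipient receives a unique, plausible-sounding message.
    print(TEMPLATE.format(name=p.name, friend=p.friend,
                          interest=p.interest, profession=p.profession))
```

For readers, the practical lesson is that a message mentioning your job, your hobbies, or a friend's name no longer proves that the sender actually knows you.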

What does AI change in the battle against fraud?

Knowing the categories of fraud and how they work gives us a better idea of how criminals operate. The most important thing to remember is that, whenever you find yourself in a suspicious situation, you should now assume that AI may be behind it.

Before AI and deepfakes, personalized scams with fake images and voices were rare. Unfortunately, you can no longer let your guard down. A video conference alone - especially with someone you've never spoken to - doesn't necessarily prove that a request is legitimate.

Even so, scams tend to give us some indication that something is fishy, mainly because criminals need to convince us to take unexpected actions. Whether it's accessing an unrecognized link or transferring money for a payment we hadn't planned on, fraud often interrupts our routine.

When in doubt, look for independent confirmation that the contact is genuine - for example, by calling the person back on a number you already had. Searching the web for similar scams can help, but you shouldn't dismiss your gut feeling just because you can't find another identical scam. After all, originality is one of the characteristics of AI fraud.

Being aware of this should make you more suspicious of messages or requests that would have seemed legitimate in the past. Your intuition - especially when it involves people you know well - can be your best weapon against these scams.