Deepfakes: what they are, how they are used and tips to stay safe
Have you heard the term deepfake? Though not exactly a new phenomenon, this technology is increasingly being used by cybercriminals to scam people. According to a 2022 global survey by VMware, attacks involving deepfakes rose 13% compared with the previous year. The most common delivery vector, as you might imagine, is e-mail, through highly targeted phishing campaigns.
But do you actually know what a deepfake is? In essence, it is a technology that uses artificial intelligence and machine learning algorithms to create extremely realistic videos and images showing people doing things they have never done in real life. All you need is an algorithm created for this purpose and a few “samples” of what you’d like to falsify: a voice message, a photo or a video. With that, the machine learning system applies the selected face or voice to existing clips.
Several uses in cybercrime
Deepfakes were initially employed in highly specific situations, generally to spread fake news to manipulate public opinion about politics.
However, as shown by the VMware survey, deepfakes are becoming increasingly popular among cybercriminals, who now use the technology to create highly targeted and realistic phishing campaigns. Imagine receiving an audio message whose tone and cadence are almost identical to those of a supervisor at work, asking you to perform an urgent bank transfer. Or perhaps you even join a fake meeting that accurately simulates your boss’s image. It’s a far better way to trick an employee than a simple fake e-mail, right?
Believe it or not, there have been dozens of reports of deepfakes created with malicious intent proving successful. The CEO of a British energy firm transferred no less than USD 243,000 after receiving an audio message that perfectly simulated the voice of a colleague who was also part of upper management. Naturally, the account receiving the transfer belonged to the fraudster, not the colleague.
Beyond phishing attacks, deepfakes can also be used to extort executives. A cybercriminal can fabricate an embarrassing video using the face of their target and extort them under the threat of releasing the clip.
A tough scam to identify
Unfortunately, deepfakes come in an array of formats and detecting them is still highly challenging. After all, most employees don’t even know that this type of technology exists: according to Kaspersky, more than 65% of Brazilians, for example, have no idea what a deepfake is.
“Cybercriminals are now incorporating deepfakes into their attack methods to evade security controls. They have evolved beyond merely using fake audio and videos to influence people or as part of disinformation campaigns. The latest goal is to use deepfake technology to jeopardize organizations and gain access to them”, explains Rick McElroy, head of cybersecurity strategy at VMware.
Fortunately, specialists have found a simple way to detect a deepfake in video calls: ask the caller to turn sideways. Most algorithms are still unable to convincingly simulate faces in profile. When it comes to voice simulations, the same tip applies as with traditional phishing scams: whenever you receive a suspicious request, verify (in person, if possible) that the demand is real before taking any action.