The Dangers of Deepfakes – A Cybersecurity Perspective

A deepfake is a highly realistic, often deceptive piece of media, such as a video, image, or audio recording, created with artificial intelligence (AI) and machine learning. The term "deepfake" combines "deep learning," a subset of machine learning, with "fake."

At its core, deepfake technology analyzes and manipulates existing data, such as photographs or videos, to create new content that convincingly resembles real human behavior. By training on vast amounts of data, the underlying networks learn patterns and features that allow them to replicate a person's appearance, speech, and expressions.

Deepfake techniques can replace someone's face in a video with another person's, a process known as face swapping. Face swapping maps the facial expressions and movements of one person onto another, creating the illusion that the target is saying or doing things they never actually did.

Creating Deepfakes and Their Malicious Uses

The creation of deepfakes involves several steps. First, a large number of images or videos of the target person is collected, covering various poses, expressions, and lighting conditions. Then, using algorithms such as generative adversarial networks (GANs), the model learns to generate new images or videos that closely resemble the target person based on that collection.
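To make the adversarial idea behind GANs concrete, here is a minimal numerical sketch reduced to one dimension: a linear "generator" learns to mimic a Gaussian data distribution while a logistic "discriminator" tries to tell real samples from generated ones. Everything here (the toy distributions, the learning rate, the hand-derived gradients) is illustrative only, nothing like a real deepfake pipeline, which trains deep networks on images.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup: real data ~ N(4, 1); generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.02, 64

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: ascend log D(fake) (the non-saturating generator objective)
    p_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - p_fake) * w * z)
    b += lr * np.mean((1 - p_fake) * w)

# After training, generated samples should cluster near the real mean of 4
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
```

The same push-and-pull, with convolutional networks in place of these two linear models and faces in place of scalars, is what lets a trained generator produce images the discriminator (and eventually a human) cannot distinguish from real ones.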

Deepfakes have become more believable over time due to advancements in AI and access to more data. They can be made with relatively inexpensive hardware and software, resulting in a wider range of users.

While deepfakes have the potential for entertainment purposes, such as in movies or video games, they also raise significant concerns. Deepfakes can be misused to spread disinformation, defame individuals, manipulate political campaigns, or create non-consensual pornography. The ease of creating convincing deepfakes poses challenges for media authenticity and trustworthiness.

Forms of Deepfake Cyberattacks

When it comes to cyberattacks, there are several malicious ways to use deepfakes:

Fraudulent Activities

Deepfakes can be used for financial scams, such as impersonating a high-ranking executive or a trusted individual to trick employees or clients into authorizing fraudulent transactions. In a notable incident in 2019, criminals deceived the CEO of a UK-based energy firm with an AI-generated voice impersonating his boss. The convincing audio deepfake allowed the fraudsters to pressure the CEO into urgently transferring €220,000 to a Hungarian bank account. The voice's apparent authenticity, down to the subtle German accent and melody of speech, convinced the victim to comply.

Disinformation and Influence Campaigns

Deepfakes can spread false information or manipulate public opinion. By creating convincing videos of politicians, celebrities, or other influential figures saying or doing controversial things, deepfake users can cause confusion, damage reputations, or sway public sentiment.

On June 5, 2023, a blue-badged Twitter account called DeSantis War Room posted a campaign video targeting former US President Donald Trump and his relationship with Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases (NIAID). Digital forensics experts and fact-checkers discovered that some of the images in the video were likely deepfakes created using artificial intelligence and machine learning software.

While the video contained some real images, including ones sourced from Getty Images, the National Institutes of Health, and Reuters, deepfakes generated using AI were also present. Digital forensics analysts noticed irregular characteristics in these images, such as inconsistent textures and blurry elements. The inclusion of text under an image of the White House also raised suspicions, as AI image generators struggle to render legible text.

Espionage and Sabotage

It is possible to use deepfakes for espionage by creating compromising videos or audio recordings of targeted individuals. These can be used to blackmail or discredit those individuals, potentially gaining access to sensitive information or disrupting operations.

Consider the DeSantis War Room video above. Its purpose was to turn public opinion against the former president, but a deepfake creator could alter the same material into compromising footage. Deepfakes can make any individual appear to be a "captured spy," and attackers can release fabricated pictures, videos, or audio that pose a risk to national security.

Social Engineering

Deepfakes can improve the effectiveness of social engineering attacks. By impersonating someone known and trusted by the target, like a colleague, friend, or family member, attackers can trick individuals into sharing sensitive information or granting unauthorized access to systems.

A person with malicious motives can harvest pictures, videos, and audio from any publicly available social media account. Targets are most likely celebrities or social media influencers, but any individual is at risk. The attacker can then use AI to generate a deepfake of that person's face, movements, and voice, and use it to gain access or information, or even steal from financial institutions by impersonating deceased account holders. And it doesn't stop there. Deepfakes can use AI to create hyper-realistic versions of non-existent individuals, entirely synthetic digital identities. Without proper training and verification processes, this becomes a huge problem, making cybersecurity all the more important in the insurance industry.

Deepfakes used in malicious attacks have increased in recent years. During the Black Hat USA 2022 event, reports claimed that cyberattacks had risen since the start of the Russia-Ukraine conflict, with deepfake attack methods seeing a 13% increase compared to 2021. This poses a growing challenge for cybersecurity, as deepfake-assisted attacks can be paired with destructive malware such as HermeticWiper, which renders Windows devices unusable.

Protecting Against Deepfakes

To combat the negative effects of deepfakes, researchers are working on developing detection tools that can identify manipulated media. These tools use similar AI techniques to analyze visual or audio cues that hint at the presence of deepfakes, such as inconsistencies in facial expressions and unnatural movements.
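As a toy illustration of that idea, the sketch below flags an image whose "face" region is unnaturally smooth compared to the rest of the frame, a crude stand-in for the blending artifacts real detectors look for. The synthetic images, the face-box coordinates, and the sharpness heuristic are all invented for illustration; production detectors use trained neural networks, not a single hand-written ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

def highfreq_energy(patch):
    # Crude high-frequency estimate: squared differences between neighbouring pixels
    dx = np.diff(patch, axis=1)
    dy = np.diff(patch, axis=0)
    return float(np.mean(dx**2) + np.mean(dy**2))

def blend_suspicion(image, face_box):
    # Compare sharpness inside the (hypothetical) face region against the whole frame;
    # a ratio much greater than 1 suggests the face was pasted in and over-smoothed
    y0, y1, x0, x1 = face_box
    inside = highfreq_energy(image[y0:y1, x0:x1])
    overall = highfreq_energy(image)
    return overall / (inside + 1e-9)

# Synthetic frames: pure noise ("real"), and one with a smoothed centre imitating a blended face
real = rng.normal(size=(64, 64))
fake = real.copy()
fake[16:48, 16:48] = 0.2 * rng.normal(size=(32, 32))  # lower-variance => smoother region

score_real = blend_suspicion(real, (16, 48, 16, 48))
score_fake = blend_suspicion(fake, (16, 48, 16, 48))
```

Real detection systems combine many such cues, including temporal ones like blinking and lip-sync consistency, and learn them from labeled data rather than hand-coding them.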

Biometric systems that verify unique features such as fingerprints, irises, faces, and voices offer a robust defense against deepfakes. These systems authenticate individuals through live biometric data, which a deepfake impersonator struggles to provide.
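One common way such systems decide a match, sketched below under the assumption of a pre-trained encoder that turns a face or voice capture into a fixed-length embedding: compare the live sample's embedding to the enrolled template with cosine similarity and a threshold. The embeddings here are random stand-ins, not outputs of any real biometric model, and the threshold is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(template, sample, threshold=0.8):
    # Accept the identity claim only if the live sample is close to the enrolled template
    return cosine_similarity(template, sample) >= threshold

# Stand-in "embeddings" (a real system would use a trained face/voice encoder)
enrolled = rng.normal(size=128)                    # template captured at enrolment
genuine = enrolled + 0.1 * rng.normal(size=128)    # same person, new live capture
impostor = rng.normal(size=128)                    # different person or replayed deepfake
```

In practice the comparison is paired with liveness detection (challenge prompts, depth sensing, micro-movement checks), since the embedding check alone could be fed a sufficiently good replay.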

It’s important to note that deepfake technology itself is not inherently malicious. It can have positive applications in fields such as entertainment, education, and creative expression. However, the potential for abuse and the associated risks highlight the need for countermeasures to prevent the negative impacts of deepfakes.

As deepfake technology continues to advance, it is crucial to be cautious when seeing media online. Recognizing the threats just mentioned, adopting best practices, and maintaining a proactive security approach are key to ensuring personal and organizational safety.

To safeguard all members of your organization from deepfake threats, it is advisable to seek reliable biometric authentication solutions from trusted security partners.

About IPV Network
Since 2016, IPV Network has been a trusted partner of leading enterprises in the Philippines, bringing best-of-breed cybersecurity solutions. IPV Network helps businesses identify, protect, detect, respond, and recover from cyber threats. Email us at [email protected] or call (02) 8564 0626 to get your FREE cybersecurity posture assessment!

Sources:
https://www.newsweek.com/desantis-war-room-deepfake-attack-trump-lays-bare-ai-threat-elections-1805303
https://q5id.com/blog/the-serious-dangers-that-deepfakes-cause-financial-institutions
