Social Engineering: Evolving via AI

Social engineering is the art of manipulating people into giving up their personal information or taking actions that compromise their security. It’s a type of cyberattack that relies on human interaction, rather than technical vulnerabilities, to succeed.

How Social Engineers Exploit Vulnerabilities

Social engineers are experts at exploiting human emotions such as fear, trust, and curiosity. They use these emotions to trick victims into doing things they wouldn’t normally do, such as clicking on a malicious link or giving out a password.

Here are some realistic situations where social engineering attacks might happen:

  • You get an email from someone claiming to be from your bank. The email says there’s been suspicious activity on your account and asks you to click a link to verify your information. If you click the link, you land on a fake website that mimics your bank’s. Once you enter your personal information there, the attacker steals it.
  • You’re at the mall and someone in a security guard uniform approaches you. The person says they’re investigating a theft and asks to see your receipt. You hand it over, they walk away with it, and you realize they’ve just stolen the card information printed on it.
  • You’re on the phone with someone who claims to be from your internet service provider. The person says there’s been a problem with your account and asks you to give them your credit card number to pay for the service call. You give them your credit card number and they hang up. A few days later, you realize that they’ve charged your credit card for a service call that you never authorized.

Social engineering attacks can be very effective because they exploit human nature. People are often more trusting than they should be, and they can be easily tricked into giving up their personal information or taking actions that put their security at risk.

How AI is Changing Social Engineering

Artificial intelligence (AI) is a revolutionary development in computer science, and it is poised to become a core component of modern software in the coming years and decades. That shift presents both an opportunity and a threat, and nowhere is this more evident than in social engineering. Threat actors, phishers, and social engineers are always looking for new ways to exploit human vulnerabilities, and AI gives them a powerful new tool to do just that.

Here are some ways that malicious actors advance social engineering through AI:

AI creates more realistic and convincing phishing emails.

Traditional phishing emails often contain grammatical errors and typos that give them away. AI can generate polished, fluent emails that are much harder to spot as fraudulent.

AI can be used to create deepfakes.

Deepfakes are AI-generated or AI-modified videos and audio recordings that make it look or sound like someone said or did something they never did. Attackers can produce deepfakes of CEOs, government officials, or other trusted individuals and use them to trick people into giving up personal information or taking actions that harm themselves or their organization.

AI can be used to carry out voice phishing attacks.

Voice phishing, or “vishing,” is a type of social engineering attack that uses phone calls to trick victims into giving up their personal information. AI can clone realistic-sounding voices to impersonate real people, leading victims to believe they are talking to a legitimate customer service representative or another authority figure.

AI can be used to automate social engineering attacks.

AI can automate the steps involved in a social engineering attack, such as identifying targets, sending phishing emails, and responding to victim inquiries. This makes it much easier for threat actors to carry out large-scale social engineering campaigns.

These are just a few examples of how AI is being used to advance social engineering. As AI continues to develop, it is likely that social engineering attacks will become even more sophisticated and dangerous.

The Effects of Generative AI on Social Engineering

Darktrace, a British-American cyber defense company, recently highlighted a growing threat of individuals falling victim to malicious emails and novel social engineering attacks, a rise that coincides with the increasing adoption of generative AI technologies like ChatGPT. Max Heinemeyer, Chief Product Officer at Darktrace, described the trend in an April 2 blog post, pointing to a 135% surge in “novel social engineering attacks” among active Darktrace/Email customers from January to February 2023, a period that overlaps with the widespread uptake of ChatGPT.

These “novel social engineering attacks” differ significantly from typical phishing attempts, particularly in their linguistic sophistication. Heinemeyer pointed out that generative AI such as ChatGPT lets threat actors craft sophisticated, targeted attacks rapidly and at scale. One significant consequence is that security awareness training loses effectiveness: increasingly authentic-looking and subtle emails make it harder for individuals to distinguish legitimate messages from fraudulent ones.

Heinemeyer also raised concerns about a potential erosion of trust in digital communications, predicting that the spread of malicious generative AI models could lead to further deception. He posed a thought-provoking question: how will workplaces function if employees begin doubting the authenticity of communications, even when talking with colleagues over video calls?

The Possible Future of Cybersecurity and AI

Darktrace sees a future where AI collaborates with human users to enhance email security. Instead of only predicting attacks, AI could analyze user behavior and interaction patterns within email inboxes to identify potential threats. Heinemeyer advocated for a partnership between AI and humans, in which algorithms take on the job of distinguishing malicious from benign communications, lifting that burden from individuals.
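
To make this concrete, here is a minimal sketch of behavior-based email scoring using an isolation forest. It illustrates the general idea only, not Darktrace’s actual system, and the per-email features (sender familiarity, link count, send hour, reply-to mismatch) are hypothetical.

```python
# A toy behavior-based email threat scorer: train an anomaly detector on a
# user's recent (presumed benign) mail, then flag messages that deviate
# from that baseline. Features are hypothetical, chosen for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per email:
# [days since recipient last corresponded with this sender,
#  number of links in the body,
#  hour the email was sent (0-23),
#  1 if the reply-to domain differs from the sender domain, else 0]
history = np.array([
    [0, 1, 9, 0],
    [2, 0, 14, 0],
    [1, 2, 10, 0],
    [0, 0, 16, 0],
    [3, 1, 11, 0],
])

model = IsolationForest(contamination="auto", random_state=42)
model.fit(history)

# A never-seen sender, link-heavy body, odd hour, mismatched reply-to.
suspicious = np.array([[365, 5, 3, 1]])
print(model.decision_function(suspicious))  # lower score = more anomalous
print(model.predict(suspicious))            # -1 marks a likely outlier
```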

This approach shifts the responsibility for email security from humans to AI. It also calls for a more intrusive form of AI to counteract the threats posed by malicious AI and human actors. While not a flawless answer to the misuse of generative AI, this strategy offers businesses and cybersecurity incident response services a straightforward yet substantial way to combat emerging threats.

Facing the Rising Threat of Advanced Social Engineering

The rising use of AI in social engineering scams is reflected in the surge of complaints submitted to the Federal Trade Commission (FTC). In the three months following the launch of ChatGPT, the FTC observed a notable upswing in social engineering-related complaints: reports of imposter scams increased by 34%, while government imposter scams surged by 50%.

This escalating threat underscores the need for organizations to understand these risks and put relevant safeguards in place for themselves and their clients. That begins with robust data protection and vigilant malware detection to prevent unauthorized access to the sensitive data scammers could exploit. Companies must prioritize the secure collection, processing, and storage of consumer information, for example by deploying encryption to protect data both in transit and at rest. Safeguarding consumer data from breaches is the foremost preventive action a business can take.
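
As a concrete illustration of encryption at rest, here is a minimal Python sketch using the cryptography package’s Fernet recipe (authenticated symmetric encryption). The record contents are placeholders, key management is deliberately out of scope, and encryption in transit would be handled separately by TLS at the connection layer.

```python
# A minimal sketch of encrypting a customer record before it is stored.
# In production the key would live in a secrets manager or KMS, never
# alongside the data it protects.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this in a secrets manager, not on disk
f = Fernet(key)

record = b'{"name": "Juan dela Cruz", "card_last4": "4242"}'  # placeholder
token = f.encrypt(record)     # ciphertext that is safe to persist
print(f.decrypt(token))       # recovers the original bytes; key required
```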

To counteract these attacks, businesses should build a multi-layered account security strategy. AI-fueled fraud detection tools can analyze customer data in real time, flagging abnormal login attempts, uncommon transaction patterns, and other indicators of fraud. Additionally, advanced authentication solutions powered by AI and machine learning, such as Incognia, offer stronger account protection by relying on users’ distinct behavioral patterns rather than traditional login credentials. These solutions provide dynamic, continuous authentication that is exceptionally resistant to forgery or replication. Together, these measures can mitigate the risks of social engineering while reducing reliance on credentials prone to theft.
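
The sketch below illustrates the behavioral idea in miniature: score each login against a per-user baseline and challenge the outliers. It is a toy example, not Incognia’s product, and the behavioral traits (typing interval, distance from usual locations, login hour) are hypothetical.

```python
# Score a login event by how far it deviates from the user's trusted
# baseline; large deviations can trigger step-up authentication.
import numpy as np

baseline = np.array([        # recent trusted logins for one user:
    [180, 1.2, 9],           # [typing interval ms, km from usual spots, hour]
    [175, 0.8, 10],
    [190, 2.0, 8],
    [185, 1.5, 9],
])
mean = baseline.mean(axis=0)
std = baseline.std(axis=0) + 1e-9   # avoid division by zero

def risk_score(event: np.ndarray) -> float:
    """Mean absolute z-score of the event against the baseline."""
    return float(np.abs((event - mean) / std).mean())

normal = np.array([182, 1.0, 9])
takeover = np.array([60, 9500.0, 3])  # bot-speed typing, far away, odd hour
print(round(risk_score(normal), 2))    # low score -> allow
print(round(risk_score(takeover), 2))  # high score -> challenge or block
```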

In Conclusion

The growth of AI-aided social engineering scams poses substantial hazards for both enterprises and consumers. These sophisticated attacks can result in financial losses and reputational harm. Although educating users remains important, the responsibility ultimately falls on businesses to ensure the security of customer accounts. Embracing new behavior-based solutions capable of preemptively detecting social engineering scams can set businesses apart, fostering trust with customers. The increasing volume of FTC complaints related to imposter scams underscores the need for individuals and businesses to safeguard themselves against these threats. As AI technology evolves, organizations must stay ahead of the curve, adopting advanced fraud detection and prevention strategies to shield customers and themselves from the damaging effects of AI-empowered fraud.

About IPV Network
Since 2016, IPV Network has been a trusted partner of leading enterprises in the Philippines, bringing best-of-breed cybersecurity solutions. IPV Network helps businesses identify, protect, detect, respond, and recover from cyber threats. Email us at [email protected] or call (02) 8564 0626 to get your FREE cybersecurity posture assessment!

Sources:
https://www.rappler.com/technology/sophisticated-phishing-social-engineering-increase-ai-adoption/
https://www.forbes.com/sites/forbestechcouncil/2023/05/26/how-ai-is-changing-social-engineering-forever/?sh=1e2d8db5321b
https://technative.io/ai-enabled-social-engineering-how-businesses-can-safeguard-their-customers/
