AI Cyber-Attacks: The Growing Threat to Cybersecurity and Countermeasures

In the information technology world, it is well known that digital advancements rarely live up to the extravagant depictions in science fiction literature and film. Popular media often imagines a future plagued by robot uprisings and AI-driven cyber-attacks, and the reality is not nearly so extreme. Ask most people whether AI cyber-attacks actually occur, and the answer would likely be skeptical. The truth, however, is that AI cyber-attacks do happen and are growing increasingly common. Although the presence of AI algorithms in the online landscape may not be obvious, their impact on the cybersecurity industry is significant.

Consequently, AI cyber-attacks are a growing security threat, not only for major government agencies but also for ordinary individuals. Hackers have posed a persistent problem since the inception of the Internet, but their reach and capacity to steal vast amounts of data have become increasingly formidable.

Instances of AI Cyber-Attacks

One noteworthy incident is the AI-assisted cyber-attack on TaskRabbit, an online platform connecting freelance handymen with clients. In April 2018, 3.75 million of its users had their Social Security numbers and bank account details compromised. Hackers employed a massive AI-controlled botnet to carry out a devastating distributed denial-of-service (DDoS) attack on TaskRabbit’s servers, leading to the temporary suspension of the entire site. Regrettably, while the servers were down, 141 million more users fell victim to the attack.

Furthermore, WordPress disclosed a series of extensive botnet brute-force attacks on self-hosted WordPress websites, resulting in over 20,000 infected sites. Botnet-style attacks of this kind can grant hackers access to users’ personal information and credit card numbers, and the incident eroded faith in WordPress among users, including those on reputable hosting services.

In 2019 alone, the popular social media platform Instagram experienced two cyber-attacks. Starting in August, a number of Instagram users found their account information shared by hackers, locking them out of their own profiles. In November, a bug in Instagram’s code led to a data breach that exposed users’ passwords in their browsers’ URLs, posing a substantial security risk. While Instagram has yet to provide detailed information about the breaches, speculation suggests that hackers are leveraging AI systems to scan user data on the platform for potential vulnerabilities.

Overall, it is clear that AI-assisted attacks will only escalate, stemming both from botnet attacks and from the proliferation of general malware. A seemingly minor security breach now has the potential to be disastrous. Even if one follows the fundamentals of internet security, such as employing firewalls, regularly checking for malware, using a well-maintained content management system like WordPress, and keeping an experienced cybersecurity team, hackers armed with the necessary technology and expertise will capitalize on any remaining vulnerabilities.

AI’s Role in Disinformation Campaigns and DDoS Attacks

The surge of AI-driven attacks is increasingly evident in everyday life, particularly on platforms like Twitter. Accusations between political parties regarding the use of “bots” to distort arguments or inflate follower counts have become commonplace. Bots themselves are not inherently a threat; they are widely employed by companies and services to enhance customer engagement and guide users through websites. But their sophistication is on the rise, and despite machines historically struggling to pass the Turing test, it has become increasingly difficult to tell bots apart from real people. Google’s recent advancements in AI-generated audio and video further exemplify this trend.

These bots can easily be exploited for disinformation campaigns, flooding Twitter threads with false posts to sway arguments. They can also launch DDoS attacks on adversaries’ computers and networks. While this form of attack has existed for decades, with notorious instances like hackers disabling the PlayStation Network, bots specializing in spam on platforms like Facebook and Twitter now often outperform humans.

Exploiting AI-generated Content

A concerning trend has emerged in online scams: hackers are using AI-generated YouTube videos to deceive unsuspecting individuals into unwittingly downloading disguised malware. Use of this technique has surged by 200-300% month over month, resulting in widespread victimization. It is critical to understand the mechanics of this scam and adopt measures to keep one’s personal information from being compromised.

According to a comprehensive report by CloudSEK, a prominent IT security intelligence firm, there has been a notable spike in YouTube videos whose descriptions contain links to harmful stealer malware, including but not limited to Raccoon, Vidar, and RedLine. These videos cunningly pose as tutorials for obtaining pirated versions of software such as AutoCAD, Autodesk 3ds Max, Photoshop, and Premiere Pro, which are exclusive to licensed users.

Previously, hackers relied on screen recordings and textual instructions within their videos to hide their identities, avoiding personal appearances or vocal dialogue. Those elements, however, made the quality of their videos questionable. The report reveals a worrisome development: hackers have turned to AI-generated videos featuring virtual individuals conversing in multiple languages, creating an illusion of authenticity and reliability. These videos appear mostly on YouTube but have also spread across popular social media platforms, including Facebook, Instagram, and Twitter. Enticed viewers are lured into downloading a seemingly free application, conveniently linked in the video description.

Sadly, the promoted application is, in actuality, data-stealing malware. Upon installation, it harvests the user’s entire array of data, including sensitive financial information, and transmits it to the hackers. Users thus find themselves at considerable risk, as their personal and confidential data become vulnerable.

Steps To Prevent Content-Related AI Scams

To fortify one’s defenses against succumbing to such online scams, it is vital to follow these important guidelines:

  1. Avoid searching for free versions of software that are exclusively available for purchase, as such downloads are likely to contain malware and viruses.
  2. Exercise utmost caution and do not download any content or click on links within videos originating from unfamiliar or untrustworthy sources.

By embracing these precautions, individuals can protect themselves from falling prey to these online scams.

Weaponizing Machine Learning

Advancements in machine learning and artificial intelligence (AI) have greatly contributed to the field of cybersecurity. Security teams today face the overwhelming challenge of sifting through vast amounts of data on potentially suspicious activity, akin to finding needles in haystacks. AI aids defenders in identifying genuine threats within this data thanks to its ability to recognize patterns in network traffic, malware indicators, and user behavior.
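
To make this concrete, here is a minimal sketch of the kind of pattern recognition described above, using scikit-learn’s IsolationForest to flag anomalous network flows. The feature set and numbers are illustrative assumptions, not any vendor’s actual pipeline.

```python
# Minimal sketch: AI-assisted anomaly detection on network traffic.
# Features and data are synthetic and illustrative; real deployments
# use far richer telemetry (flow logs, process events, user baselines).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: [bytes_sent, bytes_received, connections_per_min]
normal = rng.normal(loc=[500, 2000, 10], scale=[100, 400, 3], size=(1000, 3))

# A few anomalous flows, e.g. large exfiltration bursts to a single host
anomalies = np.array([[50000, 100, 300], [80000, 50, 450]])

# Train on historical traffic assumed to be mostly benign
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns 1 for inliers and -1 for outliers
print(model.predict(anomalies))   # expected: [-1 -1]
print(model.predict(normal[:5]))  # mostly:   [ 1  1  1  1  1]
```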

Unfortunately, attackers also utilize AI and machine learning to their advantage. The accessibility of cloud environments has made it effortless for them to delve into AI and construct sophisticated learning models.

Let us look at how hackers exploit AI and machine learning to target enterprises. We will also explore strategies for preventing AI-focused cyber-attacks.

Three tactics employed by attackers using AI against defenders are as follows:

#1 Testing malware against AI-based tools

Attackers employ machine learning in various ways. The simplest approach involves building their own machine learning environments, then testing malware and attack methodologies against them to figure out the specific events and behaviors that defenders look for.

For instance, a highly sophisticated piece of malware may modify local system libraries and components, execute processes in memory, and communicate with domains owned by the attacker’s control infrastructure. The combination of these activities is known as tactics, techniques, and procedures (TTPs). Machine learning models can observe TTPs and use them to develop detection capabilities.

By observing and predicting how security teams detect TTPs, adversaries can subtly and frequently modify indicators and behaviors to stay one step ahead of defenders who rely on AI-based tools for attack detection.
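
As a rough illustration of this tactic, the sketch below trains a stand-in detector on hypothetical TTP flags, then shows how an attacker could iterate offline, dropping one indicator at a time (say, rotating to a fresh command-and-control domain) until the sample slips past. The features, data, and detector are all assumptions made for demonstration, not any real product’s model.

```python
# Illustrative sketch of tactic #1: an attacker trains a local stand-in
# detector, then tweaks a sample's observable behaviors (TTPs) until it
# is no longer flagged. The four hypothetical boolean features are:
# [modifies_system_libs, runs_in_memory, contacts_known_c2, signed_binary]
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

# Synthetic training set mimicking what the attacker believes defenders use
X = rng.integers(0, 2, size=(500, 4))
# Label "malicious" when classic TTPs co-occur (a deliberately naive rule)
y = ((X[:, 0] & X[:, 2]) | (X[:, 1] & X[:, 2])).astype(int)

detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

sample = np.array([[1, 1, 1, 0]])   # noisy malware: should be flagged
print("original flagged:", bool(detector.predict(sample)[0]))   # True

# Attacker iterates: drop the known-C2 indicator (e.g. use a fresh domain)
evasive = np.array([[1, 1, 0, 0]])
print("evasive flagged: ", bool(detector.predict(evasive)[0]))  # False
```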

#2 Poisoning AI with misleading data

Attackers also exploit machine learning and AI by compromising training environments, injecting inaccurate data into AI models. Machine learning and AI models depend on accurately labeled data samples to construct precise and reproducible detection profiles. Attackers can introduce benign files that resemble malware or generate false-positive behavior patterns, tricking AI models into accepting malicious actions as harmless. They can also contaminate AI models during training by introducing malicious files that have been wrongly labeled as safe.
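
As a hedged illustration with synthetic data, the sketch below flips the labels on a fraction of malicious training samples and compares how many fresh malicious samples clear a realistic alerting threshold before and after poisoning. The features, poisoning rate, and threshold are all assumptions for demonstration.

```python
# Sketch of tactic #2, data poisoning via label flipping: malicious
# samples slipped into training labeled "benign" erode the detector's
# confidence, so fewer real attacks clear the alerting threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

benign = rng.normal(0.0, 1.0, size=(500, 5))
malicious = rng.normal(2.0, 1.0, size=(500, 5))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

clean = LogisticRegression(max_iter=1000).fit(X, y)

# Poison: relabel 40% of the malicious samples as benign before training
y_poisoned = y.copy()
flipped = rng.choice(np.arange(500, 1000), size=200, replace=False)
y_poisoned[flipped] = 0
poisoned = LogisticRegression(max_iter=1000).fit(X, y_poisoned)

# Fraction of fresh malicious samples scoring above an alert threshold;
# the poisoned model's rate is typically far lower than the clean one's
test_mal = rng.normal(2.0, 1.0, size=(200, 5))
threshold = 0.9
print("clean alert rate:   ", (clean.predict_proba(test_mal)[:, 1] > threshold).mean())
print("poisoned alert rate:", (poisoned.predict_proba(test_mal)[:, 1] > threshold).mean())
```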

#3 Mapping existing AI models

Attackers actively try to map the existing and emerging AI models used by cybersecurity providers and operational teams. By understanding how these models work, adversaries can disrupt machine learning operations and manipulate models during their training and update cycles, influencing the models to favor their own tactics. They can also evade recognized models entirely by subtly altering data so that it no longer matches known detection patterns.
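
One way to picture this evasion is a black-box probing loop: the attacker repeatedly queries a detector with slightly modified samples and keeps whichever change lowers the malicious score, as in the sketch below. The model and features are synthetic stand-ins assumed purely for demonstration.

```python
# Sketch of tactic #3: treating a deployed detector as a black box and
# nudging a malicious sample, one small step at a time, until its
# predicted label flips to benign. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
benign = rng.normal(0.0, 1.0, size=(300, 4))
malicious = rng.normal(3.0, 1.0, size=(300, 4))
detector = LogisticRegression(max_iter=1000).fit(
    np.vstack([benign, malicious]), np.array([0] * 300 + [1] * 300)
)

sample = np.array([3.0, 3.0, 3.0, 3.0])  # clearly malicious at first
step, queries = 0.25, 0

# Greedy evasion: probe which single-feature nudge lowers the malicious
# score most, apply it, and repeat until the detector says "benign"
while detector.predict([sample])[0] == 1:
    scores = []
    for i in range(4):
        probe = sample.copy()
        probe[i] -= step
        scores.append(detector.predict_proba([probe])[0, 1])
        queries += 1
    sample[int(np.argmin(scores))] -= step

print(f"evaded after {queries} queries; final sample: {sample.round(2)}")
```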

Defending against AI-focused attacks is exceedingly challenging. Defenders must ensure that the data used to train learning models and develop detection patterns is labeled accurately. Although this ensures the accuracy of label identifiers, it may result in smaller data sets for model training, potentially limiting AI efficiency.

For those involved in building AI security detection models, incorporating adversarial techniques and tactics during the modeling process can help align pattern recognition with real-world attack tactics. Researchers at Johns Hopkins University have developed the TrojAI Software Framework, which facilitates the generation of AI models for Trojans and other malware patterns. Additionally, MIT researchers have released TextFooler, a tool that accomplishes a similar task for natural language patterns. Utilizing these resources can assist in constructing more resilient AI models capable of detecting issues such as bank fraud.
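
The TrojAI and TextFooler APIs are not shown here; the sketch below only illustrates the underlying idea of adversarial training that such tools support at much larger scale: augmenting the training set with perturbed malicious samples so that evasive variants still land on the malicious side of the decision boundary. All data and numbers are synthetic assumptions.

```python
# Sketch of adversarial training: add perturbed malicious samples to the
# training set so the model still catches evasively modified attacks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
benign = rng.normal(0.0, 1.0, size=(400, 4))
malicious = rng.normal(3.0, 1.0, size=(400, 4))

# Baseline model trained only on clean benign vs. malicious samples
clean = LogisticRegression(max_iter=1000).fit(
    np.vstack([benign, malicious]), np.array([0] * 400 + [1] * 400)
)

# Adversarial variants: malicious samples nudged toward the benign region,
# mimicking the indicator tweaks an evasive attacker would make
adversarial = malicious - rng.uniform(0.5, 1.5, size=malicious.shape)
robust = LogisticRegression(max_iter=1000).fit(
    np.vstack([benign, malicious, adversarial]),
    np.array([0] * 400 + [1] * 800),
)

# An evasive sample sitting between the clusters: the baseline model
# often misses it, while the adversarially trained model flags it
evasive = np.array([[1.2, 1.2, 1.2, 1.2]])
print("clean model flags it: ", bool(clean.predict(evasive)[0]))   # often False
print("robust model flags it:", bool(robust.predict(evasive)[0]))  # often True
```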

As the significance of AI continues to grow, attackers will persistently endeavor to outpace defenders through their own research. It is therefore critical for security teams to remain updated on the tactics employed by attackers in order to defend against them effectively.

Conclusion

The allure of an autonomous AI system continues to capture the imagination of many. In truth, those in the cybersecurity realm lean towards the possibility, and probable inevitability, that such a system will become a reality, especially if cyber-criminals are left unchecked and allowed to evolve. Cyber threat intelligence companies exist to combat malicious actors, but they are only part of the solution. Businesses are advised against indifference towards cybersecurity and encouraged to partner with strong cybersecurity consulting services.

About IPV Network
Since 2016, IPV Network has been a trusted partner of leading enterprises in the Philippines, bringing best-of-breed cybersecurity solutions. IPV Network helps businesses identify, protect, detect, respond, and recover from cyber threats. Email us at [email protected] or call (02) 8564 0626 to get your FREE cybersecurity posture assessment!

Sources:
https://www.techtarget.com/searchsecurity/tip/How-hackers-use-AI-and-machine-learning-to-target-enterprises
https://www.infoq.com/articles/ai-cyber-attacks/
https://www.livemint.com/technology/tech-news/hackers-using-ai-videos-to-steal-sensitive-data-here-s-how-to-stay-vigilant-11678860125759.html
https://www.forbes.com/sites/zakdoffman/2019/08/24/new-critical-security-warning-issued-for-1-billion-instagram-users/?sh=6e266af2f6e1
