How ChatGPT Affects Cybersecurity According to Experts

What is ChatGPT?

ChatGPT is an advanced natural language processing model developed by OpenAI. It is based on the GPT-3.5 architecture and is designed to hold natural conversations with users. ChatGPT responds to the inputs it receives using deep learning techniques, which allows it to understand and generate human-like text. It was trained on a vast amount of diverse internet text, giving it the ability to produce coherent and relevant responses across a wide range of topics. ChatGPT’s purpose is to help users by answering questions, providing explanations, offering suggestions, and holding casual conversations. Its proficiency in understanding and generating human-like text makes it a valuable tool for various applications, including customer support, virtual assistants, and content generation.

ChatGPT and Cybersecurity

The integration of machine learning and artificial intelligence (ML/AI) in the field of cybersecurity is a relatively new development. One prevalent application is endpoint detection and response (EDR), where ML/AI uses behavior analytics to flag anomalous activity. By comparing observed behavior to established norms, ML/AI can identify and contain suspicious processes, secure compromised accounts, trigger alerts, and more.
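As a minimal illustration of the behavior-analytics idea, a detector can compare an observed metric against a historical baseline and flag large deviations. The thresholds and event counts below are hypothetical, not taken from any real EDR product:

```python
from statistics import mean, stdev

def is_anomalous(baseline, observed, z_threshold=3.0):
    """Flag `observed` as anomalous if it deviates from the baseline
    by more than z_threshold standard deviations (a simple z-score test)."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# Hypothetical baseline: daily count of files a process modifies
baseline = [12, 15, 11, 14, 13, 12, 16]
print(is_anomalous(baseline, 14))   # typical activity, not flagged
print(is_anomalous(baseline, 400))  # ransomware-like burst, flagged
```

Real EDR engines model many signals at once, but the core comparison of observed behavior against an established norm follows this pattern.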

ML/AI holds promise in augmenting security efforts and fortifying an organization’s cybersecurity posture, from task automation to aiding the development and refinement of novel ideas. Let’s look at a few potential applications.

ChatGPT as an Optimization Tool

The first notable advantage of ChatGPT is its ability to simplify complex tasks for entry-level analysts. Analysts who use Splunk, a security information and event management (SIEM) tool, for fraud and security event detection may find its Search Processing Language (SPL) harder to pick up than plain-language prompts, because SPL grows more intricate as queries become more advanced.

This is where ChatGPT becomes significant: it already knows SPL and can swiftly convert an analyst’s plain-language prompt into a working query. This markedly lowers the barrier to entry; for example, ChatGPT can generate an alert for a brute-force attack against Active Directory and explain the reasoning behind it. The same streamlined approach works for standard security operations center (SOC) alerts, making it a useful tool for beginner SOC analysts.
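The detection logic behind such a brute-force alert can be sketched in a few lines. This is an illustrative Python version of the idea (field names, the Windows failed-logon event ID 4625, and the thresholds are assumptions for the example, not actual Splunk output):

```python
from collections import defaultdict

def detect_brute_force(events, threshold=10, window=300):
    """Return the source host that produced more than `threshold`
    failed logons (event ID 4625) within a `window`-second span."""
    failures = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        if e["event_id"] != 4625:
            continue
        ts = failures[e["src"]]
        ts.append(e["time"])
        # Slide the window: drop failures older than `window` seconds
        while ts and e["time"] - ts[0] > window:
            ts.pop(0)
        if len(ts) > threshold:
            return e["src"]
    return None

# Twelve failed logons from one host in twelve seconds trips the alert
events = [{"time": t, "event_id": 4625, "src": "10.0.0.5"} for t in range(12)]
print(detect_brute_force(events))  # 10.0.0.5
```

In Splunk the same idea would be expressed as an SPL search over authentication logs; the sliding-window count is what the query encodes.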

Other Benefits of ChatGPT in Cybersecurity

Another compelling application of ChatGPT is automating daily tasks for busy IT teams. In most environments, stagnant Active Directory accounts are widespread, ranging from a few dozen to hundreds. These accounts often hold privileged permissions, and while a full privileged access management strategy is ideal, businesses cannot always prioritize one.

In such cases, IT teams resort to the traditional do-it-yourself (DIY) approach: system administrators write their own scheduled scripts to disable dormant accounts. ChatGPT can create these scripts, devising the logic to identify and disable accounts that have been inactive for the past 90 days. By empowering junior engineers to develop and schedule these scripts while fully understanding the underlying logic, ChatGPT allows senior engineers and administrators to devote their time to more pressing issues.
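The core of such a script is the 90-day cutoff check. The sketch below mocks the account data, since a real version would query Active Directory (for example via PowerShell or LDAP) and call the directory to actually disable the account:

```python
from datetime import datetime, timedelta

def find_stale_accounts(accounts, now, max_idle_days=90):
    """Return names of accounts whose last logon is older than max_idle_days."""
    cutoff = now - timedelta(days=max_idle_days)
    return [a["name"] for a in accounts if a["last_logon"] < cutoff]

# Hypothetical account records; real data would come from Active Directory
now = datetime(2023, 6, 1)
accounts = [
    {"name": "svc_backup", "last_logon": datetime(2023, 1, 15)},  # dormant
    {"name": "jdoe",       "last_logon": datetime(2023, 5, 28)},  # active
]
for name in find_stale_accounts(accounts, now):
    print(f"Disabling dormant account: {name}")  # real code would call AD here
```

Scheduling this to run daily, and logging what it disables, is the kind of boilerplate a junior engineer can have ChatGPT draft and then review line by line.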

ChatGPT can also be an effective force multiplier in dynamic exercises such as purple teaming, where red and blue teams collaborate to test and enhance an organization’s security posture. It can generate simple examples of the scripts penetration testers use, or debug malfunctioning ones.

A common technique across cyber incidents is persistence, and an analyst or threat hunter should actively look for signs of attackers registering their own scripts or commands as startup items on Windows machines. With a simple request, ChatGPT can produce a basic yet functional script that lets red teamers implement this persistence on a target host. While red teams use the tool to aid penetration tests, blue teams can leverage it to develop better alerting mechanisms.
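On the blue-team side, the matching detection is a scan of Windows startup locations (such as the registry Run keys) for suspicious commands. The sketch below mocks the Run-key values, since a real script would read them with the `winreg` module on Windows; the patterns and sample entries are illustrative, not a complete detection set:

```python
import re

# Heuristics that commonly indicate abuse of startup entries
SUSPICIOUS = [
    re.compile(r"powershell.*-enc", re.IGNORECASE),    # encoded PowerShell
    re.compile(r"\\appdata\\.*\.exe", re.IGNORECASE),  # binaries run from AppData
]

def flag_startup_entries(entries):
    """Return the names of Run-key entries whose command matches a heuristic."""
    return [name for name, cmd in entries.items()
            if any(p.search(cmd) for p in SUSPICIOUS)]

# Mocked HKCU Run values; a real script would enumerate them via winreg
run_keys = {
    "OneDrive": r"C:\Program Files\Microsoft OneDrive\OneDrive.exe /background",
    "Updater":  r"powershell -w hidden -enc SQBFAFgA",
}
print(flag_startup_entries(run_keys))  # ['Updater']
```

A hunter would extend the pattern list and baseline known-good entries, but the shape of the check, enumerate startup items and match against suspicious traits, stays the same.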

How Limited is the Application of ChatGPT Within Cybersecurity?

While the benefits of AI in cybersecurity are abundant, it is crucial to acknowledge its limitations. Complex human cognition and real-world experience often play pivotal roles in decision-making, and AI tools cannot replicate these just yet. For now, AI is a support system that analyzes data and produces output based on the facts it is given. Despite significant progress, AI still generates false positives that humans have to correct.

Nonetheless, one of the primary advantages of AI lies in automating routine tasks, allowing human professionals to concentrate on more creative and time-intensive endeavors. AI can streamline the creation and optimization of scripts for cybersecurity engineers and system administrators.

ChatGPT as a Cybersecurity Risk

The appeal of ChatGPT lies in its capacity to execute complex tasks with minimal prompting, particularly in the realm of programming. However, concerns have been raised that the technology could lower the barriers to entry for malware creation, leading to a wave of virus writers who rely on AI tools for their nefarious activities.

Joshua Long, Chief Security Analyst at Intego, highlights this issue, emphasizing that computer code, like any tool, can be used for good or malicious purposes. When asked for file-encryption code, for instance, ChatGPT cannot discern the user’s true intentions: it will accept the claim that the code is needed to safeguard personal files, even if the actual goal is to develop ransomware.

ChatGPT has implemented several safeguards to combat such misuse, and getting around these protections is the challenge for virus creators. A straightforward request for an effective virus will simply be refused; users need a degree of creativity to manipulate ChatGPT into generating malicious code against its own judgment. While the possibility of using AI to create malware exists in theory and has already been demonstrated, Martin Zugec, Technical Solutions Director at Bitdefender, believes the risks remain relatively low because novice malware writers generally lack the skills required to bypass security measures.

Zugec further notes that chatbot-generated malware deserves discussion, but there is little evidence that it poses a significant threat in the near future. The quality of malware code produced by chatbots tends to be subpar, making it less appealing to experienced malware writers, who can find superior examples in public code repositories.

ChatGPT Can Facilitate Social Engineering

While ChatGPT’s coding abilities may not be a significant concern, its potential for facilitating more effective phishing and social engineering campaigns raises alarm. Companies often face attacks that target employees, exploiting their vulnerability to unintentionally grant unauthorized access. Hackers could leverage AI chatbots to craft convincing phishing emails, or to generate a large number of persuasive messages quickly, at a scale that exceeds the capabilities of human threat actors.

Karen Renaud, Merrill Warkentin, and George Westerman suggest in MIT’s Sloan Management Review that a fraudster could employ ChatGPT to generate a script read aloud by a deepfake voice impersonating a company’s CEO. This voice could then instruct an employee to transfer funds to a fraudulent account, capitalizing on the trust and authority associated with the CEO’s voice.

Using ChatGPT to Combat Cybercrime

While ChatGPT presents risks in terms of phishing campaigns, its attributes also make it a valuable resource for cybersecurity researchers and antivirus firms. Long highlights researchers’ use of AI chatbots to find undiscovered vulnerabilities: they upload code and ask ChatGPT to identify potential weaknesses. Thus, the same methodology that could weaken defenses can also strengthen them.

Additionally, ChatGPT’s aptitude for crafting plausible phishing messages can educate companies and users about identifying scams and avoiding falling victim to them. It can also help reverse engineer malware, enabling rapid development of countermeasures by researchers and security firms.

Ultimately, ChatGPT itself is neither inherently good nor bad. As Zugec asserts, concerns about AI facilitating malware development could apply to any technological advancement that benefits developers, such as open-source software or code-sharing platforms. As long as safeguards continue to improve, the threat posed by even the most advanced AI chatbots may not be as perilous as recent predictions suggest.

To protect oneself from the threats associated with AI chatbots and the potential for maliciously created malware, adopting a multi-layered defense approach is crucial. This includes implementing endpoint security solutions, keeping software and systems up to date, and being aware of suspicious messages or requests.

It is also best to exercise caution when prompted to install files automatically while visiting websites. When updating or downloading applications, it is advisable to obtain them from the official app store or the software vendor’s website.

Conclusion

ChatGPT, in its current version, is proving to be a valuable tool for novice and experienced users alike. It is highly capable of learning and applying its knowledge to fulfill requests, albeit with restrictions. At the moment, ChatGPT’s security features prevent and discourage users from creating malicious code; however, only time will tell whether those features can be bypassed. ChatGPT is a tool, and like any other tool, it can be used for good and bad. Fortunately, AI can combat AI, and there are cybersecurity consulting companies capable of providing cyber threat intelligence, incident response, digital forensics, and monitoring, among other services. It is strongly recommended to establish long-term partnerships with strong cybersecurity providers, as investing in one costs far less than losing to cyber-attacks.

About IPV Network
Since 2016, IPV Network has been a trusted partner of leading enterprises in the Philippines, bringing them best-of-breed cybersecurity solutions. IPV Network helps businesses identify, protect, detect, respond, and recover from cyber threats. Email us at [email protected] or call (02) 8564 0626 to get your FREE cybersecurity posture assessment!

Sources:

https://www.digitaltrends.com/computing/is-chatgpt-a-cybersecurity-malware-risk/
https://venturebeat.com/security/chatgpt-is-about-to-revolutionize-cybersecurity/
https://www.zdnet.com/article/chatgpt-and-the-new-ai-are-wreaking-havoc-on-cybersecurity/
https://www.washingtonpost.com/technology/2023/05/11/hacking-ai-cybersecurity-future/
https://www.centraleyes.com/what-are-the-cyber-security-risks-of-chatgpt/
