The misuse of AI technologies has transitioned from a looming threat to a tangible reality that escalates by the day. Building on our last post, which detailed how AI jailbreaks enable novice attackers to launch sophisticated cyberattacks, this post explores malicious AI and the rise of dark LLMs, and describes how large language models (LLMs) are being weaponized for cybercrime.
Over the last year or so, dark LLMs (also referred to as BlackHat GPTs or LLM hacking tools) such as WormGPT and DarkBARD have emerged as real-world threats, underscoring a significant shift in the nature of cyber warfare. These tools leverage generative AI to execute complex, evasive attacks, posing new challenges to even the most mature security frameworks.
What Are Dark LLMs?
Dark LLMs are AI-powered tools that wrap a legitimate model, typically a commercial API such as OpenAI's or an open-source model like GPT-J, to engineer malicious ChatGPT-style chatbots that operate without ethical guidelines or built-in limitations. Tailor-made for vast cybercriminal networks, the main goal of dark LLMs is to facilitate nefarious activities. To date, they have primarily been observed generating malicious code, exploiting vulnerabilities, and creating targeted spear-phishing emails.
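It is worth seeing how thin these tools can be: many amount to little more than a script that relays user messages to a hosted model under a custom system prompt. The sketch below is a benign, hypothetical illustration of that wrapper pattern using the official openai Python package; the system prompt shown is a harmless placeholder, not anything from the actual tooling.

```python
# Minimal sketch of the wrapper pattern behind many "dark LLM" products:
# a thin client over a hosted model API with a custom system prompt.
# The prompt below is a benign placeholder.
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "You are a helpful assistant."  # placeholder

def ask(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

The entire "product" is often just this relay plus a jailbroken prompt and, reportedly, a Telegram or web front end, which helps explain why the takedown of one brand is so quickly followed by the emergence of another.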
Examples of Dark LLMs
As dark LLMs grow more prevalent in the cybercriminal world, their applications are becoming increasingly varied and sophisticated. Below, we share some specific examples of these tools and their capabilities.
XXXGPT. XXXGPT is a malicious iteration of ChatGPT engineered for illicit activities. The tool claims to offer a broad spectrum of functions facilitating various types of cyberattacks, including botnets for large-scale attacks, remote access trojans (RATs), crypters, and general malware creation.
Notably, it excels at producing hard-to-detect malware, thanks to the convincing output it generates and an advanced obfuscation feature. This obfuscation capability camouflages the code it produces, making the resulting malware difficult to identify and thwart and adding a complex layer to cybersecurity defense strategies.
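To make the defensive challenge concrete, one common heuristic for spotting obfuscated or packed payloads is flagging strings with unusually high Shannon entropy. The minimal sketch below illustrates the idea; the 5.5 bits-per-character threshold and 64-character minimum are illustrative assumptions, not field-tested values.

```python
import math
from collections import Counter

def shannon_entropy(data: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Plain source code typically scores roughly 4-5 bits/char; base64- or
# XOR-encoded payloads often approach 6. Threshold chosen for illustration.
def looks_obfuscated(blob: str, threshold: float = 5.5) -> bool:
    return len(blob) > 64 and shannon_entropy(blob) > threshold
```

Obfuscation features that re-encode payloads into natural-looking code are aimed precisely at defeating heuristics like this one, which is part of why the capability is marketed so heavily in criminal forums.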
Wolf GPT. Wolf GPT, a malicious variant of ChatGPT, uses Python to craft cryptographic malware drawn from extensive datasets of existing malicious software. Distinguished by its ability to enhance attacker anonymity within specific attack vectors, it also facilitates advanced phishing campaigns. Like XXXGPT, Wolf GPT includes an obfuscation feature that considerably hampers cybersecurity teams' efforts to detect and mitigate these threats.
WormGPT. WormGPT, built on the open-source GPT-J language model released by EleutherAI in 2021, is a tool designed for cybercriminal activities, with a particular focus on malware creation and exploitation. It stands out with features like unlimited character support, chat memory retention, and code formatting. WormGPT is known for its rapid response times, expansive character handling, and a strong emphasis on privacy, avoiding the logging or retention of user data.
Its versatility is enhanced by support for various underlying AI models, allowing dynamic usage and prompt alteration. Development is ongoing in areas such as context memorization, and the tool returns formatted code and scripts in its responses, catering to code-related queries.
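The base model itself is notable mainly for being openly published: anyone can load GPT-J-6B with the Hugging Face transformers library, which is part of why retrained variants keep surfacing. A sketch of what that takes, assuming transformers and PyTorch are installed and roughly 24 GB of memory is available for full precision:

```python
# Sketch: loading the openly published GPT-J-6B base model that WormGPT
# reportedly builds on. Requires the Hugging Face transformers package,
# PyTorch, and roughly 24 GB of RAM (or a large GPU) at full precision.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

inputs = tokenizer("The quick brown fox", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point is not that GPT-J is dangerous, but that the barrier to standing up an unrestricted chatbot is a commodity download rather than a research effort.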
DarkBARD. DarkBARD AI, the evil twin of Google's Bard AI, is an advanced tool equipped for a range of cybercrimes. It is defined by its real-time processing of information from the clear web, which enhances its adaptability.
DarkBARD AI's capabilities extend to creating misinformation and deepfakes and managing multilingual communications. It can generate diverse content such as code and articles, and it integrates with Google Lens for image-related tasks. Its versatility is further underscored by its potential role in ransomware and DDoS attacks.
Illicit Applications
Researchers have observed dark LLMs like those named above being used in various illicit activities. These include automating targeted reconnaissance by correlating public information such as a victim's employer and colleagues, enhancing social engineering lures for phishing, and leveraging voice-based AI for fraud and early-stage attack tactics.
They're also used for automated attacks, including vulnerability discovery and malware distribution. AI's role in phishing and evasion has grown, aiding the creation of convincing fake profiles and of malware that evades detection. Deepfakes and disinformation, AI-powered botnets, supply chain attacks, data poisoning, and advanced password guessing round out the sophisticated methods cybercriminals employ with these tools.
In light of the escalating sophistication of cyber threats powered by the rise of dark LLMs, the cybersecurity landscape is at a critical juncture. Relying on traditional defense mechanisms, and expecting users to play a key defensive role by recognizing phishing attacks, is becoming increasingly untenable. As we've seen, AI's ability to generate emails that fool even the most discerning users signals a significant shift in adversarial tactics. This evolution requires a reevaluation of the standard approach to both phishing detection and phishing awareness training. Check back for our next post, which will explore this topic in greater detail.