
Generative AI Struggles to Generate Malware Despite Interest from Criminals

Reports indicate cybercriminals aren't extensively using AI for malicious code due to limitations and safeguards. Instead, AI is aiding phishing campaigns and disinformation.

Despite the buzz around criminals using large language models like ChatGPT to streamline malware writing, it appears that this AI technology falls short in aiding such endeavors.

Recent research highlights that while some wrongdoers are intrigued by the prospect of using AI models that suggest code, these technologies aren't being widely employed to craft malicious software. This hesitance may stem from the limitations of the AI systems themselves or from safeguards that make the process more trouble than it is worth for cybercriminals.

For those seeking effective and dependable exploits and intrusion tools, the options remain paying a premium, sourcing them from platforms like GitHub, or mastering the skills to develop them from scratch. The notion of AI offering a shortcut to criminal activity is a misconception; its uptake among cybercriminals mirrors its adoption in the broader tech landscape.

Two newly published reports, one from Trend Micro and the other from Google's Mandiant, concur that while cybercriminals are enticed by the potential of AI for malicious activities, its actual implementation remains limited.

Trend Micro researchers David Sancho and Vincenzo Ciancaglini emphasize that "AI is still in its early days in the criminal underground," implying its usage is far from widespread.

Mandiant's Michelle Cantos, Sam Riddell, and Alice Revelli, who have been monitoring criminals' AI usage since 2019, discovered that AI is primarily employed for social engineering rather than automating malware development.

Both research groups align on the central role AI plays for criminals: generating text and other content to manipulate individuals into divulging sensitive information, rather than fabricating malware.

Trend Micro's team asserts that ChatGPT excels at crafting plausible text suitable for spam and phishing campaigns. Interestingly, certain products available on illicit online forums now incorporate a ChatGPT feature enabling buyers to craft convincing phishing emails.

In addition to its application in crafting phishing emails and other social engineering ploys, AI is proficient at generating content for disinformation campaigns, including deepfakes, synthetic audio, and images.

However, one area where AI proves highly effective, according to Google, is fuzzing, also known as fuzz testing. This technique automates the detection of vulnerabilities by injecting random or precisely designed data into software to provoke and unearth exploitable bugs.
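To make that concrete, the sketch below shows what a typical fuzz harness looks like; fuzzing engines such as the ones OSS-Fuzz runs call an entry point like this over and over with mutated byte buffers. This is only an illustration: the parse_header function is a hypothetical stand-in for real project code and is not taken from Google's report.

```cpp
// Minimal libFuzzer-style fuzz target (a sketch; parse_header is hypothetical).
#include <cstddef>
#include <cstdint>
#include <string>

// Hypothetical parser standing in for the real project code under test.
static bool parse_header(const std::string& input) {
    return !input.empty() && input.front() == 'H';
}

// The fuzzing engine calls this entry point repeatedly with mutated inputs,
// hunting for data that triggers crashes, hangs, or sanitizer errors.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    std::string input(reinterpret_cast<const char*>(data), size);
    parse_header(input);  // any crash or memory error here is a finding
    return 0;
}
```

A harness like this is typically compiled with something like clang++ -g -fsanitize=fuzzer,address fuzz_target.cc; the coverage gains Google describes come from having language models draft such targets for code paths that existing harnesses miss.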

Members of Google's Open Source Security Team, Dongge Liu, Jonathan Metzman, and Oliver Chang, underscore the potential of large language models (LLMs) to boost the code coverage of critical projects through the OSS-Fuzz service. The approach could improve security across the more than 1,000 projects currently being fuzzed and lower the barrier to adopting fuzzing in future projects.

While the process involved significant prompt engineering and preparatory work, the team reported substantial gains, with code coverage increases ranging from 1.5 percent to 31 percent across projects.

In the coming months, these Google experts plan to open-source their evaluation framework, enabling other researchers to experiment with their automated fuzz target generation.

Mandiant delves into the capacities of generative AI models further, categorizing them into generative adversarial networks (GANs) capable of creating realistic headshots and text-to-image models producing custom images from text prompts.

Though GANs are already in common use, particularly by nation-state threat groups, text-to-image models hold greater potential for deception, as they can support misleading narratives and spread fabricated news.

Among these reports, the consensus emerges that while cybercriminals express curiosity about leveraging LLMs for crafting malware, this interest does not translate into widespread malicious code production.

Even though AI can help refine code and explain unfamiliar programming languages, using it to create malware still demands technical proficiency and oversight from a human coder.

All things considered, the likelihood of AI revolutionizing the automation of cyber threats is constrained by factors including existing restrictions on AI usage that deter malicious application. As Trend Micro notes, discussions around circumventing ChatGPT's safeguards are notably prevalent in sections like "Dark AI" on Hack Forums.

While there is speculation about the emergence of "prompt engineers" who specialize in harnessing AI for malicious purposes, Sancho and Ciancaglini withhold judgment on whether that prediction will come to pass.
