Generative AI is a type of artificial intelligence capable of creating content. Just a few years ago, it may have seemed like science fiction. Today, it is transforming how we interact with technology. By writing a simple prompt, you can generate text with ChatGPT, images with DALL·E, or even video with the recently launched Sora. And those are just models developed by OpenAI; competitors like Google and Meta also have their own offerings. Companies have been scrambling to incorporate these powerful new tools into their products to improve the user experience, from AI-generated stickers in Facebook Messenger to AI-powered web searches on Bing. While these innovations are exciting, they also pose risks. In a recent study, a team of scientists showed how these generative AI systems may be vulnerable to new types of cyberattacks and malware.

In their study, the researchers developed a computer worm that specifically targets generative AI systems. A computer worm is a self-replicating malware program that spreads by infecting other computers. Specifically, their program is a “zero-click” worm, meaning it does not require a person to make the mistake of clicking a suspicious link or file; rather, the malware is processed automatically by the AI system. To test their worm, they considered an AI-powered email assistant that provides automatic responses to incoming emails. They showed that their worm was able to inject prompts into the AI system, hijacking it for malicious activity (such as sending spam messages or stealing personal data).

Despite these cybersecurity concerns, the researchers stress that they are not recommending that people avoid generative AI. Their point is simply that companies need to design their AI systems with countermeasures to protect against these types of attacks.

The research team includes Stav Cohen, a PhD candidate in the Faculty of Data and Decision Sciences at the Technion – Israel Institute of Technology, Ron Bitton, an AI security researcher at Intuit, and Ben Nassi, a postdoctoral researcher at Cornell Tech.

Managing Correspondent: Emily Pass

Press Article: Here Come the AI Worms (WIRED)

Original Journal Article: ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications (pre-print)

Image Credit: DC Studio / Freepik

One thought on “Generative AI is vulnerable to malware, researchers warn 

  1. Organizations implementing Gen-AI projects should also consider a Threat Modelling assessment to ensure security, privacy and compliance risks are well understood. For example, the most common Enterprise use case involves RAG based Gen-AI to generate insights from a Corpus of Internal documents. While the usual security risks associated with Model training data / model extraction / model data poisoning aren’t applicable to RAG based systems, organizations need to establish a clear threat model to identify the risks including prompt chaining, excessive agency, unauthorized data exposure, sensitive data crossing trust boundaries (for example, via Open-AI’s embeddings API), etc.

Leave a Reply

Your email address will not be published. Required fields are marked *