The new AI worm is named after the first worm that infected PCs on the internet

AI Worm ‘Morris II’ Poses Threat to Data Security, Targets ChatGPT and Gemini

Share If You Like The Article

A new breed of artificial intelligence (AI) threat dubbed ‘Morris II’ has emerged, posing a significant risk to data security by targeting popular AI models like ChatGPT and Gemini. Researchers have unveiled the capabilities of Morris II, warning of its potential to steal confidential information, including credit card details, from AI-powered email assistants.

Named after the infamous 1988 internet worm, Morris II has been developed by a team of researchers comprising Ben Nassi from Cornell Tech, Stav Cohen from the Israel Institute of Technology, and Ron Button from Intuit. Their research paper outlines how Morris II can infiltrate AI systems, manipulate generative prompts, and exploit vulnerabilities to extract sensitive data.

The AI worm utilizes a range of methods to infect and propagate itself across AI platforms. It can compromise generative AI email assistants, extract data from AI-enabled email systems, and circumvent security measures implemented by widely-used chatbots like ChatGPT and Gemini. Morris II employs self-replicating prompts, allowing it to navigate through AI networks undetected.

According to the researchers, Morris II exploits text and image prompts to infiltrate AI systems. By leveraging large language models such as GPT-4 and Gemini Pro, the worm generates text content that bypasses safeguards, facilitating data theft. Image prompts embed harmful commands within photos, enabling automatic transmission to infect new email clients.

The potential ramifications of Morris II are alarming, as it has demonstrated the ability to mine confidential information, including social security numbers and credit card details, from compromised AI systems.

Upon discovery, the researchers promptly notified both OpenAI and Google of their findings. While Google declined to comment, a spokesperson for OpenAI acknowledged the issue, affirming efforts to enhance system security. They emphasized the importance of developers employing stringent measures to safeguard against malicious inputs.

The emergence of Morris II underscores the evolving landscape of AI security threats, highlighting the need for proactive measures to mitigate risks and protect sensitive data.


Share If You Like The Article

Leave a Reply

Your email address will not be published. Required fields are marked *