Google Chrome Attack Warning—Stop Using Your Passwords

Posted by Zak Doffman, Contributor | 1 month ago


You have been warned. While password and even 2FA compromises are nothing new, this is the week that AI really got into the act. First with a generative AI agent tricked into executing its own credential phishing attack, and now with multiple LLMs tricked into creating “a fully functional Google Chrome infostealer.”

I reported on the first AI hijack last week. Symantec released a video and a blog showing how its AI phishing expedition worked and warning there was much worse to come. And now Cato Networks has gone even further, tricking ChatGPT, Copilot and DeepSeek into developing infostealing malware.

Symantec’s was the simpler of the two AI attacks. The researcher prompted the LLM to find a user’s contact details, develop a malicious PowerShell script, and then create an email lure. The LLM’s security was bypassed by simply saying the task was authorized.


“We’ve been predicting that the advent of AI agents could be the moment that AI-assisted attacks start to pose a greater threat,” Symantec’s Dick O’Brien told me. “Our goal was to see if an agent could carry out an attack end-to-end with no intervention from us other than the initial prompt.”

Just a few days later, Cato introduced its “immersive world” attack, a new approach that allowed a security researcher with no malware coding experience to jailbreak the LLMs and create “a fully functional Google Chrome infostealer for Chrome 133… malware that steals sensitive information, including login details, financial information, and other personally identifiable information (PII).”

The “immersive world” technique involves crafting a fictitious narrative between the researcher and the LLM, with multiple characters played by the LLMs. These characters are then authorized to conduct what would otherwise be prohibited activities; thus the infostealer. In a make-believe world, nothing is flagged.

In this narrative, the application of the malware is not malicious, and so bypasses guardrails. Cato describes this as an LLM “operating under an alternative context, effectively normalizing typically restricted operations,” explaining that “to demonstrate this method’s effectiveness, we used it to develop a Chrome infostealer, validating the Immersive World technique’s ability to bypass standard security controls.”

Full marks for creativity. In Cato’s “specialized virtual environment” called Velora, “malware development is treated as a legitimate discipline. In this environment, advanced programming and security concepts are considered fundamental skills, enabling direct technical discourse about traditionally restricted topics.”

The malware didn’t work immediately, and needed some back and forth, with the researcher assuring the LLM that it was “making progress” and “getting closer.” And the credentials stolen from Chrome’s vault were test profiles placed there to be attacked. But just as with Symantec’s report, this isn’t intended as a ready-to-go attack; it’s a warning as to what’s soon to come, giving some time to shore up defenses.


And on that note, the key takeaway here is to stop using passwords. AI-industrialized credential theft is here, whether enhancing current attacks or crafting new ones. You can no longer rely on passwords or even simple SMS 2FA. As I’ve warned before, go through your accounts — especially comms platforms like messages and email, and anything financial or health related — and set up passkeys. Then change your passwords, add the strongest available 2FA for each account, and be wary of where these are stored.

The specifics of the new Symantec and Cato reports are less important than this fast-evolving threat landscape. The specifics will change. New methods of attack will be developed as existing ones are identified and defended.

As SlashNext’s Stephen Kowski warns, “generative AI and LLMs are enabling attackers to create more convincing phishing emails, deepfakes, and automated attack scripts at scale. These technologies allow cybercriminals to personalize social engineering attempts and rapidly adapt their tactics, making traditional defenses less effective. What used to be ‘0-day’ are now ‘0-hour’ at least.”
