Google’s Gmail Warning—If You See This, You’re Being Hacked

If you see this, it’s an attack.
Google warns Gmail users to beware “a new wave of threats” that exploit AI upgrades to attack users. This includes “indirect prompt injections,” with “malicious instructions [hidden]
within external data sources,” visible to your AI tools but not to you.
A new warning has just been issued for Gmail users, showing this threat in action, putting users at risk as Google’s fast-paced AI upgrades open new attack surfaces. Just as with other deployments, it is proving alarmingly easy to trick AI into attacking users.
The warning via 0din, Mozilla’s zero-day investigative network, follows a researcher “demonstrating a prompt-injection vulnerability in Google Gemini for Workspace that allows a threat-actor to hide malicious instructions inside an email.”
If an attacker hides prompts within an email, when a user clicks “summarize this email” using one of Gmail’s recent AI uplifts, “Gemini faithfully obeys the hidden prompt and appends a phishing warning that looks as if it came from Google itself.”
In this proof, the prompt was hidden using a white-on-white font that means the users would never see it for themselves. But Gemini sees it just fine. “Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.”
Beware this hidden Gmail threat.
Gmail users need to ignore any Google warnings within AI summaries — it’s not how Google issues user warnings. 0din advises security teams to “train users that Gemini summaries are informational, not authoritative security alerts” and to “auto-isolate emails containing hidden or
As I have warned before, this is a much wider threat. “Prompt injections are the new email macros, 0din says, and this latest proof of concept “shows that trustworthy AI summaries can be subverted with a single invisible tag.”
0din says that “until LLMs gain robust context-isolation, every piece of third-party text your model ingests is executable code,” which means much tighter controls.
Whether it’s abuse of user-facing AI tools or hijacking AI to design or even execute the attacks themselves, it’s clear that the game has now changed irreversibly.
If you ever see any security warning in a Gmail email summary that purports to come from Google, you should delete the email as it actually contains hidden AI prompts that represent a threat to you, your devices and your data.
Google warns “as more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures.”