FBI Warning—You Should Never Reply To These Messages

Posted by Zak Doffman, Contributor


Republished on July 10 with a new report on AI deepfake attacks and advice for smartphone owners on staying safe as threats surge.

The news that AI is being used to impersonate Secretary of State Marco Rubio and place calls to foreign ministers may be shocking, but it shouldn’t be surprising. The FBI has warned that such attacks are now underway, and they will only get worse.

As first reported by the Washington Post, the State Department has told U.S. diplomats that this latest attack has targeted at least three foreign ministers, a U.S. senator and a governor, using an AI-generated voice to impersonate Rubio.

A fake Signal account (Signal strikes again) was used to initiate contact through text and voice messages. Voice messages let attackers deploy AI fakes without the inherent risk of running them in real time on a live call.


The FBI is clear — do not respond to text or voice messages unless you can verify the sender. That means a voice message that sounds familiar cannot be trusted unless you can verify the actual number from which it has been sent. Do not reply until you can.

Darktrace’s AI and Strategy director Margaret Cunningham told me this is all too “easy.” The attacks, while “ultimately unsuccessful,” demonstrate “just how easily generative AI can be used to launch credible, targeted social engineering attacks.”

Alarmingly, Cunningham warns, “this threat didn’t fail because it was poorly crafted — it failed because it missed the right moment of human vulnerability.” People make decisions “while multitasking, under pressure, and guided by what feels familiar. In those moments, a trusted voice or official-looking message can easily bypass caution.”

And while the Rubio scam will generate plenty of headlines, the AI fakes warning has been doing the rounds for some months. It won’t make those same headlines, but you’re more likely to be targeted in your professional life through social engineering that exploits readily available social media connections and content to trick you.

The FBI tells smartphone users: “Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity.”

This is in addition to the broader advice given the plague of text message attacks now targeting American citizens. Check the details of any message. Delete any that are clear misrepresentations, such as fake toll or DMV offense notices. Do not click any links contained in text messages — ever. And do not be afraid to hang up on the tech support desk, customer service agent, bank or law enforcement officer contacting you. You can then reach out to the relevant organization using publicly available contact details.

ESET’s Jake Moore warns “cloning a voice can now take just minutes and the results are highly convincing when combined with social engineering. As the technology improves, the amount of audio needed to create a realistic clone also continues to shrink.”

“This impersonation is alarming and highlights just how sophisticated generative AI tools have become,” says Black Duck’s Thomas Richards. “It underscores the risk of generative AI tools being used to manipulate and to conduct fraud. The old software world is gone, giving way to a new set of truths defined by AI.”

As for the Rubio fakes, “the State Department is aware of this incident and is currently monitoring and addressing the matter,” a spokesperson told reporters. “The department takes seriously its responsibility to safeguard its information and continuously take steps to improve the department’s cybersecurity posture to prevent future incidents.”

“AI-generated content has advanced to the point that it is often difficult to identify,” the bureau warns. “When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.”

With perfect timing, Trend Micro’s latest report warns “criminals can easily generate highly convincing deepfakes with very little budget, effort, and expertise, and deepfake generation tools will only become more affordable and more effective in the future.”

The security team says this is being enabled by the same kinds of toolkits driving other types of fraud that have also triggered FBI warnings this year — including a variety of other message attacks. “Tools for creating deepfakes,” Trend Micro says, “are now more powerful and more accessible by being cheaper and easier to use.”

As the FBI warned earlier this year, and as the latest Rubio impersonations now under its investigation show, deepfake voice technology is easily deployed.

“The market for AI-generated voice technology is extremely mature,” Trend Micro says, citing several commercial applications, “with numerous services offering voice cloning and studio-grade voiceovers…” While “these services have many legitimate applications, their potential for misuse cannot be overlooked.”


After breaking the news of the Rubio impersonations, the Washington Post warns that “In the absence of effective regulation in the United States, the responsibility to protect against voice impostors is mostly on you. The possibility of faked distress calls is something to discuss with your family — along with whether setting up code words is overkill that will unnecessarily scare younger children in particular. Maybe you’ll decide that setting up and practicing a code phrase is worth the peace of mind.”

That idea of a secure code word that a friend or relative can use to prove they’re real was pushed by the FBI some months ago. “Create a secret word or phrase with your family to verify their identity,” it suggested in an AI attack advisory.

“Criminals can use AI-generated audio to impersonate well-known, public figures or personal relations to elicit payments,” the bureau warned in December. “Criminals generate short audio clips containing a loved one’s voice to impersonate a close relative in a crisis situation, asking for immediate financial assistance or demanding a ransom.”


