FBI Warns iPhone, Android Users—Do Not Reply To These Messages

Posted by Zak Doffman, Contributor | 3 hours ago


Republished on May 17 with additional advice and resources on defending against these dangerous messages, where normal detection is impossible.

We were warned. Forget looking for telltale signs: the latest AI-fueled attacks are so sophisticated that you need to verify everything to be sure you're not being attacked. In the last 24 hours, Gmail and Outlook users have been warned that malicious emails are now so “perfect” they are impossible to detect, and that calls that seem to come from people we know could be a dangerous deception.

That’s the latest warning to come from the FBI, after the discovery of “an ongoing malicious text and voice messaging campaign.” This has used texts and voice messages purporting to come from “senior U.S. officials,” tricking victims, many of whom are also “current or former senior U.S. federal or state government officials and their contacts.”

The bureau’s warning is serious enough that you are now being told: “If you receive a message claiming to be from a senior U.S. official, do not assume it is authentic.” The goal of the attacks is to steal credentials through links that appear related to the message.


According to Cofense’s Max Gannon, “it is important to note that threat actors can also spoof known phone numbers of trusted organizations or people, adding an extra layer of deception to the attack. Threat actors are increasingly turning to AI to execute phishing attacks, making these scams more convincing and nearly indistinguishable.”

The FBI’s advice is wider-ranging than this latest attack alone, and links back to its recent warnings on the proliferation of AI attacks.

  • Before answering calls or responding to messages, “verify the identity of the person calling you or sending text or voice messages.”
  • Check email addresses, contact details and URLs for any telltale mistakes — albeit AI means such slips are rare and replicas can be near perfect.
  • Obviously be wary of any “subtle imperfections in images and videos, such as distorted hands or feet, unrealistic facial features, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, voice call lag time, voice matching, and unnatural movements.”
  • The same goes for voices. “Listen closely to the tone and word choice to distinguish between a legitimate phone call or voice message from a known contact and AI-generated voice cloning, as they can sound nearly identical.”
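The bureau’s checklist lends itself to a rough automated triage before you ever tap a link. The sketch below applies the same red flags the bullets describe to an incoming text. The allow-list of known numbers and the urgency keywords are illustrative assumptions for the example, not FBI-published rules, and an empty result still means “verify out-of-band,” not “safe.”

```python
import re

# Illustrative red-flag keywords and a verified-sender allow-list.
# Both are assumptions for this sketch, not an official ruleset.
URGENCY_WORDS = {"urgent", "immediately", "now", "verify", "suspended"}
KNOWN_SENDERS = {"+15551234567"}  # numbers you have independently confirmed

def flag_message(sender: str, body: str) -> list[str]:
    """Return a list of red flags; an empty list still means verify manually."""
    flags = []
    if sender not in KNOWN_SENDERS:
        flags.append("unknown or unverified sender")
    if re.search(r"https?://", body, re.IGNORECASE):
        flags.append("contains a link -- confirm the sender before clicking")
    words = {w.strip(".,:;!?").lower() for w in body.split()}
    if URGENCY_WORDS & words:
        flags.append("urgent language pressuring immediate action")
    return flags

print(flag_message("+15559876543",
                   "Urgent: verify your account at http://example.com"))
```

Note the deliberately cautious design: the function only ever adds suspicion, never clears a message, which matches the FBI’s point that the absence of obvious tells no longer proves authenticity.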


All that said, the FBI acknowledges that “AI-generated content has advanced to the point that it is often difficult to identify.” Sometimes it will just come down to common sense. Is this a call I could reasonably expect, and am I being asked to do something that would advantage a cybercriminal or scammer? Can I deduce what their take might be? Can I hang up and call back using normal channels? How do I verify the caller?

Ryan Sherstobitoff from SecurityScorecard told me “to mitigate these risks, individuals must adopt a heightened sense of skepticism towards unsolicited communications, especially those requesting sensitive information or urging immediate action.”

Often these texts, calls and voice messages lead to a link. This is the attack, which will phish for credentials or trick you into installing malware. “Do not click on any links in an email or text message until you independently confirm the sender’s identity,” the bureau warns. And “never open an email attachment, click on links in messages, or download applications at the request of or from someone you have not verified.”
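That “independently confirm” step matters because phishing links are built to pass a casual glance. A minimal sketch, assuming a hypothetical trusted-domain list: it checks a link’s actual registered domain rather than searching for a trusted name anywhere in the URL, which is precisely the trick lookalike addresses exploit.

```python
from urllib.parse import urlparse

# The trusted-domain list is an assumption for this example, not a published
# list. Real lookalike detection is harder (homoglyphs, shorteners), which is
# why the bureau says to verify the sender out-of-band, not to trust the link.
TRUSTED_DOMAINS = {"irs.gov", "usa.gov"}

def link_looks_trusted(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Match the registered domain exactly or as a proper parent domain:
    # 'irs.gov.attacker.example' must NOT pass, because the trusted name
    # appears there only as a subdomain label of the attacker's domain.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(link_looks_trusted("https://irs.gov.attacker.example/refund"))  # lookalike
print(link_looks_trusted("https://www.irs.gov/refund"))
```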

In the wake of the FBI’s latest warning, ESET’s Jake Moore told me “it’s vital people think with a clear head before responding to messages from unknown sources claiming to be someone they know. But with newer, impressive and evolving technology, it is understandable why people are quicker to let down their guard and assume that seeing is believing. Deepfake technology is now at an incredible level which can even produce flawless videos and audio clips cleverly designed to manipulate victims.”


A new and perfectly timed report from Help Net Security warns “don’t assume anything is real just because it looks or sounds convincing… Remember the saying, seeing is believing? We can’t even say that anymore. As long as people rely on what they see and hear as evidence, these attacks will be both effective and difficult to detect.”

With equally apt timing, Reality Defender had put out a new deepfake guide just 72 hours before the FBI issued its warning. “Deepfake threats targeting communications don’t behave like traditional cyberattacks… Instead, they exploit trust. A cloned voice can pass legacy voice biometric systems. A fake video call can impersonate a company executive with enough accuracy to trigger a wire transfer or password reset.”

Moore’s advice is simple: “To protect yourself from smishing scams and deepfake content, avoid clicking on links in unexpected or suspicious text messages — especially those that create a sense of urgency, even when it looks or sounds like the real deal. Never share personal or financial information via text messages and always verify via trusted communication channels.”


