Charlie Kirk Killing Sparks Wild Misinformation

Posted by Emma Woollacott, Senior Contributor | 6 hours ago


The shooting of Charlie Kirk unleashed a wave of conspiracy theories and misinformation, much of it coming from chatbots.

On Thursday, the day after Kirk died, the X account of AI chatbot Perplexity was confidently proclaiming that he was still alive. The post has since been removed.

Elon Musk’s chatbot Grok, meanwhile, was under a similar misapprehension. “The video is a meme edit—Charlie Kirk is debating, and effects make it look like he’s ‘shot’ mid-sentence for comedic effect,” it claimed. “No actual harm; he’s fine and active as ever.”

This isn’t the first time chatbots have delivered confidently false information.

“During the Los Angeles protests and Israel-Hamas war, users similarly turned to chatbots for answers and were served inaccurate information,” NewsGuard researchers said. “Despite repeated examples of these tools confidently repeating falsehoods, as documented in NewsGuard’s Monthly AI False Claims Monitor, many continue to treat AI systems as reliable sources in moments of crisis and uncertainty.”

Reassuringly, most of the videos currently in circulation are real, according to an analysis by GetReal Security.

“We have analyzed several of the videos circulating online and find no evidence of manipulation or tampering. At the same time, we are seeing some AI-generated videos of the event that are clearly fake,” said cofounder Hany Farid. “This is an example of how fake content can muddy the waters and in turn cast doubt on legitimate content.”

Meanwhile, according to NewsGuard, pro-Kremlin sources have been claiming that Kirk was on the Myrotvorets blacklist, a database of perceived Ukrainian enemies. But, said the reliability ratings agency, “There is no evidence that Kirk was ever on the list, and a NewsGuard search of his name on the database yielded no results.”

Before a suspect was identified, a request for help from the FBI encouraged internet users to put forward their own candidates for Kirk’s shooter. The bureau created a form for people to fill in if they had any information, posting a picture of a “person of interest”.

However, in the replies to the FBI tweet, several people suggested that their clever detective work may have unmasked the shooter. Many were making their accusations on the basis of “AI-enhanced” versions of the FBI’s photo of the alleged shooter, Tyler Robinson.

Even the Washington County Sheriff’s Office in Utah appears to have fallen for one of these images, calling it “a much clearer image of the suspect compared to others we have seen in the media” before realizing the mistake and editing the Facebook post.

A careful look at the various AI-enhanced images, though, shows that many differ significantly from the original FBI pictures – differences that go well beyond cleaning up a fuzzy image. In some, for example, the logo on the suspect’s baseball cap is a different shape; in others, he’s wearing a different pair of shoes.
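This distinction – cleanup versus redrawing – can be checked quantitatively. One common approach (not one the FBI or the outlets above are documented as using; this is purely an illustrative sketch) is perceptual hashing, which fingerprints an image's coarse structure. Genuine denoising barely moves the hash, while an "enhancement" that invents a different logo or shoes changes many bits. The minimal average-hash below runs on tiny synthetic pixel grids rather than real photos:

```python
# A minimal average-hash sketch: hash the coarse structure of an image,
# then compare hashes by Hamming distance. Small distance = same picture;
# large distance = structurally different content. Synthetic data only.

def average_hash(pixels):
    """Return a bit list: 1 where a pixel exceeds the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# "Original" 4x4 image: a bright block in the top-left corner.
original = [
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [ 10,  10, 10, 10],
    [ 10,  10, 10, 10],
]

# Mild denoising: every pixel nudged, but the structure is unchanged.
denoised = [[p + 5 for p in row] for row in original]

# "AI-enhanced": the bright block has moved -- effectively a different picture.
redrawn = [
    [10, 10,  10,  10],
    [10, 10,  10,  10],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]

h_orig = average_hash(original)
print(hamming(h_orig, average_hash(denoised)))  # 0  -- cleanup preserves the hash
print(hamming(h_orig, average_hash(redrawn)))   # 8  -- redrawing shifts many bits
```

Real forensic use would hash full-resolution photos (libraries such as `imagehash` implement this over Pillow images), but the principle is the same: an enhancement that merely sharpens should not move the fingerprint, and one that does has added content.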

The Better Business Bureau offers advice on identifying AI in photos and videos. It suggests zooming in on any unusual-looking images and looking for physical impossibilities, such as extra fingers or a glossy, “airbrushed” look.

People should also, it said, check out who shared the image, and why: “If it shows shocking political events or messages, ask yourself if the news is on social media or mainstream media. Why might mainstream media be hesitant to pick up the story? It’s probably fake news corroborated by AI-generated images and videos.”



Forbes
