Online propaganda campaigns are using ‘AI slop’, researchers say

Many of the largest and most established state-sponsored online propaganda campaigns have embraced artificial intelligence, a new report finds, and they're often bad at it.

The report, by the social media analytics company Graphika, analyzed nine ongoing online influence operations — including ones it says are affiliated with China’s and Russia’s governments — and found that each has, like much of social media, increasingly adopted generative AI to make images, videos, text and translations.

The researchers found that sponsors of propaganda campaigns have come to rely on AI for core functions like making content and creating influencer personas on social media, streamlining some campaigns. But the researchers say that content is low quality and gets little engagement.

The findings run counter to what many researchers had anticipated as generative AI, artificial intelligence that mimics human speech, writing and imagery, grew more sophisticated. The technology has advanced rapidly in recent years, and some experts warned that propagandists working on behalf of authoritarian countries would embrace high-quality, convincing synthetic content designed to deceive even the most discerning people in democratic societies.

Resoundingly, though, the Graphika researchers found that the AI content created by those established campaigns is low-quality “slop,” ranging from unconvincing synthetic news reporters in YouTube videos to clunky translations or fake news websites that accidentally include AI prompts in headlines.

“Influence operations have been systematically integrating AI tools, and a lot of it is low-quality, cheap AI slop,” said Dina Sadek, a senior analyst at Graphika and co-author of the report. As was the case before such campaigns started routinely using AI, the vast majority of their posts on Western social media sites receive little to no attention, she said.

Online influence campaigns aimed at swaying American politics and pushing divisive messages date back at least a decade, to when the Russia-based Internet Research Agency created scores of Facebook and Twitter accounts and tried to influence the 2016 presidential election.

As in some other fields, like cybersecurity and programming, the rise of AI hasn’t revolutionized the field of online propaganda, but it has made it easier to automate some tasks, Sadek said.

“It might be low-quality content, but it’s very scalable on a mass scale. They’re able to just sit there, maybe one individual pressing buttons there, to create all this content,” she said.

Examples cited in the report include “Doppelganger,” an operation the Justice Department has tied to the Kremlin, which researchers say used AI to create unconvincing fake news websites, and “Spamouflage,” which the Justice Department has tied to China and which creates fake AI news influencers to spread divisive but unconvincing videos on social media sites like X and YouTube. The report cited several operations that used low-quality deepfake audio.

One operation posted deepfakes of celebrities like Oprah Winfrey and former President Barack Obama, appearing to comment on India's rise in global politics. But the report says the videos came off as unconvincing and didn't get much traction.

Another pro-Russia video, titled “Olympics Has Fallen,” appeared designed to denigrate the 2024 Summer Olympic Games in Paris. A nod to the 2013 Hollywood film “Olympus Has Fallen,” it featured an AI-generated version of Tom Cruise, who had no involvement with either production. The report found it got little attention outside a small echo chamber of accounts that routinely share that campaign's videos.

Spokespeople for China’s embassy in Washington, Russia’s Foreign Affairs Ministry, X and YouTube didn’t respond to requests for comment.

Even if their efforts reach few actual people, there is still value for propagandists in flooding the internet in the age of AI chatbots, Sadek said. The companies that develop those chatbots continually train their products on text scraped from the internet, which the models can then rearrange and spit back out.

A recent study by the Institute for Strategic Dialogue, a nonprofit pro-democracy group, found that most major AI chatbots, or large language models, cite state-sponsored Russian news outlets, including some outlets that have been sanctioned by the European Union, in their answers.



NBC News
