Sora displayed on a smartphone with the OpenAI logo visible. Photo: NurPhoto via Getty Images
Shrek and baby Shrek taking a stroll through the White House. SpongeBob at Walmart. Mario and Luigi riding elephants through the gardens of Versailles.
What all of these have in common is that they can be cleverly rendered by Sora 2, the latest iteration of OpenAI’s video generation engine. The possibilities are practically endless (later I’ll include a list made up by ChatGPT, for the edification and amazement of the audience). So now we have more complex, full-featured image and video generation, greater deepfake potential and a new app for sharing. What could go wrong?
A lot, according to the many viewers already turned off by what they call “AI slop.”
Another related term is “brain rot”: what happens when someone spends too much time watching boring, mediocre, unchallenging content.
Let’s look at that last word – boring. Ideas like the ones I described above are not classically boring – they’re outrageous. But paraded arbitrarily, endlessly, they may, in fact, become boring. And that’s just part of what has people worried about Sora 2. There’s already been great controversy over whether AI can learn from creative material made by other AI and still generate a result that’s up to a human standard. Now we get to see what all of this looks like on steroids.
The Race to Meme Vending
Let’s take a step back for a little background.
When Sora 1 came out, the response was lackluster. Google Veo ended up stealing a lot of the thunder.
“AI video has been Google’s playground for months, thanks to its Veo 3 model quietly spreading across Gemini, YouTube, and Workspace,” writes Digit at Digit.in. “But with the September 2025 launch of Sora 2, OpenAI has finally caught up to Google in AI video generation and it has done so with a splash. Sora 2 arrives not just as a technical model but as a social app, signaling OpenAI’s intent to compete head-on for creators’ attention.”
Indeed, it’s the social app, and the features, that are creating some of the concerns. The sharing app could springboard more of the “AI slop” to the front of the line and lead to a kind of malaise in a new generation of users jaded by that endless parade. But there’s more. Through the platform’s Cameos feature, users can drop themselves or others into videos and basically deepfake real people at the drop of a hat.
Watch Your Friends
As for OpenAI’s disclaimer that it will only let you deepfake someone who has added you as a friend, let’s think this through. The original friending feature on Facebook lets an added friend see a private profile containing things the user has created. It does not let the friend take personal images and data and do whatever he or she wants with them. So choosing a “friend” on a platform like this is fundamentally different. How much should you trust someone before giving them license to make the digital ‘you’ do whatever comes into their heads?
“At the heart of the app is the ‘cameos’ feature, a powerful tool that lets users inject their likeness into videos,” writes Markus Kasanmascheff at Winbuzzer. “This immediately brings the issue of deepfakes and consent to the forefront, with some experts warning it is a ‘deepfake ticking time bomb.’ While the feature is designed for personal use, critics worry the technology could be repurposed for malicious deepfakes or misinformation.”
From the Top
Sam Altman’s announcement, posted yesterday, seems, in the words of Nathaniel Whittemore, one of my favorite podcasters, “highfalutin” – and pretty wildly optimistic.
“This feels to many of us like the ‘ChatGPT for creativity’ moment, and it feels fun and new,” he wrote. “There is something great about making it really easy and fast to go from idea to result, and the new social dynamics that emerge. Creativity could be about to go through a Cambrian explosion, and along with it, the quality of art and entertainment can drastically increase. Even in the very early days of playing with Sora, it’s been striking to many of us how open the playing field suddenly feels.”
A Cambrian explosion, indeed. It’s going to be a fundamental shift in media.
It Wants to Do What You’re Doing
That brings us back to the other profound worry, about the creative process itself, and the old Latin question: cui bono?
Here’s how the whole thing is described by Ed Newton-Rex, a leading advocate for human creators, who was also quoted by Whittemore on the AI Daily Brief podcast episode on Sora 2.
“The struggle between AI companies and creatives around ‘training data’ — or what you and I would refer to as people’s life’s work — may be the defining struggle of this generation for the media industries. AI companies want to exploit creators’ work without paying them, using it to train AI models that compete with those creators; creators and rights holders are doing everything they can to stop them.”
So Sora 2 may be pitted against the community of humans who have to work for a living, not just spin out “slop” all day and all night.
Permissions, Regulation and Rules
Taking all of the above into account, it’s clear that many of the outcomes around Sora 2 will be decided by the establishment of boundaries – in courts, in boardrooms, and around the world, wherever public opinion has influence and can lead to reasonable policy. Let’s keep an eye on all of this, because it’s going to re-order our world in a big way.