AI Apps Are Undressing Women Without Consent And It’s A Problem

By Bernard Marr, Contributor


The rise of AI “nudification” tools makes it shockingly easy for anyone to create a fake naked image of you—or any of your family, friends or colleagues—using nothing more than a photo and one of many readily available AI apps.

The existence of tools that let users create non-consensual sexualized images might seem like an inevitable consequence of the development of AI image generation. But with these apps reportedly downloaded 15 million times since 2022, and deepfaked nude content increasingly used to bully victims and expose them to danger, it’s not a problem that society can or should ignore.

There have been calls for the apps to be banned, and some countries have introduced criminal penalties for creating and spreading non-consensual intimate images. But this has done little to stem the flood, with one in four 13- to 19-year-olds reportedly having been exposed to fake, sexualized images of someone they know.

Let’s look at how these tools work, what the real risks are, and what steps we should be taking to minimize the harms that are already being caused.

What Are Nudification Apps And What Are The Dangers?

Nudification apps use AI to create naked or sexualized images of people from the sort of everyday, fully-clothed images that anyone might upload to Facebook, Instagram or LinkedIn.

While men are occasionally targeted, research suggests that 99 percent of non-consensual, sexualized deepfakes feature women and girls. Overwhelmingly, the technology is used as a form of abuse to bully, coerce or extort victims, and media coverage suggests it is increasingly having a real impact on women’s lives.

While faked nude images can be humiliating and potentially career-damaging for anyone, in some parts of the world they can also leave women at risk of criminal prosecution or even serious violence.

Another alarming trend is the growing number of fake images of minors being created, which may or may not be derived from images of real children.

The Internet Watch Foundation reported a 400 percent rise in the number of URLs hosting AI-generated child sexual abuse content in the first six months of 2025. This type of content is considered particularly dangerous even when no real children are involved: experts say it can normalize abusive imagery, fuel demand, and complicate law enforcement investigations.

Unfortunately, media reports suggest that criminals have a clear financial incentive to get involved, with some making millions of dollars from selling fake content.

So, given the simplicity and scale with which these images can be created, and the devastating consequences they can have on lives, what’s being done to stop it?

How Are Service Providers And Legislators Reacting?

Efforts to tackle the issue through regulation are underway in many jurisdictions, but so far, progress has been uneven.

In the US, the Take It Down Act makes online services, including social media, responsible for taking down non-consensual deepfakes when asked to do so. And some states, including California and Minnesota, have passed laws making it illegal to distribute sexually explicit deepfakes.

In the UK, there are proposals to take matters further by imposing penalties for making, not simply distributing, non-consensual deepfakes, as well as an outright ban on nudification apps themselves. However, it isn’t clear how the tools would be defined and differentiated from AI used for legitimate creative purposes.

China’s generative AI measures contain several provisions aimed at mitigating the harm of non-consensual deepfakes. Among these are requirements that tools should have built-in safeguards to detect and block illegal use, and that AI content should be watermarked in a way that allows its origin to be traced.

One frustration for those campaigning for a solution is that authorities haven’t always treated AI-generated image abuse as seriously as abuse involving real photographs, because of a perception that it “isn’t real.”

In Australia, this prompted the eSafety Commissioner to call on schools to ensure all such incidents are reported to police as sex crimes against children.

Of course, online service providers have a hugely important role to play, too. Just this month, Meta announced that it is suing the makers of the CrushAI app for attempting to circumvent its restrictions on promoting nudification apps on its Facebook platform.

This came after online investigators found that the makers of these apps are frequently able to evade measures put in place by service providers to limit their reach.

What Can The Rest Of Us Do?

The rise of AI nudification apps should act as a warning that transformative technologies like AI can change society in ways that aren’t always welcome.

But we should also remember that the post-truth age and “the end of privacy” are just possible futures, not guaranteed outcomes.

How the future turns out will depend on what we decide is acceptable or unacceptable now, and the actions we take to uphold those decisions.

From a societal point of view, this means education. Critically, there should be a focus on the behavior and attitudes of school-age children, making them aware of the harm these tools can cause.

From a business point of view, it means developing an awareness of how this technology can affect workers, particularly women. HR departments should ensure there are systems and policies in place to support anyone who becomes the victim of blackmail or harassment campaigns involving deepfaked images or videos.

And technological solutions have a role to play in detecting when these images are transferred and uploaded, and potentially removing them before they can cause harm. Watermarking, filtering and collaborative community moderation could all be part of the solution.
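To make the filtering idea concrete, here is a minimal sketch of how a platform might screen uploads against a list of known abusive images using perceptual hashing with the open-source Python imagehash library. The hash list, threshold value, and the is_known_abusive function are illustrative assumptions for this sketch, not any platform’s actual system; real services use purpose-built schemes and shared hash databases.

```python
# Minimal sketch of upload-time filtering via perceptual hashing.
# Assumption: a platform keeps a list of hashes of known abusive images
# (supplied by victims or trusted reporters) and compares each new upload
# against it. Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Illustrative "blocklist" of perceptual hashes. In practice this would come
# from a shared hash database, not a hard-coded placeholder string.
KNOWN_ABUSIVE_HASHES = [
    imagehash.hex_to_hash("f0e4c2d7a1b38c5e"),  # placeholder 64-bit pHash
]

# Maximum Hamming distance at which two hashes count as the same image.
# The right threshold is a policy decision; 8 is a common starting point for pHash.
MATCH_THRESHOLD = 8

def is_known_abusive(upload_path: str) -> bool:
    """Return True if the uploaded image perceptually matches a blocklisted hash."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    return any(upload_hash - known <= MATCH_THRESHOLD for known in KNOWN_ABUSIVE_HASHES)

if __name__ == "__main__":
    # Example: screen a hypothetical upload before publishing it.
    if is_known_abusive("incoming_upload.jpg"):
        print("Upload blocked and queued for human review.")
    else:
        print("No match against known abusive images.")
```

The reason perceptual hashes are used here rather than ordinary file hashes is that they tolerate resizing, cropping and recompression, so a re-uploaded copy of a known image can still be caught even if the file itself has changed.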

Failing to act decisively now will mean that deepfakes, nude or otherwise, are likely to become an increasingly problematic part of everyday life.


