Lawsuit Against OpenAI And ChatGPT Raises Hard Questions About When AI Makers Should Be Reporting User Prompts

Posted by Lance Eliot, Contributor


In today’s column, I examine a momentous and unresolved matter concerning when AI makers ought to report user prompts that seem troubling or out of line.

This topic has recently risen to the topmost headlines due to a civil lawsuit filed on August 26, 2025, against OpenAI, the AI maker of the widely popular ChatGPT and GPT-5 (the case of Matthew and Maria Raine versus OpenAI and Sam Altman). I aim in this discussion to address the overarching societal aspects of the design and use of contemporary AI. The same vexing considerations apply to all the other generative AI and large language models (LLMs), such as the competing products Anthropic Claude, Google Gemini, Meta Llama, xAI Grok, etc.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that involves mental health aspects. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

Inspection Of User Prompts

Most users of generative AI are probably unaware that their prompts are being reviewed by the maker of the AI they are using. People seem to have the false impression that their prompts are strictly private and not available to anyone, anywhere.

Nope.

The AI makers stipulate in their online licensing agreements that the user agrees to allow their entered prompts to be examined for a variety of reasons, including to ascertain if the user is violating any of the terms of use underlying the AI. For example, suppose the person is using AI as a planning tool to commit a crime. Additional reasons for inspecting prompts involve detecting whether a user might be expressing potential harm to others or to themselves.

There are plenty of sensible and appropriate reasons to look at user prompts. Often, prompts are fed through an automated screening tool that seeks to computationally determine whether a prompt is suspicious. Once flagged, the prompt typically goes to a deeper automated analysis and then, as needed, is brought to the attention of a human working for the AI maker. This might be an in-house employee or a contracted third party.
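To make that two-stage flow concrete, here is a minimal sketch in Python of how such a screening pipeline could be wired together. This is not OpenAI’s actual system; the function names, thresholds, and keyword heuristics are all hypothetical stand-ins, meant only to illustrate the idea of a cheap automated check, a deeper automated pass, and a human review queue for the highest-risk items.

```python
# Hypothetical sketch of a two-stage prompt-screening pipeline:
# a cheap automated check, a deeper automated pass, and human review
# reserved for the highest-risk prompts. All names and thresholds are
# illustrative placeholders, not any vendor's real system.

from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"                # nothing suspicious detected
    DEEP_REVIEW = "deep_review"    # flagged, but resolved automatically
    HUMAN_REVIEW = "human_review"  # queued for a human reviewer


@dataclass
class ScreenedPrompt:
    text: str
    risk_score: float  # 0.0 (benign) to 1.0 (clearly concerning)
    verdict: Verdict


def quick_screen(text: str) -> float:
    """Stage 1: lightweight automated check (placeholder keyword heuristic)."""
    flagged_terms = ("harm", "attack", "weapon")
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits * 0.4)


def deep_screen(text: str) -> float:
    """Stage 2: heavier automated analysis (here, a stand-in refinement)."""
    score = quick_screen(text)
    if "tonight" in text.lower() or "tomorrow" in text.lower():
        score = min(1.0, score + 0.2)  # apparent imminence raises concern
    return score


def screen_prompt(text: str) -> ScreenedPrompt:
    score = quick_screen(text)
    if score < 0.3:
        return ScreenedPrompt(text, score, Verdict.ALLOW)
    score = deep_screen(text)
    if score < 0.7:
        return ScreenedPrompt(text, score, Verdict.DEEP_REVIEW)
    # Only the highest-risk prompts reach a human reviewer's queue.
    return ScreenedPrompt(text, score, Verdict.HUMAN_REVIEW)


if __name__ == "__main__":
    print(screen_prompt("What's the weather like today?").verdict)
    print(screen_prompt("I plan to attack someone with a weapon tonight").verdict)
```

The key design point is the funnel: the vast majority of prompts never leave the automated stages, and human eyes are involved only when the automated layers cannot confidently resolve the concern.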

For more details on how AI and humans are assessing prompts, plus related user privacy issues, see my discussion at the link here.

Darned If They Do, Darned If They Don’t

When I bring up this inspection aspect during my talks on AI, attendees frequently balk at the idea that another human being might be ultimately seeing their prompts. If it is just the AI doing so, they seem to be somewhat comfortable with that approach. But if a person, a living, breathing human being, is going to potentially be looking at their prompts, that seems outrageous and extremely unsettling.

Why is it necessary to have a human inspect prompts?

Well, partially this is because we don’t yet have artificial general intelligence (AGI), and for the moment, the only viable way to truly vet a prompt is via the smarts of a fellow human being. Contemporary AI cannot do as subtle or delicate a job as a human at figuring out whether a prompt is overboard and beyond the pale. If a prompt is going to raise alerts and red flags, the best bet is to have a human double-check the AI-flagged concern and then humanly decide the next steps to be undertaken.

You might insist that no one should be examining prompts. The AI shouldn’t do so. Nor should a human do so. All prompts should be completely private. Period, end of story.

The problem with that perspective is that if a user enters a prompt of some untoward nature, the odds are that society would expect the AI maker to have detected the unsavory matter. Imagine that someone describes how they intend to commit a murder. They lay out all the details. Suppose that later on, they do carry out the murder. I believe that society would be up in arms that the AI maker could have possibly prevented the murder by having reviewed the prompts, doing so via AI double-checks and human inspection.

We aren’t yet settled on what legal and financial liability, if any, an AI maker has for the use of their AI. Certainly, the reputation of an AI maker would be severely undercut by an instance in which someone sought to do great harm and the warning signs could have been detected beforehand.

In a sense, AI makers are darned if they do, and darned if they don’t. They are criticized for seemingly violating the privacy of users, and yet if a user does something untoward, the AI maker is bound to be held accountable in one manner or another. It’s the proverbial rock and a hard place.

AI Maker Follow-up Actions

Assume that we can generally agree that it is useful for AI makers to inspect prompts. The next aspect is what an AI maker should do if a prompt appears to be over-the-line.

One viewpoint is that the AI maker ought to contact the user and go no further beyond that step. The AI maker should perhaps send a special message to the user and inform them that they are doing something of a troubling nature while using the AI. Once the user has been informed, the AI maker is off the hook. No further action is required.

Sorry, but that’s a problematic solution.

The person might ignore the messaging. They might opt to continue using the AI and change how they word their prompts to avoid getting further entangled in the detection process. It is also conceivable that the person will simply stop using the AI and seek out a different AI, one that knows nothing about the prompts that triggered the notification in the first place.

Meanwhile, if the prompt had potentially dire consequences, such as doing harm to others, society would likely still fault the AI maker for not having done enough on the matter. It would appear as though the AI maker shrugged off the issue. The AI maker would be accused of merely doing a check-the-box activity.

Why didn’t the AI maker report the matter to the authorities?

This brings us to another confounding issue.

AI As The Big Snitch

Envision that a user enters several prompts that detail a bank robbery they are intending to perform. The AI computationally assesses that the user might really commit the criminal act, solely based on the prompts. The AI routes the prompts to a human inspector of the AI maker. Upon reading the prompts, the human inspector concurs that the person has possible criminal intentions at hand.

The AI maker then contacts the police and provides them with information about the user, along with the conversations the user has been having with the AI.

What do you think of that follow-up by the AI maker?

Yikes, some insist, the AI is being turned into a gigantic snitch. Apparently, whatever you enter as prompts could land you in the arms of the police. We are veering into Big Brother territory. The most innocuous of prompts might suddenly entail the government knocking on your door. This is a chilling outcome of using modern-era generative AI.

A retort is that if the AI maker had a reasonable sense that the user was going to rob a bank, we ought to expect that the AI maker will do their civic duty and notify the authorities. Indeed, if the AI maker doesn’t do so, perhaps we could construe that the AI maker is a kind of indirect accessory to the crime. The AI knew about it, the AI maker knew about it, yet nothing was done to deal with the ominous concern.

As I noted, it’s a tough balance and requires mindful attention.

OpenAI Posts Policy

In an official OpenAI blog post made on August 26, 2025, entitled “Helping people when they need it most,” OpenAI articulated its newly released policy:

  • “When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.”

The post also included this newly released policy statement:

  • “We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.”

The OpenAI policy posting makes additional points about their efforts to continuously improve their AI in a multitude of areas involving life advice and coaching. This encompasses various aspects of AI-related safety improvements and identifies facets such as emotional reliance, sycophancy, and mental health emergencies.
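Taken together, the two quoted statements boil down to a fairly simple escalation rule. The sketch below, in Python, captures that decision logic as I read it; the function and category names are my own hypothetical labels, and the non-referral branches (specialized review, surfacing crisis resources) are paraphrased from the post rather than quoted from it.

```python
# Hypothetical sketch of the escalation logic described in the quoted policy:
# threats of serious physical harm to others that a human reviewer deems
# imminent may be referred to law enforcement; self-harm cases are not
# referred, out of respect for user privacy. Names are illustrative only.

from enum import Enum, auto


class ThreatType(Enum):
    NONE = auto()
    HARM_TO_OTHERS = auto()
    SELF_HARM = auto()


def follow_up_action(threat: ThreatType, reviewer_says_imminent: bool) -> str:
    if threat is ThreatType.HARM_TO_OTHERS:
        if reviewer_says_imminent:
            return "refer to law enforcement"
        return "route to specialized review team (possible account ban)"
    if threat is ThreatType.SELF_HARM:
        return "offer supportive resources; do not refer to law enforcement"
    return "no action"


print(follow_up_action(ThreatType.HARM_TO_OTHERS, reviewer_says_imminent=True))
print(follow_up_action(ThreatType.SELF_HARM, reviewer_says_imminent=False))
```

Notice that the human reviewer’s judgment of imminence, not the automated flag alone, is what tips a case into a law enforcement referral.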

Level The Playing Field

An avenue that is subject to great debate is to use regulatory powers and enact laws that clearly identify what AI makers are supposed to do. Thus, rather than allowing each AI maker to arbitrarily determine the rules to be employed, have lawmakers make those decisions and codify them into laws. Lawmakers presumably represent the views of the people at large and overarching societal preferences.

An advantage to the across-the-board laws approach is that the AI makers would all potentially be playing by the same set of rules. Right now, each AI maker must decide what kinds of inspections they are going to do and decide what next steps they will take on any detections that seem to require follow-up.

Having a level playing field would make life easier for the AI makers. It would also make life easier for users, since the rules would be the same irrespective of which AI you opted to use. Users are in a similar bind right now of not knowing what rules a particular AI is using, unless they happen to dig into the online licensing agreement for each AI app (and even there, the licensing might be vague and relatively non-specific).

The reason there is a heated debate about the legal route is that sometimes laws aren’t quite what one might hope for. Laws can be so vanilla-flavored that the AI makers are still left to their own choices on what to do. On the other hand, laws can be so confining that AI makers might be required to severely restrict their AI and limit its benefits. Might well-intended laws stifle innovation? Could laws potentially make things worse rather than improving the prevailing circumstances?

Lots of handwringing and angst are on both sides of the matter.

For my analysis of how laws governing AI can be both on-target and off-target, see my coverage about the recently enacted Illinois law at the link here, a recently passed Nevada law at the link here, and my overall remarks at the link here.

Caught In The Flux

You might vividly remember that ChatGPT came out on November 30, 2022, and shortly thereafter changed public perception of what AI can do. The semblance of fluency while dialoging and engaging in conversations took the populace by storm. Nowadays, there are around 700 million weekly active users of ChatGPT and GPT-5, and perhaps billions of people using generative AI in total if you count all the users of the other competing products too.

That brings up the stark fact that we are roughly three years into a grand experiment. This is happening on a massive scale, involving millions upon millions, and possibly billions of people. LLMs are being used daily by people across the globe.

How will this inexorably change society?

How will people be impacted in terms of their personal lives?

Plus, you see, some might use AI in ways that we don’t prefer or that are outright perilous. Some have likened existing AI to the days of the Wild West. I’ve noted that it is overly easy for people to essentially fall into a sort of spell when using generative AI, see the link here, perhaps aiding them in quests that aren’t fruitful and that can lead to self-harm or harm to others.

This worldwide experiment is loosey-goosey and carries weighty consequences.

Creating Our Future

AI technology is moving at a frenetic pace. Societal change and adaptation are moving at a much slower pace. The two have to match up. Otherwise, we will continue to flounder in this grand experiment. AI is a dual-use capability, meaning that it has tremendous uses for good, but also regrettably has dour uses that can be unsettling and disastrous.

Abraham Lincoln aptly made a remark that deserves rapt attention: “The most reliable way to predict the future is to create it.” That’s certainly the case regarding AI and the use of AI by humanity.


