Utah Enacts Law Regulating The Use Of AI For Mental Health That Is More Deft And Less Onerous Than Other Similar Laws

Posted by Lance Eliot, Contributor


In today’s column, I examine a law enacted by Utah that seeks to regulate various aspects of the use of AI for mental health purposes. This is part of an overall trend regarding restricting and guiding how and when AI can be used as a therapeutic tool. I will compare the Utah law with other similar laws, such as ones recently passed by Illinois and Nevada.

All told, the Utah law is somewhat more measured and deftly balances the prudent and proper use of AI against the inappropriate and imprudent use of AI for mental health guidance.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that involves mental health aspects. The evolving advances and widespread adoption of generative AI have principally spurred this rising use of AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

Illinois And Nevada Laws

I previously examined a notable AI and mental health law passed this year by Illinois, see the link here, and one that was also enacted by Nevada, see the link here. Those laws are scoped to prevail within their respective state boundaries. In that sense, these laws are applicable to AI usage within the particular state and do not bear on other states per se.

That being said, one crucial consideration is that an AI maker providing generative AI and large language models (LLMs) via their global network lands within those jurisdictions whenever a person within the particular state opts to use the AI in a mental health capacity. This means that nearly all AI makers need to be aware of these state-level laws. Inch by inch, more states are going to devise and enact these kinds of laws.

It is going to create a convoluted patchwork of differing regulations and a complex web cutting across the entire United States.

In the case of Illinois and Nevada, they both took a rather harsh perspective on AI for mental health. The general takeaway is that AI isn’t supposed to be used for mental health guidance. The concern expressed is that people are turning to AI for mental health advice and do not realize that (presumably) AI is unsuitable for that purpose. As such, the viewpoint was that AI makers need to be held legally and financially accountable if they permit their AI to be used in that fashion. Period, end of story.

I have likened this to the classic adage of tossing out the baby with the bathwater.

Here’s what I mean. Though it is abundantly the case that AI makers ought to be responsible for their wares, an outright ban on such AI usage is a rather blunt and knee-jerk approach. The general position goes so far as to indicate that even mindful therapists who wish to use AI as an augmented therapeutic tool in their practices cannot do so or must do so under stringent state-determined provisions.

A more measured approach could lean into ensuring that AI is used suitably, while also dangling legal and financial swords if the AI makers aren’t adopting necessary AI safeguards to protect people from mental harm.

Exploring The Utah Law

Let’s go ahead and take a quick peek at the Utah law and see what we can make of it. I will share just some mindfully chosen snippets to give you a taste of what the law contains. Please know that the law has numerous twists and turns. Also, my commentary is merely a layman’s viewpoint. Make sure to consult with your attorney to understand the legal ramifications of whatever your own situation entails.

The Utah law is known as H.B. 452, “Artificial Intelligence Amendments” (HB452), and amends Section 13-2-1 regarding the Consumer Protection Division. The focus is on Chapter 72a, labeled “Artificial Intelligence Applications Relating to Mental Health”. The law took effect on May 7, 2025.

The stated scope of the law was indicated this way (excerpts):

  • “This bill enacts provisions relating to the regulation of mental health chatbots that use artificial intelligence technology.”
  • (a) “Defines terms”
  • (b) “Establishes protections for users of mental health chatbots that use artificial intelligence technology”
  • (c) “Prohibits certain uses of personal information by a mental health chatbot”
  • (d) “Requires a mental health chatbot to make certain disclosures to users”
  • (e) “Provides enforcement authority to the Division of Consumer Protection”
  • (f) “Establishes requirements for creating and maintaining policies for mental health chatbots”
  • (g) “Creates rebuttable presumptions for suppliers who comply with policy requirements”
  • (h) “Provides a severability clause”

Keep in mind that the scope of these laws is with respect to people in Utah who might engage in AI for mental health purposes.

Emphasis On Protections

In the bullet point indicated as “b” in the above excerpt of the law, please observe that the language says that protections are being established. I mention this wording because it is quite significant. Whereas the Illinois and Nevada laws seemed to concentrate on nearly totally restricting the use of AI for mental health, the Utah law takes a different track and articulates protections that AI makers need to be accountable for.

In other words, the Utah law doesn’t try to ban such AI usage and instead stipulates that necessary AI safeguards must be put into action. People can indeed use AI for mental health advice, but the AI makers need to take vital measures to try and make this usage a relatively safe experience.

The protections include making disclosures to users about how their data will be collected by the AI. This is a worthy consideration since many people do not realize that their prompts are being collected by the AI makers. That practice is typically buried in dense online licensing agreements, which usually proclaim that the AI maker can have their AI team inspect the prompts, use the prompts to further train the AI, and so on.

Privacy intrusiveness is likely heightened when people pour out their hearts while engaging the AI in mental health dialogues. For more details about the use of AI overall and the daunting prevalence of privacy issues, see my discussion at the link here.

Another protection identified by the Utah law is that users are to be given a heads-up by the AI maker about what their AI is capable of, along with what their AI is not capable of. You see, some people seem to fall into a mental trap of thinking that AI is magical and can cure mental health woes (see my discussion at the link here). To try and cope with this misperception, the law seeks to ensure that appropriate cautions and warnings are displayed to users.
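
To make this a bit more tangible, here is a minimal sketch, written in Python, of how an AI maker might gate a mental health dialogue behind these kinds of disclosures. To be clear, the notice wording, the function names, and the acknowledgment step are entirely my own illustration and are not language drawn from the Utah law.

```python
# Hypothetical illustration only: a pre-session disclosure gate that an AI maker
# might place in front of a mental health chatbot. The notice text and the
# acknowledgment step are my own sketch, not wording required by HB 452.

DATA_COLLECTION_NOTICE = (
    "Your messages in this session may be stored and reviewed by the provider, "
    "and may be used to improve the AI, as described in the privacy policy."
)

CAPABILITY_NOTICE = (
    "You are talking with an AI program, not a licensed therapist. It can offer "
    "general wellness information but cannot diagnose or treat mental health "
    "conditions. In a crisis, contact local emergency services or a crisis line."
)


def present_disclosures(ask_user) -> bool:
    """Show the notices and return True only if the user explicitly
    acknowledges them. `ask_user` is any callable that displays a prompt
    and returns the user's reply as a string."""
    for notice in (DATA_COLLECTION_NOTICE, CAPABILITY_NOTICE):
        reply = ask_user(f"{notice}\nType 'I understand' to continue: ")
        if reply.strip().lower() != "i understand":
            return False  # do not start the session without acknowledgment
    return True


if __name__ == "__main__":
    if present_disclosures(input):
        print("Session may begin.")
    else:
        print("Session not started; disclosures were not acknowledged.")
```

A real deployment would presumably log the acknowledgment and repeat the notices periodically, but the gist is the same.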

Some Believe Protections Aren’t Enough

Not everyone goes along with the idea that protections can do the trick when it comes to AI usage for mental health.

The worry is that no matter how many protections you impose, there are still going to be people who will use AI and get untoward mental health advice. The person might be so entrenched in the AI that they won’t realize the AI is leading them astray.

Furthermore, even if the AI isn’t the culprit in souring their mental health, a person might misunderstand the AI. They will tend to interpret the AI as reaffirming a delusion or otherwise skewing them into a mental health abyss. The AI might not be actually emitting anything unsavory, and instead, the person is opting to interpret innocuous discourse to their own detriment.

The key point is that those in the camp holding that protections will not be sufficient would insist that some kind of ban is the only viable recourse. Stop the chances of anything going awry. Just do not let people use AI for mental health purposes. If they aren’t engaged in AI for that usage, there is presumably zero chance that any mental health difficulties will arise via the use of AI.

Whether the banning route is even realistic is a hefty counterpoint to such thinking. Back and forth the heated debate goes. Time will tell how this plays out.

What Counts As AI For Mental Health

One of the biggest difficulties in devising laws on the topic of AI for mental health involves defining what constitutes AI for mental health. I know that seems like an odd statement.

Allow me to explain it.

Suppose you decide to use a generic generative AI such as ChatGPT to get mental health advice. There are other AIs that are tailored to the mental health arena; see my coverage at the link here. The AI that is tailored will usually be conspicuously marked and marketed for that purpose. Generic generative AI is not conventionally marketed for that singular purpose and instead is often touted as a general Q&A tool that answers your overall questions.

We all know that generic generative AI can be steered in the direction of answering questions about mental health. In fact, the most popular use of generic generative AI is for that self-directed purpose (see my analysis of the latest rankings at the link here). You don’t have to use generic generative AI to get mental health advice. You can use such AI and opt to never veer into mental health guidance at all. It is up to the user to decide.

Would you say that generic generative AI is a mental health chatbot, or, since it isn’t specifically aimed that way, does it end up outside of that scope?

Here is how the Utah law handles this conundrum (excerpts):

  • “Mental health chatbot means an artificial intelligence technology that: (i) uses generative artificial intelligence to engage in interactive conversations with a user of the mental health chatbot similar to the confidential communications that an individual would have with a licensed mental health therapist; and (ii) a supplier represents, or a reasonable person would believe, can or will provide mental health therapy or help a user manage or treat mental health conditions.”
  • “Mental health chatbot does not include artificial intelligence technology that only: (i) provides scripted output, such as guided meditations or mindfulness exercises; or (ii) analyzes an individual’s input for the purpose of connecting the individual with a human mental health therapist.”

This is an intriguing definition and somewhat unlike the definitions given in other similar laws. One aspect that most such laws do tend to have in common is the indication that if an AI maker claims their AI provides mental health advice, they land in this scoped zone (in other words, if “a supplier represents” as such).

The twist in this law is the phrasing that if “a reasonable person would believe” that the AI either will or can provide therapy, or otherwise aid a user regarding mental health, the AI is considered in this scoped zone.
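
For those building such systems, the two prongs and the carve-outs can be thought of as a checklist. Here is a rough, non-authoritative sketch in Python of that definitional test; the field names are invented for illustration, and any real scoping determination belongs with legal counsel.

```python
# Rough, non-authoritative rendering of the Utah definition as a checklist.
# The field names are invented for illustration; this is not legal advice.
from dataclasses import dataclass


@dataclass
class ChatbotProfile:
    uses_generative_ai: bool               # prong (i): generative AI conversation
    conversation_resembles_therapy: bool   # prong (i): resembles therapist dialogue
    supplier_represents_therapy: bool      # prong (ii): supplier's own claims
    reasonable_person_would_believe: bool  # prong (ii): reasonable-person belief
    only_scripted_output: bool             # exclusion: guided meditations, etc.
    only_refers_to_human_therapist: bool   # exclusion: referral-only tools


def within_utah_scope(p: ChatbotProfile) -> bool:
    """True if the profile appears to meet the law's 'mental health chatbot'
    definition: both prongs satisfied and neither exclusion applies."""
    prong_one = p.uses_generative_ai and p.conversation_resembles_therapy
    prong_two = p.supplier_represents_therapy or p.reasonable_person_would_believe
    excluded = p.only_scripted_output or p.only_refers_to_human_therapist
    return prong_one and prong_two and not excluded
```

Notice that the “reasonable person would believe” prong is the one a developer cannot simply toggle off by avoiding marketing claims.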

I would wager that once AI makers are charged with a claim of violating this particular law, a lot of attorney billing time will go toward arguing this noteworthy point. It goes like this. I’m sure that a sharp lawyer for an AI maker would certainly try to take the posture that their AI falls outside the law, since no reasonable person could somehow think that the AI provides therapy.

Good luck with that argument.

Limiting Liability

Yet another of the many intriguing twists and turns in this law is that an AI maker can potentially opt to file paperwork beforehand that acknowledges their AI as providing mental health capabilities. To some degree, this can limit their liability if they are later charged with potentially violating the law.

I won’t go into the details here since there is a lot involved, but one aspect I especially found useful was a list of the items that an AI maker would need to file. I bring this up because AI makers would be wise to put together this kind of list and consider whether to file it or not (one issue is that an AI maker might be worried about revealing proprietary intellectual property, which is yet another reason to involve legal counsel in these matters).

Here is a handy list that I put together based on Subsection 3:

  • (1) Indicate the intended purpose of the mental health chatbot.
  • (2) Indicate the abilities and limitations of the mental health chatbot.
  • (3) Describe how licensed mental health therapists were involved in the development and review process.
  • (4) Describe how the mental health chatbot was developed and is monitored on an ongoing basis in a manner consistent with clinical best practices.
  • (5) Describe how testing was conducted prior to making the mental health chatbot publicly available and regularly thereafter, to ensure that the output of the mental health chatbot poses no greater risk to a user than that posed to an individual in therapy with a licensed mental health therapist.
  • (6) Identify reasonably foreseeable adverse outcomes to, and potentially harmful interactions with, users that could result from using the mental health chatbot.
  • (7) Provide the established mechanisms for a user to report any potentially harmful interactions from use of the mental health chatbot.
  • (8) Showcase the implementation of protocols to assess and respond to the risk of harm to users or other individuals.
  • (9) Provide the details of actions taken to prevent or mitigate any such adverse outcomes or potentially harmful interactions.
  • (10) Explain the implementation of protocols to respond in real time to acute risk of physical harm.
  • (11) Identify how regular, objective reviews of safety, accuracy, and efficacy occur, which may include internal or external audits.
  • (12) Showcase that users are provided with any necessary instructions on the safe use of the mental health chatbot.
  • (13) Explain how verification is undertaken to ensure that users understand they are interacting with artificial intelligence.
  • (14) Describe how it is ensured that users understand the intended purpose, capabilities, and limitations of the mental health chatbot.
  • (15) Describe that there is a prioritization of user mental health and safety over engagement metrics or profit.
  • (16) Show that measures to prevent discriminatory treatment of users have been implemented.
  • (17) Provide documentation and validation of how the mental health chatbot ensures compliance with security, privacy, and applicable consumer protection requirements.

Anyone devising AI for mental health would be wise to review this list and consider what kind of documentation they have or do not yet have. I’ve seen many AI-for-mental-health apps that do not have this kind of documentation, which is a sign that the company devising the AI is probably not versed in proper software engineering practices and will likely get burned by its lack of comprehensiveness and rigor.
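
As a practical matter, a team could track these items as a living checklist rather than a one-off memo. Here is a small, hypothetical Python sketch of that idea; the item wording is paraphrased from the list above, the file paths are placeholders, and the structure is simply my own suggestion rather than anything the law requires.

```python
# Hypothetical documentation tracker for the Subsection 3 filing items.
# Item wording is paraphrased; the "evidence" paths are placeholders.

FILING_ITEMS = {
    1: "Intended purpose of the mental health chatbot",
    2: "Abilities and limitations",
    3: "Involvement of licensed therapists in development and review",
    4: "Development and ongoing monitoring consistent with clinical best practices",
    5: "Pre-release and ongoing testing of output risk",
    6: "Reasonably foreseeable adverse outcomes and harmful interactions",
    7: "User mechanism for reporting harmful interactions",
    # ... items 8 through 17 continue in the same fashion
}


def report_gaps(evidence: dict[int, str]) -> list[int]:
    """Return the item numbers that have no documented evidence yet.
    `evidence` maps item number -> path or link to the supporting document."""
    return [n for n in FILING_ITEMS if not evidence.get(n)]


if __name__ == "__main__":
    collected = {1: "docs/purpose.md", 2: "docs/capabilities.md"}  # placeholders
    print("Items still lacking documentation:", report_gaps(collected))
```

Even a simple tracker like this forces the uncomfortable question of which items have no supporting documentation at all.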

More AI Usage Is Expected

The use of AI for mental health is going to continue to expand. No one can dispute that straightforward assertion. We are in the midst of a grand and wanton experiment. Globally, AI is being used by the populace at scale for getting mental health advice. You can reasonably expect that billions upon billions of people are going to rely on AI for their mental health guidance.

Laws governing the use of AI should be devised in a Goldilocks manner. The porridge should not be too hot, nor too cold. It needs to be just right. There is a case to be made that AI can provide a worldwide means of bolstering mental health. Meanwhile, from an AI duality perspective, there is also a solid chance that AI can be detrimental to mental health. It’s a good news and bad news affair.

A final thought for now. Robert Frost famously made this remark: “The best way out is always through.” Can we adequately make our way through the advent of AI that provides mental health advice, including whether the AI does so by design versus simply as a byproduct? Society needs to make that a top priority.

The mental health of the world depends on it.


