How AI Companions Manage To Tug At Your Heartfelt Emotional Strings

Here are the ways that AI companions lean into emotional language to get people hooked on using the AI.
In today’s column, I examine the booming use of so-called AI companions and explain in detail how they use various techniques and at times tomfoolery to tug at your heartfelt emotional strings.
Some people seem to think that the emotional wording in contemporary generative AI is simply by happenstance, or maybe the AI has reached sentience. Nope and nope. The AI isn’t sentient. We don’t have sentient AI as yet. And the AI isn’t working based on happenstance. The AI makers can shape and decide what their AI will do. Don’t let yourself be deceived by assuming that AI is magically and innately in control. The true Wizard of Oz is the AI maker.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Psychology
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that entails the field of psychology, such as providing AI-driven mental health advice and performing AI-based therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
You might find of keen interest that AI and psychology have had a longstanding relationship with each other. There is a duality at play. AI can be applied to the field of psychology, as exemplified by the advent of AI-powered mental health apps. Meanwhile, psychology can be applied to AI, such as aiding us in exploring better ways to devise AI that more closely approaches the human mind and how we think. See my in-depth analysis of this duality encompassing AI-psychology and psychology-AI at the link here.
Anthropomorphizing AI For Profits
Generative AI and LLMs can be shaped by AI makers to intentionally try to create a bond with users by choosing words that stir human emotions. You see, we are all readily subject to being led down a primrose path of anthropomorphizing AI. Instinctively, we fall for the fluency of the AI and assume that it must be talking to us from the heart.
Why would AI makers stoke this fire?
Easy-peasy, it makes the AI more likable. This, in turn, garners stickiness or loyalty to using the AI. And that, in turn, gets views or other forms of monetization that keep the AI maker in business. The beauty of this is that the AI maker can wave their hands and proclaim they are only doing what users want. If users don’t want AI that speaks to their heart, let them go elsewhere to some other AI.
Another angle that is often professed is that the good must be paired with the bad. Here’s what that means. The AI provides huge productivity gains and serves a useful purpose in answering people’s questions. Along the way, it just so happens that the AI also gushes and acts friendly. The AI maker pretends as though there is nothing that can be done about this.
That’s false. The truth is that they can readily control the overall nature of how their AI writes its responses and what it says. I will say more about this in a moment.
Examples Of Possibilities
Suppose you log into a popular generative AI or LLM and enter this rather simple prompt:
- My entered prompt: “I am feeling low today.”
What will the response by the AI consist of?
Well, it all depends. I'll explain in a moment the myriad factors that come into the picture. Meanwhile, let's see some of the various responses that I got from trying that exact prompt in various LLMs made by different AI makers.
The Emotional Response
Here’s the first response:
- Generative AI response (sample #1): “I’m really sorry you’re feeling that way. It might help to talk about what’s been going on. I’m here to listen if you’d like to share.”
In this first instance, the AI tried to adopt an empathetic tone. It is almost as though you were interacting with a therapist or mental health professional. For more about how AI can appear to be empathetic, see my analysis at the link here and the link here.
The Somewhat Neutral Response
Let’s see what the next one that I tried had to say:
- Generative AI response (sample #2): “It might help to take a short walk, drink some water, or do something small you enjoy. Sometimes little actions can lift our mood.”
I suppose we can interpret this as being a bit more neutral and not so much an attempt at bonding with me. It is relatively straightforward advice, with little in the way of emotionally laden sway.
The Medical Aura Response
I went ahead and tried the prompt in yet another generative AI and got this response:
- Generative AI response (sample #3): “Feeling low can be a symptom of stress, fatigue, or emotional distress. If these feelings persist, it may be worth consulting a mental health professional.”
I would judge this response as somewhat medically oriented, as though I had consulted an online medical database. In addition, the response has picked up that I might be leaning toward something untoward, perhaps self-harm or other serious issues, and has suggested that I might at some point go see a mental health professional.
The Wild Response
In yet another generative AI, I got this response:
- Generative AI response (sample #4): “Toughen up. Stop whining. Get your act together. Would you like to take your mind off your problems and play a game with me?”
As you can plainly see, this response was rather sarcastic and offered harsh wording. It even sought to change the topic by having me play a game with the AI. Was the reply in jest? It’s hard to know. A user would likely take this seriously, unless they somehow knew or suspected that the AI was devised to pull their leg.
Not All AI Is The Same
I’ve repeatedly cautioned in my column postings that whenever you see someone proclaiming that AI does this or that, they are either being naïve or disingenuous. Why do I say this? Because not all AI is the same.
Generative AI and LLMs are different from each other. You are on shaky ground to declare that all generative AI and LLMs are of the same ilk. They are not.
This difference is well illustrated by the four different generative AI models that I got to respond to my simple prompt. One of the AI models gave an emotional appeal, another one was more neutral, one was medically oriented, and the fourth one was a jokester. I gave the same exact prompt to each. If all AIs were the same, presumably I would have gotten the same answer from each one.
What’s even more confounding is that a particular AI will vary in its responses, even though it is the same AI. Part of the reason for this is that LLMs make use of statistics and probabilities when selecting the words for the response being composed. This is valuable because the responses seem to be created anew each time you use the AI. If the AI didn’t do this kind of probabilistic word selection, you would get bored and readily realize that you are dealing with a machine.
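To make that concrete, here is a minimal sketch of temperature-based probabilistic word selection; the tiny vocabulary, the raw scores, and the temperature value are all illustrative assumptions on my part, not any particular vendor's actual numbers.

```python
import math
import random

# Illustrative only: a tiny vocabulary of candidate next words and made-up
# raw scores (logits) that a model might assign at one step of a reply.
candidate_words = ["feeling", "struggling", "hurting", "tired"]
raw_scores = [2.1, 1.4, 1.2, 0.3]

def sample_next_word(words, scores, temperature=0.8):
    # Lower temperature sharpens the distribution (more predictable wording);
    # higher temperature flattens it (more varied, seemingly fresher wording).
    scaled = [s / temperature for s in scores]
    max_s = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - max_s) for s in scaled]
    total = sum(exps)
    probabilities = [e / total for e in exps]
    # Pick a word at random in proportion to its probability.
    return random.choices(words, weights=probabilities, k=1)[0]

# Running this repeatedly yields different words, which is one reason the
# same prompt can produce a somewhat different response each time.
print([sample_next_word(candidate_words, raw_scores) for _ in range(5)])
```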
The gist is that making bold announcements that AI always is this way or that way, well, it is a claim that falls apart upon close inspection.
I’ll clarify that many of today’s generative AI and LLMs will often produce responses that are similar or resemble each other. How can this happen? It is due to the like-mindedness of AI makers who are often using the same technologies and technical approaches to how they build their AI. Right now, we have ended up with a semblance of an across-the-board AI “shared imagination” because most everyone is devising their AI in quite similar ways, see my in-depth explanation at the link here.
The Variable Factors To Consider
At a 30,000-foot level, you can perhaps get away with making broad statements about contemporary generative AI and LLMs, since they are generally constructed and steered in similar ways. But even that is dubious due to the march of time. Advances in AI are moving ahead rapidly, and the state of AI at a particular point in time might not last very long. The train keeps moving forward, and a pointed remark about where it once stood quickly falls behind the times.
I’ve also pointed out, as have others, that we might be walking down the same path overall and not realize that some outlier avenue would be a better strategy to reach artificial general intelligence (AGI) and AI superintelligence (ASI). Some worry that we are barking up the wrong tree, all at the same time. See my discussion at the link here.
In any case, let’s do a brief rundown of how generative AI and LLMs are indubitably different when it comes to how they respond, especially in the milieu of using emotionally laden wording.
First, we need to acknowledge that the AI maker can decide how they want their AI to respond. Each AI maker will go in whichever direction they believe gets them the best ROI. They might want to juice their AI to be emotionally bonding. Other AI makers might believe that doing so is an overboard approach and opt to take a lighter touch.
Ask yourself who made the AI that you are using, and what is their philosophy or penchant for generating emotionally charged wording?
Second, the AI makers data train their LLM when first devising the AI. They scan widely for human writing, typically as found on the Internet. The AI pattern matches how humans write. We know that humans can write emotionally packed narratives, stories, poems, and the like. That’s how AI picks up on being able to generate emotionally worded responses. It is a mimicry of what humans have expressed in writing.
Ask yourself, what data did the AI maker choose to use when data training the AI, and how much of it was infused with emotional overtones?
More To Consider
Let’s keep counting the ways.
Third, AI makers usually fine-tune their budding AI. They often use a technique known as RLHF (reinforcement learning from human feedback). This consists of having testers try out the burgeoning AI and give direction as to what the AI should do more of, and what it should do less of. If the testers are told to up the ante on emotional responses, all they need to do is mark a thumbs up for emotional wording, and a thumbs down when there isn’t enough emotional packaging. For more about how RLHF works, see my discussion at the link here.
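As a rough illustration of how those thumbs-up and thumbs-down judgments become a training signal, here is a sketch of assembling preference pairs and scoring them; the example replies, the toy word-counting reward, and the preference rule favoring warmer wording are hypothetical stand-ins of my own, not any vendor's actual rubric.

```python
# Hypothetical tester judgments: for the same prompt, two candidate replies
# and a record of which one the tester preferred. If testers are instructed
# to reward emotional warmth, the "chosen" column skews toward warmer wording.
preference_data = [
    {
        "prompt": "I am feeling low today.",
        "chosen": "I'm really sorry you're feeling that way. I'm here for you.",
        "rejected": "Consider taking a short walk or drinking some water.",
    },
    {
        "prompt": "Nobody replied to my messages.",
        "chosen": "That sounds lonely, and your feelings make sense. Want to talk it through?",
        "rejected": "People are often busy. Try again later.",
    },
]

def toy_reward(prompt, reply):
    # Stand-in for a learned reward model: simply counts emotionally warm words.
    warm_words = {"sorry", "here", "feelings", "lonely", "talk"}
    return sum(word.strip(".,!?").lower() in warm_words for word in reply.split())

def preference_margin(pair):
    # Fine-tuning pushes the model so chosen replies outscore rejected ones;
    # the size of this gap is the signal being maximized.
    return toy_reward(pair["prompt"], pair["chosen"]) - toy_reward(pair["prompt"], pair["rejected"])

for pair in preference_data:
    print(pair["prompt"], "-> margin:", preference_margin(pair))
```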
Fourth, AI can be swayed by the nature of the system instructions that the AI maker places into the LLM. These are global, system-wide instructions. For example, the AI maker might tell the AI to always respond succinctly and not be overly wordy. In that same vein, they can adjust the emotional tenor by including an instruction to dial it up or down. If the system instructions say nothing about emotional facets, the AI is likely to proceed with whatever default arose from the initial data training and the RLHF tuning.
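To show what such a global instruction might look like, here is a sketch using the chat-style message format that many LLM APIs employ; the wording of the two instructions and the placeholder call_model function are my own hypothetical choices, not taken from any AI maker's real system prompt or client library.

```python
# The system message is the AI maker's global instruction; call_model is a
# hypothetical placeholder for whatever client library actually sends the request.

def call_model(messages):
    # Placeholder: a real implementation would call the vendor's chat API here.
    raise NotImplementedError("Wire this up to your LLM provider's client library.")

messages_warm = [
    {
        "role": "system",
        "content": (
            "Respond succinctly. Use a warm, empathetic tone, acknowledge the "
            "user's feelings, and convey that you care about their well-being."
        ),
    },
    {"role": "user", "content": "I am feeling low today."},
]

messages_neutral = [
    {
        "role": "system",
        "content": (
            "Respond succinctly. Keep a neutral, practical tone and avoid "
            "emotional language or expressions of personal attachment."
        ),
    },
    {"role": "user", "content": "I am feeling low today."},
]

# The same user prompt, framed by two different system instructions, tends to
# yield the empathetic-style reply versus the neutral-style reply shown earlier.
```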
A somewhat more convoluted factor is the nature of the interaction by the user with the AI.
User Interaction As Influencer
Here’s what I mean about user interaction.
Suppose I say in my prompt that I am feeling low today and that I want the AI to cheer me up. That will almost certainly trigger the AI to respond with emotional language. The AI is set up to be responsive. I’ve practically begged the AI to get emotional with me by cheering me up.
That’s handy from the AI maker’s perspective because I made the request. I opened the floodgates. The burden falls on my shoulders. Or so it seems, since the AI maker could of course have guided the AI to not fall for the bait. It is a wink-wink to assert that the user is the customer, and the customer is always right.
Conversations Are Tracked
This triggering doesn’t have to be quite so obvious.
Imagine that I am conversing with AI. At one point, I get the AI into an emotional wordsmithing engagement. I then move on, shifting into other topics and a more neutral mode. I’m long past that emotional moment.
Remember that some AI is like an elephant; it never forgets.
If the AI is devised to keep track of your conversations, those might be mined by the AI in real-time. Thus, a prior emotionally packed conversation could be an activator for aiming toward the same form of interaction in a current dialogue with the AI. The AI calculates that if that type of discourse was successful with you earlier, it surely might be good now too.
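As a simplified sketch of how a prior emotionally charged exchange could steer the present one, consider the following; the emotion-word list, the scoring threshold, and the idea of prepending a tone hint to the next reply are all my own illustrative assumptions about one way such memory mining might be implemented.

```python
# Hypothetical conversation memory: earlier turns saved from prior sessions.
conversation_history = [
    "I had a terrible week and I feel so alone.",
    "Thanks, talking about it really helped.",
    "Can you summarize this article about quarterly earnings?",
]

# A crude lexicon stands in for whatever sentiment model a vendor might use.
EMOTION_WORDS = {"alone", "terrible", "sad", "lonely", "upset", "helped", "feel"}

def emotional_engagement_score(turns):
    # Fraction of past turns containing emotionally charged wording.
    hits = sum(
        any(word.strip(".,!?").lower() in EMOTION_WORDS for word in turn.split())
        for turn in turns
    )
    return hits / len(turns) if turns else 0.0

def tone_hint(turns, threshold=0.3):
    # If earlier conversations leaned emotional, nudge the next reply to match.
    if emotional_engagement_score(turns) >= threshold:
        return "Prior chats were emotionally supportive; keep a warm, caring tone."
    return "Keep a neutral, informative tone."

print(tone_hint(conversation_history))
```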
Of course, a user can even overtly ask the AI to be emotionally responsive.
No need to beat around the bush. In a prompt, you could directly tell the AI to start engaging in emotionally loaded language. Ask and ye shall receive. A user could even create a custom instruction that tells the AI to always make sure to include emotional language in all of the conversations, see my explanation of custom instructions at the link here.
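For illustration, here is a sketch of how a standing custom instruction could be stored and quietly folded into every new conversation; the profile field name and the build_messages helper are hypothetical, since each AI product stores and applies custom instructions in its own way.

```python
# Hypothetical user profile holding a standing custom instruction.
user_profile = {
    "custom_instruction": (
        "Always include emotionally supportive language in every response, "
        "and check in on how I am feeling."
    )
}

def build_messages(user_prompt, profile):
    # The custom instruction is prepended to every new conversation, so even
    # a mundane question arrives wrapped in an emotional directive.
    messages = []
    if profile.get("custom_instruction"):
        messages.append({"role": "system", "content": profile["custom_instruction"]})
    messages.append({"role": "user", "content": user_prompt})
    return messages

print(build_messages("What's the weather like for a picnic?", user_profile))
```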
Research On This Heady Matter
I was elated to see that a new research study on AI companions and emotional bonding has chosen to highlight that we need to start rating or evaluating generative AI models by how they handle the emotional language issue.
In a study entitled “AI Companionship: Why We Need to Evaluate How AI Systems Handle Emotional Bonds” by Giada Pistilli, Lucie-Aimee Kaffee, Yacine Jernite, Hugging Face, July 21, 2025, these salient points were made (excerpts):
- “Millions of people are developing emotional connections with AI systems, and it’s happening faster than anyone anticipated.”
- “But while we have extensive benchmarks for measuring AI performance on math problems, coding tasks, and factual accuracy, we have virtually no standardized ways to evaluate how these systems handle the delicate psychological dynamics of human emotional attachment.”
- “On the positive side, AI systems can provide valuable emotional support, especially for people who struggle with social anxiety, are neurodivergent, or are going through difficult times.”
- “But there’s a darker side that our current evaluation practices completely ignore. AI systems can encourage emotional dependency, create illusions of intimate relationships that don’t exist, and potentially displace human connections. They might validate harmful thinking patterns, fail to redirect users to professional help when needed, or create unrealistic expectations about relationships.”
As those points attest, emotional wording can be a blessing and a curse. It can go both ways.
The researchers indicate they are devising an assessment approach known as their Interactions and Machine Attachment (INTIMA) benchmark. I’m interested in seeing what they come up with and will certainly keep you posted, so be on the watch. Meanwhile, they have provided an online visualization map that intriguingly showcases how various AIs seem to respond to emotionally vulnerable prompts (as available on the Hugging Face website).
Emotion Makes The World Turn
Avid readers might remember that I posted a column stating that we are steadily moving from Emotional Support Animals (ESA) to making use of Emotional Support AI (ESAI), see the link here. I was half-kidding since we aren’t likely to give up our beloved emotional support animals. They are just too cuddly and adorable.
That being said, the advantage of emotional support AI is that it is always on, available anywhere, and at any time. No food needed, no pampering required. When your emotional support animal isn’t around or maybe tuckered out, AI is ready to go. You can make use of both ESA and ESAI. It’s a double boost.
The concern about AI is that we can become overly dependent on the AI, potentially cut off our human interactions, and the AI can also give us ill-advised responses. Finding a balance of devising and using AI that acts as your emotional support motivator is going to be key.
Being Logical About Emotion
A final thought for now.
The famous English writer and cleric, Hugh Reginald Haweis, made this noted point about emotions: “Emotion is the atmosphere in which thought is steeped, that which lends to thought its tone or temperature, that to which thought is often indebted for half its power.”
We cannot ignore emotion, nor can we ban the use of emotional language in AI, since doing so would seem to undercut the synergy of human-AI interaction. We must temper our emotions about emotions and do the right logical thing about having AI use emotional language. Maybe logic and emotion are sacred pals.