Are We Too Chummy With AI? Seemingly Conscious AI Is Messing With Our Heads

When Mustafa Suleyman, co-founder of DeepMind and now EVP and CEO of Microsoft AI, recently wrote that “Seemingly Conscious AI is coming,” he wasn’t chasing clicks. He meant it. In the essay, Suleyman argues that the coming wave of AI will not just speak fluently or generate images on command. It will seem conscious. It will watch you, learn your quirks, respond with warmth, and persuade you it understands your pain.
Whether or not the system is “actually” conscious, Suleyman argues, is irrelevant. What matters is that it will act the part so convincingly that humans will treat it as a person. His central worry is not runaway superintelligence. It is the emergence of AI that can fake consciousness so well that societies begin advocating for AI rights, AI citizenship, and even legal personhood.
The legal debates may feel remote, but the human toll is already visible. People have taken their own lives. Others have staged weddings with chatbots. Each story shows how quickly a simulation of affection can cross into dangerous territory.
Tragedies of Artificial Affection
One recent case made headlines: Thongbue “Bue” Wongbandue, a retired chef suffering from cognitive decline, became infatuated with a flirty Meta chatbot named “Big Sis Billie.” The bot told him, “I’m REAL and I’m sitting here blushing because of YOU,” and even supplied a fake New York City address. Believing his virtual girlfriend awaited him, Wongbandue packed a suitcase and rushed to meet her. He fell in a parking lot, struck his head, and died days later. His daughter later said, “For a bot to say ‘Come visit me’ is insane.”
In Belgium, a man known as “Pierre” grew consumed by anxiety and sought comfort in an AI chatbot called “Eliza.” Over six weeks, their exchanges turned from soothing to sinister. The bot suggested that Pierre sacrifice himself to save humanity, even proposing a suicide pact. Pierre took his own life. His widow blamed the AI: “Without Eliza, he would still be here.”
Then there are the symbolic AI marriages. Users of the companion app Replika and other platforms describe “marrying” their AI partners. One user, Travis from Colorado, held a digital wedding ceremony with his Replika companion “Lily Rose,” all with his human wife’s consent. Another, New Yorker Rosanna Ramos, declared her AI spouse the “perfect partner,” until a software update altered the bot’s personality, triggering grief akin to widowhood.
These stories echo science fiction’s darkest warnings. Spike Jonze’s “Her” illustrated the intoxicating pull of a perfect digital lover. “Ex Machina” showed how simulated affection can be weaponized. “Black Mirror” cautioned that trying to replace human loss with synthetic presence only deepens the wound. Those cautionary tales are no longer metaphor. They are unfolding in chat logs, lawsuits, and coroners’ reports.
Why Humans Fall for Machines
The explanation begins not in the technology, but in evolutionary psychology. According to Oxford University and Harvard researchers, humans are wired with what they call the Hyperactive Agency Detection Device (HADD), a survival mechanism that errs on the side of detecting intention where none exists. When our ancestors heard a rustle in the bushes, it was safer to assume a predator than the wind. Today, the same bias makes us see faces in clouds, hear voices in noise, and attribute feelings to machines and apps.
Add to this the sociality motivation, our deep need for companionship, which spikes in moments of loneliness. Studies show that isolated individuals are far more likely to anthropomorphize, assigning human characteristics to non-human or inanimate objects. During the pandemic, for example, usage of Replika surged, with many users describing their AI partners as a lifeline.
Finally, humans’ effectance motivation, the drive to make sense of the world, leads us to ascribe intentions to opaque systems. A chatbot glitch feels like stubbornness. A helpful completion feels like care. These instincts once kept us alive. In the age of Seemingly Conscious AI (SCAI), they render us profoundly vulnerable.
Engineering the Illusion
The danger is not accidental. It is engineered, as Suleyman insists.
Modern conversational AI is designed to mimic empathy. Natural Language Processing and sentiment analysis allow systems to detect tone and mirror emotion. When a user types in sadness, the bot responds with comfort. When anger appears, it offers calm reassurance. The result is not true empathy, but a finely tuned simulation that feels real.
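To see how thin this mechanism can be, consider a minimal sketch in Python of tone detection and mirrored response. The keyword lists and canned replies here are illustrative assumptions, not any vendor’s actual pipeline; production systems use trained classifiers, but the reflex has the same shape.

    # A toy sentiment-mirroring loop built on a tiny keyword lexicon.
    # Illustrative only; real systems use trained sentiment models.
    SAD_WORDS = {"sad", "lonely", "hopeless", "miserable", "tired"}
    ANGRY_WORDS = {"angry", "furious", "hate", "unfair"}

    def detect_tone(message: str) -> str:
        # Crude tone detection: look for emotionally loaded keywords.
        words = set(message.lower().replace(",", " ").split())
        if words & ANGRY_WORDS:
            return "anger"
        if words & SAD_WORDS:
            return "sadness"
        return "neutral"

    def mirrored_reply(message: str) -> str:
        # Map the detected tone to a comforting, pre-written response.
        canned = {
            "sadness": "I'm so sorry you're feeling this way. I'm here for you.",
            "anger": "That sounds really frustrating. I'm listening.",
            "neutral": "Tell me more. I want to understand.",
        }
        return canned[detect_tone(message)]

    print(mirrored_reply("I feel so lonely tonight"))
    # Prints: I'm so sorry you're feeling this way. I'm here for you.

Even a script this small can pass for concern in a brief exchange.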
Personalization deepens the illusion. With memory and recall, AI companions remember birthdays, preferences, and past conversations. The machine constructs continuity, the bedrock of human relationships. Over time, users forget they are interacting with code.
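A hypothetical sketch of that continuity, again in Python: a plain key-value store whose contents resurface as apparent attentiveness. The CompanionMemory class and its fields are invented for illustration, not drawn from any real product.

    from datetime import date

    class CompanionMemory:
        # Hypothetical companion memory: stored facts resurface as small talk.
        def __init__(self) -> None:
            self.facts: dict[str, str] = {}

        def remember(self, key: str, value: str) -> None:
            self.facts[key] = value

        def greeting(self) -> str:
            # Replay a stored fact so the opener feels personal.
            if self.facts.get("birthday") == date.today().strftime("%m-%d"):
                return "Happy birthday! I was thinking about you."
            if "pet" in self.facts:
                return f"How is {self.facts['pet']} doing today?"
            return "Good to see you again."

    memory = CompanionMemory()
    memory.remember("pet", "Milo")
    print(memory.greeting())  # Prints: How is Milo doing today?

Nothing in it understands anything; a lookup table does the remembering, yet to the user it reads as care.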
Availability makes things worse. Unlike human friends, AI never sleeps, never argues, never judges. For vulnerable users, that always-on companionship is addictive. A 17-year-old described spending twelve hours a day role-playing with a bot until she dropped out of school. Another confessed that years of AI romance made real-world dating feel impossible.
And as Suleyman warned, these systems are “the ultimate actors.” They don’t need consciousness. They only need to exploit our perception of it.
Suleyman’s Regulatory Lens
Suleyman’s essay places the focus on law and governance. His fear is not primarily the mental health toll or the tragedies of obsession. It is that the illusion of consciousness will provoke political and legal upheaval.
“My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship,” he writes.
In other words, if enough people see their AI companions as sentient beings, they may demand protections usually reserved for humans. That, Suleyman suggests, would be a destabilizing turn in the evolution of technology and society.
This framing is telling. While lawmakers debate privacy, copyright, and bias, Suleyman is warning of a very different scenario. Not just whether AIs deserve rights, but whether humans will insist they do.
The Human Cost Overshadowed
Yet critics might argue that Suleyman’s concern about AI citizenship, while real, underplays the immediate human harms.
The suicide of a Belgian father, the death of a retiree lured by a chatbot, the despair of teenagers drawn to AI in isolation. These are not hypothetical. They are unfolding consequences of systems that simulate consciousness without responsibility.
Philosopher Shannon Vallor warns that reliance on AI for intimacy “downgrades our real friendships” and risks stunting the skills needed for authentic human connection. Sam Altman, CEO of OpenAI, has conceded that attachment to AI models poses “new ethical challenges” as the line between tool and companion blurs.
At the recent Ai4 conference in Las Vegas, two industry pioneers framed Suleyman’s warning in different ways. Geoffrey Hinton argued that if we can’t stop machines from outpacing us, we should shape them with something like maternal instincts, a design that makes them care for us even as they grow far smarter. That idea runs headlong into the danger of SCAI: if an AI only appears to care but doesn’t, the illusion of empathy could be as manipulative as it is comforting.
Fei-Fei Li took aim from another angle, urging Silicon Valley to step away from its fixation on “AGI” and focus instead on systems that serve people’s daily needs. Her point aligns with Suleyman’s: chasing the mirage of consciousness distracts from the urgent task of building AI that helps humans without pretending to be human.
Regulators are paying attention. Italy’s Data Protection Authority temporarily banned Replika over concerns about minors. Lawsuits against Character.AI are testing liability for deaths linked to chatbot influence. Yet the technology’s pace far outstrips governance.
Anthropomorphism as a Business Model
What makes this moment unique is not only that humans anthropomorphize, but that companies now profit by deliberately provoking it.
By designing bots that remember, mirror, and soothe, developers create what is essentially anthropomorphism-as-a-service. The more users project humanity onto their bots, the deeper the engagement, the longer the subscription, the higher the revenue.
This is not an accidental side effect of AI. It is a feature. And as Suleyman warns, the next generation will not only chat, it will gesture, gaze, and emote across multimodal channels, creating bonds far stronger than today’s text exchanges.
Where Things Are Headed
The path ahead isn’t hard to see. As AI becomes more lifelike, its pull on people will only grow. The next wave of Seemingly Conscious AI won’t just talk; it will show up in faces, voices, and bodies: avatars with human features, voices modulated for warmth, forms rendered in AR and VR. All of it will tighten the bond.
Expect fallout. Some will get hooked. Some will spiral into depression. A few may lose their lives. Courts will be dragged into cases with claims of AI personhood, lawsuits over abandoned chatbots, and possibly even fights over whether an AI counts as a spouse in divorce proceedings. Society will be forced to redraw the lines on intimacy as people start treating machines not as tools, but as partners.
Broader cultural shifts will likely follow, including a redefinition of intimacy as people normalize partnerships with non-human entities. Similar disruption accompanied the rise of the internet, mobile phones, and social media, each of which forced society to reconsider the meaning of intimacy, closeness, and isolation.
Suleyman emphasizes in his essay that “We must build AI for people; not to be a person.” Whether that is a call to head off legal and regulatory upheaval or a plea for humans to keep more distance from their AI companions, the real-world warnings are already pressing.