Why Seemingly Conscious AI Could Cost Businesses More Than They Expect

By Dr. Diane Hamilton, Contributor


Microsoft AI’s CEO, Mustafa Suleyman, recently introduced the term Seemingly Conscious AI (SCAI), and it has attracted wide attention. SCAI describes systems that act so much like humans you could mistake them for being conscious. ChatGPT and even virtual assistants such as Siri or Alexa fall into this category because they can remember conversations, sound empathetic, and express preferences. Suleyman compares them to “philosophical zombies” because they give the appearance of consciousness without actually being conscious. The concern is that once organizations bring SCAI into daily work, employees may begin treating these systems as colleagues or friends. That shift can weaken curiosity and emotional intelligence, creating business costs that go far beyond the price of the technology.

What Seemingly Conscious AI Means For Work

Humans naturally anthropomorphize, which means we assign human qualities to things that are not human. Saying a car “refuses” to start or that a chatbot “cares” about us are everyday examples. When AI remembers what you said last time or speaks in a warm tone, it can feel like it understands.

That perception creates challenges at work. Once employees begin to treat AI as if it were human, they may also give its advice the same weight as a colleague’s. That changes the process of decision making, shifts power within teams, and even changes who gets blamed when something goes wrong. A system with no awareness or accountability ends up influencing choices that affect real people and outcomes.

Why Curiosity Declines With Seemingly Conscious AI

Curiosity drives progress because it encourages people to question assumptions and look for better options. When an AI system always has an answer, it can feel easier to stop asking questions. Over time, employees lose the habit of wondering “what if” or “why not.” That leads to fewer fresh ideas, weaker engagement, and slower innovation.

Leaders can help by recognizing and rewarding curiosity. If someone challenges an AI-generated answer, treat that as a valuable contribution. When managers ask follow-up questions about AI suggestions, they show employees what thoughtful engagement looks like. That signals that AI can be useful, but it does not replace human judgment.

How Emotional Intelligence Weakens With Seemingly Conscious AI

Anyone who has used programs like ChatGPT knows how overly positive their responses tend to be. They make your input sound insightful and valuable. That tone can tempt people to rely on them for reassurance. It may feel encouraging in the moment, but over time it weakens skills like listening, managing stress, and building trust.

Leaders face the same trap. A manager who uses AI scripts to sound empathetic may come across as rehearsed rather than sincere. Trust grows from genuine attention and presence, not from polished wording. When leaders hand that responsibility over to AI, employees notice the lack of authenticity. The qualities that inspire teams, such as empathy, presence, and honesty, start to erode.

The Human Cost Of Anthropomorphism With Seemingly Conscious AI

In the movie Her, Joaquin Phoenix’s character falls in love with an AI operating system. That might seem far-fetched, but there are real-world examples of what can happen when people blur the line between human and machine. In Belgium, a man struggling with anxiety about climate change began spending long hours talking with a chatbot called Eliza. At first, he turned to it for comfort. Over time, the conversations grew darker, and the AI started to feed his fears instead of easing them. According to his widow, the chatbot even suggested he sacrifice himself for the planet. She later told reporters that without those conversations, her husband would still be alive. Stories like these demonstrate that people can form deep attachments to AI, with devastating consequences.

At work, the impact may be less dramatic but still harmful. Employees who lean too heavily on AI risk isolating themselves from their peers, which raises stress and health concerns. There is also the danger of misleading advice. A customer support AI at the startup Cursor once told users, with complete confidence, that the company had a one-device policy for subscriptions. No such policy existed, but customers believed it and canceled their subscriptions. The company had to take the system offline and repair the damage. Situations like that show how liability grows when AI advice is treated as fact, even when it is wrong.

Why The Financial Costs Of Seemingly Conscious AI Are Greater Than You Think

Curiosity and emotional intelligence are often brushed aside as soft skills, yet losing them has serious business consequences. When curiosity declines, innovation slows and opportunities are missed. When emotional intelligence weakens, turnover rises, engagement falls, and collaboration suffers. On top of that, companies face legal risks, reputational damage, and higher healthcare costs as stress builds in the workforce.

The financial burden tied to SCAI comes from the erosion of these human skills. The ripple effects of lost curiosity, weaker emotional intelligence, and fading human connection are where the real costs stack up.

What Leaders Can Do With Seemingly Conscious AI

Leaders cannot stop the advance of AI, but they can guide how it is used. The priority is to remind employees that AI is a simulation and not a substitute for human thinking or connection. That requires action in three areas:

• Culture: Build curiosity into everyday work. Acknowledge employees who question AI responses rather than accept them without thought.
• Training: Invest in AI literacy. Companies such as Ikea and Intel have already trained tens of thousands of employees to understand both the strengths and limits of AI.
• Leadership modeling: Demonstrate how to question AI outputs while still showing empathy and presence. Employees take their cues from what leaders practice, not what they preach.

Why Seemingly Conscious AI Could Be The Most Expensive Mistake

Seemingly Conscious AI is persuasive, which is what makes it dangerous. When employees believe machines care, they change how they trust, how they make decisions, and how they relate to one another. Those shifts weaken the very skills businesses depend on to succeed. The true cost is not the software. It is the loss of curiosity, the decline of emotional intelligence, and the erosion of human connection. Companies that ignore this risk will face disengagement, higher turnover, and reputational harm that will be difficult to repair. The organizations that act now by setting boundaries, building training, and modeling curiosity and empathy will protect their financial health and strengthen the human qualities that give them an advantage.


