Why AI Will Never Truly Understand Your Feelings (And Why That Matters)

Posted by Bernard Marr, Contributor | 2 weeks ago


AI is undoubtedly reshaping our lives, but there’s still a great deal of hype surrounding it. One of today’s most popular narratives is that machines are learning to understand human feelings and emotions.

This is the domain of affective computing, a field of AI research and development concerned with interpreting, simulating and predicting feelings and emotions in an effort to navigate the complex, often unpredictable landscape of the human psyche.

The idea is that emotion-aware AI will lead to more useful, accessible and safer applications.

But can a machine ever truly understand emotion? After all, machines can’t truly feel; they can only analyze, estimate, or mimic based on limited and often superficial models of human behavior.

This hasn’t stopped businesses from pouring billions into building tools and systems designed to recognize our feelings, respond empathetically, or even make us fall in love with them.

So, what are we really talking about when we talk about emotional AI? With therapy and companionship emerging as top use cases for generative AI, can we trust emotion-sensing systems to handle our inner lives responsibly when they can’t actually experience emotion themselves?

Or has the whole concept been concocted by marketers keen to sell us the “final frontier” narrative of sentient, human-like machines? Let’s take a look:

Understanding Artificial Emotional Intelligence

First, what do emotions even mean in relation to machines? Well, the simple answer is that, to a machine, emotions are just another form of data.

Affective computing focuses on detecting, interpreting and responding to data on human emotional states. That data can come from voice recordings, image recognition algorithms trained on facial expressions, written text, or even the way we move our mouse and click when shopping online.

It can also include biometric data like heart rate, skin temperature and the body’s electrical activity.
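To make the idea of emotions-as-data concrete, here is a minimal sketch of the text channel, assuming the open-source Hugging Face transformers library is installed. The specific pretrained model named below is just one publicly available example chosen for illustration, not a claim about what any commercial product uses:

```python
# Minimal sketch of text-based emotion detection. Assumes the Hugging Face
# `transformers` library; the model named here is one publicly available
# emotion classifier, chosen purely for illustration.
from transformers import pipeline

# Downloads the pretrained model on first run; it classifies text into
# basic emotion categories such as anger, joy, sadness, fear and surprise.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

for message in [
    "I've been on hold for an hour and nobody can help me.",
    "Thank you so much, that fixed it right away!",
]:
    top = classifier(message)[0]  # top predicted label and its confidence
    print(f"{top['label']:>8} ({top['score']:.2f})  {message}")
```

Note that the output is only a probability over labels: a statistical guess about the text, not an experience of the emotion behind it.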

Emotional AI tools analyze patterns in this data and use those patterns to interpret or simulate emotional interactions with us. This could include customer service bots detecting frustration, or vehicle systems that detect and react to a driver’s state of mind.
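As a deliberately simplified sketch of that customer-service pattern, the toy bot below estimates frustration from crude text signals and hands off to a human when a threshold is crossed. The keywords, weights and threshold are invented for demonstration; real systems learn such signals from data:

```python
# Toy illustration of a support bot that estimates customer frustration
# from simple text signals and escalates when it gets too high. The
# marker list, weights and threshold are invented for demonstration.

FRUSTRATION_MARKERS = {"ridiculous", "useless", "again", "still broken", "waste"}

def frustration_score(message: str) -> float:
    """Crude score in [0, 1] from keyword hits, capitals and exclamations."""
    text = message.lower()
    hits = sum(marker in text for marker in FRUSTRATION_MARKERS)
    shouting = sum(ch.isupper() for ch in message) / max(len(message), 1)
    return min(1.0, 0.25 * hits + shouting + 0.1 * message.count("!"))

def respond(message: str) -> str:
    """Escalate to a human agent once estimated frustration crosses 0.5."""
    if frustration_score(message) >= 0.5:
        return "I'm sorry this has been difficult. Connecting you with a human agent."
    return "Thanks for the details. Let's try the following steps..."

print(respond("This is RIDICULOUS, the app is still broken!!"))  # escalates
print(respond("Hi, I'd like to update my billing address."))     # doesn't
```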

But emotions are complicated things that are highly open to interpretation (including across different geographies and cultures), and it’s often critically important that they aren’t misread.

The more data an affective or emotional AI app has, the more closely it will simulate human emotion, and the more likely it will be to accurately predict and respond to our emotional needs.

But data alone isn’t enough for a machine to truly “feel”. Processing power isn’t the bottleneck, either: research suggests that machines already process data much more quickly than our brains do.

Instead, it’s the far greater complexity of our brains, when compared to even the most sophisticated artificial neural networks and machine learning models, that makes us capable of truly feeling and empathizing.

The Ethics Of Emotional AI

This raises some important ethical questions: Is it right to allow machines to make decisions that could affect our lives when we don’t fully comprehend their ability to understand us?

For example, we might allow a machine to make us feel cautious or even scared in order to warn us against doing something dangerous. But will it know to keep that fear proportionate to the threat, rather than frightening us so badly that it causes trauma or distress?

And will chatbots and AIs designed to act as virtual girlfriends, partners or lovers understand the implications of provoking or manipulating human emotions like love, jealousy or sexual attraction?

Overstating the ability of machines to understand our emotions poses particular risks that will have to be given serious thought.

For example, if people believe that AI understands or empathizes with them to a greater extent than it actually does, they can’t be said to be fully informed when it comes to trusting its decisions.

This could be considered a form of manipulation, particularly when, as will inevitably happen, the AI’s true purpose isn’t really to help the user but to drive spending, engagement or influence.

Risks And Rewards

Developing emotional AI is big business, as it’s seen as a way to deliver more personalized and engaging experiences, as well as to predict or even influence our behavior.

Tools like Imentiv are used in recruitment and training to get a better understanding of how candidates will react to stressful situations, and cameras have been used on the São Paulo subway to detect passengers’ emotional responses to advertising.

In one controversial use case, UK rail operator Network Rail reportedly sent video data of passengers to Amazon’s emotion analytics service without their consent.

The increasing prevalence of these systems, and their potential to invade our privacy (of our thoughts, no less), have prompted lawmakers in some jurisdictions to take action. The European Union’s AI Act, for example, bans the use of AI that detects emotions in workplaces and schools.

One reason for this is the risk of bias: it has already been shown that machines’ ability to accurately detect emotional responses varies according to race, age, gender and culture. In Japan, for example, a smile is more frequently used to disguise negative emotions than in other parts of the world.
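As a hedged illustration of how such bias is usually quantified, the sketch below compares a classifier’s accuracy across demographic groups on labeled evaluation data. The groups and records here are invented placeholders, not real measurements:

```python
# Sketch of how emotion-recognition bias is commonly measured: compare a
# model's accuracy across demographic groups on labeled test data. The
# records below are invented placeholders, not real measurements.
from collections import defaultdict

# Each record: (demographic group, true emotion, model's prediction)
evaluation_data = [
    ("group_a", "joy", "joy"),
    ("group_a", "anger", "anger"),
    ("group_a", "sadness", "sadness"),
    ("group_b", "joy", "neutral"),
    ("group_b", "anger", "anger"),
    ("group_b", "sadness", "joy"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, predicted in evaluation_data:
    total[group] += 1
    correct[group] += truth == predicted

for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy "
          f"on {total[group]} samples")
# A persistent accuracy gap between groups is the signature of the bias
# described above.
```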

This opens the possibility of AI driving new forms of discrimination—clearly, a threat that has to be understood and prevented.

It’s Been Emotional

To conclude, while it’s clear that AI can’t truly “feel,” dismissing the implications of its ability to read and respond to our feelings would be a serious mistake.

The very idea of letting machines read our minds by interpreting our emotional responses will rightly set alarm bells ringing for many. It clearly creates dangerous opportunities that the ill-intentioned will be quick to exploit.

At the same time, affective computing may hold the key to unlocking therapies that can help people, as well as improving efficiency, convenience and safety in the services we use.

It will be up to us, as developers, regulators or simply users of AI, to ensure that these new technological capabilities are integrated into society in a responsible way.


