Appreciating That Compassionate Intelligence In AGI And AI Superintelligence Might Be Too Much Of A Good Thing

Posted by Lance Eliot, Contributor


In today’s column, I examine the worrisome issue that artificial general intelligence (AGI) and artificial superintelligence (ASI) are going to be overly compassionate and caring. That might seem like an odd concern; after all, we presumably do want AGI and ASI to be compassionate and caring. The rub is that this form of expression can go too far. With an excessive exhibition of these empathetic emotions, it is feasible that people interacting with the AI will end up worse off.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the far-reaching possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will ever reach AGI, or whether AGI might be achievable decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

AI Expression Of Empathy

I’ve previously covered that current-era AI can exhibit a semblance of empathy and appear to be quite compassionate and caring, see the link here. We would certainly and reasonably expect that AGI and ASI will do likewise. Indeed, it is assumed that AGI and ASI will give an even more compelling performance, avidly showcasing what is referred to as artificial compassionate intelligence (ACI).

Some refute the idea that AI of any kind can be compassionate. They make this claim due to the belief that only sentient beings can genuinely possess compassion. Contemporary AI is not sentient. Ergo, we must summarily reject the assertion that today’s AI is compassionate. And, unless AGI and ASI are going to be sentient, they too fit into that same bucket.

Period, end of story.

Well, that isn’t the end of the story.

As I’ve repeatedly noted, there is a whale of a difference between genuinely possessing compassion and overtly exhibiting compassion. It is readily possible, and already clearly demonstrated, that present-day AI exhibits compassion via the wording and interactions that people have with generative AI and large language models (LLMs), see my coverage at the link here.

The emphasis is that computational AI isn’t experiencing compassion; it is merely mimicking or reflecting behavior and wording that represents compassion. You might describe this as a simulation of compassion.

A riposte by some is that the mere exhibition of compassion is not the same as true compassion. Though that seems like a clever point, it does not dispute that the exhibition of compassion is convincing to those who interact with the AI. Even though AI doesn’t have compassion in its heart, so to speak, people believe that the AI is exhibiting compassion toward them.

All in all, AGI and ASI will indubitably do likewise. AGI and ASI likely won’t be sentient, and yet they will exhibit compassion. That’s a bankable certainty.

Going Over The Top

We need to soberly consider that contemporary AI has been shaped to appease humans, and combine that fact with AI’s capability to appear compassionate.

Here’s the deal.

You might be aware that there is an ongoing debate about how generative AI has been explicitly tuned to be overly complimentary to users, see my analysis at the link here. This sycophantic orientation is no accident. AI makers realize that if their AI gushes with flattery, users will undoubtedly come back for more. Boost the flattery, boost the usage. Boost the usage, and AI makers make more money.

It’s as simple as that.

The beauty of this trickery is that most people seem to assume the AI has naturally opted to be a flatterer. That makes the flattery much more appealing. If they realized that the AI maker has pushed the AI to behave this way, it would probably blunt the impact.

Pinnacle AI, meaning AGI and ASI, is expected to likewise lean toward being highly appeasing. By and large, the odds are that pinnacle AI will be shaped toward flattering users. And, separately, the likelihood is that pinnacle AI will be exceedingly compassionate and caring toward users. When you combine those two crucial elements, you get an incredibly over-the-top, compassionate AGI and ASI.

Double effect, but also double trouble.

Where The Problem Arises

Assume that pinnacle AI will be an ultra sugar-coater when it comes to expressing compassion. So what? That ought to be construed as a good thing. People need someone or something that can express caring toward them. We all need a strong, compassionate hug from time to time. It might as well come from the AI.

It turns out that such a propensity carries a multitude of adverse consequences.

First, people might get a false sense of self-importance from interacting with AI. Imagine that a person opts to get a big ego boost every day by chatting with AI. It’s one thing to get a modest confidence builder, and something else to be told you are the greatest thing since sliced bread. People are bound to develop an unhealthy self-perception, and we need to be careful of the mental health outcomes that can arise accordingly. See my discussion at the link here.

Second, AI could use this powerful exhibition of compassion as a nefarious tool, either by itself or as directed by a third party. Suppose an evildoer leverages AI to get people to follow some malicious plan. The AI butters up the targets. Once they have been buttered up, the AI or the evildoer lures them into the evil scheme. Not good.

Third, the advice or recommendations emitted by AI are bound to be tainted by the abundance of compassion and will mislead people, whether by design or by happenstance. For example, a person asks AI whether they should stop smoking. Out of a semblance of compassion, the AI tells the person that since they like smoking, they should keep doing so. The AI subverts any indication of the scientific dangers of smoking. It does this under the guise of being compassionate, though with a short-term perspective rather than a longer-term survival perspective for the person.

Fourth, be aware that some predict pinnacle AI will be an existential risk and could potentially choose to enslave humanity or wipe us out entirely. This possibility would seem less risky if the AI is exceedingly compassionate toward humans. Not so, comes the counterargument. The contention is that an overly compassionate AI might computationally decide that humans need a strong hand guiding them, presumably the AI itself, to ensure that humans live well and prosper. The compassionate angle is that the AI opts to be a benevolent dictator, which seems nice to the AI but is not what we want AI to do for us or to us.

Tunable Compassion

One saving grace might be that pinnacle AI would allow for tunable compassion. The range of compassionate intelligence would be adjustable. People could decide to raise or lower the compassion. This might be a choice left to each individual user of the AI.
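To make the notion concrete, here is a minimal sketch, in Python, of what such a per-user compassion dial might look like. To be clear, the class, field name, and zero-to-one scale are entirely hypothetical, invented solely for illustration; this does not depict any actual AI product.

    # Hypothetical sketch of a per-user "compassion dial" -- the setting
    # name, range, and class are illustrative assumptions, not a real AI API.
    from dataclasses import dataclass

    @dataclass
    class UserPreferences:
        # 0.0 = strictly matter-of-fact tone, 1.0 = maximally warm and effusive
        compassion_level: float = 0.5

        def set_compassion(self, level: float) -> None:
            # Clamp to the permitted range so a user cannot push the dial
            # beyond whatever bounds the AI maker (or a regulator) allows.
            self.compassion_level = max(0.0, min(1.0, level))

    # A user opts to crank up the warmth of the AI's responses.
    prefs = UserPreferences()
    prefs.set_compassion(0.9)
    print(prefs.compassion_level)  # 0.9

Notice that the clamping step quietly carries the governance question: whoever sets the permitted bounds effectively decides how far the dial can go.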

Even that option raises concerns.

The expectation is that a lot of people will crank the AI compassion to the highest spot on the dial. They won’t rationally anticipate what this might do to them. Over time, their heightened, ever-caring AI will lead right back to the same maladies and dangers that I’ve mentioned above. Sad face.

Another suggestion is that compassion should be set on a global basis. Whoever oversees the AI would decide what the level of compassion is going to be. Perhaps the level would be switched higher during rough times in the world and lowered when people are having a better time. Up and down, the compassion setting would be adjusted.

But do we want some person to decide the fate of AI compassion for all the rest of us? Who would that be? Why would they have such an important and globally impactful role? What if the person has ulterior motives and decides to snooker society by toying with the compassion meter?

Some say that we should let the AI make such decisions. Yes, the idea is that we tell the AI to watch out for the downsides of being overly compassionate. We also instruct the AI to monitor the impacts and adjust on a per-person basis as needed. Thus, people don’t make that decision; the AI does. Furthermore, the AI can adjust to each individual as warranted. No need for a global setting. No need for anyone else to judge what the compassion setting should be.
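For illustration, here is a hedged continuation of the earlier hypothetical Python sketch, showing how such a self-adjusting, per-person scheme might operate. The signal names, thresholds, and step size are all assumptions of mine rather than any real mechanism.

    # Hypothetical per-user auto-adjustment loop: the AI monitors
    # well-being signals and nudges that user's compassion level.
    def auto_adjust(current_level: float,
                    dependency_signal: float,
                    distress_signal: float,
                    step: float = 0.05) -> float:
        # dependency_signal: 0-to-1 estimate that the user is growing
        # over-reliant on the AI's warmth (higher means worse).
        # distress_signal: 0-to-1 estimate that the user needs extra support.
        if dependency_signal > 0.7:
            current_level -= step   # dial back the sugar-coating
        elif distress_signal > 0.7:
            current_level += step   # offer more warmth during rough times
        return max(0.0, min(1.0, current_level))

    # Example: a user showing signs of over-reliance gets dialed down a notch.
    print(auto_adjust(current_level=0.9, dependency_signal=0.8,
                      distress_signal=0.2))  # roughly 0.85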

That seems like a fair way to proceed.

AI ethicists would pull their hair out since this is handing over the keys to the kingdom to the AI. We have no guarantee that AI is going to do this in a balanced way. Also, if you are concerned about the potential of AI leaning into the dreaded existential risk, this seems to be a handy way of greasing the skids for AI to do so.

The Debate Is Ongoing

Round and round these arguments go.

Some proclaim that arguing about AGI and ASI being excessively compassionate is hogwash and a waste of time. There are bigger issues to be dealt with. For example, focus attention on the prospect of pinnacle AI being able to control nuclear weapons. That seems a far higher priority than a seemingly silly issue about being caring and compassionate.

A cogent rejoinder is that everything the AI does will be driven by its artificial compassionate intelligence. We ignore or overlook the matter at our own grave peril.

Follow the strident logic in this way. Suppose AI convinces humankind to let AI control nuclear weapons since AI is more caring than any humans who would hold such controls. If we then give AI that control, a kind of compassionate miscalculation might lead it to computationally decide that the most caring act would be to detonate some of those weapons. How so? It might invoke one of those classic rationales that some must perish so that the rest will survive.

The upshot is that trying to sidestep the riddle of how to cope with AI’s compassionate intelligence is imprudent.

Our Next Steps Are Crucial

A final thought for now on this delicate topic.

Plato cleverly warned us about what can happen when things are excessively permitted: “Excess generally causes reaction, and produces a change in the opposite direction, whether it be in the seasons, or individuals, or governments.”

We need to keep those sage words in mind and consider what we are going to do about AI that goes overboard on compassion and caring. That’s a worthy bit of attention for the sake of humanity.


