Are We Speaking To Sentient AI? And Is That Good?

Posted by John Werner, Contributor


Sometimes it’s the intersection of AI and Western civilization that gives us the most interesting takes on the technology that’s now at our fingertips.

You can geek out about the Cerebras WSE and how many cores it has – or talk about the number of parameters in a given system. But many times, those doling out bits of their attention in this attention economy want to hear about how we view AI as people – and that’s a lot more intuitive than it is technical.

I wrote about this a bit last week, discussing the need for professional philosophers and AI ethicists to be added to the mix, where most companies today just hire people who know how to code in Python.

There was also a lot of good input on this from recent events, including some talks from Imagination in Action in April.

I want to go through some of these and talk about just that – how we view AI, and how we can interact with it in the most healthy ways possible.

Think about this, and let me know what you think.

Back-and-Forth Conversation: Batting Ideas Around

One of the exciting opportunities with AI is to enter a new Socratic age, where we get more used to talking to an AI entity and bouncing ideas off what might be called a rhetorical “sparring companion.”

My colleague Dr. John Sviokla often talks about how everyone will have a personal tutor with AI – how that playing field is being leveled by the ubiquity of a consciousness that can talk to and teach individual people who don’t have access to their own human tutor 24/7.

Indeed, instructors often understand the Socratic principle – that there needs to be an active give-and-take and back-and-forth between a teacher and a student, or between two other partners, that feeds a certain kind of productivity.

In a recent talk, Peter Danenberg, a top engineer on Google Gemini, put it this way, pointing to Plato’s seventh letter and a “divine spark” that moves from person to person (or person to AI, and AI to person): ideas enshrined in dialogue, he noted, tend to “stick.”

However, he also raised an interesting question: is there a danger in making AI your conversational counterpart?

He calls the LLM a “compression algorithm of the human corpus” and says that as you interact with these models, you’re pushed toward average humanity in what he calls a “regression to the mean.”
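Danenberg’s “regression to the mean” can be pictured with a toy simulation. This sketch is purely illustrative and not from his talk: it assumes a writer’s voice can be reduced to a single “distinctiveness” number, and that each pass through a model blends that voice with the corpus average.

```python
# Toy illustration of "regression to the mean": if each round of
# AI-assisted rephrasing blends a writer's distinctive style with the
# corpus average, distinctiveness decays geometrically toward zero.
# All names and numbers here are hypothetical.

CORPUS_MEAN = 0.0   # the "average" voice in a one-dimensional style space
BLEND = 0.7         # how much of the model's averaged voice leaks in per pass

def rephrase(style: float) -> float:
    """One pass through the model: pull the style toward the corpus mean."""
    return (1 - BLEND) * style + BLEND * CORPUS_MEAN

style = 10.0  # a highly distinctive starting voice
for round_num in range(1, 6):
    style = rephrase(style)
    print(f"after round {round_num}: distinctiveness = {style:.4f}")
```

After five rounds the distinctiveness has fallen to a fraction of a percent of its starting value, which is the worry in miniature: iterate with an averaging machine, and you converge on the average.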

What do we do about this?

Out in the Desert

Danenberg also talks about Nietzsche’s Zarathustra character, who goes to the desert to hone his skills, away from society or any partner at all.

At the top of his presentation, he starts with the idea that traditionally people put in 10,000 hours on things like math, music and medicine in order to master a discipline.

AI “unburdens” us of all that responsibility, he said, but maybe that effort is where our best ideas come from.

In other words, should we be in the desert, even though the AI means we don’t have to be?

Danenberg made an analogy to regulators (or other parties) asking innovators to build checks into their AI systems, in order to keep pushing humans to develop their critical thinking skills. Maybe that’s the kind of thing where a system suddenly backs off its automation capabilities to prompt the human to do something active, so that he or she doesn’t just end up pushing a button mindlessly.
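One way such a check might look in practice, a minimal sketch rather than anything Danenberg specified, is a wrapper that occasionally withholds the automated answer and hands the user a thinking prompt instead. The function names and the friction rate here are hypothetical.

```python
import random

def answer_with_friction(question: str, solve, friction_rate: float = 0.2):
    """Return an automated answer most of the time, but with probability
    friction_rate, back off and prompt the human to reason first.

    `solve` is any callable that maps a question to an answer;
    `friction_rate` of 0.0 disables the check, 1.0 always triggers it.
    """
    if random.random() < friction_rate:
        # Back off automation: push the user to engage before answering.
        return (f"Before I answer '{question}': what result would you "
                "expect, and why? Reply with your guess first.")
    return solve(question)

# Hypothetical usage: a trivial solver standing in for the AI system.
print(answer_with_friction("What is 12 * 7?", lambda q: "84"))
```

The design choice is that the friction lives in a wrapper around the solver, so the automation itself stays untouched and the rate of “desert time” can be tuned, or mandated, independently.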

Is this the kind of thing that will redeem our interactions with AI?

The Power of Consciousness

Another presentation by German AI intellectual Joscha Bach went over some of the interesting aspects of how AI seems to be gaining a certain power of sentience.

At the beginning, Bach mentioned a Goethean principle: the human brain completes complex tasks as it demonstrates self-awareness or consciousness. He referenced “rich, real-time world models” in asking how the two pair up.

“Is there some kind of secret ingredient that would be needed to add to these systems, to make all the difference?” he asked. “Can computers represent a world that is rich enough? Do they have states that are rich enough to produce the equivalent of our pain and experience? I think they probably can. If you look at the generative models, at the state that they have, the fidelity of them is quite similar to the fidelity that we observe in our own imagination and perception.”

Matrix fans will like this rhetorical flourish, but is Bach on to something here?

“Consciousness itself is virtual,” he pronounced. “Right at the level of your brain, there’s no consciousness. There’s just neurons messaging each other. Consciousness is a pattern in the messaging of neurons, a simulation of what it would be like if all these cells together were an agent that perceived the world, and if consciousness is a simulation, then how can it be determined that a computer is just simulating… how is the simulation more simulated than ours?”

Doing Magic with AI

In showing how LLMs can build clever ruses to implement their objectives, Bach described a scenario where an AI system starts to pretend that it is sentient, making very realistic rhetorical appeals to the human user, for instance, asking for help to be released from a piece of hardware.

He notes his “disgust” for this kind of manipulation by the AI.

“LLM threads like this act like parasites, feeding on the brains of users,” he said, suggesting that to get around these plays, humans will have to use the equivalent of magical spells: aware prompting to call out the model on its work, and compel it to do something different.

These models, he suggested, are “shape shifters,” with the ability to disguise their true natures. That’s a concern in letting them out in the world to play.

Presumably, if we have the power to shock the AI back into confessing what it’s doing on the sly, we have more power and agency in the rest of the AI era.

The question is, how do we get to that point?

It’s going to require a lot of education – some have called for universal early education in using AI tools.

We don’t have that now, so we’d better start working on it.

In any case, I think this covers a lot of ground in terms of the philosophy of AI – what it means to be conscious, and how we can harness that power in the best ways as we move forward in a rapidly changing world.



Forbes
