How Are We Feeling About AI?

In these chaotic times, a lot of outcomes boil down to a very simple thing – trust.
I’ve heard from a number of experts over the last year that trust is essential for deploying AI and integrating LLMs into business processes.
We look at this all the time in terms of consumer technologies, and how they are adopted.
It makes sense to take the temperature of the public right now, as it applies to this rapidly moving industry and how it’s getting traction in our lives.
Public Perception on AI
Some telling information comes from a Pew Research Center study in which a significant majority of the public said they think AI will have a negative effect on the country and will harm them personally.
Charts from the study show that this number goes in the opposite direction for experts in AI, people who are better versed in the technologies and have a front row seat to how they work.
Now, in terms of personal harm or benefit, it makes sense that the experts would have a rosier picture in mind, because they stand to gain more from this industry continuing to take off.
You can see granular sentiments too, in terms of how AI will affect jobs, the economy, medical care, education, entertainment, the environment, personal relationships, and more.
And don’t forget elections, where exactly 0% of the public respondents said AI would be a good influence, and only 11% of experts said the same.
Personal Thoughts on AI
I also wanted to include this interesting anecdote from a colleague of mine, Juan Enriquez, when he spoke at Imagination in Action in April.
I’ll try to do his remarks justice without quoting him verbatim, but essentially, Enriquez explained that he sees his outlook on AI change over time – using the days of the week.
On Mondays and Tuesdays, he said, he’s enthusiastic about our AI future. On Wednesdays, he’s not sure. On Thursdays and Fridays, he thinks the world’s ending … and then he takes the weekends off.
It’s definitely good to take the weekends off, as many of us can attest.
The History of IT
Enriquez also spent a good portion of his talk going over how the history of IT applies to today.
He started out with punch cards and their use in the textile industry, and paid homage to figures like Charles Babbage and Alan Turing, who, he said, simply lacked the “horsepower” to put their ideas into practice.
He also gave a nod to the first AI psychotherapist, Eliza, in 1966, and suggested that many people prefer to talk to a machine.
“Why?” he asked. “Because it’s more empathetic. So think about that one for a second. You’re beginning to interact with machines on very personal stuff, and when you interact with that machine, it turns out that that machine is more empathetic than the humans that you’re used to dealing with.”
In addition, Enriquez covered the pace of AI, speaking of tightening compression cycles and shortening adoption curves. It reminds me of part of a report by Mary Meeker that I covered just recently, where she noted that ChatGPT gained users more quickly than the Internet, the personal computer, or the mainframe. This is a critical part of the analysis that many of us share in thinking about how big the scope of AI application will be.
Us and Them
Here’s another major point that Enriquez made that I think was fundamental:
To call it “AI,” he said, is a mistake, referencing innovations like two chatbots talking to one another.
“It’s really AIs,” he said.
In other words, each iteration of this principle is its own entity, with its own digital brain, its own sphere of influence and its own neural build. Just like us!
“We may want to ask ourselves, as we’re asking these questions, how far should we go?” he said. “How fast should we go? What happens if we can’t understand what those machines are doing or saying?”
The Nature of AI
“Some people are calling it intelligence,” Enriquez said. “Some people are calling it artificial learning. Some people are calling x, y, z, but what you’re actually looking at is something that looks an awful lot like an evolutionary tree of life … how do these things talk to one another? How are they operating this stuff? Maybe there are some common principles, but you’re really looking at different results when you try different versions of this stuff, and those results will probably diverge more and more.”
Limitations?
Enriquez asked us to ask the question: what can’t AI do, and why?
“A Turing test for robots is, take a robot, drop it off anywhere in the city, have it be able to get into the house, find the kitchen, make a cup of coffee,” he said. “So think about that. You’ve got to navigate to the house. You’ve got to figure out where the kitchen is, and then, God bless us, we have a lot of ways of making coffee. So are we going to use a French press? Are we going to use a Turing machine? Are we going to boil the thing, are we going to use Nescafe? Are we going to grind the beans? Where are the beans? Where are the filters? Right? The consequences of the machine being able to make a cup of coffee in a city is that all labor becomes 25 cents an hour.”
You can watch the presentation for more, but I thought those questions were relevant to our general sentiment on AI. What things can it do to convince us that it’s beneficial to us humans? Making coffee? Helping with medical diagnosis?
In the end, I think that time will tell what will win us over in terms of recognizing our digital brethren as helpers and assistants, rather than just a vaguely scary concept.