Are We Paying Too Much Attention To Machines?

Posted by John Werner, Contributor



As we delve into everything that artificial intelligence can do today, we also run into questions about what we choose to hand over to the technology.

In some ways, you could boil this down to talking about the attention mechanism.
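For readers curious what the attention mechanism actually computes, here is a minimal sketch of scaled dot-product attention, the core operation inside transformer models, written in plain Python. This is purely illustrative and not drawn from any of the sources discussed here: a query vector is compared against a set of keys, the similarities are normalized into weights, and those weights decide how much of each value the model "pays attention" to.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Each value is weighted by softmax(query . key / sqrt(d_k)) --
    the mechanism that lets a model decide what to attend to.
    """
    d_k = len(query)
    # Similarity of the query to each key, scaled for stability
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k)
              for key in keys]
    m = max(scores)                      # subtract max before exp (numerical stability)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]  # softmax: nonnegative, sums to 1
    # Output is the attention-weighted blend of the value vectors
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

# Toy example: the query resembles the first key, so the first value dominates
out, w = attention([1.0, 0.0],
                   keys=[[1.0, 0.0], [0.0, 1.0]],
                   values=[[10.0, 0.0], [0.0, 10.0]])
print(w)
```

The weights always sum to one, which is why "attention" is a fitting metaphor: the mechanism has a fixed budget and must choose how to spend it, much as we do.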

Stephen Wolfram is a renowned data scientist and mathematician who often talks about the ways that AI curates human attention, or the ways that we can direct it to focus on what’s useful to us.

Here’s a bit of what he said in a recent talk that I wrote about a few weeks ago:

“Insofar as we humans still have a place, so to speak, it’s defining what it is that we want to do, and that’s something that you can specify more precisely computationally,” Wolfram said. “That’s how you get to (the answer) that is precisely what we want.”

Interested in the intersection of human and AI attention, I typed the following question into Microsoft Copilot: “are we paying too much attention to machines?”

Here are the five sources that the model drew on in its reply.

Machine Feedback: Various Contributions

The first one is from one of our own authors at Forbes, Curt Steinhorst, who asked: how will we keep people at the center of business?

“We seem to believe that we are only one ‘life hack’ away from limitless productivity, that the skilled use of human focus can be reduced to a productivity system, and that if we simply want it bad enough, we can beat the machines at their own game,” Steinhorst writes. “But this attitude amounts to a passive indictment of our innate humanity, and it is a problem. We will never catch machines and digital tools in the ways they excel—but there is reason to believe that technology will never catch up to humanity in the ways that we excel. The key is to understand and nurture the differences, rather than pursue the parallels.”

The second source Copilot shows is a scientific paper in the International Journal of Information Management that asks: what is it about humanity that we can’t give away to intelligent machines?

I’m going to quote from the conclusions of the study:

“Humans must retain the role of meaningful, responsible critique of the design and application of AI, and the intelligent machines it can create. Critique is a vital concept that humanity can retain as a means to ensure liberation from intelligent machines. Suppose intelligent machines are used to help shape decision processes in life-changing situations, such as criminal court proceedings, or to aid emergency care first responders in disaster situations. In that case, they should serve only as referees or expert guides and not as decision-makers. It is critical that such machine ‘referees’ or ‘guides’ should be subject to constant human critique. Second, a human must be kept in the loop of intelligent machine decision-making processes. This involvement is vital to preserve our ability to systematically reflect on the decisions we make, which ultimately influence our individuality, a central feature of humanism.”

I think that’s useful, too.

The third source is a LinkedIn piece from Shomila Malik noting that the brain samples for information about four times per second, and discussing how human attention is paid. It leads nicely into the next piece I'll summarize: here, the emphasis is on prolific media and stimuli "flooding the zone" and overwhelming our attention spans.

GIGO: We Need More Attention Curation

There’s an interesting proposition in the fourth link, which discusses recent work by writers like Ezra Klein. The author also cites a theory from professor of psychiatry Joel Nigg: in a nutshell, that our attention is being degraded by factors like a pathogenic environment, inadequate sleep, unhealthy diets, air pollution, lack of physical activity, other health conditions, overwork, excessive stress, early trauma, relationship strains, and smoking cigarettes.

In the last of the links, at the New York Times, Stephen Hawking is quoted saying artificial intelligence could be a real danger, and explaining the problem this way:

“It could design improvements to itself and outsmart us all,” Hawking theorized.

I’ll let that comment speak for itself. (Be sure to check out Hawking’s words on “killer machines” and frightening scenarios, and remember, this guy is a renowned scientist.)

Watson, Jeopardy and Computer Reasoning

In a recent talk at Imagination in Action, David Kenny talked about applying lessons from IBM Watson’s performance on Jeopardy, and other landmarks of AI design.

In general, he noted, we’re moving from the era of inductive reasoning to one of deductive and affective reasoning.

He mentioned a weather app giving probabilities in percentages rather than a clear answer, and the need to prompt-engineer LLMs to get good results, instead of just accepting whatever they say the first time.

A new generation, he said, is generally becoming more trusting of AI for information on medical conditions, relationships, financial strategies, and more.

“There’s just been an enormous trust put in this,” he said. “It’s all working for them on a very personalized basis. So we find that there are people getting their own information.”

Human interactions like dating and marriage, he said, are declining, and people trusting the machines more can be good, or, in his words, “super-dangerous.”

“(Humans need to) build critical thinking skills, build interpersonal skills, do things like this that bring them together with each other, spend time with each other in order to take full advantage of the AI, as opposed to ceding our agency to it,” he said. “So while the last 15 years were largely about technical advances, and there’s a lot of technical advances we’re going to see today and invest in, I think it’s even more urgent that we work on the human advances, and make sure that technology is actually bringing communities back together, having people know how to interact with each other and with the machine, so that we get the best answer.”

And then he went back to that thesis on inductive versus deductive reasoning.

“It takes a humility of being able to understand that we’re no longer about getting the answer, we’re about getting to the next question,” Kenny said.

Humans in the Loop

For sure, there’s a need to celebrate the human in the loop, and the inherent value of humanity. We can’t give everything away to machines. All of the above tries to trace some through lines between what we can give away and what we should keep. Maybe it’s a little like the Marie Kondo method: if it sparks joy, we reserve it for human capability, and if we need help, we ask a machine. But this is going to be one of the balancing acts we’ll have to perform in 2025 and beyond, as we reckon with forces that are, in human terms, pretty darn smart.


