What Does Super-Powerful AI Look Like?

Posted by John Werner, Contributor


Imagine you’re going to visit the “big AI boss,” the one that controls all kinds of systems: the post office, utilities, government systems, business operations, etc. You walk into the command room and there it is, one big beautiful brain, as it were, pulsating away in vibrant color while a set of wires connects its body to the network.

Okay, I’ll pull the curtain on this scintillating vision, because according to quite a few experts, this is not what you’re going to see with general artificial intelligence.

Instead, as we near the singularity, some of our best minds argue that you’re likely to see something that looks more like what you would have if you visualized the Internet: something’s happening over here, and something else is happening over there, and there are efficient conduits connecting everything together in real time, and it’s all working smoothly, like a Swiss watch.

Minsky’s Mind

If you read this blog with any regularity, you’re probably tired of me referencing one of our great MIT people, Marvin Minsky, and his book, The Society of Mind, in which he pioneers this very idea. Minsky said, after much contemplation and research, that the human brain, despite being a single biological organ, is not one computer, but a series of many different “machines” hooked up to one another.

You might argue that this is a semantic idea, since we already know, for example, how the cortex works, how the two halves of the brain coordinate, and the role of sub-organs like the amygdala. But that’s not all that Minsky gave us in his treatise: he helped to introduce the idea of “k-lines,” or knowledge lines, the trajectories by which we remember things. Think of the next-hop journeys of packets along the Internet – there are similarities there.
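
For the programmers in the audience, here is a toy sketch of that analogy – my own illustration, not Minsky’s formalism, and every name in it is invented: a k-line modeled as the ordered chain of mental “agents” a memory re-activates, next to a next-hop routing table that moves a packet along one hop at a time.

```python
# Toy illustration only (not Minsky's formalism; all names are invented).
# A "k-line" is modeled as the ordered chain of mental agents a memory
# re-activates; a next-hop table forwards a packet the same hop-by-hop way.

K_LINES = {
    "birthday party": ["smell-of-cake", "song", "candles", "friends"],
}

NEXT_HOP = {"A": "B", "B": "C", "C": "D"}  # each router knows only the next step

def recall(memory):
    """Re-activate, in order, the agents attached to a k-line."""
    return K_LINES.get(memory, [])

def route(src, dst):
    """Follow next-hop entries one step at a time until the destination."""
    path, node = [src], src
    while node != dst:
        node = NEXT_HOP[node]
        path.append(node)
    return path

print(recall("birthday party"))  # ['smell-of-cake', 'song', 'candles', 'friends']
print(route("A", "D"))           # ['A', 'B', 'C', 'D']
```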

Minsky also referenced something some call the “immanence of meaning” – the idea that meaning does not come to us inherently in the data, but arises from our processing itself.

This is, in my view, very zen – in fact, if you look up the meaning of the word “immanence,” you get the idea that it’s sort of the opposite of transcendence: instead of rising above something, you go deep inside it.

That makes a lot of sense, and, I would argue, it gives us another useful lens through which to look at AI. When people argue about whether AI is “real” or “sentient,” I would say that in some ways, it’s the ripples from the rock that are more real than the rock itself (to use a physical metaphor) – that the “reality” of AI is in how we process its products.

To be fair, as agents evolve, they’re going to get pretty real in other ways, too. They’ll be doing things and manipulating systems 24/7, getting into whatever they can get their digital hands on.

That’s why I wanted to cover a presentation given by Abhishek Singh at IIA in April, in which he talks about our likely reaction to new digital “species” of intelligence in a pretty compelling way.

The Three-Fold Cord

Singh talks about a “trilemma” in intelligence: the intersection of three ideas or, in some cases, goals. One is scalability. Another is cooperation, or coordination. The third is heterogeneity: how different are the tasks each agent completes? How fungible is one agent with another?

Singh gives the example of a swarm of birds and a pack of wolves. The birds are highly homogeneous, operating somewhat in unison, in a very large and scalable group.

Wolves, he says, don’t scale like that.

“Individuals are taking different roles, and different responsibilities,” he says. “But at the same time, they are not (operating on) a large scale. … And what distinguishes our species, Homo sapiens, in this case, is the capability to do both high heterogeneity as well as scalability.”

He mentions the CAP theorem, used in distributed systems theory (think databases), which says that of three guarantees – consistency, availability and partition tolerance – a distributed database can only deliver two at once.
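
To make that “pick two” tension concrete, here is a toy sketch of my own (not from Singh’s talk): two replicas of a key-value store, a network partition between them, and a read policy that has to choose between consistency and availability once their data diverges.

```python
# Toy sketch of the CAP trade-off (my illustration, not from Singh's talk).
# Two replicas of a key-value store, a partition between them, and a read
# policy that must pick consistency or availability once the replicas diverge.

class Replica:
    def __init__(self):
        self.data = {}

    def write(self, key, value):
        self.data[key] = value

def read(replicas, key, partitioned, prefer_consistency):
    values = {r.data.get(key) for r in replicas}
    if partitioned and len(values) > 1:  # replicas disagree and cannot sync
        if prefer_consistency:
            # CP choice: refuse to answer rather than risk a wrong value.
            raise RuntimeError("unavailable during partition")
        # AP choice: stay available, accept that the answer may be stale.
        return replicas[0].data.get(key)
    return values.pop()

a, b = Replica(), Replica()
a.write("x", 0); b.write("x", 0)  # value replicated before the partition
a.write("x", 1)                   # during the partition, only replica A sees this
print(read([a, b], "x", partitioned=True, prefer_consistency=False))  # 1, possibly stale elsewhere
# read([a, b], "x", partitioned=True, prefer_consistency=True)        # raises: unavailable
```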

“You get a trilemma between these three,” he says, “and it turns out we have a similar trilemma. It does not map exactly to distributed systems, but (there’s a) similar notion in this ecosystem, of different intelligent species trying to work with each other.”

Enter CHAOS

Singh then cites chaos theory and its contribution to this study.

“What I’m going to introduce to you is chaos theory 2.0, which is in the context of these coordinating agents,” he explains. “What we get to see in a centralized system is, as soon as you try to go for two of (the criteria), you are losing out on the other, and one way to get over this trilemma, not entirely, but at least (to) bridge the boundaries, is by operating in a decentralized manner.”

Decentralization itself, he suggests, is not enough.

“You need to come up with algorithms (and) protocols that actually allow you to achieve these three goals in a decentralized fashion,” he says. “And the way we are approaching this problem of bridging this trilemma is through two ideas: local protocols (and) emergent behavior.”
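
To give a flavor of what “local protocols plus emergent behavior” can look like in code, here is a toy gossip-averaging sketch of my own (not Singh’s algorithm): each agent only ever exchanges values with one random partner at a time, yet the whole group converges on a global average that no single agent ever computed.

```python
# Toy sketch of "local protocols, emergent behavior" (my illustration, not
# Singh's algorithm). Each agent only ever averages its value with one random
# partner -- a purely local protocol -- yet the whole group converges on the
# global average, which no single agent ever computed.
import random

def gossip_average(values, rounds=500, seed=0):
    rng = random.Random(seed)
    values = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(values)), 2)  # two agents meet at random
        mean = (values[i] + values[j]) / 2        # local exchange, no global view
        values[i] = values[j] = mean
    return values

rng = random.Random(1)
agents = [rng.uniform(0, 100) for _ in range(20)]
print(sum(agents) / len(agents))     # the "true" global average
print(gossip_average(agents)[:3])    # every agent ends up close to that average
```

The same shape shows up in the bird-swarm example above: simple local rules, followed at scale, producing coordination that looks designed from the outside.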

Here’s where Singh illustrates, perhaps more articulately, the idea that I brought up in my own way at the beginning of this post:

“One way to think about how these two mental models fit together is the way we are solving intelligence right now,” he says. “It’s this idea of one big, large brain sitting at one large, big tech company and being capable of doing all the tasks at (the) same time. And the other perspective, which is more coming from the decentralized angle, is these many small brains interacting with each other. None of the single small brains is powerful enough. But then together, using those protocols that I was mentioning before, there’s an emergent phenomenon.”

And then, interestingly, he touches on that same idea I mentioned above, that heterogeneity of tasks might be sort of a semantic idea, in that, within that one big brain, lots of different things are happening adjacent to each other. In other words, because of brain anatomy, the brain cells are not entirely fungible.

“This one big brain approach also has this notion of the trilemma, but in a fractal way,” he notes, “where inside that one large neural network, you have lots of parameters – they’re coordinating with each other, and they’re solving different subtasks, and that’s why you have heterogeneity.”

Watch the part of the video where Singh covers things like financial markets, social mores, and knowledge transfers, and you’ll see more practical application of these ideas to real life. He also brings up the similarities between agent systems and the early Internet, when humans had to work out networking and connection with protocols like HTTP, SSL, etc. Singh mentions the Model Context Protocol, MCP, and sure enough, he drops the acronym NANDA, which represents MIT’s own project to build an Internet protocol for AI agents.

Do we need more CHAOS in AI? Watch the video, and let me know.


