Is The Obsession With Attaining AGI And AI Superintelligence Actually Derailing Progress In AI?

Posted by Lance Eliot, Contributor


In today’s column, I explore a controversial claim that the seemingly obsessive pursuit of artificial general intelligence (AGI) and artificial superintelligence (ASI) is leading us astray and derailing true progress in AI. The argument asserts that AI makers and AI developers have their priorities wrong and need to reorient their aims.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to reach artificial general intelligence (AGI) or perhaps even the more far-reaching possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will reach AGI at all, or whether AGI might only be achievable decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale when it comes to where we currently stand with conventional AI.

Obsession With AGI As A North Star

Not everyone thinks that the pursuit of AGI is all that productive.

Indeed, the AGI obsession is said to be counterproductive. In a research paper entitled “Stop Treating ‘AGI’ As The North-Star Goal Of AI Research” by Borhane Blili-Hamelin, Christopher Graziul, Leif Hancox-Li, Hananel Hazan, El-Mahdi El-Mhamdi, Avijit Ghosh, Katherine Heller, Jacob Metcalf, Fabricio Murai, Eryk Salvaggio, Andrew Smart, Todd Snider, Mariame Tighanimine, Talia Ringer, Margaret Mitchell, and Shiri Dori-Hacohen, arXiv, February 7, 2025, the authors made these salient points (excerpts):

  • “In this position paper, we argue that focusing on the highly contested topic of ‘artificial general intelligence’ (‘AGI’) undermines our ability to choose effective goals.”
  • “We identify six key traps—obstacles to productive goal setting—that are aggravated by AGI discourse: Illusion of Consensus, Supercharging Bad Science, Presuming Value-Neutrality, Goal Lottery, Generality Debt, and Normalized Exclusion.”
  • “To avoid these traps, we argue that the AI research community needs to (1) prioritize specificity in engineering and societal goals, (2) center pluralism about multiple worthwhile approaches to multiple valuable goals, and (3) foster innovation through greater inclusion of disciplines and communities.”

Those six key traps are worthy of keen attention.

Unpacking The Key Traps

I’ll briefly cover the six key traps in my own words; you are encouraged to read the above-cited research article to see how the paper explains them.

First, the illusion of consensus says that AI makers are deluding themselves into thinking that they are all pursuing the same thing, namely AGI. But the reality is that there isn’t an across-the-board agreed definition of AGI. Some AI makers are quietly moving the goalposts, as it were, by defining AGI to suit their own preferences and watering down what AGI was intended to consist of (see my analysis at the link here).

Second, the pell-mell rush to be the winner-winner chicken dinner of attaining AGI before anyone else is promulgating bad science. AI makers are throwing spaghetti at the wall to see what sticks. Carefully designed, hardcore empirical research is eschewed in favor of making a splash and proclaiming some wondrous new AI advancement (unsupported by any semblance of proper rigor).

Third, the goal of attaining AGI appears to be a purely scientific endeavor, which masks the fact that all sorts of other societal and political string-pulling are at hand. Nations, for example, would wield immense geopolitical and economic power by achieving AGI within their midst; see my discussion at the link here. AI makers hide the underlying reasons for pursuing AGI and tout the technological merits to blind us to the full truth.

Fourth, under the guise of pursuing AGI, various allegedly aligned subgoals can be concocted and then used with heroic energy, even if the subgoals have little or nothing to do with achieving AGI. If an AI maker suddenly announces that more hardware is needed to reach AGI, voila, they can amass a mountain’s worth of investor cash. They do not need to scrupulously showcase why the hardware is on the direct pathway to AGI, just the mere mention will open wallets.

Fifth, trickery is used to put shiny objects in front of the public to distract from the reality that AI progress is not moving as smoothly as suggested. For example, declaring that AI can perform superhuman chess-playing would lead the public to believe that AGI is getting very close to fruition. This, though, neglects the importance of generalization: AGI-caliber AI is supposed to work across all manner of domains rather than specialize in a specific domain such as chess.

Sixth, using the banner of attaining AGI allows AI makers to skip past those whom they believe will get in the way of their efforts. This can push serious concerns about AI safety and security to the wayside. Likewise, qualms about the existential risk of AGI, such as AGI opting to harm humans, get downplayed in comparison to the happy-face upsides of such AI.

Hearing From The Other Side

Like most important aspects of life, there are two sides to this contentious coin. Those who believe ardently in the pursuit of AGI would contend that despite those key traps, which they acknowledge are worthy of attention and resolution, there are bona fide reasons to continue the AGI pursuit per se.

First, there is value in providing an aspirational goal that can be rallied around. Even though AGI is not well-defined, the overall gist is that AI ought to work on par with human intellect. This is a 30,000-foot-level definition that can readily inspire AI developers and AI makers to make advancements in AI. By labeling this as AGI, the field gains succinct, vision-clarifying messaging that motivates AI builders and AI researchers daily.

Second, many AI scientists and researchers are genuinely and meticulously pursuing sound AI advancements. In other words, lumping together all such pursuits as bad apples is unfair and exasperating. Sure, there are bad apples here and there, no doubt, but give credit where credit is due.

Third, imagine how scattered and confused the AI community would be without the flag of AGI as its purposeful aim. Perhaps the preponderance of attention would lean into a narrow realm, such as focusing entirely on AI for solving physics problems or figuring out genomes. Meanwhile, we would hardly be making progress toward the broad elements of overarching human intellect. The alternative could also be thousands of disparate pursuits in a scattershot array of directions, rather than an overarching, focused aim at artificial general intelligence.

Practicality Will Prevail

Trying to dislodge AGI as a kind of north star for the AI community is a nearly impossible ask.

The allure of AGI as a figurative destination is something that captures the spirit and the mind of not only the AI field but likewise the public at large. The momentum is so strong that it is difficult to imagine anything powerful enough to bring it to a halt. The only probable means of gumming up the works is if the AI pursuits fizzle out and people become disillusioned that AGI was merely a pipe dream.

At that juncture, yes, AGI as a north star would undoubtedly get junked.

What would take its place?

Aha, given the ever-hopeful spirit of humankind, the odds are that a new name would be given to what AGI used to be, and this newly anointed gem would become the latest north star. It wouldn’t be especially different, just an old cause with a less familiar moniker. Perhaps our best hope right now might be to grit our teeth about AGI as a north star, push the AI community to realize the dangers and pitfalls therein, and do our darnedest to overcome the downsides.

As Jimmy Dean memorably stated: “I can’t change the direction of the wind, but I can adjust my sails to always reach my destination.”



Forbes
