The Secrecy Debate Whether AI Makers Need To Tell The World If AGI Is Actually Achieved

In today’s column, I explore the ongoing debate about whether AI makers can or should hide their attainment of artificial general intelligence (AGI) from the world if or when they arrive at the revered achievement.
The controversial consideration is that, on the one hand, they might wish to quietly leverage AGI to their benefit and not reveal the source of their budding power, while the rest of us remain unaware and unable to prosper correspondingly. There’s also the qualm that once the world realizes AGI has been achieved, perhaps mass panic will arise, or evildoers will seek to turn AGI toward global extinction. It’s all a quite complicated matter.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will ever reach AGI, or whether AGI might only be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
AGI As A Big Secret
Imagine that an AI maker manages to attain AGI. That’s pretty exciting news. It would be earth-shattering news. Some believe that AGI will enable us to cure cancer and solve many if not all of humankind’s pressing problems. Happy face.
An AI maker would presumably tout to the rooftops that they miraculously have achieved AGI. Nobel Prizes certainly would be awarded. Vast riches would flow to the AI maker and their employees would be acclaimed and undoubtedly become incredibly wealthy. This would be perhaps the greatest accomplishment of humanity and deserves suitable recognition.
But suppose an AI maker decided to keep their AGI under lock and key, secretly profiting from their invention.
Not fair, some exhort. AGI is so consequential that an AI maker would be ethically or morally obligated to inform the world. They can’t just hog it for themselves. Furthermore, they hold in their hands something that could be incredibly dangerous. It is possible that the AGI might find new poisons or ways to destroy humanity. An AI maker should not solely possess that kind of power.
Wait a second, the retort goes, the old-time adage is that to the victor go the spoils. If an AI maker arrives at AGI, it is theirs to decide what to do with it. They don’t need to tell anyone what they’ve accomplished. They can use it for their purposes as they see fit. This includes not using AGI at all, maybe opting to deactivate the AGI under the belief that the world isn’t ready for what AGI portends.
Forcing Their Hands
Some assert that we ought to have laws that specifically would compel AI makers to reveal when AGI is reached. A legal requirement would force an AGI out into the open. The AI maker ought to face harsh criminal charges if they keep AGI a secret. Essentially, AGI is construed as a public good and the public has a right to know that AGI is floating around.
Not only does this apply to achieving AGI, but the notion would also be that AI makers must provide status updates as they get near to attaining AGI. Rather than waiting until AGI has arisen, AI makers would have to announce that they are getting close. The requirement would be that they continue to keep the world informed as AGI inches toward reality.
The stepwise notification gives us all a chance to be reflective and get ready for the grand moment that AGI exists. In contrast, a sudden announcement that AGI is here would seemingly prompt mass panic. Confusion would reign. People might riot or commit other panicked acts. The better approach is to ease everyone into the realization that AGI is near.
AI makers might not be keen on the stepwise proclamations.
First, they might not even know whether they are getting nearer to AGI. There is a possibility that AGI will suddenly materialize, such as a rapid and unanticipated so-called intelligence explosion (see my analysis of this possibility, at the link here).
Second, there are bound to be people gravely worried about AGI and they might try to stop or at least delay the AGI-making efforts of the AI maker. This could include legal pursuits such as civil lawsuits intended to prevent AGI from being reached. Imagine the immense headaches and added costs the AI maker would incur. The odds are that even if the prevention efforts weren’t successful, the actions would distract the AI maker and deplete their attention to achieving AGI.
Third, other AI makers, the competition as it were, might opt to hire away the AI developers and seek to “steal” the AGI from the AI maker. This would be a sensible strategy. A competing AI maker would nearly be compelled to do something radical to keep up with the Joneses. The stock value of all other AI makers would otherwise plummet since they aren’t on the same footing and aren’t near AGI.
Governmental Takeover
A nation that has an AI maker in its midst that is nearing AGI would almost certainly decide not to stand idly by while the AI maker arrives at AGI. The hosting country would naturally want a sizable say in how the AGI is going to be used. Thus, the moment an AI maker teases that they are nearing AGI, governmental authorities would be fiercely tempted to declare a takeover of the AI maker.
That’s an important point. Having one company own AGI seems somewhat untenable. Can the company truly protect the AGI from evildoers? Will the company itself go rogue and opt to use AGI for evil deeds? A nation would feel compelled to exert its authority over the firm.
A related aspect is that AGI would undoubtedly change the balance of national geo-political power, see my discussion on this at the link here and the link here. Nations will inevitably wield AGI to showcase the strength of their nation. They would also potentially become drunk with glee and let the matter go to their head, possibly threatening other nations and relishing being on top of worldwide power dominance.
All of this creates a likely domino effect.
Think of the cascading impacts. A nation takes over an AI maker that either has AGI or is right at the cusp of AGI. Other nations plainly see that this power move is taking place. Some of those nations opt to undertake a first-strike approach. Rather than waiting until the AGI-wielding nation gets its ducks in order, an attempt is made to prevent the AGI from being attained.
Chaos ensues as nations get embroiled in an AGI-focused battle over who has AGI and who does not.
Give AGI To The World
One viewpoint is that AGI would have to be considered a worldwide resource for all to share. No specific company ought to own AGI or control AGI. Neither should one particular nation own AGI or control AGI.
The ownership and control of AGI must be a universal consideration.
How would that work?
Nobody can say for sure. Perhaps the United Nations would be the place to put AGI and have the UN then decide how AGI would be utilized. Not everyone is keen on that idea. Some assert that a new entity would need to be formed, somehow established on behalf of all humankind.
An AI maker that attained AGI might not be pleased with the forced taking of their AGI. Why should they have to give up the immense profits that would come from possessing AGI? Well, the reply comes, maybe some form of payment could be arranged to compensate the AI maker for what they had grandly accomplished.
Activation Of Evildoers
Any kind of announcement that AGI is nearing would stridently spur evildoers into immediate action.
Envision that AGI could be turned toward bad deeds and allow criminals to have in their hands the best criminal mastermind ever conceived. These malefactors would do whatever they could to get a copy of the AGI so they would have their own version of it. This would allow them to circumvent security provisions or human-value AI-alignment intricacies that might have been built into the AGI by the AI maker, see my analysis at the link here.
If somehow the AGI was so well protected that it couldn’t be stolen or copied, another angle would be to corrupt the AI developers into coming on board with the criminal side of things. Entice them or threaten them into compromising whatever prior ethical leanings they might have had.
A sneaky path would be to acquire the AI maker that is on the verge of AGI. Perhaps establish an innocent-looking shell company that comes along and buys up the AI maker. Voila, in one fell swoop, AGI is now in the hands of others.
Keeping A Big Secret
Those various arguments about the pros and cons of keeping AGI secret are difficult to weigh in terms of which route is best. Some might claim that the dangers clearly indicate that AGI must be kept secret. Others see things the exact opposite way, namely that the bottom line showcases that AGI must not be kept secret.
Here’s an interesting twist.
Even if an AI maker wanted to keep their AGI a secret, could they effectively do so?
No way, some would emphasize. There is absolutely no way that an AI maker could be sitting on AGI without word getting out that they are doing so. AI developers would invariably brag about having attained AGI, maybe initially just to close friends and family. Then word would spread. It would spread like wildfire.
Another leakage would be that if the AGI were actively being used for some beneficial purpose, the AI maker would have a challenging time explaining how they suddenly became so brilliant. Everyone would certainly be suspicious and assume that AGI had been achieved. The jig would be up quite quickly.
If the government took over the AI maker, that would be another telltale clue that AGI might be in the works. Why else would the government out-of-the-blue decide to possess the firm? Excuses might be laid out to mask the real reason. Nonetheless, enterprising inquisitors would figure out the truth.
Do you think an AI maker could end up with AGI and realistically keep it a secret?
Seems like a tall order.
Sophocles, the legendary Greek playwright, possibly said it best: “Do nothing secretly; for Time sees and hears all things and discloses all.”