People Will Lose Their Minds When AI Such As Artificial General Intelligence Suffers Blackouts

Posted by Lance Eliot, Contributor


In today’s column, I examine the concern that once we advance AI to become artificial general intelligence (AGI), there will be an extremely heavy dependency on AGI, and the moment that AGI glitches or goes down, people will essentially lose their minds.

This is somewhat exemplified by the downtime incident of the globally popular ChatGPT by OpenAI (a major outage occurred on June 10, 2025, and lasted roughly eight hours). With an estimated 400 million weekly active users relying on ChatGPT at that time, news outlets reported that a large swath of people were taken aback to find that they suddenly didn’t have access to the prevalent generative AI app.

In comparison, pinnacle AI such as AGI is likely to be intricately woven into everyone’s lives and a dependency for nearly the entire world population of 8 billion people. The impact of downtime or a blackout could be enormous and severely harmful in many crucial ways.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will reach AGI at all, or whether it might take decades or perhaps centuries to achieve. The AGI attainment dates that are floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale when it comes to where we are currently with conventional AI.

AGI Will Be Ubiquitous

One aspect of AGI that most would acknowledge as likely is that AGI is going to be widely utilized throughout the globe. People in all countries and of all languages will undoubtedly make use of AGI. Young and old will use AGI. This makes abundant sense since AGI will be on par with human intellect and presumably available 24/7, anywhere and at any time.

Admittedly, there is a chance that whoever lands on AGI first might hoard it. They could charge sky-high prices for access. Only those who are rich enough to afford AGI would be able to lean into its capabilities.

The worry is that the planet will be divided into the AGI haves and have-nots.

For the sake of this discussion, let’s assume that somehow AGI is made readily available to all at a low cost or perhaps even freely accessible. I’ve discussed that there is bound to be an effort to ensure that AGI is a worldwide free good so that it is equally available; see my discussion at the link here. Maybe that will happen, maybe not. Time will tell.

Humans Become Highly Dependent

Having AGI at your fingertips is an alluring proposition.

There you are at work, dealing with a tough problem and unsure of how to proceed. What can you do? Well, you could ask AGI to help you out. The odds are that your boss would encourage you to leverage AGI. No sense wasting your time flailing around on a knotty problem. Just log into AGI and see what it has to say.

Indeed, if you don’t use AGI at work, the chances are that you might get in trouble. Your employer might believe that having AGI double-check your work is a wise step. If you skip consulting AGI, there is a heightened possibility that flaws in your work will slip through uncaught. Having AGI take a look at your work will reassure both you and your employer that you’ve done satisfactory work.

Using AGI for aiding your life outside of work is highly probable, too.

Imagine that you are trying to decide whether to sell your home and move up to a bigger house. This is one of those really tough decisions in life. You only make that decision a few times during your entire existence. How might you bolster your confidence in the decision to sell? By using AGI. AGI can help you understand the upsides and downsides involved. It could likely even handle much of the paperwork that will be required.

People’s dependencies on AGI are going to run a lot deeper than that. Rather than confiding in close friends about personal secrets, some will opt to do so with AGI. They are more comfortable confiding in AGI than in another human. I’ve extensively covered the role of contemporary AI in performing mental health therapy; see the link here. Chances are that a high percentage of the world’s population will do likewise with AGI.

When AGI Goes Down

A common myth is that AGI will be perfect in all regards. Not only will AGI seemingly provide perfect answers, but it will also somehow magically be up and running flawlessly at all times. I have debunked these false beliefs at the link here.

In the real world, there will be times when AGI goes dark.

This could be a local phenomenon, such as the servers running AGI in a particular region going down. Maybe bad weather disrupts electrical power. Perhaps a tornado rips apart a major data center housing AGI computers. All manner of reasons could cause an AGI outage.

An entire worldwide outage is also conceivable. Suppose that AGI contains an internal glitch. Nobody knew it was there. AGI wasn’t able to computationally detect the glitch. One way or another, a coding bug silently sat inside AGI. Suddenly, the bug is encountered, and AGI is taken out of action across the board.

Given the likelihood that AGI will be integral to all of our lives, those types of outages will probably be quite rare. Those who maintain AGI will realize that extraordinary measures, such as fail-safe equipment and operations, will be greatly needed. Redundancy will be a big aspect of AGI. Keeping AGI in working condition will be imperative.
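
To make the redundancy point a bit more concrete, here is a minimal sketch, in Python, of the kind of client-side failover logic an AGI-dependent application might use. Everything here is hypothetical for illustration: the endpoint URLs, the query_agi function, the "answer" response field, and the retry parameters are assumptions, not a description of any real AGI service or API.

```python
import time
import requests

# Hypothetical AGI endpoints: a primary region plus redundant backups.
AGI_ENDPOINTS = [
    "https://agi-primary.example.com/v1/query",
    "https://agi-backup-east.example.com/v1/query",
    "https://agi-backup-west.example.com/v1/query",
]

def query_agi(prompt: str, retries_per_endpoint: int = 2,
              backoff_seconds: float = 1.0) -> str:
    """Try each redundant endpoint in turn, with retries, before giving up."""
    last_error = None
    for endpoint in AGI_ENDPOINTS:
        for attempt in range(retries_per_endpoint):
            try:
                response = requests.post(endpoint, json={"prompt": prompt}, timeout=10)
                response.raise_for_status()
                return response.json()["answer"]  # hypothetical response field
            except (requests.RequestException, KeyError, ValueError) as error:
                last_error = error
                # Exponential backoff before retrying or moving to the next endpoint.
                time.sleep(backoff_seconds * (2 ** attempt))
    # Every endpoint is down: the application must degrade gracefully
    # rather than leave the user stranded.
    raise RuntimeError(f"All AGI endpoints unavailable: {last_error}")

if __name__ == "__main__":
    try:
        print(query_agi("Should I sell my house?"))
    except RuntimeError:
        print("AGI is down -- fall back to thinking it through yourself.")
```

Even with layers of redundancy like this, the final failure branch can never be engineered away entirely, which is exactly the scenario at issue here.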

But claiming that AGI will never go down, well, that’s one of those promises that is asking to be broken.

The Big Deal Of Downtime

It will be a big deal anytime that AGI is unavailable.

People who have become reliant on AGI for help at work will potentially come to a halt, worrying that without double-checking with AGI, they will get in trouble or produce flawed work. They will go get a large cup of coffee and wait until AGI comes back online.

Especially worrisome is that AGI will be involved in running important parts of our collective infrastructure. Perhaps we will have AGI aiding the operation of nuclear power plants. When AGI goes down, the human workers will have backup plans for keeping the plant running safely by hand. The thing is, since outages will be a rare occurrence, those human workers might not be adept at doing the work without AGI at the ready.

The crux is that people will have become extraordinarily dependent on AGI, particularly in a cognitive way.

We will rely upon AGI to do our thinking for us. It is a kind of cognitive crutch. This will arise gradually. The odds are that, on a population basis, we won’t realize how dependent we have become. In a sense, people will freak out when they no longer have their AGI cognitive partner at their side.

Losing Our Minds

The twist to all of this is that the human mind might increasingly become weaker and weaker because of the AGI dependency. We effectively opt to outsource our thinking to the likes of AGI. No longer do we need to think for ourselves.

You can always bring up AGI to figure out things with you or on your behalf.

Inch by inch, the proportion of everyday thinking you do yourself shrinks as you lean on AGI. It could be that you initially began with AGI doing 10% and you doing 90% of the heavy lifting when it came to thinking things through. At some point, it became 50-50. Eventually, you allow yourself to enter the zone of AGI at 90%, with you doing only 10% of the thinking in your day-to-day tasks and undertakings.

Some have likened this to worries about the upcoming generation that relies on Google search to look things up. The old habits of remembering things are gradually eroding. You can merely access your smartphone and, voila, there is no need to memorize much of anything at all. Those youths said to be digital natives are possibly undercutting their own mental faculties due to a reliance on the Internet.

Yikes, that’s disconcerting if true.

The bottom-line concern, then, about AGI going down is that people will lose their minds. That’s kind of a clever play on words. They will have lost the ability to think fully on their own; in that way of viewing things, they have already lost their minds. But when they shockingly realize that they need AGI to help them with just about everything, they will freak out and lose their minds in a different way.

Anticipating Major Disruption

Questions that are already being explored about an AGI outage include:

  • Will people globally stop whatever they are doing and shift into a zombie mode, waiting for AGI to come back up?
  • Will the world’s infrastructure come to a halt due to AGI being knocked temporarily out of service?
  • Will AGI “suffer” during downtime, such that when it gets back into gear there is internal damage and the AGI is different than it once was?
  • Will evildoers purposely seek to disrupt AGI, allowing them to freak out the populace and perhaps perform other dastardly deeds?
  • What safeguards can protect AGI, including that AGI itself participates in protecting against AGI outages?

There are notable concerns about people developing cognitive atrophy from a reliance on AGI. The dependencies not only involve the usual thinking processes but likely encompass our psychological makeup too. Emotional stability could be at risk, at scale, during a prolonged AGI outage.

What The Future Holds

Some say that these voiced concerns are a bunch of hogwash.

In this view, people will actually get smarter due to AGI. The use of AGI will rub off on them. We will all become sharper thinkers because of interacting with AGI. The idea that we will be dumbed down is ridiculous. Expect that people will be perfectly fine when AGI isn’t available. They will carry on and calmly welcome whenever AGI happens to resume operations.

What’s your opinion on this hotly debated topic?

Is it doom and gloom, or will we be okay whenever AGI goes dark? Mull this over. If there is even an iota of a chance that the downside will arise, it seems that we should prepare for that possibility. Better to be safe than sorry.

A final thought for now on this weighty matter. Socrates notably made this remark: “To find yourself, think for yourself.” If we do indeed allow AGI to become our thinker, this portends a darkness underlying the human soul. We won’t be able to find our inner selves.

No worries — we can ask AGI how we can keep from falling into that mental trap.


