From AI Winters to Generative AI: Can This Boom Last?

Posted by Paulo Carvão, Contributor


I propose to consider the question, “Can machines think?”

Alan Turing, Computing Machinery and Intelligence, 1950

More than 70 years after British mathematician and computer science pioneer Alan Turing posed the question of whether machines can think, the world is investing billions to answer it. Artificial intelligence dominates headlines, venture capital portfolios and boardroom discussions. The possibility of another AI winter may sound far-fetched. However, history shows that AI’s trajectory has never been linear. It has moved in cycles of exuberance and disillusionment, with periods of progress followed by long freezes.

Understanding AI Winters

An AI winter is a period of sharply reduced funding, interest, and excitement for artificial intelligence, marked by shrinking investment, slower research progress, and fading commercial appetite. The term was first used in 1984 during a debate at the annual meeting of the American Association for Artificial Intelligence. At that event, researchers Roger Schank and Marvin Minsky, veterans of the 1970s freeze, warned that the wave of enthusiasm then sweeping business and research circles was unsustainable. They predicted a chain reaction beginning with pessimism among scientists, followed by skepticism in the press, sharp cuts in investment, and ultimately the collapse of research efforts. Their warning proved correct: within a few years, the billion-dollar AI industry of the mid-1980s began to unravel.

The First AI Winter: Mid-1970s To 1980

The first AI winter lasted from about 1974 to 1980. One of the earliest warning signs came from the field of machine translation, which had captured attention during the Cold War. Back then, U.S. agencies, including the CIA, invested heavily, hoping that computers could instantly translate Russian documents. By the mid-1960s, however, progress lagged. The Automatic Language Processing Advisory Committee (ALPAC) concluded that machine translation was slower, less accurate, and more expensive than human translation. Its report, issued in 1966, effectively ended U.S. backing for the field and derailed many careers.

In the United Kingdom, Sir James Lighthill, a leading British applied mathematician, published a report in 1973 that was sharply critical of the field. Commissioned by the British Science Research Council, the report concluded that AI had failed to meet its “grandiose objectives.” He argued that most of the work could be done more effectively in other scientific disciplines and highlighted the problem of “combinatorial explosion.” This meant that algorithms that seemed efficient on small, controlled problems quickly became unmanageable when faced with the complexity of the real world. As the number of possibilities grew, the time and resources needed to compute answers ballooned and progress ground to a halt. In the wake of this report, the government dismantled most U.K. AI research programs, leaving only a few universities active until new funding appeared a decade later.
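The arithmetic behind this is simple and brutal. Here is a minimal Python sketch, purely illustrative and not drawn from the Lighthill report itself, showing how fast a brute-force search grows with problem size:

```python
# Illustration of combinatorial explosion: the number of paths a
# brute-force search must consider grows exponentially with depth.
# (Hypothetical toy numbers, not from the Lighthill report.)

def search_space_size(branching_factor: int, depth: int) -> int:
    """Number of leaf states in a full search tree."""
    return branching_factor ** depth

# A toy planning problem with just 10 choices per step:
for depth in (5, 10, 20):
    print(f"depth {depth:>2}: {search_space_size(10, depth):,} states")

# depth  5: 100,000 states
# depth 10: 10,000,000,000 states
# depth 20: 100,000,000,000,000,000,000 states
# Doubling the depth squares the work, so running time explodes
# long before real-world problem sizes are reached.
```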

In the United States, funding pressures also mounted. During the 1960s, the Defense Advanced Research Projects Agency (DARPA) had poured millions into AI with minimal oversight. That changed with the Mansfield Amendments of 1969 and 1973, which restricted defense research dollars to projects with direct military relevance. The shift cut back long-term, open-ended university research and redirected money toward short-term, applied work. By the early 1970s, DARPA began demanding concrete results and judging AI proposals against stringent goals. Many projects fell short, and by 1974, the agency had sharply reduced support. What had once been generous, flexible funding gave way to narrowly targeted investments, signaling the end of an era of easy money for AI.

The Second AI Winter: Late 1980s To Mid-1990s

The second AI winter started during the late 1980s and lasted into the mid-1990s. It began with the implosion of the market for Lisp machines, specialized computers built to run Lisp, the programming language favored by AI researchers. By 1987, general-purpose workstations matched or exceeded the performance of these specialized systems at a fraction of their price. With no reason to buy costly hardware, that entire market disappeared almost overnight, forcing many of its manufacturers out of business.

At the same time, the commercial promise of expert systems began to fade. These rule-based programs, designed to replicate the decision-making of specialists, had enjoyed early success. But as adoption spread, limitations became clear. Expert systems were brittle, costly to maintain, and unable to adapt when conditions changed. Updating rules often required armies of programmers, and systems could make basic mistakes. By the early 1990s, interest decreased, maintenance costs rose, and deployments became less frequent.
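That brittleness is easy to see in miniature. The following hypothetical toy, which stands in for no particular commercial product, encodes a few hand-written rules in the style of a 1980s expert system; any case the rule authors did not anticipate simply falls through:

```python
# A toy rule-based "expert system" for loan approval, in the style
# of 1980s expert systems. Hypothetical rules, for illustration only.

RULES = [
    # (condition, conclusion) pairs, checked in order.
    (lambda f: f["income"] > 80_000 and f["debt"] < 10_000, "approve"),
    (lambda f: f["income"] > 50_000 and f["years_employed"] >= 5, "approve"),
    (lambda f: f["debt"] > f["income"], "deny"),
]

def decide(facts: dict) -> str:
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    # Brittleness in action: anything the rule authors did not
    # anticipate falls through with no sensible answer.
    return "no rule applies"

print(decide({"income": 90_000, "debt": 5_000, "years_employed": 2}))    # approve
print(decide({"income": 40_000, "debt": 30_000, "years_employed": 10}))  # no rule applies
```

Every unforeseen case demands another hand-written rule, which is why maintenance costs tended to grow faster than the value these systems delivered.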

The slowdown was global. Japan’s ambitious Fifth Generation project, launched in 1981 to build machines that could converse, translate, and reason like humans, fell short of expectations. In the United States, DARPA’s Strategic Computing Initiative, which once funded more than 90 projects, also scaled back after leadership dismissed AI as “clever programming” rather than the next technological wave.

While the field never went entirely dormant, the collapse of dedicated hardware, the brittleness of expert systems, and the failure of national mega-projects combined to bring about the second AI winter.

Reigniting: The Late 1990s And Beyond

The AI landscape had changed by the late 1990s and early 2000s, thanks to a convergence of rising computing power, large digital datasets, and innovation in data-driven learning methods. Instead of relying on handcrafted rules, AI began to learn from examples. This statistical approach set the foundation for modern machine learning.
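The contrast with the rule-based era fits in a few lines. This sketch uses a plain least-squares fit, a generic illustration rather than any specific system of the period, to learn its parameters from example data instead of having them written in by hand:

```python
import numpy as np

# Toy data: inputs x and noisy outputs y that roughly follow y = 2x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)

# Ordinary least squares: choose the slope and intercept that minimize
# squared error on the examples, instead of encoding them as rules.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"learned: y ≈ {slope:.2f}·x + {intercept:.2f}")  # close to 2x + 1
```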

A breakthrough came in 2012 when AlexNet, a deep neural network loosely inspired by the human visual cortex, outperformed every rival in the ImageNet image-recognition competition by using large amounts of training data and powerful graphics processors. In 2017, researchers introduced the Transformer, a model design built around attention: it essentially teaches AI to decide which words or pieces of information matter most in context. This approach enabled the efficient handling of immense amounts of text, transforming language-based applications and laying the foundation for large language models.
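The core operation of the Transformer is compact enough to sketch. Below is a minimal scaled dot-product attention in Python with NumPy; the shapes and toy inputs are illustrative, not taken from any production model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, the Transformer's core operation.

    Each query scores every key; a softmax turns the scores into
    weights saying which pieces of input matter most; the output is
    the weighted average of the values.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V, weights

# Toy example: 3 "words", each represented by a 4-dimensional vector.
rng = np.random.default_rng(42)
X = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(X, X, X)  # self-attention
print(weights.round(2))  # each row sums to 1
```

Each row of the weight matrix records how much one word "attends to" every other word, which is the mechanism behind deciding what matters most in context.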

Since the early 2010s, interest, funding, and adoption have surged, marking a resurgence that is different from past freezes. This revival, often called the AI boom, continues to broaden its reach and influence.

Why We’re Unlikely To Face Another AI Winter

Earlier AI winters were triggered by a common pattern: heavy reliance on government funding, overhyped promises, and brittle technologies that broke down under real-world demands. Today’s environment looks different.

AI no longer relies on a handful of government agencies. Public funding still matters, but a well-developed venture capital ecosystem now drives much of the investment, spreading the risk across startups and private labs. This diversification makes a sudden collapse less likely.

The economics of computing have changed. In the past, hardware was too expensive or too limited. Now, costs continue to fall while cloud platforms, specialized chips, and massive datasets are widely accessible. AI technology is more robust, with modern architectures adapting across domains and advances in deep learning and natural language processing delivering real value.

Governance is finally getting attention. Enabling policies paired with infrastructure investments are being enacted around the world, notably in the U.S. with its recently announced AI Action Plan. Standards and accountability measures are under discussion to create guardrails for growth.

Together, these factors reduce the odds of another AI winter. The field is more resilient, diverse, and embedded in the economy than in past cycles.

Avoiding The Next AI Winter

History shows that momentum can turn quickly if promises exceed results. Overstating the nearness of human-level intelligence or ignoring ethical, safety, and energy concerns erodes trust and triggers backlash.

The risks today are higher because AI is embedded in critical infrastructure and national strategy. A loss of confidence could invite stricter regulation, investor retreat, and public skepticism, with consequences far greater than in past decades.

Avoiding another AI winter requires pairing innovation with realistic expectations, steady investment in infrastructure, and a willingness to be transparent about both progress and limits. Risks remain as investors chase bubbles, computing and energy costs climb, and policy struggles to keep pace with market hype. Yet there are reasons for optimism. Researchers now build systems that handle text, images, and sound together, while engineers design more efficient algorithms.

Looking forward, Turing’s question, “Can machines think?”, remains unresolved. What history makes clear is that the real challenge is matching ambition with responsibility so that the current AI spring endures and does not give way to another AI winter.


