
Is the AI bubble about to burst? Five causes and five scenarios

Jovan Kurbalija
Published on 1 December 2025
This text outlines five causes and five scenarios around the AI bubble and potential burst.

State of AI in November by the author: Geopolitics, Diplomacy, and Governance

Will the AI bubble burst? Is AI now ‘too big to fail’? Will the U.S. government bail out AI giants – and what would that mean for the global economy?

These questions are now everywhere in the media, boardrooms, and policy circles. Corporate AI investment hit around USD 252 billion in 2024 – more than 13× higher than a decade ago – while the global AI market is projected to jump from about USD 189 billion in 2023 to nearly USD 4.8 trillion by 2033.

Figure: Global corporate investment in AI by investment activity, 2013–24 (bar chart).

Source: Stanford HAI

The gap is widening between, on the one hand, the trillion-dollar valuations of AI companies and, on the other hand, the slower-than-expected adoption of AI by businesses and wider society.

This text examines how the AI bubble is inflating to the point of bursting, focusing on five causes of the current situation and five future scenarios that suggest how a burst might be prevented, or dealt with if it comes.


Five causes of the AI bubble

The frenzy of AI investment did not happen in a vacuum. Several forces have fuelled overvaluation and unrealistic expectations.

1st cause: The media hype machine

AI has been framed as the inevitable future of humanity – a story told in equal parts fear and awe. This narrative has created a powerful Fear of Missing Out (FOMO), prompting companies and governments to invest heavily in AI, often without sobering reality checks.

The result is market capitalisations of AI-driven companies reaching multiple trillions of US dollars. For example, Nvidia alone has been valued in the range of USD 4–5 trillion, comparable to the combined annual GDP of many regions. Hype has often run ahead of business logic: investments are driven less by clear use cases and more by the fear of being left behind.

2nd cause: Diminishing returns on computing power and data 

The dominant, simple formula of the past few years has been:

More compute (read: more Nvidia GPUs) + more data = better AI.

This belief has led to huge investment in massive AI factories and hyper-scale data centres. Yet we are experiencing diminishing returns: each additional unit of computing power yields a smaller improvement in AI capability. The exponential gains of early deep learning have been flattening.
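The shape of this flattening can be sketched with a power-law scaling curve, in which loss falls as a power of compute, so each doubling of GPUs buys a smaller absolute gain than the last. The constants below are purely illustrative, not empirical fits from any published scaling study:

```python
# Illustrative power-law scaling: loss ~ a * compute^(-b)
# (a and b are made-up constants, chosen only to show the curve's shape)
a, b = 10.0, 0.05

def loss(compute):
    """Hypothetical model loss as a function of relative compute."""
    return a * compute ** -b

prev = loss(1)
for doublings in range(1, 6):
    cur = loss(2 ** doublings)
    # The absolute gain per doubling shrinks every step
    print(f"{2 ** doublings:>3}x compute: loss {cur:.3f} (gain {prev - cur:.3f})")
    prev = cur
```

Each doubling of compute improves the (hypothetical) loss by less than the previous doubling did, which is the economic problem in miniature: the marginal cost of compute stays flat or rises, while the marginal benefit shrinks.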

The AI paradigm is shifting from a focus on hardware and computing power towards practical AI applications and domain knowledge, which require institutional change more than financial investment.

3rd cause: LLMs’ logical and conceptual limits

Behind the diminishing returns are conceptual and logical limitations of Large Language Models (LLMs), which cannot be resolved simply by scaling data and computing. Despite the dominant narrative of imminent superintelligence, many leading researchers are sceptical that today’s LLMs can simply be ‘grown’ into human-level Artificial General Intelligence (AGI). Meta’s former chief AI scientist, Yann LeCun, put it this way:

On the highway toward human-level AI, a large language model is basically an off-ramp — a distraction, a dead end.

Future neural architectures will certainly improve reasoning, but there is currently no credible path to human-like general intelligence that consists simply of ‘more of the same LLM, but bigger.’

4th cause: Slow AI transformation

AI technologies have advanced faster than society’s ability to absorb them. Organisations need time to:

Emerging evidence is sobering:

The time lag between technological capability and institutional change is a core risk factor for an AI bubble. There is no shortcut: social and organisational transformation progresses on multi-year timescales, regardless of how fast GPUs are shipped.

History reminds us what happens when hype outruns reality. Previous AI winters in the 1970s and late 1980s followed periods of over-promising and under-delivering, leading to sharp funding cuts and industry-wide retrenchment.

5th cause: Massive cost discrepancies

AI is becoming cheaper to train, especially after the release of DeepSeek’s open-source models, whose reported development cost was around USD 5.5 million, roughly 100 times less than that of comparable proprietary models such as GPT-4.5 (around USD 500 million).

This 100-to-1 cost ratio raises brutal questions about the efficiency and necessity of current proprietary AI spending. If open-source models at a few million dollars can match or beat models costing hundreds of millions, what exactly are investors paying for?
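Taking the figures cited above at face value, the ratio is back-of-the-envelope arithmetic (the underlying cost estimates are contested, so this is a sanity check of the article’s own numbers, not independent data):

```python
# Cost figures as cited in the text (both contested estimates)
open_source_cost = 5.5e6   # DeepSeek, reported development cost (USD)
proprietary_cost = 500e6   # GPT-4.5, estimated development cost (USD)

ratio = proprietary_cost / open_source_cost
print(f"Proprietary-to-open-source cost ratio: {ratio:.0f}x")  # ~91x, i.e. roughly 100-to-1
```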

It is in this context that low-cost, open-weight models, especially from China, are reshaping the competitive landscape and challenging the case for permanent mega-spending on closed systems.


Five possible scenarios: what happens next?

AI is unlikely to deliver on its grandest promises – at least not on the timelines and business models currently advertised. More plausibly, it will continue to make marginal but cumulative improvements, while expectations and valuations adjust. Here are five scenarios for resolving the gap between hype and reality. In practice, the future will be some mix of these.

1st Scenario: The rational pivot (the textbook solution)

The classic economics textbook response would be to revisit the false premise that “more computing automatically means better AI” and instead focus on:

In this scenario, AI development pivots toward systems that are:

This shift is already visible in policy. The U.S. government’s recent AI Action Plan explicitly encourages open-source and open-weight AI as a strategic asset, framing ‘leading open models founded on American values’ as a geostrategic priority.

However, a rational pivot comes with headwinds:

A serious move toward open, knowledge-centric AI would immediately raise profound questions about intellectual property law, data sharing, and the ownership of the ‘raw material’ used to train these systems.

2nd Scenario: “Too big to fail” (the 2008 bailout playbook)

A different path is to treat AI not just as a sector but as critical economic infrastructure. The narrative is already forming:

If AI is framed as a pillar of national competitiveness and financial stability, then AI giants become ‘too big to fail’. In other words:

If things go wrong, taxpayers should pick up the bill

In this scenario, large AI companies would receive explicit or implicit backstops – such as cheap credit, regulatory forbearance, or public–private infrastructure deals – justified as necessary to avoid broader economic disruption.

3rd Scenario: Geopolitical justification (China ante portas)

Geopolitics can easily become the master narrative that justifies almost any AI expenditure. Competition with China is already used to argue for:

The U.S. now frames open models and open-weight AI as tools of geopolitical influence, explicitly linking them to American values and global standards. 

At the same time, China is demonstrating that low-cost, open-source LLMs, such as DeepSeek R1, can rival Western frontier models, sparking talk of a ‘Sputnik moment’ for AI.

In this framing, a bailout of AI giants is rebranded as:

An investment in national security and technological sovereignty

Risk is shifted from private investors to the public, justified not by economic considerations but by geopolitical factors.

4th Scenario: AI monopolisation (the Wall Street gambit)

As smaller players run out of funding or fail to monetise, AI capacities could be aggressively consolidated into a handful of tech giants. This would mirror earlier waves of monopolisation in:

Meanwhile, Nvidia already controls roughly 80–90% of the AI data-centre GPU market and over 90% of the training accelerator segment, making it the de facto hardware monopoly underpinning the entire stack.

In this scenario:

The main risk is a new wave of digital monopolisation. Power would shift even more decisively from control over data to control over knowledge and models that sit atop that data.

Open-source AI is the main counterforce. Low-cost, bottom-up development makes complete consolidation difficult, but not impossible: large firms can still dominate by owning the distribution channels, cloud platforms, and hardware.

5th Scenario: AI winter and new digital toys 

The digital society appears to require a permanent ‘frontier technology’ to focus attention and capital. Gartner’s hype-cycle metaphor captures this: technologies surge from a ‘peak of inflated expectations’ to a ‘trough of disillusionment’ before stabilising – often over 5–10 years.

We have seen this before:

AI has already lasted longer at the top of the hype cycle than many of those digital “toys”. In this scenario, we would see:

The global frontier-tech market will remain enormous, but AI will share the spotlight with new innovations and narratives.


Main actors: Strengths and weaknesses

OpenAI

Strengths: ChatGPT is still the most recognisable AI brand globally. OpenAI claims hundreds of millions of weekly active users; external estimates put that figure in the 700–800 million range in 2025, with daily prompt volumes in the billions.

OpenAI also enjoys:

Weaknesses: OpenAI is highly associated with the most aggressive AI hype – including its CEO’s focus on AGI and existential risks. It is structurally dependent on:

OpenAI’s revenue remains overwhelmingly tied to AI services (ChatGPT subscriptions, API usage). OpenAI is unlikely to be profitable before 2030 and still requires an additional USD 207 billion to fund its growth plans, according to HSBC estimates.

In a bubble-burst scenario, OpenAI is a prime candidate to be restructured rather than destroyed:

Google (Alphabet)

Strengths: Alphabet has the most vertically integrated AI lifecycle:

Its market capitalisation is racing toward USD 4 trillion on the back of AI optimism, with Gemini 3 seen as a credible rival to OpenAI’s and Anthropic’s top models (Source: Reuters).

Weaknesses:

Yet among the big players, Google may be best positioned to weather a bubble burst, because AI is layered across an already profitable, diversified portfolio rather than being the core business itself.

Meta

Strengths: Meta has pursued an aggressive open-source strategy with its Llama family of models. Llama has become the default base for thousands of open-source projects, start-ups, and enterprise deployments. Meta also controls:

This mix enables Meta to ship AI features at scale – from AI assistants in messaging apps to generative tools in Instagram – while utilising open weights to shape the ecosystem in its favour.

Weaknesses:

Meta is less dependent on selling AI as a product and more focused on using AI to deepen engagement and ad performance, which may make it more resilient in a correction.

Microsoft

Strengths: Microsoft made the earliest and boldest bet on OpenAI, embedding its models across Windows, Office (via Copilot), GitHub (Copilot for developers), and Azure cloud services. This gives Microsoft:

Together with other giants, Microsoft is part of a club expected to invest hundreds of billions of dollars in data centres and AI infrastructure over the next few years.

Weaknesses:

In a mild bubble burst, Microsoft is more likely to reshuffle its partnerships than to retreat from AI: OpenAI could be integrated more closely, while Microsoft simultaneously accelerates its own in-house models.

Nvidia

Strengths: Nvidia has become the picks-and-shovels provider of the AI gold rush:

Its CUDA software ecosystem and networking stack form a moat that competitors (AMD, Intel, Google TPUs, Amazon chips, Huawei, and other Chinese challengers) are still struggling to cross.

Weaknesses:

In a scenario where the emphasis shifts from brute-force computing power to smarter algorithms and better data, Nvidia would still be central – but its growth and margins could come under serious pressure.


The critical battle: Open vs Proprietary AI

The central tension in AI is no longer purely technical. It is philosophical:

Centralised, closed platforms vs. decentralised, open ecosystems.

On one side:

On the other:

Historically, open systems often win in the long run – think of the internet, HTML, and Linux. They become standards, attract ecosystems, and exert pressure on closed incumbents.

Two developments are especially telling:

As tech giants remain slow and reluctant to fully embrace open-source, a future crisis could give governments leverage:

Any bailout of AI giants could be conditioned on a mandatory shift toward open-weight models, open interfaces, and shared evaluation infrastructure.

Such a deal would not just rescue today’s players; it would amount to a strategic reset, nudging AI back toward the collaborative ethos that powered the early internet and many of the great US innovations.


What is going to happen? A crystal-ball exercise

Putting it all together, a plausible outlook is:

In this view:

The main global competition will increasingly be between proprietary and open-source AI solutions. Ultimately, the decisive actor will be the United States, which faces a fork in the road:

The AI bubble will not be decided only by markets or by technology. It will be decided by how societies choose to balance:

The next few years will show whether AI becomes another over-priced digital toy – or a more measured, open, and sustainable part of our economic and political infrastructure.

