This week, in the conference rooms of the AI Impact Summit in New Delhi, an elephant will be lurking. It hides in the defining mantra of the modern AI era: the more GPU computing power we put in, the better the AI we will get.
This single belief underpins the entire contemporary AI narrative. It justifies an insatiable hunger for energy to fuel that computing power, and it rationalises massive investments in giant hyperscale AI data centres.
But is this mantra actually true? This article challenges that core assumption, arguing that, at best, it is naive and, at worst, dangerous for the modern economy and the future of our society.
Here are the practical, mathematical, economic, and ethical dimensions of the AI elephant in the room that policy dialogue spaces like the AI Impact Summit should address more carefully.
Let’s think back to our daily routines before the AI era. Did we need a full set of the Encyclopedia Britannica open on our desks to do our jobs? Of course not.
Cognitive research suggests that in our daily work we consistently use only about 2,000 unique words and a toolkit of roughly 20–30 cognitive tools (deductive and inductive reasoning, analogies, and the like) to frame our reality.
So why do we believe we need Large Language Models (LLMs) with trillions of parameters to help with these same tasks? Encyclopedias were not essential for our daily routines decades ago, and a massive, unwieldy frontier model is not essential now. The reality is that small AI models, integrated with tools for tasks such as drafting emails, summarising meeting notes, or classifying incoming requests, can do the job perfectly well, as the sketch below illustrates. We need to ask ourselves whether we need frontier models if small ones can be equally useful for everyday work.
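Here is a minimal sketch of that point, assuming the Hugging Face transformers library and using facebook/bart-large-mnli (roughly 400 million parameters, orders of magnitude smaller than a trillion-parameter frontier model) as one illustrative small model; the request text and category labels are invented for the example:

```python
# Minimal sketch: triaging incoming requests with a small open model.
# Assumes the Hugging Face `transformers` library is installed; the model
# and labels are illustrative choices, not a product recommendation.
from transformers import pipeline

# ~400M parameters: tiny next to trillion-parameter frontier models.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

request = "Hi, I was charged twice for my subscription last month."
labels = ["billing", "technical support", "sales", "general enquiry"]

result = classifier(request, candidate_labels=labels)
print(result["labels"][0])  # highest-scoring category, e.g. "billing"
```

Everyday office triage of this kind simply does not call for frontier-scale compute.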
Beyond practicality, there is a mathematical ceiling. Large language models are hitting a productivity plateau: the sheer brute force of additional computing power cannot overcome fundamental mathematical limits on how much better AI answers get. The more computing power we add, the smaller the returns.
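One way to make the plateau concrete (the functional form and exponent below are illustrative, drawn from the published scaling-law literature such as Kaplan et al., 2020, and Hoffmann et al., 2022, not from this article):

```latex
% Illustrative compute scaling law: loss falls as a power of compute C,
% with a very small exponent and an irreducible floor L_inf.
\[
  L(C) \;\approx\; L_{\infty} + a\,C^{-\alpha}, \qquad \alpha \approx 0.05
\]
% A tenfold increase in compute shrinks the reducible loss by only
% 10^{-0.05} \approx 0.89, i.e. about 11%, while costs grow tenfold.
```

Under fits like this, every additional order of magnitude of compute buys a smaller absolute improvement, which is exactly what a diminishing-returns curve looks like.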
Despite the dominant Silicon Valley narrative of imminent superintelligence, many leading researchers are deeply sceptical that today’s LLMs can simply be ‘grown’ into human-level Artificial General Intelligence (AGI). Meta’s former chief AI scientist, Yann LeCun, put it in stark terms:
“On the highway toward human-level AI, a large language model is basically an off-ramp—a distraction, a dead end.”
Even if more computing power can amplify what an approach is already good at, it does not automatically invent the capabilities that are missing.
Common sense, it seems, is not so common: why is society doubling down on a path that may lead nowhere?
Computing isn’t just a technical choice; it’s an economic strategy, and one with enormous consequences. The financial viability of AI tech companies is far from certain. Even the most optimistic forecasts, such as those for OpenAI, suggest the company may only become financially sustainable around 2030. Overall, projections for the economic viability of frontier AI are deeply sceptical.
The ground is shifting beneath the feet of these giants. AI is becoming dramatically cheaper to train, especially after the release of open-weight models like DeepSeek, whose headline training run reportedly cost around USD 5.5 million. That is roughly 100 times less than comparable proprietary models, such as GPT-4.5 (estimated at USD 500 million).
This 100-to-1 cost ratio raises brutal questions about the efficiency and necessity of current proprietary AI spending. If open-source models costing a few million dollars can match or beat models costing hundreds of millions, what exactly are investors paying for?
The rise of low-cost, open-weight models is reshaping the competitive landscape and fundamentally challenging the case for permanent mega-spending on closed models, which are less efficient than we’re led to believe.
Even if ever-larger compute could eventually produce something like superintelligence, there’s a deeper question that rarely gets centre stage: Do we actually need it?
The pursuit of superintelligence is often framed as destiny—an inevitable scientific trajectory. But “possible” is not the same as “desirable,” and “impressive” is not the same as “beneficial.”
Mary Shelley’s Frankenstein remains a lasting cultural warning about scientific hubris: not because knowledge is bad, but because creation without responsibility can be catastrophic. So, before asking whether superintelligence is achievable, the ethical question is why we want it and what impact it would have on society at large.
If the AI Impact Summit in New Delhi helps participants identify and openly debate the elephant in the AI room, that alone would be a major success.
In this way, the Summit would strengthen innovation by shifting from hype-driven escalation to needs-driven design.
With a bit of common sense, many answers are hiding in plain sight. We don’t need an Encyclopedia Britannica on every desk—digital or otherwise. We need AI tools that help people work, govern, learn, and cooperate better, without burning through the planet’s resources or society’s trust.
A less inflated AI debate won’t slow progress. It may be the only way to ensure that progress remains sustainable, genuinely impactful, and anchored in humanity’s priorities.