
The war we’re not watching: The fight for the future of human knowledge

Jovan Kurbalija
Published on March 16, 2026
Amid global focus on the Middle East conflict, a more consequential shift for humanity is unfolding quietly: the privatisation of intelligence itself. Sam Altman's vision of AI as a metered utility, like electricity, reveals a future where knowledge is no longer a shared human capacity but a centralised service sold back to us. This represents a fundamental break from millennia of civilisational tradition in which the pursuit and sharing of knowledge defined what it means to be human. The true danger lies not in the technology itself, but in the choice before us: whether to accept this enclosed, monopolised future or to build a decentralised AI ecosystem that strengthens rather than subordinates human communities.

As news about the Middle East conflict dominates the world’s attention, another story has slipped by almost unnoticed, one that may prove even more consequential for humanity’s future than the current Iran war. A few days ago, Sam Altman, the CEO of OpenAI, said:

We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter and use it for whatever they want to use it for.

Altman is not just describing a business model; he is also outlining a new social order, one in which intelligence is centralised, privatised, and sold back to humanity by major AI companies. 

The remarkable part is not that he said it. The remarkable part is how little attention it received. Altman’s statement offers a startlingly candid view of the direction in which many AI platforms are racing. 

This is bigger than just tech

The AI monopolisation of intelligence challenges one of the pillars of civilisation, built over millennia as humans shared knowledge about nature, hunting, and the predicaments they faced.

During the Axial Age, two millennia ago, the world’s major religious and philosophical traditions placed knowledge, wisdom, and spirituality as pillars for understanding who we are, what we know, and how to live an ethical life. In Greek philosophy, reason became the defining feature of the human being. In Christianity, the Gospel of John starts with ‘In the beginning was the word’, marking the centrality of language as the carrier of knowledge. 

In Hinduism, self-inquiry and knowledge are paths towards deeper truth. In Buddhism, wisdom overcomes ignorance, opening the path to liberation. In Islam, the first revelation of the Quran begins with the call: ‘read’, marking the centrality of holy texts and knowledge. In both Judaism and Confucianism, learning and interpretation are inseparable from moral life.

Different traditions. Different vocabularies. The same civilisational instinct: Knowledge defines what it means to be human.

The same week Altman outlined his vision of intelligence as a service, Peter Thiel, another tech mogul, challenged Christianity in a lecture in Rome on the antichrist, technology, and the future. The debate is expanding beyond tech, and it boils down to the question of preserving human agency to cultivate, debate, and share knowledge. Will we address the 'elephant in the room' (the question of human agency) directly, or slide into a dystopian future 'gradually and suddenly'?

AI generated image of Peter Thiel’s lecture ‘The antichrist, technology, and the future’ in Rome, Italy (March 2026)

The real danger

Some will shrug and say, 'So what? Our knowledge is already being commodified by tech companies and the advertising industry.' Recommendation systems tell us what to watch. Search engines rank information. Social media shapes public opinion.

But the envisaged AI systems go further. They shape what we see, what we trust, and how we make choices. They determine whom we date, what we buy, how we vote, how we learn, and how we judge the world.

Social media offers the first glimpse of this emerging AI reality. But, at least until now, human agency has not disappeared. We can still compare, question, and interpret for ourselves. Altman's vision goes further. Much further. It suggests a world in which intelligence itself is outsourced to a handful of platforms. A world in which AI doesn't simply answer our questions, but quietly determines what we ask and think. A world in which society is held in the hands of a few platforms that centralise human knowledge.

AI stops being an innovation and becomes an enclosure. 

There is another path

This future, outlined by Altman, is not inevitable. Communities, universities, companies, and countries can build bottom-up AI rooted in their own languages, values, and knowledge systems. Open-source models have made human-centred AI technically possible and financially affordable.

The real alternative to monopolising our knowledge and metering it back to us is not 'no AI'; the real alternative is AI as an extension of our personal knowledge, shared with communities, countries, and humanity according to our preferences. This would lead to a distributed ecosystem in which AI strengthens human communities rather than subordinating them.

The choice

And so we return to where we began: missiles reshaping geopolitics in the Middle East, algorithms reshaping the mind and future everywhere else. Both are battles for control, one over oil, the other over the very fabric of human cognition.

The war in the Middle East will, eventually, find some resolution. Peace treaties will be signed. Borders will hold or shift. But the battle for human intelligence and knowledge – for who owns the capacity to think, to know, to decide – will determine much more than the redistribution of power. It will define what it means to be human.

The good news is that we still have time to make the right choices. The question is whether we will.
