The article examines how sci-fi narratives have been inverted in contemporary AI discourse, increasingly positioning technology beyond regulation and human governance. It introduces the concept of the ‘science fiction native’ (sci-fi native) to describe how immersion in speculative imaginaries over several generations is influencing legal and governance assumptions about control, responsibility, and social contracts.

‘We have entered the Singularity.’ – Elon Musk, on the morning of 4 January 2026

What a spoiler. For sci-fi fans, this is the end of the future. Reality is disrupting sci-fi in real time, and the news is now replacing sci-fi. It feels like a compilation of many different storylines into one. At its core, it is about the relationship between humanity and the utopian, or dystopian, world we create with technology. In the framework of the Singularity, technology has gone beyond human control and understanding. Musk’s statement reflects a broader narrative pattern in contemporary AI discourse.

Elon Musk on AGI control and AI safety (real-life statement)

Musk’s announcement of the arrival of the Singularity came only a few days after deeply disturbing depictions of violence against women and children made the headlines. Following the logic that AI has gone beyond human control and understanding, Musk asked users for collaboration – ‘Please help us make Grok as perfect as possible’ – while simultaneously warning that creators of illegal images are solely liable. The company solved the problem by limiting the feature to subscription holders.

In parallel, tens of thousands of people have been protesting against the US government after dramatic events led to the fatal shooting of a woman by an immigration officer in the USA. Furthermore, the president of the USA is moving beyond international law by intervening in the affairs of Venezuela. Looking at the world today, it seems that we have lost science fiction to reality.
Sci-fi stories like the HBO series Westworld confront us with a world on the verge of chaos and human extinction. This kind of science fiction takes us deep into the dystopian fallacy of a technocratic solution: a solution aimed at restoring peace and order, solving all of humanity’s problems with the help of a superior intelligence.

Serac’s Monologue from Westworld (science fiction)

Following speeches by leading voices in the AI industry, AI governance discourse increasingly resembles sci-fi scripts. When Sam Altman speaks of coherence, or when Siemens’ CEO Roland Busch speaks about the potential of the industrial metaverse to solve the world’s problems, they are practically bringing a sci-fi script into our present-day lives.

‘The world is complex and complicated, but life does not have to be.’ – Liam Dempsey Sr, Westworld
Incite Anthem from Westworld (science fiction)

Westworld is a sci-fi series that uses a hyper-realistic theme park to explore control, freedom, and prediction. Human guests exercise consequence-free power over android hosts until the narrative shifts from individual transgression to systemic design: behaviour is observed, repeated, and modelled. As the series evolves, both humans and machines are revealed to be governed by a higher system – Rehoboam, a godlike artificial superintelligence designed to impose coherence on a chaotic world. Even its creator ultimately submits to its authority, just as the Singularity is allegedly unfolding in our reality now.

What was once framed as speculative fiction increasingly mirrors the narratives through which real-world challenges and solutions are imagined and framed. These developments put our social contracts under fundamental pressure, raising severe concerns about the establishment of tech feudalism, human enslavement, and even extinction. Following the narrative of UN 2.0 and the Pact for the Future, the world is in such a deep crisis that we need to harness technology to change the trajectory of our future and build a better one for all of us.

UN 2.0: Embracing Innovation to Overcome Global Crises (real-life statement)

We are already seeing what is possible as the first steps to fulfil the Pact are underway through the UN Virtual Worlds and Citiverse initiative. Such initiatives are designed to lay the path towards a better future for all across different levels and dimensions of governance. On a fundamental level, the term ‘governance’ refers to all patterns of rule and therefore concerns the construction of social order, coordination, and practices, regardless of their specific content. In our socio-technological systems, such rules range from regulating human behaviour directly to regulating it through technology, especially via technical standards.
Ultimately, any regulation or governance concerns human behaviour and relations, regardless of the intermediary used. Based on this legal reality, this article raises the question: how is it even possible that we are considering that technology such as AI could be beyond regulation and human governance? Why have we inverted the purpose of science fiction, allowing speculative narratives to challenge the reality of our social contracts and the governability of technologies such as AI?

The appeal of science fiction is that it is an intentional fiction, a form of literature created from imagination, that allows us to expand our awareness of the possible consequences and impact of technology and scientific aspirations on our societies and human life. As technology progresses, reality seems to have caught up with sci-fi. Or are we just imagining it – or, even worse, are we trapped in our own collective imagination?

The way we approach the future is through storytelling. We build scenarios to imagine what lies ahead. Sci-fi stories, and especially their cinematic realisations, can be framed as collective scenarios or foresight exercises. This might be our fundamental problem: we have become ‘science fiction natives’ (sci-fi natives). Especially in Western societies, people have been born into an environment deeply intertwined with sci-fi narratives for several generations. Long before we had the opportunity to interact with algorithms or AI systems, we learned what artificial intelligence means through stories like The Terminator, Blade Runner, Westworld, Star Trek, and many others. Our imagination was formed before the technology arrived.

In fact, the idea of artificial life predates modern sci-fi altogether. Long before AI became a technical reality, artificial creation already existed in myth.
In Greek mythology, Hephaestus crafts autonomous tripods and intelligent golden handmaidens. Even Pandora can be read as an engineered human form, an early imaginary of artificial life. These narratives established a template for how agency, creation, and control are imagined long before they could be engineered. With the rise of mass media, such imaginaries have been translated into science fiction and fantasy, as well as into real-life research and innovation.

The digital transformation and the latest achievements in emerging technologies such as AI have created a paradoxical situation: the fantasy seems to be reality – at least parts of it. Looking at the discussion surrounding AI, both the utopian promises and the dystopian warnings, it has become challenging to distinguish between reality and fiction. Have we become too fluent in the language of the future?

As discussed, several generations have been born into a social environment deeply intertwined with sci-fi stories. The influence of these stories reaches much further back than the influence of actual digital services and products. In 2001, education expert Marc Prensky identified a new generation born into an environment deeply intertwined with technology. He argued that these ‘digital natives’ process information fundamentally differently from ‘digital immigrants’, who had to adapt to the new reality. Prensky claimed that digital natives possess an intuitive, native-speaker-like fluency in the language of socio-technical possibilities, whereas prior generations are only immigrants in the digital world whose accent usually remains visible in their digital language. Prensky used this framing to argue for adapting education to the needs of this generation. Being a ‘digital native’ has since become a sought-after quality on the job market, as it is assumed that this generation can work intuitively with emerging technologies.

Especially with regard to governance, however, this native immersion has a disadvantage.
The moment ‘you “go native” […] you lose yourself within the identity of the system you’ve entered and become subject to it’ (The Grammar of Systems by Patrick Hoverstadt). This is why we are currently seeing the boundaries of our social contracts broken by narratives that elevate AI – such as the Singularity – beyond human control and the legal system.

To reclaim our limits, we need to become aware of our ‘sci-fi native’ thinking patterns; otherwise, we will be trapped in a feedback loop of inverted sci-fi narratives. A vital foresight skill is the ability to break ‘native’ scenarios and re-establish the boundaries needed to create new patterns that are truly life-centric. How? This is good news for sci-fi enthusiasts: the story continues, and we can rewrite the scripts.

Declaration of AI Use

As an emerging technology expert, the author uses AI as part of the research process. Full authorship remains with the author; AI is used to refine the English of a non-native speaker and to assist with research and editing. The conceptual framework, including the introduction of the ‘science fiction native’ (sci-fi native), and the ultimate authorship of the final text remain entirely with the author. As AI systems learn from their users, any similarities to this text may emerge as a result of this relationship.