Today’s CrowdStrike failure jolted us into a harsh reality. For the past two years, the tech world has been abuzz with discussions about the potential risks of AI. Conferences, initiatives, and think tanks have dedicated countless hours to debating the hypothetical dangers AI might pose in the future. However, while our attention was fixed on the AI horizon, an immediate threat lurked in the shadows of our increasingly complex digital infrastructure.

This oversight came at a cost. While we were busy contemplating the ethical implications of superintelligent AI, we neglected the very real and present dangers posed by our current technological dependencies. The CrowdStrike incident is a stark reminder that our most pressing vulnerabilities lie not in the realm of science fiction but in the intricate web of digital systems that power our daily lives.

The CrowdStrike failure was triggered by what should have been a routine software update, and it affected systems running Microsoft Windows. Several features of the update process contributed to the mass effect. Cybersecurity tools and services are typically granted high privileges within IT systems, allowing them to make systemic changes, and they increasingly rely on cloud services and automatic updates pushed remotely by the supplier. Moreover, popular and sophisticated cybersecurity solutions like CrowdStrike’s, which target big enterprises, are widely trusted and deployed in complex and critical systems worldwide, increasing the scale and impact of possible havoc.

This seemingly innocuous event cascaded into a crisis that affected critical infrastructure across the globe. Airports ground to a halt, hospitals scrambled to maintain essential services, and businesses found themselves paralysed. What makes this situation particularly alarming is its potential for recurrence. Is the next failure – with no malicious intent – looming due to how our computer systems operate?
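The blast-radius problem described above can be sketched as a toy simulation. This is a hypothetical illustration, not CrowdStrike’s actual deployment pipeline: the functions `push_everywhere` and `staged_rollout`, the stage fractions, and the failure model are all assumptions, while the fleet size reflects Microsoft’s reported figure of roughly 8.5 million affected Windows devices.

```python
# Toy model (illustrative, not CrowdStrike's real mechanism): it contrasts
# pushing an update to every machine at once with a staged (canary) rollout
# that halts as soon as failures surface in an early cohort.

def push_everywhere(fleet_size: int, update_is_faulty: bool) -> int:
    """Simultaneous auto-update: a faulty update reaches the whole fleet."""
    return fleet_size if update_is_faulty else 0

def staged_rollout(fleet_size: int, update_is_faulty: bool,
                   stages: tuple = (0.01, 0.10, 0.50, 1.00)) -> int:
    """Canary rollout: the update reaches one cohort at a time; the rollout
    halts as soon as failures are observed in a cohort."""
    affected = 0
    for fraction in stages:
        cohort = int(fleet_size * fraction) - affected  # hosts new to this stage
        affected += cohort
        if update_is_faulty:
            return affected  # crashes detected in this cohort; halt here
    return 0  # a healthy update rolls out fully, with zero hosts harmed

FLEET = 8_500_000  # roughly the number of Windows hosts Microsoft reported affected
print(push_everywhere(FLEET, update_is_faulty=True))  # 8500000 -- the whole fleet
print(staged_rollout(FLEET, update_is_faulty=True))   # 85000 -- only the 1% canary
```

The point of the sketch is the ratio, not the numbers: any mechanism that exposes only a small cohort to a new update before fleet-wide deployment bounds the damage a defective update can do.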
Can we base our relationship with cloud service providers on trust and a ‘shared responsibility’ approach? The ‘unknown’ aspect of these risks makes them even more dangerous. What can be done?

Reduce over-dependence on one system

The cyber failure was triggered by the over-reliance of Microsoft-driven systems on CrowdStrike services; thus, a simple update evolved into a major cyber incident. Companies and governments must diversify their critical digital and telecom resources, as argued by Australia, Canada, and the UK.

Prevent the domino effect of failure by cyber complexity

We have to start ‘de-complexifying’ our digital systems, as new risks can emerge from the sheer complexity and interplay of components and applications. Reducing or, at least, monitoring digital complexity will be counter-intuitive to the current trend in the IT sector, where new systems are mashed together from hundreds of components, apps, and services. Taming complexity will require a mix of regulatory, standardisation, and awareness-building actions and initiatives.

A call for liability of tech companies

In the wake of this crisis, we must reassess the legal liability of tech companies. Just as car manufacturers are held liable for vehicle failures, tech companies should bear responsibility for cyber risks associated with their products and services. Those who benefit from the digital industry must also shoulder the burden when things go wrong. This has become evident in recent years with respect to the security of digital products, following several high-impact cyberattacks and breaches rooted in an insecure ICT supply chain. Consequently, regulators and policymakers have started introducing legislation (e.g. the EU’s NIS2 and China’s Regulation on Critical Information Infrastructure) and policies (e.g. the USA’s national cybersecurity strategy) that hold critical infrastructure operators and owners liable for managing third-party risks.
At the same time, the EU has already introduced legislation such as the Cyber Resilience Act, which establishes liability for the security of digital products. Defining supplier responsibility (and liability) for the security and stability of digital products through legal instruments is essential for ensuring accountability and safety. It would incentivise companies to invest more in robust security measures, thorough testing protocols, and fail-safe mechanisms. Moreover, it would foster a culture of responsibility within the tech industry, in which the potential real-world impacts of digital failures – and their solutions and broader effects on society – are given the serious consideration they deserve.

Implementing existing and introducing new policies and legal instruments

While technical protections are crucial, they alone are insufficient to address the complex landscape of cyber risks. The vulnerability of digital systems can be exploited by a wide range of malicious actors, from individual hackers to state-sponsored groups – and, as this incident shows, similar wide-scale adverse effects are possible even through technical omissions without malicious intent. To mitigate these risks, we need a comprehensive approach combining technological solutions with robust legal and policy frameworks. Developing and enforcing legal and policy instruments, such as the 11 UN cyber norms, is imperative. These norms provide a foundation for responsible state behaviour in cyberspace and can help create a more secure global digital environment. However, their effectiveness hinges on widespread adoption and consistent implementation, and on cooperation between different stakeholders – governments and regulators to shape the legal environment, industry to introduce security practices, and civil society and academia to provide knowledge and oversight.
The Geneva Manual, the outcome of the global Geneva Dialogue on Responsible Behaviour in Cyberspace, outlines the specific roles various actors should play in implementing the cyber norms related to vulnerabilities in digital products and the security of the supply chain.

Accelerating international cooperation

The CrowdStrike failure illustrates how a simple update in one part of the world can disrupt critical systems globally. Despite current geopolitical tensions, international cooperation is vital for achieving and maintaining cyber stability. Implementing the 11 UN cyber norms and confidence-building measures (CBMs), alongside existing international law, is a necessary step toward global cybersecurity. These initiatives provide a framework for collaborative efforts to combat cyber threats, share best practices, and respond collectively to major incidents.

Rebalancing focus between immediate threats and future concerns

While important, the intense focus on AI risks has inadvertently overshadowed the immediate and significant threats posed by cyber failures. Today’s incident serves as a wake-up call, highlighting the urgent need to reassess our priorities and address the vulnerabilities inherent in our current digital systems. This is not to say that we should abandon discussions about the potential long-term risks of AI. Rather, we must strike a balance, allocating appropriate resources and attention to both future possibilities and present realities. Today’s threats from cyber vulnerabilities are not hypothetical – they are real, immediate, and potentially devastating.

Moreover, this incident reminds us that technology updates and upgrades – particularly ground-breaking innovations and products – must pass a thorough technical and societal check before being widely deployed. This requires close industry cooperation with technology and society researchers, civil society organisations, and regulators.

Conclusion

The CrowdStrike failure is more than just a temporary disruption – it’s a clarion call for a fundamental shift in how we approach cybersecurity.
We can work towards a safer and more resilient digital future by holding tech companies accountable, implementing robust legal frameworks, fostering international cooperation, and rebalancing our focus between future risks and present vulnerabilities.