Diplomacy in beta: From Geneva principles to Abu Dhabi deliberations in the age of algorithms

Vladimir Radunović
Published on 15 September 2025
The world is changing fast — but how fast is diplomacy keeping up?

The Hili Forum in Abu Dhabi (8–9 September 2025) brought together policymakers, diplomats, and experts to explore how technology, geopolitics, and multilateral governance intersect in an era of uncertainty. How do states navigate an era where algorithms can decide who lives or dies on the battlefield? Where AI shapes both opportunities for peace and new forms of conflict? And where shifting demographics, economic power, and coalitions turn the global stage into a constantly moving, unpredictable chessboard? This report focuses on the session titled ‘Geneva vs Algorithms: Redefining Laws of War and Peace’, which examined the role of AI and Lethal Autonomous Weapon Systems (LAWS) in conflict, and how Geneva-based institutions can guide responsible governance. Reflections presented here are informed by insights from across the Forum, from debates on geopolitical instability to the rise of new coalitions and the challenges of keeping diplomacy effective in a rapidly evolving, tech-driven world.

Diplomacy in beta on a shifting chessboard

Diplomacy today resembles a beta version: functional but still being tested, adapted, and iterated in real time.

Diplomacy now operates in a fast-moving, experimental environment. Rules are incomplete, institutions lag behind, and norms are tested while technologies are already deployed. States, international organisations, and other actors must navigate uncertainty, iterate governance models, and experiment with coalitions to respond to fast-moving challenges in geopolitics, technology, and security.

Geopolitical disruption may be permanent — there seems to be little expectation of a return to a pre-existing order. The only certainty is uncertainty. Demographic and economic changes — including the rise of China, Indonesia, Brazil, Nigeria, and regions in Africa and the Middle East — are reshaping influence and opportunity. Middle powers face a delicate balancing act: when major powers clash, they may be forced to choose sides or risk becoming targets; in calmer periods, they navigate a shifting chessboard of alliances, coalitions, and partnerships.

The Global South’s voice remains underrepresented, even as legitimacy increasingly depends on it. BRICS economies are gaining influence relative to the G7, while the G20 and other plurilateral arrangements grow in prominence. Yet the UN remains central to governance, though reform is needed to maintain its relevance. Core principles (non-intervention, international law, peace, and security) remain essential, while smaller, issue-based coalitions are increasingly prominent in economic and security affairs.

Algorithms of war and peace: Risks and opportunities

AI is not just a tool — it is a force reshaping war, defence, and international security.

AI in conflict is a central concern, with risks extending far beyond LAWS. AI is integrated into target identification, decision-support systems, and intelligence, surveillance, and reconnaissance (ISR) operations. Maintaining humans in the loop is essential to ensure oversight; yet human decision-making is slower and is becoming emotionally detached from consequences. Emerging threats include automated disinformation, rapid exploitation of cyber vulnerabilities, and the convergence of AI with robotics, biology, chemistry, and nuclear-adjacent technologies. These developments illustrate how speed, opacity, and cross-domain interaction multiply systemic risks.

Yet AI can also save lives, if governance keeps pace with deployment. It has defensive and peace-enhancing applications, from conflict prevention and mediation to force-protection systems such as the Iron Dome. As defence budgets grow and increasingly cover AI, a key question is whether they will allocate sufficient resources to safeguards (red-teaming, audits, explainability research, and ethical oversight) or whether protective measures will lag behind deployment. Responsible governance is critical to ensure AI serves stability rather than exacerbates conflict.

Governance beyond the battlefield: Geneva + algorithms

EspriTech de Genève enables international principles to meet technical standards and practice.

The UN has been unusually active on AI in the military context. The Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on LAWS, UN General Assembly (UNGA) resolution 79/239, and UN Human Rights Council (UNHRC) resolution A/HRC/60/63 highlight global momentum. Even the UN Security Council has debated AI’s impact on global peace and security. The idea of a ‘Fourth Geneva Convention’ on AI and technology was floated; at the same time, states agreed that international law already applies in the context of AI. However, operationalising principles of international humanitarian law (IHL), such as distinction, proportionality, and precaution, is complicated by black-box AI, biased datasets, and complex supply chains.

Lessons from cybersecurity negotiations under the UN Open-Ended Working Group (OEWG) on ICTs show that international agreements can combine existing law, voluntary norms of responsible state behaviour, confidence-building measures, and capacity-building initiatives. The challenge, however, is that many states show limited willingness to implement even binding obligations, let alone voluntary commitments. Here, the role of other stakeholders — industry, civil society, academia — becomes indispensable, as highlighted by the Geneva Dialogue on Responsible Behaviour in Cyberspace, which emphasises multi-actor responsibility and implementation.

Governance must extend across the full AI lifecycle: pre-design, design, development, evaluation, testing, procurement, deployment, operation, and decommissioning. Ethics- and human-rights-by-design, explainability, enforceable limits for high-risk uses, and mandatory human oversight at critical decision points are essential. Responsibility must be shared: ‘the machine did it’ cannot be an acceptable excuse. All stakeholders and sectors must take their share of the responsibility: international organisations and governments, as well as vendors, integrators, the tech community, civil society, and academia. Geneva has a rich international and multistakeholder digital policy ecosystem, including the CCW GGE, the UNHRC, ITU, ISO, the Conference on Disarmament, the ICRC, UNIDIR, the Geneva Internet Platform, and the Geneva Dialogue, to name but a few. The tech spirit of Geneva allows principles, technical standards, ethics, diplomacy, and multistakeholder engagement to converge, shaping norms and rules and translating them into operational guidance and governance frameworks. Geneva’s role is critical to ensuring that algorithms serve peace rather than exacerbate conflict.

Capacity development for diplomacy in the AI age

Preparing diplomats for the beta version of diplomacy is a priority.

One cross-cutting question emerges: how, then, do we prepare diplomats and policymakers to operate in this ‘beta’ environment? They must navigate shifting geopolitical landscapes, complex technology risks, and evolving coalitions. Capacity building must therefore equip diplomats with the knowledge and skills to respond to new technologies, multistakeholder partnerships, and emerging governance challenges. Importantly, learning is not enough; unlearning outdated assumptions is equally important.

Innovative training approaches were emphasised: scenario-based games, storytelling, and AI-assisted simulations can prepare diplomats for fast-moving crises and high-stakes negotiations. Diplomats need fluency in AI, cybersecurity, algorithmic risk, and the ways technology intersects with international law and multistakeholder governance; AI apprenticeship programmes and courses on AI governance and cybersecurity policy can help build it. Engaging younger professionals early is also essential: Gen Z brings both technical literacy and a demand for fair, transparent governance, helping institutions prepare for the next generation of challenges.

From debate to action

The Hili Forum illustrated that diplomacy today is experimental, adaptive, and iterative: a true ‘beta version’, in tech jargon. Algorithms are shaping the future of geopolitics and geoeconomics, of international security, and of war and peace.

Combining the inspiring Abu Dhabi deliberations on the changing nature of diplomacy with Geneva’s established humanitarian principles and vibrant digital ecosystem offers a pathway to responsible AI governance. Flexible, principled, and risk-aware diplomacy can ensure algorithms serve peace, stability, and shared international security objectives. Are we up to the task?

Disclaimer: To walk the talk, the writing of this text relied on AI. While AI helped make the text more readable and engaging, the substance comes from human expertise — the participants of the Forum and the author.
