Code of opacity: How AI is enabling corruption

Policymakers worldwide are caught between awe and apprehension over AI. They recognise its potential to accelerate productivity and scientific progress while worrying about threats to jobs, human rights, and social cohesion. Yet they’re missing a critical risk: AI is becoming a code of opacity within government. Without adequate oversight, AI systems can facilitate corruption—eroding public trust in both the technology and the institutions deploying it.

Many governments are using AI to inform decisions, prepare budgets and analyses, and allocate resources. For example, the Dutch government relied on a self-learning algorithm to evaluate childcare benefit claims. Unfortunately, the AI system was built on an inaccurate and incomplete dataset. Over time, the algorithm “learned” to flag immigrants, lower-income individuals, and people with non-Western appearances as fraudsters. Civil servants approved these determinations without careful review and offered no mechanisms for affected families to challenge the decisions. After years of complaints, a parliamentary inquiry stopped this misuse of AI. Soon thereafter, the Dutch government collapsed.

The Netherlands’ misuse of AI was a form of AI-enabled corruption. While there is no universal definition of corruption, many international organisations define it as the abuse of entrusted power for private gain—including non-financial gain for oneself or others. There are many ways AI and corruption are linked, leading to unfair processes or uneven outcomes. Furthermore, individuals, firms, and governments that misuse AI can deny responsibility and blame problems on AI’s opacity and complexity.

Unfortunately, the Netherlands’ experience is a cautionary tale and an early signal of what other countries could soon face. According to a 2023 review of policies and programs reported to the OECD, more than 60 countries are using taxpayer funds to create, disseminate, or conduct research on AI. Many of these same governments increasingly rely on AI systems to allocate resources such as access to credit, education, and information. If such systems rely on biased, inaccurate, incomplete, or unrepresentative data, or the algorithm is poorly designed or based on pseudoscience, AI use can create risks for both human rights and human autonomy.

Governments are also using AI for national security purposes, such as targeting support, intelligence analysis, proactive forecasting, and streamlined command and control in conflicts in both Europe and the Middle East. Policymakers in the US and China increasingly describe competition in AI as an existential race that each side must win, fuelling huge sums of private and public investment. Even as governments seek to expand AI’s role across sensitive domains, a different set of risks has begun to draw attention. In 2025, the UN Office of Drugs and Crime and the Organization of American States co-convened a panel to examine the corruption risks posed by AI.

AI deployers and users deserve a better understanding of how AI misuse might yield corruption.

First, AI systems are already corrupting political processes. In Indonesia, the political party Golkar used AI in 2024 to reanimate Suharto, the longtime dictator who died in 2008. The deepfake Suharto endorsed several candidates, including his son-in-law, who won.

Second, governments must exercise caution when integrating AI systems into their decision-making. As the Dutch government found, relying on inaccurate, incomplete, or unrepresentative data sets can cause real harm and further erode trust. Individuals, too, should be cautious integrating AI into their lives. AI systems can share non-factual information and deceive or manipulate individuals. The US National Institute of Standards and Technology recently noted that generative AI presents a wide range of risks, and there is no foolproof method for protecting AI from misdirection, a form of corruption of AI.

Third, AI’s opacity can undermine trust in both the technology and in the policies and institutions that govern it. AI, like corruption, operates as a black box. Developers often cannot explain how systems reach certain conclusions, and struggle to correct unwanted outcomes—such as when a driverless car killed a pedestrian. The OECD warns that “insufficient transparency, explainability and public understanding of AI can erode accountability and cause public resistance; and the over-reliance on AI can widen digital divides and allow systemic errors to propagate, weakening citizen trust in government.” Recent polls signal a decline in trust in AI, AI companies, and AI governance around the world.

Policymakers and their constituents must break through this code of opacity and pay closer attention to the risk to democracy and good governance presented by AI. They should collaborate to draft clear guidelines for developing, procuring, and deploying AI that address areas of vulnerability, including corruption. They should incentivise transparency in model development, which can facilitate external audits of government use of AI. Finally, they should empower individuals and civil society organisations to challenge AI misuse through accessible appeal mechanisms.

Susan Ariel Aaronson is a professor at George Washington University and co-PI of the NSF-NIST Institute for Trustworthy AI in Law and Society.

The New Delhi AI Summit: Inclusive rhetoric, fractured reality

When India hosted the AI Impact Summit in New Delhi, from 16 to 20 February, it seized the moment to demonstrate its growing influence in the digital and AI field. But beyond India’s own ambitions, the gathering also served as a revealing showcase of how ‘middle powers’ are positioning themselves within the fast-changing dynamics of geopolitics. Within this broader category, ‘digital middle powers’ – countries with established or rapidly growing influence in digital technology, often serving as regional leaders or possessing niche global strengths – deserve particular scrutiny. India clearly fits this description, heightening expectations for the outcomes of the AI Summit in New Delhi.

As host, India sought to shape global AI discourse by re-centering it on development and inclusion. It did so by weaving policy into a narrative rooted in its own cultural and philosophical heritage, framed by the concept of MANAV (a Sanskrit word for humanity) and seven chakras (pillars), which represented the goals of the Summit: social empowerment; trustworthiness; energy efficiency; use of AI in science; democratising AI resources; and use of AI for economic growth and social good. Yet beneath this tapestry of inclusive rhetoric, the summit also laid bare the deepening fractures in global AI governance. Six observations illustrate this fragmentation.

The mushrooming of AI-related initiatives

Each previous host of the AI summit has chosen a distinct framing for the discussion – the UK focused on safety and existential threats, South Korea on the management of risk and innovation, and France on investment, AI governance, and the public good. This framing matters because it sets the tone for the global conversation. For India, this created an opportunity to mainstream issues often left at the margins: democratising access to AI, investing in human capital through worker re-skilling, and pursuing social empowerment. Yet these topics failed to translate into tangible high-level consensus. Instead, they were largely relegated to working group discussions, generating a flurry of voluntary initiatives. Whether any of these initiatives bear fruit will depend entirely on the post-summit interest and goodwill of an amorphous community of stakeholders.

The lack of continuity between initiatives advertised at each summit is striking. In South Korea, a network to accelerate the advancement of the science of AI safety was mainstreamed in an Annex to the Seoul Declaration. At the Paris Summit, officials announced ‘Public Interest AI’, a global platform that can serve as an incubator for artificial intelligence serving the public good, and a network of observatories to monitor AI’s effects on job markets.

One year on, the absence of meaningful attention to these projects is telling. It does not bode well for the many new charters, coalitions, and platforms launched in Delhi, including a Charter for the Democratic Diffusion of AI, the Global AI Impact Commons, the Trusted AI Commons, and a Network of AI for Science Institutions. By multiplying initiatives without mechanisms for follow-through, these summits risk spreading already scarce resources – attention, human capacity, political will – so thin that nothing takes root.

The adoption of increasingly weak commitments

AI Summit Declarations are not legally binding. Against this already low level of constraint, the language employed in the Delhi Declaration stands out for its prolific use of ‘take note’ – a diplomatic way of avoiding clear endorsement – and of ‘voluntary and non-binding’ in reference to all practical initiatives. This linguistic evasion suggests that even the modest ambition of previous declarations may be eroding, replaced by a growing reluctance among states to commit to anything that could later constrain their freedom of action.

The sidelining of UN-led processes

With the approval of the Pact for the Future and the Global Digital Compact, the United Nations has centered international discussions on digital technologies around a people-centred and development-oriented future. This is no minor achievement, given the current climate of geopolitical and geoeconomic tension. The UN is carrying out work, with the involvement of multi-stakeholder expert groups, not only in the AI scientific panel but also on topics intimately related to the AI stack, such as the CSTD working group on data governance.

The Delhi Declaration makes no direct mention of any of these processes. While this omission may have facilitated endorsement by countries that oppose global AI governance – notably the United States – it weakens both UN processes and the evidence-based approach the organisation champions. In the words of UN Secretary General, António Guterres, “If we want AI to serve humanity, policy cannot be built on guesswork. It cannot be built on hype – or disinformation. We need facts we can trust – and share – across countries and across sectors. Less noise. More knowledge”.

In a scenario in which AI’s impact on global GDP, labour markets, and education remains insufficiently understood, it is irresponsible to race toward AI adoption without gathering evidence. Building understanding is a mid- to long-term effort that will not be achieved by the disparate and discontinuous political priorities adopted at each summit, but by consistent work – precisely the kind the UN and its specialised agencies are well-placed to coordinate. As host of the 2027 AI Summit, and a hub for international organisations and multilateralism, Switzerland will have the chance to redress this imbalance.

Economic security and the spreading of weaponised interdependence to the Global South

The main outcome of the AI Summit was not the Delhi Declaration, but India’s decision to join the Pax Silica, a US-led effort to foster “AI and supply chain security, advancing economic security consensus among allies and trusted partners.” This development matters because it further consolidates the extension of great-power competition into the Global South, embedding AI governance within broader geopolitical and economic security alignments.

For the United States, the Pax Silica is a logical extension of a longstanding strategic positioning. Although the US lacks a stand-alone economic security strategy, its contours can be inferred from the National Security Strategy (NSS), from ‘economic security’ clauses in recent trade deals, and from bilateral understandings, such as the one established with Greece, which declares that “economic security is national security.”

Four of the twelve 2025 NSS objectives have an economic security dimension, including preserving leadership in frontier domains like AI. Successive NSS documents, including those from past Trump administrations, make explicit that China is the primary strategic competitor of the US. The Pax Silica is, therefore, a coalition designed to maintain that leadership by securing supply chains and coordinating with trusted partners.

For India, the calculation is pragmatic. Joining the Pax Silica consolidates its long-standing security partnership with the US – already institutionalised through arrangements like the QUAD. From the private sector’s perspective, the initiative offers something equally valuable: a US-government-blessed framework for investment in India’s huge, English-speaking IT workforce. In volatile times, this predictability is a significant asset.

The division of labour inside the alliance is clear. India will serve as a provider of skilled workforce and minerals, while the US grants access to advanced technology. India appears to be betting that supply chain alignment will translate into precisely this – investment and technology access – helping it bypass the limitations the US is introducing to control AI diffusion, such as tariffs and export controls.

Yet these bilateral advantages should not obscure the broader strategic outlook. The Pax Silica is a bloc aimed at countering Chinese influence over critical supply chains, particularly in sectors where China currently holds bottlenecks, such as rare earth mineral refinement. Chinese control is a common concern for Washington and Delhi, and the alliance aims to position its members to deter the use of these pressure points while strengthening their own capacity to exert leverage if needed. The securitisation of supply chains is one of the most visible signs of how economic interdependence has become weaponised, and the Pax Silica extends this dynamic into the domain of AI governance.

A display that digital/AI sovereignty is whatever governments want it to be

Strengthening sovereignty in the digital domain has become almost commonplace. In Europe, there are initiatives (still incipient, but growing) to develop the European digital industry and replace American technology services with European providers. In the Global South, sovereignty takes on post-colonial connotations and is associated with a way of maintaining control, agency, and self-determination over AI systems.

India shows, however, that the expression has been loaded with so many contradictory political projects that it is losing its meaning. The country is often cited as one of the champions of digital sovereignty, due to the way it has created its public digital infrastructure. However, ‘digital sovereignty’ is mentioned only timidly in the New Delhi declaration.

This is not surprising. On the one hand, the US instructed its diplomats to fight against digital sovereignty (and data sovereignty) initiatives in capitals around the world. On the other hand, India focused on attracting significant investment from large American technology companies during the summit. This movement shows that digital sovereignty projects remain contradictory in different parts of the world and disconnected from public debate. The way countries are defining what digital sovereignty entails in practice is closely related to the objectives of the political forces in power and depends on how sensitive these elites are to lobbying and international pressure.

If there are no strategies, goals, metrics, and participatory governance mechanisms, sovereignty in AI can be implemented in a way that is disconnected from individual, social, and economic rights. Clear strategies could also help countries identify concrete opportunities for joint action, beyond abstract discussions about digital sovereignty.

BRICS are sensitive to fragmenting trends  

The BRICS grouping offers a window into how middle powers navigate competing pressures in AI governance. India’s decision to join the Pax Silica illustrates this dynamic. India wants to compete with Beijing for primacy in strategic sectors such as AI and for regional clout, and it fears strategic encirclement by China’s growing infrastructure partnerships. It has banned Chinese apps like TikTok and restricted vendors such as Huawei.

In this context, deepening ties with the US through the Pax Silica is a strategic choice grounded in a mix of security and economic considerations. It may pull India away from its BRICS partners, but it reflects the reality that dependence on Beijing is not an option.

Although the situation presents a challenge for the group, BRICS members are carefully navigating these tensions at the diplomatic level, showing that they significantly value the alliance. China was almost absent from the AI Summit, in contrast to its considerable presence in Paris. This could be seen as a sign of disapproval, but also as an attempt to avoid putting its relationship with India under strain.

In parallel, India signed an agreement with Brazil to expand cooperation on mining, steel, and rare earth minerals. Brazil holds 26% of the world’s rare earth reserves, second only to China. The agreement strengthens the partnership between India and Brazil, while also positioning India as a potential reliable source of Brazilian-sourced minerals for Pax Silica members. This could contribute to Brazil’s ‘strategic autonomy,’ helping the country supply goods to the US-led group while remaining at arm’s length from Washington, an approach Brazil also adopts toward Beijing by not joining China’s Belt and Road Initiative.

The real impact of India’s decision may be a segmentation of BRICS AI-related collaboration by layer of the AI stack. While it may become harder to collaborate on semiconductors and advanced AI models, for example, space could remain for continued cooperation on governance frameworks, ethical AI, and the development of AI applications.

Conclusion

The 2026 New Delhi AI Summit will likely be remembered less for its inclusive rhetoric than for what it revealed about the state of global AI governance. India’s effort to re-centre the discourse on development and inclusion offered a distinctive contribution to the evolving landscape of international AI discussions, and showcased the potential of middle powers to shape the agenda.

But beneath the narrative woven in Delhi, the summit exposed fragmentation rather than convergence. The Delhi Summit reflected the underlying realities of a multipolar, securitised, and initiative-saturated landscape. The question is whether future summits and UN processes can reverse this fragmentation.

The entropy trap: When creativity forces AI into piracy

True creativity is statistically improbable. Does this very nature of creativity make copyright infringement unavoidable for generative AI? The recent copyright decision GEMA vs OpenAI implies that it does.



On 11 November 2025, the Regional Court of Munich I (Landgericht München I) granted the German copyright collective organisation GEMA injunctive relief and damages for the unauthorised reproduction of copyright-protected song lyrics by OpenAI’s GPT-4 and 4o AI models. The court skilfully dismantled OpenAI’s argument, which has been used in recent years to obscure technical facts and the legal reality. (Note: All translations of the German judgment are by the author.)


Tech attache briefing: WSIS+20 zero draft and AI capacity building


The event is part of a series of regular briefings the Geneva Internet Platform (GIP) delivers for diplomats at permanent missions and delegations in Geneva following digital policy issues. It is an invitation-only event.

On 4 September, we resumed our briefings for Geneva diplomats with one focused on two important topics on the global digital governance agenda: reviewing 20 years of implementation of the outcomes of the World Summit on the Information Society (WSIS+20) and capacity building in artificial intelligence (AI).

We unpacked two documents:

***

For inquiries, contact us at geneva@diplomacy.edu.

AI Apprenticeship for IOs · From diplomats to AI builders

Equipping IOs with practical AI solutions

Discover how professionals in international Geneva – from the UN, WHO, to CERN – are creating AI assistants to support global cooperation.

As artificial intelligence (AI) reshapes how we work, International Organisations face a clear challenge: how to ensure AI strengthens, rather than disrupts, their ability to deliver on complex global mandates.

The AI Apprenticeship for International Organisations (IOs), developed by Diplo, provides a practical and applied response to that challenge.

Unlike technical training or abstract policy discussions, this apprenticeship focuses on enabling professionals to design and build AI tools directly relevant to their daily work, whether that’s improving access to information, enhancing communication, or supporting better decision-making.

Throughout the program, participants

Below, we are featuring a selection of projects created by our participants in the Geneva AI Apprenticeship for IOs program. Each reflects how AI, when designed thoughtfully and applied responsibly, can complement human expertise and support the work of IOs, from improving access to institutional knowledge to tackling misinformation.

The AI Apprenticeship is part of Diplo’s broader effort to close the AI skills gap in international Geneva and beyond, ensuring that those shaping global policy also have the tools to navigate and shape the AI era.

New publication: AI Apprenticeship

Our new guide, AI Apprenticeship: Learning about AI by developing AI, details the successful framework and principles of our program.

The AI Apprenticeship publication explores how learning by building can equip professionals with the skills, ethics, and adaptability needed for the AI era. Inspired by the Swiss vocational model, it presents a human-centred approach to navigating digital transformation. Drawing on Diplo’s experience, it demonstrates how even non-technical professionals can gain confidence and competence in using AI.


AI Projects for IOs · From concept to real-world impact

What happens when you put AI development tools directly into the hands of diplomats, communication heads, and humanitarian experts? These projects are the answer. Discover the powerful assistants built not by programmers but by the professionals facing these global challenges every day.


Diplo Helper · AI for Accessible Emerging Technology

  • Created by: Luis Bobo Garcia
  • Organisation: United Nations
  • Project: Diplo Helper – AI for Accessible Emerging Technology Insights

Luis Bobo Garcia, an Associate Information Systems Officer, works at the intersection of diplomacy and emerging technologies. As AI, quantum computing, and blockchain reshape international discussions, translating these complex topics for non-technical audiences has become increasingly critical.

Through the AI Apprenticeship, Luis developed Diplo Helper, an AI assistant designed to make emerging technologies more accessible to diplomats and policymakers. The tool provides clear, structured insights on key technologies, supporting more informed engagement in international policy discussions.

This project highlights how targeted AI solutions can bridge the gap between technical complexity and the practical needs of global governance. To test the AI Assistant, click here:


AI in Digital Communications at WHO

  • Created by: Sarah Ann Cumberland
  • Organisation: World Health Organisation (WHO)
  • Project: AI in Digital Communications at WHO – Ensuring Responsible AI Use in Global Health Messaging

Through the AI Apprenticeship, Sarah developed an AI assistant that provides practical guidance on using AI in WHO communications. The tool offers accessible information on official guidelines, ethical considerations, and best practices, supporting responsible innovation within the organisation.

The project reflects the growing need for governance frameworks that enable the adoption of AI while safeguarding public trust in global health communication. To test the AI Assistant, click here:


Editron · AI-Powered Language Support for CERN

  • Created by: Daniela Antonio
  • Organisation: CERN
  • Project: Editron – AI-Powered Language Support for CERN Communications

As Digital Communications Lead at CERN, Daniela Antonio works where scientific precision extends beyond research to communication. Maintaining consistent language standards, especially for social media, is essential yet resource-intensive.

As part of the AI Apprenticeship, Daniela developed Editron, an AI-based language assistant tailored to CERN’s English Language Style Guide. The tool streamlines copy-editing processes, ensuring consistent, high-quality content across CERN’s communication channels.

This project demonstrates how AI can be customised to meet particular operational needs within international scientific organisations. To test the AI Assistant, click here:


Geneva Loop Events Guide

  • Created by: Amina Osmanova
  • Organisation: United Nations International Computing Centre (UNICC)
  • Project: Geneva Loop Events Guide – AI for Navigating Geneva’s Multilateral Landscape

Amina Osmanova, an Associate Programme Management Officer at UNICC, operates within Geneva’s fast-paced multilateral environment, where staying informed on relevant events and developments is essential yet time-consuming.

To address this, Amina developed the Geneva Loop Events Guide. This AI assistant curates personalised updates on events, deadlines, and developments in areas such as AI, cybersecurity, and digital governance, including major conferences like AI for Good.

The project demonstrates how AI can improve situational awareness and connectivity within complex international ecosystems. To test the AI Assistant, click here:


UNHCR Data Pal · Unlocking Institutional Knowledge

  • Created by: Matthew William Saltmarsh
  • Organisation: UN Refugee Agency (UNHCR)
  • Project: UNHCR Data Pal – Unlocking Institutional Knowledge for Humanitarian Action

Matthew William Saltmarsh, Head of News & Media at UNHCR, works with institutional data essential for evidence-based humanitarian response. Yet, accessing specific insights from decades of UNHCR reports remains a labour-intensive process.

Matthew developed UNHCR Data Pal, an AI assistant trained on UNHCR reports from 2003 to 2025. The tool enables rapid extraction of relevant data, identification of trends, and access to evidence-based insights.

The project demonstrates how AI can unlock institutional knowledge, streamline analysis, and support more agile, informed decision-making in the humanitarian sector. To test the AI Assistant, click here:


Chatbot on Public Health Misinformation

  • Created by: Diya Banerjee
  • Organisation: World Health Organisation (WHO)
  • Project: Chatbot on Public Health Misinformation – AI to Support Evidence-Based Communication

Diya Banerjee, WHO’s Head of Social Media, addresses one of the most pressing challenges in global health: the spread of misinformation. False health claims on social media can undermine public health responses and erode trust.

To address this challenge, Diya developed the Chatbot on Public Health Misinformation designed to monitor emerging myths and provide fact-checked information from authoritative sources such as the WHO and the CDC.

The project highlights how AI can be deployed proactively to strengthen science-based public health communication and counter misinformation on a large scale. The AI Assistant will be released shortly.


Explore the Projects. Share Your Insights.


The AI solutions developed through this showcase reflect the creativity and expertise of professionals across the international Geneva community. Each project addresses a real-world challenge, demonstrating how AI can support the unique missions of International Organisations.

We invite you to explore these projects and to share your feedback or questions.

Together, we can advance the responsible and practical use of AI in global governance.

Discover real-world AI applications.

From Information Warfare to Youth Rights: Building Resilience Through Digital Democracy

The Czech permanent mission organised an event on 20 June 2025, ‘From Information Warfare to Youth Rights: Building Resilience Through Digital Democracy’. The discussion sat at the nexus of information warfare and the rights of young people, focusing on disinformation campaigns and other forms of malign interference in relation to freedom of expression and access to information.

Tereza Horejsova of DiploFoundation speaking at an event organized by the Permanent Mission of the Czech Republic to the UN Office in Geneva

The event brought together young delegates from the Czech Republic, Finland, Lithuania and Poland.

Tereza shared thoughts on mis- and disinformation, especially in relation to AI, and built on the findings of a study Diplo conducted in 2024 on decoding disinformation.

Evening Reception on the Use of AI in Humanitarian Contexts 

The Permanent Missions of the United Kingdom and Switzerland hosted an evening reception and panel discussion on ‘What Next for the Use of AI in Humanitarian Contexts?‘ With leading experts and policymakers, Diplo explored AI’s opportunities and challenges in humanitarian action, shared best practices, and discussed responsible innovation.

Related actors:

Related people:

Expert Workshop on the Rule of Law and Human Rights Aspects of Using Artificial Intelligence for Counter-Terrorism Purposes

Diplo attended the expert workshop on “The Rule of Law and Human Rights Aspects of Using Artificial Intelligence for Counter-Terrorism Purposes”.

The Geneva Centre for Security Policy (GCSP) organised the workshop in collaboration with the United Nations Security Council Counter-Terrorism Committee Executive Directorate (CTED) and the Federal Department of Foreign Affairs FDFA of Switzerland.

The event brought together 39 experts to explore the challenges and opportunities of applying human rights-centred approaches to the use of AI in counter-terrorism efforts.