AI goes to war

The new contractual architecture between the AI big tech firms and the heart of the United States military-industrial complex is a way of keeping suppliers aligned with the Trump administration’s ‘AI-first fighting force’ doctrine.

When the United States government decided, in February, that Anthropic was a supply-chain risk, it deployed for the first time against a domestic firm a designation hitherto reserved for foreign adversaries. When, two months later, it tentatively moved to bring the company back – only to block the expansion of its most advanced model days later and, simultaneously, use that same model as justification for reversing its policy of non-regulation of AI – it showed that the original verdict had never been about risk. It was about who decides what may be done with a frontier AI model. After a court fight with the maker of the Claude chatbot, the Department of Defence now seeks to exclude the company once and for all, while officially bringing eight competitors to the front. More than that, three days later it announced the creation of a working group to consider the possibility of government vetting before the release of frontier technology models.

With a state in permanent culture-war mode, having all of the leading generative-AI model-makers working for the government makes it easier to swap one piece on the board for another without further friction, using the redirection of contracts to competitors as a tool of persuasion. In Trumpian warcraft, where frontier AI is treated as a national-security asset, no effort is spared to keep one’s friends close and one’s enemies closer still.

The doctrine behind this new architecture has an administrative name. It is called ‘all lawful purposes’, and it was the very point that broke Washington’s relationship with Anthropic in February. The company demanded explicit safeguards against the use of Claude in fully autonomous weapons and in mass domestic surveillance. Faced with CEO Dario Amodei’s refusal to yield, the Pentagon designated Anthropic a ‘supply chain risk’, a category historically reserved for foreign adversaries. On 1 May, eight tech giants (Google, OpenAI, Microsoft, Amazon Web Services, Nvidia, SpaceX, Reflection AI and Oracle) signed agreements to deploy AI on classified Impact Level 6 and 7 networks under the formula ‘lawful operational use’, confirming the clause as a contracting standard. Anthropic is explicitly off the list. The Pentagon has become the rule-setter for AI across the federal apparatus.

The financial scale of the operation explains the Department of Defence’s pull on these firms. While the AI big tech companies are expected to pour roughly US$600 billion into AI infrastructure in 2026, the same year’s defence budget set aside US$54 billion for AI and autonomy alone, with a more than hundredfold increase planned for the newly created Defence Autonomous Warfare Group in 2027. The convergence is not a metaphor. It is the financial heart of the relationship. The Pentagon is simultaneously a customer, a regulator, a source of security clearances and a dispenser of geopolitical prestige to a sector that, given its capex requirements, must turn government contracts into structural revenue. Ask Palantir.

Friendly fire

Amodei’s company has been simultaneously punished, treated as indispensable and, once again, left out in the cold. It lost investors such as the 1789 Capital fund, linked to Donald Trump Jr., and watched competitors race to occupy its space, but until the past week, the National Security Agency (NSA) carried on incorporating Mythos, a frontier model launched in April, and the Pentagon never stopped running Claude in the war with Iran. A legal battle of injunctions and appeals was waged around the ‘supply chain risk’ designation, which was ultimately upheld on national-security grounds. The rapprochement being stitched together via a White House executive action intended to ‘save face and bring them back’ coexists with signs that the friction persists. At a Senate hearing on 30 April, Defence Secretary Pete Hegseth called Amodei an ‘ideological lunatic’ and likened Anthropic’s stance to ‘Boeing giving us airplanes and telling us who we can shoot at’. The day before, the White House had rejected the expansion of Mythos access from around fifty to one hundred and twenty institutions. Trump says the company is ‘sorting itself out’. Despite the contradictory signals, every indication suggests the bargain remains open.

The rhetorical packaging of this arrangement also merits attention. Deputy Secretary Emil Michael, a former Uber executive turned Pentagon CTO, has christened military AI America’s next ‘Manifest Destiny’. Hegseth, in the GenAI.mil launch video, declared that the future of American warfare is spelt ‘A-I’. In January, Hegseth brought in Grok, Elon Musk’s xAI chatbot, coining the term ‘woke AI’ as its antonym, in a veiled reference to Anthropic. The point is not merely to arm the troops; it is to frame the AI frontier as contested cultural territory, where ethical caution is read as ideological suspicion. Anthropic was punished, in part, for taking its own ‘safety-first’ rhetoric seriously, the rhetoric that justified its founding in 2021 by dissidents from OpenAI. The message to the sector is clear. Defending restrictions on use has become a political gesture, not a technical one.

About face

Google’s return to its relationship with the Pentagon illuminates the message from another angle: the failure of internal resistance as a brake. In December, the Department of Defence launched GenAI.mil, a platform meant for three million service members, civilians and contractors, with Gemini for Government as its inaugural engine. In April, Google signed an expansion of the contract for classified networks, hours after more than six hundred employees, including directors and senior researchers from DeepMind, sent an open letter to CEO Sundar Pichai urging him to reject the deal. The mobilisation was simply outpaced by the speed of the signing. Its organisers used a phrase that may yet become a refrain: ‘Maven isn’t over’.

The historical reference is unavoidable. In 2018, around 4,000 Google employees signed a petition against Project Maven, and the company let its Pentagon contract expire, adopting ‘AI Principles’ that ruled out weapons and surveillance. Palantir took up the work. In February of last year, Google quietly deleted those commitments, with Demis Hassabis citing ‘the global competition for AI leadership’. At the time, AI was a contestable military experiment; now it is a declared geopolitical asset. The mobilising capacity that killed Maven has evaporated. The cycle of capture of what one might call the AI-military-industrial complex has closed.

All quiet on the Western Front

History offers clear precedents for this realignment. In moments of direct rescue, the state guaranteed private loans to prevent the collapse of strategic suppliers, as with Lockheed in 1971 and Boeing in 2020. In moments of redesign, it forced the consolidation of an entire sector, as in the ‘Last Supper’ of July 1993, when the Defence Secretary summoned CEOs to the Pentagon and triggered the merger wave that reduced the number of prime contractors from 51 to five. In moments of social or internal resistance, it rarely backed down. Dow Chemical kept producing napalm until losing the contract bid in 1969, Honeywell only divested from cluster munitions after more than two decades of pressure and regulatory change, and Microsoft preserved its IVAS contract worth nearly half a billion dollars even after its employees’ open letter in 2019. The exception was Project Maven in 2018, neutralised in the years that followed. The pattern is structural. When a technology is deemed strategic, the state keeps the firm on its budget even against market logic, against the company’s wishes, or against the protest of its workers.

The Anthropic case is a Lockheed in reverse, set in an era of frontier technology. In 1971, the state found itself obliged to keep alive a company that wanted to leave. In 2026, the state may find itself obliged to tolerate a company that does not want to leave but wishes to dictate the terms of use. Even with President Donald Trump posting an order that the use of the technology cease immediately, the Pentagon was running Claude in the war with Iran, and the NSA was using Mythos. The bargain has changed in nature. In the era of consolidated primes, it was about money and industrial capacity; now it is about contractual rules. The Pentagon is trying to replicate with the chosen eight the arrangement it built over the past three decades with Lockheed Martin and Boeing, large dependent suppliers without clauses limiting what the client may do with the product. The public defence of this logic was made by Hegseth himself before the Senate, with a revealing comparison. In his words: ‘It would be like Boeing giving us airplanes and telling us who we can shoot at’. The image is striking, but Boeing never needed such a clause because it sold finite platforms with bounded functions, separate from civilian life. Anthropic does not want to define targets. It is refusing to sell a platform unless the buyer agrees to contractually binding restrictions, including a ban on fully autonomous weapons and mass domestic surveillance.

Oppenheimer

The rhetoric of a ‘new Manhattan Project’ for AI, intoned by the bipartisan congressional commission in November 2024 and endorsed by Trump’s Energy Secretary in early 2025, is less a plan of action than a compensatory fantasy. Former Google CEO Eric Schmidt, who helped to construct the framing of AI as a national-security issue while chairing the National Security Commission on Artificial Intelligence between 2019 and 2021, today warns in the paper Superintelligence Strategy that attempting to replicate Oppenheimer’s atomic bomb programme with frontier AI models would provoke pre-emptive Chinese retaliation and destabilise the very equilibrium the programme claims to safeguard.

In the 1940s, the Manhattan Project required a single objective, centralised control and enforceable secrecy, conditions that AI simply does not reproduce, with talent that migrates between private firms, papers that cross the Pacific in hours and annual capex exceeding any conceivable budget line for a state-run programme. The multivendor arrangement the Pentagon is building is what is left when Manhattan turns out to be structurally impossible to replicate in AI. It is a Los Alamos without Los Alamos, executed by a contractual clause rather than an electric fence.

Apocalypse Now

There is an additional twist that confirms the argument. According to the New York Times, the Trump administration’s non-interventionist stance on AI began to shift in April after Anthropic announced Mythos. The model is so powerful at identifying software vulnerabilities that the company decided not to release it to the public. The White House is now discussing an executive order to create a working group to vet AI models before their release, possibly following a model similar to that of the United Kingdom.

The administration that was elected promising not to regulate AI is now studying how to regulate AI precisely because the company it tried to expel produced the very model that has made regulation unavoidable. Trump’s chief of staff Susie Wiles and Treasury Secretary Scott Bessent, the same operators negotiating Anthropic’s return to the Pentagon, now lead the formulation of this new regulatory policy following the departure of AI czar David Sacks in March. Military capture and civilian regulation of AI share the same political architects, which suggests that both obey a single strategy for the governance of the technological frontier.

The thin red line

There is, however, a qualitative difference between the Pentagon’s historical suppliers and the AI big tech firms that changes the nature of what is being contracted. Lockheed, Boeing, Raytheon, Dow Chemical and Honeywell delivered to the Department of Defence finite products, with bounded military application and a production chain separated from civilian life. A Poseidon missile does not touch the everyday life of the ordinary citizen; a gallon of napalm does not route private conversations; a cluster munition does not decide which news reaches the voter. The damage these technologies inflicted was tragic but geographically confined to the theatre of war. And the supplying firm could be replaced without the country’s informational fabric suffering any blow. The ethical barrier between civilian and military use was sharp, and it was precisely that sharpness which allowed mobilisations such as the campaign against Dow’s napalm or the two decades of the Honeywell Project to have a clear target.

Generative-AI companies operate in another register. With small adjustments, the same models that run on the Pentagon’s classified networks answer questions from students, draft commercial contracts, mediate customer service, suggest medical diagnoses and, increasingly, mediate access to public information. When Google delivers Gemini to GenAI.mil under an ‘any lawful governmental purpose’ clause, it is not selling a specific military product; it is opening the same cognitive infrastructure that serves billions of civilian users to use on air-gapped networks, disconnected from the internet, where public oversight is non-existent by definition. The Claude that operates in the war with Iran is the same Claude that writes code for programmers in Buenos Aires or São Paulo. The boundary between civilian and military, between domestic and foreign, between sovereign and transnational, ceases to be geographical and becomes contractual, defined in clauses to which the public has no access.

This affects the informational sovereignty not only of the United States but of any country whose citizens, governments, journalists and businesses depend on these same tools. A cluster munition manufactured in the suburbs of Minneapolis had no way of modulating the public discourse of an entire country; a frontier model trained to serve both a Brazilian teenager and an NSA analyst does. It is this indistinction between global civilian infrastructure and national military arsenal that makes the current capture graver than any previous episode of the military-industrial complex, and that makes Anthropic’s refusal to yield on the contractual clause less an ethical tantrum than the last trace of a boundary being erased.

To the last man

The piece that articulates the present arrangement is the multivendor concept. On 1 May, when formalising the agreements with the eight companies, the Pentagon announced its intention to build ‘an architecture that prevents AI vendor lock’, in the words of Cameron Stanley, the department’s chief digital and AI officer. The phrase sounds like bureaucratic banality, but it is a political doctrine. Distributing contracts among eight companies under the same contractual clause allows the state to discipline each one through the credible threat of redirecting the budget. When Anthropic refused, OpenAI signed. When the encirclement closed in May, Microsoft, Amazon, Nvidia, SpaceX, Reflection AI and Oracle joined. When it becomes expedient to bring Anthropic back, a presidential nod is all it takes. No company has individual bargaining power. All operate within the same legal framework. The Pentagon has fragmented the sector in order to capture it whole.

The result is a new architecture of the relationship between the American state and US tech capital. It is not the classical militarisation of Silicon Valley of the Eisenhower years, when DARPA contracts financed pure research. Nor is it the distant civic-mindedness of the early 2010s, when Google could afford ‘don’t be evil’ and still be worth trillions. It is something in between, and more opaque. Contracts large enough to condition corporate strategy, but dispersed enough never to look like dependency. Ethical principles were erased from corporate documents the moment they became expensive or began to compromise new revenue streams. Employees mobilise, but the decisions are out of their hands. Courts limit executive actions but uphold designations in the name of “national security”.

There is a financial layer to this architecture worth noting. AI big tech firms are burning cash at a scale unmatched in the technology sector, with operational losses in the billions and none of the leaders projecting profitability before the end of the decade. In October last year, the International Monetary Fund and the Bank of England issued public warnings about the risk of an ‘AI bubble’ comparable to the dotcom crash of 2000. In this context, signing a contract with the Pentagon ceases to be merely structural revenue and becomes something more valuable, an insurance policy. Lockheed in 1971, rescued by the Emergency Loan Guarantee Act because it was a strategic Department of Defence contractor, was the concrete invention of ‘too big to fail’ years before the term became financial jargon in the 2008 crisis.

When an AI company signs a ‘lawful operational use’ clause for an Impact Level 7 classified network, it is not merely selling a model; it is becoming critical national-security infrastructure. If the bubble bursts, and the sector’s history suggests it will, the eight firms that signed on 1 May will have legal and political grounds for a federal bailout that Anthropic, expelled from the club, may not. Capture, viewed from this angle, is also a reverse insurance scheme. The Pentagon bought ideological alignment, and the companies bought an implicit guarantee of rescue written into the contract.

The definition of ‘lawful use’ has become an object of dispute between corporate lawyers and Pentagon counsel. ‘When you’re regulating by contract,’ said Jessica Tillipman, associate dean for government procurement law studies at George Washington Law School, ‘it’s basically creating a huge amount of power in the agency that’s negotiated that contract and then becomes effectively the de facto policy of the administration’. Tools that three years ago were sold as research assistants today operate on classified networks, inside active wars, under clauses the public cannot read. Anyone who still wishes to speak of AI ethics from this point forward will have to start by reading contractual provisions.

About the author

James Görgen has been a Public Policy and Government Management Specialist since 2008. He holds a Master’s degree in Communication and Information from UFRGS. He is currently an advisor at the Ministry of Development, Industry, Trade, and Services of Brazil and a member of the Brazilian Internet Steering Committee.

*This article reflects the personal views of the author and should not be construed as the official position of the Brazilian Ministry of Development, Industry, Trade, and Services.

Code of opacity: How AI is enabling corruption

Policymakers worldwide are caught between awe and apprehension over AI. They recognise its potential to accelerate productivity and scientific progress while worrying about threats to jobs, human rights, and social cohesion. Yet they’re missing a critical risk: AI is becoming a code of opacity within government. Without adequate oversight, AI systems can facilitate corruption—eroding public trust in both the technology and the institutions deploying it.

Many governments are using AI to inform decisions, prepare budgets and analyses, and allocate resources. For example, the Dutch government relied on a self-learning algorithm to evaluate childcare benefit claims. Unfortunately, the AI system was built on an inaccurate and incomplete dataset. Over time, the algorithm “learned” to flag immigrants, lower-income individuals, and people with non-Western appearances as fraudsters. Civil servants approved these determinations without careful review and offered no mechanisms for affected families to challenge the decisions. After years of complaints, a Parliamentary inquiry stopped this misuse of AI. Soon thereafter, the Dutch government collapsed.

The Netherlands’ misuse of AI was a form of AI-enabled corruption. While there is no universal definition of corruption, many international organisations define it as the abuse of entrusted power for private gain—including non-financial gain for oneself or others. There are many ways AI and corruption are linked, leading to unfair processes or uneven outcomes. Furthermore, individuals, firms, and governments that misuse AI can deny responsibility and blame problems on AI’s opacity and complexity.

Unfortunately, the Netherlands’ experience is a cautionary tale and an early signal of what other countries could soon face. According to a 2023 review of policies and programs reported to the OECD, more than 60 countries are using taxpayer dollars to create, disseminate, or conduct research on AI. Many of these same governments increasingly rely on AI systems to allocate resources such as access to credit, education, and information. If such systems rely on biased, inaccurate, incomplete, or unrepresentative data, or the algorithm is poorly designed or based on pseudoscience, AI use can create risks for both human rights and human autonomy.

Governments are also using AI for national security purposes, such as targeting support, intelligence analysis, proactive forecasting, and streamlined command and control in conflicts in both Europe and the Middle East. Policymakers in the US and China increasingly describe competition in AI as an existential race that each side must win, leading to huge sums of private and public investment. Even as governments seek to expand AI’s role across sensitive domains, a different set of risks has begun to draw attention. In 2025, the UN Office of Drugs and Crime and the Organization of American States co-convened a panel to examine whether AI could enable even greater corruption.

AI deployers and users deserve a better understanding of how AI misuse might yield corruption.

First, AI systems are already corrupting political processes. In Indonesia, the political party Golkar used AI in 2024 to reanimate Suharto, the longtime dictator who died in 2008. The deepfake Suharto endorsed several candidates, including his son-in-law, who won.

Second, governments must exercise caution when integrating AI systems into their decision-making. As the Dutch government found, relying on inaccurate, incomplete, or unrepresentative data sets can cause real harm and further erode trust. Individuals, too, should be cautious about integrating AI into their lives. AI systems can share non-factual information and deceive or manipulate individuals. The US National Institute of Standards and Technology recently noted that generative AI presents a wide range of risks, and there is no foolproof method for protecting AI from misdirection, a form of corruption of AI.

Third, AI’s opacity can undermine trust in both the technology and in the policies and institutions that govern it. AI, like corruption, operates as a black box. Developers often cannot explain how systems reach certain conclusions, and struggle to correct unwanted outcomes—such as when a driverless car killed a pedestrian. The OECD warns that “insufficient transparency, explainability and public understanding of AI can erode accountability and cause public resistance; and the over-reliance on AI can widen digital divides and allow systemic errors to propagate, weakening citizen trust in government.” Recent polls signal a decline in trust in AI, AI companies, and AI governance around the world.

Policymakers and their constituents must break through this code of opacity and pay closer attention to the risk to democracy and good governance presented by AI. They should collaborate to draft clear guidelines for developing, procuring, and deploying AI that address areas of vulnerability, including corruption. They should incentivise transparency in model development, which can facilitate external audits of government use of AI. Finally, they should empower individuals and civil society organisations to challenge AI misuse through accessible appeal mechanisms.

Susan Ariel Aaronson is a professor at George Washington University and co-PI of the NSF-NIST Institute for Trustworthy AI in Law and Society.

The New Delhi AI Summit: Inclusive rhetoric, fractured reality

When India hosted the AI Impact Summit in New Delhi, from 16 to 20 February, it seized the moment to demonstrate its growing influence in the digital and AI field. But beyond India’s own ambitions, the gathering also served as a revealing showcase of how ‘middle powers’ are positioning themselves within the fast-changing dynamics of geopolitics. Within this broader category, ‘digital middle powers’ – countries with established or rapidly growing influence in digital technology, often serving as regional leaders or possessing niche global strengths – deserve particular scrutiny. India clearly fits this description, heightening expectations for the outcomes of the AI Summit in New Delhi.

As host, India sought to shape global AI discourse by re-centring it on development and inclusion. It did so by weaving policy into a narrative rooted in its own cultural and philosophical heritage, framed by the concept of MANAV (a Sanskrit word for humanity) and seven chakras (pillars) representing the goals of the Summit: social empowerment; trustworthiness; energy efficiency; use of AI in science; democratising AI resources; and use of AI for economic growth and social good. Yet beneath this tapestry of inclusive rhetoric, the summit also laid bare the deepening fractures in global AI governance. Six observations illustrate this fragmentation.

The mushrooming of AI-related initiatives

Each previous host of the AI summit has chosen a distinct framing for the discussion – the UK focused on safety and existential threats, South Korea on the management of risk and innovation, and France on investment, AI governance, and the public good. This framing matters because it sets the tone for the global conversation. For India, this created an opportunity to mainstream issues often left at the margins: democratising access to AI, investing in human capital through worker re-skilling, and pursuing social empowerment. Yet these topics failed to translate into tangible high-level consensus. Instead, they were largely relegated to working group discussions, generating a flurry of voluntary initiatives. Whether any of these initiatives bear fruit will depend entirely on the post-summit interest and goodwill of an amorphous community of stakeholders.

The lack of continuity between initiatives advertised at each summit is striking. In South Korea, a network to accelerate the advancement of the science of AI safety was mainstreamed in an Annex to the Seoul Declaration. At the Paris Summit, officials announced ‘Public Interest AI’, a global platform that can serve as an incubator for artificial intelligence serving the public good, and a network of observatories to monitor AI’s effects on job markets.

One year on, the absence of meaningful attention to these projects is telling. It does not bode well for the many new charters, coalitions, and platforms launched in Delhi, including a Charter for the Democratic Diffusion of AI, the Global AI Impact Commons, the Trusted AI Commons, and a Network of AI for Science Institutions. By multiplying initiatives without mechanisms for follow-through, these summits risk spreading already scarce resources – attention, human capacity, political will – so thin that nothing takes root.

The adoption of increasingly weak commitments

AI Summit Declarations are not legally binding. Against this already low level of constraint, the language employed in the Delhi Declaration stands out for its prolific use of ‘take note’ – a diplomatic way of avoiding clear endorsement – and of ‘voluntary and non-binding’ in reference to all practical initiatives. This linguistic evasion suggests that even the modest ambition of previous declarations may be eroding, replaced by a growing reluctance among states to commit to anything that could later constrain their freedom of action.

The sidelining of UN-led processes

With the approval of the Pact for the Future and the Global Digital Compact, the United Nations has centred international discussions on digital technologies around a people-centred and development-oriented future. This is no minor achievement, given the current climate of geopolitical and geoeconomic tension. The UN is carrying out work, with the involvement of multi-stakeholder expert groups, not only in the AI scientific panel but also on topics intimately related to the AI stack, such as the CSTD working group on data governance.

The Delhi Declaration makes no direct mention of any of these processes. While this omission may have facilitated endorsement by countries that oppose global AI governance – notably the United States – it weakens both UN processes and the evidence-based approach the organisation champions. In the words of UN Secretary-General António Guterres: “If we want AI to serve humanity, policy cannot be built on guesswork. It cannot be built on hype – or disinformation. We need facts we can trust – and share – across countries and across sectors. Less noise. More knowledge”.

In a scenario in which AI’s impact on global GDP, labour markets, and education remains insufficiently understood, it is irresponsible to race toward AI adoption without gathering evidence. Building understanding is a mid- to long-term effort that will not be achieved by the disparate and discontinuous political priorities adopted at each summit, but by consistent work – precisely the kind the UN and its specialised agencies are well-placed to coordinate. As the host of the 2027 AI Summit, and a hub for international organisations and multilateralism, Switzerland will have the chance to redress this imbalance.

Economic security and the spreading of weaponised interdependence to the Global South

The main outcome of the AI Summit was not the Delhi Declaration, but India’s decision to join the Pax Silica, a US-led effort to foster “AI and supply chain security, advancing economic security consensus among allies and trusted partners.” This development matters because it further consolidates the extension of great-power competition into the Global South, embedding AI governance within broader geopolitical and economic security alignments.

For the United States, the Pax Silica is a logical extension of a longstanding strategic positioning. Although the US lacks a stand-alone economic security strategy, its contours can be inferred from the National Security Strategy (NSS), from ‘economic security’ clauses in recent trade deals, and from bilateral understandings, such as the one established with Greece, which declares that “economic security is national security.”

Four of the twelve 2025 NSS objectives have an economic security dimension, including preserving leadership in frontier domains like AI. Successive NSS documents, including those from past Trump administrations, make explicit that China is the primary strategic competitor of the US. The Pax Silica is, therefore, a coalition designed to maintain that leadership by securing supply chains and coordinating with trusted partners.

For India, the calculation is pragmatic. Joining the Pax Silica consolidates its long-standing security partnership with the US – already institutionalised through arrangements like the QUAD. From the private sector’s perspective, the initiative offers something equally valuable: a US-government-blessed framework for investment in India’s huge, English-speaking IT workforce. In volatile times, this predictability is a significant asset.

The division of labour inside the alliance is clear. India will serve as a provider of skilled workforce and minerals, while the US grants access to advanced technology. India appears to be betting that supply chain alignment will translate into precisely this – investment and technology access – helping it bypass the limitations the US is introducing to control AI diffusion, such as tariffs and export controls.

Yet these bilateral advantages should not obscure the broader strategic outlook. The Pax Silica is a bloc aimed at countering Chinese influence over critical supply chains, particularly in sectors where China currently holds bottlenecks, such as rare earth mineral refinement. Chinese control is a common concern for Washington and Delhi, and the alliance aims to position its members to deter the use of these pressure points while strengthening their own capacity to exert leverage if needed. The securitisation of supply chains is one of the most visible signs of how economic interdependence has become weaponised, and the Pax Silica extends this dynamic into the domain of AI governance.

A demonstration that digital/AI sovereignty is whatever governments want it to be

Strengthening sovereignty in the digital domain has become almost commonplace. In Europe, there are initiatives (still incipient, but growing) to develop the European digital industry and replace American technology services with European providers. In the Global South, sovereignty takes on post-colonial connotations and is associated with a way of maintaining control, agency, and self-determination over AI systems.

India shows, however, that the expression has been filled with so many contradictory political projects that it is losing its meaning. The country is often cited as one of the champions of digital sovereignty, owing to the way it has built its digital public infrastructure. Yet ‘digital sovereignty’ is mentioned only timidly in the New Delhi declaration.

This is not surprising. On the one hand, the US instructed its diplomats to fight against digital sovereignty (and data sovereignty) initiatives in capitals around the world. On the other hand, India focused on attracting significant investment from large American technology companies during the summit. This movement shows that digital sovereignty projects remain contradictory in different parts of the world and disconnected from public debate. The way countries are defining what digital sovereignty entails in practice is closely related to the objectives of the political forces in power and depends on how sensitive these elites are to lobbying and international pressure.

If there are no strategies, goals, metrics, and participatory governance mechanisms, sovereignty in AI can be implemented in a way that is disconnected from individual, social, and economic rights. Clear strategies could also help countries identify concrete opportunities for joint action, beyond abstract discussions about digital sovereignty.

BRICS are sensitive to fragmenting trends  

The BRICS grouping offers a window into how middle powers navigate competing pressures in AI governance. India’s decision to join the Pax Silica illustrates this dynamic. India wants to compete with Beijing for primacy in strategic sectors such as AI and for regional clout, and it fears strategic encirclement by China’s growing infrastructure partnerships. It has banned Chinese apps like TikTok and restricted vendors such as Huawei.

In this context, deepening ties with the US through the Pax Silica is less a turn against BRICS than a strategic choice based on a mix of security and economic considerations. It is a decision that may pull India away from its BRICS partners, but it reflects the reality that dependence on Beijing is not an option.

Although the situation presents a challenge for the group, BRICS members are carefully navigating these tensions at the diplomatic level, showing that they place significant value on the grouping. China was almost absent from the AI Summit, in contrast to its considerable presence in Paris. This could be seen as a sign of disapproval, but also as an attempt to avoid putting its relationship with India under strain.

In parallel, India signed an agreement with Brazil to expand cooperation on mining, steel, and rare earth minerals. Brazil holds 26% of the world’s rare earth reserves, second only to China. The agreement strengthens the partnership between India and Brazil, while also positioning India as a potentially reliable channel for Brazilian-sourced minerals for Pax Silica members. This could contribute to Brazil’s ‘strategic autonomy’, helping the country supply goods to the US-led group while remaining at arm’s length from Washington, an approach Brazil also adopts toward Beijing by not joining China’s Belt and Road Initiative.

The real impact of India’s decision may be a segmentation of BRICS AI-related collaboration by layer of the AI stack. While it may become harder to collaborate on semiconductors and advanced AI models, for example, space could remain for continued cooperation on governance frameworks, ethical AI, and the development of AI applications.

Conclusion

The 2026 New Delhi AI Summit will likely be remembered less for its inclusive rhetoric than for what it revealed about the state of global AI governance. India’s effort to re-centre the discourse on development and inclusion offered a distinctive contribution to the evolving landscape of international AI discussions, and showcased the potential of middle powers to shape the agenda.

But beneath the narrative woven in Delhi, the summit exposed fragmentation rather than convergence. The Delhi Summit reflected the underlying realities of a multipolar, securitised, and initiative-saturated landscape. The question is whether future summits and UN processes can reverse this fragmentation.

The entropy trap: When creativity forces AI into piracy

True creativity is statistically improbable. Does this very nature of creativity make copyright infringement unavoidable for generative AI? The recent copyright decision GEMA vs OpenAI implies that it does.



On 11 November 2025, the Regional Court of Munich I (Landgericht München I) granted the German copyright collective organisation GEMA injunctive relief and damages for the unauthorised reproduction of copyright-protected song lyrics by OpenAI’s GPT-4 and GPT-4o models. The court skilfully dismantled OpenAI’s argument, which has been used in recent years to obscure technical facts and the legal reality. (Note: All translations of the German judgment are by the author.)


Tech attache briefing: WSIS+20 zero draft and AI capacity building


The event is part of a series of regular briefings the Geneva Internet Platform (GIP) is delivering for diplomats at permanent missions and delegations in Geneva following digital policy issues. It is an invitation-only event.

On 4 September, we resumed our briefings for Geneva diplomats with one focused on two important topics on the global digital governance agenda: reviewing 20 years of implementation of the outcomes of the World Summit on the Information Society (WSIS+20) and capacity building in artificial intelligence (AI).

We unpacked two documents:

***

For inquiries, contact us at geneva@diplomacy.edu.

AI Apprenticeship for IOs · From diplomats to AI builders

Equipping IOs with practical AI solutions

Discover how professionals in international Geneva – from the UN and WHO to CERN – are creating AI assistants to support global cooperation.

As artificial intelligence (AI) reshapes how we work, International Organisations face a clear challenge: how to ensure AI strengthens, rather than disrupts, their ability to deliver on complex global mandates.

The AI Apprenticeship for International Organisations (IOs), developed by Diplo, provides a practical and applied response to that challenge.

Unlike technical training or abstract policy discussions, this apprenticeship focuses on enabling professionals to design and build AI tools directly relevant to their daily work, whether that’s improving access to information, enhancing communication, or supporting better decision-making.

Throughout the program, participants

Below, we are featuring a selection of projects created by our participants in the Geneva AI Apprenticeship for IOs program. Each reflects how AI, when designed thoughtfully and applied responsibly, can complement human expertise and support the work of IOs, from improving access to institutional knowledge to tackling misinformation.

The AI Apprenticeship is part of Diplo’s broader effort to close the AI skills gap in international Geneva and beyond, ensuring that those shaping global policy also have the tools to navigate and shape the AI era.

New publication: AI Apprenticeship

Our new guide, AI Apprenticeship: Learning about AI by developing AI, details the successful framework and principles of our program.

The AI Apprenticeship publication explores how learning by building can equip professionals with the skills, ethics, and adaptability needed for the AI era. Inspired by the Swiss vocational model, it presents a human-centred approach to navigating digital transformation. Drawing on Diplo’s experience, it demonstrates how even non-technical professionals can gain confidence and competence in using AI.


AI Projects for IOs · From concept to real-world impact

What happens when you put AI development tools directly into the hands of diplomats, communication heads, and humanitarian experts? These projects are the answer. Discover the powerful assistants built not by programmers but by the professionals facing these global challenges every day.


Diplo Helper · AI for Accessible Emerging Technology

  • Created by: Luis Bobo Garcia
  • Organisation: United Nations
  • Project: Diplo Helper – AI for Accessible Emerging Technology Insights

Luis Bobo Garcia, an Associate Information Systems Officer, works at the intersection of diplomacy and emerging technologies. As AI, quantum computing, and blockchain reshape international discussions, translating these complex topics for non-technical audiences has become increasingly critical.

Through the AI Apprenticeship, Luis developed Diplo Helper, an AI assistant designed to make emerging technologies more accessible to diplomats and policymakers. The tool provides clear, structured insights on key technologies, supporting more informed engagement in international policy discussions.

This project highlights how targeted AI solutions can bridge the gap between technical complexity and the practical needs of global governance.


AI in Digital Communications at WHO

  • Created by: Sarah Ann Cumberland
  • Organisation: World Health Organisation (WHO)
  • Project: AI in Digital Communications at WHO – Ensuring Responsible AI Use in Global Health Messaging

Through the AI Apprenticeship, Sarah developed an AI assistant that provides practical guidance on using AI in WHO communications. The tool offers accessible information on official guidelines, ethical considerations, and best practices, supporting responsible innovation within the organisation.

The project reflects the growing need for governance frameworks that enable the adoption of AI while safeguarding public trust in global health communication.


Editron · AI-Powered Language Support for CERN

  • Created by: Daniela Antonio
  • Organisation: CERN
  • Project: Editron – AI-Powered Language Support for CERN Communications

As Digital Communications Lead at CERN, Daniela Antonio works where scientific precision extends beyond research to communication. Maintaining consistent language standards, especially for social media, is essential yet resource-intensive.

As part of the AI Apprenticeship, Daniela developed Editron, an AI-based language assistant tailored to CERN’s English Language Style Guide. The tool streamlines copy-editing processes, ensuring consistent, high-quality content across CERN’s communication channels.

This project demonstrates how AI can be customised to meet particular operational needs within international scientific organisations.


Geneva Loop Events Guide

  • Created by: Amina Osmanova
  • Organisation: United Nations International Computing Centre (UNICC)
  • Project: Geneva Loop Events Guide – AI for Navigating Geneva’s Multilateral Landscape

Amina Osmanova, an Associate Programme Management Officer at UNICC, operates within Geneva’s fast-paced multilateral environment, where staying informed on relevant events and developments is essential yet time-consuming.

To address this, Amina developed the Geneva Loop Events Guide. This AI assistant curates personalised updates on events, deadlines, and developments in areas such as AI, cybersecurity, and digital governance, including major conferences like AI for Good.

The project demonstrates how AI can improve situational awareness and connectivity within complex international ecosystems.


UNHCR Data Pal · Unlocking Institutional Knowledge

  • Created by: Matthew William Saltmarsh
  • Organisation: UN Refugee Agency (UNHCR)
  • Project: UNHCR Data Pal – Unlocking Institutional Knowledge for Humanitarian Action

Matthew William Saltmarsh, Head of News & Media at UNHCR, works with institutional data essential for evidence-based humanitarian response. Yet, accessing specific insights from decades of UNHCR reports remains a labour-intensive process.

Matthew developed UNHCR Data Pal, an AI assistant trained on UNHCR reports from 2003 to 2025. The tool enables rapid extraction of relevant data, identification of trends, and access to evidence-based insights.

The project demonstrates how AI can unlock institutional knowledge, streamline analysis, and support more agile, informed decision-making in the humanitarian sector.


Chatbot on Public Health Misinformation

  • Created by: Diya Banerjee
  • Organisation: World Health Organisation (WHO)
  • Project: Chatbot on Public Health Misinformation – AI to Support Evidence-Based Communication

Diya Banerjee, WHO’s Head of Social Media, addresses one of the most pressing challenges in global health: the spread of misinformation. False health claims on social media can undermine public health responses and erode trust.

To address this challenge, Diya developed the Chatbot on Public Health Misinformation, designed to monitor emerging myths and provide fact-checked information from authoritative sources such as the WHO and the CDC.

The project highlights how AI can be deployed proactively to strengthen science-based public health communication and counter misinformation on a large scale. The AI Assistant will be released shortly.


Explore the Projects. Share Your Insights.


The AI solutions developed through this showcase reflect the creativity and expertise of professionals across the international Geneva community. Each project addresses a real-world challenge, demonstrating how AI can support the unique missions of International Organisations.

We invite you to explore these projects and to share your feedback or questions.

Together, we can advance the responsible and practical use of AI in global governance.


From Information Warfare to Youth Rights: Building Resilience Through Digital Democracy

The Czech Permanent Mission organised an event on 20 June 2025, ‘From Information Warfare to Youth Rights: Building Resilience Through Digital Democracy’. The discussion sat at the nexus of information warfare and the rights of young people. It focused on disinformation campaigns and other forms of malign interference in relation to freedom of expression and access to information.

Tereza Horejsova of DiploFoundation speaking at an event organised by the Permanent Mission of the Czech Republic to the UN Office in Geneva

The event brought together young delegates from the Czech Republic, Finland, Lithuania and Poland.

Tereza shared thoughts on mis- and disinformation, especially in relation to AI, and built on the findings of a study Diplo conducted in 2024 on decoding disinformation.

Evening Reception on the Use of AI in Humanitarian Contexts 

The Permanent Missions of the United Kingdom and Switzerland hosted an evening reception and panel discussion on ‘What Next for the Use of AI in Humanitarian Contexts?’. With leading experts and policymakers, Diplo explored AI’s opportunities and challenges in humanitarian action, shared best practices, and discussed responsible innovation.
