Why AI procurement is the new frontier of diplomacy

Published on 6 April 2026

Artificial intelligence is frequently addressed in treaties, national strategies, and United Nations processes. Yet many of the most consequential decisions about how AI affects individual rights are made not in these forums but in procurement contracts and bilateral agreements between governments and technology firms. In parallel, as we argued in a recent article, open‑weight models offer smaller countries a rare chance to shape AI on their own terms instead of only renting capabilities and rules from a few powerful providers. While the potential for greater AI sovereignty is genuine, there remains a significant risk that such sovereignty could be undermined in the contractual details governing procurement decisions.

For example, when a ministry of the interior in a smaller country procures an AI system for border management or welfare fraud detection, public debate often centres on the appropriateness of such systems and overarching safeguards. However, the most significant decisions are frequently embedded in contractual clauses, such as data ownership, model inspection rights, avenues for appealing decisions, jurisdiction of legal disputes, and the ease of switching providers. These provisions constitute the operational mechanisms of AI governance rather than minor technicalities.

Governance by procurement: rights in the fine print

Recent research on ‘governance by procurement’ from the Harvard Kennedy School’s Carr-Ryan Center contends that fundamental questions regarding AI and human rights are resolved through bilateral negotiations and standard contract language, rather than through public law or multilateral agreements. When governments enter into long‑term agreements for predictive policing tools, biometric systems, or AI‑assisted case management, these contracts often contain the sole binding provisions governing personal data handling, decision logging, and independent review. In effect, such contracts function as a mini‑constitution for AI use within that context.

Several common elements in AI procurement have direct governance effects: who owns the data the system ingests and produces, and on what terms it can be reused; whether public authorities and independent experts can inspect and audit the model; whether affected individuals have workable avenues to contest automated decisions; which courts and laws govern disputes between the government and the vendor; and how easily the government can switch providers or bring the system in‑house.

Although these contractual choices are often negotiable, they seldom receive the same level of scrutiny as national AI legislation or international guidelines. Procurement teams frequently operate under significant pressure, ministries may lack specialized expertise in AI contracting, and smaller countries often possess limited bargaining power when negotiating with major suppliers. Consequently, vendors may export not only technology but also their own assumptions regarding data use, transparency, and standards for human oversight.

For instance, in the United States, the General Services Administration has proposed a standard ‘Basic Safeguarding of Artificial Intelligence Systems’ clause that supersedes commercial terms, asserts government ownership over data and custom developments, and imposes disclosure obligations for AI tools used in contract performance. Although this represents a particularly assertive approach by a large purchaser, it demonstrates the significant governance authority embedded within procurement language.

The small‑country dilemma: owning the model, renting the rules

This situation contrasts with the optimism surrounding open‑weight models. In principle, open‑weight models reduce barriers for smaller or resource‑constrained countries to develop and adapt AI systems that align with their legal frameworks, languages, and social priorities. Rather than relying solely on closed, foreign platforms, these countries can train or fine‑tune models locally, collaborate with regional universities or companies, and experiment with configurations suited to their specific contexts. For governments historically positioned as ‘rule takers’ in the digital domain, this represents a notable opportunity.

In practice, however, many governments opt for turnkey systems due to limited capacity. Developing and maintaining an open‑weight-based system requires not only access to code and model weights, but also dedicated teams for configuration, monitoring, security, and ongoing updates. Many public administrations, particularly in smaller countries, currently lack these resources. International tenders and development assistance often include preferred vendors, reference solutions, and financing terms that make comprehensive external solutions appear safer and more expedient. Even when open‑weight models are utilized, the associated services, hosting, and monitoring may remain under the control of foreign partners.

A recent example from North Africa demonstrates these dynamics. Morocco entered a strategic partnership with Mistral AI, which was promoted as an initiative to develop ‘AI made in Morocco,’ adapt open‑weight models to local languages, and reduce reliance on closed foreign platforms.

However, most hosting and computational resources remain located in European data centres, and critical infrastructure is outside Moroccan jurisdiction. Without significant investment in domestic infrastructure and contracting expertise, Morocco risks remaining a dependent client rather than achieving full independence, even when adopting open‑weight technology.

This situation creates a dilemma. Although a country may appear to use an open‑weight foundation model that is theoretically adaptable or portable, contractual arrangements for integration, hosting, and support can effectively bind it to external governance frameworks and technical roadmaps. If agreements grant vendors broad discretion over updates, restrict portability of logs and training data, or lack robust audit rights and exit options, the country is effectively renting governance rules rather than exercising true sovereignty, even when model weights are nominally open.

This tension does not mean the earlier optimism about open weights was misplaced. It means that the technical possibility and the political reality diverge unless procurement is treated as part of AI governance, not as a purely administrative chore.

What sovereign AI procurement could look like

If smaller countries want to realise the potential of open‑weight models and reduce one‑sided dependence, they can start by reshaping how they buy AI. That does not require grand declarations. It requires concrete requirements and protections written into contracts, ideally backed by shared templates and regional cooperation.

From a technical perspective, procurement strategies aimed at sovereignty should require robust inspection and audit rights for public authorities and independent experts, including access to documentation on model training, subgroup performance, and significant changes over time. Governments should also prioritise architectures that facilitate portability and exit, for example by mandating that models and key data structures are exportable in interoperable formats without excessive fees, and that critical components do not depend on proprietary hardware or obscure interfaces. Where possible, they should pursue arrangements that enable local hosting or mirrored control over infrastructure, with the option to transition more elements to domestic or regional providers as capacity increases.
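
A practical way to make these expectations concrete is to express them as an explicit checklist that tender evaluators apply to every offer. The Python sketch below is a minimal illustration of that idea; the Offer class, its field names, and the requirement list are assumptions made for illustration, not drawn from any existing procurement standard.

```python
from dataclasses import dataclass

# Hypothetical sovereignty checklist for evaluating AI tender offers.
# Field names and requirements are illustrative, not an official standard.

@dataclass
class Offer:
    """A vendor offer scored against sovereignty-relevant contract terms."""
    vendor: str
    grants_model_audit_rights: bool         # inspection by authorities and experts
    documents_training_and_subgroups: bool  # training data and subgroup performance
    exports_in_open_formats: bool           # models/logs portable without extra fees
    avoids_proprietary_lock_in: bool        # no dependence on proprietary hardware
    supports_local_hosting: bool            # option to move hosting onshore later

    def sovereignty_gaps(self) -> list[str]:
        """Names of unmet requirements, for flagging during tender evaluation."""
        checks = {
            "model audit rights": self.grants_model_audit_rights,
            "training and subgroup documentation": self.documents_training_and_subgroups,
            "open export formats": self.exports_in_open_formats,
            "no proprietary lock-in": self.avoids_proprietary_lock_in,
            "local hosting option": self.supports_local_hosting,
        }
        return [name for name, met in checks.items() if not met]

offer = Offer(
    vendor="ExampleVendor",
    grants_model_audit_rights=True,
    documents_training_and_subgroups=False,
    exports_in_open_formats=True,
    avoids_proprietary_lock_in=False,
    supports_local_hosting=True,
)
print(f"{offer.vendor} gaps: {offer.sovereignty_gaps()}")
```

The point of such a checklist is less the code than the discipline: every sovereignty requirement becomes a yes‑or‑no question answered before signature rather than discovered after deployment.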

Legally, contracts should stipulate that automated decisions with significant impacts on individuals must be explainable in terms comprehensible to courts and affected persons, and that human appeal mechanisms are guaranteed both in law and in practice. Liability clauses should clarify that governments retain responsibility for the lawful use of AI within their jurisdiction, while still permitting recourse if vendors misrepresent system capabilities. Agreements should also restrict secondary uses of data and require prompt notification in the event of vulnerabilities, data breaches, or serious performance issues.
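
One way to give explainability and appeal clauses operational teeth is to require that every significant automated decision leaves behind a structured, human‑readable record. The sketch below shows what such a record might contain; the schema and all field names are assumptions for illustration, not taken from any existing contract or regulation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch of a decision record a contract could oblige vendors to
# log for every automated decision with significant impact on an individual.
# The schema and all field names are assumptions, not an established standard.

@dataclass(frozen=True)
class DecisionRecord:
    case_id: str                  # links the decision to an administrative case
    model_version: str            # exact system version that produced the outcome
    decided_at: datetime          # timestamp, needed for later audit and appeal
    outcome: str                  # the automated outcome itself
    key_factors: tuple[str, ...]  # human-readable factors behind the outcome
    appeal_contact: str           # where the affected person can contest it

    def plain_language_summary(self) -> str:
        """A summary comprehensible to courts and affected persons."""
        factors = "; ".join(self.key_factors)
        return (
            f"Case {self.case_id}: outcome '{self.outcome}' produced by "
            f"{self.model_version} on {self.decided_at:%d %B %Y}. "
            f"Main factors: {factors}. Appeal via {self.appeal_contact}."
        )

record = DecisionRecord(
    case_id="2026-000123",
    model_version="screening-model-1.4.2",
    decided_at=datetime(2026, 4, 6, tzinfo=timezone.utc),
    outcome="flagged for manual review",
    key_factors=("income declaration mismatch", "duplicate address on file"),
    appeal_contact="appeals@ministry.example",
)
print(record.plain_language_summary())
```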

Diplomatic tools matter as well. Small countries are at a disadvantage when each must negotiate bespoke contracts with powerful providers. Regional organisations and like‑minded coalitions can develop model clauses for AI procurement, share technical and legal expertise, and coordinate expectations toward major vendors. International institutions that support digital transitions could incorporate these elements into guidance and funding conditions, so that capacity building and financing strengthen sovereign control rather than entrench dependency. Over time, such practices could become a de facto standard for responsible AI procurement, especially if they align with emerging international principles on transparency, accountability, and human rights in AI.

For example, the PIANOo AI procurement conditions developed in the Netherlands, now recommended by the European Commission as a reference for public buyers, already translate abstract principles into clauses on documentation, data‑quality safeguards, logging and audit rights. They show that it is possible to make these expectations very concrete in tenders and contracts.

In this context, open‑weight models and improved procurement design are mutually reinforcing. Open weights supply the foundational resources for constructing systems independent of any single foreign platform. Carefully structured contracts ensure that associated services, data flows, and decision-making procedures do not inadvertently reintroduce the dependencies that open weights are intended to eliminate.

Why diplomats should be in the room

All of this suggests that AI procurement is no longer a narrow technical matter confined to IT departments and legal offices. It is a site where surveillance powers, information flows, and structural dependencies are set for years, often with little public scrutiny. For foreign ministries and diplomatic communities, this should be a signal that some of the most consequential negotiations on AI governance are taking place in venues they do not always monitor.

Diplomats can add value in several ways: by negotiating shared model clauses with like‑minded partners, pooling scarce technical and legal expertise across borders, coordinating expectations toward major vendors, and connecting procurement practice to emerging international principles on transparency, accountability, and human rights in AI.

Decisions made in AI procurement will determine whose values and interests are embedded within the technical systems that shape daily life, as well as the effectiveness of public services. For smaller countries, combining open‑weight models with assertive, coordinated procurement offers an opportunity to expand their strategic options amid increasing digital dependence. Early contractual and architectural choices will set precedents for years, warranting attention equal to that given to high‑level strategies and declarations. The more diplomats, policymakers, and procurement officials regard AI contracts as instruments of governance rather than mere technicalities, the greater the potential for smaller countries to achieve meaningful digital sovereignty within an unequal global system.

Author: Slobodan Kovrlija

