When India hosted the AI Impact Summit in New Delhi from 16-20 February, it seized the moment to demonstrate its growing influence in the digital and AI field. But beyond India’s own ambitions, the gathering also served as a revealing showcase of how ‘middle powers’ are positioning themselves within the fast-changing dynamics of geopolitics. Within this broader category, ‘digital middle powers’ – countries with established or rapidly growing influence in digital technology, often serving as regional leaders or possessing niche global strengths – deserve particular scrutiny. India clearly fits this description, heightening expectations for the outcomes of the AI Summit in New Delhi.

As host, India sought to shape global AI discourse by re-centering it on development and inclusion. It did so by weaving policy into a narrative rooted in its own cultural and philosophical heritage, framed by the concept of MANAV (a Sanskrit word for humanity) and seven chakras (pillars) representing the goals of the Summit: social empowerment; trustworthiness; energy efficiency; use of AI in science; democratising AI resources; and use of AI for economic growth and social good.

Yet beneath this tapestry of inclusive rhetoric, the summit also laid bare the deepening fractures in global AI governance. Six observations illustrate this fragmentation.

The mushrooming of AI-related initiatives

Each previous host of the AI summit has chosen a distinct framing for the discussion – the UK focused on safety and existential threats, South Korea on the management of risk and innovation, and France on investment, AI governance, and the public good. This framing matters because it sets the tone for the global conversation. For India, this created an opportunity to mainstream issues often left at the margins: democratising access to AI, investing in human capital through worker re-skilling, and pursuing social empowerment. Yet these topics failed to translate into tangible high-level consensus.
Instead, they were largely relegated to working group discussions, generating a flurry of voluntary initiatives. Whether any of these initiatives bear fruit will depend entirely on the post-summit interest and goodwill of an amorphous community of stakeholders.

The lack of continuity between initiatives advertised at each summit is striking. In South Korea, a network to accelerate the advancement of the science of AI safety was mainstreamed in an annex to the Seoul Declaration. At the Paris Summit, officials announced ‘Public Interest AI’, a global platform intended to serve as an incubator for artificial intelligence serving the public good, and a network of observatories to monitor AI’s effects on job markets. One year on, the absence of meaningful attention to these projects is telling. It does not bode well for the many new charters, coalitions, and platforms launched in Delhi, including a Charter for the Democratic Diffusion of AI, the Global AI Impact Commons, the Trusted AI Commons, and a Network of AI for Science Institutions. By multiplying initiatives without mechanisms for follow-through, these summits risk spreading already scarce resources – attention, human capacity, political will – so thin that nothing takes root.

The adoption of increasingly weak commitments

AI Summit declarations are not legally binding. Against this already low level of constraint, the language of the Delhi Declaration stands out for its prolific use of ‘take note’ – a diplomatic way of avoiding clear endorsement – and of ‘voluntary and non-binding’ in reference to all practical initiatives. This linguistic evasion suggests that even the modest ambition of previous declarations may be eroding, replaced by a growing reluctance among states to commit to anything that could later constrain their freedom of action.

The sidelining of UN-led processes

With the approval of the Pact for the Future and the Global Digital Compact, the United Nations has centered international discussions on digital technologies around a people-centered and development-oriented future.
This is no minor achievement, given the current climate of geopolitical and geoeconomic tension. The UN is carrying out work, with the involvement of multi-stakeholder expert groups, not only in the AI scientific panel but also on topics intimately related to the AI stack, such as the CSTD working group on data governance. The Delhi Declaration makes no direct mention of any of these processes. While this omission may have facilitated endorsement by countries that oppose global AI governance – notably the United States – it weakens both the UN processes and the evidence-based approach the organisation champions. In the words of UN Secretary-General António Guterres: “If we want AI to serve humanity, policy cannot be built on guesswork. It cannot be built on hype – or disinformation. We need facts we can trust – and share – across countries and across sectors. Less noise. More knowledge.”

In a scenario in which AI’s impact on global GDP, labour markets, and education remains insufficiently understood, it is irresponsible to race toward AI adoption without gathering evidence. Building understanding is a mid- to long-term effort that will not be achieved through the disparate and discontinuous political priorities adopted at each summit, but through consistent work – precisely the kind the UN and its specialised agencies are well placed to coordinate. As host of the 2027 AI Summit, and a hub for international organisations and multilateralism, Switzerland will have the chance to redress this imbalance.

Economic security and the spreading of weaponised interdependence to the Global South

The main outcome of the AI Summit was not the Delhi Declaration, but India’s decision to join the Pax Silica, a US-led effort to foster “AI and supply chain security, advancing economic security consensus among allies and trusted partners.” This development matters because it further consolidates the extension of great-power competition into the Global South, embedding AI governance within broader geopolitical and economic security alignments.
For the United States, the Pax Silica is a logical extension of a longstanding strategic positioning. Although the US lacks a stand-alone economic security strategy, its contours can be inferred from the National Security Strategy (NSS), from ‘economic security’ clauses in recent trade deals, and from bilateral understandings, such as the one established with Greece, which declares that “economic security is national security.” Four of the twelve 2025 NSS objectives have an economic security dimension, including preserving leadership in frontier domains like AI. Successive NSS documents, including those from past Trump administrations, make explicit that China is the primary strategic competitor of the US. The Pax Silica is, therefore, a coalition designed to maintain that leadership by securing supply chains and coordinating with trusted partners.

For India, the calculation is pragmatic. Joining the Pax Silica consolidates its long-standing security partnership with the US – already institutionalised through arrangements like the Quad. From the private sector’s perspective, the initiative offers something equally valuable: a US-government-blessed framework for investment in India’s huge, English-speaking IT workforce. In volatile times, this predictability is a significant asset.

The division of labour inside the alliance is clear. India will serve as a provider of a skilled workforce and of minerals, while the US grants access to advanced technology. India appears to be betting that supply chain alignment will translate into precisely this – investment and technology access – helping it bypass the limitations the US is introducing to control AI diffusion, such as tariffs and export controls.

Yet these bilateral advantages should not obscure the broader strategic outlook. The Pax Silica is a bloc aimed at countering Chinese influence over critical supply chains, particularly in sectors where China currently holds bottlenecks, such as rare earth mineral refinement.
Chinese control is a common concern for Washington and Delhi, and the alliance aims to position its members to deter the use of these pressure points while strengthening their own capacity to exert leverage if needed. The securitisation of supply chains is one of the most visible signs of how economic interdependence has become weaponised, and the Pax Silica extends this dynamic into the domain of AI governance.

A display that digital/AI sovereignty is whatever governments want it to be

Strengthening sovereignty in the digital domain has become almost commonplace. In Europe, there are initiatives (still incipient, but growing) to develop the European digital industry and replace American technology services with European providers. In the Global South, sovereignty takes on post-colonial connotations and is associated with maintaining control, agency, and self-determination over AI systems.

India shows, however, that the expression has been filled with so many contradictory political projects that it is losing its meaning. The country is often cited as one of the champions of digital sovereignty, owing to the way it has built its digital public infrastructure. Yet ‘digital sovereignty’ is only timidly mentioned in the New Delhi Declaration. This is not surprising. On the one hand, the US has instructed its diplomats to fight digital sovereignty (and data sovereignty) initiatives in capitals around the world. On the other hand, India focused on attracting significant investment from large American technology companies during the summit.

This shows that digital sovereignty projects remain contradictory across different parts of the world and disconnected from public debate. How countries define what digital sovereignty entails in practice is closely tied to the objectives of the political forces in power, and depends on how sensitive these elites are to lobbying and international pressure.
If there are no strategies, goals, metrics, and participatory governance mechanisms, sovereignty in AI can be implemented in a way that is disconnected from individual, social, and economic rights. Clear strategies could also help countries identify concrete opportunities for joint action, beyond abstract discussions about digital sovereignty.

BRICS are sensitive to fragmenting trends

The BRICS grouping offers a window into how middle powers navigate competing pressures in AI governance. India’s decision to join the Pax Silica illustrates this dynamic. India wants to compete with Beijing for primacy in strategic sectors such as AI and for regional clout, and it fears strategic encirclement by China’s growing infrastructure partnerships. It has banned Chinese apps like TikTok and restricted vendors such as Huawei. In this context, deepening ties with the US through the Pax Silica is a strategic choice based on a mix of security and economic considerations. It is a decision that may pull India away from its BRICS partners, but it reflects the reality that dependence on Beijing is not an option.

Although the situation presents a challenge for the group, BRICS members are carefully navigating these tensions at the diplomatic level, showing that they place significant value on the alliance. China was almost absent from the AI Summit, in contrast to its considerable presence in Paris. This could be read as a sign of disapproval, but also as an attempt to avoid putting its relationship with India under strain.

In parallel, India signed an agreement with Brazil to expand cooperation on mining, steel, and rare earth minerals. Brazil holds 26% of the world’s rare earth reserves, second only to China. The agreement strengthens the partnership between India and Brazil, while also positioning India as a potentially reliable channel for Brazilian-sourced minerals for Pax Silica members.
This could contribute to Brazil’s ‘strategic autonomy’, helping the country supply goods to the US-led group while remaining at arm’s length from Washington – an approach Brazil also adopts toward Beijing by not joining China’s Belt and Road Initiative. The real impact of India’s decision may be a segmentation of BRICS AI-related collaboration by layer of the AI stack. While it may become harder to collaborate on semiconductors and advanced AI models, for example, space could remain for continued cooperation on governance frameworks, ethical AI, and the development of AI applications.

Conclusion

The 2026 New Delhi AI Summit will likely be remembered less for its inclusive rhetoric than for what it revealed about the state of global AI governance. India’s effort to re-centre the discourse on development and inclusion offered a distinctive contribution to the evolving landscape of international AI discussions, and showcased the potential of middle powers to shape the agenda. But beneath the narrative woven in Delhi, the summit exposed fragmentation rather than convergence. The Delhi Summit reflected the underlying realities of a multipolar, securitised, and initiative-saturated landscape. The question is whether future summits and UN processes can reverse this fragmentation.