On 27 October 2025, DiploFoundation, together with the Permanent Delegation of the European Union and the Permanent Missions of Canada and Mexico to the United Nations in Geneva, hosted a diplomatic dialogue on ‘Safe, secure, and trustworthy AI: What is it and how to get there?’.
The dialogue explored issues related to the interplay between AI, human agency, and human rights; ways to achieve safe, secure, and trustworthy AI; and the role of International Geneva in advancing the governance of such AI systems.
About
Artificial intelligence continues to evolve at a fast pace, reshaping societies, economies, and global governance, and bringing with it new concerns about the interplay between technology and humanity, including with regard to the exercise of human agency and the enjoyment of human rights.
In addition to various national and regional efforts to regulate AI systems (such as the adoption of the EU AI Act and the Council of Europe Convention on AI), the governance of AI is now also a priority on the international agenda. Last year, UN member states engaged in landmark negotiations, including on the Global Digital Compact and the first UN General Assembly resolutions on AI, while the issue of AI and human agency came to the forefront of the work of multiple UN specialised agencies and other international organisations.
In International Geneva, many discussions centred on the (potential) impact of AI on human rights, values, and principles. For instance, the Human Rights Council has increased its work on AI-related topics, notably through the biennial resolution on new and emerging digital technologies (the most recent adopted in July 2025); the Office of the High Commissioner for Human Rights has focused on exploring the links between AI, human rights, and human agency; and standard-setting bodies such as the International Telecommunication Union and the International Organization for Standardization have looked at the role of standards in advancing trustworthy AI. These and similar efforts underscore a need to advance a human-centred approach to technologies, and a commitment to fostering AI that is safe, secure, and trustworthy – principles now embedded in the global governance discourse.
While global agreements on core principles are welcome, they need to turn into concrete action. So what does it mean to ensure AI is safe, secure, and trustworthy? What challenges and risks must we address to uphold these principles? If AI is to truly serve humanity, how can we ensure that the design, development, deployment, and use of AI are compatible with the notion of human agency and aligned with internationally agreed human rights? What concrete steps can be taken at both technical and policy levels to achieve these goals? And how can global governance mechanisms and existing processes and organisations – particularly those anchored in International Geneva – support this endeavour?
Programme
10:00 – 10:10 | Welcoming remarks
10:10 – 11:00 | Exploring the interplay between AI, human agency, and human rights
- Moderated dialogue on the impact of AI on what makes us human, with a focus on (potential) harms and risks associated with AI systems, including generative AI.
11:00 – 11:45 | Group exercise: What do we talk about when we talk about safe, secure, and trustworthy AI?
- Why do we want AI to be safe, secure, and trustworthy? Group discussions starting from scenarios in which AI applications challenge concepts and principles such as safety and security, human agency, inclusion and non-discrimination, freedom of expression, etc. (30 min.)
- Debrief: Groups share and discuss their reflections. (15 min.)
11:45 – 12:00 | Break
12:00 – 13:00 | Who needs to do what to ensure safe, secure, and trustworthy AI? And what role for International Geneva?
A moderated dialogue to explore:
- Concrete actions and solutions that different actors (governments, international organisations, private companies, civil society, etc.) could or should take to ensure that AI technologies are developed, deployed, and used in a safe, secure, and trustworthy manner.
- The role that International Geneva could play in advancing such solutions.
***
This is an invitation-only event.
It is part of the Diplomatic dialogues on AI series, launched in January 2025 with the goal of providing a space for Geneva-based diplomats to engage in open and informal debates on AI governance issues.
Key insights
The dialogue brought together diplomats from 23 missions and delegations, along with representatives of several UN entities, to get to the core of the structural and philosophical challenges of working towards trustworthy AI.
The discussion was anchored in human agency as a lens for exploring the interplay between AI and human rights. It moved from the personal, cognitive level (understanding how AI works) to the ability to make operational decisions about the outputs of AI systems, and on to the collective shaping of AI. A prerequisite for aligning AI with human agency, therefore, is a basic understanding of both humans and AI within the context of each AI application.
Grounded in practical scenario exercises, participants explored the central tension between AI-driven optimisation and human agency, guided by the key question of where to draw the line. Their analysis yielded an insightful answer: we do not need to reinvent the wheel when it comes to applying AI and, most importantly, to regulating it.
For this purpose, a granular perspective is needed to identify the respective roles of humans and AI and the responsibilities of each. Also needed is a commitment to actually exercise this granular perspective through practical and legal due diligence that goes beyond mere principles. Such an approach would help counter technical elitism and the shifting of responsibility in the name of potential long-term technological progress.
In essence, the event reflected the complexity of a topic that spans the extremes of utopian and dystopian outlooks on the future, with one crucial difference: participants were highly engaged in moving beyond the narrative that technology is outpacing governance, which enabled a focused and constructive dialogue.
Resources
Below are some resources and materials mentioned during the event or shared by participants.
Relevant international mechanisms
- UNGA Resolution A/RES/78/265: Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development
- UNGA Resolution A/RES/78/311: Enhancing international cooperation on capacity-building of artificial intelligence
- Global Digital Compact (A/RES/79/1)
- UNGA Resolution A/RES/79/325: Terms of Reference and Modalities for the Establishment and Functioning of the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on Artificial Intelligence Governance
- HRC Resolution on new and emerging digital technologies and human rights (A/HRC/RES/59/11)
- Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law
- UNESCO Recommendation on the Ethics of Artificial Intelligence
OHCHR briefs and reports
- Brief: Key asks for state regulation of AI
- Online content governance
- Civic Space & Tech Brief: Hacking and spyware
- Civic Space & Tech Brief: Encryption
- Civic Space and Tech Brief: Internet shutdowns and human rights
- Civic Space and Tech Brief: Role of Standard Setting
- United Nations (2024). Mapping Report: Human Rights and New and Emerging Digital Technologies, Report of the Office of the United Nations High Commissioner for Human Rights
- B-Tech: Advancing Responsible Development and Deployment of Generative AI. The value proposition of the UN Guiding Principles on Business and Human Rights
