Spoiler alert: It's more than asking ChatGPT to do our bidding.

Artificial intelligence (AI) is on everyone's lips these days. It seems to have found a way to wiggle into every conversation; everything is 'AI this, AI that'. All of a sudden, there is enthusiastic talk about how AI is, or could be, changing the diplomatic, environmental, health, humanitarian, human rights, intellectual property, labour, migration, and security landscapes, among many others. The alphabetical list of issue domains that AI can change is as long as a dictionary.

Probably in an attempt to come down from the hype, people have started asking a critical question: What could we actually do with AI in our day-to-day business? Yes, we could ask ChatGPT or Bard this question, and they would give passable answers. And, yes, we will probably soon be able to ask ChatGPT or Bard to simply implement those answers for us. So why bother figuring it out ourselves?

Having experimented with various AI systems and reflected on Diplo's own line of work, we have learned a crucial lesson: adopting AI tools is more than just asking ChatGPT to do our bidding. By incorporating these tools into various tasks, we actually enter an organisation-wide transition, and navigating that transition deliberately is how we keep up with the relentless pace of technological advancement.

We are particularly interested in whether this lesson is relevant to international organisations (IOs) in the Geneva ecosystem and potentially beyond. We therefore sought out UN entities that are undergoing, or beginning, a quiet yet perceptible AI transformation. We identified 15 Geneva-based UN actors that participated in the UN Activities on AI Report 2022 prepared by ITU, and we interviewed several practitioners to arrive at the following framework for considering how AI might bring change to and within an IO.

AI-induced organisational transition: 3 pathways of influence

We identified at least three pathways through which AI will induce an organisational transition.

Pathway 1: AI will influence the nature of various issue domains in which the UN engages.

This change is the most visible. All of the analysed UN entities listed projects that output some form of research: policy papers, seminars, or working groups on AI's influence in their respective domains of work. From the point of view of organisational transition, this is arguably the most straightforward pathway: it simply means that an organisation now has to invest resources in an additional research area, one that investigates AI's effects on its mandated missions, and potentially recruit or collaborate with external experts for this purpose.

Pathway 2: AI will change how an organisation does things internally.

Have you ever pondered whether it is appropriate to ask ChatGPT to write the first paragraph of a press release or to rephrase some convoluted sentences? We have. Despite the differences in mandates across IOs, staff share similar day-to-day tasks. The ubiquity of AI tools will soon make (or has already made) many IO staff wonder: Is DeepL's full-PDF translation acceptable as a first draft for policy briefs? How safe is it to write analytical pieces with generative AI, especially when handling the sensitive data of an IO's constituents? What is the copyright status of images generated by DALL-E 2 or Canva AI, and could one use them in PowerPoint slides? What are the security implications of granting Zoom AI access to meeting content, chat history, and email inboxes? These questions may seem out of place at first, but they will soon inundate an IO tech team's inbox.
To figure out the answers, one must understand how information is generated and flows within an organisation, how data is handled on a daily basis, how people produce meaningful knowledge from an ocean of noise, and how AI could best fit into that picture. Make no mistake: AI will have to fit into that picture. It is already our spam filter, grammar checker, search engine, auto-correct, smart scheduler, translation service, and so much more. This pathway of AI-induced transition is a slow burn, but without proper care (think internal guidelines on how to employ AI safely), it can flare into a fire (think prompt hacking). AI thus prompts us to give this pathway due consideration.

In 'Prediction Machines: The Simple Economics of Artificial Intelligence', Agrawal, Gans, and Goldfarb provide a useful anatomy of a task (Figure 1) that helps dispel the fear that AI will replace humans by making our decisions for us. The key is to unpack what goes into a decision: a decision comprises several elements, and AI excels at only one of them, prediction.

Take the example of the AI advisor that our colleague Sorina Teleanu used during the European Summer School 'Digital Europe'. The students were to learn about the who, what, where, when, and how of digital governance. To turn theory into practice, Sorina led a simulation in which students played the parts of states and other stakeholders negotiating a Global Digital Compact (GDC). The outcome, the final element of the task, is a well-discussed and thought-out GDC.

How did the students arrive at this outcome? Sorina and Diplo's AI lab used AI for a vital part of the task: prediction. They first supplied a ChatGPT-based model with a vast corpus of essential documents on digital governance as input. Then they used prompt engineering, a technique for tailoring an AI model's capability to one's needs by providing it with instruction-output pairs, to train the model to assume the role of a helpful advisor (see Figure 2; a minimal sketch of such a setup appears below). The trained AI advisor could then take students' questions on a wide range of issues: how to fine-tune their arguments in negotiations, their positions on specific topics, how to argue against other teams' positions, and so on. The advisor would first go through the corpus selected by Sorina to look for answers and then advise students accordingly.

What did the AI advisor do best in this case? It could go through thousands of pages of documents, find associations among words, phrases, and sentences, and provide its best prediction of what was relevant to students' queries (see Figure 3). It was then the students' job to judge whether the quality of the sentences was up to standard, whether the logic was sound, and whether the advised position aligned with their teams' interests. The AI advisor supplied the students with preliminary arguments and talking points, and it was instrumental in helping them grasp the gist of diplomatic documents and language. However, the students also ran into the advisor's limits: they found themselves manually correcting outdated information, and they pointed out that the AI's reliance on past statements from specific governments or actors might not accurately reflect those actors' current positions. In other words, students had to use their analytical skills to make judgements and take action, modifying and replacing some of the AI advisor's less accurate predictions before finally reaching the outcome.
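To make this concrete, here is a minimal sketch of how such a corpus-grounded advisor could be wired together, assuming the official OpenAI Python client. The model name, corpus folder, retrieval method, and prompts are our own illustrative assumptions, not Diplo's actual implementation:

```python
# A minimal sketch of a corpus-grounded "AI advisor" (illustrative only).
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Input: load a curated corpus of digital-governance documents
#    (hypothetical folder) and split it into paragraph-sized chunks.
corpus: list[str] = []
for doc in Path("gdc_corpus").glob("*.txt"):
    corpus += [p for p in doc.read_text().split("\n\n") if p.strip()]

def retrieve(question: str, k: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval: score each chunk by shared words.
    A production system would likely use embeddings; the principle is the same."""
    words = set(question.lower().split())
    return sorted(corpus, key=lambda c: -len(words & set(c.lower().split())))[:k]

# 2. Training-by-prompting: a system role plus one instruction-output pair
#    steer the model into the "helpful advisor" persona.
SYSTEM = ("You are an advisor helping students negotiate a Global Digital "
          "Compact. Ground every answer in the excerpts provided, and say "
          "so when the excerpts are silent on a question.")
EXAMPLE_IN = "How should a small island state argue for capacity building?"
EXAMPLE_OUT = ("Anchor the argument in the excerpts on digital divides: "
               "stress that commitments without funding mechanisms have "
               "historically under-delivered, and propose concrete targets.")

def advise(question: str) -> str:
    excerpts = "\n---\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": EXAMPLE_IN},        # instruction-
            {"role": "assistant", "content": EXAMPLE_OUT},  # output pair
            {"role": "user",
             "content": f"Excerpts:\n{excerpts}\n\nQuestion: {question}"},
        ],
    )
    # 3. Prediction: the answer is the model's best guess at what matters;
    #    judging it and acting on it remains the student's job.
    return response.choices[0].message.content
```

The division of labour in the code mirrors the anatomy of the task: the corpus is the input, the prompts are the training, the model supplies the prediction, and everything downstream (judgement, action) stays with the human.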
When we break a task down to this level of detail, we can see that AI is not here to replace humans in decision-making but to be used in the process leading up to the decision. In incorporating AI into our daily tasks, we must ask: if AI can better carry out the prediction element of a task, which other elements should we focus our energy on? In this particular simulation, Sorina and the AI lab concentrated on collecting high-quality, machine-readable documents (input) and creating useful, instructive task descriptions as prompts (training). Students sharpened their critical thinking (judgement) by spotting inconsistencies and errors, and improved their editing and negotiating skills (action), instead of spending hours poring over documents. AI is not here to replace humans in negotiations, but it changes what we focus on and how we do things in a given task (for the better, hopefully).

Pathway 3: AI will change the output of an organisation.

From software and applications based on machine learning (e.g. WIPO Translate, WIPO Speech-to-Text (S2T), UNOSAT Flood AI, ILO's Competency Profiling AI with SkillLab) to analyses enhanced by machine predictions (e.g. UNHCR's predictive analytics for contingency planning, UNHCR's text analytics to improve protection, OHCHR's Universal Human Rights Index (UHRI) powered by AI-automated text classification), the usual outputs of an IO, be they actions, products, or seminars, could be transformed or improved by AI. Our analysis shows that Geneva-based UN entities are undergoing such transitions, with 8 out of 15 entities having produced AI-based software or databases. However, most AI activities relate to the first pathway described above, which means that AI remains predominantly a topic of research and discussion rather than a part of the solution. The key lies in moving from exploring how AI will change a given field to using AI to change it.

External collaborations: Bringing in collaborators is a good starting point. After all, the UN system is no stranger to fostering a multistakeholder ecosystem in which the private sector, academia, and civil society organisations work with UN entities to develop solutions. ILO's Competency Profiling AI is a good example, with the technological components of the app developed by the Dutch startup SkillLab. OHCHR's UHRI, likewise, borrows the brain of former CERN scientist Niels Jørgen Kjær, who works at Specialisterne. External collaboration works well for entities with less in-house technical capacity. It requires active scouting of the tech startup ecosystem and of innovation powerhouses like universities. Switzerland is famous not only for its natural landscape but also for its tech innovation landscape (think ETH and EPFL, both top-ranking universities that churn out startup-leading engineers every year).

Internal capacity-building: The opposite approach is to build in-house capabilities: improving the infrastructure that enables AI incorporation via the tech team, recruiting officers with technology backgrounds, hosting information sessions to introduce the newest technological tools to staff, and so on. This approach has a constraint, though: without sufficient organisation-wide support and investment, it risks overloading existing tech teams with unrealistic expectations and heavy caseloads. However, with advancements in AI, upskilling current staff who lack a hard technical background has become possible.

Consider the potential unlocked by Codex, an AI model developed by OpenAI that powers GitHub Copilot. Trained on innumerable exchanges among computer scientists and programmers on GitHub, Codex translates instructions in natural language (i.e. the language we speak) into executable computer code. Its capability to take natural language as input and produce programming code as output means that IO staff no longer need deep knowledge of programming languages to write their own small applications; the sketch below illustrates the kind of exchange this enables.
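Here, a staff member writes only the natural-language comment; a Codex-powered assistant such as GitHub Copilot proposes the implementation. The file name, column name, and the completion itself are illustrative assumptions, not a recorded tool output:

```python
# Staff member's natural-language instruction (the only line they write):
# Read meeting_attendance.csv, count how many meetings each delegation
# attended, and print the ten most active delegations.

# A plausible assistant-generated completion:
import csv
from collections import Counter

with open("meeting_attendance.csv", newline="", encoding="utf-8") as f:
    # Tally one count per row, keyed on the (hypothetical) delegation column.
    counts = Counter(row["delegation"] for row in csv.DictReader(f))

for delegation, meetings in counts.most_common(10):
    print(f"{delegation}: {meetings} meetings")
```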
While this is still a long way from having in-house staff fully capable of automating their own tasks, it is one innovative avenue for internal capacity-building.

Concluding remarks

The question of AI transition is not one of if or when: the transition is already underway. The better question is how to take an active role in it so that it works for us instead of pushing us off the cliff. The three pathways of AI-induced organisational transition outlined here could be a starting point that guides our thinking on the big picture.
In the end, it is not that AI will replace humans; it is that those who don’t understand AI will be replaced.