AI optimism in geopolitically pessimistic Davos

In the serene backdrop of the Swiss Alps, the World Economic Forum (WEF) in Davos stands as a barometer for the year’s technological zeitgeist. Since 1996, when John Perry Barlow’s proclamation of cyberspace’s independence echoed through these halls, the WEF has consistently heralded the arrival of groundbreaking tech paradigms – from the rise of social media to the advent of blockchain.

After an exceptionally tech-shy Summit in 2023, this year, artificial intelligence (AI) restored tech optimism in Davos. The blueness of the tech sky stands out even more against the gloomy and cloudy global geopolitics. The 24 AI-centric sessions out of the 235 at WEF featured 135 speakers and a flurry of 1101 arguments, with the majority being positive (600), some neutral (319), and a few negative (182).

Is this optimistic dose of AI a panacea for global problems, the maturation of AI discourse, or yet another tech sleepwalk?  Here are some reflection points aimed at answering this question.

Optimistic dosage: The WEF AI debates have been rated a solid ‘8’ on DiploAI’s optimism scale. Conversations have revolved around AI’s potential to elevate productivity, combat diseases, and solve environmental crises. The economic forecasts present AI as a trillion-dollar boon, a narrative contrast to the AI doomsday scenarios from the spring of 2023.

From extinction to existing risks: Sam Altman and the signatories to several letters in 2023 on extinction AI risks have recalibrated their language at WEF. Risks of extinction or existential threats are no longer ‘in’. AI risks were qualified as problems that humanity will overcome, just as we have done with technologies in the past. It is unclear whether the threat of AI extinction has vanished, whether they have discovered something new, or whether corporate interests have taken precedence over other AI-related concerns.

AI gurus owe us an answer to these questions, as their predictions that AI would destroy humanity were made with great conviction last year. If we do not get an explanation, the next time they ‘scream’, no one will take them seriously. Furthermore, such an approach would erode trust in the science that underpins AI.

The majority of the discussion on AI risks focused on misinformation, job losses, and the reform of education.

IPR and Content for AI: The New York Times’ court case against OpenAI over the use of copyrighted material to train AI models was frequently mentioned in Davos. Traceability and transparency in AI development will be critical for a sustainable and functional AI economy. 

AI Governance: The narrative of last year, that governance should focus on AI capabilities, has given way to a focus on AI apps and uses. It makes AI less unique and more governable, just like any other technology. This approach, used by the EU AI Act, is also gaining popularity in the United States. The WEF discussions revealed more similarities than differences in the regulation of AI in China, the United States, and Europe.

AI governance pyramid

Open source AI: The stance on open-source AI as an unmanageable risk softened at WEF. Yann LeCun of Meta argued that open-source AI is beneficial not only for scientific progress but also for controlling the monopolies of large AI tech companies and incorporating diverse cultural and societal inputs into AI development. Open-source AI will gain traction in 2024, posing a significant challenge to proprietary models such as OpenAI’s.

AI and Development: According to the UN Technology Envoy, Amandeep Singh Gill,  AI will not save the SDGs if current trends continue. This candid assessment was more of an outlier in WEF discussions. For example, there was little discussion of the AI-driven widening of digital divides and the increasing concentration of economic and knowledge power in the hands of a few companies.

Sam Altman’s admission—’no one knows what comes next’ with AI—encapsulates the dual nature of this AI juncture: it is both alarming and reassuring. The uncertainty voiced by those at the forefront of AI development is concerning, hinting at a possible ‘Frankenstein moment’. At the same time, it is encouraging that Sam Altman and others speak frankly about their knowledge of the impact of AI without the fear-mongering of the last year.

The current confusion around AI potentials and risks underscores the need for a mature, nuanced conversation on AI’s future. While the unpredictability of ‘unknown’ AI risks persists, we must navigate the ‘known’ challenges with agile, transparent, and inclusive regulatory frameworks. The Davos debates have made strides in this direction, aiming to steer us away from a dystopian AI future through informed, balanced dialogue rather than fear.

Four seasons of AI:  From excitement to clarity in the first year of ChatGPT

Winter of Excitement | Spring of Metaphors | Summer of Reflection | Autumn of Clarity

ChatGPT was launched by OpenAI on the last day of November in 2022. It triggered a lot of excitement. We were immersed in the magic of a new tool as AI was writing poems and drawing images for us. Over the last 12 months, the winter of AI excitement was followed by a spring of metaphors, a summer of reflections, and the current autumn of clarity.

On the first anniversary of ChatGPT, it is time to step back, reflect, and see what is ahead of us.

Winter of Excitement 

ChatGPT was the most impressive success in the history of technology in terms of user adoption. In only 5 days, it acquired 1 million users, compared to, for example, Instagram, which needed 75 days to reach 1 million users. In only two months, ChatGPT reached an estimated 100 million users. 

The launch of ChatGPT last year was the result of countless developments in AI dating all the way back to 1956! These developments accelerated in the last 10 years with probabilistic AI, big data, and dramatically increased computational power. Neural networks, machine learning (ML), and large language models (LLMs) set the stage for AI’s latest phase, which brought tools like Siri and Alexa and, most recently, generative pre-trained transformers, better known as GPTs, which are behind ChatGPT and other recent tools.

ChatGPT started mimicking human intelligence by drafting our texts for us, answering questions, and creating images. 

Spring of Metaphors

The powerful features of ChatGPT triggered a wave of metaphors in the spring of this year. We humans, whenever we encounter something new, use metaphors and analogies to compare its novelty to something we already know. 

AI is mostly anthropomorphised and typically described as a human brain that ‘thinks’ and ‘learns’. ‘Pandora’s box’ and ‘black box’ are terms used to describe the complexity of neural networks. As spring advanced, more fear-based metaphors took over, centred around doomsday, Frankenstein, and Armageddon.

As discussions on governing AI gained momentum, analogies were drawn to climate change, nuclear weapons, and scientific cooperation. All of these analogies highlight similarities while ignoring differences. 

Summer of Reflection 

Summer was relatively quiet, and it was a time to reflect on AI. Personally, I dusted off my old philosophy and history books in search of old wisdom to answer current AI challenges, which go far beyond simple technological solutions.

Under the series ‘Recycling Ideas’ I dove back into ancient philosophy, religious traditions, and different cultural contexts from Ancient Greece to Confucius, India, and the Ubuntu concept of Africa, among others.

Autumn of Clarity

Clarity pushed out hype as AI increasingly made its way onto the agendas of national parliaments and international organisations. Precise legal and policy formulations have replaced the metaphorical descriptions of AI. In numerous policy documents from various groupings—G7, G20, G77, G193, UN—the usual balance between opportunities and threats has shifted more towards risks. 

Some processes, like the UK AI Safety Summit, focused on the long-term existential risks of AI. Others gave more ‘weight’ to the immediate risks of AI (re)shaping our work, education, and public communication. As inspiration for governance, many proposals mentioned the International Atomic Energy Agency (IAEA), CERN, and the Intergovernmental Panel on Climate Change (IPCC).

A year has passed: What’s next?

AI will continue to become mainstream in our social fabric, from individual choices and family dynamics to jobs and education. As the structural relevance of AI increases, its governance will require even more clarity and transparency. As the next step, we should focus on the two main issues at hand: how to address AI risks and what aspects of AI should be governed. 

How to address AI risks  

There are three main types of AI risks that should shape AI regulations: short-term, mid-term, and long-term.

Unfortunately, it is currently the long-term, ‘extinction’ risks that tend to dominate public debates.

AI risks Venn diagram

Short-term risks: These include loss of jobs, protection of data and intellectual property, loss of human agency, mass generation of fake texts, videos, and sounds, misuse of AI in education processes, and new cybersecurity threats. We are familiar with most of these risks, and while existing regulatory tools can often be used to address them, more concerted efforts are needed in this regard.

Mid-term risks: We can see them coming, but we aren’t quite sure how bad or profound they could be. Imagine a future where a few big companies control all AI knowledge and tools, just as tech platforms currently control people’s data, which they have amassed over the years. Such AI power could lead them to control our businesses, lives, and politics. If we don’t figure out how to deal with such monopolies in the coming years, they could bring humanity to the worst dystopian future in only a decade. Some policy and regulatory tools can help deal with AI monopolies, such as antitrust and competition regulations, as well as data and intellectual property protection.

Long-term risks: The scary sci-fi stuff, or the unknown unknowns. These are the existential threats, the extinction risks that could see AI evolve from servant to master, jeopardising humanity’s very survival. After very intensive doomsday propaganda through 2023, these threats haunt the collective psyche and dominate the global narrative with analogies to nuclear armageddon, pandemics, or climate cataclysms. 

The dominance of long-term risks in the media has influenced policy-making. For example, the Bletchley Declaration adopted during the UK AI Safety Summit heavily focuses on long-term risks while mentioning short-term ones in passing and making no reference to any medium-term risks.

The AI governance debate ahead of us will require (a) addressing all risks comprehensively and (b) making decisions in transparent and informed ways whenever risks must be prioritised.

Dealing with risks is nothing new for humanity, even if AI risks are new. In the environmental and climate fields, there is a whole spectrum of regulatory tools and approaches, such as the use of precautionary principles, scenario building, and regulatory sandboxes. The key is that AI risks require transparent trade-offs and constant revisiting based on technological developments and the responses of society.

What aspects of AI should be governed? 

In addition to AI risks, the other important question is: What aspects of AI should be governed? As the AI governance pyramid below illustrates, AI developments relate to computation, data, algorithms, or uses. Selecting where and how to govern AI has far-reaching consequences for AI and society.

The AI governance pyramid

Computation level: The main question is access to the powerful hardware that runs AI models. In the race for computational power, two key players—the USA and China—try to limit each other’s access to semiconductors that can be used for AI. The key actor is Nvidia, which designs the graphics processing units (GPUs) critical for running AI models. With the support of advanced economies, the USA has an advantage over China in semiconductors, which it tries to preserve by limiting access to these technologies via sanctions and other restriction mechanisms.

Data level: This is where AI gets its main inputs, sometimes called the ‘oil’ of the AI industry. However, the protection of data and intellectual property is not as prominent in current AI debates as the regulation of AI algorithms. There are more and more calls for clarity on what data and inputs are used. Artists, writers, and academics are checking whether AI platforms build their fortunes on their intellectual work. Thus, AI regulators should put much more pressure on AI companies to be transparent about the data and intellectual property used to develop their models.

Algorithmic level: Most of the AI governance debate is about algorithms and AI models. These debates mainly focus on the long-term risks that AI can pose for humanity. On a more practical level, the discussion centres on the role of ‘weights’ in developing AI models: how to emphasise some input data and knowledge in generating AI responses. Those who highlight security risks also argue for centralised control of AI developments, preferably by a few tech companies, and for restricting the use of an open-source approach to AI.

Apps and tools level: This is the most appropriate level for regulating technology. For a long time, the main focus of internet governance was on the level of use, while avoiding any regulatory intervention in how the internet functions, from standards to the operation of internet infrastructure (like internet protocol numbers or the domain name system). This approach is one of the main contributors to the internet’s fast growth. Thus, the current calls to shift regulation to the algorithm level (under the bonnet of technology) could have far-reaching consequences for technological progress.

Current debates on AI governance focus on one or more of these layers. For example, at the core of the last mile of negotiations on the EU’s AI Act is the debate on whether AI should be governed at the algorithm level or at the level of apps and tools. The prevailing view is that it should be done at the top of the pyramid—apps and tools.

Interestingly, most supporters of governing AI codes and algorithms, often described as ‘doomsayers’ or ‘longtermists’, rarely mention governing AI apps and tools or their data aspects. Both areas—data and the use of AI—already have more detailed regulation, which is often not in the interest of tech companies.


x x x

On the occasion of the very first birthday of ChatGPT, the need for clarity in AI governance prevails. It is important that this trend continues, as we need to make complex trade-offs between short-, medium-, and long-term risks.

At Diplo, our focus is on anchoring AI in the core values of humanity through our humAInism project and community. In this context, we will concentrate on building the awareness of citizens and policymakers. We need to understand AI’s basic technological functionality without going into complex terminology. Most of AI is about patterns and probability, as we recently discussed with diplomats while explaining AI via patterns of colours in national flags.

Why not join us in working for an informed, inclusive, meaningful, and impactful debate on AI governance?

How can we deal with AI risks?

Clarity in dealing with ‘known’ and transparency in addressing ‘unknown’ AI risks

In the fervent discourse on AI governance, there’s an oversized focus on the risks from future AI compared to more immediate issues: we’re warned about the risk of extinction, the risks from future superintelligent systems, and the need to heed these problems. But is this focus on future risks blinding us to what’s actually in front of us?

Types of risks

There are three types of risks: short-term, mid-term, and long-term.

Below is a summary of these three types of risks, their current coverage, and suggestions for moving forward.

AI risks Venn diagram

Short-term risks include loss of jobs, protection of data and intellectual property, loss of human agency, mass generation of fake texts, videos, and sounds, misuse of AI in education processes, and new cybersecurity threats. We are familiar with most of these risks, and while existing regulatory tools can often be used to address them, more concerted efforts are needed in this regard.

Mid-term risks are those we can see coming but aren’t quite sure how bad or profound they could be. Imagine a future where a few big companies control all the AI knowledge, just as they currently control people’s data, which they have amassed over the years. They have the data and the powerful computers. That could lead to them calling the shots in business, our lives, and politics. It’s like something out of a George Orwell book, and if we don’t figure out how to handle it, we could end up there in 5 to 10 years. Some policy and regulatory tools can help deal with AI monopolies, such as antitrust and competition regulation, as well as the protection of data and intellectual property, provided that we acknowledge these risks and decide we want and need to address them.

Long-term risks are the scary sci-fi stuff – the unknown unknowns. These are the existential threats, the extinction risks that could see AI evolve from servant to master, jeopardising even humanity’s very survival. These threats haunt the collective psyche and dominate the global narrative with an intensity paralleling that of nuclear armageddon, pandemics, or climate cataclysms. Dealing with long-term risks is a major governance challenge due to the uncertainty of AI developments and their interplay with short-term and mid-term AI risks.  

The need to address all risks, not just future ones

Now, as debates on AI governance mechanisms advance, we have to make sure we’re not focusing on long-term risks simply because they are the most dramatic and omnipresent in global media. To take just one example, last week’s Bletchley Declaration, announced during the UK’s AI Safety Summit, had a heavy focus on long-term risks; it mentioned short-term risks only in passing and made no reference to any medium-term risks.

If we are to truly govern AI for the benefit of humanity, AI risks should be addressed more comprehensively. Instead of focusing heavily on one set of risks, we should look at all risks and develop an approach to address them all.

In addressing all risks, we should also use the full spectrum of existing regulatory tools, including some used in dealing with the unknowns of climate change, such as scenario building and precautionary principles. 

Ultimately, we will face complex and delicate trade-offs that could help us reduce risks. Given the unknown nature of many AI developments ahead of us, trade-offs must be continuously made with a high level of agility. Only then can we hope to steer the course of AI governance towards a future where AI serves humanity, and not the other way around.

Jua Kali AI: Bottom-up algorithms for a Bottom-up economy

As artificial intelligence (AI) becomes a cornerstone of the global economy, AI’s foundations must be anchored in community-driven data, knowledge, and wisdom. ‘Bottom-up AI’ should grow from the grassroots of society in sustainable, transparent, and inclusive ways. 

Kenya, with its innovative bottom-up economic strategy, could play a pioneering role in new AI developments. Bottom-up AI could give farmers, traders, teachers, and local and business communities the power to use and protect AI systems that contain their knowledge and skills, honed over generations.

Kenya’s digital landscape is ripe for such innovation. It is home to a dynamic tech community and has been the cradle of numerous technological breakthroughs, such as the widely known M-Pesa mobile payment service and the Ushahidi crowdsourcing platform.

However, there is a prevailing notion, fuelled by media narratives, that AI development is the exclusive domain of big data, massive investments, and powerful computational centres. Is it possible for Kenya to circumvent these behemoths using its indigenous ‘Jua Kali’—an informal, resourceful approach—to cultivate Bottom-Up AI?

The answer is a resounding yes, as exemplified by the advent of open-source platforms and the strategic utilisation of small but high-quality datasets. 

Micro-enterprises of Kenya’s bottom-up (Jua Kali) economy

Open-source Platforms: The pillars of Bottom-up AI

Open-source AI platforms are challenging the dominant paradigm that AI necessitates colossal AI systems  – as leveraged by prominent language models like ChatGPT and Bard. A purported internal document from Google candidly acknowledges this competitive edge: “They are doing things with $100 and 13 billion parameters that we struggle with at $10 million and 540 billion parameters. And they are doing so in weeks, not months.”

Names like Vicuna, Alpaca, LLaMA, and Falcon now appear alongside ChatGPT and Bard, demonstrating that open-source platforms can deliver comparable performance without extravagant costs. Moreover, they tend to be more adaptable and environmentally friendly – requiring significantly less energy for data processing.

Small and High-quality Data: The key resource of Bottom-up AI

As open-source algorithms become more accessible, the emphasis of bottom-up AI naturally shifts to data quality, which depends on data labelling, a human-intensive activity. A lot of data labelling for ChatGPT has been done in Kenya, which triggered numerous labour criticisms.

Alternative approaches are feasible. As a matter of fact, at Diplo we have pioneered integrating data labelling into our regular activities, from research to training to project development. This is akin to using digital highlighters and sticky notes within our interactive frameworks, thus organically fostering Bottom-Up AI.

Text is not the sole medium for knowledge codification. We can also digitally annotate videos and voice recordings. Imagine farmers sharing their insights on agriculture and market strategies through narratives, enhancing the AI’s knowledge base with lived experiences.
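To make this concrete, below is a minimal, hypothetical sketch of how such bottom-up annotations, whether text highlights or transcribed voice notes, could be captured as structured records and exported as a small, high-quality dataset. The field names and file name are illustrative, not an actual Diplo schema.

```python
# Hypothetical sketch: capturing bottom-up annotations as structured records
# and exporting them as instruction-style training data (names are illustrative).
import json
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    source_type: str   # "text", "video", or "audio"
    source_ref: str    # document title, recording ID, etc.
    excerpt: str       # the highlighted passage or transcribed segment
    label: str         # the annotator's note or tag
    annotator: str     # e.g. "farmer cooperative, Nakuru"

annotations = [
    Annotation(
        source_type="audio",
        source_ref="market-day interview 12",
        excerpt="Prices drop sharply when all farmers harvest maize in the same week.",
        label="staggered harvesting advice",
        annotator="farmer cooperative, Nakuru",
    ),
]

# Export as JSON Lines: one prompt/completion pair per annotation, ready to be
# used as a small, high-quality training or retrieval dataset.
with open("bottom_up_dataset.jsonl", "w") as f:
    for a in annotations:
        record = {
            "prompt": f"What does local experience say about {a.label}?",
            "completion": a.excerpt,
            "metadata": asdict(a),
        }
        f.write(json.dumps(record) + "\n")
```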

Beyond Technology: Embracing organisational and societal shifts

The primary hurdle for Bottom-up AI is not technological but organisational and revolves around societal and policy priorities. Building on its digital dynamism, Kenya has the potential to lead by marrying technological advances with practical, citizen-focused applications.

Kenya’s bottom-up AI could contribute to preserving our knowledge and wisdom as a global public good, which we should pass on to future generations as the common heritage of humanity.

IGF 2023: Grasping AI while walking in the steps of Kyoto philosophers

The Internet Governance Forum (IGF) 2023 convenes in Kyoto, the historical capital of Japan. With its long tradition of philosophical studies, the city provides a fitting venue for debate on AI, which increasingly centres around questions of ethics, epistemology, and the essence of human existence. The work of the Kyoto School of Philosophy on bridging Western and Asian thinking traditions is gaining renewed relevance in the AI era. In particular, the writings of Nishida Kitaro, the father of modern Japanese philosophy, shed light on questions such as human-centred AI, ethics, and the duality between humans and machines.

Nishida Kitaro, in the best tradition of peripatetic walking philosophy, routinely walked the Philosopher’s Path in Kyoto alone. Yesterday, I traced his paths while trying to experience the genius loci of this unique and historic place.


On the Philosopher’s Path in Kyoto

Here are a few of Nishida Kitaro’s ideas that could help us navigate our AI future:

Humanism

Nishida’s work is deeply rooted in understanding the human condition and is heavily influenced by humanistic principles. His philosophy emphasizes the interconnectedness of individuals and the importance of personal experience and self-awareness.

This perspective serves as a vital reminder that AI should be designed to enhance human capabilities and improve the human condition rather than diminish or replace human faculties. By integrating humanistic values, AI can be developed to support human growth, creativity, and well-being, ensuring that technology serves as a tool for empowerment rather than a substitute for human interaction and understanding.

Self-Awareness and Place

Nishida delved deeply into metaphysical notions of being and non-being, the self and the world. His exploration of these concepts often intersected with themes of nihilism, questioning the inherent meaning and value of existence. As the debate on artificial generative intelligence advances, Nishida’s work could offer valuable insights into the contentious issues of machine consciousness and self-awareness. It begs the question: what would it mean for a machine to be ‘aware,’ and how would this awareness correlate with human notions of self and consciousness? Furthermore, how might nihilistic perspectives influence our understanding of machine self-awareness, challenging the essence of consciousness in a world where meaning is not preordained?

Complexity

Nishida paid significant attention to the complexities inherent in both logic and epistemology. He explored how these complexities are not merely abstract concepts but are deeply intertwined with the lived experiences and cultural contexts of individuals. Nishida’s work delves into the dynamic interplay between the subjective and objective realms, emphasizing that understanding complexity requires a holistic approach that considers both the internal and external factors influencing human thought and behavior. His work could serve as a foundational base for developing algorithms that can better understand and adapt to the complexities of human society.

Interconnectedness

Nishida’s philosophy strongly critiques the traditional Western view of dualistic perspectives of essence and form. This line of thinking is often extended to understanding the complex relationships between humans and machines. He would likely assert that humans and machines are fundamentally interlinked, challenging the conventional separation. In this interconnected arena, beyond traditional dualistic frameworks (AI vs humans, good vs bad), we must develop innovative approaches to AI.


Nishida Kitaro, founder of the Kyoto School of Philosophy

Absolute Nothingness

Nishida anchors his philosophy in absolute nothingness, which resonates strongly with Buddhism, Daoism, and other Asian thinking traditions that nurtured the concept of ‘zero’, which has shaped mathematics and our digital world. Nishida’s notion of ‘absolute nothingness’ could be applied to understand the emptiness or lack of inherent essence in data, algorithms, or AI.

Contradictions and Dialogue

Contradictions are an innate part of human existence and societal structures. For Nishida, these contradictions should be acknowledged rather than considered aberrations. Furthermore, these contradictions can be addressed through a dialectic approach, considering human language, emotions, and contextual elements. The governance of AI certainly involves many such contradictions, and Nishida’s philosophy could guide regulators in making the necessary trade-offs.

Ethics

Nishida’s work aims to bridge Eastern and Western ethics, which will be one of the critical issues of AI governance. He considers ethics in the wider socio-cultural milieus that shape individual decisions and choices. Ethical action, in his framework, comes from a deep sense of interconnectedness and mutual responsibility. 

Nishida Kitaro would advise AI developers to move beyond codifying ethical decision-making as a static set of rules. Instead, AI should be developed to adapt and evolve within the ethical frameworks of the communities they serve, considering cultural, social, and human complexities. 

Conclusion

As the IGF 2023 unfolds in the philosophical heartland of Kyoto, it’s impossible to overlook the enriching influence of Nishida Kitaro and the Kyoto School. The juxtaposition is serendipitous: a modern forum grappling with the most cutting-edge technologies in a city steeped in ancient wisdom. 

While the world is accelerating into an increasingly AI-driven future, Nishida’s work helps outline a comprehensive ethical, epistemological, and metaphysical framework for understanding not just AI but also the complex interplay between humans and technology. In doing so, his thinking challenges us to envision a future where AI is not an existential threat or a mere tool but an extension and reflection of our collective quest for meaning.

A Philosopher’s Walk in the steps of Nishida Kitaro could inspire new ideas for addressing AI and our digital future.

Read more on Nishida Kitaro’s work in the Stanford Encyclopedia of Philosophy.

Prediction Machines in International Organisations: A 3-Pathway Transition

Spoiler alert: It’s more than asking ChatGPT to do our bidding

Artificial intelligence (AI) is at the tip of everyone’s tongue these days. It seems to have found a way to wiggle into every conversation; everything is “AI this, AI that”. And all of a sudden, there’s enthusiastic talk about how AI is – or could be – changing the diplomatic, environmental, health, humanitarian, human rights, intellectual property, labour, migration, security, and many other landscapes. The alphabetical list of issue domains that AI can change is as long as a dictionary.

Probably in an attempt to come down from the hype, people have started asking a critical question: What could we actually do with AI in our day-to-day business?

Yes, we could ask ChatGPT or Bard this question, and they will give passable answers. And, yes, we will probably be able to ask ChatGPT or Bard to just implement those answers for us. So why bother to figure it out ourselves?

Having experimented with various AI systems and reflected on Diplo’s own line of work, we have learned a crucial lesson: adopting AI tools is more than just asking ChatGPT to do our bidding; by incorporating these tools into various tasks, we actually enter an organisation-wide transition. And this transition is how we keep up with the daily flood of technological advances.

AI-induced organisational transition: 3 pathways of influence

We are particularly interested in exploring whether this lesson is relevant to international organisations (IOs) in the Geneva ecosystem and potentially beyond. Therefore, we sought out UN entities that are undergoing or beginning a quiet yet perceptible AI transformation. We identified 15 Geneva-based UN actors that participated in the UN Activities on AI Report 2022 prepared by the ITU and interviewed a few practitioners to arrive at the following framework for considering how AI might bring changes to and within an IO.

We could identify at least three pathways in which AI will induce an organisational transition.

Pathway 1: AI will influence the nature of various issue domains in which the UN engages.

This change is the most visible. All of the analysed UN entities listed projects that output some form of research, policy papers, seminars, or working groups on AI’s influence in their respective domain of work. From the point of view of organisational transition, this is arguably the most straightforward pathway: it simply means that an organisation now has to invest resources in an additional branch of research area that investigates AI’s effects on its mandated missions, and potentially recruit or collaborate with external experts for this purpose.

Pathway 2: AI will change how an organisation does things internally.

Have you ever pondered whether it is appropriate to ask ChatGPT to write the first paragraph of a press release or rephrase some convoluted sentences? Because we have. Despite the mandate differences across IOs, staff share similar day-to-day tasks. The ubiquity of AI tools will soon make (or has made) many IO staff wonder: is DeepL‘s full-PDF translation acceptable as a first draft for policy briefs? How safe is it to write analytical pieces with generative AI, especially if one is looking at the sensitive data of an IO’s constituents? What is the copyright status of DALL-E 2 or Canva AI-generated images, and could one use them in PowerPoint slides? What are the security implications of permitting Zoom AI access to meeting content, chat history, and email inboxes? These questions may seem out of place at first, but they will soon inundate an IO tech team’s inbox. To figure out the answers, one must understand how information is generated and flows within an organisation, how data is handled on a daily basis, how people produce meaningful knowledge from an ocean of noise, and how best AI could fit into that picture.

Make no mistake; there’s no denying that AI will have to fit into that picture. It is already our spam email sorter, grammar checker, search engine, auto-correct, smart scheduler, translation service, and so much more. This pathway of AI-induced transition is a slow burn, but it will light a fire (think prompt hacking) without proper care (think internal guidelines on how to safely employ AI). AI hereby prompts us to give this pathway due consideration.

In ‘Prediction Machines: The Simple Economics of Artificial Intelligence’, Agrawal, Gans, and Goldfarb provide a useful anatomy of a task (Figure 1) that helps unpack the fear of AI replacing humans, the fear that AI will make decisions for us.

The key is to unpack what goes into a decision: among the several elements of a task, AI excels at prediction, not at the others.

Figure 1. Anatomy of a task, ‘Chapter 7: Unpacking Decisions’ in ‘Prediction Machines: The Simple Economics of Artificial Intelligence’ by Agrawal, Gans, and Goldfarb (2018).

Take the example of the AI advisor that our colleague Sorina Teleanu used during the European Summer School ‘Digital Europe.’ The students were to learn about the who, what, where, when, and how of digital governance. To transform theories into practice, Sorina led a simulation where students played the part of states and other stakeholders in negotiating a Global Digital Compact (GDC). The outcome—the final element of the task—is a well-discussed and thought-out GDC. How did the students arrive at this outcome?

Sorina and Diplo’s AI lab used AI for a vital part of this task: prediction. They first supplied a ChatGPT-based model with a vast corpus of essential documents on digital governance as input. Then, they used prompt engineering, a technique to tailor the AI model’s capability to one’s needs by providing it with instruction-output pairs, to train the AI model to assume the role of a helpful advisor (see Figure 2). The trained AI advisor could then take students’ questions on a wide range of issues, including how to finetune their arguments in negotiations, their positions on specific topics, how to argue against other teams’ positions, and so on. The advisor would first go through the corpus selected by Sorina to look for answers and advise students accordingly.

Figure 2. Prompt example.
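For readers curious about the mechanics, the sketch below shows, in a simplified and hypothetical form, how such a corpus-grounded advisor can be assembled: embed the reference documents, retrieve the passages closest to a student’s question, and pass them to a chat model together with an advisor-style instruction. It assumes the OpenAI Python client (v1+) and numpy; the model names, corpus snippets, and prompt are illustrative, not Diplo’s actual implementation.

```python
# Simplified, hypothetical sketch of a corpus-grounded 'AI advisor':
# retrieve the most relevant passages, then ask a chat model to advise.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

corpus = [
    "Excerpt from a Global Digital Compact draft on data governance ...",
    "Statement by a group of states on avoiding internet fragmentation ...",
    "Background note on human rights in the digital space ...",
]

def embed(texts):
    """Return one embedding vector per input text."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

corpus_vectors = embed(corpus)

def advise(question, top_k=2):
    """Retrieve the passages closest to the question and ask the model to advise."""
    q = embed([question])[0]
    # cosine similarity between the question and each corpus passage
    scores = corpus_vectors @ q / (np.linalg.norm(corpus_vectors, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(corpus[i] for i in scores.argsort()[::-1][:top_k])

    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("You are an advisor helping a student delegation negotiate "
                         "the Global Digital Compact. Base your advice only on the "
                         "provided excerpts and say when they are insufficient.")},
            {"role": "user",
             "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content

print(advise("How should our team argue against fragmentation of the internet?"))
```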

What did the AI advisor do best in this case? It had the capacity to go through thousands of pages of documents, find associations among words, phrases, and sentences, and provide its best prediction of what was important to students’ queries (see Figure 3). It was then the students’ job to judge whether the quality of the sentences was optimal, whether the logic was sound, and whether the advised position aligned with their teams’ interests.

Figure 3. Brazil team AI advisor’s interaction with the student.

The AI advisor provided the students with preliminary arguments and talking points, and it was instrumental in helping students grasp the gist of diplomatic documents and languages. However, there were also instances when students experienced the limits of the advisor: they found themselves having to manually correct outdated information, and at other times they pointed out that the AI’s reliance on past statements from specific governments or actors might not accurately reflect their current or real stance. In other words, students had to use their analytical skills to make judgments and take actions to modify or replace some of the AI advisor’s less accurate predictions before finally reaching the outcome.

When boiling a task down to such detail, we can see that AI is not here to replace humans in decision-making but to be used in the process leading up to the decision. By incorporating AI into our daily tasks, we must ask: if AI can better carry out the prediction element of a task, which other elements should we focus our energy on? In this particular simulation, Sorina and the AI lab concentrated on collecting high-quality, machine-readable documents (input) and creating useful, instructive task descriptions as prompts (training). Students heightened their critical thinking skills (judgement) in spotting inconsistencies and errors and improved their editing and negotiating skills (action) instead of spending hours poring over documents. AI is not here to replace humans in negotiations, but it changes what we focus on and how we do things in a given task (for the better, hopefully). 

Pathway 3: AI will change the output of an organisation.

From software or applications based on machine learning (e.g. WIPO Translate, WIPO Speech-to-Text (S2T), UNOSAT Flood AI, ILO Competency Profiling AI with SkillLab) to analyses enhanced by machine predictions (e.g. UNHCR predictive analytics on contingency planning, UNHCR text analytics to improve protection, OHCHR Universal Human Rights Index (UHRI) powered by AI automated text classification), the usual outputs of an IO—be they actions, products, or seminars—could be transformed or improved by AI. Our analysis shows that Geneva-based UN entities are undergoing such transitions, with 8 out of 15 entities having produced AI-based software or databases. However, most AI activities relate to the aforementioned first pathway, which means that AI remains predominantly a topic of research and discussion instead of a part of the solution. 

The key lies in transitioning from exploring how AI will change a given field to using AI to change it.

External collaborations: Bringing in collaborators is a good starting point. After all, the UN system is no stranger to fostering a multistakeholder ecosystem, where the private sector, academia, and civil society organisations work with UN entities to develop solutions. ILO’s Competency Profiling AI is a good example, with the technological components of the app developed by the Dutch startup SkillLab. OHCHR’s UHRI, likewise, borrows the brain of former CERN scientist Niels Jørgen Kjær, who works at Specialisterne.

External collaboration works well for entities with less in-house technical capacity. It requires active scouting of the tech startup ecosystem and innovation powerhouses like universities. Switzerland is famous not only for its natural landscape but also for its tech innovation landscape (think ETH and EPFL, both top-ranking universities that churn out startup-leading engineers every year).

Internal capacity-building: The opposite approach is to build in-house capabilities, such as improving the infrastructure that enables AI incorporation via the tech team, recruiting officers with technology backgrounds, and hosting information sessions to introduce the newest technological tools to staff. There is a constraint in this approach, though, as it might overload the existing tech teams with unrealistic expectations and a high caseload without sufficient organisation-wide support and investment.

However, with advancements in AI, upskilling current staff who don’t have a hard technical background has become possible. Consider the potential unlocked by Codex, which directly translates instructions in natural language (i.e., the language we speak) into executable computer code. An AI model developed by OpenAI that powers GitHub Copilot, Codex was trained on vast amounts of code shared by computer scientists and programmers on GitHub. Its capability to take natural language as input and provide programming code as output means that IO staff don’t need deep knowledge of programming languages to write their own applications. While this is still a long way from having in-house staff fully capable of automating their tasks, it is but one innovative way of internal capacity-building.
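As a rough, hypothetical illustration of this natural-language-to-code idea (using the current OpenAI chat API rather than the original Codex endpoint, with an example model name), a few lines are enough to turn a plain-language request into a code draft that a staff member can review and run:

```python
# Hypothetical sketch: turning a natural-language instruction into a code draft.
# Model name is an example; generated code should always be reviewed before use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

instruction = (
    "Write a Python function that reads 'submissions.csv' and prints the number "
    "of submissions per country, sorted from highest to lowest."
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Return only runnable Python code, no explanations."},
        {"role": "user", "content": instruction},
    ],
)

print(completion.choices[0].message.content)  # code draft for a human to review
```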

Concluding remarks

The question of AI transition is not one of if or when – it is already underway. The better question is how to be active during this transition so that it works for us instead of pushing us off the cliff. The 3-pathway AI-induced organisational transition could be a starting point that guides our thinking on the big picture.

In the end, it is not that AI will replace humans; it is that those who don’t understand AI will be replaced. 

2023 Diplo Days in Geneva

Join us for a wide range of sessions and discussions on AI and digital technologies between 4 and 8 December 2023 during UNCTAD’s eWeek!

On 7 December (17.00 – 20.00) at Diplo’s Evening, you can learn more about our AI activities and plans for 2024 while enjoying an end-of-year get-together with our alumni, lecturers, and experts. Please let us know if you can join us at geneva@diplomacy.edu.


Diplo will provide hybrid reporting (AI and experts) from UNCTAD’s eWeek (4–8 December 2023).

All times are in CET.

Jovan Kurbalija will introduce hybrid reporting during UNCTAD eWeek’s opening session (see more)

Venue: CICG (International Conference Center of Geneva)

Registration: UNCTAD eWeek

Scenario-building exercise with youth on digital economy, organised by Diplo, UNCTAD, and FES (see more)

Venue: CICG (International Conference Center of Geneva)

Registration: UNCTAD eWeek


Diplo will participate in a wide range of activities during UNCTAD’s eWeek. Most of the sessions on AI governance and diplomacy will be held on 6 December.

Diplo will organise the following sessions:

Diplo’s experts will participate in the following sessions:

Venue: CICG (International Conference Center of Geneva)

Registration: UNCTAD eWeek


Cyber norms in action: How to translate diplomatic agreements into real security for us all?

You can consult the draft of the Geneva Manual here.

Venue: WMO building, Attic | Online

Registration: In situ participation in Geneva | Online participation

End-of-year reception | Diplo’s plans for 2024 with Diplo lecturers, developers and experts

Diplo Evening will be an occasion to meet Diplo’s lecturers, developers, and experts. In an informal setting, you can visit six corners featuring Diplo’s activities and projects (see below).

Venue: WMO building, Attic | Online

Registration: Write to geneva@diplomacy.edu

DiploAI & HumAInism

Learn about Diplo’s holistic approach to AI!

You can learn how AI technology functions and about the interplay between AI, governance, diplomacy, philosophy, and the arts. You can also learn about AI tools and platforms.

Geneva Digital Atlas

You can explore the Geneva Digital Atlas, which provides in-depth coverage of the activities of 50 actors, an analysis of policy processes, and a catalogue of all core instruments and events. The atlas follows various topic threads, from AI and cybersecurity to e-commerce and standardisation.

Geneva Manual

Learn about the application of cyber norms to digital reality!

You can learn about the Geneva Manual and the next steps in applying cyber norms to the digital realm. The current version of the Geneva Manual is available here.

Future of Meetings

Discover uses of AI in the preparation and running of conferences and meetings!

You can learn about:

  • selection of the event theme (relevance, presence) and speakers
  • drafting of the agenda, summary, and background note
  • preparation of visuals (logo, backdrops, accessories)
  • preparation of the jingle and video
  • transcribing voice recordings from events
  • reporting and follow-up

You can also consult practical examples of Diplo’s reporting from the UN General Assembly, UN Cybersecurity processes (UN GGE and OEWG), and the UN Internet Governance Forum.

Diplo AI Campus

Explore the wide range of learning opportunities at the AI Campus of Diplo Academy!

You can consult Diplo’s forthcoming courses on various aspects of AI technology, diplomacy, and governance:

  • Basics of AI technological functionality
  • AI prompting for diplomats
  • AI governance
  • AI diplomacy
  • AI and digital economy
  • ‘Future’ in AI debates
  • Main narratives in AI governance
  • AI and human rights
  • Use of AI in negotiations
  • Use of AI in reporting

Artistic perception of AI

Enjoy alternative ways of understanding AI technology, governance, and diplomacy.

The exhibition will feature the latest drawings and illustrations by Prof. Vladimir Veljasevic. You can explore AI, cybersecurity, diplomacy, and digital governance through the artist’s lens.


The panel discussion will address the participation of African countries in AI and digital governance and diplomacy (see more)

Venue: CICG | Room B

The event is organised in the context of the eWeek. Registration is required.
