Four seasons of AI: From excitement to clarity in the first year of ChatGPT

Winter of excitement | Spring of metaphors | Summer of reflection | Autumn of clarity

ChatGPT was launched by OpenAI on the last day of November 2022, triggering a wave of excitement. We were immersed in the magic of a new tool as AI wrote poems and drew images for us. Over the last 12 months, the winter of AI excitement was followed by a spring of metaphors, a summer of reflection, and the current autumn of clarity.

On the first anniversary of ChatGPT, it is time to step back, reflect, and see what is ahead of us.

Winter of Excitement 

In terms of user adoption, ChatGPT was the most impressive success in the history of technology. In only five days, it acquired 1 million users; Instagram, by comparison, needed 75 days to reach the same milestone. In only two months, ChatGPT reached an estimated 100 million users.

The launch of ChatGPT last year was the result of countless developments in AI dating all the way back to 1956! These developments accelerated over the last 10 years with probabilistic AI, big data, and dramatically increased computational power. Neural networks, machine learning (ML), and large language models (LLMs) set the stage for AI’s latest phase, which brought tools like Siri and Alexa and, most recently, generative pre-trained transformers, better known as GPTs, which power ChatGPT and other recent tools.

ChatGPT started mimicking human intelligence by drafting texts for us, answering questions, and creating images.

Spring of Metaphors

The powerful features of ChatGPT triggered a wave of metaphors in the spring of this year. Whenever we humans encounter something new, we use metaphors and analogies to relate it to something we already know.

Most AI is anthropomorphised, typically described as a human brain that ‘thinks’ and ‘learns’. ‘Pandora’s box’ and ‘black box’ are terms used to describe the complexity of neural networks. As spring advanced, more fear-based metaphors took over, centred around doomsday, Frankenstein, and Armageddon.

As discussions on governing AI gained momentum, analogies were drawn to climate change, nuclear weapons, and scientific cooperation. All of these analogies highlight similarities while ignoring differences. 

Summer of Reflection 

Summer was relatively quiet, and it was a time to reflect on AI. Personally, I dusted off my old philosophy and history books in search of old wisdom for current AI challenges, which go far beyond simple technological solutions.

Under the series ‘Recycling Ideas’, I dove back into ancient philosophy, religious traditions, and different cultural contexts, from Ancient Greece to Confucius, India, and the African concept of Ubuntu, among others.

Autumn of Clarity

Clarity pushed out hype as AI increasingly made its way onto the agendas of national parliaments and international organisations. Precise legal and policy formulations have replaced the metaphorical descriptions of AI. In numerous policy documents from various groupings—G7, G20, G77, G193, UN—the usual balance between opportunities and threats has shifted more towards risks. 

Some processes, like the UK AI Safety Summit, focused on the long-term existential risks of AI. Others gave more ‘weight’ to the immediate risks of AI (re)shaping our work, education, and public communication. As inspiration for governance, many proposals mentioned the International Atomic Energy Agency (IAEA), CERN, and the Intergovernmental Panel on Climate Change (IPCC).

A year has passed: What’s next?

AI will continue to permeate our social fabric, from individual choices and family dynamics to jobs and education. As the structural relevance of AI increases, its governance will require even more clarity and transparency. As the next step, we should focus on the two main issues at hand: how to address AI risks and what aspects of AI should be governed.

How to address AI risks  

There are three main types of AI risks that should shape AI regulations: short-term, mid-term, and long-term. Unfortunately, it is currently the long-term ‘extinction’ risks that tend to dominate public debates.

AI risks Venn diagram

Short-term risks: These include job losses, threats to data and intellectual property protection, loss of human agency, mass generation of fake texts, videos, and sounds, misuse of AI in education, and new cybersecurity threats. We are familiar with most of these risks, and while existing regulatory tools can often be used to address them, more concerted efforts are needed in this regard.

Mid-term risks: We can see them coming, but we aren’t quite sure how bad or profound they could be. Imagine a future where a few big companies control all AI knowledge and tools, just as tech platforms currently control the people’s data they have amassed over the years. Such AI power could let them control our businesses, lives, and politics. If we don’t figure out how to deal with such monopolies in the coming years, they could bring humanity to the worst dystopian future in only a decade. Some policy and regulatory tools can help deal with AI monopolies, such as antitrust and competition regulations, as well as data and intellectual property protection.

Long-term risks: The scary sci-fi stuff, or the unknown unknowns. These are the existential threats, the extinction risks that could see AI evolve from servant to master, jeopardising humanity’s very survival. After the intensive doomsday propaganda of 2023, these threats haunt the collective psyche and dominate the global narrative, with analogies to nuclear Armageddon, pandemics, and climate cataclysms.

The dominance of long-term risks in the media has influenced policy-making. For example, the Bletchley Declaration, adopted during the UK AI Safety Summit, focuses heavily on long-term risks while mentioning short-term ones in passing and making no reference to any medium-term risk.

The AI governance debate ahead of us will require (a) addressing all risks comprehensively and (b), whenever risks must be prioritised, making those decisions in transparent and informed ways.

Dealing with risks is nothing new for humanity, even if AI risks are new. In the environmental and climate fields, there is a whole spectrum of regulatory tools and approaches, such as precautionary principles, scenario building, and regulatory sandboxes. The key is that AI risks require transparent trade-offs and constant revisiting in light of technological developments and society’s responses.

What aspects of AI should be governed? 

In addition to AI risks, the other important question is: what aspects of AI should be governed? As the AI governance pyramid below illustrates, AI developments relate to computation, data, algorithms, or uses. Selecting where and how to govern AI has far-reaching consequences for AI and society.

AI governance pyramid

Computation level: The main question is access to the powerful hardware that runs AI models. In the race for computational power, two key players, the USA and China, try to limit each other’s access to semiconductors that can be used in AI. The key actor is Nvidia, which manufactures the graphics processing units (GPUs) critical for running AI models. With the support of advanced economies, the USA has an advantage over China in semiconductors, which it tries to preserve by limiting access to these technologies via sanctions and other restriction mechanisms.

Data level: This is where AI gets its main inputs, sometimes called the ‘oil’ of the AI industry. However, in current AI debates, the protection of data and intellectual property is less prominent than the regulation of AI algorithms. There are more and more calls for clarity on what data and inputs are used. Artists, writers, and academics are checking whether AI platforms have built their fortunes on their intellectual work. Thus, AI regulators should put much more pressure on AI companies to be transparent about the data and intellectual property they use to develop their models.

Algorithmic level: Most of the AI governance debate centres on algorithms and AI models, focusing mainly on the long-term risks AI could pose for humanity. On a more practical level, the discussion focuses on the relevance of ‘weights’ in developing AI models: how much importance to give to particular input data and knowledge when generating AI responses. Those who highlight security risks also argue for centralised control of AI development, preferably by a few tech companies, and for restricting open-source approaches to AI.
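
To make the notion of ‘weights’ more concrete, here is a minimal, illustrative sketch in Python. It is a toy example, not how production LLMs are built, and all the features, weights, and candidate answers in it are invented: weights are simply numbers, adjusted during training, that determine how strongly each input signal pushes the model towards one output or another.

```python
# A toy sketch of 'weights' (illustrative only): numbers that decide how
# strongly each input signal pushes the model towards one answer or another.
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical input signals extracted from a prompt.
features = [0.9, 0.1, 0.4]

# Learned weights: one row per candidate answer, one number per input signal.
# Training a model means adjusting numbers like these; changing them changes
# which answer the model prefers.
weights = [
    [2.0, 0.1, 0.3],  # row favouring answer A
    [0.2, 1.5, 0.1],  # row favouring answer B
    [0.1, 0.2, 1.0],  # row favouring answer C
]

scores = [sum(w * f for w, f in zip(row, features)) for row in weights]

for answer, p in zip("ABC", softmax(scores)):
    print(f"Answer {answer}: probability {p:.2f}")
```

Seen this way, debates about governing algorithms and ‘weights’ are, in essence, debates about who controls and can inspect such numbers, scaled up to the billions of parameters of a modern model.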

Apps and tools level: This is the most appropriate level for regulating technology. For a long time, the main focus of internet governance was on the level of use, while avoiding regulatory intervention in how the internet functions, from standards to the operation of internet infrastructure (like internet protocol numbers or the domain name system). This approach was one of the main contributors to the fast growth of the internet. Thus, current calls to shift regulation to the algorithm level (under the bonnet of the technology) could have far-reaching consequences for technological progress.

Current debates on AI governance focus on at least one of these layers. For example, at the core of the last mile of negotiations on the EU’s AI Act is the debate on whether AI should be governed at the algorithm level or when algorithms become apps and tools. The prevailing view is that it should be done at the top of the pyramid: apps and tools.

Interestingly, most supporters of governing AI codes and algorithms, often described as ‘doomsayers’ or ‘longtermists’, rarely mention governing AI apps and tools or their data aspects. Both areas, data and the use of AI, are subject to more detailed regulation, which is often not in the interest of tech companies.


x x x

On the occasion of ChatGPT’s very first birthday, the need for clarity in AI governance prevails. It is important that this trend continues, as we need to make complex trade-offs between short-, medium-, and long-term risks.

At Diplo, our focus is on anchoring AI in the core values of humanity through our humAInism project and community. In this context, we will concentrate on building awareness among citizens and policymakers. We need to understand AI’s basic technological functionality without going into complex terminology. Most of AI is about patterns and probability, as we recently discussed with diplomats while explaining AI via the patterns of colours in national flags.
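
To show what ‘patterns and probability’ mean in practice, here is a minimal sketch in Python, using an invented mini-corpus. It is far simpler than any real language model, but it captures the basic intuition: count which word follows which, turn the counts into probabilities, and use those probabilities to predict the next word.

```python
# A toy sketch of 'patterns and probability' (illustrative only): count which
# word follows which in a tiny invented corpus, then turn the counts into
# next-word probabilities.
from collections import Counter, defaultdict

corpus = ("ai governance requires clarity and "
          "ai governance requires transparency").split()

# Record the pattern: how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def next_word_probabilities(word):
    """Convert the raw follow-counts for `word` into probabilities."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("governance"))  # {'requires': 1.0}
print(next_word_probabilities("requires"))    # {'clarity': 0.5, 'transparency': 0.5}
```

The same intuition, applied to vast amounts of text and far richer patterns, is what lets tools like ChatGPT produce fluent answers.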

Why not join us in working for an informed, inclusive, meaningful, and impactful debate on AI governance?

2023 Diplo Days in Geneva

Join us for a wide range of sessions and discussions on AI and digital technologies between 4 and 8 December 2023 during UNCTAD’s eWeek!

On 7 December (17.00 – 20.00), at Diplo’s Evening, you can learn more about our AI activities and plans for 2024 while enjoying an end-of-year get-together with our alumni, lecturers, and experts. Please let us know at geneva@diplomacy.edu if you can join us.


Diplo will provide hybrid reporting (AI and experts) from UNCTAD’s eWeek (4-8 December 2023).

All times are in CET.

Jovan Kurbalija will introduce hybrid reporting during the opening session of UNCTAD’s eWeek (see more)

Venue: CICG (International Conference Centre Geneva)

Registration: UNCTAD eWeek

Scenario-building exercise with youth on digital economy, organised by Diplo, UNCTAD, and FES (see more)

Venue: CICG (International Conference Centre Geneva)

Registration: UNCTAD eWeek


Diplo will participate in a wide range of activities during UNCTAD’s eWeek. Most of the sessions on AI governance and diplomacy will be held on 6 December.

Diplo will organise the following sessions:

Diplo’s experts will participate in the following sessions:

Venue: CICG (International Conference Centre Geneva)

Registration: UNCTAD eWeek


Cyber norms in action: How to translate diplomatic agreements into real security for us all?

You can consult the draft of the Geneva Manual here.

Venue: WMO building, Attic | Online

Registration: In situ participation in Geneva | Online participation

End-of-year reception | Diplo’s plans for 2024 with Diplo lecturers, developers and experts

Diplo Evening will be an occasion to meet Diplo’s lecturers, developers, and experts. In an informal setting, you can visit six corners featuring Diplo’s activities and projects (see below).

Venue: WMO building, Attic | Online

Registration: Write to geneva@diplomacy.edu

DiploAI & HumAInism

Learn about Diplo’s holistic approach to AI!

You can learn how AI technology functions and explore the interplay between AI, governance, diplomacy, philosophy, and the arts. You can also learn about AI tools and platforms.

Geneva Digital Atlas

You can explore the Geneva Digital Atlas, which provides in-depth coverage of the activities of 50 actors, an analysis of policy processes, and a catalogue of all core instruments and events. The atlas follows various topic threads, from AI and cybersecurity to e-commerce and standardisation.

Geneva Manual

Learn about the application of cyber norms to digital reality!

You can learn about the Geneva Manual and the next steps in applying cyber norms to the digital realm. The current version of the Geneva Manual is available here.

Future of Meetings

Discover uses of AI in the preparation and running of conferences and meetings!

You can learn about:

  • selection of the event theme (relevance, presence) and speakers
  • drafting of the agenda, summary, and background note
  • preparation of visuals (logo, backdrops, accessories)
  • preparation of jingles and videos
  • transcription of voice recordings from events
  • reporting and follow-up

You can also consult practical examples of Diplo’s reporting from the UN General Assembly, UN Cybersecurity processes (UN GGE and OEWG), and the UN Internet Governance Forum.

Diplo AI Campus

Explore the wide range of learning opportunities at the AI Campus of Diplo Academy!

You can consult Diplo’s forthcoming courses on various aspects of AI technology, diplomacy, and governance:

  • Basics of AI technological functionality
  • AI prompting for diplomats
  • AI governance
  • AI diplomacy
  • AI and digital economy
  • ‘Future’ in AI debates
  • Main narratives in AI governance
  • AI and human rights
  • Use of AI in negotiations
  • Use of AI in reporting

Artistic perception of AI

Enjoy alternative ways of understanding AI technology, governance, and diplomacy.

The exhibition will feature the latest drawings and illustrations by Prof. Vladimir Veljasevic. You can explore AI, cybersecurity, diplomacy, and digital governance through the artist’s lens.


The panel discussion will address the participation of African countries in AI and digital governance and diplomacy (see more)

Venue: CICG | Room B

The event is organised in the context of the eWeek. Registration is required.
