Unpacking the EU AI Act

Diplo & GIP, WMO Building, 7bis Avenue de la Paix, Geneva

Organised by

DiploFoundation and Geneva Internet Platform

in partnership with

EU Delegation


Whether you are deeply involved in AI developments, interested in AI governance, or looking to understand the impact of AI on our society, this event on ‘Unpacking the EU AI Act’ will provide you with valuable insights and information.

Gabriele Mazzini, the architect and lead author of the EU AI Act at the European Commission (DG Connect), will guide us through this 200-plus-page legislation by elaborating on its overall architecture and, in particular, the provisions of the EU AI Act relevant for countries and actors beyond the EU.

Sorina Teleanu, Diplo’s AI governance expert, will provide commentaries. Jovan Kurbalija will moderate the discussion. 

Why you should attend

This event offers a unique opportunity to gain insights into:

Panellists

Who should attend

Organisers

The event is organised by Diplo and the Geneva Internet Platform (GIP), in partnership with the EU Delegation.


Venue

The event will be held at Diplo/GIP (WMO building, 7 bis Avenue de la Paix, Geneva).

Register

Participation is free of charge, but there are limited seats available. Registration is closed.

Related actors:

Related people: Gabriele Mazzini, Sorina Teleanu, Jovan Kurbalija


‘Yes Minister’ as the novel Turing Test for advanced AI

In the rapidly changing landscape of AI, distinguishing between AI and human capabilities becomes increasingly tricky. Once considered the gold standard for assessing machine intelligence, the Turing Test needs an update to account for the sophisticated nuances of human conversation and thought processes.

The British sitcom “Yes Minister” enters the picture as an unconventional but insightful candidate for a new type of Turing Test, joining the flea market, humour, and diplomatic language as other possible sources. The programme, which delves humorously into the complexities of the British political and bureaucratic systems, is an intriguing benchmark for AI’s ability to replicate human-like ambiguity and institutional wisdom.

The image, entitled 'In Search of the New Turing Test', shows four possible sources for a new Turing test: a flea market, jokes and humour, diplomatic language, and Sir Humphrey from the television show Yes Minister.

“Yes Minister” chronicles the exploits of Minister Jim Hacker, his Principal Private Secretary Bernard, and Permanent Secretary Sir Humphrey Appleby. With his mastery of language and deep bureaucratic savvy, Sir Humphrey’s conversations, laden with evasion, ambiguity, and wit, highlight the sophisticated layers of human communication and institutional memory that AI systems like ChatGPT cannot easily emulate.

Avoiding clear answers

Consider Sir Humphrey’s art of avoiding a straight answer. When pressed by Minister Hacker for a clear stance on civil service staffing, Sir Humphrey’s response is a marvel of obfuscation, meandering through qualifiers and hypotheticals without ever arriving at a concrete conclusion. While maintaining logical consistency, this dance of words epitomises a level of linguistic dexterity and situational awareness that AI struggles to fully replicate despite its advances.

Minister Hacker asks:

When you give your evidence to the think tank, will you support my view that the civil service is overmanned and feather-bedded or not? Yes or no? Straight answer!

Sir Humphrey answers:

Well, Minister, if you ask me for a straight answer, then I shall say that, as far as we can see, looking at it by and large, taking one thing with another in terms of the average of departments, then, in the final analysis, it is probably true to say, that at the end of the day, in general terms, you would probably find that, not to put too fine a point on it, there probably wasn’t very much in it one way or the other as far as one can see, at this stage.

Ambiguities of bureaucratic language

Furthermore, the show delves into the complexities of bureaucratic communication, as exemplified by Bernard’s explanation of the difference between “under consideration” and “under active consideration” – a distinction that humorously reveals the realpolitik of administrative inertia. It underscores the subtleties of institutional language and the tacit knowledge embedded within it, challenging AI to understand and reproduce the underlying social and organisational contexts.

Mr. Bernard explains the bureaucratic language to the minister.

Minister Hacker asks ‘What’s an official reply? ….’

Mr. Bernard: ‘“The matter is under consideration.” Or even, if we feel so inclined, “under active consideration!”’

Minister: ‘What’s the difference between “under consideration” and “under active consideration”?’

Bernard: ‘“Under consideration” means we’ve lost the file. “Under active consideration” means we’re trying to find it!’

(Adapted from ‘The Complete Yes Minister’, p. 33) 

Bureaucratic stalling tactics

Sir Humphrey’s stalling tactics introduce another dimension of complexity. His strategic deployment of delaying tactics, from citing the administration’s nascent stage to invoking the imminence of elections, showcases a masterful understanding of institutional dynamics and timing. Replicating such a nuanced, context-dependent strategy requires an AI to grasp the specifics of political manoeuvring and anticipate and adapt to human reactions and institutional pressures. Could AI chain prompting mimic Sir Humphrey’s following five-step stalling tactic?

Five steps to bureaucratic stalling

Stage One: Humphrey will say that the administration is in its early months and there’s an awful lot of other things to get on with. 

Stage Two: If I persist past Stage One, he’ll say that he quite appreciates the intention, something certainly ought to be done – but is this the right way to achieve it? 

Stage Three: If I’m still undeterred he will shift his ground from how I do it to when I do it, i.e. ‘Minister, this is not the time, for all sorts of reasons.’ 

Stage Four: Lots of Ministers settle for Stage Three. But if not, he will then say that the policy has run into difficulties – technical, political, and/or legal. (Legal difficulties are best because they can be made totally incomprehensible and can go on for ever.) 

Stage Five: Finally, because the first four stages have taken up to three years, the last stage is to say that ‘we’re getting rather near to the run-up to the next general election – so we can’t be sure of getting the policy through.’ … 
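The five-stage tactic above maps naturally onto prompt chaining, where each model call receives the transcript of the previous exchanges. The sketch below is purely illustrative: the stage texts are paraphrased from the book, and no real language model is called (the build_prompt helper and the canned replies are our own invention, standing in for LLM calls).

```python
# Toy "prompt chain" mimicking Sir Humphrey's five-stage stalling
# tactic. Each step composes a prompt from the running transcript, as
# a chained LLM call would; here a canned reply stands in for the
# model. Stage texts are paraphrased from the book; everything else
# (function names, chaining format) is illustrative only.

STALLING_STAGES = [
    "The administration is in its early months, and there is an awful "
    "lot of other things to get on with.",
    "One quite appreciates the intention, and something certainly "
    "ought to be done; but is this the right way to achieve it?",
    "Minister, this is not the time, for all sorts of reasons.",
    "The policy has run into difficulties: technical, political "
    "and/or legal.",
    "We are getting rather near to the run-up to the next general "
    "election, so we cannot be sure of getting the policy through.",
]


def build_prompt(transcript, stage):
    """Compose the next prompt from the transcript (the 'chain' step)."""
    return "\n".join(transcript) + f"\nSir Humphrey (stage {stage + 1}):"


def stalling_chain(minister_persistence):
    """Return Sir Humphrey's replies for a minister who persists N times."""
    transcript = ["Minister: I want this policy implemented. Straight answer!"]
    replies = []
    for stage in range(min(minister_persistence, len(STALLING_STAGES))):
        _prompt = build_prompt(transcript, stage)  # would go to an LLM
        reply = STALLING_STAGES[stage]             # canned reply instead
        transcript.append(f"Sir Humphrey: {reply}")
        replies.append(reply)
    return replies
```

A minister who persists through all five exchanges receives the full escalation; one who gives up after two stages hears only the first two responses. The harder question, of course, is whether a real model could improvise such escalation without the script.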

Blame game

The gist of this conversation is who is to blame for the failure of policy initiatives. Responsibility exists in the grey area between the Minister’s political responsibility for introducing policy and the administration’s responsibility for implementing it. Sir Humphrey answers the Minister’s accusation that civil servants failed his policy initiative.

Sir Humphrey Appleby: Yes, yes, yes, I do see that there is a real dilemma here. In that, while it has been government policy to regard policy as a responsibility of Ministers and administration as a responsibility of Officials, the questions of administrative policy can cause confusion between the policy of administration and the administration of policy, especially when responsibility for the administration of the policy of administration conflicts, or overlaps with, responsibility for the policy of the administration of policy.

Taxonomy of bureaucratic silence 

Lastly, Sir Humphrey’s taxonomy of bureaucratic silence—discreet, stubborn, and courageous—reveals a sophisticated classification system born from years of navigating conversations and decisions. It is a testament to the depth of understanding and experience of social dynamics, challenging AI to decipher the words and the silence between them.

Sir Humphrey’s taxonomy of bureaucratic silence:

  1. The silence when they do not want to tell you the facts: Discreet Silence.
  2. The silence when they do not intend to take any action: Stubborn Silence. 
  3. The silence when you catch them out and they haven’t a leg to stand on. They imply that they could vindicate themselves completely if only they were free to tell all, but they are too honourable to do so: Courageous Silence. (The Complete Yes Minister, pp. 93-4) 

In light of these examples, it becomes evident that areas where AI cannot yet compete with human intelligence and creativity include the depth of contextual understanding, emotional intelligence, the subtlety of social interactions, and the ability to navigate complex institutional memories. The nuances of humour, irony, and the unspoken word remain uniquely human territories, underscoring the importance of continuously advancing AI capabilities while acknowledging the irreplaceable value of human insight and creativity.

The ‘Yes Minister’ test invites us to consider how far AI has come, and yet how far it still has to go, in truly understanding and replicating the full spectrum of human intelligence and creativity.


Chat with Sir Humphrey

The image shows a textbox depicting an AI-driven 'Yes Minister' chatbot called Ask Sir Humphrey. The user has asked 'How are you?', to which the chatbot has replied 'As Sir Humphrey, I find myself in the delightful position of navigating the turbulent waters of bureaucracy, while attempting to steer our new Prime Minister towards a course of 'masterly inactivity'. I am, as ever, the civil servant's civil servant, a beacon of tradition in a sea of change. My days are filled with the gentle art of persuasion, the subtle dance of diplomacy, and the occasional necessity of translating my own eloquence into plain English. Despite the Prime Minister's insistence on 'making his mark', I remain, as always, the immovable object to his unstoppable force. In short, I am the steadfast rudder in the ship of the state, ensuring we stay on course, even if the captain has other ideas.'

If you want to learn more about the Yes Minister Turing Test and other Diplo projects on AI, you can write to us at ai@diplomacy.edu

AI for Good Global Summit

The AI for Good Global Summit will take place on May 30-31, 2024, in Geneva, Switzerland. It is the main event of the AI for Good platform, organised by ITU in partnership with 40 UN sister agencies and co-convened with Switzerland. The summit aims to identify practical applications of AI to advance the SDGs and scale those solutions for global impact.

The AI for Good Global Summit is designed to feature several interactive stages where the visitor can learn, build and connect. Stages will feature keynote speakers, cutting-edge AI solutions, pitching sessions, and performances by AI-inspired artists.


AI for Good is a year-round digital platform with weekly sessions where relevant stakeholders learn, build and connect to help identify practical AI solutions to advance the UN SDGs and other global development priorities. The platform aims to encourage cooperative efforts to assure reliable, secure, and inclusive AI technology development and equal access to its advantages.

Please visit the dedicated page for more information about the AI for Good platform and the upcoming sessions.

Diplo@AI for Good

Large Language Models, from sectoral applications to implications for human-machine collaboration in manufacturing

31 May | 14:00 – 18:00 CEST

The workshop explores the practical applications of Large Language Models (LLMs) across various sectors, with examples ranging from climate change to advanced manufacturing. It then focuses on the manufacturing industry, examining the transformative potential of LLMs and Vision Language Models (VLMs) in enhancing human-machine collaboration. Participants will gain a comprehensive understanding of LLMs and VLMs, their applications, and their potential to revolutionise manufacturing processes and efficiencies.

A Diplo researcher will participate in this session as a speaker. For more information, please visit the official event page.

Related actors: United Nations, World Health Organization, World Intellectual Property Organization, United Nations Commission on Sustainable Development (CSD)

Related people:

The impact of AI on data collection, exploitation, protection, and privacy in Africa

The rapid advancement of AI technologies has brought about transformative changes in various sectors globally. In Africa, where the adoption of digital technologies is on the rise, the impact of AI on data collection, exploitation, protection, and privacy is becoming increasingly significant. The webinar will explore the multifaceted implications of AI use in these domains and propose considerations for ensuring responsible and ethical use. The deployment of AI raises critical questions regarding data protection and privacy, necessitating a thorough examination of its impact on the African continent.

The African Union (AU) Convention on Cyber Security and Personal Data Protection, also known as the Malabo Convention, is a legal framework for addressing cybercrime and data protection in Africa. The convention, adopted by the AU in 2014, is a critical tool for protecting personal data and preventing cybercrime on the continent.

The Malabo Convention is now in effect. Article 36 of the Malabo Convention says that the treaty will come into effect when there are 15 ratifications. Mauritania ratified the convention in May 2023, becoming the 15th African country to do so. And the convention came into effect 30 days later, in June 2023.

Following this important development, the webinar aims to:

  1. Examine the current landscape: Investigate the existing state of AI deployment in data collection, exploitation, protection, and privacy in Africa, including case studies and notable initiatives.
  2. Assess risks and challenges: Identify potential risks and challenges associated with AI implementation in data-centric domains, with a specific focus on the African context.
  3. Explore opportunities: Investigate how AI can positively contribute to efficient data collection, improved exploitation for socio-economic development, and enhanced protection of privacy rights.
  4. Highlight ethical considerations: Analyse the ethical implications of AI in data-centric applications, emphasising the need for responsible AI practices that align with cultural, social, and legal norms in African countries.

  5. Propose policy recommendations: Develop a set of policy recommendations for governments, businesses, and civil society to ensure the responsible use of AI in data-related domains, emphasising the protection of privacy and human rights.

Time of the event
4:30pm-6:00pm Nairobi time

Agenda:

Introduction (5 minutes)

  • Welcome and Opening Remarks
  • Brief Introduction to the Webinar’s Theme

Session 1: Overview of AI in Africa (10 minutes)

Open panel

  • Current State of AI Adoption in Africa
  • Key Applications of AI in Various Sectors
  • Challenges and Opportunities

Session 2: Data Protection and Privacy Challenges (30 minutes)

Panellists: Tyrus Muya Kamau, Immaculate Kassait, Mercy Mutemi, Moctar Yedaly

  • Overview of Data Protection Regulations in African Countries
  • Ratification of the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention)
  • Risks and Concerns Related to AI and Privacy
  • Strategies for Mitigating Privacy Risks

Session 3: Data Collection and Exploitation in the AI Era (30 minutes)

Panellists: John Walubengo, Abdul-Hakeem Ajijola, Shikoli Makatiani

  • Role of AI in Enhancing Data Collection
  • Case Studies on AI-Driven Data Collection Projects in Africa
  • Ethical Considerations and Potential Biases
  • How AI is Transforming Data into Actionable Insights
  • Examples of AI-Driven Decision-Making in African Contexts
  • Societal and Economic Impacts

Q&A Session (10 minutes)

  • Open Floor for Questions from Participants
  • Panel Discussion with Experts

Closing Remarks (5 minutes)

  • Summary of Key Takeaways
  • Thanking Speakers, Participants, and Sponsors
  • Information on Future Webinars or Resources

Related actors:

Related people: Tyrus Muya Kamau, Mercy Mutemi, Moctar Yedaly, Katherine Getao, Shikoli Makatiani, John Walubengo, Abdul-Hakeem Ajijola, Immaculate Kassait

AI and Digital Predictions for 2024

How will AI develop in 2024? What are the top digital trends to keep an eye on? These and other questions were addressed in a traditional predictions event, with Jovan Kurbalija in conversation with journalist Richard Werly.

Geneva and Online | Thursday, 11 January 2024 at 13.00-14.00 UTC (14.00-15.00 CET).

Further resources:

Event report

You can read the full report here

Related actors:

Related people: Jovan Kurbalija, Richard Werly

Review of AI and digital developments in 2024

Grounding is the defining term for AI in 2024. After the 2023 hype, AI has become a commodity. New large language models emerged almost daily, and thousands of agents seamlessly integrated into our digital landscape. While technological growth remained rapid, the perception of AI’s opportunities and risks became much more grounded in reality.

The conversation has shifted from distant existential risks to the practicalities of current regulation. The EU AI Act and the Council of Europe AI Convention were adopted. The geopolitically divided UN managed to agree on the Global Digital Compact.

In spite of numerous concerns, AI did not trigger any major crisis during a tense election year. Then came CrowdStrike, disrupting computers worldwide—a reminder that a few lines of code can have a profound impact on our digital reality, sometimes more so than cutting-edge AI.

You can revisit 12 AI and digital predictions for 2024 further down.

Join us on 8 January for the traditional forecast of AI and digital developments in 2025!

Read January intro for 2024 prediction

Will AI and digital technologies exacerbate or lessen the impending polycrisis of 2024? This is the primary question underlying these predictions.

In 2024, AI will continue to dominate the tech landscape. It will be less turbulent than last year, when AI went through four seasons, from the winter of AI excitement to an autumn of clarity. AI technology will grow steadily, with more clarity and some disillusionment.

We will also return to digital basics in 2024. For instance, we frequently forget that access to the internet is not a given. It is comparable to water or electricity services. When it breaks, we become aware of it. We experience our digital reality, including access to the cutting-edge ChatGPT, through the decades-old Internet protocol – TCP/IP. Thus, in addition to AI, we will discuss traditional digital issues like infrastructure and standardisation, cybersecurity, content governance, and the digital economy in our 2024 predictions.

Jovan

Revisiting 12 Predictions for 2024:

Artificial Intelligence | Geopolitics | Governance | Diplomacy | Security | Human Rights | Economy | Standards | Encryption | Identity | Content | Inclusion



In 2024, AI grounding confirmed three main predictions: steady growth continued to the point that AI became a commodity; some disillusionment came as AI did not replace everything, as many argued it would in 2023; yet the challenges for jobs, education, and media will continue steadily, with more clarity.

The 2023 focus on extinction risks shifted towards existing risks, as can be seen from the visualisation of our analysis of media coverage, regulation, and policy statements.

AI growth continued in 2024, both deep and wide. On the deep side, tech companies competed to release AI models with more and more parameters. However, there was no major conceptual breakthrough similar to multi-layered neural networks (1986), deep learning (2006), transformers (2017) or LLMs (2018).

On the wide side of AI growth, open-source platforms, numerous apps, and AI agents facilitated the deployment of AI in education, the economy, and other walks of life. AI became an affordable commodity in 2024.

Predictions from January 2024

AI technology will continue to grow both deeper, through powerful foundational models, and wider, through more connections to the rest of the digital ecosystem (e.g. the internet of things, virtual reality, digital infrastructures). In parallel, smaller AI and open source models will gain momentum as a more transparent, adaptable and environmentally friendly approach to AI. 

In 2024, Google will try to challenge ChatGPT’s dominance. It remains to be seen whether the AI space will be bipolar, with OpenAI/Microsoft and Google, or multipolar, with more actors joining at the top. In addition to Apple, Amazon, and other companies, the most interesting challenge to the duopoly comes from Meta, which has been supporting the open-source LLaMA model.

Large Multimodal Models (LMMs) will gain in relevance following the shift from text towards integrating video, images, and sound in AI foundation models. 

After realising the strategic relevance of AI-codified knowledge, companies, communities, and countries will intensify the development of bottom-up AI based on in-house solutions. AI models will be localised through the use of data pruning, adaptive computing, and models for retrieval-augmented generation (RAG). Bottom-up AI will become a new AI dynamic in 2024.
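As a rough illustration of what retrieval-augmented generation involves, the sketch below retrieves the most relevant documents from a hypothetical in-house store and builds a prompt around them. Real RAG systems use dense vector embeddings and an actual language model for the final generation step; the corpus, document ids, and function names here are all invented for illustration.

```python
# Minimal sketch of retrieval-augmented generation (RAG). Step 1:
# retrieve the documents most relevant to a query from an in-house
# store. Step 2: prepend them as context to the prompt that would be
# sent to a language model. Scoring uses simple bag-of-words cosine
# similarity; production systems use dense vector embeddings instead.
import math
from collections import Counter


def vectorise(text):
    """Bag-of-words vector: token -> count, full stops stripped."""
    return Counter(text.lower().replace(".", "").split())


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def retrieve(query, corpus, k=2):
    """Return the ids of the k documents most similar to the query."""
    q = vectorise(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, vectorise(corpus[d])),
                    reverse=True)
    return ranked[:k]


def build_rag_prompt(query, corpus):
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(corpus[d] for d in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"


# Hypothetical in-house document store (ids and texts are invented).
corpus = {
    "visa": "Consular rules for issuing a short-stay visa.",
    "trade": "Tariff schedules for agricultural trade negotiations.",
    "cyber": "Guidelines on reporting a cybersecurity incident.",
}
```

Localisation, in this picture, amounts to swapping in one's own corpus: the retrieval layer grounds the model's answers in in-house knowledge without retraining the underlying model.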

Small Language Models (SLMs) are already a thing, but cost-efficiency and sustainability considerations will accelerate this trend, as AI has a big carbon footprint due to the large amounts of energy used for processing data and developing foundational models.

After surviving an attack from the ‘extinction risk’ camp in 2023, open-source AI will continue to develop fast in the new year. Open-source AI tends to be smaller but more adaptable and customisable. LLaMA is ten times smaller than OpenAI’s large language model (LLM). If LLaMA or other smaller, open-source models (BLOOM and OPT) can achieve results similar to GPT-4’s, it could be a major shift in the AI landscape, with more inclusion for organisations and individuals with far fewer resources.


In 2024, the clarity of AI debates increased. From probabilistic narratives about what may happen in the future, discussion has moved to AI here and now. In addition, growing public awareness of how AI functions prompted inquiry into how what is ‘under the bonnet’ of AI may affect society. This positive development will help ensure that AI’s potential is realised and its risks are managed in informed and inclusive ways.

Predictions from January 2024…

There will be more clarity in discussions about governing AI and dealing with associated risks. Existing risks (e.g., jobs, misinformation, biases) will receive more attention than existential risks (human survival). Exclusion risks based on a few AI monopoly companies will become more relevant. AI governance will become more specific and concrete, addressing computational power, data and knowledge, algorithms, and AI applications.

The 2023 focus on extinction risks shifted towards existing risks, as can be seen from the visualisation of our analysis of media coverage, regulation, and policy statements. Existing risks are more specific and concrete, affecting jobs, education, and media, among others. Exclusion risks are becoming more visible as big AI platforms centralise and monopolise AI-generated knowledge. Data protection and intellectual property rights are becoming the key tools for dealing with exclusion risks. Extinction risks started being addressed in a more balanced and informed way through, among others, the International Network of AI Safety Institutes, established in May 2024.

May 2023

The image shows a diagram with three overlapping circles representing a prediction of the coverage of the risks of AI in 2024. The biggest is existing risks, such as AI's impact on jobs, information, and education, while the other two, extinction risks (such as AI destroying humanity) and exclusion risks (such as AI tech monopolising global knowledge), are smaller and of roughly equal size.

January 2024


December 2024

Predictions from January 2024…

AI risks will dominate the governance and regulatory debate. The main shift will be from a heavy focus on extinction and long-term risks towards existing risks that AI poses to society, from misinformation to education and the future of jobs. In addition, governments will focus more on exclusion risks triggered by the AI monopolies of the major tech companies. 

The end of 2023, with the adoption of the EU AI Act and the publishing of the Interim Report by the UN Secretary-General’s High-level Advisory Body on AI, signalled more clarity in dealing with AI risks. The previous heavy focus on extinction risk is balanced with existing risks and midterm risks, such as monopolies of AI platforms. 

Existing risks (short-term) from AI developments include loss of jobs, protection of data and intellectual property, loss of human agency, mass generation of fake texts, videos, and sounds, misuse of AI in education processes, and new cybersecurity threats. We are familiar with most of these risks, and while existing regulatory tools can often be used to address them, more concerted efforts are needed in this regard.

Exclusion risks (medium-term) could be triggered by the centralisation of AI knowledge in the hands of a few powerful players. Ultimately, their monopolies could create a risk of exclusion: citizens, communities, and countries worldwide will be limited in their ability to use and benefit from common knowledge. Such monopolies could lead to a dystopian future for humanity.  Legally speaking, risks of such AI monopolies can be reduced via antitrust regulation and protection of data and intellectual property associated with the knowledge used for the development of AI models. 

Extinction risks (long-term) are based on the possibility of AI evolving from servant to master, jeopardising humanity’s very survival. After very intensive doomsday media coverage throughout 2023, these threats haunt the collective psyche and dominate the global narrative with analogies to nuclear armageddon, pandemics, or climate cataclysms. 

The dominance of extinction risks in the media has influenced policy-making. For example, the Bletchley Declaration adopted during the London AI Safety Summit heavily focuses on extinction risks while mentioning existing AI risks in passing and making no reference to exclusion risks.

US Vice-President on existential and existing risks

… There are additional threats that also demand our action—threats that are currently causing harm and which, to many people, also feel existential.

Consider, for example: When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him?

When a woman is threatened by an abusive partner with explicit, deep-fake photographs, is that not existential for her? …

Source: Remarks by Vice President Harris on the Future of Artificial Intelligence

The AI governance debate ahead of us will require (a) addressing all risks comprehensively and (b) ensuring that, whenever risks must be prioritised, decisions are made in transparent and informed ways.

Dealing with risks is nothing new for humanity, even if AI risks are new. In the environment and climate fields, there is a whole spectrum of regulatory tools and approaches, such as using precautionary principles, scenario building, and regulatory sandboxes. The key is that AI risks require transparent trade-offs and constant revisits based on technological developments and society’s responses.

AI Governance on 4 Layers

The focus on existing risks was also echoed in the increasing attention to the top of the AI governance pyramid, which deals with the uses of AI in society. Some ‘tacit’ policy impact occurs at the hardware level, mainly through export restrictions on powerful microprocessors. Data and knowledge, as the source of AI, are becoming more prominent in AI governance and regulation. Algorithmic regulation is less prominent than it was in, for example, 2023.

The image shows a pyramid representing Diplo's AI governance pyramid. The base of the pyramid is hardware and computational power; the next section is data and knowledge, the source for AI; the third section is algorithms and AI capabilities; whilst the tip of the pyramid is apps and AI uses.
Diplo’s AI Governance Pyramid
Predictions from January 2024…

The main question is access to the powerful hardware that processes AI models. In the race for computational power, two key players, the USA and China, try to limit each other’s access to semiconductors that can be used in AI. The key actor is Nvidia, which manufactures the graphical processing units (GPUs) critical for running AI models.

With the support of advanced economies, the USA has an advantage over China in semiconductors, which they try to preserve by limiting access to these technologies via sanctions and other restriction mechanisms.

There are two sets of challenges: firstly, copyright holders worldwide are challenging in court the use of their creative work in the development of AI; secondly, facing the limited volume of internet content, and in an attempt to respond to privacy and intellectual property concerns, companies are using synthetic data generated by AI. This recursive loop triggers ‘AI inbreeding’ and an inevitable degeneration of the quality and relevance of AI models.

Copyright: In 2023, the question of protecting the data and intellectual property used by AI platforms started gaining momentum. The EU AI Act requires more transparency about data used for AI models. Writers, musicians, and the photography industry started court cases against OpenAI for use of their intellectual property for training AI models. The New York Times’ lawsuit against OpenAI for copyright infringement is one recent example. Other companies, however, are watching and learning valuable lessons: Apple decided to conclude a $50m multi-year deal with the Times to license material for its AI models. 

In 2024, new ideas will flourish. For example, some lawyers argue that, analogous to copyright, a ‘learnright’ should be established to govern how AI uses content for training. Various tagging and watermarking systems are being discussed for distinguishing human-made from AI-made artefacts.

There are also arguments against strict copyright protection and for allowing data scraping in order to facilitate the progress of AI.

Synthetic data: Most existing models are based on internet content, considered the ‘ground truth’ for AI development. This initial input into AI models is already depleted, making data acquisition a major problem in 2024. Facing limited available data, AI companies started using synthetic data generated by AI; according to Stanford University researchers, there is a 68% increase in synthetic data generated by AI on Reddit.
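The degeneration loop described above (often called 'model collapse' in the research literature) can be illustrated with a toy simulation: a 'model' that is just a fitted Gaussian, retrained each generation on samples drawn from its own previous fit. This is our own simplification, not a claim about any particular AI system; it merely shows how recursive training on self-generated data lets estimation noise accumulate and distributional detail drift away.

```python
# Toy illustration of 'AI inbreeding' (model collapse). The 'model' is
# just a Gaussian summarised by (mean, std); each generation refits it
# to samples drawn from the previous generation's fit, the way a model
# trained on synthetic data inherits its predecessor's estimation
# noise. A deliberate simplification, not a claim about real AI systems.
import random
import statistics


def one_generation(mean, std, n, rng):
    """Draw n synthetic samples from the current fit and refit."""
    samples = [rng.gauss(mean, std) for _ in range(n)]
    return statistics.fmean(samples), statistics.stdev(samples)


def simulate(generations=30, n=50, seed=0):
    """Track the fitted spread across generations of self-training."""
    rng = random.Random(seed)
    mean, std = 0.0, 1.0          # ground truth for generation zero
    history = [std]
    for _ in range(generations):
        mean, std = one_generation(mean, std, n, rng)
        history.append(std)
    return history
```

With smaller samples or more generations, the drift in the fitted spread tends to become more pronounced, which is the qualitative point behind the 'AI inbreeding' concern: each generation can only reproduce, never replenish, the information it inherited.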

In 2024, discussions on transparency, evaluation, and explainability will need to be operationalised, as required by the EU AI Act and other legal instruments in the field of AI. AI governance will also focus on the relevance of ‘weights’ in developing AI models: how to determine how much particular input data and knowledge contribute to generating AI responses.

Transparency: There is very little transparency when it comes to AI models; we know little to nothing about, for instance, the data fed into the model or the weights that are given to parameters. Transparency is a precondition for evaluation and, ultimately, accountability in the digital realm. The lack of transparency in AI follows a general trend of reduced transparency in the digital industry and social media platforms. Twitter, Facebook, and Google provide less and less access to their operations. 

Evaluation: The EU AI Act and other regulations require the evaluation of AI platforms. However, there are numerous limitations in conducting evaluation, from lack of data to lack of common evaluation methodologies. In the USA, bipartisan efforts in Congress, such as the ‘AI Research, Innovation, and Accountability Act,’ aim to increase transparency and accountability for high-risk AI applications.  

The focus is on the practical use of AI via apps and other services. For example, regulations would focus on the implications that the outputs of systems such as ChatGPT have for human rights, security, and consumer protection, instead of regulating the algorithms that generate such outputs. As with traditional digital systems, responsibility for complying with regulation will be placed on the companies that develop and operate AI systems, such as OpenAI/Microsoft, Google, Meta, and Amazon.

For a long time, one of the pillars of digital governance has been to regulate the uses and outputs of digital systems instead of regulating how the internet functions technically (from standards to the operation of critical internet resources like internet protocol numbers or the domain name system). This approach is one of the main contributors to the fast growth of the internet. The current calls to shift regulations on the algorithm level (under the bonnet of technology) would be a major departure from the well-tested and established approach of technology regulation, with far-reaching consequences for technological progress. 


In 2024, tech geopolitics was characterised by the consolidation of China and the USA as the two AI/digital superpowers, the search of many other countries and regions for a ‘third digital way’, and promising signs that diplomacy is alive: China and the USA supported each other’s resolutions on AI and development at the UN General Assembly, and countries agreed on the Cybercrime Convention and the Global Digital Compact.


Geopolitics centred on access to semiconductors (GPUs for AI), satellites (competition in low-orbit constellations), artificial intelligence (access to data and LLMs), and submarine cables (competition in developing and using cables). The digital economy is ‘securitising’, as Marilia Maciel researched in her PhD thesis.

India, Brazil, South Africa, Singapore, Turkey, and the Gulf states, among others, will try to carve out an asymmetric ‘third digital space’ between the two superpowers. Some new actors, such as Vietnam, started increasing their presence in global digital dynamics. In 2024, the push for national sovereignty over data, AI, and technology infrastructure gained new momentum.

The promising news is that diplomacy is alive. China and the USA supported each other’s resolutions on AI and development at the UN General Assembly. In spite of all divisions, UN member states agreed on the Cybercrime Convention and the Global Digital Compact.

In 2024, the semiconductor industry grew 16% from the previous year, according to the World Semiconductor Trade Statistics. The G7 discussed the geopolitical risks that a potential military crisis involving South Korea and/or Taiwan, the main manufacturers of semiconductors, would pose to supply chains.

Many countries accelerated local production of semiconductors. The Netherlands, home of ASML, the leading manufacturer of lithography equipment for semiconductors, called for a ‘coalition of the willing’ to strengthen the EU’s chip growth. India and Singapore joined forces in semiconductor development. China focused in 2024 on producing the equipment for manufacturing semiconductors, with the main focus on high-end lithography equipment. There was also consolidation in the semiconductor industry, with 31 merger and acquisition deals. In Asia, Vietnam made a strategic shift towards becoming a semiconductor hub amid US-China tech tensions, which has already resulted in the relocation of a few chip companies from China to Vietnam.

Russia is also focusing on localising its semiconductor industry, with an ambition to replace 70% of foreign equipment by 2030.

Predictions from January 2024…

In 2024, semiconductors will remain the main digital battleground between China and the USA. So far, the USA-led restrictions on the export of chip-making technology have triggered rapid growth in this sector in China. Alibaba, Xiaomi, Baidu, and other Chinese companies invest heavily in the local semiconductor industry. The same tendencies can be observed worldwide as countries try to become self-reliant in the development and production of advanced chips. 

Last year saw a growing number of submarine cable incidents, especially in areas affected by military conflicts and tensions, such as the Baltic Sea and the Red Sea. In August, several African countries experienced major internet disruptions due to cables cut by an underwater landslide in the Congo Canyon.

These incidents also triggered international reactions. During the UN General Assembly, 18 countries endorsed a joint statement on the resilience of undersea cable infrastructure. The ITU launched the International Advisory Body for Submarine Cable Resilience, with the main aim of providing strategic guidance, sharing best practices, and promoting international cooperation. Portugal and Nigeria lead the Advisory Body.

Predictions from January 2024…

The previously integrated network of submarine cables will continue separating between China and the USA. We have already seen, for instance, that the USA banned landing rights for a few submarine cables with links to China. And China started investing in separate cable systems, such as a new cable connecting China with France via Singapore, Pakistan, and Egypt. 

Faced with these divisions, countries will focus on digital sustainability by developing alternative digital routes. They will also work more on preserving content locally via internet exchange points (IXPs).  

Satellite constellations were at the centre of the geopolitical and geoeconomic race. SpaceX led this race, adding new countries to Starlink operations, although it experienced a few setbacks in 2024 when Cameroon and Namibia requested SpaceX to cease operations in those countries. In the satellite competition, the EU announced €10.6 billion in funding for the IRIS² satellite constellation. India created a consortium of 30 companies aimed at building a national satellite network. Australia scrapped plans for a large defence satellite project, opting for a network of smaller micro-satellites. China launched a satellite for testing 6G mobile technology.

Digitalisation moved further into outer space, with Musk’s ambition to establish a human colony on Mars and the IGF panel discussion on the interplanetary Internet.

Predictions from January 2024…

Outer space is another field of accelerated competition between the USA and China, as well as other public and private actors. In the area of internet connectivity alone, low-orbit satellite (mega)constellations by the likes of SpaceX’s Starlink and Amazon’s Project Kuiper are to face competition from China Satellite Network Group’s Gouwang project. 

National space agencies and private actors alike are planning or working on deploying (new) communications and navigation networks (e.g. around the Moon), the development of new space stations, and the exploration and exploitation of space resources (e.g. metals, minerals), giving rise to new governance issues. 

Against this backdrop, ensuring a fair and responsible allocation and use of spectrum resources and orbital positions, encouraging responsible behaviour in outer space from public and private actors, and ensuring the sustainability of space activities, will be some of the questions that UN bodies and processes—from the Committee on the Peaceful Uses of Outer Space (COPUOS) and the International Telecommunication Union (ITU), to the Summit of the Future—will be addressing in 2024.

More from Diplo on AI/digital geopolitics:

Analysis: Jurisdiction | Data governance | Semiconductors | Space diplomacy 

Training: Introduction to Internet Governance | Diplomatic Theory and Practice | Artificial Intelligence: Technology, Governance and Policy Framework

Push for sovereignty in the tech realm

In 2024, countries and regional blocs such as the EU will also push for more self-reliance in terms of digital developments. Initiatives focused on achieving digital, data, AI, or cyber sovereignty are often motivated by a desire to reduce the risks of negative security and economic spillovers from integrated digital networks.

The image shows border officials inspecting an internet cable as it crosses a national border checkpoint, equipped with a list of internet content that is and isn’t allowed to enter.

The sovereignty drive takes different forms. Sometimes, it is about control of infrastructure. In other cases, it is about preserving data on national territory. Increasingly, it is about facilitating national AI developments. 

Approaches to digital sovereignty will vary, depending on a country’s political and legal systems. Legal approaches include national regulation and court judgments; technical ones vary between data filtering and frowned-upon internet shutdowns. 

A focus on digital sovereignty reduces the appetite for global digital governance solutions. In one illustration of this, the recent withdrawal of the USA from the WTO e-commerce negotiations was justified by the need to gain more room for regulating tech companies nationally. 

Follow sovereignty issues on the GIP Digital Watch observatory: Jurisdiction | Data governance

Sovereignty issues are covered in the following courses: Introduction to Internet Governance | Diplomatic Theory and Practice | Artificial Intelligence: Technology, Governance and Policy Framework



Last year was busy in the field of AI governance and regulation. Two critical instruments were adopted: the Council of Europe Framework Convention on AI and Human Rights (17 May 2024) and the EU AI Act (13 June 2024). During the Summit of the Future, the Global Digital Compact, addressing both AI and digital governance issues, was adopted.

Apart from these key governance developments, there are tens of other international initiatives, especially in the AI field. Some of them serve to secure ‘policy mandates’, especially for international organisations. Governments are positioning themselves for future AI negotiations and governance. While motivations for the AI governance ‘gold rush’ vary, it raises the risk of duplicated initiatives and makes it difficult for stakeholders, especially those from small countries, to follow so many AI/digital governance processes.

In addition, AI governance regulations tend to be very long. For example, the US President’s Executive Order on Safe, Secure, and Trustworthy AI (30 October 2023) is, at 20,000 words, the longest executive order ever published.

Document | Year | Word count
EU AI Act | 2024 | 85,500
GDPR | 2016 | 55,000
Treaty of Versailles | 1919 | 44,000
Napoleon’s Civil Code | 1804 | 30,000
U.S. President's Executive Order on AI | 2023 | 20,000
UN Charter | 1945 | 8,000
Code of Hammurabi | 1754 BC | 6,000
U.S. Constitution with amendments | 1787 | 4,400
Magna Carta | 1215 | 4,000
Universal Declaration of Human Rights | 1948 | 1,800
Declaration of Independence | 1776 | 1,458
Bill of Rights with 10 amendments | 1791 | 500


On the national level, more and more countries are developing national AI strategies.

Update of national strategies for AI (November 2024)

Predictions from January 2024…

In 2024, there will be a push for new organisations, commissions, and expert groups dealing with AI and digital governance. ‘Governance inflation’ will be fuelled by media hype and confusion about AI as a ‘wicked’ policy issue. AI will be used by international organisations to carve out new mandates and ensure their ‘raison d’être’ in the future.

The adoption of the UN Cybercrime Convention will mark the beginning of global digital governance in 2024. The Global Digital Compact (GDC) will be negotiated ahead of the Summit of the Future in September 2024. The main challenge will be to align the GDC with the World Summit on the Information Society (WSIS) and the 2030 Agenda for Sustainable Development. In the turbulent year of 2024, the UN Security Council and other bodies will have to deal with the digital aspects of conflicts and humanitarian crises.

2024 will be marked by an interplay between change, which is the essence of technological development, and continuity, which characterises digital governance efforts. Change in search of ‘the next big thing’ is at the core of tech narratives. Governance has different dynamics for a reason. The foundation of the current global digital governance architecture was set back in 1998 and remains largely relevant today. Decisions to change this space should be made with calm and clarity, allowing a clear separation between governance that works and governance that genuinely needs to change. 

At the UN, the year will start with the adoption of the Cybercrime Convention. Most of the year will be dominated by discussions and negotiation of the Global Digital Compact (GDC) and its synchronisation with the wider UN agenda: the World Summit on the Information Society (and the upcoming WSIS+20 review), the future of the Internet Governance Forum, and the 2030 Agenda for Sustainable Development. 

In 2024, Brazil’s G20 Presidency will continue IBSA (India–Brazil–South Africa) momentum after India’s G20 Year. Brazil announced a focus on the following digital issues: digital inclusion, digital government, information integrity, and artificial intelligence. Brazil will also host a NETMundial+10 in the first half of the year with the aim to ‘update the global discussion on internet governance and the digital ecosystem’ and review the 2014 principles and roadmap for internet governance.

More from Diplo on AI/digital geopolitics:

Analysis: GDC process | Ad Hoc Committee on Cybercrime

Training: Introduction to Internet Governance | AI governance



In 2024, diplomacy was successful in addressing the rising importance of AI and digital geopolitical and governance issues. In contrast to the general crisis in diplomacy, digital diplomacy achieved several successes, including the successful completion of negotiations on the UN Cybercrime Convention and the Global Digital Compact. On AI, two major actors, China and the USA, supported each other’s resolutions in the UN General Assembly on sustainable development and capacity building. On cybersecurity, the UN OEWG began advancing towards a new partnership arrangement, to be agreed upon in 2025.

While effectively addressing AI and digital topics on the negotiating agenda, diplomacy is lagging in the use of new tech tools and the overall AI transformation.

Predictions from January 2024…

2024 was the year of diplomacy’s ‘soul searching’ about its role and purpose. Diplomacy has been overshadowed in recent years by military and confrontational logic. Unfortunately, this trend is likely to continue in 2024, with no end to current conflicts in sight and looming new conflicts.

Diplomacy will begin a ‘soul searching’ for its role and purpose. Technology will play an important role in shaping the future of this ancient profession by facilitating representation, negotiations, and the peaceful resolution of conflicts, all of which are core diplomatic functions.

AI will put diplomacy to the test through language, a key diplomatic tool. The automation of summarising, reporting, and drafting using Large Language Models (LLMs) will have a significant impact on the diplomatic profession.

Read more on predictions…

Diplomacy must prepare to cope with growing pressure to negotiate AI and digital TOPICS on bilateral, regional, and global diplomatic agendas. There will be increasing pressure to negotiate digital governance as a whole, as well as the digitalisation of traditional policy issues ranging from health to trade and human rights.

Second, it is important for diplomats to assess how they use AI and digital TOOLS in their work. For example, they should find out if and how social media platforms actually help them; social media should not be used if it hinders the process of reaching a compromise, as has frequently happened.

Diplomacy will face a new challenge as AI automates drafting, summarising, and other diplomatic tasks. Although diplomats ought to refrain from embracing the hype surrounding AI, they should evaluate its practical applications with an open mind in areas such as diplomatic reporting.

By relying on AI to perform tedious tasks, diplomats can allocate more time to substantive diplomatic activities such as peacefully resolving conflicts and engaging in negotiations. With the availability of new AI tools and public pressure to deliver solutions in 2024, diplomats may begin the transition from ‘bureaucratic’ to ‘real’ diplomacy: fewer formal reports and more substantive participation and negotiation.

The shift from digital PUBLIC diplomacy to digital PROPAGANDA in 2024

Soft power loses when power politics dominate. 2024 will be a year of real power in military conflicts and economic battles. It will be a year of geopolitics. Soft power will become sheer propaganda in war and other conflicts.

2024 will be dominated by a dichotomy between stories that are ‘ours’ and stories that are ‘wrong’. Spaces for persuasion will shrink significantly.

Hypocrisy and double standards will increase as actors amplify their stories and ignore anything else, including ethics, common sense, and fairness. 

It will be difficult to win ‘hearts and minds’ when powers are busy winning territories, economic resources, and strategic positions.

The relevance of soft power will decline in 2024 as online spaces disintegrate into niches of like-minded actors. Social media echo chambers will fortify their walls by reinforcing support for ‘our’ cause. Talking to ‘others’ will happen mainly through verbal wars and insults; ‘others’ will be cancelled or gaslighted. The public space of genuine engagement, of searching for middle ground and compromise, will shrink, moving to secret places away from online space. 

The relevance of soft power, public diplomacy, and persuasion to ‘win hearts and minds’ will decline sharply in 2024. It will have far-reaching consequences, as words can quickly evolve into wars. By inertia, traditional ‘soft power’ mechanisms (people, institutions, academics) will continue working ‘as always’. They will be more propagandists than contributors to solving conflicts and ensuring a better future for humanity. 

In such a pessimistic scenario, one can ask whether we can do anything. Yes, we can. The most immediate and reasonable action is to counter mis/disinformation and fake news with factual information. One should not have the illusion that this fight alone will stop power-fuelled propaganda. But, in this way, we can start fighting a battle for reason and regaining public space for constructive and relevant policy discussions. 

In the longer term, the next generations should be exposed more to arguments of reason, respect for others, and compromise. Respecting others and compromising with them should become critical values defining the very survival of humanity.



Last year was characterised by the following main developments:

Increasing cybersecurity incidents: There is a continuous rise in the number and sophistication of cybersecurity incidents globally. This trend underscores the growing digital vulnerabilities that organisations face today.

CrowdStrike failure: The CrowdStrike incident is a notable example highlighting digital vulnerabilities. It symbolises the risks associated with the weakest links in integrated digital systems, demonstrating the worldwide impact such failures can have.

UN Cybercrime Convention: The adoption of the UN Cybercrime Convention marks a significant step in digital security and safety. The implementation of the Convention will be shaped by numerous differences, especially around balancing the fight against cybercrime with the protection of human rights.

AI in cybersecurity: Artificial intelligence (AI) is increasingly being utilised in cybersecurity, both for launching sophisticated attacks and enhancing protection mechanisms. AI’s dual role presents both opportunities and challenges for cybersecurity professionals.



An interplay between human rights and tech developments was the focus of several regulatory and policy initiatives. For example, protecting human rights was critical in negotiations for the UN Cybercrime Convention.

The interplay between AI and human rights is central to the Council of Europe Framework Convention on AI, human rights, democracy, and the rule of law.

The EU’s Digital Services Act (DSA) is a practical tool for implementing disability rights through digital accessibility provisions.

Privacy protection is highlighted in AI safety frameworks in the USA and EU.

Using AI to mimic Scarlett Johansson’s voice raised questions about privacy protection.

Predictions from January 2024…

In 2024, AI will continue reshaping ‘traditional’ human rights, such as freedom of expression and privacy. Additionally, AI and other cutting-edge technologies will spark debates about our dignity and what defines us as humans. Neurorights, prompted by AI and biotechnological developments, will gain prominence on human rights agendas.

AI will open new possibilities for enabling online access for people with disabilities. Access for people with disabilities will feature high in the development of usability standards and, ultimately, new digital products and services.

We may see an increase in the use of privacy-enhancing technologies such as federated learning and homomorphic encryption.
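For readers unfamiliar with federated learning, its core idea can be sketched in a few lines (a hypothetical illustration; the function name and data are invented): clients share only locally computed statistics or model updates with a coordinating server, never their raw data.

```python
def federated_mean(client_datasets):
    """Compute a global average without any client revealing its raw data."""
    # Each client reports only an aggregate update: (local mean, sample count).
    updates = [(sum(data) / len(data), len(data)) for data in client_datasets]
    total = sum(count for _, count in updates)
    # The server combines updates weighted by data size (FedAvg-style averaging).
    return sum(mean * count for mean, count in updates) / total

# Three 'clients' with private local datasets:
clients = [[1.0, 2.0, 3.0], [10.0], [4.0, 6.0]]
print(federated_mean(clients))  # same result as pooling all six values: 26/6
```

In real federated learning, the per-client update would be a set of model weight changes rather than a simple mean, and privacy is further strengthened with techniques such as secure aggregation or differential privacy.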

Tech companies might be compelled to be more transparent about their content moderation policies and to engage in dialogue with civil society to protect free speech while combating misinformation and preserving information integrity in the face of growing challenges posed by AI technologies.

There will probably be further development of legal frameworks and codes of practice to protect children online, inspired by actions taken in France, the UK, and elsewhere. Digital literacy programmes for children may become more widespread, and there could be an increased emphasis on co-viewing and co-playing strategies for parents to guide their children’s digital experiences.

Benefiting from AI progress, developments in neurotechnologies are accelerating, paving the way to significant breakthroughs in the medical field (for instance, brain-computer interfaces that restore the ability to walk or AI-based ‘brain decoders’ that may help people otherwise unable to physically speak to communicate), but also to more widely-available direct-to-consumer applications (e.g. neurotech devices used for neurogaming, learning, meditation, or interaction with digital devices).


Read more on predictions…

The tech and internet industry itself is showing an increasing interest in neurotechnologies, for a variety of reasons (from developing new ways for users to control digital devices or interact with virtual environments, to understanding that access to neural data could significantly change advertisement-based business models). In fact, a UNESCO report indicates that computer technology is the area in which most neurotechnology patents are filed, followed by medical technology and biotechnology.

These developments come with implications for human rights and fundamental freedoms (e.g. personal identity, mental privacy, free will) that may require new forms of protection. Although the concept of neurorights is not new, it will gain more visibility in policy spaces in 2024. At the UN Human Rights Council, the Advisory Committee is tasked with presenting a report on neurotechnology and human rights at the Council’s 57th session in September 2024, and UNESCO is expected to start working on a recommendation on the ethics of neurotechnology.

Resources from Diplo and the GIP:

Freedom of expression | Privacy and data protection | Children’s rights | Rights of persons with disabilities



In 2024, the economic fragmentation trend continued. The OECD could not agree on a new global digital taxation framework. Thus, Canada, India, and many other countries are introducing their own Digital Services Taxes (DSTs), which will further complicate the cross-border digital economy.

After five years of negotiations, a deal on the Joint Initiative on e-commerce at the WTO is not in sight, in spite of 80 states agreeing in July on the text of an agreement. The United States, India, South Africa, Brazil, Turkey, and most developing countries are not satisfied with the agreement.

On antitrust, the main development was the US court ruling against Google. There is also a new trend in the making, with anti-monopoly initiatives on AI; notably, Microsoft’s significant investment in OpenAI has come under scrutiny from competition authorities.

Digital trade

In 2024, after five years, negotiations at the WTO on the Joint Initiative on e-commerce entered a ‘make or break’ phase. Some progress was made, with 80 countries agreeing in July on global digital commerce rules. The main setback has been the withdrawal of the USA from the negotiations and its reluctance to endorse a new compromise agreement. In addition to India and South Africa, who opposed e-commerce negotiations from the outset, Brazil, Indonesia, and Turkey expressed reservations about the July version of the agreement. It remains to be seen in 2025 whether Australia, Japan, and Singapore, the key movers behind the e-commerce negotiations, will manage to forge a new consensus.

Digital taxation

The OECD member states failed to agree on the taxation of digital platforms (Pillar One negotiations) by the extended deadline of 30 June 2024. Although the OECD stated that progress has been made and negotiations are ongoing, the lack of agreement—particularly due to US reluctance—makes enforcement of Pillar One unlikely in the near term. In response, several countries have either maintained or introduced their own Digital Services Taxes (DSTs); most notably, Canada has authorised a DST with retroactive effect to 1 January 2022, and India appears poised to strengthen its existing levy. The European Union is also revisiting the possibility of an EU-wide DST, especially now that a global solution remains out of reach. Overall, with the OECD still at an impasse, unilateral DSTs are expanding, raising concerns about double taxation and economic distortions for digital businesses operating in multiple jurisdictions.

Anti-trust

In 2024, tech companies came under much pressure over monopolistic practices. Google’s lawyers were particularly busy. A DC court found that Google violated Section 2 of the Sherman Act; remedies, potentially including Chrome and Android divestitures, await a 2025 trial and are subject to appeal. The Department of Justice and several states also claimed that Google monopolises digital advertising services. In Japan, Google faces an antitrust investigation centred mainly on the dominance of the Chrome browser and its search engine. The Department of Justice accused Apple of monopolistic practices in its app ecosystem policies, and a similar case against Google has been initiated by Canada’s Competition Bureau.

Meta also faces several anti-monopoly processes. The EU hit Meta with a fine of nearly €800 million for anti-competitive practices related to its Marketplace feature. In the United States, Meta faces a court case over its acquisitions of Instagram in 2012 and WhatsApp in 2014. India’s Competition Commission imposed a $25.4 million fine and restricted data-sharing between WhatsApp and other Meta-owned applications for five years. Apple also faces an anti-monopoly probe in India.

In the EU, tech companies face antitrust challenges under the Digital Markets Act (DMA). Google has announced further changes to its search results in Europe in response to complaints from smaller competitors. The EU is expanding anti-monopoly actions into the AI realm by scrutinising two partnerships: Microsoft-OpenAI and Google-Samsung. Google did manage to win a court case overturning a €1.49 billion antitrust fine in the EU courts.

Anti-monopoly law is also used in battles among tech companies. Elon Musk has expanded his legal battle against OpenAI by adding Microsoft to his lawsuit, accusing both companies of engaging in illegal practices to monopolise the generative AI market. 

Digital currencies

A growing number of countries, representing 98% of the global economy, are exploring digital versions of their currencies, with almost half of them at an advanced stage of development. Countries like China, the Bahamas, and Nigeria have already seen a noticeable increase in the usage of their central bank digital currencies (CBDCs).

Research by the Atlantic Council reveals that all G20 nations are now investigating CBDCs, with 44 countries currently piloting them, up from 36 last year. Authorities are accelerating these efforts in response to decreasing cash usage and the potential threat from cryptocurrencies like Bitcoin and big tech companies.

Notable growth has been observed in the CBDCs of the Bahamas, Jamaica, and Nigeria, while China’s digital yuan (e-CNY) has seen its transaction value almost quadruple to 7 trillion yuan ($987 billion). The European Central Bank has also launched a multi-year digital euro pilot, while the US, despite being slower to act, has recently joined a cross-border CBDC project with six other central banks.

Meanwhile, the United States is grappling with privacy concerns over CBDCs. Elsewhere, Russia’s digital rouble pilot has expanded to Moscow’s metro, and Iran is working on a digital rial.

Predictions from January 2024…

In 2024, AI will accelerate changes in the modern economy, from restructuring traditional industries to developing new ones built around AI technology. The main policy dynamics will relate to the economic consequences of the digital decoupling between China and the USA, anti-monopoly in the AI field, taxation of the online industry, and digital trade.

Read More

Anti-monopoly

In 2024, the EU’s Digital Markets Act (DMA) will provide the legal basis for a push for interoperability among social media platforms. New solutions should make it easier for users to switch from platform to platform without losing their previous networks of people. They will also work against usability-driven monopolies of the major social media platforms. 

New risks related to AI come from the high concentration of AI solutions, data, and knowledge in the hands of a few companies. Microsoft’s investment in OpenAI triggered an investigation by US and UK anti-monopoly authorities. More such antitrust investigations are likely to be launched in 2024.

Taxation

June 2024 is the next deadline for concluding the OECD’s negotiations on taxing cross-border digital services (Amount A of Pillar One of OECD negotiations). The main controversy is around a complex formula for reallocating taxes among the major tech companies. The outcome of negotiations will have far-reaching consequences for the digital economy. Many countries have paused their unilateral digital service taxes until the OECD completes the negotiation process.

Digital trade

At the beginning of the year, all eyes will turn to the 13th WTO Ministerial, which will take place from 26 to 29 February in Abu Dhabi. The Joint Statement on e-commerce, a plurilateral negotiating process currently ongoing among 90 WTO members, is likely to achieve a ‘partial delivery’. While preliminary agreement has been reached on several topics, such as paperless trading, open government data, online consumer protection, cybersecurity, open internet access (net neutrality), and personal data protection, the most controversial issues remain unresolved.

Read more on predictions…

This is the case for negotiations on data flows, which have suffered a significant setback following the USA’s decision to withdraw from negotiations on this topic, citing the need to preserve domestic policy space on data regulation. The 13th WTO Ministerial is likely to produce an agreement on e-commerce, but one significantly less ambitious than initially foreseen. 

Digital Economy Agreements (DEAs) will likely continue to hit the headlines. This digital-only type of free trade agreement is being concluded by countries around the world, and notably in the Asia-Pacific. DEAs not only deal with traditional issues, such as customs duties, online consumer protection, and electronic authentication, but also tackle emerging trends and technologies that are not yet considered to be ‘treaty-ready’, such as digital identities and AI, establishing platforms for collaboration and harmonisation. As multilateral discussions become increasingly stalled, the role and importance of DEAs in digital governance is likely to grow in 2024.

The value of bitcoin more than doubled in 2023 (see graph below). Regulatory frameworks for cryptocurrencies will become more sophisticated, aiming to balance innovation with financial stability and consumer protection. In 2024, more countries will introduce central bank digital currencies (CBDCs). 

The image shows a line graph depicting the rising value of bitcoin throughout 2023, which on 31 December 2023 was 35563.99 USD.


More from Diplo and GIP:

Taxation | Cryptocurrencies | E-commerce and trade | WTO Joint Statement Initiative on E-commerce


 Light, Text, Art, Graphics

Technical standardisation increased in relevance last year for several reasons:

First, dealing with increasingly complex and interdependent AI and digital systems requires standardised solutions.

Second, standardisation became a geoeconomic and geopolitical issue with actors gaining strategic advantages by ensuring that standards reflect their priorities and strengths.

Third, human rights considerations became much more prominent, especially in standards for AI technology.

Out of hundreds of standardisation processes, here are a few highlights:

Digital public infrastructure (DPI): India, a DPI initiative champion, pushed for DPI standards during the World Telecommunication Standardisation Assembly (WTSA) held in New Delhi.

Mobile standards: On 6G standardisation, China was actively involved in ITU processes. The USA and its allies agreed to coordinate efforts on 6G technologies.

Brain-computer interface: China initiated a standardisation process on BCI.

Predictions from January 2024…

Tech standards—especially those adopted at an international level—are essential for interoperability and ensuring that technologies work seamlessly across borders. They also enable quality of service, safety and security, and can serve as de facto governance tools, in particular for newer technologies that are not yet subject to (strong) regulations. We saw this in 2023, when standard-setting bodies responded fast to calls for AI governance mechanisms by focusing on the development of standards. Right now, there are over 300 existing and under-development standards for AI, nationally and internationally. 

The standardisation community reacted fast in 2023 by adopting a few AI standards. More standards are expected, especially on issues such as monitoring and evaluating AI foundation models. Beyond AI, we can expect a focus on standards for high-speed mobile networks (6G), quantum computing, brain-machine interfaces, and other similar advanced technologies. 

Outside of traditional standard-setting bodies, minilateral and multilateral processes such as the G7, G20, and the UN will explore technical standards as a ‘soft regulation’ approach at a time when there is little appetite for international treaties, continuing a trend from previous years. 

In 2024, this AI standardisation work will accelerate, also encouraged by the growing recognition—from traditional regulators and multilateral bodies—of the importance of standards in meeting public policy objectives. 

Beyond AI, standards around the new high-speed mobile network (6G) will be in focus, in particular at the ITU and 3GPP. ITU’s World Radiocommunication Conference at the end of 2023 set the starting ground for these developments. Telecom operators around the world are testing the new high-speed networks, and research is in full swing. Standardisation work for quantum computing, quantum communication networks, virtual reality, and brain-computer interfaces will likely also accelerate this year, as these technologies are on a fast development track, not least because they benefit from advancements in AI. 

As human rights issues are increasingly brought up in standardisation discussions, there will be a push for human-rights-by-design approaches to be embedded into technical standards that form part of the design and development process of new hardware and software.


More from Diplo and GIP:

Digital standards

 Light, Art, Graphics, Text, Logo

In 2024, policymakers in the EU, the US, and Singapore mandated robust encryption and secure practices, making encryption a foundational element of software and system design. EU member states incorporated the requirements of the Network and Information Security Directive (NIS2) into their national legislation, mandating strong encryption measures to protect critical infrastructure such as energy grids and healthcare networks. Meanwhile, Singaporean regulators placed encryption at the center of their “secure-by-deployment” approach.

Within the EU, the primary controversy over encryption policy revolved around the proposed ‘Chat Control’ bill, which would require technology platforms to scan their services for illegal activities. This proposal faced strong pushback from civil society and many EU parliamentarians, as it could create a backdoor for access to encrypted communication.

The major development in encryption in 2024 was the release of the new post-quantum cryptography standards by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST). These standards introduce the first set of encryption algorithms designed to resist attacks by quantum computers. The new NIST standards represent a proactive measure, urging system administrators to transition to these more robust encryption methods to safeguard security and privacy in the era of powerful quantum computing.
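To make the transition concrete, here is a minimal sketch (not drawn from the NIST standard itself) of the ‘hybrid’ pattern many deployments use while migrating: one session key is derived from both a classical and a post-quantum shared secret, so an attacker would need to break both exchanges. The function name and the random stand-in values are illustrative assumptions.

```python
import hashlib
import secrets

def hybrid_secret(classical_ss: bytes, pq_ss: bytes) -> bytes:
    """Derive one session key from both shared secrets; recovering it
    requires breaking BOTH the classical and the post-quantum exchange."""
    return hashlib.sha256(classical_ss + pq_ss).digest()

# Stand-ins for the outputs of the two key exchanges:
classical_ss = secrets.token_bytes(32)  # e.g. from an X25519 exchange
pq_ss = secrets.token_bytes(32)         # e.g. from ML-KEM (FIPS 203)
session_key = hybrid_secret(classical_ss, pq_ss)
print(len(session_key))  # 32 bytes
```

The design choice here is deliberate: during the migration period, hybrid constructions hedge against both a future quantum attack on the classical scheme and any undiscovered weakness in the new post-quantum algorithms.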

Read more on predictions…

The decades-long saga around online encryption will gain new momentum in 2024. In their push to get more access to encrypted communication, some governments are proposing client-side (on-device) scanning of communications for illegal content such as child pornography. Once such content is identified, an alert is sent to law enforcement authorities. After the message leaves the device, it will enjoy end-to-end encryption. 

Encryption serves as a vital tool for protecting personal and corporate data, yet it also poses significant challenges to criminal investigations. Traditionally, governments have been pushing for backdoor access to encrypted content—via online platforms—while tech companies and human rights actors have been resisting. In 2024, a more nuanced debate is gaining ground around the notion of client-side scanning: using algorithms to scan messages locally (on-device) for illegal content, and enabling the reporting of red flags to authorities. 
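A toy sketch of the client-side scanning idea follows. Real proposals use perceptual hashes that survive resizing and re-encoding, not the plain cryptographic hash used here, and the blocklist entry is a made-up placeholder.

```python
import hashlib

# Hypothetical blocklist distributed to devices as hashes, so the
# device never stores the illegal content itself.
BLOCKLIST = {hashlib.sha256(b"known-illegal-content").hexdigest()}

def scan_before_encrypting(message: bytes) -> bool:
    """Runs on-device, before end-to-end encryption is applied.
    Returns True if the message would be flagged to authorities."""
    return hashlib.sha256(message).hexdigest() in BLOCKLIST

print(scan_before_encrypting(b"known-illegal-content"))  # True (flagged)
print(scan_before_encrypting(b"holiday photos"))         # False (sent as normal)
```

The policy debate centres on exactly this pre-encryption step: the message is still end-to-end encrypted in transit, but it has already been inspected on the device.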

Proponents—including some governments—argue that on-device scanning would preserve the end-to-end encryption principle, but still support law enforcement in their fight against crime. Opponents—including some tech companies like Meta—argue that ‘client-side scanning could mark the end of private communication online’, as the technology could also serve as a tool for mass surveillance. 


 Light, Art, Graphics, Nature, Night, Outdoors, Text

By 2024, over 5 billion individuals globally have adopted digital IDs, with more than 150 countries implementing digital ID programmes. This widespread adoption underscores the critical role of digital identities in enhancing access to services and promoting economic inclusion.

Last year was characterised by more intensive use of AI in digital identification. Privacy concerns over potential misuse triggered research proving that decentralised identification and self-sovereign identity are technically possible.

The most prominent initiatives in 2024 include:

Use of India’s Aadhaar verification system as an open-source solution in developing countries.

The EU has tackled digital identities from a wide range of angles. It started testing technical solutions for the EU Digital Identity Wallet (EUDI). Implementation of the use of biometric data for the entry of non-EU citizens was delayed due to technical problems.

In the United States, the National Institute of Standards and Technology (NIST) updated its Digital Identity Guidelines to strengthen anti-fraud measures and promote accessibility.

Argentina launched QuarkID, a government-backed blockchain identity system that ensures a high level of privacy protection.

The Internet Governance Forum (IGF) hosted a session on digital identities, with the main takeaway being the need to balance privacy, security, and inclusivity in digital identity projects, which are becoming critical infrastructure for modern society and the economy.

Predictions from January 2024…

Digital identities will gain relevance in 2024. The most interesting development will be related to the Digital Public Infrastructure (DPI) initiative, which provides a solution for managing identities online. DPI was endorsed by G20 during the New Delhi Summit and gained relevance in international debates in 2023. 

Having a digital identity is becoming critical for individual participation in the economic and social life of digitalised societies. Digital identity is used for access to government, economic, and other services. Proper technical and governance solutions for digital identity involve a delicate interplay between the protection of privacy, access to societal services, financial transactions, and other aspects of our lives that require proof of identity.

Identity management opens many controversies. For example, the proposed revision of the EU regulation on electronic identification, authentication and trust services (eIDAS regulation) has attracted criticism over certain provisions that would allegedly enable governments to intercept web traffic without effective recourse, limit the security measures that could be taken to protect web traffic connections, and otherwise deteriorate privacy safeguards. 

In 2024, there will be more discussion on the interoperability of online identities between countries, as national jurisdictions over identities collide with the trans-border nature of most digital services and activities. 

More from Diplo and GIP:

Digital identities


 Art, Graphics, Light, Logo, Text

Content was central in 2024 due to the series of major elections held in close to 70 countries, including the USA, the UK, and India. Social media platforms were very proactive in containing the risks of their misuse for election purposes. TikTok’s launch of an Election Centre for EU member states, Meta’s establishment of an Elections Operations Centre, and Google’s Jigsaw unit’s campaigns across TikTok and YouTube are commendable efforts to curb election disinformation.

On AI, Meta reported minimal AI impact on global misinformation.

Predictions from January 2024…

In 2024, according to the Economist, 4.2 billion people in 76 countries will hold some sort of national election. Digital platforms and tools will play an important role in campaigns and elections. Their relevance increases given the high importance of these elections for the future of democracy itself.

AI empowers the generation of content, including fake and manipulative content such as the following deepfake of Trump and Fauci posted by Ron DeSantis.

The image shows a screenshot of a tweet from the account @DeSantisWarRoom. The tweet reads: "Donald Trump became a household name by FIRING countless people on television. But when it came to Fauci..." The text is followed by an image which shows six deepfake photographs of Trump and Fauci, sitting together, smiling, or hugging.

Link: https://www.youtube.com/watch?v=hLuUmNkS21A

Some platforms like TikTok, Discord, and Twitch are developing new tools to handle election disinformation, while others like X and Meta have rolled back their policies. However, experts have already expressed concerns about platforms not having sufficient resources for monitoring the complexity of online content during elections. 

In Michigan, USA, there is impending legislation to regulate AI in political advertising, highlighting the need for transparency and accountability in how AI is leveraged during electoral processes.

Detection of AI-generated content

The race to create and detect AI content will speed up in 2024. Those who use AI to generate text, video, and sound have a significant advantage in the race. The “AI detecting” camp attempts to catch up using two major approaches: detection and watermarking.

Read more on predictions…

First, AI detection applications and platforms are failing. It became clearer in July when OpenAI discontinued its platform for detecting AI-generated text. The probabilistic core of generative AI makes it difficult to determine whether AI has generated content. In addition, AI foundation models are rapidly improving their ability to mimic human creativity. Thus, early signs of ‘AI creativity’, such as imperfect drawings of human hands, are becoming less useful as AI advances. AI detection can easily lead to false positives (misclassifying human content as AI-generated) or false negatives (failing to identify machine-generated content).
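The two error types can be made concrete with a small, made-up sample and the standard false-positive and false-negative rates:

```python
# 1 = AI-generated, 0 = human-written (toy labels, not real data)
truth     = [1, 1, 1, 0, 0, 0, 0, 0]
predicted = [1, 0, 0, 1, 0, 0, 0, 0]  # output of an unreliable detector

fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
fpr = fp / truth.count(0)  # share of human text wrongly flagged as AI
fnr = fn / truth.count(1)  # share of AI text that slips through
print(fpr, fnr)  # 0.2 0.6666666666666666
```

A detector like this would unfairly flag one human author in five while missing most AI text, which is the double failure mode that makes such tools risky in practice.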

A recent study of 14 AI-detection tools used in universities found that they were neither accurate nor reliable. There are increasing cases of students being wrongfully accused of AI plagiarism. Many universities have stopped using AI plagiarism platforms due to the ethical and legal risks.

The second main approach is to watermark AI-generated content. It has gained traction among regulators, businesses, and researchers. Watermarking is more promising than AI detection, but it is unlikely to be a foolproof solution.

For example, “Tree-Ring” watermarking is built into the process of generating AI images using diffusion models, which start with messy images that are gradually sharpened. The ‘Tree-Ring’ method embeds a watermark during the early noise phase. Watermarks can be detected during the noise phase of image generation by reverse engineering the final image.
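As a rough illustration (not the actual Tree-Ring implementation, which operates on a diffusion model’s initial latent noise and requires inverting the sampler), the core idea of writing rings into the Fourier spectrum of the noise and checking for them later can be sketched as:

```python
import numpy as np

def ring_mask(n, radii):
    """Boolean mask of concentric rings (the 'tree rings') in Fourier space."""
    y, x = np.ogrid[:n, :n]
    r = np.hypot(x - n // 2, y - n // 2)
    mask = np.zeros((n, n), dtype=bool)
    for r0 in radii:
        mask |= np.abs(r - r0) < 1.0
    return mask

def embed(noise, key, radii):
    """Overwrite the ring frequencies of the initial noise with the key value."""
    f = np.fft.fftshift(np.fft.fft2(noise))
    f[ring_mask(noise.shape[0], radii)] = key
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

def detect(noise, key, radii, tol=0.1):
    """Check whether the ring frequencies still carry the key."""
    f = np.fft.fftshift(np.fft.fft2(noise))
    vals = f[ring_mask(noise.shape[0], radii)]
    return np.mean(np.abs(vals - key)) < tol * abs(key)

rng = np.random.default_rng(0)
n, radii, key = 64, [8, 12, 16], 50.0
noise = rng.standard_normal((n, n))
marked = embed(noise, key, radii)
print(detect(marked, key, radii))  # True
print(detect(noise, key, radii))   # False
```

Because the rings live in frequency space rather than in individual pixels, they tolerate certain image transformations better than pixel-space marks, which is part of the design intuition behind Fourier-space watermarks.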

The race continues between those attempting to conceal watermarks and those trying to detect them. In parallel, many policymakers advocate for a watermarking approach. In 2023, the US government and a few technology companies made ‘voluntary commitments’ to support watermarking research.

Given the limited reliability of detection and watermarking techniques, the most helpful approach is the age-old one of determining an image’s source via email, URL, and institutional/individual credibility. Because we cannot trust AI systems, we stay with the old approach of trusting (or not) people and institutions who send us content, whether AI or human-generated.


Content moderation

Major tech companies, such as Alphabet, X, Meta, and TikTok, will play an increasingly prominent role in content policy and moderation. They are becoming de facto content regulators, determining what content is allowed and what is removed from their platforms. In 2024, these companies are expected to continue adapting their content moderation policies to address the growing concerns around misinformation, fake news, and violent content.

Content regulation

The implementation of the EU’s Digital Services Act (DSA) will gain momentum in 2024. Analogous to the ‘Brussels effect’ in data governance, the DSA is likely to be emulated in other jurisdictions worldwide. 

The internet has become an unregulated space where violent ideologies flourish unchecked. The US and EU’s joint statement on cyber resilience reflects a concerted effort to address cyberterrorism and online radicalisation.

Because governments and tech companies lack adequate policies and technical tools for content governance, the arbitrary prohibition of certain content could cause significant social and political unrest. Tensions will spill over into the streets and squares from online spaces. Following the crisis of 2024, more stringent content governance policies will emerge.

Content fragmentation

The digital space is fragmenting with the development of smaller, segregated online communities of like-minded people. For example, when Trump was banned from Twitter, he moved to the Truth Social platform, which gathers users with similar views. 

This trend of fragmentation means a further disintegration of social spaces and ‘online squares’ with far-reaching consequences for social cohesion and political systems.

Transparency

Social media companies are becoming less and less transparent about their activities and business models. Most of them have stopped or reduced independent researchers’ access to their data. For example, X (Twitter) ended free access to the platform’s API, and Meta is also restricting access to its services. The EU’s Digital Services Act provides regulatory solutions via provisions that allow researchers to monitor social network platforms. The lack of transparency on social media platforms will be particularly problematic in monitoring the online aspects of the forthcoming elections in 2024.


 Art, Graphics, Light, Text, Logo

Inclusion took centre stage during the negotiation of the Pact for the Future and the Global Digital Compact. Digital inclusion, connectivity, and bridging the digital divide were prioritised in the UN discussions.

2024 added a groundbreaking twist to the inclusion discussion: the rise of AI sparks pressing questions about how major AI platforms reflect—and sometimes distort—cultural, religious, and regional diversity. Do the answers we get from the major AI platforms reflect the philosophical and other traditions of communities worldwide?

Predictions from January 2024…

In 2024, AI will trigger new aspects of inclusion in addition to the traditional issues of internet access. The main issue is the incorporation of knowledge from various cultural, regional, and civilisation traditions in the development of AI models. Current AI models are based on limited datasets, primarily Western ones. In the coming years, communities will aim to develop bottom-up AI solutions that reflect their cultural and knowledge heritage.

Inclusion is a cornerstone principle of the 2030 Agenda for Sustainable Development, and one that should guide all our efforts to ensure that no-one is left behind in the march into a brighter global future. We must make certain that all citizens, communities, and nations benefit from the historic transition to a digital world, and that special attention is paid to those groups that have historically been neglected or ill-served by technological progress, such as women and girls, those with disabilities, youth and children, and indigenous peoples.

In the 2020s, the challenges of digital inclusion will demand a holistic approach that is able to take into account all of the following policy areas: 

  • Access inclusion: equal access to the internet, information/content, digital networks, services, and technologies.
  • Financial inclusion: access to affordable and trustworthy digital financial and banking services, including e-commerce, e-banking, and insurance.
  • Economic inclusion: facilitate all individuals’, groups’, and communities’ ability to participate fully in the labour market, entrepreneurship opportunities, and other business and commercial activities.  
  • Work inclusion: support and promote equal access to careers in the tech industry and elsewhere irrespective of gender, culture, or nationality. 
  • Gender inclusion: educate and empower women and girls in the digital and tech realms.
  • Policy inclusion: encourage the participation of stakeholders in digital policy processes at the local, national, regional, and international levels. 
  • Knowledge inclusion: contribute to knowledge diversity, innovation, and learning on the internet. 

As we endeavour to find unified responses in this varied range of spheres, and as we are forced to make informed trade-offs between different goals and interest groups, clarity in our thinking about statistics and policy will be essential. Without it, we will be negligent in our duty to work towards the digital inclusion of the ‘next’ or ‘bottom’ billion of digitally excluded citizens of the world.


More from Diplo and GIP:

Digital access | Sustainable development | Inclusive finance


Stay ahead of the curve with the update on the latest digital developments at Digital Watch, sharpen your skills in Diplo courses, and harness the power of advanced AI tools.

Want more insights? Subscribe to our Diplo Newsletters and feel free to contact Diplo directly.

Let’s explore the AI and digital future together!

Previous predictions and forecasts: 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024

Valtazar Bogišić: How can the 1888 Code inspire the AI code?

Note: I had the pleasure of talking often with the late Ambassadors Ljubiša Perović and Milan Begović about the applicability of Valtazar Bogišić’s legal philosophy to our era. I dedicate this text to their memory. This updated 2024 version was prepared ahead of the annual meeting of the International Forum on Diplomatic Training, held from 8 to 12 October in Becici, Montenegro, with the aim of linking Montenegrin cultural, legal, and political heritage to the challenges of the AI era ahead.

Today, as we search for the best way to regulate AI, let us seek inspiration from past philosophers and thinkers. One of them is Valtazar Bogišić, who drafted a legal masterpiece of its time: the General Property Code for the Principality of Montenegro in 1888. 

As his wisdom remains confined to a small circle of legal scholars in Montenegro and the Balkans, this text aims to unlock a hidden treasure of thought that can help drafters of AI codes and regulations worldwide. 

Similar to the current AI transformation, Valtazar Bogišić in 1880 had to anchor what was then modern civil-code regulation in the customs of Montenegrin society. He had to sync two different legal, policy, and societal spaces. AI regulators and diplomats face similar dilemmas nowadays, as global AI platforms developed in the cultural context of Silicon Valley are used to address the family, personal, and emotional issues of very diverse societies and cultures worldwide. What can we learn for the AI era from Bogišić’s way of linking modernity and tradition in the 19th century? 

Valtazar Bogišić – Painting by Vlaho Bukovac (1892)

Combination of customary and modern law

The Montenegrin 1888 code successfully integrated traditional customary law with the modern legal principles of the Code Civil (Napoleonic Code), introduced after the French Revolution. Bogišić conducted an in-depth study of Montenegrin customs and legal traditions and incorporated them into the code, thus preserving the authenticity of local traditions while introducing modern legal concepts.

In contrast, modern digital and AI regulations and strategies are typically a ‘copy and paste’ exercise. This approach worked for the rather technical cybersecurity and data protection issues that dominated the first two decades of our century: you could have used similar regulations to protect cyber infrastructure in, for example, Germany and Kenya. 

Conversely, AI poses a different type of challenge, as it is much more than technology: AI codifies social and cultural norms. Thus, regulation drafted for the cultural context of Germany cannot be used to deal with, let’s say, family issues such as divorce in Kenya. Therefore, drafters of AI policies and regulations should, like Bogišić in the 1880s, understand their society and adjust AI models and governance to their respective local cultural and societal contexts. 

Simplicity and clarity

The 1888 Montenegrin code was written in simple and understandable language, making it accessible to the general population and reminiscent of the simple and precise formulations of Roman law. For example, Bogišić’s formulation that an initial injustice or legal defect cannot be fixed by the passage of time reads:

‘Što se grbo rodi, vrijeme ne ispravi.’ It has an equally simple formulation in Latin: ‘Quod natura curvum est, nemo corrigit.’

The gist of this formulation in English – ‘what is born crooked, time does not straighten’ – is, in modern legal regulations, usually explained through long paragraphs filled with legal terms, making them incomprehensible to a wider population, thus preventing them from understanding all possible implications of the policies discussed. 

Most AI regulations are similar, i.e. they are written beyond comprehension. The EU AI Act runs to 144 pages, which is comparable to other AI regulations. Grasping them requires in-depth knowledge of AI, which creates a major obstacle for the legal profession, let alone citizens. While some AI complexity is real, it can be explained simply and clearly. Modern AI drafters can learn a lot from Bogišić!

Societal acceptance

The people of Montenegro fully accepted the 1888 code as their own law, thanks to Bogišić’s work linking their existing customs to modern law in clear and understandable language. Thus, the code was easily applied in everyday life, making it an effective tool for regulating property and family relations in Montenegro. Bogišić’s code was one of the laws that required the least force and fewest sanctions to implement.

Implementing the current AI regulation is complex, mainly due to the incomprehensibility of the language used. As a result, most rules remain just letters on paper, as companies and developers follow their own ways of developing and deploying AI systems. 

Even when AI regulation becomes enforceable, its implementation relies mainly on the threat of sanctions and fines. Again, Bogišić can inspire a regulatory approach that relies more on societal acceptance of rules than on punishments. 

Protection of vulnerable social groups

One of the Montenegro Code’s main goals was to protect poorer segments of society. The code introduced certain social measures that protected the rights of peasants and small property owners, thereby promoting social justice, which also helped the code’s smoother acceptance.

This starkly contrasts with the current situation, where AI is widening societal and digital divides between the masters of AI and the rest of society. If not addressed properly, these new divides around access to knowledge will tear societal fabrics worldwide and create inevitable tensions and conflicts.

Agile regulations

Fully aware of societal changes, Bogišić designed the code in such a way that it could adapt to future social and economic developments. This was achieved through the flexibility in interpreting and applying the legal provisions.

AI’s rapid and often unpredictable development quickly makes many rules obsolete. Last year’s frenzied calls for regulating long-term risks brought us to the edge of adopting rigid AI regulations in anticipation of possible future developments. Fortunately, this was avoided. This year brought a much more realistic approach to AI regulation, focusing on concrete issues such as jobs, education, and content management. AI regulators are getting closer to Bogišić’s ‘agile regulation’ approach.  

Conclusion

Bogišić’s work helped Montenegro transition from a traditional to a modern society. He drafted laws compatible with European legal standards at that time while tailoring them to Montenegrin society’s specific needs. Today, many societies are searching for similar social contracts to anchor the latest AI developments into local cultural, legal, and societal contexts. 

Bogišić’s legacy is significant for diplomats working to sync between international and national dynamics. In the coming years, many hours of their work will ensure global AI developments are anchored well into national legal, cultural, and societal dynamics. They will have to negotiate not only with their counterparts from other countries but also with their constituencies back home. In this context, the International Forum for Diplomatic Training members should act fast to prepare future diplomats for new and demanding tasks in the AI era. Bogisic’s work and wisdom can inspire us all!