Four seasons of AI: From excitement to clarity in the first year of ChatGPT

Winter of excitement | Spring of metaphors | Summer of reflections | Autumn of clarity 

ChatGPT was launched by OpenAI on the last day of November in 2022. It triggered a lot of excitement. We were immersed in the magic of a new tool as AI was writing poems and drawing images for us. Over the last 12 months, the winter of AI excitement was followed by a spring of metaphors, a summer of reflections, and the current autumn of clarity.

On the first anniversary of ChatGPT, it is time to step back, reflect, and see what is ahead of us.

Winter of Excitement 

ChatGPT saw the fastest user adoption in the history of technology. In only 5 days, it acquired 1 million users; Instagram, by comparison, needed 75 days to reach the same milestone. In only two months, ChatGPT reached an estimated 100 million users. 

The launch of ChatGPT last year was the result of countless developments in AI dating all the way back to 1956! These developments accelerated in the last 10 years with probabilistic AI, big data, and dramatically increased computational power. Neural networks, machine learning (ML), and large language models (LLMs) set the stage for AI’s latest phase, which brought tools like Siri and Alexa, and, most recently, generative pre-trained transformers, better known as GPTs, which are behind ChatGPT and other recent tools. 

ChatGPT started mimicking human intelligence by drafting our texts for us, answering questions, and creating images. 

Spring of Metaphors

The powerful features of ChatGPT triggered a wave of metaphors in the spring of this year. We humans, whenever we encounter something new, use metaphors and analogies to compare its novelty to something we already know. 

Most AI is anthropomorphised and typically described as a human brain that ‘thinks’ and ‘learns’. ‘Pandora’s box’ and ‘black box’ are terms used to describe the opacity and complexity of neural networks. As spring advanced, more fear-based metaphors took over, centred on doomsday, Frankenstein, and Armageddon.

As discussions on governing AI gained momentum, analogies were drawn to climate change, nuclear weapons, and scientific cooperation. All of these analogies highlight similarities while ignoring differences. 

Summer of Reflections 

Summer was relatively quiet, and it was a time to reflect on AI. Personally, I dusted off my old philosophy and history books in search of old wisdom for current AI challenges, which go far beyond simple technological solutions. 

Under the series ‘Recycling Ideas’, I dove back into ancient philosophy, religious traditions, and different cultural contexts, from Ancient Greece to Confucius, India, and the African concept of Ubuntu, among others.

Autumn of Clarity

Clarity pushed out hype as AI increasingly made its way onto the agendas of national parliaments and international organisations. Precise legal and policy formulations have replaced the metaphorical descriptions of AI. In numerous policy documents from various groupings—G7, G20, G77, G193, UN—the usual balance between opportunities and threats has shifted more towards risks. 

Some processes, like the UK AI Safety Summit, focused on the long-term existential risks of AI. Others gave more ‘weight’ to the immediate risks of AI (re)shaping our work, education, and public communication. In the search for governance inspiration, proposals often mentioned the International Atomic Energy Agency (IAEA), CERN, and the Intergovernmental Panel on Climate Change (IPCC). 

A year has passed: What’s next?

AI will continue to become mainstream in our social fabric, from individual choices and family dynamics to jobs and education. As the structural relevance of AI increases, its governance will require even more clarity and transparency. As the next step, we should focus on the two main issues at hand: how to address AI risks and what aspects of AI should be governed. 

How to address AI risks  

Three main types of AI risks should shape AI regulations: short-term, mid-term, and long-term. 

Unfortunately, it is currently the long-term, ‘extinction’ risks that tend to dominate public debates. 

AI Risks Venn Diagram

Short-term risks: These include loss of jobs, protection of data and intellectual property, loss of human agency, mass generation of fake texts, videos, and sounds, misuse of AI in education processes, and new cybersecurity threats. We are familiar with most of these risks, and while existing regulatory tools can often be used to address them, more concerted efforts are needed in this regard.

Mid-term risks: We can see them coming, but we aren’t quite sure how bad or profound they could be. Imagine a future where a few big companies control all AI knowledge and tools, just as tech platforms currently control people’s data, which they have amassed over the years. Such AI power could let them control our businesses, lives, and politics. If we don’t figure out how to deal with such monopolies in the coming years, they could bring humanity to the worst dystopian future in only a decade. Some policy and regulatory tools can help deal with AI monopolies, such as antitrust and competition regulations, as well as data and intellectual property protection.

Long-term risks: The scary sci-fi stuff, or the unknown unknowns. These are the existential threats, the extinction risks that could see AI evolve from servant to master, jeopardising humanity’s very survival. After very intensive doomsday propaganda through 2023, these threats haunt the collective psyche and dominate the global narrative with analogies to nuclear armageddon, pandemics, or climate cataclysms. 

The dominance of long-term risks in the media has influenced policy-making. For example, the Bletchley Declaration adopted during the UK AI Safety Summit heavily focuses on long-term risks, mentioning short-term ones only in passing and making no reference to any medium-term risk. 

The AI governance debate ahead of us will require: (a) addressing all risks comprehensively, and (b) whenever prioritisation is required, making those decisions in transparent and informed ways. 

Dealing with risks is nothing new for humanity, even if AI risks are new. In the environmental and climate fields, there is a whole spectrum of regulatory tools and approaches, such as precautionary principles, scenario building, and regulatory sandboxes. The key is that AI risks require transparent trade-offs and constant revisiting in light of technological developments and society’s responses.

What aspects of AI should be governed? 

In addition to AI risks, the other important question is: What aspects of AI should be governed? As the AI governance pyramid below illustrates, AI developments relate to computation, data, algorithms, and uses. Selecting where and how to govern AI has far-reaching consequences for AI and society. 

The AI governance pyramid

Computation level: The main question is access to the powerful hardware that runs AI models. In the race for computational power, two key players—the USA and China—try to limit each other’s access to semiconductors that can be used in AI. A key actor is Nvidia, which manufactures the graphics processing units (GPUs) critical for running AI models. With the support of advanced economies, the USA has an advantage over China in semiconductors, which it tries to preserve by limiting access to these technologies via sanctions and other restriction mechanisms.

Data level: This is where AI gets its main inputs, sometimes called the ‘oil’ of the AI industry. However, in current AI debates, the protection of data and intellectual property is not as prominent as the regulation of AI algorithms. There are more and more calls for clarity on what data and inputs are used. Artists, writers, and academics are checking whether AI platforms have built their fortunes on their intellectual work. Thus, AI regulators should put much more pressure on AI companies to be transparent about the data and intellectual property used to develop their models.

Algorithmic level: Most of the AI governance debate concerns algorithms and AI models. These debates mainly focus on the long-term risks that AI can pose to humanity. On a more practical level, the discussion centres on the relevance of ‘weights’ in developing AI models: how to highlight the relevance of certain input data and knowledge in generating AI responses. Those who highlight security risks also argue for centralised control of AI developments, preferably by a few tech companies, and for restricting the open-source approach to AI.

Apps and tools level: This is the most appropriate level for regulating technology. For a long time, the main focus of internet governance was on the level of use, while avoiding any regulatory intervention into how the internet functions, from standards to the operation of internet infrastructure (like internet protocol numbers or the domain name system). This approach was one of the main contributors to the fast growth of the internet. Thus, the current calls to shift regulation to the algorithmic level (under the bonnet of technology) could have far-reaching consequences for technological progress. 

Current debates on AI governance focus on at least one of these layers. For example, at the core of the last mile of negotiations on the EU’s AI Act is the question of whether AI should be governed at the algorithmic level or once models become apps and tools. The prevailing view is that it should be done at the top of the pyramid—apps and tools.

Interestingly, most supporters of governing AI codes and algorithms, often described as ‘doomsayers’ or ‘longtermists’, rarely mention governing AI apps and tools or their data aspects. Both data and the use of AI are areas with more detailed regulation, which is often not in the interest of tech companies. 


x x x

On the occasion of the very first birthday of ChatGPT, the need for clarity in AI governance prevails. It is important that this trend continues, as we need to make complex trade-offs between short-, medium-, and long-term risks. 

At Diplo, our focus is on anchoring AI in the core values of humanity through our humAInism project and community. In this context, we will concentrate on building awareness among citizens and policymakers. We need to understand AI’s basic technological functionality without going into complex terminology. Most of AI is about patterns and probability, as we recently discussed with diplomats while explaining AI via patterns of colours in national flags. 
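The flag example can be sketched in a few lines of code. The snippet below is a deliberately tiny illustration (the handful of flags and colours are hand-picked for the example): it counts which colours appear together and then ‘predicts’ the most probable companion of a given colour. At miniature scale, this is the same patterns-and-probability logic that underlies much larger AI models.

```python
from collections import Counter
from itertools import combinations

# A tiny, hand-picked sample of flag colour sets (illustrative only).
flags = {
    "France": {"blue", "white", "red"},
    "Netherlands": {"red", "white", "blue"},
    "Italy": {"green", "white", "red"},
    "Ireland": {"green", "white", "orange"},
    "Nigeria": {"green", "white"},
}

# Count how often each pair of colours appears together on a flag.
pairs = Counter()
for colours in flags.values():
    for pair in combinations(sorted(colours), 2):
        pairs[pair] += 1

def most_likely_companion(colour):
    """Guess the colour most frequently seen alongside the given one."""
    candidates = Counter()
    for (a, b), n in pairs.items():
        if colour == a:
            candidates[b] += n
        elif colour == b:
            candidates[a] += n
    return candidates.most_common(1)[0][0]

print(most_likely_companion("green"))  # 'white' in this tiny sample
```

The model has no idea what a flag is; it simply exploits statistical regularities in its (very small) data, which is the essence of pattern-based prediction.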

Why not join us in working for an informed, inclusive, meaningful, and impactful debate on AI governance? 

Valtazar Bogišić: How can the 1888 Code inspire the AI code?

Note: I had the pleasure of talking often with the late Ambassadors Ljubiša Perović and Milan Begović about the applicability of Valtazar Bogišić’s legal philosophy to our era. I dedicate this text to their memory. This updated version was prepared ahead of the annual meeting of the International Forum on Diplomatic Training, held between 8 and 12 October 2024 in Becici, Montenegro, with the aim of linking Montenegrin cultural, legal, and political heritage to the challenges of the AI era ahead.

Today, as we search for the best way to regulate AI, let us seek inspiration from past philosophers and thinkers. One of them is Valtazar Bogišić, who drafted a legal masterpiece of its time – the General Property Code for the Principality of Montenegro in 1888. 

As his wisdom remains confined to a small circle of legal studies in Montenegro and the Balkans, this text aims to unlock this hidden thinking treasure that can help drafters of AI codes and regulations worldwide. 

Similar to today’s AI transformation, Valtazar Bogišić had, in the 1880s, to anchor what was then modern civil-code regulation in the customs of Montenegrin society. He had to sync two different legal, policy, and societal spaces. AI regulators and diplomats face similar dilemmas nowadays, as global AI platforms developed in the cultural context of Silicon Valley are used to address the family, personal, and emotional issues of very diverse societies and cultures worldwide. What can we learn for the AI era from Bogišić’s way of linking modernity and tradition in the 19th century? 

Valtazar Bogišić – Painting by Vlaho Bukovac (1892)

Combination of customary and modern law

The 1888 Montenegrin code successfully integrated traditional customary law with the modern legal principles of the Code Civil (Napoleonic Code), introduced after the French Revolution. Bogišić conducted an in-depth study of Montenegrin customs and legal traditions and managed to incorporate them into the code, thus preserving the authenticity of local traditions while introducing modern legal concepts.

In contrast, modern digital and AI regulations and strategies are typically a ‘copy and paste’ exercise. This approach worked for the rather technical cybersecurity and data protection issues that dominated the first two decades of our century: similar regulations could protect cyber infrastructure in, for example, Germany and Kenya. 

Conversely, AI poses a different type of challenge, as it is much more than technology: AI codifies social and cultural norms. Thus, the cultural context of Germany cannot be used to deal with, let’s say, family issues such as divorce in Kenya. Therefore, drafters of AI policies and regulations should, like Bogišić in the 1880s, understand their society and adjust AI models and governance to their local cultural and societal contexts. 

Simplicity and clarity

The 1888 Montenegrin code was written in simple and understandable language, making it accessible to the general population and reminiscent of the simple and precise formulations of Roman law. For example, Bogišić’s formulation that an initial injustice or legal problem cannot be fixed by the passage of time reads:

‘Što se grbo rodi, vrijeme ne ispravi.’ It has an equally simple formulation in Latin: ‘Quod natura curvum est, nemo corrigit.’

The gist of this formulation in English – ‘what is born crooked, time does not straighten’ – is, in modern legal regulations, usually spelled out in long paragraphs filled with legal terms, making them incomprehensible to the wider population and preventing people from understanding the full implications of the policies discussed. 

Most AI regulations are similar, i.e. written beyond comprehension. The EU AI Act runs to 144 pages, similar in length to other AI regulations. Grasping such texts requires in-depth knowledge of AI, which creates a major obstacle for the legal profession, let alone citizens. While some AI complexity is real, it can be explained simply and clearly. Modern AI drafters can learn a lot from Bogišić!

Societal acceptance

The people of Montenegro fully accepted the 1888 code as their own law, thanks to Bogišić’s work linking their existing customs to modern law in clear and understandable language. Thus, the code was easily applied in everyday life, making it an effective tool for regulating property and family relations in Montenegro. Bogišić’s code was one of the laws that required the least force and fewest sanctions to implement.

Implementing current AI regulation is complex, mainly due to the incomprehensibility of the language used. As a result, most rules remain letters on paper while companies and developers continue developing and deploying AI systems in their own ways. 

Even when AI regulation becomes enforceable, its implementation relies mainly on the threat of sanctions and fines. Again, Bogišić can inspire a regulatory approach that relies more on societal acceptance of rules than on punishment. 

Protection of vulnerable social groups

One of the Montenegro Code’s main goals was to protect poorer segments of society. The code introduced certain social measures that protected the rights of peasants and small property owners, thereby promoting social justice, which also helped the code’s smoother acceptance.

This contrasts starkly with the current situation, in which AI is widening societal and digital divides between the masters of AI and the rest of society. If not addressed properly, these new divides around access to knowledge will tear societal fabrics worldwide and create inevitable tensions and conflicts.

Agile regulations

Fully aware of societal changes, Bogišić designed the code in such a way that it could adapt to future social and economic developments. This was achieved through the flexibility in interpreting and applying the legal provisions.

AI’s rapid and often unpredictable development makes many rules obsolete fast. Last year’s frenzied calls for regulating long-term risks brought us to the brink of adopting rigid AI regulations in anticipation of possible future developments. Fortunately, this was avoided. This year brought a much more realistic approach to AI regulation, focusing on concrete issues such as jobs, education, and content management. AI regulators are getting close to Bogišić’s ‘agile regulation’ approach.  

Conclusion

Bogišić’s work helped Montenegro transition from a traditional to a modern society. He drafted laws compatible with European legal standards at that time while tailoring them to Montenegrin society’s specific needs. Today, many societies are searching for similar social contracts to anchor the latest AI developments into local cultural, legal, and societal contexts. 

Bogišić’s legacy is significant for diplomats working to sync international and national dynamics. In the coming years, much of their work will go into ensuring that global AI developments are anchored well in national legal, cultural, and societal dynamics. They will have to negotiate not only with their counterparts from other countries but also with their constituencies back home. In this context, the members of the International Forum on Diplomatic Training should act fast to prepare future diplomats for new and demanding tasks in the AI era. Bogišić’s work and wisdom can inspire us all!

How can we deal with AI risks?

Clarity in dealing with ‘known’ and transparency in addressing ‘unknown’ AI risks

In the fervent discourse on AI governance, there is an oversized focus on risks from future AI compared to more immediate issues: we are warned about the risk of extinction, the risks from future superintelligent systems, and the need to heed these warnings. But is this focus on future risks blinding us to what is actually in front of us?

Types of risks

There are three types of AI risks: short-term, mid-term, and long-term.

In this text, you can find a summary of three types of risks, their current coverage, and suggestions for moving forward.

Venn diagram of the three types of AI risks

Short-term risks include loss of jobs, protection of data and intellectual property, loss of human agency, mass generation of fake texts, videos, and sounds, misuse of AI in education processes, and new cybersecurity threats. We are familiar with most of these risks, and while existing regulatory tools can often be used to address them, more concerted efforts are needed in this regard.

Mid-term risks are those we can see coming but aren’t quite sure how bad or profound they could be. Imagine a future where a few big companies control all the AI knowledge, just as they currently control people’s data, which they have amassed over the years. They have the data and the powerful computers. That could lead to them calling the shots in business, our lives, and politics. It’s like something out of a George Orwell book, and if we don’t figure out how to handle it, we could end up there in 5 to 10 years. Some policy and regulatory tools can help deal with AI monopolies, such as antitrust and competition regulation, as well as protection of data and intellectual property – provided that we acknowledge these risks and decide we want and need to address them.  

Long-term risks are the scary sci-fi stuff – the unknown unknowns. These are the existential threats, the extinction risks that could see AI evolve from servant to master, jeopardising even humanity’s very survival. These threats haunt the collective psyche and dominate the global narrative with an intensity paralleling that of nuclear armageddon, pandemics, or climate cataclysms. Dealing with long-term risks is a major governance challenge due to the uncertainty of AI developments and their interplay with short-term and mid-term AI risks.  

The need to address all risks, not just future ones

Now, as debates on AI governance mechanisms advance, we have to make sure we’re not just focusing on long-term risks simply because they are the most dramatic and omnipresent in global media. To take just one example, last week’s Bletchley Declaration, announced during the UK’s AI Safety Summit, had a heavy focus on long-term risks; it mentioned short-term risks only in passing and made no reference to any medium-term risks.

If we are to truly govern AI for the benefit of humanity, AI risks should be addressed more comprehensively. Instead of focusing heavily on one set of risks, we should look at all risks and develop an approach to address them all.

In addressing all risks, we should also use the full spectrum of existing regulatory tools, including some used in dealing with the unknowns of climate change, such as scenario building and precautionary principles. 

Ultimately, we will face complex and delicate trade-offs that could help us reduce risks. Given the unknown nature of many AI developments ahead of us, trade-offs must be continuously made with a high level of agility. Only then can we hope to steer the course of AI governance towards a future where AI serves humanity, and not the other way around.

Jua Kali AI: Bottom-up algorithms for a Bottom-up economy

As artificial intelligence (AI) becomes a cornerstone of the global economy, AI’s foundations must be anchored in community-driven data, knowledge, and wisdom. ‘Bottom-up AI’ should grow from the grassroots of society in sustainable, transparent, and inclusive ways. 

Kenya, with its innovative bottom-up economic strategy, could play a pioneering role in new AI developments. Bottom-up AI could give farmers, traders, teachers, and the local and business communities the power to use and protect AI systems that contain their knowledge and skills that have been honed over generations. 

Kenya’s digital landscape is ripe for such innovation. It is home to a dynamic tech community and has been the cradle of numerous technological breakthroughs, such as the widely known M-Pesa mobile payment service and the Ushahidi crowdsourcing platform. 

However, there is a prevailing notion, fuelled by media narratives, that AI development is the exclusive domain of big data, massive investments, and powerful computational centres. Is it possible for Kenya to circumvent these behemoths using its indigenous ‘Jua Kali’—an informal, resourceful approach—to cultivate Bottom-Up AI?

The answer is a resounding yes, as exemplified by the advent of open-source platforms and the strategic utilisation of small but high-quality datasets. 


Jua Kali micro-enterprises

Open-source Platforms: The pillars of Bottom-up AI

Open-source AI platforms are challenging the dominant paradigm that AI necessitates colossal AI systems – as leveraged by prominent language models like ChatGPT and Bard. A purported internal document from Google candidly acknowledges this competitive edge: “They are doing things with $100 and 13 billion parameters that we struggle with at $10 million and 540 billion parameters. And they are doing so in weeks, not months.”

Names like Vicuna, Alpaca, LLaMA, and Falcon now appear alongside ChatGPT and Bard, demonstrating that open-source platforms can deliver comparable performance without extravagant costs. Moreover, they tend to be more adaptable and environmentally friendly, requiring significantly less energy for data processing.

Small and High-quality Data: The key resource of Bottom-up AI

As open-source algorithms become more accessible, the emphasis of bottom-up AI naturally shifts to data quality, which depends on data labelling, a human-intensive activity. A lot of the data labelling for ChatGPT was done in Kenya, which triggered considerable criticism over labour conditions.

Alternative approaches are feasible. As a matter of fact, at Diplo we have pioneered integrating data labelling into our regular activities, from research to training to project development. This is akin to using digital highlighters and sticky notes within our interactive frameworks, thus organically fostering Bottom-Up AI.

Text is not the sole medium for knowledge codification. We can also digitally annotate videos and voice recordings. Imagine farmers sharing their insights on agriculture and market strategies through narratives, enhancing the AI’s knowledge base with lived experiences.

Beyond Technology: Embracing organisational and societal shifts

The primary hurdle for Bottom-up AI is not technological but organisational and revolves around societal and policy priorities. Building on its digital dynamism, Kenya has the potential to lead by marrying technological advances with practical, citizen-focused applications.

Kenya’s bottom-up AI could contribute to preserving our knowledge and wisdom as a global public good, which we should pass on to future generations as the common heritage of humanity.

IGF 2023: Grasping AI while walking in the steps of Kyoto philosophers

The Internet Governance Forum (IGF) 2023 convenes in Kyoto, the historical capital of Japan. With its long tradition of philosophical studies, the city provides a fitting venue for debate on AI, which increasingly centres around questions of ethics, epistemology, and the essence of human existence. The work of the Kyoto School of Philosophy on bridging Western and Asian thinking traditions is gaining renewed relevance in the AI era. In particular, the writings of Nishida Kitaro, the father of modern Japanese philosophy, shed light on questions such as human-centred AI, ethics, and the duality between humans and machines. 

Nishida Kitaro, in the best tradition of peripatetic walking philosophy, routinely walked the Philosopher’s Path in Kyoto alone. Yesterday, I traced his path while trying to experience the genius loci of this unique and historic place.


On the Philosopher’s Path in Kyoto

Here are a few of Nishida Kitaro’s ideas that could help us navigate our AI future:

Humanism

Nishida’s work is deeply rooted in understanding the human condition and is heavily influenced by humanistic principles. His philosophy emphasises the interconnectedness of individuals and the importance of personal experience and self-awareness.

This perspective serves as a vital reminder that AI should be designed to enhance human capabilities and improve the human condition rather than diminish or replace human faculties. By integrating humanistic values, AI can be developed to support human growth, creativity, and well-being, ensuring that technology serves as a tool for empowerment rather than a substitute for human interaction and understanding.

Self-Awareness and Place

Nishida delved deeply into metaphysical notions of being and non-being, the self and the world. His exploration of these concepts often intersected with themes of nihilism, questioning the inherent meaning and value of existence. As the debate on generative AI advances, Nishida’s work could offer valuable insights into the contentious issues of machine consciousness and self-awareness. It raises the question: what would it mean for a machine to be ‘aware’, and how would this awareness correlate with human notions of self and consciousness? Furthermore, how might nihilistic perspectives influence our understanding of machine self-awareness, challenging the essence of consciousness in a world where meaning is not preordained?

Complexity

Nishida paid significant attention to the complexities inherent in both logic and epistemology. He explored how these complexities are not merely abstract concepts but are deeply intertwined with the lived experiences and cultural contexts of individuals. Nishida’s work delves into the dynamic interplay between the subjective and objective realms, emphasising that understanding complexity requires a holistic approach that considers both the internal and external factors influencing human thought and behaviour. His work could serve as a foundational base for developing algorithms that better understand and adapt to the complexities of human society.

Interconnectedness

Nishida’s philosophy strongly critiques the traditional Western view of dualistic perspectives of essence and form. This line of thinking is often extended to understanding the complex relationships between humans and machines. He would likely assert that humans and machines are fundamentally interlinked, challenging the conventional separation. In this interconnected arena, beyond traditional dualistic frameworks (AI vs humans, good vs bad), we must develop innovative approaches to AI.


Nishida Kitaro, founder of the Kyoto School of Philosophy

Absolute Nothingness

Nishida anchors his philosophy in absolute nothingness, which resonates strongly with Buddhism, Daoism, and other Asian thinking traditions that nurtured the concept of ‘zero’, which has shaped mathematics and our digital world. Nishida’s notion of ‘absolute nothingness’ could be applied to understand the emptiness or lack of inherent essence in data, algorithms, or AI.

Contradictions and Dialogue

Contradictions are an innate part of human existence and societal structures. For Nishida, these contradictions should be acknowledged rather than considered aberrations. Furthermore, these contradictions can be addressed through a dialectic approach, considering human language, emotions, and contextual elements. The governance of AI certainly involves many such contradictions, and Nishida’s philosophy could guide regulators in making the necessary trade-offs.

Ethics

Nishida’s work aims to bridge Eastern and Western ethics, which will be one of the critical issues of AI governance. He considers ethics in the wider socio-cultural milieus that shape individual decisions and choices. Ethical action, in his framework, comes from a deep sense of interconnectedness and mutual responsibility. 

Nishida Kitaro would advise AI developers to move beyond codifying ethical decision-making as a static set of rules. Instead, AI should be developed to adapt and evolve within the ethical frameworks of the communities they serve, considering cultural, social, and human complexities. 

Conclusion

As the IGF 2023 unfolds in the philosophical heartland of Kyoto, it’s impossible to overlook the enriching influence of Nishida Kitaro and the Kyoto School. The juxtaposition is serendipitous: a modern forum grappling with the most cutting-edge technologies in a city steeped in ancient wisdom. 

While the world is accelerating into an increasingly AI-driven future, Kitaro’s work helps outline a comprehensive ethical, epistemological, and metaphysical framework for understanding not just AI but also the complex interplay between humans and technology. In doing so, Nishida’s thinking challenges us to envision a future where AI is not an existential threat or a mere tool but an extension and reflection of our collective quest for meaning. 

A Philosopher’s Walk in the steps of Nishida Kitaro could inspire new ideas for addressing AI and our digital future. 

Read more on Nishida Kitaro’s work on the Stanford Encyclopedia of Philosophy.

Diplomatic and AI hallucinations: How can thinking outside the box help solve global problems?

Last week, as the corridors of the UN General Assembly (UNGA) buzzed with the chatter of global leaders, our team at Diplo delved into an unusual experiment: the analysis of countries’ statements using the ‘hybrid intelligence’ approach of combining artificial and human intelligence. Some of our findings were more than just intriguing; they were paradoxical. A dash of AI hallucinations could spark creative problem-solving and novel approaches to humanity’s challenges, such as the rising number of conflicts, climate change, and controlling AI itself.

Interface of Diplo's AI reporting system with three options: Summary, AI Talks, and Statements

AI hallucinations

Traditionally used in psychology and literature, the term ‘hallucination’ has gained a new use: describing the fabrication of facts and distortion of reality by AI platforms such as ChatGPT and Bard. The risk of hallucination is built into the way AI works: AI makes informed guesses with a high probability of hitting the mark, but it provides no logical or factual certainty. Yet AI’s guessing is very plausible and realistic, as we can see from the answers and stories provided by ChatGPT.
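A toy sketch can make this guessing mechanism concrete. The snippet below is not how ChatGPT is implemented; it is a simplified illustration, with an invented word distribution, of how sampling from probabilities, controlled by a ‘temperature’ parameter, yields either the safest continuation or a less likely, possibly fabricated, one.

```python
import math
import random

def sample_next_word(probs, temperature=1.0, rng=None):
    """Sample the next word from a toy probability distribution.

    Low temperature favours the most likely word (faithful but bland);
    high temperature flattens the distribution, so less likely,
    possibly 'hallucinated', continuations surface more often.
    """
    rng = rng or random.Random(0)
    # Re-weight each probability by the temperature (softmax of log p / T).
    weights = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    adjusted = {w: v / total for w, v in weights.items()}
    # Draw one word according to the adjusted distribution.
    r, cumulative = rng.random(), 0.0
    for word, p in sorted(adjusted.items(), key=lambda kv: -kv[1]):
        cumulative += p
        if r <= cumulative:
            return word
    return word

# Invented distribution for the blank in 'The UN was founded in ___'.
probs = {'1945': 0.85, '1944': 0.10, '1919': 0.05}
cold = sample_next_word(probs, temperature=0.1)  # near-greedy: returns '1945'
```

At low temperature the sampler almost always picks the most probable word; at high temperature the improbable answers appear regularly. Real LLMs make the same kind of draw over tens of thousands of tokens at every step, which is why their output is plausible yet never logically or factually guaranteed.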

Diplomatic hallucinations 

The term could also describe some features of diplomatic language, which often relate to the proverbial vagueness of diplomacy: avoiding uncomfortable truths or reconciling national interests with prevailing global values. Sometimes, diplomats must slip into a hallucination of sorts, reinterpreting facts and reality to create the constructive ambiguity that is critical to reaching a negotiated compromise. Practically speaking, they use metaphors, analogies, ambiguities, and other linguistic techniques as essential negotiation tools. 

The UNGA: In vivo lab for language, diplomacy, and AI 

Every September, the UNGA offers a unique lab for studying the interplay between language and diplomacy, with extensive use of metaphors, clichés, signalling, and nuanced language as global leaders address the audience in the UN Main Hall and, equally importantly, the public back home. This year, the UNGA in vivo lab gained new relevance for testing large language models (LLMs) on diplomatic speeches. Diplo’s AI, supported by human expertise, sifted through this linguistic treasure trove, capturing the essence of each statement while identifying patterns of diplomatic and AI hallucinations. 

The double-edged sword of AI hallucinations

As we reported from UNGA speeches and debates, we gained new insights on both AI and diplomacy. In most instances, AI hallucinates by simply reformulating existing diplomatic jargon into new phrases. While these remixes did not offer much new insight, every now and then, AI hallucinated unexpected insights, such as:

Examples of unexpected insights generated by AI from UNGA statements

The dilemma of perfection

So, we’re confronted with a dilemma about when and whether we should allow AI to hallucinate. As a default, we need AI to reflect reality and be as accurate as possible for most uses. For example, summaries of UN discussions should faithfully mirror what was said. Last week, as we fine-tuned our AI to minimise hallucinations, the quality of our reporting from the UNGA improved steadily. But in that quest for perfection, we sacrificed AI’s probabilistic ability to make things up and think outside the box. It made us wonder whether, in some cases, we might want to allow AI to hallucinate.

Think of it this way: there is a growing number of so-called creative sessions aimed at stimulating unconventional thinking, be they brainstormings, idea labs, hackathons, incubators, unconferences, innovator cafés, paradigm-shifting seminars, ingenuity forums, Zen koans, or thought experiments. They are used to trigger shifts in habitual thinking by identifying paradoxes, juxtaposing ideas, and finding novel solutions to existing problems. 

Why not use AI’s imperfections to help us think outside the box? Thus, perhaps some AI systems should be left to hallucinate intentionally.

Who knows? We may discover that the undiscovered genius of AI and humans working together lies in their imperfections.

EspriTech de Genève: Nexus between technology and humanity

Animated GIF of EspriTech de Geneve

At our last stop of the summer journey ‘Recycling ideas’, we come to Geneva, a meeting place of technology and humanity for centuries. The city’s role is gaining new relevance as we find ourselves at a turning point, facing both changes and challenges triggered by fast technological growth. Once again, humanity steps out of its comfort zone into the new unknown: certainty ends, opportunity begins, and risks increase. 

As we stand at this crossroads, a walk through the streets of Geneva, both literal and intellectual, can help us understand the broader context for the decisions to be made. 

In the Old City, you can find some of the landmarks of the tech-humanity journey: the Calvin Auditorium (Auditoire de Calvin), where the interplay between progress and society was debated; a few steps away, on Grand Rue, the house where Rousseau was born; and, around the next corner, the house where Borges died.

Just up the hill from Lake Geneva, Mary Shelley wrote Frankenstein. You can visit Voltaire’s chateau in the nearby commune of Ferney-Voltaire, named after its most famous resident.

In this episode, we will first revisit the ideas and works of philosophers and writers born or living in Geneva. After that, we will outline the 12 principles of EspriTech de Genève.

This text is based on the Geneva Digital Atlas 2.0, published in November 2022. You can find comprehensive coverage of historical and contemporary technological developments in Geneva at the Atlas’s portal.


John Calvin: Balance between individual action and social responsibility

Portrait of John Calvin

John Calvin (born on 10 July 1509 in Picardy, France, he converted to Protestantism and fled to Geneva; his thinking shaped Protestant reform and became influential worldwide) and his teaching, Calvinism, have profoundly impacted political, economic, and cultural life in Europe and the USA, as Max Weber explained in his seminal book The Protestant Ethic and the Spirit of Capitalism (Weber, M. (2005). The Protestant Ethic and the Spirit of Capitalism (1930). London: Routledge, p. 175). Calvin’s ideas crossed the Atlantic with the Pilgrims on the Mayflower, shaping the foundations of American society and, centuries later, reaching Silicon Valley, today’s hub of technological development.

Human agency and its accompanying responsibility are the two key pillars of Calvin’s social thinking (see Stückelberger, C. (2009). No interest from the poor: Calvin’s economic and banking ethics). Striking the right balance between them is critical for societal stability and progress. 

Like other Protestant thinkers, Calvin was enthusiastic about science and knowledge. If we want to change society, we have to first understand it. At the same time, Calvin called for moderation and caution in pursuing scientific advances, which were later echoed in Mary Shelley’s Frankenstein, and currently in calls for the responsible use of AI and biotechnology.

Calvin’s balance between human agency and social responsibility has been tilted in Silicon Valley towards the former. The tech industry could benefit from Calvin’s opus by striking a more balanced view of business and society. It would also help in ‘uplifting’ their social roles.

Calvin argued for universal education, including for girls, which was revolutionary in the sixteenth century. Later on, another Genevan, Rousseau, put education at the center of his philosophy. 


Charles Bonnet: Nature and early neural networks


Charles Bonnet, born in Geneva in 1720, was an exceptional polymath: a naturalist, botanist, lawyer, philosopher, psychologist, and politician. An early boundary spanner, he crossed disciplinary borders, and this approach yielded far-reaching insights well ahead of his time. 

In 1789, building on the idea of neural networks, he envisaged AI, arguing that machines could mimic human intelligence (Bonnet, C. (1789). Betrachtung über die Natur. W. Engelmann). 

In his Essai de Psychologie (1755), he described the concept of neural networks:

If all our ideas, even the most abstract, depend ultimately on motions that occur in the brain, it is appropriate to ask whether each idea has a specific fiber dedicated to producing it, or whether different motions of the same fiber produce different ideas. (Bonnet, C. (1755). Essai de psychologie. Londres.)

For more on Bonnet and neural networks, consult Mollon, J., Takahashi, C., & Danilova, M. (2022). What kind of network is the brain? Trends in Cognitive Sciences, 26. Bonnet’s theory of associations, which contends that ideas connect in the mind through associations, served as inspiration for early theories of neural networks.

This idea was further developed by the American psychologist William James and the British philosopher John Stuart Mill.

As a keen observer of nature, Bonnet identified numerous patterns and interesting phenomena. He found that the leaves on a plant stem are arranged to match the Fibonacci sequence. He was interested in how math could be used to describe patterns in nature. 
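Bonnet’s observation is easy to check in a few lines of code. The sketch below, purely illustrative, generates the sequence he found in leaf arrangements and shows that the ratio of successive terms approaches the golden ratio, the mathematical signature of such patterns in nature.

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, 8, ..."""
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

fib = fibonacci(10)        # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
ratio = fib[-1] / fib[-2]  # 55/34 = 1.617..., approaching the golden ratio (about 1.618)
```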

His work was largely forgotten until it was rediscovered in the early twenty-first century.


Jorge Luis Borges: Limits of human knowledge

Portrait of Jorge Luis Borges

Borges chose Geneva as his home and, ultimately, the place where he is laid to rest. As one of the leading writers of the twentieth century, Borges was the master of discovering paradoxes and addressing irreconcilable contradictions in human existence.

He rarely provides answers in his writings. Instead, he takes us on a journey, showing that every certainty triggers a new uncertainty. Borges’s work gives a sobering look at the human condition and the limits of reason when solving personal and social problems.

His fiction is inspirational reading for addressing the core questions of humanity’s future, centered on the interplay between science, technology, and philosophy. His short story The Library of Babel, written in 1941, is prophetic: it outlines the search for meaning in endless volumes of information, much as we do today on the internet. Borges wrote:

‘Nonsense is normal in the Library and that the reasonable (and even humble and pure coherence) is an almost miraculous exception.’ (Borges, J. L. (1970). The Library of Babel. In Labyrinths. Penguin.)

The truth exists somewhere in Borges’ library but is almost impossible to find as it is overwhelmed by irrelevant information, fake news, and competing narratives. 

In addressing informational chaos, Borges shies away from giving a naive hope of certainty. Still, he does provide some hope: he advocates for order in chaos and argues that by taking an occasional rest, we can stop, or at least slow down, the constantly shifting kaleidoscope of meaning. 


Jean-Jacques Rousseau: Social contract theory

Portrait of Jean-Jacques Rousseau

Jean-Jacques Rousseau, born in Geneva on 28 June 1712, was one of the most important philosophers of the Enlightenment. His most influential works were The Discourse on Inequality and The Social Contract. The UN Secretary-General’s call for societies worldwide to work on new social contracts, addressing profound changes in modern society, has renewed the relevance of Rousseau’s thinking. The Social Contract will be vital as we try to answer critical questions about modernity and our future. 

According to Rousseau, social contracts are not formal contracts signed on the dotted line by all citizens. They are representations of the general will of all citizens around a few key principles. At the heart of a social contract is a way for people to regularly participate in public debates and decision-making. It is much more than an occasional vote. 

Rousseau’s social contract demands citizen involvement in civic and political life. His home city, Geneva, has come close to his ideal of a lively and engaging democracy.

Rousseau also argued that sovereignty stays with individuals, not the state. This idea could be important in the current conversation about digital sovereignty, which usually means that states have control over digital networks and data. If we apply Rousseau’s thinking, digital sovereignty should be based on people’s right to control their data and digital assets. 

The question of a social contract was popular among other Enlightenment thinkers.

Hobbes, for example, in his Leviathan (1651), proposed a form of the social contract less demanding on citizens than Rousseau’s: citizens were to surrender their natural rights to a sovereign (the state) in exchange for the state guaranteeing their safety. 


Ferdinand de Saussure: From linguistic patterns to AI

Portrait of Ferdinand de Saussure

Ferdinand de Saussure was a Geneva-born linguist whose book Course in General Linguistics (1916), published by his students from lecture notes after his premature death, became the cornerstone of modern linguistics (de Saussure, F. (1916). Course in general linguistics. https://openlibrary.org/books/OL23291521M/Course_in_general_linguistics). Saussure’s work on language and systems laid the foundation for natural language processing (NLP) and modern AI. 

Saussure’s pioneering research on identifying language patterns and the relationships between signifiers and signifieds (i.e. words and their meanings) is key to understanding how NLP systems map words and other linguistic units to the concepts they represent, allowing them to perform tasks such as text classification and machine translation.
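To make this concrete, here is a deliberately simplified sketch of how an NLP system can map signifiers (words) to signifieds (meaning vectors) and use that mapping for text classification. The vocabulary, vectors, and categories below are invented for illustration; real systems learn such embeddings from large text corpora rather than hand-crafting them.

```python
# Each signifier (word) maps to an invented two-dimensional 'meaning' vector.
embeddings = {
    'treaty':  (0.9, 0.1),
    'summit':  (0.8, 0.2),
    'goal':    (0.1, 0.9),
    'striker': (0.2, 0.8),
}

# Category prototypes in the same 'meaning' space (also invented).
categories = {'diplomacy': (0.85, 0.15), 'football': (0.15, 0.85)}

def classify(words):
    """Average the word vectors and return the nearest category prototype."""
    vectors = [embeddings[w] for w in words if w in embeddings]
    average = tuple(sum(axis) / len(vectors) for axis in zip(*vectors))

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(categories, key=lambda c: distance(average, categories[c]))

label = classify(['treaty', 'summit'])  # returns 'diplomacy'
```

The same principle, words positioned in a shared meaning space, underlies modern text classification and machine translation, only with vectors of hundreds of dimensions learned from data.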

The conceptual bridge between Saussure and the latest AI developments is represented in Alan Turing’s paper Computing Machinery and Intelligence (Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460. https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf).


Mary Shelley: Ethics and scientific progress 

Portrait of Mary Wollstonecraft Shelley

The British writer Mary Shelley started writing Frankenstein in 1816 at the Villa Diodati in Geneva. With Lord Byron and a group of friends, Shelley came to Geneva for better weather, as Geneva typically has more sunny days than London. This was not the case in 1816. That year, both cities missed the summer weather because of the eruption of Mount Tambora in Indonesia. 

Shelley was a big fan of science and experimentation. She believed that science and technology could improve the human condition. However, she also recognised these new technologies’ potential for abuse and misuse. In this way, Shelley brought into focus important questions about the ethics of progress and how to use scientific knowledge responsibly. 

Even though technology and society have come a long way since 1816, the dilemmas people faced then are still relevant today. How far can technology go in affecting core human features? Are there ethical limits to technological development?


Voltaire: Technology as an enabler of freedom

Portrait of Voltaire

François-Marie Arouet (1694–1778), better known as Voltaire, was one of the key figures of the Enlightenment. Voltaire lived in Geneva and the neighbouring village of Ferney-Voltaire, named after him, between 1755 and his death in 1778. His major works include Candide, Philosophical Letters, and a Treatise on Toleration. Voltaire is still the symbol of Enlightenment philosophy based on reason, critical thinking, and scientific inquiry.

He was a strong advocate for the advancement of science and technology. Voltaire thought that everyone should have access to knowledge and that progress in science and technology should benefit society. In his writings, he frequently criticised the church and the state for hindering scientific progress. 

Voltaire championed Newton throughout his life and, drawing inspiration from Newton’s empirical science, insisted on the use of facts and evidence in the social sciences and public life.

Liberty and freedom were crucial to Voltaire’s philosophy. He argued that freedom of thought is a fundamental human right. He also advocated for freedom of expression and freedom of religion. In his historical works, he often championed the cause of oppressed peoples and fought against tyranny.

‘I disapprove of what you say, but I will defend to the death your right to say it’ is often attributed to Voltaire. There is no proof that these are his words: in 1943, Burdette Kinne of Columbia University published a letter in Modern Language Notes in which Evelyn Beatrice Hall, author of Voltaire in His Letters, confirmed that the saying was her own expression, mistakenly placed in inverted commas (https://quoteinvestigator.com/2015/06/01/defend-say/). Still, the phrase captures the core of Voltaire’s philosophy of liberty very well.

Voltaire’s advocacy of critical thinking and engaging debates is just as important today as it was a few hundred years ago. Most current debates online are shaped by division, biases, and false information. 


12 Principles of EspriTech de Genève

L’esprit de Genève, which refers to the city’s tradition as a place for peace, tolerance, international cooperation, human rights, and inclusion, is the source of inspiration for EspriTech de Genève (the tech spirit of Geneva). 

Robert de Traz coined L’esprit de Genève in his book of the same title (De Traz, R. (1929). L’esprit de Genève. B. Grasset), where he traced the origins of this concept to Jean Calvin, Jean-Jacques Rousseau, and Henri Dunant.

At the core of EspriTech de Genève is humanism, a philosophical and ethical approach centred on human beings, individually and collectively. The meaning of the term humanism has fluctuated according to the successive intellectual movements that have identified with it; in our research, humanism is an approach centred on three aspects of human existence: life, dignity, and agency. We summarise it in the 12 values, principles, and approaches explained below. 

1. Human life and dignity 

Geneva’s enduring humanitarian values stand strong, safeguarding human dignity and life. Amidst the digital era’s advances, pivotal questions arise about what defines us as humans. Protecting human life is a core value in religious texts, political statements, and laws. 

In Geneva, the question of human lives and technological developments has come up in negotiations on lethal autonomous weapons systems (LAWS), often called ‘killer robots’. In LAWS negotiations and other policy processes, there is consensus that decisions on life and death in armed conflicts should rest with humans. Even though this consensus is sound and strongly endorsed, it is still unclear how it will be implemented in a world where AI, drones, and other high-tech weapons are making war increasingly automated. The ICRC has been working on bringing clarity to this matter via awareness building, training, and developing policy guidelines. 

The question of what it means to be human becomes more central as biological and digital technologies like brain-machine interfaces and neural technologies advance. 

Virtual reality (VR), such as the metaverse, is another way to alter our perception of the physical space in which we live. Immersive VR may make our bodily experience less central to our identity. Changes in human embodiment will also impact human dignity and alter our identities. The development of metaverse VR will raise a new set of questions:

What will our real identity be between virtual and real spaces?  

Will we retain free will as a key pillar of our identity? 

How will our different identities be reconciled when AI technologies start ‘optimising’ us? 

Will technology ‘tolerate’ our inherently human imperfections? 

Whether they are inside us, like implants and biotech changes, or outside, like the metaverse, these challenges to human embodiment and dignity will speed up discussions about the future social contract. (Erin Green argues that dealing with AI meaningfully should be anchored in the understanding that our ‘embodied experience shapes all reasoning, both theological and technological’. Green, E. (2020). Sallie McFague and an ecotheological response to artificial intelligence. The Ecumenical Review, 72(2), 183–196. doi:10.1111/erev.12502)

L’esprit de Genève, which puts people at the center of technological and scientific developments, gives Switzerland and Geneva a place for these critical discussions. 


2. Freedom and the right of choice 

As a city of refuge for the persecuted and for dissidents from all over the world throughout history, Geneva has always placed great importance on freedom. At the core of freedom is the right of choice in personal, economic, and political life. The ability to choose is essential for human dignity, well-being, and societal progress. The right to choose is realised through freedom of movement, thought, expression, and religious practice, among other things. 

For a long time, the internet has been a major enabler of choice, helping people overcome geographical, social, and other limitations. (Switzerland has proposed the concept of digital self-determination as a way for citizens to be empowered on all matters related to their personal data: https://digitale-selbstbestimmung.swiss/wp-content/uploads/2021/04/Digital-Self-Determination-Discussion-Paper.pdf)

But digitalisation has started to have a big impact on the right to choose, from governments and tech companies censoring and filtering online content to more advanced ways of limiting our choices in the name of ‘optimisation’ (the idea that AI knows better than we do what is good for us, from choosing partners to buying goods to making political decisions). Aware of these risks, UNESCO has called for assessing the ‘sociological and psychological effects of AI-based recommendations on humans’ decision-making autonomy’. 

Civic space and freedom of information are also impacted by the tendency of tech platforms to foster ‘bubbles’ and binary atmospheres in debates framed as ‘my opinion vs. wrong opinion’. The space for free and civic discussions has been shrinking worldwide. Critical and alternative thinking is often missing in public debates at a time when it is badly needed. This ideological shift creates fertile ground for mis/disinformation. There is a growing need for free, neutral discussion spaces. Geneva can provide such spaces, not only in diplomatic settings but also in public or academic debates.

In the digital age, many societies seek the best way to balance individual freedom and social responsibility. Geneva’s long history since Calvin’s time could be a good example of how to do this. The rule of law and respect for the common good could help us come up with solutions that give people as many options as possible while considering the needs of our communities and society as a whole.


3. Openness and inclusion

Openness and inclusion have been fundamental characteristics of Geneva for centuries. In the digital realm, openness and inclusion flourished in the early years of the internet, with more people getting connected. However, these trends have been slowing down recently, and there is a risk that they could be reversed with increased Internet fragmentation. Thus, there is a need for a renewed push for digital openness and inclusion.

The open-source movement, for example, should play a more important and active role in developing digital infrastructure and apps.

Digital inclusion is more than just being connected and having access to the internet. First, access should be affordable. Even more important, making full use of digital potential requires the right digital skills, multilingualism and content in local languages, and the participation of women, youth, and other parts of society that have so far been left behind.

All of these aspects of inclusion should be considered holistically. For example, development assistance for increasing connectivity and internet access should be paired with aid for improving digital skills, creating enabling environments (like policies, regulations, and institutions), and addressing all other aspects of inclusion comprehensively.

Geneva is home to many entities that work on various aspects of digital inclusion, such as the ITU, which works on connectivity, and the WTO and the UN Conference on Trade and Development (UNCTAD), which work on e-commerce and the digital economy. As a well-known donor country, Switzerland has a long history of helping small and developing countries effectively and impactfully. This convergence of International Geneva and Swiss development assistance could facilitate a holistic approach to digital inclusion worldwide.


4. Diversity and subsidiarity

Diversity starts with our uniqueness compared to other human beings: age, gender, race, culture, religion, profession, and other aspects of our identity. Diversity is also about our local communities, regions, and nations. Respect for diversity is key to building a prosperous, inclusive, and harmonious society. 

As diversity nurtures innovation and creativity, it has helped spur many digital developments. Since its early days, the internet has been a key promoter of diversity because it connects people worldwide. However, tech platforms have made it easier for ‘echo chambers’ to form: places where people with similar views stay together and do not hear other ideas. Diversity in the digital world may not have a bright future if tech companies put profit ahead of diversity and other non-commercial values.

The principle of subsidiarity ensures that policy decisions get close to the people who are affected by them. Subsidiarity could also prevent abuses by higher-level authorities while contributing to administrative and policy decentralisation. Geneva and Switzerland have a long tradition of diversity and subsidiarity. 


5. Progress and well-being

Science and technology are the main driving forces behind progress aimed at improving societal well-being. Support for science and progress has been a Geneva tradition since Calvin’s era; Voltaire and other thinkers also saw science as a force for progress. Today, it’s no surprise that the European Organization for Nuclear Research (CERN), the Swiss Federal Institute of Technology Lausanne (EPFL), and the University of Geneva are among the most prominent scientific institutions in Europe and beyond.

As the link between scientific progress and human well-being is often contextualised around Agenda 2030, Geneva-based scientific and UN organisations have been working on using science to realise the Sustainable Development Goals (SDGs) dealing with health, food, poverty, climate change, and more. 

During the pandemic, the link between digitalisation and human well-being became much clearer when most societal activities were carried out via digital networks.


6. Trust and confidence

Trust and confidence are values that resonate strongly with Geneva and Switzerland. Switzerland is a country with a high ‘trust capacity’ in the fields of engineering, finance, education, and governance. Trust in the online domain is as important as it is offline. It is the social glue that binds people, communities, and countries together. It helps improve societies’ well-being, success, and stability by reducing conflict and making it easier for people to work together. 

There are many levels of trust and confidence in the digital space, from trusting the technology itself to trusting the companies that develop and provide the services or products to trusting the governments that should protect our rights both online and offline. 

Trust in technology, the government, and tech companies is built through clarity about the roles and responsibilities of digital actors, participation in creating digital policy, oversight (especially of actions that could affect rights and freedoms), and confidence-building measures between countries and digital communities. 

Technology itself – as it has been argued in the case of blockchain – could facilitate trust and confidence. While the search for the ‘automation’ of trust continues, Geneva and Switzerland should focus on contributing their traditional trust capacity to discussions and processes focused, for example, on protecting data, ensuring cybersecurity, and finding future digital governance solutions. Some projects, like the Swiss Digital Initiative’s Trust Label and Trust Valley, are examples of such efforts.


7. Peace and security

In L’esprit de Genève, peace has a central role. Geneva has been the place where many peace negotiations have been conducted throughout history. It also plays a vibrant role in other peace-related activities: mediation, conflict prevention and resolution, peace-building, etc. This connection between peace and security can be clearly seen in the work done by international organisations in Geneva.

But peace also goes beyond security, as it is more than just the absence of violence and conflict. Peace requires a comprehensive approach to human development that addresses the root causes of conflict; such an approach would result in greater stability and less social inequality. 

Digital technology impacts all phases of peace-related activities, from focused ones dealing with conflict resolution to a broader, more holistic approach to peace. The links between peace and digitalisation are highlighted across a wide range of activities in Geneva. For example, the Geneva Internet Platform (GIP), Humanitarian Dialogue, the UN Office at Geneva (UNOG), and Swisspeace have all started researching and networking on cyber mediation. Peace is central to debates on the cyber aspects of disarmament. The human rights community in Geneva addresses false information and hate speech, which are becoming more important to international peace and security efforts. 


8. Entrepreneurship and human agency

Geneva has been a trading post for centuries, as far back as Roman times. Business dynamics have taken off since Calvin’s time, when he created a theological framework that encouraged entrepreneurship.

Today, Geneva has a strong banking sector, a fast-growing tech industry, and a lot of innovative start-ups. In the city’s tradition, the private sector has helped solve social problems by supporting humanitarianism and participating in activities supporting the SDGs’ realisation. 

Geneva is home to Fongit, one of Europe’s oldest and most successful start-up generators. It helps researchers from CERN, EPFL, and the University of Geneva turn research breakthroughs into business opportunities.

As tech companies worldwide look for the best ways to combine entrepreneurship and social responsibility, International Geneva has a few things that make it stand out: the Calvinist tradition of combining entrepreneurship and care for the community, a thriving academic and business scene, and an international governance space.


9. Environment and natural habitat 

Men argue. Nature acts. 

Voltaire

The interplay between the environment and digitalisation is at the core of modern governance. Progress and industrialisation have put a lot of stress on our environment. Because of this, the environmental agenda has become more important than ever, with issues like pollution, climate change, and protecting biodiversity. 

Digitalisation has both negative and positive effects on the environment. Examples of unfavourable effects include the significant energy use of data servers, which accounts for an estimated 2% of the world’s electricity consumption, and the extensive exploitation of rare materials to produce digital products, resulting in e-waste that harms natural habitats. On the positive side, digitalisation is used to find and track environmental problems and to model possible solutions for climate change, ocean pollution, overfishing, and many other problems.

In Geneva, the interplay between the environment and digitalisation is addressed in policy discussions on climate and weather (at the World Meteorological Organization – WMO), climate change (the Intergovernmental Panel on Climate Change – IPCC), pollution (the UN Economic Commission for Europe – UNECE), and environment (the UN Environment Programme – UNEP – Office for Europe). The main challenge is to promote more convergence between environmental and digitalisation policy spaces.


10. Solidarity and the common good

Solidarity and the common good have been important pillars of L’esprit de Genève, introduced during Calvin’s time and carefully nurtured for centuries. This spirit is reflected in the work of humanitarian organisations such as the ICRC and in the support provided to the poor and to migrant populations, the most vulnerable parts of society.

Solidarity and the common good are becoming more important as societies worldwide try to restore social stability, which is increasingly shaken. Restoring social stability is not only about sorting out existing societal problems but also about accounting for emerging ones. On social media, in online games, in VR, and on other online platforms, solidarity takes on new shapes and meanings, and empathy and emotions are nurtured in ways that differ in both form and depth. 

We are going through a rupture in the traditional, face-to-face social and emotional bonds people have had since the beginning of time. The ways we engage with others and develop emotional and social links will shape the social fabric of tomorrow, with far-reaching consequences for family life, law, and other aspects of society. 

Common goods are tangible aspects of solidarity in society. The concept of digital common goods (digital commons) is increasingly discussed in the context of data governance and AI development. The open-source movement places the common good at its foundation. Data and AI are discussed as common goods that could be used for sustainable development, reducing inequality, and promoting social peace.


11. Equality, justice, and fairness

Geneva’s religious, legal, and political traditions could be very helpful in figuring out how to deal with the effects of digitalisation on equality, justice, and fairness. Equality, justice, and fairness are important for keeping society together and making the economy and society work well. 

Today’s world isn’t as fair as it could be because digital technology tends to concentrate data and economic power in the hands of a few. As digital growth does not ‘lift all boats equally’, many communities are left behind. 

Inequality in the digital sphere takes multiple forms, such as unequal access to networks and devices (e.g., because of a lack of infrastructure or money), gaps in digital skills, and gender imbalances (e.g., men still tend to be more represented in engineering and computer industries than women). Language barriers and generational divides are also causes of digital divides. 

The search for justice and dispute resolution are also associated with Geneva’s legal institutions. In 1872, Great Britain and the USA came to Geneva to settle their dispute in the Alabama Arbitration, one of the first international legal arbitrations. Geneva is also one of the key places for companies to settle business disputes through arbitration. The adjustment of the arbitration system to the digital realm is gaining momentum. WIPO has set up a system for dealing with disagreements about internet domain names. Researchers and academics seek new ways to settle online disputes, like the University of Geneva’s Massive Online Micro Justice approach. 

Since AI apps can amplify biases based on race, gender, age, and other factors, fairness has become an important issue. In most cases, AI is biased because it was built using biased data that reflects a biased reality. For example, AI systems have made decisions on granting loans or prioritising patients that reflect gender, racial, and age biases. In Geneva, fairness in AI should be addressed both in academic research and in policy discussions on human rights.


12. Compromise, trade-offs, and pragmatism

Achieving a win-win solution is the holy grail of public policy. But the reality is that we often end up with a zero-sum outcome in which some gain and others lose. Because digital technology has so many different technological, economic, moral, and policy aspects, good trade-offs are difficult to find. Geneva has a long history of solving problems practically, often by compromising to find the best trade-offs.

This part of EspriTech de Geneve is crucial as the need for delicate trade-offs in the digital world grows.

Geneva’s traditions and current policy environment favour finding trade-offs between digitalisation’s many and sometimes contradictory impacts on society.


Parting thoughts

Uncertainty is an uncomfortable position. But certainty is an absurd one. 

Voltaire

We are at the end of this historical and contemporary walk through Geneva’s thinking and philosophical landscapes. Among the vast tapestry of ideas, traditions, and concepts, a few resonate strongly with the dilemmas we face today.

This Geneva journey is just one of many that humanity has taken, and is taking, in search of formulas to deal with the impact of digitalisation on society. This search is happening worldwide, from local communities to national parliaments, from regional organisations to the UN. 

Geneva can contribute to the search for a social contract for AI and our future in general in a timely, balanced, and impactful manner.

The Vienna Nexus: Five thinkers who coded the operating system of modernity

If you look for ‘the source code of the operating system’ of modernity, you will end up in Vienna between the two world wars. After the dissolution of the Austro-Hungarian Empire in 1918, Vienna was a vibrant hub of intellectual debates on society, from philosophy to economics and psychology.

Among the many thinkers of that period, five are the most relevant for our era: Ludwig von Mises, Joseph Schumpeter, Friedrich Hayek, Sigmund Freud, and Ludwig Wittgenstein. Each of them contributed critical ideas that shape our society, and digital developments in particular.

Their ideas gained new life and broader impact after these thinkers left Vienna before the outbreak of the Second World War. Many made prominent academic careers in the United States, where their ideas and concepts have influenced economic and political life. We need to understand the work of these five Vienna thinkers to comprehend the concepts behind our society, digital transformation, and the emerging AI era.

This blog aims to revisit the ideas of the Vienna thinkers and show their relevance for our current debates on digital and AI developments.

Ludwig von Mises: Choice is king

Photo of Ludwig von Mises

If one were to summarise Ludwig von Mises’s central thesis in a tweet, it might proclaim: “Individual choice and consumer sovereignty are the keystones of the digital era.” At the heart of von Mises’s philosophy lies the concept of praxeology, which focuses on deliberate actions guided by personally chosen objectives.

Often labelled a market fundamentalist, von Mises insisted that market forces outperform any form of governmental regulation. He was vocal against government interference, maintaining that it perilously skews free choice. His philosophy has inspired a generation of cyber-libertarians and resonates well with digital platforms whose business models hinge on monetising consumer choices. 

However, von Mises’s principles of free choice are facing growing scrutiny in the modern digital era. Critics contend that the apparent freedom offered by digital platforms can be misleading, as genuine free choice becomes twisted by algorithms, creating echo chambers and filter bubbles.

The criticism von Mises levelled against government intervention as a market distortion may need reassessment, considering the need for public control of the enormous monopolistic power of big tech companies. 

In a new twist, big tech companies are calling on governments to regulate AI in order to prevent a potential ‘AI Armageddon’. At first glance, this can sound paradoxical, given previous corporate aversion to any government intervention. However, by framing AI in fear-mongering terms, major tech companies may aim to establish new AI monopolies by keeping out newcomers, mainly from open-source communities. Supporters of this interpretation usually point to discrepancies between tech companies’ words and actions: while warning about AI threats, they increase investment in developing more powerful AI. Whatever AI governance brings, von Mises’s principles of free choice and consumer sovereignty will remain at the centre of policy and regulatory debates. 

Joseph Schumpeter: Creative destruction

Photo of Joseph Schumpeter

Joseph Schumpeter is renowned for the theory of “creative destruction,” illustrating the continuous cycle where new industries emerge and old ones decline, a phenomenon ever-present in the digital age. This process was evident when Facebook replaced MySpace, Uber disrupted traditional taxi services, and Airbnb posed a challenge to the established hotel industry.

While supportive of innovation and creative destruction, Schumpeter warned of the possible rise of monopolistic practices stemming from technological advancement. His view that monopolies become transient due to competition may be proven wrong in the digital era as tech giants strategically acquire start-ups, tame competition, and solidify their monopolistic positions. 

In the unfolding era of AI, Schumpeter’s idea of “creative destruction” gains fresh significance. AI technologies are on the brink of disrupting diverse industries, from manufacturing automation to a diagnostic revolution in healthcare. Like the manufacturing upheaval that the assembly line once sparked, AI is likely to bring major destruction of white-collar work, with an enormous impact on the economy and society.

AI’s rapid proliferation also triggers concerns over monopolistic domination and power concentration. Tech behemoths like Google, Amazon, Meta, and Microsoft spearhead AI research and development, possessing the means to amass top talent and assimilate burgeoning startups, thus furthering their control. This is a scenario Schumpeter explicitly cautioned against – a monopolistic landscape that impedes competition, innovation, and consumer freedom.

Schumpeter’s insights also extend to the labour market, as AI’s rise has ignited discussions about its impact on employment. The conventional Schumpeterian belief that vanished jobs would be replaced by new ones must be reassessed in the AI epoch, as AI may eliminate more jobs than it creates.

Furthermore, Schumpeter’s emphasis on entrepreneurship’s role is echoed in a surge of startups seeking to employ AI for problem-solving. 

In conclusion, Schumpeter’s theory of creative destruction will have continuous relevance. Although the destruction of current practices, businesses, and jobs is a given, a creative response is needed with new ideas, jobs, and business models. 

Friedrich Hayek: Knowledge as a key to progress

Photo of Friedrich Hayek

Knowledge, a key concept in Hayek’s opus, is also the pillar of digital and emerging AI economies. Hayek advocated knowledge-driven innovation as a pathway to societal efficiency, a principle that has undeniably fueled many technological advancements.

In particular, his work on ‘tacit knowledge’, understood as a mix of intuition and skills, is central to AI development. This form of knowledge, unearthed through data mining and behavioural analysis, sheds light on behaviour patterns and has significant economic and political implications. The battle to codify the tacit knowledge of the world’s population is at the centre of AI competition. 

The digital age resonates strongly with Hayek’s concept of dispersed knowledge. He envisioned optimal economic outcomes stemming from decentralised decision-making that mirrors a collective sum of individual insights. This idea found tentative expression in blockchain technology, with its decentralised ethos. However, the full realisation of this promise remains elusive, and the concept of dispersed knowledge faces potential threats from centralisation and monopolisation by leading AI entities.

Furthermore, Hayek warned about ‘the pretence of knowledge’: an overconfidence in our ability to comprehend and manipulate complex systems. This caution finds a poignant echo in the world of AI, where neural networks and deep learning models often elude even their creators’ understanding. This mysterious ‘black box’ aspect of AI can spawn unexpected outcomes and hazards, reaffirming Hayek’s insights about the boundaries of our comprehension.

Hayek’s caution against centralisation and his support for individual liberties open vital discussions about data privacy and digital control in today’s interconnected world. With large tech corporations enjoying unparalleled access to personal information, debates surrounding data ownership, privacy, and the government’s regulatory role have become increasingly significant.

In conclusion, Hayek’s intellectual legacy is critical for understanding the key dilemmas of the AI era. His thoughts on dispersed and tacit knowledge, coupled with his warnings about the limitations of our understanding, offer essential guidance for navigating the complexities of AI. Simultaneously, his reservations about centralisation and his stress on individual autonomy contribute valuable warnings against the risk of monopolising AI power.

Sigmund Freud: The psychological underpinnings of social media and AI

Photo of Sigmund Freud

Sigmund Freud, widely recognised as the founding figure of psychoanalysis, laid the psychological groundwork that has resonated strongly with the modern digital economy. He asserted that human behaviour predominantly stems from subconscious and often irrational impulses.

These insights have taken on new life in our digital age, where the myriad traces we leave online furnish tech companies with invaluable data about us that is subsequently processed, packaged, and sold to advertisers.

In the era of digital technology and AI, Freud’s proposition that the unconscious mind exerts a profound impact on our actions has become a guiding force. It shapes the methodologies behind targeted advertising and recommendation engines that social media and e-commerce giants employ. These sophisticated algorithms uncover and exploit unconscious preferences and desires, seeking to guide and manipulate consumer choices and behaviours.

Freud’s ‘Pleasure Principle,’ the theory that individuals instinctively pursue pleasure and evade pain, finds its contemporary reflection in the design ethos of social media and digital interfaces. The instant rewards of notifications, ‘likes,’ and other digital affirmations serve to engage users in ways that can foster addictive behaviour patterns.

The depth of Freud’s understanding of human emotions and drives also offers a potential foundation for the burgeoning field of ‘emotional AI’ or affective computing. These systems aspire to comprehend, interpret, and react to human emotions, often grounding their approach in Freudian conceptions of emotional dynamics.

Freud-inspired digital developments have triggered a lively ethical debate. There is sharp criticism of leveraging psychological tenets to shape and potentially control user behaviour, frequently beyond conscious awareness or explicit consent. Such practices could diminish self-esteem, exacerbate social divides, and cause serious mental health problems.

Freud’s exploration of defence mechanisms might illuminate our understanding of privacy and security responses in our data-driven world. Individuals might unconsciously deploy mechanisms like denial or repression to cope with an increasingly invasive digital environment where personal information is ceaselessly harvested and privacy is constantly at risk.

In conclusion, Freud’s nuanced understanding of the human psyche has found unexpected yet potent applications in our digitally infused world. His theories continue to enrich our comprehension of our relationship with technology by driving advertising strategies, shaping the user experience, and spurring vital dialogues on ethical norms and regulations. In an era where the boundaries between the mind and machine continue to blur, Freud’s legacy offers a thoughtful lens through which to view our evolving digital and AI existences.

Ludwig Wittgenstein: Conceptual father of AI – from causation to correlation

Photo of Ludwig Wittgenstein

Ludwig Wittgenstein’s philosophy influenced the cognitive landscape for the development of AI. The arc of his intellectual career, from his embrace of logical positivism to his nuanced examination of language in context, has resonated powerfully with the evolution of AI.

Wittgenstein explored the connections between formal logic and language in his early work, which is best represented by the “Tractatus Logico-Philosophicus.” This approach resonates with the foundational principles of early AI, whose development revolved around rule-based systems built on formal logic.

However, Wittgenstein’s later work marked a paradigmatic shift in his thinking. He conceived language as inherently contextual, intimately intertwined with social practices rather than rigid, formal structures. 

Wittgenstein’s “Philosophical Investigations”, his seminal later work, elucidated the idea that the meaning of language is intrinsically linked to its use within the diverse and lived experiences of human communities. This perspective corresponds to modern AI developments, which give higher weight to contextual probability than logical certainty. Large Language Models, such as ChatGPT and Bard, are designed to ‘understand’ the context of words in sentences and learn from vast amounts of data about how language is used.

Wittgenstein’s thoughts about ‘language games’ and the situated nature of language understanding can be seen as a precursor to current AI development. His idea that language is a social activity grounded in specific forms of life offers a fresh perspective to AI developers who attempt to train models to understand human language in all its complexity and diversity.

In the era of AI, Wittgenstein’s philosophy also raises questions about the extent to which AI can truly understand human language. Wittgenstein pointed out that understanding language goes beyond simply recognising patterns and structures; it involves understanding the cultural, social, and emotional contexts that give the language its meaning.

In conclusion, Wittgenstein’s views on language and meaning have significant implications for AI, guiding the shift from rule-based systems to context-centred AI models. His work continues to inspire AI developers as they attempt to create AI systems that can understand and engage with human language in deeper ways. It remains to be seen whether AI’s deep digging into linguistic patterns will surpass the limits of our understanding that Wittgenstein wrote about. As AI advances into Wittgenstein’s ‘grey zones’ of language understanding, that could be the moment, if ever, when AI becomes sentient.

Parting thoughts

The intellectual frameworks developed by the Vienna thinkers provide a profound understanding of the roots of our digital age. From Ludwig von Mises’s advocacy for individual choice to Friedrich Hayek’s emphasis on knowledge, these thinkers provide a foundation for understanding the digital economy’s market dynamics. Sigmund Freud’s insights into human psychology underpin the mechanisms of digital advertising, user engagement, and emotional AI. Ludwig Wittgenstein’s thinking on language corresponds very well with the latest AI developments centred around Large Language Models.

However, their ideas are not merely historical relics; they continue to stimulate reflection and debate. Schumpeter’s caution against monopolistic tendencies is echoed in contemporary discussions about tech giants’ dominance. Hayek’s warnings about knowledge centralisation resonate in debates about data privacy and AI control. Freud’s theories raise ethical concerns about manipulating user behaviour in the digital world. Wittgenstein’s insights about language challenge AI developers’ attempts to mimic human knowledge and consciousness. 

In sum, the Vienna thinkers have left a lasting imprint on human society, shaping how we think about economics, psychology, and language. As we grapple with the rapid evolution of digital technologies and their impact on our lives, their legacies continue to prompt reflection, criticism, and inspiration as we tread the intricate path of AI developments.