How can we deal with AI risks?

Clarity in dealing with ‘known’ and transparency in addressing ‘unknown’ AI risks

In the fervent discourse on AI governance, there's an outsized focus on the risks from future AI compared to more immediate issues: we're warned about the risk of extinction, the dangers of future superintelligent systems, and the need to heed these problems. But is this focus on future risks blinding us to what's actually in front of us?

Types of risks

There are three types of risks: short-term, mid-term, and long-term. This text summarises each of them, their current coverage, and suggestions for moving forward.

Venn diagram of the three types of AI risks

Short-term risks include job losses, threats to data protection and intellectual property, loss of human agency, the mass generation of fake texts, videos, and sounds, the misuse of AI in education, and new cybersecurity threats. We are familiar with most of these risks, and while existing regulatory tools can often address them, more concerted efforts are needed.

Mid-term risks are those we can see coming but aren't quite sure how bad or profound they could be. Imagine a future where a few big companies control all AI knowledge, just as they currently control people's data, amassed over the years. They have the data and the powerful computers, and that could lead to them calling the shots in business, our lives, and politics. It's like something out of a George Orwell novel, and if we don't figure out how to handle it, we could end up there in 5 to 10 years. Some policy and regulatory tools can help deal with AI monopolies, such as antitrust and competition regulation and the protection of data and intellectual property, provided that we acknowledge these risks and decide we want and need to address them.

Long-term risks are the scary sci-fi stuff: the unknown unknowns. These are the existential threats, the extinction risks that could see AI evolve from servant to master, jeopardising humanity's very survival. These threats haunt the collective psyche and dominate the global narrative with an intensity paralleling that of nuclear armageddon, pandemics, or climate cataclysms. Dealing with long-term risks is a major governance challenge due to the uncertainty of AI developments and their interplay with short-term and mid-term risks.

The need to address all risks, not just future ones

Now, as debates on AI governance mechanisms advance, we have to make sure we're not focusing on long-term risks simply because they are the most dramatic and omnipresent in global media. To take just one example, last week's Bletchley Declaration, announced during the UK's AI Safety Summit, had a heavy focus on long-term risks; it mentioned short-term risks only in passing and made no reference to mid-term risks at all.

If we are to truly govern AI for the benefit of humanity, AI risks should be addressed more comprehensively. Instead of focusing heavily on one set of risks, we should develop an approach that addresses them all.

In addressing all risks, we should also use the full spectrum of existing regulatory tools, including some used in dealing with the unknowns of climate change, such as scenario building and the precautionary principle.

Ultimately, reducing these risks will involve complex and delicate trade-offs. Given the unknown nature of many AI developments ahead of us, these trade-offs must be made continuously and with a high level of agility. Only then can we hope to steer the course of AI governance towards a future where AI serves humanity, and not the other way around.

IGF 2023: Grasping AI while walking in the steps of Kyoto philosophers

The Internet Governance Forum (IGF) 2023 convenes in Kyoto, the historical capital of Japan. With its long tradition of philosophical study, the city provides a fitting venue for debate on AI, which increasingly centres on questions of ethics, epistemology, and the essence of human existence. The work of the Kyoto School of Philosophy on bridging Western and Asian thinking traditions is gaining renewed relevance in the AI era. In particular, the writings of Nishida Kitaro, the father of modern Japanese philosophy, shed light on questions such as human-centred AI, ethics, and the duality between humans and machines.

Nishida Kitaro, in the best tradition of peripatetic philosophy, routinely walked the Philosopher's Path in Kyoto alone. Yesterday, I traced his steps while trying to experience the genius loci of this unique and historic place.


On the Philosopher’s Path in Kyoto

Here are a few of Nishida Kitaro’s ideas that could help us navigate our AI future:

Humanism

Nishida’s work is deeply rooted in understanding the human condition. This perspective serves as a vital reminder that AI should be designed to enhance human capabilities and improve the human condition, rather than diminish or replace human faculties.

Self-Awareness and Place

Nishida delved deeply into metaphysical notions of being and non-being, the self and the world. As the debate on generative artificial intelligence advances, Nishida's work could offer valuable insights into the contentious issues of machine consciousness and self-awareness. It raises the question: what would it mean for a machine to be 'aware', and how would this awareness correlate with human notions of self and consciousness?

Complexity

Nishida paid significant attention to the complexities inherent in both logic and epistemology. His work could serve as a foundational base for developing algorithms that can better understand and adapt to the complexities of human society.

Interconnectedness

Nishida's philosophy is critical of the dualistic perspectives that often shape our understanding of humans versus machines. He would likely argue that humans and machines are fundamentally interlinked, and that we should formulate new approaches to AI that move beyond traditional dualistic frameworks (AI vs humans, good vs bad).


Nishida Kitaro, founder of the Kyoto School of Philosophy

Absolute Nothingness

Nishida anchors his philosophy in absolute nothingness, which resonates strongly with Buddhism, Daoism, and other Asian thinking traditions that nurtured the concept of ‘zero’, which has shaped mathematics and, ultimately, our digital world. Nishida’s notion of ‘absolute nothingness’ could be applied to understand the emptiness or lack of inherent essence in data, algorithms, or even AI itself.

Contradictions and Dialogue

Contradictions are an innate part of human existence and societal structures. For Nishida, these contradictions should be acknowledged rather than considered aberrations. Furthermore, these contradictions can be addressed through a dialectic approach, considering human language, emotions, and contextual elements. The governance of AI certainly involves many such contradictions, and Nishida’s philosophy could guide regulators in making the necessary trade-offs.

Ethics

Nishida's work aims to bridge Eastern and Western ethics, a task that will be among the critical issues of AI governance. He considers ethics within the wider socio-cultural milieus that shape individual decisions and choices. Ethical action, in his framework, comes from a deep sense of interconnectedness and mutual responsibility.

Nishida Kitaro would advise AI developers to move beyond merely codifying ethical decision-making as a static set of rules. Instead, AI systems should be developed to adapt and evolve within the ethical frameworks of the communities they serve, taking into account cultural, social, and human complexities.

Conclusion

As the IGF 2023 unfolds in the philosophical heartland of Kyoto, it’s impossible to overlook the enriching influence of Nishida Kitaro and the Kyoto School. The juxtaposition is serendipitous: a modern forum grappling with the most cutting-edge technologies in a city steeped in ancient wisdom. 

While the world accelerates into an increasingly AI-driven future, Nishida's work helps outline a comprehensive ethical, epistemological, and metaphysical framework for understanding not just AI but also the complex interplay between humans and technology. In doing so, his thinking challenges us to envision a future where AI is neither an existential threat nor a mere tool, but an extension and reflection of our collective quest for meaning.

A Philosopher's Walk in the steps of Nishida Kitaro could inspire new ideas for addressing AI and our digital future.

Read more on Nishida Kitaro's work in the Stanford Encyclopedia of Philosophy.