AI optimism in a geopolitically pessimistic Davos

Against the serene backdrop of the Swiss Alps, the World Economic Forum (WEF) in Davos stands as a barometer of the year's technological zeitgeist. Ever since 1996, when John Perry Barlow's proclamation of cyberspace's independence echoed through its halls, the WEF has consistently heralded the arrival of groundbreaking tech paradigms, from the rise of social media to the advent of blockchain.

After an exceptionally tech-shy summit in 2023, artificial intelligence (AI) restored tech optimism in Davos this year. The blue of the tech sky stood out all the more against gloomy, cloudy global geopolitics. Of the 235 sessions at WEF, 24 were AI-centric; they featured 135 speakers and a flurry of 1,101 arguments, the majority positive (600), some neutral (319), and a few negative (182).

Is this optimistic dose of AI a panacea for global problems, a sign of the maturation of AI discourse, or yet another tech sleepwalk? Here are some reflection points aimed at answering this question.

Optimistic dosage: The WEF AI debates rated a solid '8' on DiploAI's optimism scale. Conversations revolved around AI's potential to elevate productivity, combat disease, and solve environmental crises. Economic forecasts presented AI as a trillion-dollar boon, a stark contrast to the AI doomsday scenarios of spring 2023.

From extinction to existing risks: Sam Altman and the signatories of several 2023 letters on AI extinction risks recalibrated their language at WEF. Risks of extinction and existential threats are no longer 'in'. AI risks were framed as problems that humanity will overcome, just as it has with past technologies. It is unclear whether the threat of AI extinction has vanished, whether the signatories have discovered something new, or whether corporate interests have taken precedence over all other AI-related concerns.

AI gurus owe us an answer to these questions, as their predictions that AI would destroy humanity were made with great conviction only last year. Without an explanation, the next time they 'scream', no one will take them seriously. Worse, such an approach would erode trust in the science that underpins AI.

Most of the discussion on AI risks focused on misinformation, job losses, and education reform.

IPR and content for AI: The New York Times' court case against OpenAI over the use of copyrighted material to train AI models was frequently mentioned in Davos. Traceability and transparency in AI development will be critical for a sustainable and functional AI economy.

AI governance: Last year's narrative that governance should focus on AI capabilities has given way to a focus on AI applications and uses. This shift makes AI less unique and more governable, like any other technology. The approach, embodied in the EU AI Act, is also gaining popularity in the United States. The WEF discussions revealed more similarities than differences in how China, the United States, and Europe regulate AI.

AI governance pyramid

Open-source AI: The stance on open-source AI as an unmanageable risk softened at WEF. Yann LeCun of Meta argued that open-source AI not only benefits scientific progress but also helps check the monopolies of large AI tech companies and brings diverse cultural and societal inputs into AI development. Open-source AI is set to gain traction in 2024, posing a significant challenge to proprietary models such as OpenAI's.

AI and development: According to the UN Technology Envoy, Amandeep Singh Gill, AI will not save the SDGs if current trends continue. This candid assessment was an outlier in the WEF discussions. There was, for example, little discussion of the AI-driven widening of digital divides or the increasing concentration of economic and knowledge power in the hands of a few companies.

Sam Altman's admission that 'no one knows what comes next' with AI encapsulates the dual nature of this AI juncture: it is both alarming and reassuring. The uncertainty voiced by those at the forefront of AI development is concerning, hinting at a possible 'Frankenstein moment'. At the same time, it is encouraging that Altman and others now speak frankly about the limits of their knowledge of AI's impact, without last year's fear-mongering.

The current confusion around AI's potential and risks underscores the need for a mature, nuanced conversation about AI's future. While the unpredictability of 'unknown' AI risks persists, we must navigate the 'known' challenges with agile, transparent, and inclusive regulatory frameworks. The Davos debates made strides in this direction, steering us away from a dystopian AI future through informed, balanced dialogue rather than fear.