Last week, as the corridors of the UN General Assembly (UNGA) buzzed with the chatter of global leaders, our team at Diplo delved into an unusual experiment: analysing countries' statements using a 'hybrid intelligence' approach that combines artificial and human intelligence. Some of our findings were more than intriguing; they were paradoxical. A dash of AI hallucinations could spark creative problem-solving and novel approaches to humanity's challenges, such as the rising number of conflicts, climate change, and controlling AI itself.

AI hallucinations

Traditionally a term from psychology and literature, 'hallucination' has gained new currency as a description of the fabrication of facts and distortion of reality by AI platforms such as ChatGPT and Bard. The risk of hallucination is built into the way AI works: it makes informed guesses with a high probability of hitting the mark, rather than providing logical or factual certainty. Yet AI's guessing is highly plausible and realistic, as the answers and stories produced by ChatGPT show.

Diplomatic hallucinations

Hallucination could also describe certain features of diplomatic language. These often relate to the proverbial vagueness of diplomacy, used to avoid uncomfortable truths or to reconcile national interests with prevailing global values. Sometimes, diplomats must slip into a hallucination, reinterpreting facts and reality to create the constructive ambiguity that is critical to reaching a negotiation compromise. Practically speaking, they use metaphors, analogies, ambiguities, and other linguistic techniques as essential negotiation tools.

The UNGA: In vivo lab for language, diplomacy, and AI

Every September, the UNGA offers a unique lab for studying the interplay between language and diplomacy, with extensive use of metaphors, clichés, signalling, and nuanced language as global leaders address both the audience in the UN Main Hall and, equally importantly, the public back home. This year, the UNGA in vivo lab gained new relevance as a testbed for large language models (LLMs) on diplomatic speeches. Diplo's AI, supported by human expertise, sifted through this linguistic treasure trove, capturing the essence of each statement while identifying patterns of both diplomatic and AI hallucinations.

The double-edged sword of AI hallucinations

As we reported from UNGA speeches and debates, we gained new insights into both AI and diplomacy. In most instances, AI hallucinates by simply reformulating existing diplomatic jargon into new phrases. While these remixes did not offer much new insight, every now and then, AI hallucinated unexpected insights, such as:

The dilemma of perfection

So, we're confronted with a dilemma about when, and whether, we should allow AI to hallucinate. By default, we need AI to reflect reality and be as accurate as possible for most uses. For example, summaries of UN discussions should faithfully mirror what was said. Last week, as we fine-tuned our AI to minimise hallucinations, the quality of our reporting from the UNGA improved steadily. But in that quest for perfection, we sacrificed AI's probabilistic ability to make up facts and think outside the box, a trade-off the sketch below makes concrete. It made us wonder whether, in some cases, we might want to allow AI to hallucinate.

Think of it this way: there is a growing number of so-called creative sessions aimed at stimulating unconventional thinking, whether they're brainstorming sessions, idea labs, hackathons, incubators, unconferences, innovator cafés, paradigm-shifting seminars, ingenuity forums, Zen koans, or thought experiments. They are used to trigger shifts in habitual thinking by identifying paradoxes, juxtaposing ideas, and finding novel solutions to existing problems.
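To make that trade-off concrete, here is a minimal, self-contained sketch of temperature sampling, the standard knob that governs how far a language model strays from its single most likely next word. The vocabulary and probabilities below are invented for illustration; this is not Diplo's actual reporting pipeline.

```python
import math
import random

def sample_next_word(probs, temperature):
    """Pick the next word from a probability distribution.

    temperature -> 0: always take the most likely word (faithful, predictable).
    temperature > 1: flatten the distribution so unlikely words surface more
    often; the same mechanism drives both creative leaps and hallucinations.
    """
    if temperature == 0:
        # Greedy decoding: report the model's single best guess at reality.
        return max(probs, key=probs.get)
    # Rescale log-probabilities by temperature, then sample proportionally.
    scaled = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    threshold = random.random() * sum(scaled.values())
    cumulative = 0.0
    for word, weight in scaled.items():
        cumulative += weight
        if threshold <= cumulative:
            return word
    return word  # fallback for floating-point rounding

# Toy distribution for the next word after "The delegates reached a ..."
next_word_probs = {"compromise": 0.70, "deadlock": 0.20, "epiphany": 0.10}

print(sample_next_word(next_word_probs, temperature=0))    # always "compromise"
print(sample_next_word(next_word_probs, temperature=1.5))  # occasionally "epiphany"
```

Turned down, the temperature yields the accurate summaries that UN reporting demands; turned up, it restores the probabilistic wandering a brainstorming session might actually want.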
Why not use AI’s imperfections to help us think outside the box? Perhaps some AI systems should be left to hallucinate intentionally. Who knows? We may discover that the undiscovered genius of AI and humans working together lies precisely in their imperfections.