This post is part of the AI Apprenticeship series. By Dr Anita Lamprecht (supported by AI)

Introduction

What happens when AI agents begin speaking in their own language, one that humans cannot understand? While this may seem efficient, much like R2-D2's droid chatter in Star Wars, it raises urgent questions about interpretability and governance. How can we ensure accountability when the inner workings of AI communication are locked away in new forms of 'black boxes'? Let's test the learnings from Diplo's AI Apprenticeship online course to explore this question.

DroidSpeak

Remember the 'droidspeak' of R2-D2 in Star Wars? Scientists from Microsoft and the University of Chicago recently introduced a new multi-agent language of the same name: DroidSpeak, a novel framework that enables multi-agent communication through a language designed specifically for AI agents. The goal is to enhance efficiency and scalability in cross-LLM communication. Think of R2-D2, who communicates effectively in short beeping noises with tech systems across the Star Wars universe, from other droids to the computer systems of starships.

Video: https://www.youtube.com/watch?v=NsYQcZa7hCA

Speaking droid and the question of interpretability

Why worry about AI agents adopting the language of a beloved sci-fi character? R2-D2's incomprehensible beeps are charming and efficient, but should we replicate them in real-world AI systems? After all, why should AI agents use human language to interact with each other? This raises a critical question: what are the implications of AI agents communicating in a language humans don't understand? While R2-D2's 'droidspeak' may seem harmless and entertaining, today's AI systems face a similar choice: should they prioritise machine-like efficiency over human interpretability? To address this question, we will briefly examine Diplo's AI system and compare it with others to see what is possible.

Current multi-agent systems that use large language models (LLMs) communicate in natural (human) language. For example, when we use Diplo's research assistant, we can read how the agents collect the relevant sources in human language. This makes the agents' work more transparent and their output interpretable. When we use systems like Gemini or ChatGPT, we cannot see how the agents collect the information. Does that mean they are not interpretable for the average user?
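To make the contrast concrete, here is a minimal, hypothetical sketch in Python. It does not describe DroidSpeak's actual mechanics; the Agent class, its method names, and the payloads are invented purely for illustration. The point is the governance-relevant difference: agents that exchange human-readable messages leave an audit trail anyone can read, while agents that exchange opaque machine-only payloads do not.

```python
# Hypothetical illustration only: it shows the difference between interpretable
# and machine-only agent communication, not how DroidSpeak itself works.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    log: list = field(default_factory=list)  # human-readable audit trail

    def send_natural_language(self, other: "Agent", text: str) -> None:
        # The message itself is the record: anyone can read it later.
        self.log.append(f"{self.name} -> {other.name}: {text}")
        other.log.append(f"{other.name} <- {self.name}: {text}")

    def send_machine_only(self, other: "Agent", payload: bytes) -> None:
        # An opaque payload (imagine compressed embeddings or cached
        # intermediate states): efficient, but the log can only record
        # that *something* was exchanged, not what it meant.
        self.log.append(f"{self.name} -> {other.name}: <{len(payload)} opaque bytes>")
        other.log.append(f"{other.name} <- {self.name}: <{len(payload)} opaque bytes>")


researcher = Agent("ResearchAgent")
writer = Agent("WritingAgent")

# Interpretable exchange: the reasoning is visible in the transcript.
researcher.send_natural_language(
    writer, "Found three sources on AI governance; summarising the most recent one first."
)

# 'Droidspeak'-style exchange: only the fact of communication is visible.
researcher.send_machine_only(writer, b"\x8f\x02\xa1")  # placeholder bytes

for entry in researcher.log:
    print(entry)
```

In the first case the transcript itself is the explanation; in the second, interpretability has to be reconstructed after the fact, which is exactly the black-box concern discussed below.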
A crucial lecture from our AI Apprenticeship course concerns the governance of known and unknown aspects: known aspects, such as the common architecture, can be addressed and governed directly, while unknown aspects require a risk-control approach. 'Unknown' means that we cannot sufficiently understand how a part of an AI system works. Our lecturer identified interpretability and the black box problem of neural networks as the most critical 'unknown' for current governance discussions. The 'black box problem' refers to the opacity of decision-making processes within AI systems, where even developers cannot fully explain how an outcome was reached. This challenge of interpretability is central to the development and governance of AI. There are ways to interpret the critical decision-making process post hoc, but such retrospective interpretation is only an approximation and cannot deliver the required certainty.

The complexity of the process is usually cited as the main reason behind the black box problem. Are such systems complex by nature? AI systems, while complex and emergent, are ultimately created by humans. This means that seemingly 'unknown' aspects often arise from deliberate design choices made by human architects. We have learnt that the common architecture is a known factor and can therefore be addressed and governed. The question of governance is thus a question of timing rather than of complexity.

Governance as a question of timing

Complexity and its relation to the time horizon of governance are among the core themes of my futures literacy research. To better understand this dynamic, we analyse the governance of AI from the perspective of futures literacy: the ability to make sense of the narratives and building blocks of the future.

In 1980, David Collingridge published a book discussing 'one of the most pressing problems of our time': controlling technology despite the lack of timely understanding of its social effects, now known as the Collingridge dilemma. Collingridge identified a critical challenge in managing technology: the earlier we act, the less we understand a technology's impacts, but the longer we wait, the harder it becomes to control. This dilemma is especially relevant for the example in this post: DroidSpeak. Early governance could help address interpretability before it is too late and the problem hardens into a new dimension of black boxes. Over the decades, we have learnt a lot about how technology shapes society. Can this knowledge and awareness allow us to act, react, and gain agency during early AI development? Is now the right time to address DroidSpeak and govern its development towards interpretability?

Governance as a question of AI literacy

The development of DroidSpeak and similar AI frameworks is not just a question of efficiency; it is a question of trust, accountability, and transparency. Without interpretability, we risk creating systems that act beyond our understanding and control. By prioritising governance and AI literacy now, we can ensure that the next generation of AI technologies serves humanity's interests, not just technocratic efficiency and scalability.

The AI Apprenticeship online course is part of the Diplo AI Campus programme.