Africa’s philosophy of Ubuntu offers a moral language for governing intelligent systems that increasingly govern us.

The paradox of a global technology

Artificial intelligence has produced a striking paradox. The more global the technology becomes, the narrower the moral vocabulary through which it is governed. The systems now being designed in laboratories, firms and ministries travel quickly across borders, institutions, and cultures. They adjudicate credit, identify faces, classify texts, sort welfare recipients, support diagnosis, optimise logistics, and increasingly mediate the everyday relations between persons and power. Yet the ethical language used to justify and regulate these systems remains disproportionately derived from a small number of philosophical traditions. What passes for universality in AI governance is often the global circulation of a particular moral anthropology rather than the patient construction of a genuinely plural ethical order.
The prevailing frameworks are by now familiar. They invoke fairness, transparency, accountability, privacy, safety and human oversight. These principles deserve serious respect. They emerged in part as a response to real technological harms and they remain indispensable to any credible effort to govern powerful computational systems. The philosophical difficulty begins when they are treated not as historically situated achievements but as self-evident and exhaustive expressions of ethical reason. In that moment, universality ceases to be an aspiration and becomes a habit of exclusion. One conception of the person, the polity and the good life takes on the appearance of neutrality. Other moral worlds are invited to participate only after the grammar of the conversation has already been settled.
My master’s research, undertaken in Contemporary Diplomacy with a specialisation in Internet Governance and guided intellectually by Dr Jovan Kurbalija within the wider educational infrastructure of DiploFoundation, emerged from dissatisfaction with that narrowing of ethical imagination. The thesis asked whether the African philosophy of Ubuntu could serve not merely as a contextual supplement to dominant frameworks but as a genuine paradigm for thinking about artificial intelligence governance in African settings. The answer I arrived at was not that Ubuntu should replace every existing norm, but that it exposes the incompleteness of a field that has come to mistake one civilisational vocabulary for the whole of moral thought.
Ubuntu is often expressed as the idea that a person is a person through other persons. Repetition has made the phrase sound simple, even sentimental, but philosophically it is demanding. It unsettles the image of the human subject presupposed by much modern law and ethics. The person here is not first an isolated rights-bearing unit who later enters into social relations. Personhood is formed within relation. Dignity is not merely possessed; it is recognised, sustained and sometimes damaged through the quality of social life. Responsibility is reciprocal. Justice is not fully captured by procedure, because wrongdoing is not only a violation of a rule but also a wound in the texture of communal existence.
Once that ontology is taken seriously, the ethical stakes of artificial intelligence begin to look different. Consider the dominant emphasis on individual consent in contemporary data governance. Consent is important, but in relational societies, it is often insufficient as the sole moral test of legitimacy. Data about one person may implicate a family, a language community, a lineage, a village, a customary land arrangement or a wider history of extraction. Likewise, harm cannot be reduced to what can be measured at the level of the isolated subject. An algorithm may be formally non-discriminatory in a narrow statistical sense and yet still damage trust, displace locally legitimate forms of judgment or deepen the authority of distant institutions over communities that have little power to contest them. To think ethically from Ubuntu is therefore to move from a thin morality of compliance toward a thicker morality of relation.
This shift has concrete implications for AI governance. It means that accountability cannot be discharged by documentation, model evaluation, or a technical audit alone. The ethical question is not only whether a system satisfies a predefined standard, but whether those who live under its effects can understand, contest, and reshape it within institutions they recognise as legitimate. It means that impact assessment should not stop at privacy or bias narrowly conceived, but also ask whether a system fractures social reciprocity, erodes communal trust, or marginalises ways of knowing that are central to collective life. It means that explainability is not merely a property of a model but a political relation between those who wield a system and those who must live with its judgments.
The legal consequence of this difference is especially significant. Much contemporary AI governance assumes that the subject of law is the individual claimant and that the remedy is compliance secured by the state. Yet many African societies live within conditions of legal pluralism, where statutory law, customary authority and religious norms coexist and interact. In such settings, legitimacy cannot be produced solely by formal regulation. Oversight must be intelligible across multiple moral and institutional registers. An Ubuntu-informed framework, therefore, points toward forms of governance that liberal templates rarely imagine: community review bodies, negotiated and collective forms of consent, and interpretive institutions able to respond not only to private injury but also to fractures in shared life.
Over the past six months, while contributing to Guinea’s National Artificial Intelligence Strategy, I have seen firsthand how these philosophical questions surface in state practice. African governments are being asked to adopt AI in domains ranging from agriculture and health to administrative modernisation and security. They are also being offered governance templates that arrive preformatted by external institutions, external markets and external histories. These templates are not without value; they often provide urgently needed structure. But they also reveal a recurring asymmetry. Local actors are expected to translate their concerns into a language already sanctioned elsewhere. The categories that appear first are those of imported risk management. The categories that struggle for space are those of communal legitimacy, linguistic justice, historical memory, customary authority, and the moral economy of shared life.
This difficulty is not accidental. It is one expression of a larger pattern in which technological modernity continues to carry colonial afterlives. Throughout Africa, governance technologies were long introduced through external administrative logics that treated local societies as objects of classification, surveillance, and reform. AI can easily extend that history if embedded in public institutions without corresponding attention to the ethical worlds within which those institutions operate. A biometric identity system may be celebrated as efficiency and inclusion while also deepening administrative distance, mistrust or dependence. A predictive agricultural tool may improve yields while weakening customary forms of ecological knowledge. A language technology may expand access for some while further marginalising communities whose languages remain outside the computational frame.
It is in this context that continental initiatives matter. The African Union’s emerging work on artificial intelligence, the policy coordination efforts of Smart Africa, and the proliferation of national strategy processes all signal that the continent is no longer willing to remain a passive recipient of digital norms. Africa is attempting, however unevenly, to become a site of norm production. But that project will remain shallow if African ethical traditions appear only as rhetorical markers attached to institutions whose operative assumptions remain imported. The problem is not solved by inserting the word Ubuntu into a preamble. It is solved, if at all, by institutional translation. That translation may take the form of multilingual public consultations, community-centred data stewardship, relational impact assessments, legal recognition of collective harms, and governance bodies able to mediate between statutory regulation and customary or social legitimacy.
The appeal of such an approach extends well beyond Africa. Contemporary AI harms in many parts of the world reveal the limits of an ethics centred too narrowly on autonomous individuals. Platform systems have altered the texture of public life, not only by violating privacy but by degrading common worlds. Automated decision systems influence labour markets, not only through bias but through the reorganisation of dependency and precarity. Surveillance architectures do not merely collect information about discrete users; they reshape the terms on which people inhabit public space. In each of these cases, the language of isolated rights captures something important but not enough. Relational philosophies such as Ubuntu help disclose that what is at stake is not simply the protection of the individual subject, but the maintenance of a habitable social order.
This is why the question of moral pluralism in AI governance should not be misconstrued as a plea for cultural exceptionalism. The argument is not that Africa needs one ethical framework while the rest of the world keeps another. The argument is that no adequate governance of planetary technologies can emerge from a monoculture of moral thought. Ethical pluralism is not an indulgence; it is a condition of realism. Technologies that operate across diverse societies will generate harms, expectations and claims that cannot be interpreted through a single civilisational lens. A mature global order for AI would therefore not treat non-Western philosophies as local colour. It would treat them as sources of conceptual revision capable of enlarging what the global conversation means by intelligence, fairness, responsibility and human dignity.
This conviction also explains the direction of my proposed doctoral research. The next stage of inquiry will move beyond Ubuntu as an ethical framework for contemporary governance and investigate the deeper historical question of African epistemic traditions in relation to artificial intelligence and digital geopolitics. The project begins from a proposition that is at once historical and political. AI should not be understood in Africa only as an imported technological rupture. It can also be approached as a new layer placed upon much older traditions of classification, prediction, interpretation, and knowledge stewardship on the continent.
Ancient Egyptian mathematics, the Yoruba Ifa corpus, African fractal architectures, monastic traditions of hermeneutic reasoning, manuscript cultures of astronomy and administration, and other precolonial forms of structured knowledge all suggest that Africa’s intellectual history contains sophisticated resources for thinking about intelligence and prediction. These traditions are not modern machine learning in disguise, and any facile equation would flatten both history and philosophy. Their importance lies elsewhere. They demonstrate that abstraction, inference, recursion and custodianship are not foreign to African thought. To recover these histories is therefore to challenge a geopolitical narrative in which Africa appears only as a market for AI, a source of critical minerals, or a terrain of competition among larger powers. It is to restore Africa as a producer of concepts.
That restoration matters for contemporary governance because material sovereignty without epistemic agency is fragile. A continent may possess minerals, markets, and youthful demographics and still remain subordinate if the governing concepts of its technological future are authored elsewhere. The deeper promise of linking Ubuntu, AI ethics, and older African epistemologies is that it allows Africa to enter the digital century not merely with demands for inclusion, but with intellectual propositions of its own. It allows a critique of the present to become a proposal about the future.
In the end, the philosophical problem of AI governance is not reducible to regulation, even though regulation remains necessary. It concerns the image of the human and the social order that our technologies presuppose and reproduce. If artificial intelligence is becoming part of the infrastructure through which contemporary life is organised, then its ethics cannot be left to a single tradition that mistakes its own history for humanity as such. Ubuntu reminds us that intelligence without relation is incomplete. The longer arc of African intellectual history reminds us that the continent’s engagement with knowledge, prediction and stewardship is older and richer than the digital present. To take those insights seriously is not to provincialise global ethics. It is to rescue it from provinciality.
The graduation ceremony in Valletta
In early March, inside a baroque basilica in Valletta, I received a master’s degree from the University of Malta. Academic ceremonies rarely change the world. Yet they can illuminate the question one is prepared to pursue for the next decade. As the Latin formula conferring the degree echoed through the nave, my thoughts were not on personal achievement. They returned instead to the central question of the dissertation that had brought me there. Who defines the ethics of artificial intelligence?

The ceremony began with a procession from the university’s Valletta campus. We walked through the narrow streets of the old city toward the Basilica of Our Lady of Safe Haven and Saint Dominic. Cafés and bistros were filled with people who paused to watch the unusual sight of graduates and professors in academic robes moving slowly through the city. Some clapped as we passed. Tourists stepped aside with curiosity. We followed each other quietly through the streets until the basilica appeared at the end of the route.
Inside the church, the ritual unfolded with careful precision. When my name was called, I stepped forward toward the presbytery where the university’s officials were standing. The Rector placed the academic cap on my head. The Chancellor handed me the scroll confirming the degree. I signed the register of graduates.
At the end of the ceremony, we rose together to pronounce the solemn declaration of graduates. The moment was brief yet symbolic. In that centuries-old basilica, surrounded by ritual and history, the question that had guided my research felt even more urgent. In an era shaped by artificial intelligence, the decisive issue is not only what these systems can do; it is who has the authority to define the ethical principles that will govern them.
About the author
Emmanuel Elolo Agbenonwossi is a researcher and policy adviser working on artificial intelligence (AI) governance, digital policy and cyber diplomacy. He recently graduated summa cum laude from the University of Malta with a Master’s in Contemporary Diplomacy, specialising in internet governance. He served as principal consultant with the United Nations Development Programme on the development of Guinea’s national AI strategy and advised several multilateral organisations and regional initiatives. His work contributes to policy discussions and programmes led by institutions such as Smart Africa, ECOWAS and several African governments. His proposed doctoral research examines the intersection of African epistemologies, AI governance and the emerging geopolitics of the digital age.