The United Nations System Chief Executives Board for Coordination (CEB)1 produced, starting from UNESCO's work, its own "ethical approach" consisting of ten "Principles for the Ethical Use of Artificial Intelligence in the United Nations system". As the UNESCO and CEB formulations are not identical, we would extract from the CEB list three key principles that differ in formulation and content:

Do no harm: AI systems should not be used in ways that cause or exacerbate harm, whether individual or collective, and including harm to social, cultural, economic, natural, and political environments. […] The intended and unintended impact of AI systems, at any stage in their lifecycle, should be monitored in order to avoid causing or contributing to harm.

Sustainability: Any use of AI should aim to promote environmental, economic and social sustainability. To this end, impacts of AI technologies should continuously be assessed and appropriate mitigation and/or prevention measures should be taken to address adverse impacts, including on future generations.

Human autonomy and oversight: The United Nations system organizations should ensure that AI systems do not overrule freedom and autonomy of human beings and should guarantee human oversight. All stages of the AI system lifecycle should follow and incorporate human-centric design practices and leave meaningful opportunity for human decision-making. Human oversight must ensure human capability to oversee the overall activity of the AI system and the ability to decide when and how to use the system in any particular situation, and the ability to override a decision made by a system. As a rule, life and death decisions or other decisions affecting fundamental human rights of individuals must not be ceded to AI systems, as these decisions require human intervention.2

An initiative by the United Nations Secretary-General

The Secretary-General of the United Nations, António Guterres, could not miss the opportunity to claim his own imprint on the list of attempts to define the role of the world organization in handling AI from a perspective of global governance. The Secretary-General convened a High-Level Advisory Body on Artificial Intelligence, whose work culminated in the adoption of its final report, "Governing AI for Humanity", which establishes a comprehensive framework for global AI governance. The report emphasizes the need for an inclusive and cooperative approach to AI governance, recognizing that current frameworks are insufficient and that the development of AI is largely controlled by a few multinational companies. The report issued several recommendations for establishing a robust global governance framework.

Risks associated with AI

The report does not deal with the issue of ethics, but it uses a list of risks associated with AI that is highly relevant from an ethical perspective.

The UNESCO Recommendation on the Ethics of Artificial Intelligence appears to be the central piece produced by the United Nations system, recognized as such by the General Assembly in its first resolution on AI, titled "Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development".4 However, the adoption of this resolution should not be overestimated.
First of all, we are dealing again only with a recommendation, with a political declaration, with a framework for international cooperation. No binding rules. The industry will always have the upper hand at all stages of the AI life cycle. The international machinery will always need more time to keep up with new developments and will have to beg for resources to do anything meaningful. Just raising the flag of ethics and drawing the border between right and wrong will never be enough.

AI and military uses?

Secondly, and even worse, the governments of major powers and their military establishments will keep their hands free to develop AI systems for military purposes. Resolution 78/265 does not hide this truth and makes it clear: the resolution "refers to artificial intelligence systems in the non-military domain" (sixth preambular paragraph). Pandora's box will be open forever.

[Part 4 of a 6-part series]

Dr Petru Dumitriu was a member of the Joint Inspection Unit (JIU) of the UN system and former ambassador of the Council of Europe to the United Nations Office at Geneva. He is the author of the JIU reports on 'Knowledge Management in the United Nations System', 'The United Nations – Private Sector Partnership Arrangements in the Context of the 2030 Agenda', 'Strengthening Policy Research Uptake', 'Cloud Computing in the United Nations System', and 'Policies and Platforms in Support of Learning'. He received the Knowledge Management Award in 2017 and the Sustainable Development Award in 2019 for his reports. He is also the author of the Multilateral Diplomacy online course at DiploFoundation.