Dr Jovan Kurbalija, conductor of the orchestra called DiploFoundation, proposed a project called HumAInism: Visionary governance for humanity with artificial intelligence. The main aim of the vision is to find a way towards an outline of a ‘new social contract’. A wise initiative in the right place! The old social contract of Jean-Jacques Rousseau was born in Geneva, too.

As a first spontaneous and amateurish reaction to this lofty initiative, I wondered whether a new social contract is the most suitable means to address the impact of artificial intelligence. I believe that we should rather start by redefining the human condition besieged by artificiality. Why? Simply because, by lifting artificial intelligence onto the pedestal of the future of humanity, as we do, we actually consent, with considerable enthusiasm and optimism, to delegate some of the specific functions of the human being to more competent algorithms. If that is the case, we have to contemplate, inter alia, a number of possible implications.

The twilight of brainpower?

This kind of prospective governance should reflect on the consequences of the use of artificial intelligence on our brainpower. I do not want to go through the famous checklist of ifs by Rudyard Kipling, but I can afford to ask myself, ‘shall I be a Man?’ if:

The list could continue, but as short as it is, I have reasons to believe that, at the invitation of artificial intelligence, my brainpower may take a long and undeserved break, for many of its functions will be useless. My fingers will eventually be more practical than my brain.

A new kind of polarisation?

Despite its generous promise of improved efficiency, productivity, enhanced collective intelligence, and other goodies, artificial intelligence may lead to a new kind of polarisation between two fundamental social strata: the minority of coders and the majority of users. The more we rely on coders and programmers, the more we increase the gap between their knowledge and our ignorance, their sight and our blindness. An increasing number of people will depend on a decreasing number of coders and programmers. The latter’s minds will still work, while ours will be at leisure. If the coders write smart contracts, we may not need philosophers to write social contracts. We may anticipate a worrisome Gini coefficient of intelligence distribution, but humankind will keep moving.

Exploring the dark side and redefining progress?

We should not forget that all new technologies initially served good intentions and later turned against humans. Fire was meant to bring warmth and cook meat, but then it served to burn people at the stake and in Holocaust ovens. Dynamite was intended to help excavation for construction, but then it served to destroy human settlements. Atomic energy was supposed to produce more light and electric power, but then it served to annihilate cities and any form of life. Planes were invented to carry people, but they are also used to carry and drop bombs on them. Drones were acclaimed as tools to carry mail and medical aid to inaccessible places, and soon enough they were used for terrorist attacks.

The paradigm of prospective governance should be reversed: before advertising the potential benefits, we should analyse and prevent the potential dangers. Should we not stop equating new technologies with inevitable progress? Should we not redefine progress?

The danger of a new totalitarianism?

Undermining factors such as money, manipulations, fake news, junk knowledge, etc., already challenge democracy.
Social media mobilisation can act as a disruptive factor, but it is not capable of generating rational, systematic, constructive solutions, nor of taking accountability for its own slogans. With the disappearance of classes and the transformation of the masses into mere statistics and media ratings, artificial intelligence may take us to a new kind of totalitarianism. We already witness the emergence of new technology-enabled illegitimate powers across borders, ideologies, and classes. Power corrupts power holders! While we can assume that a possible rebellion of artificial intelligence against humans can be avoided by good-faith programming, we may not be able to prevent the abuses of the new power.

Creating new Gods?

The need for a superior intelligence has accompanied us since the dawn of humanity. So far, religious beliefs have survived all advances in science and technology because religion was created and accepted as a higher moral guide, above empirical knowledge and evidence. Artificial intelligence will irreversibly undermine the human propensity to have faith in a superior spiritual power. The algorithms may replace God and create a vacuum in the human soul.

* * *

Therefore, studies about future governance should envisage all the consequences of outsourcing human brain functions to artificial intelligence. Hopefully, there are still some ifs in the poetic prescription of Kipling which can make the difference:

If you can dream—and not make dreams your master;
If you can think—and not make thoughts your aim;

one could still be a Man!

Dr Petru Dumitriu is a member of the Joint Inspection Unit (JIU) of the UN system and former ambassador of the Council of Europe to the United Nations Office at Geneva. He is the author of the JIU reports on ‘Knowledge Management in the United Nations System’, ‘The United Nations – Private Sector Partnership Arrangements in the Context of the 2030 Agenda’, ‘Strengthening Policy Research Uptake’, ‘Cloud Computing in the United Nations System’, and ‘Policies and Platforms in Support of Learning’. He received the Knowledge Management Award in 2017 and the Sustainable Development Award in 2019 for his reports. He is also the author of the Multilateral Diplomacy online course at DiploFoundation.