
Keeping AI in check

Jesús Cisneros
Published on March 16, 2020
This article emphasises the importance of keeping AI in check, given its economic potential and the risks that come with it. Ethical and human-rights-compliant regulation is crucial: AI's pervasive nature requires the protection of data privacy and careful attention to ethical dilemmas, such as those raised by autonomous decision-making. Accountability, transparency, and understanding of AI systems are essential for building trust and ensuring alignment with societal values and human rights. Initiatives advocating for explainable AI aim to increase public comprehension and oversight, promoting a more responsible and accountable AI landscape.

Artificial intelligence (AI) is a broad term that encompasses high-end technologies capable of ‘performing human-like cognitive processes such as learning, understanding, reasoning and interacting’, according to Szczepanski. Nowadays, societies are exposed to AI through smartphones, virtual assistants, surveillance cameras capable of recognising individuals, personalised advertising, and automated cars, to name but a few examples.


According to the McKinsey Global Institute’s 2018 report, AI has the potential to inject an additional US$13 trillion into the world economy by 2030. In general terms, this immense amount of wealth is in itself a desirable outcome of technological progress. Nonetheless, the report also warns that AI, under the current circumstances, is likely to deepen the gaps between countries, companies, and workers, and that this ‘needs to be managed if the potential impact of AI on the world economy is to be captured in a sustainable way’.

Like any other instrument in human hands, AI can be a source of positive outcomes and, at the same time, bring about trouble. What sets AI apart from previous innovations, however, is its pervasiveness and the fact that, in order to function, it needs to process large amounts of data, much of it inadvertently collected from people going about their daily lives. This in particular needs to be regulated to make sure that AI developers respect the privacy of that data. People have the right to know what information is being collected about them and for what purpose.

Moreover, AI, especially through machine learning, creates devices capable of making autonomous decisions that can have an impact on people’s lives. For example, in an unavoidable accident, an autonomous car is able to decide whom it would rather hit (and probably kill), according to a set of instructions embedded in its system.

The ethical dimension of AI has become more visible after public scandals showed the extent to which systems could harm people through their decisions and their biases. The Cambridge Analytica case, and the biases detected in recruitment algorithms (which would discriminate against applicants from minority groups) or banking services (which would reject credit applications from members of certain groups), are iconic in this respect.
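
To make the bias problem more concrete, the short sketch below shows one simple way an auditor could test the outcomes of a recruitment algorithm for disparate impact: compare selection rates across groups and flag any group whose rate falls below 80% of the highest one (the ‘four-fifths rule’ often used as a rough screening threshold). The data, group labels, and threshold here are purely hypothetical; this is an illustration of the kind of check regulators and auditors ask for, not a complete fairness audit.

```python
# Illustrative only: a minimal disparate-impact check on the outcomes of a
# hypothetical recruitment algorithm. Real audits use richer data and methods.
from collections import defaultdict

# Hypothetical records: (applicant group, algorithm's decision)
decisions = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_a", "hired"), ("group_b", "rejected"), ("group_b", "rejected"),
    ("group_b", "hired"), ("group_b", "rejected"),
]

# Count outcomes per group.
totals, hired = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    if decision == "hired":
        hired[group] += 1

# Selection rate per group.
rates = {g: hired[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Four-fifths rule: flag any group whose rate is below 80% of the best rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact: {group} selection rate {rate:.0%} "
              f"is below 80% of the highest rate {best:.0%}")
```

Even such a crude check makes the point: bias is measurable, and measuring it is a precondition for holding systems accountable.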

The Moral Machine, an experiment run by MIT, shows the types of situation in which AI would need to make a decision with ethical or moral implications. By using these kinds of models, scientists highlight that much of the time there is no easy answer to an ethical dilemma. When an automated car has to decide between hitting a pedestrian crossing the street or hitting a wall and harming its passengers, what is at stake is the machine’s ability to discern the lesser evil. But identifying the lesser evil is a tricky business, since there are no universal definitions to guide moral reasoning. Data collected by the Moral Machine shows that people from different cultural backgrounds have a preference for different values when they have to decide what the most acceptable outcome of an unavoidable tragic situation, like a car accident, would be.

The complexity of teaching machines how to reason ethically lies in the difficulty of predicting a comprehensive catalogue of situations that raise ethical dilemmas, and of agreeing on the best course of action for each of them. The Embedded EthiCS initiative at Harvard explores this issue and advocates for the incorporation of ethical discussions into the training of scientists, so that they think through these questions while conceiving and developing AI systems. Developers have to be confronted with the need to ethically justify the specific instructions they put into their systems, and the outcomes they expect those instructions to produce.
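
As a purely hypothetical illustration of what ‘instructions embedded in the system’ means in practice, the sketch below writes down one possible, and very much contestable, rule for an unavoidable-collision scenario. The options, harm scores, and weights are all invented for this example; the point is that the ‘lesser evil’ ends up as an explicit choice typed in by a developer, which is exactly why that choice needs to be ethically justified.

```python
# Hypothetical sketch: how a collision-handling rule might be written down.
# The options, harm scores, and weights below are invented for illustration;
# they are NOT a recommendation or anyone's actual system.
from dataclasses import dataclass

@dataclass
class Option:
    description: str
    harm_to_pedestrians: float  # invented 0-1 scale
    harm_to_passengers: float   # invented 0-1 scale

def choose_manoeuvre(options, pedestrian_weight=1.0, passenger_weight=1.0):
    """Pick the option with the lowest weighted expected harm.

    The weights encode a moral judgement about whose harm counts for how much;
    changing them changes whom the car 'prefers' to protect.
    """
    def total_harm(option):
        return (pedestrian_weight * option.harm_to_pedestrians
                + passenger_weight * option.harm_to_passengers)
    return min(options, key=total_harm)

options = [
    Option("swerve into the wall", harm_to_pedestrians=0.0, harm_to_passengers=0.6),
    Option("brake but stay on course", harm_to_pedestrians=0.5, harm_to_passengers=0.05),
]

print(choose_manoeuvre(options).description)                         # "brake but stay on course"
print(choose_manoeuvre(options, pedestrian_weight=2.0).description)  # "swerve into the wall"
```

Changing a single weight changes whom the car ‘prefers’ to protect, which shows how quickly a technical parameter becomes a moral position.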

Another way of making sure AI systems do not harm the core values of societies is to incorporate respect for human rights into their conception. Unlike the haziness of ethical questions, compliance with human rights standards is facilitated by the fact that these rights are codified and universal.

International organisations, like the Council of Europe (CoE), have promoted a discussion on the possible conflicts between AI development and human rights. The CoE has come to the conclusion that states play a fundamental role in making sure that tech companies do not damage the human rights and fundamental freedoms of people. A ten-step guide published by the CoE starts with the need to conduct a human rights impact assessment of AI systems.

Technology is a social product and, as such, it embodies values that orient the way it operates, even if its creators had no such intention in the first place. A system that violates privacy rules, that causes indiscriminate damage to a person’s life or property in the case of an accident, or that discriminates against certain groups of people should be held accountable, even if there was no direct human intervention at the moment the damage was produced. AI is challenging in this respect, because the notions of responsibility of the analogue world aren’t always applicable in the digital one.

Societies should not forget that technology is a product of the human mind, and that even the most intelligent machines limit themselves to following the instructions embedded in them by their human creators. There’s human responsibility behind every step of the creation and operation of AI systems. To trust AI, it’s necessary to hold those in charge of the development and use of high-end technologies accountable for the effects that their actions might have on the lives of individuals. To make this possible, it’s important to establish appropriate governance, including smart regulation and independent, transparent, and accountable institutions in charge of enforcing it.

Since AI is a broad, developing field, the institutional framework set up to regulate it should be flexible and capable of evolving over time as well.

The most urgent task in making AI trustworthy is making this technology understandable to the public. This could be done by imposing on developers an obligation to explain and justify, in clear terms, the decisions behind the systems they develop. DARPA’s project on explainable AI (XAI) is moving in this direction. By advocating for the explainability of AI systems, it moves towards a scenario where more people understand how AI functions and are able to keep it under scrutiny. A more accountable AI is, after all, the best insurance against any abuse of this technology that could harm the values defended by contemporary societies.
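
As a rough sketch of what ‘explaining a decision in clear terms’ could look like at the technical level, the example below, which assumes the scikit-learn library and uses invented loan data, trains a small decision tree and prints the rules it has learned, together with its verdict on one applicant. Real explainable-AI work, such as DARPA’s XAI programme, goes far beyond readable rules, but the example shows the direction of travel: decisions that can be read, questioned, and contested.

```python
# Illustrative sketch (assumes scikit-learn is installed): a model whose
# decision logic can be printed and inspected. All data here is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical loan applications: [income (thousands), debt ratio, years employed]
X = [
    [20, 0.9, 0], [25, 0.8, 1], [30, 0.7, 2], [35, 0.6, 1],
    [45, 0.4, 4], [50, 0.3, 5], [60, 0.2, 8], [80, 0.1, 10],
]
y = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = rejected, 1 = approved

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A human-readable account of the rules the model has learned.
print(export_text(model, feature_names=["income", "debt_ratio", "years_employed"]))

# The model's verdict on one (invented) applicant.
applicant = [[40, 0.5, 3]]
verdict = "approved" if model.predict(applicant)[0] == 1 else "rejected"
print("Decision for applicant:", verdict)
```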

Jesús Cisneros is a Mexican diplomat, currently in charge of political and multilateral affairs at the Embassy of Mexico in France. He’s a graduate of the National School of Administration (ENA) in France, and holds a Master’s Degree in public communication from the Sorbonne University. He has successfully completed Diplo’s online course on Artificial Intelligence: Technology, Governance, and Policy Frameworks.

