
Is it the future yet?

Jelena Dinčić
Published on March 9, 2020
This post discusses the predictions made in the film Blade Runner, the development of artificial intelligence (AI), its current applications in various sectors, challenges such as bias and job loss, and solutions including regulations, upskilling, and a universal basic income. It emphasises the need for careful handling of healthcare data, cybersecurity measures, and the importance of legislation to ensure AI benefits humanity. It also suggests the possibility of an international treaty on AI, involving different stakeholders and experts for a comprehensive and evolving approach.

November 2019, when the film Blade Runner is set, came and went. How well did this film – made in 1982 – predict the future? Well, we don’t have androids (replicants) that think and look like us, and there are no flying cars (yet). But technology has progressed greatly, and what seemed like science fiction then – video calling, for example – is now part of our everyday lives. There are more and more ‘smart homes’ with virtual assistants that can identify our voices, turn on our lights, adjust the room temperature, and answer all sorts of queries. Autonomous vehicles are being tested in many major cities, particularly in the United States; one report predicts that ‘Once technological and regulatory issues have been resolved, up to 15 percent of new cars sold in 2030 could be fully autonomous‘. AI is becoming a big part of our everyday reality, and we need to look at how it can help humanity and not harm it.

[Image: Blade Runner poster]

Before I go any further, I have to mention that there is no ‘one’ definition of AI. But in this blog we’ll use the English Oxford Living Dictionaries’ definition, which states that AI is: ‘The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.’ AI today is driven by huge amounts of data, and most of it uses deep learning (a type of machine learning) and natural language processing (a branch of AI that helps computers understand, decode, and employ language). Computers are fed large amounts of data to process and are trained to perform human-like tasks, such as identifying speech and images and making predictions.
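To make ‘fed large amounts of data and trained to make predictions’ a bit more concrete, here is a minimal sketch of supervised machine learning. It uses the open-source scikit-learn library and its small built-in handwritten-digit dataset (choices of mine for illustration, not tools referenced in this post) to train a simple model that then predicts the digit shown in images it has never seen:

```python
# A minimal sketch of supervised machine learning: the system is "fed"
# labelled images (handwritten digits) and learns to predict the label
# of images it has never seen. Uses scikit-learn's built-in toy dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()                      # 1,797 8x8 images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)   # a simple model, not deep learning
model.fit(X_train, y_train)                 # "training": learn from labelled examples

predictions = model.predict(X_test)         # "inference": predict labels for unseen images
print(f"Accuracy on unseen images: {accuracy_score(y_test, predictions):.2%}")
```

Deep learning systems follow the same train-then-predict pattern, but with far larger datasets and with neural networks in place of the simple model used here.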

There are two main types of AI: generalised and specialised. In generalised AI, the computer system would be able to display the same kind of intelligence that human beings do – to combine all the different data it has been fed, learn from it, and make intelligent decisions based on what it knows. This type of AI has not been created yet and, some argue, never will be, as it is impossible to replicate the human mind. Specialised AI – the type used today – is developed to address only a specific goal: for example, to play chess, to analyse medical data, or to drive a car.

How can (specialised) AI help mankind?

AI is already being used in several sectors, such as medicine and finance, and could possibly help us reach the sustainable development goals (SDGs).

•      In healthcare, AI is being used in telemedicine/telehealth: a patient living in a remote area who might otherwise not have access to a doctor is monitored (blood pressure, heart, blood sugar) via a wearable AI device, which, if it detects any alarming changes, can send the information to a physician. AI is also used in predictive analytics and precision medicine: it analyses a patient’s medical data, genetic history, lifestyle, environment, etc. AI can analyse large sets of data quickly, come up with personalised treatment plans for each patient, and even predict medical outcomes. An article in Healthcare Business & Technology argues that ‘In medicine, predictions can range from responses to medications, to hospital readmission rates. Examples include predicting infections from methods of suturing, determining the likelihood of disease, helping a physician with a diagnosis, and even calculating future wellness.’ One such example is IBM Watson for Oncology, a cognitive computing system which uses AI algorithms to generate treatment recommendations for cancer patients.

[Image: Telemedicine illustration. Designed by studiogstock / Freepik]

•      AI is being used by banks for credit assessment, and AI-powered algorithms are being used for trading on the stock market. In e-commerce, AI analyses our data in order to find out our preferences and propose similar items we may want to purchase. For example, Morgan points out that it ‘plays a huge role in Amazon’s recommendation engine … Using data from individual customer preferences and purchases, browsing history and items that are related and regularly bought together, Amazon can create a personalized list of products that customers actually want to buy’. (A toy sketch of this ‘bought together’ idea follows this list.)

•      ‘AI for social good’ means that AI could be used to help society, such as by helping us reach the SDGs. According to one discussion paper, ‘Through an analysis of about 160 AI social impact use cases, we have identified and characterized ten domains where adding AI to the solution mix could have large-scale social impact. These range across all 17 of the United Nations Sustainable Development Goals and could potentially help hundreds of millions of people worldwide. Real-life examples show AI already being applied to some degree in about one-third of these use cases, ranging from helping blind people navigate their surroundings to aiding disaster relief efforts.’ 
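As a rough illustration of the ‘regularly bought together’ idea mentioned above, the sketch below counts co-purchases in a handful of made-up orders and recommends the items most often bought alongside a given product. It is a toy example of the general principle, not Amazon’s actual recommendation engine, and all product names and orders are hypothetical.

```python
# A toy illustration of the "items regularly bought together" idea behind
# recommendation engines: count co-purchases, then suggest the items most
# often bought alongside something the customer already chose.
from collections import Counter
from itertools import combinations

orders = [  # hypothetical purchase histories
    {"phone", "phone case", "screen protector"},
    {"phone", "phone case", "headphones"},
    {"laptop", "laptop sleeve", "mouse"},
    {"phone", "headphones"},
]

co_purchases = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_purchases[(a, b)] += 1
        co_purchases[(b, a)] += 1

def recommend(item, top_n=2):
    """Return the items most frequently bought together with `item`."""
    scores = Counter({b: n for (a, b), n in co_purchases.items() if a == item})
    return [product for product, _ in scores.most_common(top_n)]

print(recommend("phone"))  # e.g. ['phone case', 'headphones']
```

Real systems combine many more signals (browsing history, related items, individual preferences), but the underlying idea of learning from patterns across millions of customers is the same.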

Some challenges

AI is not perfect and is often biased. While writing her thesis at MIT, Joy Buolamwini discovered gender and skin-type bias in commercial AI systems. These biased systems are being used not only in ‘harmless’ everyday apps, but also in AI face recognition software for surveillance. Some countries have video surveillance on their streets that uses face recognition software. The argument is that it reduces crime. But what if the software makes a mistake? And what if it is used to target people by race? Or to target people who attended a political protest? What about our right to privacy? Our images are being stored without our consent.

There are also privacy and ethical issues to consider when it comes to the gathering, storing, and sharing of healthcare data.

AI can also be used for criminal activities, such as cyber-attacks and stealing information through phishing, which could bring down infrastructure such as power stations and hospitals.

Another issue is that AI and automation will lead to job loss. How can this be dealt with?

Some solutions

One way to avoid the misuse of AI face recognition software is to follow San Francisco’s lead and ban face recognition surveillance. Some would argue that this is an extreme reaction and that, with proper regulation, such technology should be used. Personally, however, I’m not sure that, even with regulations in place, there won’t be significant abuse of such technology.

In order to avoid AI bias such as that discovered by Buolamwini, the data sets used to train an AI system need to be far more inclusive, and there needs to be awareness of such biases in order to fix them. In fact, after Buolamwini published her findings, the companies she had evaluated quickly made significant improvements to their facial recognition software.

[Image: Gender Shades]
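The Gender Shades project made this kind of bias visible by reporting error rates per demographic subgroup rather than one overall number. The sketch below shows, on entirely hypothetical audit records, how such a disaggregated evaluation can be computed; it illustrates the general idea, not the project’s actual methodology.

```python
# A sketch of disaggregated evaluation: instead of reporting one overall
# accuracy, measure error rates per demographic subgroup so that biases
# become visible. The audit records here are entirely hypothetical.
from collections import defaultdict

# (subgroup, true_label, predicted_label) -- hypothetical audit records
audit_records = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "female", "male"),    # misclassification
    ("darker-skinned women", "female", "female"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, predicted in audit_records:
    totals[group] += 1
    if truth != predicted:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

Once error rates are broken down in this way, it becomes much clearer which groups are under-represented in the training data and where a more inclusive data set is needed.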

When using personal data in healthcare, one must tread carefully. Patients must give their consent, and strict regulations need to be put in place to protect patients and to make sure their data is safe and can’t be used against them by, for example, insurance companies to raise their premiums or deny them care.

When it comes to cybersecurity, it is important to have national strategic frameworks on cybersecurity and AI in place, and there should be regional and international co-operation between national computer emergency response teams (CERTs) and similar institutions.

How to solve job loss due to AI and automation? One solution is upskilling, but not everyone can afford to go back to school. This is why some propose a universal basic income (UBI), which is not without its challenges, as Elon Musk pointed out at the World Government Summit in Dubai in 2017: ‘How will people then have meaning? A lot of people derive meaning from their employment. If you’re not needed, what is the meaning?’

It is not only low-skilled labour that is at risk of being replaced by robots: ‘Jobs at all levels in society presently undertaken by humans are at risk of being reassigned to robots or AI’, according to Bowcott. This is why governments need to make sure that education systems adapt quickly, so that future generations are skilled for the kinds of jobs that will be available to them alongside AI.

Moreover, new employment and labour legislation is needed. One proposal is for governments to introduce ‘quotas of human workers’. Bowcott suggested that governments could determine which jobs could be performed only by humans, such as childcare.

Norms and regulations need to be put in place to make sure that humanity benefits from AI and isn’t harmed by it. An international legally binding treaty on AI is one possibility. The digital world, of which AI is a part, goes beyond borders, and such a treaty would be in the interest of all countries as a way to protect their citizens. It should take existing norms and regulations into consideration, and its drafting should be an inclusive process that involves different stakeholders and experts and takes their opinions into account, so that the resulting treaty works for everyone. Lastly, any legislation or treaty related to AI should constantly evolve and adapt to accommodate developments in the field and avoid ‘law-lag’.

Jelena Dinčić holds a BA in Photography from the Ecole de Condé, Lyon, and a degree in film directing from the Ecole de Cinéma, Geneva. In addition to numerous photography and film projects, she has been working at Diplo since 2015 tackling course coordination, editing, and communications. She has successfully completed Diplo’s online course on Artificial Intelligence: Technology, Governance, and Policy Frameworks.

