Top digital policy developments in 2019: A year in review

Sorina Teleanu
Published on January 20 2020
The top digital policy developments of 2019 include increased focus on issues like online misinformation, surveillance technologies, Internet shutdowns, and digital taxation. Antitrust investigations against tech giants, concerns over online rental business models, and regulatory concerns surrounding Facebook's Libra cryptocurrency are also highlighted. Various initiatives and regulations were introduced to address these digital challenges globally, emphasizing the need for balancing technological advancements with privacy, competition, and consumer rights.

As the new year begins, let’s take a look back at the major developments that shaped digital policy in 2019. 

Some issues were in focus throughout the year, such as those related to tackling the spread of online misinformation and violent extremism.

Most developments were in line with the predictions we made in early 2019, particularly in areas such as artificial intelligence (AI) and digital identities. New breakthroughs in AI applications were accompanied by new AI-related plans and strategies from governments, as well as new international initiatives that will look into the economic, social, human rights, and ethical implications of the technology. The future of work was another much-debated question: The main areas of interest here included the status of workers in the gig economy and the impact of automation and AI on the jobs market. 

One crosscutting trend was increasingly intense governmental scrutiny of tech companies, and in some cases new steps taken to regulate them. We saw continued efforts to tax the digital giants, with many countries taking unilateral action despite no agreement having been reached at the international level. Competition authorities opened new probes into the market behaviour of Amazon, Apple, Facebook, and Google. 

As companies operating in the platform economy expanded their businesses, more concerns were raised over their impact on traditional businesses and consumers. Facebook’s cryptocurrency Libra generated intense discussions focusing on its potential impact on financial stability and monetary sovereignty.

In the digital rights sphere, data protection watchdogs launched further investigations into the privacy practices of tech companies, fining some for non-compliance with privacy rules. Surveillance technologies attracted more public scrutiny, with facial recognition technologies (FRTs) in the headlines throughout the year. Internet freedom continued to decline: a growing number of Internet shutdowns was recorded around the world. 

Within the framework of the UN, discussions continued of the norms and rules required to govern state behaviour in cyberspace, as the sixth Group of Governmental Experts and the Open-ended Working Group each started work this year. The trade war between China and the USA continued, exacerbated by President Trump’s decision to ban American companies from selling technologies to Huawei. 

Lastly, the UN Secretary-General’s High-level Panel on Digital Cooperation issued its report in June 2019, calling on all actors to step up their efforts towards more efficient digital co-operation.

This year-in-review report discusses 2019’s top 20 digital policy developments (arranged by theme). It brings together the expert analysis published by the Geneva Internet Platform (GIP) on the GIP Digital Watch observatory, the Digital Watch newsletter’s identification throughout the year of the main trends and focus areas in the digital world, and other research and studies undertaken by DiploFoundation’s team. Special thanks go to Prof. Jovan Kurbalija, Katarina Anđelković, and Andrijana Gavrilović for their support in building this report, and to all Diplo/GIP topic experts, whose contributions to the Digital Watch newsletter are reflected here.

Our reflections will continue throughout 2020. Join our briefings on the last Tuesday of every month for a regular recap of global and regional developments: The next briefing on 28 January 2020 will focus on predictions for 2020. All digital policy developments are further analysed in our monthly newsletter, available in English and French.

Find the latest updates, trends, processes, and other resources on the GIP Digital Watch observatory at https://dig.watch, and keep track of global events using DeadlineR, a notification system that can be accessed through our calendar of digital policy events.

Comments are welcome! Get in touch via gip@diplomacy.edu

The main developments in 2019:

1. UN groups discuss state responsibility in cyberspace
2. As cyber-attacks continue, new regulations and policies emerge
3. Data breaches increase in number, reaching a record high
4. Privacy comes into focus as the number of investigations increases
5. Internet shutdowns threaten online freedoms
6. Tackling online violent extremism remains a priority
7. Internet companies take new measures to address online misinformation
8. EU adopts Copyright Directive, but concerns remain 
9. Governments move towards new digital tax rules
10. Antitrust investigations target tech giants
11. Online rental businesses face pushback
12. Facebook’s cryptocurrency Libra attracts regulatory concerns
13. Huawei controversy has far-reaching implications
14. As AI technology evolves, more policy initiatives emerge
15. Facial recognition generates human rights concerns
16. Future of work fuels more debates
17. More digital identification programmes take off
18. Digital inclusion needed to achieve sustainable development
19. Digital technologies converge with other sectors
20. UN panel calls for improved digital co-operation


1. UN groups discuss state responsibility in cyberspace

In December 2018, the UN General Assembly (UNGA) approved the creation of two groups to further explore issues around the need for responsible state behaviour in cyberspace: an Open-Ended Working Group (OEWG) and a new Group of Governmental Experts (GGE). The two groups – proposed in resolutions put forward by Russia and the USA, respectively – started work in 2019.

GGE-OEWG

The reports of previous GGEs (published in 2010, 2013, and 2015) confirmed that international law applies to cyberspace, and outlined a number of voluntary norms, confidence-building measures (CBMs), and capacity-building priorities. The new GGE and the OEWG are expected to look into how international law applies to cyberspace and to further develop norms and CBMs. In addition, the OEWG is also expected to establish regular dialogues between states and to clarify concepts requiring discussion. 

Both groups had their first sessions, as well as informal consultations, in the second half of 2019. Reports from the open sessions suggested that their discussions were notably similar. Each group emphasised the importance of implementing the norms outlined in the previous UN GGE reports. On capacity building, both noted the need for sensitivity to regional and national contexts and that the principles of national ownership, transparency, and sustainability must be respected. It was also clear that capacity-building activities should be co-ordinated to avoid duplication of effort. However, neither group answered the question of how international law applies to cyberspace.

Why is this significant?

Countries are increasingly investing not only in defensive cyber-capabilities but also in offensive ones. This increases the risk of cyber warfare, threatening stability in cyberspace and beyond. Although most countries have agreed in principle that international law applies to their behaviour in cyberspace, serious risks remain since they have different positions on how exactly international law applies and what such law means in the case of cyber-attacks.

Among the questions that remain unresolved, and on which the GGE and OEWG hope to make progress, are how exactly international law applies to state behaviour in cyberspace and what responses it permits in the case of cyber-attacks.

What next?

Although the achievements of the GGE and OEWG are expected to be fairly modest, hopes are nevertheless high: the alternative is the transformation of cyberspace into a battlefield. The groups’ endeavours will continue in 2020, with second and third sessions scheduled for both between February and August. The OEWG is also expected to present its report at the 75th UNGA in September, while the GGE will do the same at the 76th UNGA in 2021. 

Discussions of cyber norms, CBMs, and the roles and responsibilities of actors in cyberspace are also ongoing outside of these groups. Examples include the G7 plans for a Cyber Norm Initiative, the Global Commission on the Stability of Cyberspace (which presented its final report in November 2019, reiterating their proposals for eight new voluntary norms), the Internet Governance Forum, the Global Forum on Cyber Expertise, and the Geneva Dialogue on Responsible Behaviour in Cyberspace.

Read more: New Year’s in New York: A Crowded Cyber-Norms Playground (Part 1 & Part 2)

Follow relevant issues and processes on the GIP Digital Watch observatory: UN GGE & OEWG | Cyberconflict and warfare

Back to the list of developments

2. As cyber-attacks continue, new regulations and policies emerge 

At the start of 2019, the World Economic Forum predicted that cyber-attacks, data fraud and theft, and critical information infrastructure breakdowns were going to be among the year’s top 10 global risks. The number of cyber-incidents reported throughout the year indicates that the prediction was accurate.

The targets of cyber-attacks were multiple and diverse, from public authorities and banks to hospitals and online services. Australia detected an attack on the national parliament’s computer network, for example, while a ransomware attack prompted Johannesburg to shut down several municipal systems. Other attacks targeted government systems in the Canadian territory of Nunavut, the US state of Louisiana and the city of New Orleans, and the UK Labour Party.

The Maltese Bank of Valletta was affected by a cyber-breach that allowed hackers to transfer €13 million to accounts in the USA, the UK, the Czech Republic, and Hong Kong. A French hospital and nuclear power facilities in India and the UK were also targeted by cyber-criminals. 

Significant incidents in the field of online services occurred when hackers exploited a WhatsApp vulnerability to install surveillance malware on smartphones, and when a massive distributed denial of service (DDoS) attack took Wikipedia offline in several parts of the world. 

There were also reports of cyber-attacks against critical information infrastructures, some of which were alleged to have been co-ordinated by states. Examples included cyber-operations against Iranian computer systems that control missile launches, Russia’s electrical grid, and Venezuela’s hydroelectric power operations.

Why is this significant?

These and other cyber-attacks were reminders of the continued vulnerability to cyber-threats of individual users, public institutions and governmental networks, critical information infrastructures, online services, financial and healthcare institutions, etc. 

Faced with this reality, governments and intergovernmental organisations are developing a range of new policy and regulatory initiatives. The EU Council, for instance, adopted the EU Law Enforcement Emergency Response Protocol (outlining procedures, roles, and responsibilities for key agencies tasked with responding to cross-border cyber-incidents), as well as a decision allowing the EU to impose sanctions in response to cyber-attacks. The European Parliament adopted the Cybersecurity Act, which sets out cybersecurity certification schemes for products, processes, and services. NATO started work on a Cyber Security Collaboration Hub to allow its member states to gather information and collaborate in an encrypted workspace.

In the Philippines, a Cybersecurity Management System Project was launched to protect government agencies from cybersecurity threats and attacks. The US National Security Agency announced that it will create a new Cybersecurity Directorate to prevent and eradicate foreign cyber-threats.

Another consequence of growing vulnerabilities in cyberspace is that law enforcement agencies (LEAs) are exerting pressure on tech companies to provide backdoor access to their platforms and user data (as part of cybercrime investigations, for example). Facebook’s plan to introduce end-to-end encryption in its messaging apps was met with demands from the USA, the UK, and Australia that the company either drop the encryption plans or allow backdoors for LEAs. To the displeasure of governments, Facebook eventually decided to move ahead with encryption.

We predict that there will be many more cyber-attacks in the months and years to come. It is also inevitable that questions will continue to be asked about the effectiveness of policy in ensuring protection from or developing responses to cyber-attacks, as well as about their potential human rights implications.

Towards a UN Convention on cybercrime?

In the last days of 2019, the UNGA initiated a process that could result in a UN convention on cybercrime. It adopted a resolution – proposed by Russia and 26 other countries – mandating the establishment of ‘an open-ended ad hoc intergovernmental committee of experts’ tasked with developing a ‘comprehensive international convention on countering the use of ICT for criminal purposes’.

Voting in the UNGA for this resolution was divided, with 79 votes in favour, 60 against, and 33 abstentions. 

This proposal for a UN cybercrime convention represents a challenge to the attempts of the Council of Europe (CoE) and its allies to evolve the 2001 Budapest Cybercrime Convention (adopted under the CoE framework, but ratified by more than 60 countries) towards a global convention. 

Follow relevant issues on the GIP Digital Watch observatory: Cybercrime | Critical infrastructure | Network security

Back to the list of developments

3. Data breaches increase in number, reaching a record high

According to the research firm Risk Based Security, 2019 was the ‘worst year on record’ for data breaches. Data leakages and public exposures of private personal information made the news regularly throughout the year.  

Data gathered by DiploFoundation showed that around 10 billion records were publicly exposed during 2019, representing an almost 100% increase over 2018. The largest number of data incidents were recorded in the healthcare industry, but the majority of exposures – roughly 3 billion records – originated in social media data breaches. 

Graph: Top 10 breached industries

Some of the major data breaches recorded in 2019 involved Facebook (compromising the privacy of about 500 million users), US bank Capital One (involving the data of more than 100 million US citizens and 6 million Canadian residents), Bulgaria’s National Revenue Agency (5 million citizens affected), online graphic design tool Canva (approximately 139 million users affected), and the Indian Department of Medical, Health and Family Welfare (exposing the medical records of more than 12.5 million pregnant women).

Data breaches have a number of possible causes. Sometimes the organisations processing data do not have adequate technical measures in place, rendering their systems vulnerable to cyber-attacks. Misconfigured or insufficiently secured databases, back-ups, and services are among the most frequent causes of incidents. The ‘human factor’ can be involved as well, with data losses, misdelivery, and other types of human error rendering data and systems vulnerable. 
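
One frequent scenario behind such incidents is cloud storage left open to the public. As a simple illustration, the following sketch checks AWS S3 bucket access control lists for grants that expose data to everyone (this assumes an AWS environment and the boto3 library, and flags whatever buckets exist in the audited account; it is an illustration of one misconfiguration vector, not a complete audit):

import boto3

# Group URIs whose presence in an ACL makes a bucket accessible to the public
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_buckets():
    s3 = boto3.client("s3")
    exposed = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GRANTEES:
                exposed.append((bucket["Name"], grant["Permission"]))
    return exposed

for name, permission in find_public_buckets():
    print(f"WARNING: bucket {name} grants {permission} to the public")

A real audit would also check bucket policies and public-access-block settings, as well as the misconfigured databases and back-up services mentioned above.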

Why is this significant?

Personal data is collected and processed in a wide range of sectors: healthcare, social security, tax collection, educational systems, online services, and the entertainment, banking, and travel and accommodation industries, to name just a few. Protecting data from breaches and unauthorised disclosure should be a priority for all data-processing entities. But as the growing number of data breaches shows, many such entities (whether public or private) fail to take appropriate measures and implement effective policies.

Many jurisdictions around the world have legal and regulatory frameworks in place for the processing of personal data, obliging organisations to meet certain standards and ensure the confidentiality and integrity of their systems and the data they process. Existing frameworks, such as the EU’s General Data Protection Regulation (GDPR), are providing inspiration and precedents to other countries and regions eager to strengthen their privacy rules. 

But rules and regulations are not in themselves enough: of equal or greater importance is the extent to which they are followed and enforced. In the end, privacy and data protection are matters of trust, and with every data breach that occurs trust in the digital realm is eroded. Organisational self-interest should also be a factor: users are less likely to use the services or products of entities with a record of breaches.

End-users have a responsibility as well: not only to protect their own data, but to hold the entities processing that data accountable for their shortcomings and failures. 

Follow relevant issues on the GIP Digital Watch observatory: Privacy and data protection | Network security | Cybercrime

Back to the list of developments

4. Privacy comes into focus as the number of investigations increases

Potential problems regarding privacy and data protection are not limited to breaches. The way in which companies themselves collect and use individuals’ data is a source of concern as well, and a large number of investigations were launched in 2019 by data protection authorities (DPAs) around the world.

A number of Internet giants were forced into the spotlight over the past year as a result of several data protection scandals. Facebook, Google, Apple, Microsoft, and Twitter, among others, are now under investigation for potential GDPR violations, and may face substantial fines. In Ireland alone, the Data Protection Commissioner opened multiple investigations into the privacy and data protection practices of Apple, Google, Facebook, and Twitter and their compliance with existing legal frameworks. The UK’s Information Commissioner’s Office launched similar investigations targeted at video sharing app TikTok and the face-editing photo app FaceApp. In the USA, Californian authorities confirmed an ongoing investigation into Facebook’s compliance with state privacy laws. 

The collection and use of personal data by smart speakers and assistants (such as Amazon’s Alexa, Apple’s Siri, and Google Assistant) also generated concerns and triggered action from DPAs in countries including Germany and Luxembourg.

Privacy investigations were also accompanied by several fines and financial settlements. Examples include a €1.5 million fine imposed on Facebook in Brazil following the Cambridge Analytica scandal; a €9.5 million fine on German ISP 1&1 Telecom; a €14.5 million fine on German real estate company Deutsche Wohnen SE; the Facebook/US Federal Trade Commission (FTC) settlement for US$5 billion in the Cambridge Analytica case; and the US$700 million settlement between the FTC and Equifax over a 2017 data breach. But many wonder whether these fines and settlements are significant enough to incentivise companies to materially improve their privacy practices.

Why is this significant?

These cases illustrate DPAs’ increasing confidence in and willingness to launch privacy investigations. This should motivate companies to prioritise protecting the privacy and personal data of their users. 

Over the course of 2019 we saw several Internet companies updating their privacy policies and launching new initiatives in this area. For example, Google introduced new privacy settings for its services, and opened a privacy-focused research centre in Germany. Facebook launched several Privacy Cafés in the UK, to advise users on how to change their privacy settings. Twitter announced enhancements to its global privacy policy and opened a Privacy Centre to host information about the company’s privacy and data protection work. 

What remains to be seen is whether and how these new policies and initiatives will result in real improvements in the protection of users’ privacy. Over the coming years, DPAs and courts around the world will continue to look into the privacy and data protection practices of companies. And a growing number of countries are adopting or exploring the adoption of new privacy laws, with examples ranging from Uganda and India to Kenya and the USA.

The right to be forgotten does not apply globally

Companies on which DPAs impose fines for failure to comply with privacy rules have the right of appeal. In 2016, the French DPA fined Google €100 000 for refusing to delist sensitive information from Internet search results globally upon request, but in an appeal to France’s supreme administrative court, the company argued that the right to be forgotten should not apply beyond the EU. The French court referred the case to the Court of Justice of the European Union (CJEU). 

In September 2019 the CJEU ruled that the right to be forgotten does not apply globally, as there is no obligation under EU law for a search engine operator to comply with the right to be forgotten beyond the EU territory. However, the court noted that operators are required ‘to put in place measures discouraging Internet users from gaining access from one of the [EU] member states to the links in question which appear on versions of that search engine outside of the EU’. 
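
In engineering terms, the ruling points towards geo-scoped delisting: removing the links on EU versions of the search engine, filtering them for users who appear to connect from an EU member state, and leaving them visible elsewhere. A simplified sketch of that logic follows (the member-state list is abbreviated and the geolocation helper is a stand-in for a real geo-IP database; none of this is Google's actual implementation):

EU_MEMBER_STATES = {"AT", "BE", "DE", "ES", "FR", "IE", "IT"}  # abbreviated

delisted_urls = {"https://example.com/old-article"}  # upheld delisting requests

def lookup_country(client_ip: str) -> str:
    """Stand-in for a lookup against a real geolocation database."""
    return "FR" if client_ip.startswith("192.0.2.") else "US"

def filter_results(results, edition_country, client_ip):
    # Delist on EU editions, and for EU-located users on any edition
    if edition_country in EU_MEMBER_STATES or lookup_country(client_ip) in EU_MEMBER_STATES:
        return [url for url in results if url not in delisted_urls]
    return results  # outside the EU, the links remain visible

print(filter_results(["https://example.com/old-article"], "US", "192.0.2.10"))
# -> [] : even on a non-EU edition, a request geolocated to the EU is filtered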

The decision was welcomed by Google and human rights organisations as a win for global freedom of expression. Some commentators also saw it as proof that the EU cannot extend the applicability of its legislation beyond its territory.

Follow relevant issues on the GIP Digital Watch observatory: Privacy and data protection | Right to be forgotten

Back to the list of developments

5. Internet shutdowns threaten online freedoms

A number of high-profile Internet shutdowns took place in 2019, in which governments restricted access to the Internet either by preventing access to certain services (such as social media and messaging apps) or by making the Internet itself completely unavailable on a temporary basis.

More than 15 cases of Internet restriction were recorded in Ethiopia alone during 2019, leading the UN Special Rapporteur on the right to freedom of opinion and expression, David Kaye, to urge authorities in the country to refrain from practices that limit citizens’ capacity to exercise their human rights. 

Access to the Internet – via both mobile and fixed lines – was also disrupted in Iran amid protests against rising fuel prices. Authorities justified the measure as necessary to keep the nation safe. Several Internet disruptions were reported in Iraq as well during a series of anti-government protests. A temporary restriction of access to services such as Facebook, Instagram, Twitter, and WhatsApp was registered in the southern part of Turkey in October, in the wake of the military operations in nearby Northern Syria. 

A major Internet shutdown by the Indian authorities was registered in the crisis-prone region of Kashmir. UN human rights experts described the measure as ‘a form of collective punishment of the people’ living there, and urged the Indian government to bring it to an end. Other notable Internet restrictions were reported in Algeria and Russia, and Indonesia was urged to end Internet shutdowns in the Papua and West Papua provinces. 

These and other cases reported during the year indicate that Internet shutdowns are becoming increasingly common, and while most are partial and brief, others can be drastic and of significant duration. The Sudanese population was cut off from the virtual world for more than a month, for instance, and Chad faced the longest Internet shutdown ever recorded, lasting more than a year.

Why is this significant?

Governments that impose Internet restrictions do so for a wide range of reasons and in diverse contexts. Some restrictions occur during conflicts and mass protests. Others are introduced prior to, during, or after elections, and even during national school examinations. 

Despite frequent official claims that Internet shutdowns are needed to maintain stability or safeguard national security, such actions are often used to mask political instability and dissent, thus affecting democratic processes and leading to the isolation of certain regions and populations.

Internet blackouts also have grave direct and indirect consequences for human rights and are frequently criticised by the international community on these grounds. In 2016, the United Nations Human Rights Council condemned ‘measures to intentionally prevent or disrupt access to or dissemination of information online’ and called on all states ‘to refrain from and cease such measures’. Similar calls are regularly made by UN bodies and other organisations such as Internet Without Borders, Freedom House, Access Now, and the Committee to Protect Journalists. 

Internet restrictions – be they partial or total – have economic implications as well. They prevent businesses from conducting activities and negatively impact foreign investments, employment, productivity, and sales. A study conducted by the Brookings Institution estimated the annual cost of Internet shutdowns worldwide at US$2.4 billion. The Cost of Shutdown tool shows that in Venezuela alone a day without the Internet results in losses of approximately US$400 million, and that in India the figure is around US$1 billion.
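
Such estimates are essentially back-of-envelope calculations that scale the annual value of a country’s Internet-enabled economy down to the days (and share of the network) affected. A sketch in that spirit, where the formula is a simplification and every figure is an illustrative placeholder rather than data for any real country:

def shutdown_cost(gdp_usd, digital_share, days, network_share=1.0):
    # Daily output of the Internet-enabled economy, scaled by the shutdown's
    # duration and by the share of the network actually taken offline
    daily_digital_output = gdp_usd * digital_share / 365
    return daily_digital_output * days * network_share

# A hypothetical US$300bn economy whose Internet economy is 4% of output,
# hit by a three-day nationwide shutdown:
print(f"US${shutdown_cost(300e9, 0.04, 3) / 1e6:.0f} million lost")  # ~US$99 million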

The map illustrates the state of Internet outages worldwide. The mapping is based on data gathered from NetBlocks and Access Now.

 


Follow relevant issues on the GIP Digital Watch observatory: Access | Freedom of expression

Back to the list of developments

6. Tackling online violent extremism remains a priority

The online distribution of content by terrorists and violent extremists remained a key concern for governments in 2019. Following incidents such as the Christchurch attack and the Halle shooting, governments increased the pressure on tech companies to institute new solutions to combat the spread of such content. 

In response, Internet companies committed to strengthening their policies and actions in this regard. Initiatives include enhanced terms of use, new ways for users to report harmful content, continued investment in detection and removal technologies, and the introduction of checks on live-streaming. They also emphasised the importance of educational efforts to help users understand the implications of terrorist and extremist content online.  

Public-private initiatives have also been launched: examples include the partnership between Facebook and the UK’s Metropolitan Police (aimed at helping the company develop AI algorithms to detect and stop the online streaming of violent attacks), and the November 2019 action in which European LEAs and online service providers disrupted the online activities of ISIS.

In the public policy realm, a law passed in France forces Internet companies to remove harmful content from their platforms within 24 hours of its being identified as such. Australia introduced stricter intermediary liability rules, making it illegal for social media platforms to fail to remove violent content. Canada launched a Digital Charter to protect Canadian citizens online and created a National Expert Committee on countering radicalisation to violence. New Zealand and Germany also announced plans for new measures to combat violent extremism online.

Among 2019’s most prominent new initiatives at an international level were the Christchurch Call, the handbook for online counter-terrorism investigations launched by INTERPOL, and the G20 statement on preventing the exploitation of the Internet for terrorism and violent extremism.

Why is this significant?

Curbing the spread of violent extremist content online, both now and in the future, is imperative. However, although self-regulatory measures on the part of tech companies and new laws and regulations on that of governments and international organisations may be promising, they also bring challenges and risks. 

The fact that content may be posted, distributed, and consumed across a range of jurisdictions makes it difficult to co-ordinate efforts to block access to or remove such content and, where appropriate, implement punitive legal measures. Moreover, content blocking and removal can be controversial, and questions in this regard include: how effective is it to block access to a resource hosting violent content when that content can easily be moved to another location? How can we balance counter-extremist actions with the right to freedom of expression? There is a fine line between protecting safety and security and promoting online censorship. This concern was highlighted by David Kaye, UN Special Rapporteur on the right to freedom of opinion and expression, who argued that ‘violent extremism’ could be used by governments as the ‘perfect excuse’ to limit freedom of expression. 

The right balance in content policy between protecting freedom of expression and countering genuine abuses will only be found through continued dialogue between the human rights and security communities.

Implementing the Christchurch Call

In the wake of the Christchurch attack of March 2019, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron launched the Christchurch Call. Supported by over 40 countries, several intergovernmental organisations, and multiple tech companies, the Call ‘outlines collective, voluntary commitments from governments and online service providers intended to address the issue of terrorist and violent extremist content online’.

In the second half of 2019 several steps were taken towards implementing the Call. The Global Internet Forum to Counter Terrorism (GIFCT) will become an independent organisation that will drive and co-ordinate the tech sector’s efforts to fulfil the Call’s commitments. In addition, a crisis response protocol was established to co-ordinate and manage governments’ and tech companies’ joint actions to police online terrorist activity in the future. Other initiatives include the elaboration of the Countering Violent Extremism Toolkit, and the enhancement of the database established by the tech industry in 2016 to share hashes (unique digital identifiers) for terrorist content that has been identified and removed. 
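
The hash database rests on a simple principle: once one company identifies and removes a piece of terrorist content, it shares the content’s hash so that others can block re-uploads without exchanging the content itself. A minimal sketch of the matching step (real deployments typically use perceptual hashes, which survive re-encoding and cropping, rather than the exact SHA-256 used here; the shared set below is a stand-in, not the actual database):

import hashlib

# Hashes contributed by participating companies (illustrative placeholder)
shared_hashes = {hashlib.sha256(b"previously removed video bytes").hexdigest()}

def screen_upload(file_bytes: bytes) -> bool:
    """Return True if the upload matches content already identified and removed."""
    return hashlib.sha256(file_bytes).hexdigest() in shared_hashes

if screen_upload(b"previously removed video bytes"):
    print("match: block the upload and queue it for human review")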

Follow relevant issues on the GIP Digital Watch observatory: Violent extremism | Content policy

Back to the list of developments

7. Internet companies take new measures to address online misinformation

Online misinformation, especially in the context of elections and political campaigns, is an issue on which Internet companies have been under particular pressure from governments. In response, several companies have introduced new measures and policies on political advertising and the distribution of political content.

Twitter took a significant step when it banned the paid promotion of political content (defined as ‘content that references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome’). Some welcomed the initiative as an encouraging step towards a reduction in online misinformation and as a possible model for other companies to follow. Others described it as ‘unnecessarily severe and simplistic’, arguing that it may disadvantage challengers and political newcomers and that it allows Twitter to decide what is political speech and what is not.

Google took a less dramatic step. Election ads on its platforms can now only be targeted using general data (age, gender, and general location), which will make it more difficult for advertisers to engage in microtargeting, a practice that allows ads to be directed towards very small and specifically defined groups of people.
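
Enforcing such a rule amounts to whitelisting a handful of coarse attributes and rejecting anything finer-grained. A sketch of what that validation might look like (the key names and the rejected example are hypothetical illustrations, not Google’s actual advertising API):

# Only coarse attributes may be used to target election ads
ALLOWED_ELECTION_AD_KEYS = {"age_range", "gender", "postal_region"}

def validate_targeting(is_election_ad: bool, targeting: dict) -> None:
    if not is_election_ad:
        return
    disallowed = set(targeting) - ALLOWED_ELECTION_AD_KEYS
    if disallowed:
        raise ValueError(f"election ads may not target on: {sorted(disallowed)}")

# Microtargeting on inferred political interests would be rejected:
try:
    validate_targeting(True, {"age_range": "35-44", "political_affinity": "swing"})
except ValueError as err:
    print(err)  # election ads may not target on: ['political_affinity']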

Facebook’s initial position was not to interfere with political advertising, in the interests of freedom of expression and to avoid ambiguity as to what constitutes political speech. Instead, the company focused on transparency measures: it required advertisers to confirm their identities, for example, and enabled users to see who has placed and paid for adverts. But in light of the steps taken by Twitter and Google and following an increase in public scrutiny, Facebook has reportedly started to consider measures to prevent microtargeting by political advertisers.

Why is this significant?

The spread of misinformation online can influence electoral processes, undermine trust in democratic institutions, and facilitate foreign interference in domestic affairs. Recent measures focused on political advertising may mitigate these problems, but several questions remain. Will Twitter’s ban on political advertising be more effective than Google and Facebook’s focus on how users are targeted by political ads? Should it be left to companies to determine the definition of ‘political’ content? And how can proportionality be achieved between preventing the spread of misinformation and protecting freedom of expression?

Moreover, new policies applying to paid political content may not be enough to address the issue of misinformation. Companies have been complementing these policies with other initiatives, in areas such as fact checking and media literacy for end-users.

As with other areas of digital policy, if the measures taken by companies do not prove effective, governments are ready to impose hard regulations (as in Singapore) or even cut access to online services (as was the case in Sri Lanka) to prevent the spread of misinformation.

Deepfakes 

Misinformation is not a new phenomenon, but technological progress has given rise to new methods for producing and distributing it. Deepfakes are one very notable example.

Deepfakes use machine learning and neural network technology to falsify images and footage, and can be deployed to discredit opponents in political campaigns or to damage personal reputations. The technology is becoming increasingly sophisticated and accessible, and tech companies and policymakers have started investigating possible solutions to limit its risks.

Google and Facebook, for example, have built collections of fake videos and are making them available to researchers developing detection tools. Twitter is working on a policy to combat deepfakes and other synthetic media on its platform. Researchers have also suggested the use of blockchain as a weapon in the fight against deepfakes. In the USA, the state of California criminalised the publication of false audio, imagery, or video in political campaigns, and China has introduced new regulations banning the distribution of deepfakes without proper disclosure that the content has been altered using AI.

Follow relevant issues and trends on the GIP Digital Watch observatory: Content policy | Fake news

Back to the list of developments

8. EU adopts Copyright Directive, but concerns remain

Following two years of intensive debate, the EU in 2019 adopted new copyright rules for the digital age in the form of the Directive on copyright and related rights in the digital single market (the Copyright Directive).

A source of controversy since it was first proposed by the European Commission, the directive continues to be contentious even after its adoption. According to the Commission, the new legislation is necessary to ensure that copyright rules are fit for the digital age, ensuring improved choice of and access to content online and across borders and creating a fairer online environment for creators and the press. But many critics have argued that the directive will have negative consequences for the right to freedom of expression and access to information.

Why is this significant?

The most heated debates over the Copyright Directive concern Article 15 (the so-called ‘link tax’, previously Article 11) and Article 17 (the upload filter, previously Article 13).

Article 15 was described by some commentators as creating a new form of copyright for text that is aggregated on platforms such as Google and Facebook and that contains more than a snippet (‘individual words or very short extracts’, in the directive’s language) from an article or other publication. The use of such text must be licensed from the publisher and a fee paid for the use (in our example, by the online platform). The directive also requires that the author of a work incorporated into a press publication be paid an appropriate share of the revenue received by that publication from online platforms or other information service providers. 

Article 17 requires an online service provider wishing to share content other than its own to obtain an authorisation (licence) from the rights holder before doing so, even if the distribution involves content uploaded by users. Failure to do so will leave the provider liable for unauthorised distribution, unless it undertakes its ‘best efforts’ to prevent such a situation. Given the enormous amount of content available online, Internet companies argue that the only way to ensure compliance with this article is through the use of upload filters. Even if the use of such filters is not explicitly required by the directive, it remains unclear whether alternatives are available. But the problems with filters are multiple and potentially serious: they can block legitimate uses of content, for example, and since they are expensive to implement they would create additional and unfair burdens for smaller companies. 
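
In its simplest form, the logic the directive is expected to push platforms towards looks like this: fingerprint each upload, look the fingerprint up in a rights-holder database, and publish only if there is no match or a licence covers it. The sketch below uses an exact hash for brevity, and the database contents are placeholders; real filters rely on perceptual or audio fingerprinting, which is precisely where parody and quotation can be misidentified:

import hashlib

# Fingerprint -> rights holder, as registered by rights holders (placeholder)
rights_db = {hashlib.sha256(b"copyrighted song bytes").hexdigest(): "MusicCo"}
licences = {"MusicCo"}  # rights holders the platform has agreements with

def handle_upload(file_bytes: bytes) -> str:
    fingerprint = hashlib.sha256(file_bytes).hexdigest()
    owner = rights_db.get(fingerprint)
    if owner is None:
        return "publish"             # no known copyrighted match
    if owner in licences:
        return "publish (licensed)"  # authorisation obtained from the rights holder
    return "block"                   # unlicensed: the platform would otherwise be liable

print(handle_upload(b"copyrighted song bytes"))  # publish (licensed)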

Supporters of the directive greeted it as a welcome step towards a strengthened position for rights holders that will ensure fair remuneration. On the other hand, critics pointed to several problems. To start with, each EU country is free to define ‘snippet’ as it wants (the word can refer to anything from three words to full sentences or entire paragraphs), which is likely to lead to regulatory fragmentation across the EU. The directive also gives publishers the right to ban others from linking to their publications, and this could result in various forms of censorship. Moreover, since it leaves open the possibility of large companies licensing the right to use snippets to each other, the directive could increase market concentration and have negative effects on competition. 

The requirement for service providers to monitor and restrict user-generated content before it is uploaded could impinge on freedom of speech and negatively impact the diversity of content online. The directive does make exceptions for the use of copyrighted work for the purpose of caricature, parody, or pastiche, but it remains to be seen how such content will be identified by upload filters.

As member states start to transpose the directive it is likely that the CJEU will be called on to clarify unclear provisions and offer more legal clarity for those responsible for putting them into practice.

France and Google clash over copyright rules 

France was the first country to transpose the Copyright Directive into national legislation. Internet companies reacted swiftly, criticising the law on multiple grounds, including its failure to strike a fair balance between the free circulation of information and copyright protection. 

Google was among the most vocal critics. The company made it clear that, as soon as the law comes into force, its news feed will only display previews and thumbnail images from news stories, unless the publishers agree to provide the full stories for free: ‘We believe that Search should operate on the basis of relevance and quality, not commercial relationships. That’s why we don’t accept payment from anyone to be included in organic search results and we don’t pay for the links or preview content included in search results.’

French authorities, in turn, criticised Google for ‘openly ignoring’ EU rules. They also announced plans to create a separate regulatory body that would have competence over the activities of Google and other online platforms.

Follow relevant issues on the GIP Digital Watch observatory: Intellectual Property Rights

Back to the list of developments

9. Governments move towards new digital tax rules

Governments around the world are concerned that tech giants such as Apple, Google, and Facebook are not paying their fair share of tax in the countries where they operate. In 2019, efforts to address such concerns resulted in new taxes imposed at the national level, and renewed commitments from international organisations to tackle the issue.

In March, the EU put on hold its plans for an EU-wide digital tax following opposition from Ireland and the Nordic countries. Instead, the European bloc will wait for the Organisation for Economic Co-operation and Development (OECD) to complete its work on global tax rules for the digital economy; but if such rules are not agreed upon by the end of 2020 the EU will revisit its own plans. Meanwhile, several EU countries concerned that the OECD approach is too slow took steps towards developing their own digital taxation rules. France introduced a 3% tax on revenues earned in the country by digital giants (attracting strong criticism from the USA). Digital taxes were also introduced in Austria, while Belgium, the Czech Republic, Italy, Slovenia, Spain, and the UK announced their intention to introduce similar measures. 

Outside the EU, countries including Canada, Egypt, Turkey, and Uganda have also considered digital service taxes, while Indonesia, Kenya, Malaysia, and Mexico have already introduced them.

At the intergovernmental level, the OECD proposed a two-pillar approach to digital taxation. In this approach, companies would be subject to taxation in the countries where their products or services are sold (even if they do not have a physical presence there). If companies were to continue to find ways to register their profits in low-tax jurisdictions, countries could apply a global minimum tax rate. It is hoped that a consensus-based solution to these issues will be agreed upon by the end of 2020. 
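
The second pillar’s arithmetic can be illustrated with a top-up calculation: when profit is booked in a jurisdiction taxing below the agreed minimum, the shortfall is collected elsewhere. In the sketch below the 12.5% minimum rate and all figures are purely illustrative; no actual rate had been agreed as of 2019:

MINIMUM_RATE = 0.125  # assumed for illustration only; still under negotiation

def top_up_tax(profit: float, local_rate: float) -> float:
    # Tax the shortfall between the local rate and the global minimum
    return max(0.0, (MINIMUM_RATE - local_rate) * profit)

# US$1bn of profit booked in a jurisdiction with a 5% corporate rate:
print(f"US${top_up_tax(1e9, 0.05) / 1e6:.0f} million top-up")  # US$75 million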

The final days of 2019 brought a last-minute extension of the World Trade Organization (WTO) Moratorium on Customs Duties on Electronic Transmissions, which has been in place since 1998. The Moratorium is now extended until June 2020, when it will be revisited at the 12th WTO Ministerial Conference in Nur-Sultan, Kazakhstan.

Why is this significant? 

There seems to be a broad consensus among governments and international organisations that existing tax rules are inadequate for today’s digital economy. Large tech companies operate across borders and generate revenues in countries where they do not have a legal presence. The complexity of their corporate structures and business models makes both the attribution of profits and co-ordination between authorities in taxing those profits a challenge. 

New global rules are needed to meet those challenges, but reaching a compromise on such rules is not straightforward. After failing to agree on an EU-wide digital tax, EU member states are struggling to agree on a common position on the OECD proposal, and the OECD process as a whole might take longer than expected. As we saw in 2019, however, some countries that are unwilling to wait any longer are taking unilateral action by imposing their own rules at a national level – even if only on an interim basis until global rules are adopted. France, for example, agreed (after strong US criticism) to repay tech companies the difference between its national tax and any tax that results from the mechanism being developed by the OECD. But despite the disparateness of current efforts, what is certain is that taxation of the digital economy is, and will likely remain, very high on governments’ agendas.

What will be the impact of new digital taxes? The obvious end-goal is an increase in tax revenues, but the scale of any increases will depend on the extent to which tech companies simply comply with the new rules. Some may decide instead to cease the provision of services in countries where they deem digital tax rates disproportionate (as extreme as this measure may seem). Also, new taxes may negatively impact end-users: if companies have to pay significantly more tax and to do so in a greater number of jurisdictions, they may pass some of this burden on to their customers by introducing a fee for services that are currently free or by increasing the prices of other services. 

The map provides an overview of digital tax regulations worldwide. It indicates which countries have already adopted legislation on digital taxes and which have announced an intention to do so. The mapping is based on information from various sources, including the Tax Foundation, Financial Times, MNE Tax, and others.

 


Follow relevant issues on the GIP Digital Watch observatory: Taxation | Digital business models

Back to the list of developments

10. Antitrust investigations target tech giants 

In 2019, competition authorities in the EU, the USA, Australia, Japan, and other countries showed a growing interest in the market power of large tech companies. Investigations were opened into Amazon, Apple, Google, and Facebook to determine whether these companies are abusing their increasingly dominant positions in certain markets.

Amazon was placed under scrutiny by EU competition officials, who are looking into whether the company is using sensitive sales data gathered from other retailers on its platform to boost its own activities and, in doing so, give preferential treatment to its own products. In the USA, the House of Representatives’ Antitrust Panel is investigating potential competition abuses by Amazon, while in France the company received a €4 million fine for abusive contractual clauses imposed on retailers.

For Apple, antitrust investigations mostly relate to the company’s App Store and the control Apple exerts over it. Spotify submitted a competition complaint to the European Commission against Apple for giving preferential treatment to its own music streaming service. This issue is also being investigated by the US Department of Justice and the Dutch competition authority. Similar complaints were filed by the providers of parental control apps in the EU, Russia, and the USA.

Google was fined €1.49 billion by the European Commission for abusing its dominant position in the online advertising sector. This is the third antitrust fine for Google by the EU in the last three years. In the USA, a bipartisan coalition of attorneys general launched an investigation into the company’s potential anti-competitive behaviour. And India’s Competition Commission is looking into whether Google has abused its position in the mobile operating systems market.

The US FTC started an investigation into Facebook’s acquisitions of more than 70 companies, apps, and start-ups (including Instagram and WhatsApp) over the last 15 years. The company is also being investigated by a group of attorneys general over its acquisition practices and over concerns that those practices endanger consumer data, reduce the quality of consumer choice, and increase the price of advertising. 

Why is this significant?

There is no question that the tech giants are under growing scrutiny from public authorities. It is unlikely, however, that the investigations currently being conducted by various competition authorities will be enough to bring about significant changes in these companies’ behaviour and practices.

One reason is that antitrust investigations commonly take years to complete, with the result that by the time rulings arrive they are likely to be obsolete, as companies continually change the way they operate. Even where investigations result in fines, those too are expected to have little effect.

As the FTC has suggested, a more effective solution could be the imposition of robust remedies obliging tech companies to act in line with competition rules. The UK competition authority apparently agrees: it launched a public consultation on possible measures such as breaking up large companies or obliging them to introduce interoperability features to allow rivals to compete. 

New antitrust rules, better adapted to the challenges of the digital economy, could also be a way forward, at least according to the European Commission and G7 Finance Ministers.

Towards a new EU competition policy?

A report prepared by the European Commission’s Directorate-General for Competition suggests that ‘competition policy should evolve to continue to promote pro-consumer innovation in the digital age’. Possible measures to ensure that this is the case include:

  • Empowering competition authorities to forbid tech company practices aimed at reducing competition, even where consumer harm cannot be precisely measured. To argue that a practice was not by nature anti-competitive, the company in question would have to demonstrate its clear consumer welfare benefits.
  • Improving the way in which market power is measured (e.g. by assessing all the ways in which a company is protected and can protect itself from competition).
  • Considering imposing on the investigated companies the burden of proof for demonstrating the pro-competitiveness of their conduct.
  • Requiring companies in dominant market positions to ensure data interoperability to create opportunities for smaller companies seeking to develop complementary services.
  • Imposing stricter controls on acquisition practices, especially the acquisition of start-ups by large companies.

Follow relevant issues on the GIP Digital Watch observatory: Digital business models | E-commerce and trade | Consumer protection

Back to the list of developments

11. Online rental businesses face pushback

The business models of the new sharing economy have received constant criticism, especially from the traditional businesses with which companies in the sharing economy compete. Representatives of the taxi industry have lodged numerous complaints against Uber, and those of the traditional hotel and accommodation industries have grown more vocal in calling for regulatory action against platforms such as Airbnb and booking.com. Public authorities themselves have started to look more carefully at how these platforms operate and at their implications for traditional businesses, homeowners, tourists, and even cities.  

In Europe, ten major cities (Amsterdam, Barcelona, Berlin, Bordeaux, Brussels, Kraków, Munich, Paris, Valencia, and Vienna) have demanded help from the EU to address the ‘problems’ posed by Airbnb and other holiday rental websites. Public authorities argue that the explosive growth of these platforms has adverse effects on neighbourhoods, which are increasingly affected by ‘touristification’ and nuisance. In addition, when homes can be lucratively rented to tourists, they disappear from the housing market, causing a shortage of homes in these cities and pushing up prices. In the face of these challenges, local authorities believe that they should be able to introduce their own regulations, adapted to local circumstances. And they call for new obligations to be imposed on online platforms, forcing them to provide information about their businesses to local authorities.

Beyond the impact on traditional businesses and local companies, the rise of online rental platforms has also led to consumer protection concerns. Authorities in the EU, for instance, have demanded that platforms comply with consumer protection rules and be transparent in the information they provide so as not to deceive users. In July 2019 the European Commission announced that, following investigations of and negotiations with the company, Airbnb had agreed to improve and clarify the way it presents accommodation offers to consumers.

Why is this significant?

Faced with concerns such as these, authorities all over the world have started taking various measures. In some cases, limitations have been placed on the number of days homeowners can rent out their properties via online platforms. In other cases, Airbnb and similar platforms have been asked to provide local authorities with information about hosts (including names and addresses). Some cities require hosts that use online rental services to register directly with the authorities. Examples of cities that have introduced or are considering such measures include New York, London, Dublin, Reykjavík, Singapore, Tokyo, and San Francisco.

One key question connected to the operation of online rentals businesses concerns their status. Are they mere providers of online services, or should they be treated as accommodation providers? It is an essential question, since the answer to it will determine the rules and regulations with which these platforms have to comply.

In a landmark decision issued in December 2019, the CJEU ruled that Airbnb is an information society service and not a property agent. In justifying its decision, the court noted that Airbnb only provides a tool for presenting and finding accommodation for rent, and for facilitating the conclusion of future rental agreements. It also noted that the property owners are able to access other avenues for renting their residences and that the company does not limit the rent charged by those owners.

Home Sharing Clubs: lobbying for fair regulations

It is no surprise that Airbnb, booking.com, and similar platforms are persistent advocates for rules and regulations that are as ‘light’ as possible. But they are not alone.  

Airbnb, for instance, is supporting Home Sharing Clubs established by homeowners in cities where the company operates. These clubs – which act independently from Airbnb – are a way for hosts to co-ordinate their advocacy efforts in support of ‘fair home sharing laws in their communities’. As of January 2020, over 400 clubs were active around the world, with more than 3000 members in total.

Follow relevant issues on the GIP Digital Watch observatory: Digital business models

Back to the list of developments

12. Facebook’s cryptocurrency Libra attracts regulatory concerns 

In June 2019, Facebook announced the official launch of its cryptocurrency Libra, expected to go live in 2020. In October, the Libra Association – established in Geneva to ensure the governance of Libra – acquired a formal governance structure with the creation of the Libra Council, which includes representatives of 21 companies including Lyft, PayU, Spotify, Uber, Vodafone, and Facebook’s subsidiary Calibra.

Since its launch Libra has attracted significant attention from financial regulators. In the USA, Calibra head David Marcus and Facebook CEO Mark Zuckerberg testified before Congress. Both emphasised that Libra would not compete with sovereign currencies, but rather act as a payment system. Moreover, Libra would not be released until fully compliant with US and global financial rules. 

France and Germany expressed concerns that Libra might pose serious risks to consumers, to financial stability, and even to the monetary sovereignty of European states. France clearly indicated that it intends to halt the development of Libra in Europe; the cryptocurrency may also face hurdles in India, Japan, and Russia. In September, the Libra Association faced questions during a Bank for International Settlements (BIS) meeting, attended by representatives of 26 central banks.

Although the Libra Association is committed to working with financial authorities on ‘achieving a safe, transparent, and consumer-focused implementation’ of Libra, the scrutiny has had consequences. Just before the creation of the Libra Council several payment providers (PayPal, Visa, Mastercard, Stripe, Mercado Pago, and eBay) left the project, likely due to a reluctance to be part of Libra before regulatory clarity is achieved. 

Why is this significant?

Libra might become a game changer for the crypto industry. It could make fast, low-fee transactions and programmable money accessible to many more people, and thus promote financial inclusion. But if it succeeds in building a user base in developing countries, Libra is likely to serve as a reserve or back-up currency for many users, which could weaken the already struggling fiat currencies in some of these countries.

Some also claim that Libra could give Facebook increased access to and control over user data, including information of a financial nature, although the company was quick to deny this claim, explaining that Calibra will not share user data with Facebook or other third parties without users’ consent.

Other concerns are that Facebook’s vast user base could allow Libra to establish a monopoly in the market, and that the cryptocurrency might stifle the development of other blockchain and FinTech-based services that offer similar solutions.

These concerns will keep Libra under considerable scrutiny, as will the risks that Libra could pose to financial stability and the possibility of its misuse in criminal financial activities. In this context, Libra is expected to roll out in blockchain-friendly jurisdictions first before working its way up to the global level.

Cryptocurrencies: the main developments in 2019

Beyond Libra and the debates it generated, 2019 also brought other new cryptocurrency developments, ranging from regulatory efforts to plans for national cryptocurrencies.

Follow relevant issues and trends on the GIP Digital Watch observatory: Cryptocurrencies | Libra cryptocurrency

Back to the list of developments

13. Huawei controversy has far-reaching implications

The trade war that has raged between China and the USA in recent years took on a new dimension in 2019 with the escalation of the Huawei controversy.

The controversy started in 2018 when the USA banned the use of Huawei products in government networks over security concerns. The company challenged the ban in court, arguing that ‘the US government has provided no evidence to show that Huawei is a security threat’.

In May 2019, US President Donald Trump issued an executive order banning the export of US technology to ‘foreign adversaries’ unless a special licence is issued by the Department of Commerce (DoC). Moreover, the DoC added Huawei to its Entity List – a trade blacklist that forbids US entities from trading with the listed foreign entities without government approval. Several companies, including Google, Intel, and Qualcomm, immediately announced they would no longer sell their technologies to Huawei. Huawei responded that the measures would harm jobs and the overall industry and economy in the USA, and described the ban as unfair treatment.

By the end of May, however, the DoC issued a temporary licence allowing US companies to continue conducting business with Huawei. This licence, initially meant to apply for 90 days, was subsequently extended in August and November.

In a parallel development, in November the Federal Communications Commission barred the use of its Universal Service Fund to purchase equipment and services from Huawei and ZTE, a decision that Huawei challenged in court. In December, the House of Representatives passed a bill preventing the government from buying telecom equipment from Huawei and other companies deemed to be national security threats. The bill must also be approved by the Senate before it can come into force.

Why is this significant?

If applied as anticipated, the ban on US companies selling technology to Huawei could have multiple economic and geopolitical implications, especially in the medium and long term.

For instance, if Huawei devices can no longer use Google’s proprietary Android operating system (OS), this is likely to affect Huawei’s markets – particularly Europe and the USA – where customers are mostly dependent on Google services. In the long run, however, this could turn into a win for the Chinese company: as it goes ahead with developing its own OS and products, new users in developing countries may opt for more affordable Huawei devices running the new OS.

The situation is similar for companies such as Intel, Qualcomm, and Broadcom, on whose server and switching chips, processors, and modems Huawei has relied heavily. While Huawei may initially be hurt if prevented from using such products, the restrictions would also push the company to accelerate the development of its own equipment. In the long run, this could help Huawei secure dominance in global mobile and telecom markets.

The restrictions raise other questions as well: Will US companies and the US economy at large be impacted as a result of cutting ties with Huawei, one of the world’s largest ICT vendors? If so, how? Will the Huawei case serve as a cautionary tale for companies and states, prompting them to safeguard their supply chains domestically and thus leading to a fragmentation in global technological innovation?

Huawei’s 5G equipment deployed around the world

Another key controversy surrounding Huawei in 2019 concerned the company’s 5G equipment. Concerns have been raised that the technology could be misused by the Chinese government to spy on other countries, and reactions have varied around the world.

The company concluded 5G agreements with the African Union, Russian telecom provider MTS, and Malaysian telecom firm Maxis. United Arab Emirates telecoms company du indicated it had no concerns over Huawei 5G equipment, which is already in use in the country.

India decided to allow Huawei to participate in 5G deployment projects, together with other international telecom companies. A similar decision was taken in Hungary. In Switzerland, Huawei and telecom company Sunrise concluded a deal on opening a joint 5G research centre.

In Germany, Telefonica Deutschland chose Huawei and Nokia to build its 5G network, although the government is still working on rules governing equipment suppliers. Canada continues to consider whether to allow the deployment of Huawei 5G technologies at a national level. The situation is similar in the UK, although comments made by Prime Minister Boris Johnson in December 2019 seemed to hint at a ban.

Despite vigorous lobbying from the USA, it seems that only a few countries – including Australia, Japan, and New Zealand – have so far banned the deployment of Huawei 5G equipment.

Follow relevant issues on the GIP Digital Watch observatory: Telecommunications infrastructure | Network security

Back to the list of developments

14. As AI technology evolves, more policy initiatives emerge

2019 brought many new developments in the field of AI. In the realm of technological innovation, we saw scientists announce a number of exciting applications, from using AI to develop vaccines to employing machine learning to decode speech directly from the human brain. Private companies and public institutions alike put AI to increased use in areas such as content policy, industrial applications, judicial and law enforcement systems, and so on.

In the policy sphere, numerous countries continued to work on developing and implementing AI strategies and plans. In the USA, the new American AI Initiative was complemented by sectoral AI plans and initiatives in defense, air force, energy, standardisation, and research and development. New strategies and action plans were adopted in the Czech Republic, Malta, the Netherlands, and Russia, among others. Several countries allocated funds to areas such as AI research and development, AI for health, and AI-focused higher education. AI initiatives also took off at the regional and local levels; examples include the AI Commission established in the state of New York and the AI plan launched by the Flemish region in Belgium.

At the intergovernmental level, some of the most prominent developments included the AI recommendations adopted by the OECD Council (also endorsed by G20), the G7 commitment to the responsible development of AI, and a new Ad Hoc Committee on AI established by the CoE.

Why is this significant?

While new developments are exciting, innovation in AI continues to cause concern. There is a risk that AI may widen existing digital divides and socioeconomic inequalities. For example, the 2019 edition of the Government AI Readiness Index showed that the best performing region in terms of AI readiness is North America, while Africa and the Asia-Pacific region are the worst performing, suggesting that inequalities between developed and developing nations are likely to persist.

As emphasised during the 2019 Internet Governance Forum (IGF) meeting, these and other risks associated with developments in AI require a wide range of measures in response. AI systems should embody core principles such as inclusivity, transparency, explainability, and trustworthiness. These principles, in addition to well-established human rights frameworks, should provide human-centric ‘guardrails’ for the development and use of AI. 

While self-regulation by tech companies might help address some of these challenges, there is an increasing number of voices calling for clear legal obligations to make companies more responsible. We may see legislative initiatives in 2020. The European Commission, for example, has already announced plans to develop ‘legislation for a co-ordinated European approach on the human and ethical implications of AI’.

The humAInism project

As we consider our approaches to the regulation and oversight of AI technologies, we should ask ourselves: Can AI itself draft a social contract for the AI era? DiploFoundation’s humAInism project is set to answer this question.

The project will focus on:

  • Using AI to better understand digital policy complexities, such as the interplay between the economic, technological, human rights, and security aspects of data and AI policies.
  • Using AI to extract humanity’s wisdom on ethics, free will, human dignity, and other pressing issues in the digital era. To accomplish this, it will process a corpus of human knowledge codified in writings from ancient manuscripts to the latest books.

Follow relevant issues and trends on the GIP Digital Watch observatory: Artificial intelligence | Governmental AI initiatives 

Back to the list of developments

15. Facial recognition generates human rights concerns

Facial recognition technology (FRT) sparked controversy throughout 2019, as its use increased among tech companies, public authorities, law enforcement agencies (LEAs), building owners, schools, airports, banks, and others.

FRT uses algorithms and machine learning to identify a human face from a photo or video. For example, FRT systems are used by LEAs to identify individuals by comparing their images with a database of known faces. In certain cases, such as Amazon’s Rekognition, the technology is also able to recognise facial expressions and emotions such as fear and happiness.
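
As a rough illustration of the two operations just described – matching a probe image against a known face and inferring facial attributes – here is a minimal sketch using the Amazon Rekognition API named above. The S3 bucket and file names are hypothetical placeholders, and the similarity threshold is arbitrary.

```python
# Minimal sketch: face matching and attribute detection with Amazon Rekognition.
# Bucket and object names are hypothetical; credentials come from the usual
# AWS configuration (environment variables, ~/.aws/credentials, etc.).
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

# 1. Compare a probe image against a known face – the one-to-one matching
#    that, scaled across a database of known faces, underpins LEA use.
match_resp = client.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "known.jpg"}},
    SimilarityThreshold=90,  # only report matches scored at 90% or above
)
for match in match_resp["FaceMatches"]:
    print(f"Match with similarity {match['Similarity']:.1f}%")

# 2. Detect facial attributes, including inferred emotions such as fear
#    and happiness, as noted in the text.
faces_resp = client.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    Attributes=["ALL"],
)
for face in faces_resp["FaceDetails"]:
    for emotion in face["Emotions"]:
        print(emotion["Type"], round(emotion["Confidence"], 1))
```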

Like any other technology, FRT has its limitations and risks. Many FRT systems rely on algorithms trained largely on photos of white male faces. This brings a risk of bias and discrimination in decisions based on FRT, especially when the technology is used by LEAs. The technology also generates privacy concerns. When it is used by social media companies or in street cameras, for example, questions arise about user awareness and consent, data processing practices, security, and data protection. When used extensively by LEAs, facial recognition could also lead to mass surveillance.

Why is this significant?

Concerns about the impact of FRT on human rights have sparked debates on the regulation of the technology. In the USA, multiple state and municipal authorities have banned or are considering banning LEAs from using the technology. In the UK, the Information Commissioner has called on the government to introduce a statutory and binding code of practice on the deployment of live FRT by LEAs. In Sweden, on the other hand, the national data protection authority has allowed the police to use FRT to help identify criminal suspects, having deemed that sufficient privacy safeguards were in place.

Courts are also being asked to step into this debate. For example, legal actions were brought against the use of FRT in street cameras and by city authorities in both the UK and Russia. A lawsuit launched by US citizens in 2015 against Facebook’s use of facial recognition continued in 2019. In the UK, a court ruled that the use of facial recognition by the police was lawful and in compliance with privacy rules.

Tech companies have different positions on the use of FRT. Amazon continued to develop its Rekognition system and refused to stop selling it to LEAs. Google said it would not sell general-purpose FRT before addressing tech and policy questions, while Microsoft called for regulations to govern the use of the technology.

Looking ahead, it is likely that companies will continue to develop and improve their FRT. What is less certain is whether future improvements will make the technology sufficiently safe to allay current concerns. At the moment, trust in FRT is hanging in the balance, and we are likely to see many more efforts to regulate its use.

Growing concerns about state surveillance

The risk of state surveillance is one of the concerns surrounding the use of FRT. But this is not the only technology generating such concerns. 

A report by the Carnegie Endowment for International Peace showed that at least 75 countries are actively using AI for surveillance purposes, including via FRT, smart city platforms, and smart policing tools. The AI Now Institute pointed to a surge in the integration of public and private surveillance infrastructure, and called for legislators to introduce regulations on transparency, accountability, and oversight regarding the use of such infrastructures. As the UN Special Rapporteur on extreme poverty and human rights, Philip Alston, pointed out in his 2019 report, even attempts to create digital welfare states can lead to surveillance. Beyond the declared purposes of using technologies to improve citizens’ wellbeing, digital technologies are often used to ‘automate, predict, identify, surveil, detect, target, and punish’.

The implications of state surveillance are serious. But the problem is less about the technology itself, and more about how it is used by governments. Are surveillance technologies used lawfully? Are there sufficient safeguards in place to avoid mass surveillance? How can a proper balance between protecting human rights and ensuring the safety and security of citizens be achieved?

Follow relevant issues on the GIP Digital Watch observatory: Artificial intelligence | Privacy and data protection

Back to the list of developments

16. Future of work fuels more debates

The future of work in the context of the expanding digital economy and the fourth industrial revolution fuelled significant debate in 2019. Issues included the impact of technological progress (in areas such as automation and AI) on the job market, the status of gig workers, and the need to prepare future workers for the new realities of the digital society.

Despite general agreement that automation and AI will bring changes to the world of work, differences of opinion remain over what these changes will mean and the degree of disruption they will bring. On the more alarming side, recent reports and estimates indicate, for example, that one in five jobs will be replaced by technology within five years, and that up to 20 million manufacturing jobs will be lost by 2030. On a more encouraging note, others argue that new jobs will be created as a result of technological progress, compensating for those lost. And there is also optimism that, as repetitive tasks are taken over by technology, humans will be able to focus more on tasks that require their unique skills.

Controversy continued over the status of workers in the platform economy (also called the gig economy). Ride-hailing companies such as Uber and Lyft were in the spotlight, as authorities and courts around the world issued a range of rules and decisions on whether drivers are employees or contractors. In the USA, California enacted new legislation giving more rights to gig workers, and Uber drivers were recognised as employees by a French court on the grounds that they cannot choose clients or decide rates. At the other end of the spectrum, courts in Brazil and Belgium ruled that Uber drivers are independent contractors, because (among other things) they are free to choose their times of work. The National Labor Relations Board of the USA reached a similar conclusion.

Why is this significant?

The changing world of work requires measures to protect current and future workforces and ensure they can benefit from new business models and technological progress. When it comes to the gig economy, more legal certainty would be useful for both companies and workers, especially given the growing global patchwork of different rules concerning the status of Uber drivers and other gig workers. This is not an easy task for authorities, however, as they often have to balance the interests of companies against the rights and interests of individuals. The fact that workers themselves have differing views on whether they are employees or contractors complicates matters even further.

Countries also need to ask themselves whether they are ready to support their workforces in adapting to the changing labour market. The OECD, for instance, urged countries to start by scaling up and upgrading their adult learning systems, while also ensuring adequate public financing and tax incentives to support educational and training programmes. Some governments, such as those of Canada, Finland, and the UK, are already taking measures focused on training both the current workforce and young people for the future of work (e.g. by allocating funds for AI-focused higher education). Others, particularly in developing countries, may need international financial and technical support in adapting their educational and training systems to new realities. Beyond this focus on skilling and upskilling workers, other areas that may require government intervention include taxation, social security systems, and safety and health. Companies, too, have a moral duty to help their employees adapt to technological changes in the workplace.

California’s ABC test for gig workers

In September 2019, the state of California enacted Assembly Bill 5, intended to clarify the status of gig workers. The bill codifies a ruling issued in 2018 by the Supreme Court of California, which applied the so-called ABC test to determine whether gig workers can be considered independent contractors.

According to the ABC test, a worker is considered an employee by default, unless the hiring entity demonstrates that three conditions are met simultaneously: 

(a) the person is free from the control and direction of his/her employer in connection with the performance of the work; 

(b) the person performs work that is outside the usual course of the business of the employer; 

(c) the person is customarily engaged in an independently established trade, occupation, or business of the same nature as the work performed.

If the ABC test is not passed, workers have to be granted employment rights including minimum wage, paid time off, and sick leave. 
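
To make the logic concrete, here is a minimal sketch of the test expressed as a decision rule; the field names are illustrative rather than legal terms of art, and real classification of course turns on the facts of each case.

```python
# A sketch of the ABC test as a decision rule: a worker is an employee by
# default, unless all three prongs hold simultaneously.
from dataclasses import dataclass

@dataclass
class Engagement:
    free_from_control: bool       # (a) free from the hirer's control and direction
    outside_usual_business: bool  # (b) work falls outside the hirer's usual business
    independent_trade: bool       # (c) customarily engaged in an independent trade
                                  #     of the same nature as the work performed

def is_independent_contractor(e: Engagement) -> bool:
    return e.free_from_control and e.outside_usual_business and e.independent_trade

# Hypothetical ride-hailing driver: driving is the platform's core business,
# so prong (b) fails and the worker is an employee by default.
driver = Engagement(free_from_control=True,
                    outside_usual_business=False,
                    independent_trade=True)
print(is_independent_contractor(driver))  # False -> employee by default
```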

Follow relevant issues on the GIP Digital Watch observatory: Future of work | Artificial intelligence | Digital business models

Back to the list of developments

17. More digital identification programmes take off

Countries around the world are rolling out sophisticated digital identification systems. For several years, India’s Aadhaar system was considered the most sophisticated ID programme in the world and attracted significant attention. In 2019, new digital ID programmes appeared in both developed and developing countries.

Malaysia started working on a national system which will allow citizens to use a unique digital ID to access online services. Singapore initiated a digital identity programme in which citizens and residents will be issued crypto-based mobile digital identities to be used in public and private sector transactions. In Ireland, the city of Dublin launched preparations for a digital ID initiative which, among other things, will facilitate citizens’ access to certain public and private services. 

Authorities in the Catalan region of Spain started implementing a decentralised digital ID project based on the IdentiCAT application, which uses blockchain technology and distributed ledgers. Blockchain-based digital ID programmes are also in the pipeline in Bermuda and Sierra Leone. Uganda contracted a digital security company to provide digital identities and mobile registration systems for its citizens, and Guinea launched a new civil registration system and is working on providing biometric IDs.

Many digital ID programmes rely on mobile phone applications, thus reflecting a focus on accessibility and mobility, while also capitalising on the fact that mobile phones with data plans are increasingly popular, including in developing countries. Some digital IDs also involve the use of blockchain, demonstrating that the technology has an increasing number of applications. 

Why is this significant?

As these examples illustrate, digital ID programmes are launched for various purposes. In developed countries, they are mostly designed to simplify citizens’ access to online services. In developing countries, they are also intended to improve social and economic inclusion and help achieve the overarching ‘leave no one behind’ objective of the 2030 Agenda for Sustainable Development.

Because digital IDs involve the collection and processing of personal data, states need to demonstrate that adequate safeguards are in place to ensure data security and privacy. In the absence of such safeguards, these systems risk becoming tools for state surveillance and other abuse by public authorities. Concerns about such risks were raised, for example, with regard to China’s system of state-issued smart cards that record and analyse a person’s interactions with governmental entities. In Jamaica, the Supreme Court found that the government’s plan to implement a national digital identification system was unconstitutional and in breach of the right to privacy.

Digital ID systems also need to be carefully planned to avoid disadvantaging vulnerable populations such as refugees and those in need of social services. 

Over the coming years, we are likely to see existing digital ID systems developed further and new programmes launched across the globe. If security and privacy considerations are properly addressed, these developments hold considerable promise.

World Bank’s ID4D initiative

The World Bank’s Identification for Development (ID4D) initiative is helping countries to build inclusive and trusted digital identification systems by:

  • Assessing national identity ecosystems
  • Providing technical assistance to support the design of digital identification systems based on global standards of good practice
  • Supporting the development of legal and regulatory frameworks, including in the areas of privacy and data protection
  • Facilitating the sharing of knowledge and experience between countries
  • Facilitating dialogue on technical standards for digital identities
  • Raising awareness about the importance of identity systems

By the end of 2019, ID4D had supported 46 countries with assessments, technical assistance, and financing for ID systems, and an overall sum of US$1 billion had been made available for lending projects designed to support the roll-out of ID and civil registration programmes.

Follow relevant issues on the GIP Digital Watch observatory: Digital identities

Back to the list of developments

18. Digital inclusion needed to achieve sustainable development

Digital technologies are often seen as having the potential to help achieve the sustainable development goals (SDGs). The Internet, blockchain, AI, and other technologies can help solve pressing challenges in sustainable development areas as diverse as education, gender equality, environmental protection, disaster relief, and health.

But this potential cannot be fully exploited if the world continues to be split between those who have access to digital technologies and those who do not. Several reports published in 2019 indicated that the digital divide remains a problem. The International Telecommunication Union (ITU) report Measuring digital development: Facts and figures 2019 confirmed continuing barriers to Internet access and use, especially in the least developed countries. And while ICTs are accelerating sustainable development in small island developing states, accessibility barriers continue to limit their impact. A similar conclusion was reached in the 2019 Digital Economy Report of the UN Conference on Trade and Development, which drew attention to the widening digital divides that threaten to leave developing countries even further behind. Even in the EU, some countries still lag behind in digitalisation, and the gender digital gap persists.

Faced with this reality, actors of all kinds continued to call for and take measures towards meaningful digital inclusion across the world. Areas of action went beyond network deployment to include issues of gender, language, education, and finance, among others.

Why is this significant?

Addressing the multiple forms of digital divide will be an essential step in enabling citizens, communities, and countries worldwide to fully realise their digital potential. This responsibility lies with multiple actors, including governments, international development agencies, private companies, and organisations in the technical community.

Promoting digital inclusion starts with ensuring that the right infrastructure is in place. Examples of initiatives in this area in 2019 include the installation of Wi-Fi connectivity in schools (in Zimbabwe), the allocation of funds to implement innovative mobile connectivity solutions for rural communities (GSMA support in Ghana and Uganda), and the deployment of fibre optic networks to help connect remote areas (in Indonesia).

A report by the Alliance for Affordable Internet showed that, in low- and middle-income countries, 1GB of data costs 4.7% of average monthly income, more than double the UN threshold for Internet affordability. This shows that, beyond connectivity, digital inclusion is also about affordable access. Digital skills and competences are equally important, and they should include not only technical skills but also awareness about how to use technology in a meaningful and secure manner. As confirmed by the ITU, digital skills and capacity building are critical for achieving the SDGs.
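
The arithmetic behind that benchmark is simple. The sketch below uses hypothetical figures chosen to reproduce the 4.7% cited above, assuming the UN ‘1 for 2’ target, which treats 1GB costing more than 2% of average monthly income as unaffordable.

```python
# Affordability check: cost of 1GB of mobile data as a share of average
# monthly income, against the UN "1 for 2" threshold of 2%.
UN_THRESHOLD_PCT = 2.0

def data_cost_share(price_1gb: float, monthly_income: float) -> float:
    """Return the cost of 1GB as a percentage of average monthly income."""
    return 100 * price_1gb / monthly_income

# Hypothetical figures: $4.70 for 1GB against a $100 average monthly income.
share = data_cost_share(price_1gb=4.70, monthly_income=100.00)
status = "above" if share > UN_THRESHOLD_PCT else "within"
print(f"1GB costs {share:.1f}% of monthly income ({status} the UN threshold)")
```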

Addressing gender inequalities and enhancing access for people with special needs will be other key tasks on the journey towards sustainable digital development. Efforts in this direction include initiatives such as the EQUALS global partnership, the eTrade for Women Network, and the ITU Accessibility Fund. Promoting multilingualism in the digital space and supporting the development of local content also promise to help bridge digital gaps at the global level. Here a notable initiative comes from the Internet Corporation for Assigned Names and Numbers (ICANN), which is spearheading efforts towards universal acceptance – the concept that Internet applications and systems must treat all Internet domain names (including those in scripts other than Latin) in a consistent manner.
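
In practice, universal acceptance means, among other things, handling internationalised domain names (IDNs) correctly in every application. A minimal sketch of the underlying conversion, using the third-party Python idna package (IDNA 2008) and a reserved Japanese-script test domain, might look like this:

```python
# Converting an internationalised domain name to its ASCII (Punycode) form
# and back; applications that do this consistently can accept domain names
# in any script. Requires the `idna` package (pip install idna).
import idna

name = "例え.テスト"  # a reserved IDN test domain

ascii_form = idna.encode(name)  # e.g. b'xn--r8jz45g.xn--zckzah'
print(ascii_form.decode("ascii"))

round_trip = idna.decode(ascii_form)
assert round_trip == name       # the Unicode form survives the round trip
print(round_trip)
```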

A decade remains until 2030 – the deadline for implementing the UN Sustainable Development Agenda. If enabling policies and incentives are put in place, there is still time for digital technologies to demonstrate their potential (as indicated in the 2019 Human Development Report of the United Nations Development Programme).

A Charter for a Sustainable Digital Age

In October 2019, the German Advisory Council on Global Change proposed that the world commit to a Charter for a Sustainable Digital Age. Titled ‘Our Common Digital Future’, the proposed charter (under public consultation until the end of January 2020) outlines a series of principles and guidelines for the international community as it works towards sustainable development. These include the protection of human dignity and of natural life-support systems, inclusion in and access to digital and digitalised infrastructures and technologies, and individual and collective freedom of development in the digital age.

The charter is built around three key elements:

  • Digitalisation should be designed in line with the 2030 Agenda, and digital technology should be used to achieve the SDGs. 
  • Beyond the 2030 Agenda, systemic risks should be avoided, in particular by protecting civil and human rights, promoting the common good and ensuring decision-making sovereignty. 
  • Societies must prepare themselves procedurally for future challenges by, among other things, agreeing on ethical guidelines and ensuring future-oriented research and education.

Follow relevant issues on the GIP Digital Watch observatory: Sustainable development | Access | Capacity development | Multilingualism

Back to the list of developments

19. Digital technologies converge with other sectors

As rapid innovation in the digital field continues, new applications of digital technologies emerge in an ever greater number of areas. Self-driving vehicles are evolving towards complete autonomy, digital solutions promise increased efficiency and accuracy in healthcare, the interplay between biology and technology is generating exciting breakthroughs, and the potential development of lethal autonomous weapons is raising concerns across the world. The year 2019 was rich in developments in all these areas. 

Autonomous cars remained in the public view, as companies such as Audi, Ford, Tesla, Uber, and Waymo continued testing their technologies on public roads. In the USA alone, more than 80 companies tested over 1400 self-driving vehicles on public roads in 36 states. In healthcare and medical sciences, new uses of digital technologies ranged from AI tools to diagnose diseases and design new medicines, to robotic nurses and solutions aimed at preserving the voices of people at risk of losing them.

At the intersection of technology and biology, Neuralink’s work on brain-machine interfaces sparked the imagination of many, promising progress in enabling direct communication between the brain and external devices such as artificial limbs. Advancements in genome editing (the insertion, deletion, modification, or replacement of DNA in the genome of a living organism) and cellular agriculture (which focuses on creating alternatives to the meat and dairy industry) also benefited from digital technologies such as AI and 3D printing. 

Why is this significant?

Technological progress has always generated concerns over potential impacts on economies, societies, and humanity at large. The speed with which digital technologies evolve exacerbates such concerns.

While fully autonomous cars are not here yet (some estimate that they could be available in five years’ time), they already have safety and security implications, as demonstrated by several accidents on public roads. This has encouraged more authorities (in countries such as the UK and Singapore) to move towards guidelines and regulations for the companies developing such vehicles. 

The risks associated with the potential development of lethal autonomous weapons systems (LAWS) continued to be explored by the dedicated Group of Governmental Experts. Issues considered include implications for international humanitarian law, military and security challenges, and the human element in the use of autonomous weapons. These and other considerations prompted the UN Secretary-General to call for a ban on LAWS.

Digital health – a field becoming increasingly attractive to investors – has also become the subject of public policy, with new regulations and strategies introduced on an ongoing basis. The World Health Organization (WHO), for instance, issued a set of guidelines on how countries can use digital technologies to improve people’s health, while the CoE developed guidelines for the protection of health-related data.

In the field of biotechnology, urgent policy and regulatory questions include: What are the limits and ethical implications of human genome editing? What about the privacy and security implications of neuroscience advancements? How far should we allow brain-machine interfaces to go, and do we want machines to be able to read our minds? Some of these questions are already being explored by scientific and academic communities and, increasingly, by governments and intergovernmental organisations. For instance, an International Commission on the Clinical Use of Human Germline Genome Editing was established in 2019 to develop a scientific and ethical framework to guide research in the area of human genome editing. And, at WHO, an expert panel tasked with examining the ethical, social, and legal implications of gene editing began work in 2019.

Quantum computing

Quantum computing promises computational power far beyond the capabilities of today’s technologies. This could pave the way for unparalleled innovations in areas such as medicine, electronics, cybersecurity, and transportation.

Google, IBM, Intel, Microsoft, and other major tech companies are making significant investments in quantum computing research and development. In October 2019, Google announced that it had achieved ‘quantum supremacy’ with a quantum computer that carried out a specific calculation which would have taken a classical computer 10 000 years to complete. IBM soon challenged that claim, arguing that the problem solved by Google’s computer could also be solved in just 2.5 days using a different classical technique.

We can expect the race for supremacy in quantum computing to continue at an ever-faster pace, as companies become aware of the significant benefits they can gain from developing quantum computers able to solve problems that classical computers cannot. Countries, too, are engaged in a similar competition, with the USA and China currently at the forefront, and the EU, Japan, and others following closely behind.

Follow relevant issues and processes on the GIP Digital Watch observatory:  Emerging technologies | The rise of autonomous vehicles | GGE on LAWS

Back to the list of developments

20. UN panel calls for improved digital co-operation

In July 2018, UN Secretary-General António Guterres appointed a High-level Panel on Digital Cooperation to ‘propose modalities for working cooperatively across sectors, disciplines and borders to address challenges in the digital age’. Following extensive consultations at the international level, the Panel published its report – The age of digital interdependence – in June 2019.

One of the report’s overarching conclusions was that ‘our dynamic digital world urgently needs improved digital cooperation’. In the Panel’s view, ‘effective digital cooperation requires that multilateralism, despite current strains, be strengthened. It also requires that multilateralism be complemented by multi-stakeholderism – cooperation that involves not only governments but a far more diverse spectrum of other stakeholders such as civil society, academics, technologists and the private sector.’

In addition to its clear call for strengthened co-operation in addressing digital policy challenges at the global level, the Panel outlined several recommendations and priority actions in four other areas: building an inclusive digital economy and society; developing human and institutional capacity; protecting human rights and human agency; and promoting digital trust, security, and stability.

One of the Panel’s key recommendations was for the UN Secretary-General to ‘facilitate an agile and open consultation process to develop updated mechanisms for global digital cooperation’. The Panel went one step further by suggesting three such possible mechanisms: an IGF Plus, a Distributed Co-governance Architecture, and a Digital Commons Architecture.

Why is this significant?

This identification of a need for improved digital co-operation at the global level is more than just a statement. It is in fact an urgent call for action, prompted in particular by the rise of divergent national and regional policies and regulations that threaten to fragment the digital space. Achieving strengthened co-operation requires involvement and commitment from all those who are part of this digital space. 

Public consultations and debates held in the second part of 2019 indicated that the IGF Plus model proposed by the Panel has significant potential to improve digital co-operation. Many indicated that one significant benefit of IGF Plus is that it builds on an existing model, instead of creating a new one from the ground up. 

Taking advantage of the IGF’s multistakeholder nature, IGF Plus could act as a bridge between different communities, processes, and organisations, and as the connective tissue between UN bodies dealing with different digital policy issues. But for this potential to be fully explored, several issues need to be addressed, including increasing the participation of governments and the private sector; creating a strong link to the UN Secretary-General’s office as a step towards a more visible IGF; and ensuring financial sustainability. 

To follow up on the Panel’s report, the UN put in place a process consisting of a series of roundtables around the panel’s five key recommendations. The results of these roundtables are to be presented to the Secretary-General in the spring of 2020 and will serve as input for the development of a roadmap for action on the recommendations. It remains to be seen whether the follow-up process will lead to concrete steps towards the operationalisation of the IGF Plus model or other mechanisms.

What is the proposed IGF Plus?

In the Panel’s vision, IGF Plus would include:

  • An Advisory Group that would prepare IGF annual meetings and identify focus policy issues each year
  • A Cooperation Accelerator that would support co-operation among existing organisations and processes on specific issues
  • A Policy Incubator that would monitor, examine, and incubate policies and norms
  • An Observatory and Help Desk that would provide an overview of digital policy issues, co-ordinate capacity development activities, and provide help and assistance on digital co-operation and policy issues

Follow relevant issues and processes on the GIP Digital Watch observatory: UN High-level Panel on Digital Cooperation | Internet Governance Forum | Interdisciplinary approaches

Back to the list of developments

