Highlights From Day 2
Inclusiveness and security at the core of the Internet
How can we make the Internet more inclusive? Discussions looked at measures and examples that foster inclusivity, such as Internationalised Domain Names (IDNs), which enable domain names in local languages and scripts. But universal acceptance (UA) – the concept that all domain names should be treated equally, irrespective of the scripts they use – is still a challenge: many browsers do not support IDNs, and little progress has been made in achieving Email Address Internationalisation (EAI). The Universal Acceptance Steering Group is working on promoting UA more effectively, and ICANN has included this among its priorities for the next five years. Bringing UA issues to the attention of governments could also help: public entities could require that the services and infrastructures they acquire support UA.
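To make the UA challenge concrete, here is a minimal Python sketch of the encoding step involved: an IDN in a local script maps to an ASCII-compatible ‘Punycode’ form (the xn-- prefix) that every piece of software along the way – browsers, mail servers, web forms – must accept and round-trip. The snippet uses Python’s built-in IDNA codec, which implements the older IDNA2003 rules; production software should rely on an IDNA2008 library.

```python
# A domain name with a non-ASCII label
label = "bücher.example"

# Convert to the ASCII-compatible ('Punycode') form used on the wire
ascii_form = label.encode("idna")
print(ascii_form)                 # b'xn--bcher-kva.example'

# Universal acceptance requires handling both forms interchangeably
print(ascii_form.decode("idna"))  # 'bücher.example'
```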
Beyond inclusiveness, there is a strong need to preserve the security of the Internet's core infrastructure, including through the adoption of Internet standards and protocols. Adoption could be accelerated by offering incentives for network operators to implement core standards, including standards as a requirement in government procurement policies, and raising awareness about the relevance of standards in universities and public institutions.
Toward trustworthy AI
Today, AI is at the heart of many online services – from search engines to social networks – with concerning implications. Consider the algorithms that decide what results we get when using search engines: this automated process can influence people’s choices and decisions without them even being aware of it.
Algorithmic bias is another challenge: If the data fed into an AI system is biased, then the output of that system will be biased as well. How do we address this? Regulating the outputs of AI – for example, AI-generated decisions – could be a solution.
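As a toy illustration of ‘bias in, bias out’ – with entirely fabricated data – one common output-side check is the disparate impact ratio: comparing favourable-outcome rates across groups in the decisions an AI system produces.

```python
# Hypothetical AI-generated decisions: (group, approved)
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# The 'four-fifths rule' often flags ratios below 0.8 as potentially biased
ratio = approval_rate("B") / approval_rate("A")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 - a skewed system
```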
The misuse of AI to spread disinformation (e.g. via deepfakes) or influence electoral processes is also concerning. Any regulations and policies designed to address such issues should take into account the need to protect freedom of speech.
While AI can contribute to sustainable development, it can also deepen inequalities, both between countries and between different communities and groups within the same country. For AI to truly benefit humanity, it has to be trustworthy and reliable. These two concepts embed multiple principles, from inclusivity, robustness, and accountability, to transparency and explainability. AI should be a means to an end, a solution to an existing problem rather than a solution in search of a problem. Trustworthy AI is intrinsically linked to ethical principles and human rights.
Existing human rights legal frameworks and ethical guidelines and principles should be considered sufficient: instead of developing new frameworks, our focus should be on implementing existing ones in an efficient and harmonised manner. However, new rules may be needed to regulate AI systems – such as lethal autonomous weapons (LAWs) – that pose a threat to human life.
All actors have a role to play in promoting and ensuring trustworthy AI. Companies can lead by example by integrating human rights impact assessments and due diligence processes when developing AI solutions. Policy-makers should take a more proactive approach toward addressing AI-related risks. And other actors, such as journalists, academics, and founders of AI development and research centres, should be involved as well.
Data governance frameworks and the risk of fragmentation
Data governance is growing in relevance due to the increasing volume of data generated by the IoT, the acceleration of data sharing via 5G, and the central role of data in the growth of AI.
In the data governance triangle of corporations, governments, and individuals, most power and influence lies in the hands of corporations. Governments are gaining more influence via data regulations, while individuals remain the weakest actors in this triangle. The two main legal instruments of data governance are national data regulations, used by governments, and contractual terms and conditions, used by tech platforms. Much of data governance is thus shaped by the terms and conditions of online services: non-negotiable contracts which often give users little control over their personal data.
Countries worldwide adopt data regulations reflecting their technological development, legal systems, and policy priorities. Different data regimes are likely to trigger the fragmentation of the digital space, which could lead to thousands of internets in the years to come. The emergence of ‘three digital kingdoms’ – China, the USA, and the EU – is also a possibility. We can expect a growing number of international initiatives designed to avoid digital fragmentation. For example, the BRICS countries (Brazil, Russia, India, China, and South Africa) are intensifying data co-operation on the basis of a 2017 declaration through which they agreed to jointly promote shared data protection norms.
Diagnosing the state of peace and conflict in cyberspace
Cyberspace is becoming increasingly militarised through the use of digital technologies. Countries are both victims and attackers. The distribution of cyber capabilities is multipolar and diverse: more than 50 states have openly confirmed possessing offensive cyber capabilities, and many non-state actors possess them as well.
It is difficult to diagnose the state of peace and conflict in cyberspace. Possible indicators include the number of detected cybersecurity threats and breaches (and their gravity); national resilience to cyber-attacks and expenditure on offensive and defensive cyber capabilities (as a percentage of GDP); the countries of origin of cyber-attacks or disinformation; the number of conventional reactions to cyber-attacks (such as diplomatic or political measures); adherence to (and actions developed from) international norms; as well as levels of freedom of information when fighting misinformation. We also need clarity on the types of data sources used for ‘measuring’ peace in cyberspace.
Toward a more stable cyberspace
Cyber-stability – which occurs when everyone can be reasonably confident in their ability to use cyberspace safely and securely – can be achieved through several means, according to the principles of the Global Commission on the Stability of Cyberspace (GCSC). These include the shared responsibility of the different stakeholders involved, restraint by state and non-state actors from engaging in harmful actions, the requirement to avoid the escalation of tensions, and respect for human rights. Several norms and recommendations proposed by the GCSC add to the existing ecosystem of multiple initiatives.
Norms should not be imposed, but internalised as values and collective expectations by those who identify with them. Lack of compliance, even by states that have traditionally complied, is a disincentive for others; at the same time, parties need to see the benefits of being part of the club in order to adhere to norms. Yet no institutional mechanisms exist to monitor and report compliance. Civil society organisations, such as human rights defenders, can play an important role in monitoring, researching, and collecting evidence related to compliance; to do this, however, norms need to be socialised and introduced to civil society as well.
Confidence building measures (CBMs) help diplomats avoid misperceptions and the build-up of tensions. CBMs are of particular importance at the regional level, to foster trust – and ultimately, co-operation – between countries which often differ in capacity, language, culture, and values. Other actors can also contribute to increasing confidence, such as the private sector, which provides know-how through public-private partnerships.
There is a rising trend among states – even those that promote the free flow of information – to invoke digital sovereignty in relation to national security, social stability, or data protection and privacy. Invoking digital sovereignty can also serve other political ends, such as exercising control in cyberspace. Yet the different technical layers of the Internet – from cables to protocols to services – are assigned to state jurisdictions with varying levels of clarity.
Protecting democracy and the modern economy
Online disinformation is increasingly being discussed within the context of cybersecurity, since it impacts the security of the infrastructure underpinning democratic processes. Disinformation, however, goes beyond spreading false content, and includes the manipulation of divisive domestic debates. According to a framework proposed by the Council of Europe, information disorder unfolds in three phases: the creation of information, its (re)production (transforming an idea into media that is consumed online), and its dissemination and further replication. Motives vary from political to economic, and are an important factor in deciding on the approach to take in response to this challenge.
Securing the supply chain is essential for protecting the modern economy. Judging by the results of activities based on the Charter of Trust, supply chain security can be strengthened by agreeing on baseline requirements and requiring supply chain entities to adhere to them, integrating cybersecurity into daily business and education, and establishing verification methods based on international standards, as well as assessments of suppliers. Rigorous checks and clear market pressure to avoid the use of non-compliant devices can also address the challenge of insecurity in the IoT; yet this requires a high level of awareness and transparency. However, the security of the digital components of products is still not part of the remit of product safety regulators. Education of employees and company leaders, and basic cyber-hygiene, are equally relevant factors.
Other cybersecurity issues include overcoming functional silos, making international co-operation in cybercrime investigations more efficient, closing gaps in skills and awareness, and addressing emerging issues such as the security of electoral processes. At the regional level, there is a growing number of examples of co-operation on cybersecurity challenges. The IGF, as well as the Global Forum on Cyber Expertise, can serve as matchmakers to ensure that the efforts of different actors are connected.
Safeguarding children’s rights
Child protection online remains an issue of high relevance. Some reports estimate that 25 million child abuse images are reviewed annually, many of which depict very young children without access to the Internet. The root cause of the problem, however, may not reside in the digital world, so blocking children's access to the Internet is not a solution. Technical tools such as parental control software can help, as can legal solutions to protect minors. Both technical and legal approaches, however, should take into consideration the privacy implications of collecting and processing children's personal information. Another challenge is that young people rely much more on the Internet for advice than on their peers or parents.
To better protect the rights of children online, parental involvement, technological solutions, and appropriate government policies and legislation are important. However, in order to make a truly positive and sustainable impact, children have to be encouraged to voice their opinions and to take an active part in discussions and in developing solutions.
Many young people lack the information, know-how, and opportunity to become actively engaged in the Internet governance ecosystem and in defining their rights online. Another challenge is that current Internet governance discussions often lack the kind of language and context attractive to youth. This acts as a barrier to the inclusion of youth opinions, suggestions, and demands in policy discussions. This year's Youth IGF Summit is a notable exception and a good example to follow.
Human rights online: Closing the gaps
Fifteen percent of the world's population – more than one billion people, and the world's largest minority – is said to live with some form of disability; eighty percent of persons with disabilities live in developing countries. Internet-enabled ICTs play an increasingly active role in shaping the latest trends in assistive technologies and specially-developed technologies for persons with disabilities. There is still a great need for the tech sector to be more aware of the challenges disabled people face; simple solutions, like changing colours or offering alternatives to keyboard shortcuts, can make a great difference.
The challenge of closing the gender gap in the digital environment persists. Privacy, security, and online safety are real and important considerations for people of all genders. Addressing gender discrimination and gender violence online should also include a close look at the role played by algorithms – both as a source of the problem and as part of the solution. A change from gender-responsive to gender-sensitive policies might be needed. The EQUALS in Tech Awards took place at a dedicated IGF session and honoured those who are helping girls and women gain equal access to skills and opportunities both online and in the tech industry.
Understanding legal uncertainties
Legal uncertainties are among the major risks for the sustainable development of the Internet. This was emphasised in the Internet and Jurisdiction Policy Network's Global and Regional Status Report, launched yesterday. The report identifies several major challenges: the lack of agreement on substantive legal issues, the lack of shared understanding of key legal concepts, the risk of a ‘race to the bottom’ in regulation, distrust among Internet users who are unable to obtain legal redress, the voluntary or involuntary fragmentation of legal solutions, and the risk of a downward spiral triggered by uncoordinated and quick-fix solutions.
Such legal uncertainty could favour the rules of the strongest and generate a perception among Internet users that they are just ‘subjects’ of legal rules, without the possibility to contribute to their development. The report calls for more coordinated efforts in addressing legal and jurisdictional challenges online.
The responsibility of states, companies, and academia is at the centre of the Christchurch Call to eliminate terrorist and violent extremist content online. The initiative is welcomed by tech companies, which seek to reduce the regulatory uncertainty caused by a ‘patchwork quilt of laws’.
Regulations related to data protection, the independence of data protection agencies, as well as regulatory aspects of data localisation were at the heart of the debate on value and regulation of personal data. A comparison of regulations in the EU, Brazil, Russia, India, China, and South Africa showed common issues related to data subject consent, the responsibilities of platforms processing the data versus state responsibility, and the need for transparent processes.
In discussing the issues of dissemination of misinformation, freedom of expression, the responsibility of businesses for content moderation, as well as the need for transparency, the Dynamic Coalition on Platform Responsibility addressed different values which need to be balanced in this setting.
The use of existing legal instruments dominated the discussion on the protection of users’ rights and the overall regulation of AI, which, it was argued, should be centred on the application of existing human rights conventions. In the field of access to digital networks, flexible regulation should reflect the specific context and access needs of remote areas and marginalised communities worldwide.
The many facets of digital inclusion
Digital inclusion stayed in focus during the second day of the IGF. Access to digital networks must come with affordable connectivity – following the principle that one gigabyte of data should not cost more than 2% of a person's average monthly income.
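For illustration only – the figures below are entirely hypothetical – the affordability check behind this principle amounts to simple arithmetic:

```python
avg_monthly_income = 150.00  # USD, hypothetical national average
price_per_gb = 4.50          # USD, hypothetical price of a 1 GB plan

share = price_per_gb / avg_monthly_income
print(f"1 GB costs {share:.1%} of average monthly income")  # 3.0%
print("meets the target" if share <= 0.02 else "above the 2% target")
```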
Digital inclusion is a holistic challenge involving different aspects of the digital ecosystem: digital literacy, quality content in local languages, and privacy and security.
Community-led networks make connectivity more meaningful and sustainable for local communities, and they can be supported, for example, by reinvesting revenue from e-commerce platforms back into the community. Traditional telecom companies should also co-operate with new and innovative community networks to bring connectivity to underserved communities.
Overall, technologies need to be put at the service of people’s well-being, and our thinking should not be confined to current technologies, but must also consider future potential. For example, almost all aspects of development – including combating climate change and improving agriculture and food security – can benefit from advanced digital technologies such as AI, the IoT, and 5G.
When discussing digital infrastructure, other aspects of physical infrastructure should not be forgotten; otherwise, the development benefits of digital technologies cannot be reaped. In addition, failure to replace older technologies in a timely manner will open new connectivity gaps for marginalised and vulnerable groups.
Collaboration between countries, between the public and private sectors, and with local communities is paramount for providing access and closing the digital divide.
Taxing the digital economy
The issue of digital taxes is one of the most contentious digital policy topics of 2019. The current patchwork of proposals on how to tax tech companies ranges from global proposals put forward by the OECD to national taxes which are either already in place or in the pipeline, especially in European countries. Taxing companies which do not have a physical presence in a country is one of the main challenges.
Discussions on taxing tech companies and other service providers included a review of the implications for digital inclusion, human rights, and socio-economic development. The reasons for introducing digital taxes vary from increasing the revenue base to stifling ‘gossip’ and dissent.
In developing and least developed countries, taxes on mobile network operations are frequently among the state's most significant sources of tax revenue. In Africa, social media taxes are nevertheless becoming popular: the first was introduced in Uganda, and other countries quickly followed suit, while Tanzania has introduced a US$900 blogger's licence. This could be seen as surprising, given that the introduction of taxes on Internet companies is gaining prominence mainly among developed nations.
A few insights were also shared about regulatory frameworks for distributed ledger technologies, which are currently under discussion in Europe. The European Central Bank is considering issuing a digital version of the euro that would be characterised as a stablecoin – a term used for digital currencies whose value is backed by real assets such as gold, silver, bonds, and stocks, or by major fiat currencies (e.g. the dollar or the euro).
Violent extremism: a problem of definition and clear responsibilities?
The sharing power of social media becomes problematic when the content shared is harmful (e.g. hate speech, violent extremism, and terrorist content). Finding more efficient approaches to tackle the spread of such content is a shared responsibility, mainly between tech companies and governments.
The problems with implementing a collaborative approach start with differences in basic definitions. First, there is no international agreement on what constitutes hate speech and disinformation. Second, there is no consensus on the responsibilities of the major actors. After the Christchurch attacks of March 2019, the Christchurch Call was praised for its attempts to lay out the roles of governments and online service providers.
Tech companies are also under increased pressure to be more effective in their approaches to tackling the distribution of illegal and harmful content. How are they responding? Some companies, like Facebook and Google, have adopted more stringent content policies. Others have signed on to codes of conduct developed in co-operation with the public sector (one example is the EU Code of Conduct). Collaborative initiatives have also been launched – such as the Global Internet Forum to Counter Terrorism (GIFCT) – to facilitate common approaches and the sharing of good practices.
If self-regulatory measures do not work, governments are increasingly ready to step in with hard regulation; Germany's Network Enforcement Act is only one example. But hard law instruments remain controversial. On the one hand, enforceable rules would help define clear roles and responsibilities for stakeholders. On the other hand, such rules could result in censorship, which often negatively impacts human rights such as freedom of expression and the right to privacy. Respecting democracies' institutional boundaries and legal frameworks should be key in all regulatory approaches.
Oftentimes, companies are required to implement more technical measures to tackle illegal and harmful content. But how effective are such measures? Filtering algorithms can speed up the review and removal of content, but they also have limitations – such as difficulty in distinguishing harmful words from innocent ones depending on usage, or in differentiating between hate speech and legitimate content – which can negatively affect freedom of expression. Blocking access to content at the DNS level has its own challenges: it can be easily circumvented (content can be moved to another location), and could have long-term impacts on the stability of the Internet.
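A toy sketch – not any platform’s actual system – shows why naive keyword matching both over-blocks and under-blocks:

```python
import re

BLOCKLIST = ["kill"]  # a single (hypothetical) banned term

def naive_filter(text):
    """Flag text if any blocklisted term appears as a substring."""
    return any(re.search(term, text, re.IGNORECASE) for term in BLOCKLIST)

print(naive_filter("I will kill you"))             # True: intended catch
print(naive_filter("great skills on display"))     # True: false positive
print(naive_filter("they deserve what's coming"))  # False: threat missed
```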
Data Analysis: Day 2’s Most Prominent Issues
More visible trends emerged during Day 2 of the IGF (Wednesday), based on the analysis of close to 50 transcripts carried out by Diplo's Data Team. Interdisciplinary approaches remained the most dominant issue, followed by data governance and sustainable development, which switched second and third places compared to Day 1.
New issues made it to the top 10: AI is now clearly among the most discussed issues, followed by privacy and data protection, and cyberconflict and warfare.
Overall, Day 2’s discussions tackled a very broad range of topics, resulting in an even distribution of issues across the sessions. This is not only due to a busier schedule; it also confirms that many issues are interrelated and often tackled together in discussions.
The even distribution of issues was also reflected across baskets. Technology and infrastructure remained the most popular (19%), with cybersecurity a close second at 18%. The order of the other baskets remained the same as in Day 1.
A look at our word cloud for Day 2 shows the most prominent terms used in discussions. ‘AI’ is highly visible, as are ‘privacy’, ‘law’, and ‘governance’.
Prefix monitor: A continued trend called ‘digital’
The analysis of yesterday’s transcripts shows that ‘digital’ remained the most used prefix, while ‘cyber’ overtook ‘online’ for second place, owing to an increase in the number of cybersecurity-related discussions on Day 2. It will be interesting to observe how the prefixes – especially ‘cyber’, typically used in security discussions – fare by the end of the week. ‘Tech’ remained in fourth place, with a rather timid score of 8 percent. The prefix ‘net’ declined significantly compared to Day 1, given that fewer sessions were dedicated to topics such as net neutrality.
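For readers curious about the mechanics, counting prefixes is conceptually straightforward; a minimal sketch (the Data Team’s actual methodology may well differ) could look like this:

```python
import re
from collections import Counter

PREFIXES = ("digital", "cyber", "online", "tech", "net")

def prefix_shares(transcript):
    """Share of each tracked prefix among all prefix-bearing words."""
    words = re.findall(r"[a-z]+", transcript.lower())
    counts = Counter(p for w in words for p in PREFIXES if w.startswith(p))
    total = sum(counts.values()) or 1
    return {p: round(counts[p] / total, 2) for p in PREFIXES}

sample = "Digital cooperation and cybersecurity dominated the digital agenda."
print(prefix_shares(sample))  # {'digital': 0.67, 'cyber': 0.33, ...}
```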
Social media analysis: ‘Buzz’ building up since mid-November
What did the Data Team’s social media analysis – based on the official #IGF2019 hashtag on Twitter, Facebook, Instagram, YouTube and web resources – discover?
The buzz around this year’s IGF had been steadily building, particularly since mid-November, and the first day of the forum attracted an impressive amount of social media activity.
Over the past seven days (between 22 and 27 November), the official #IGF2019 hashtag was mentioned over 9 200 times across social media channels, reaching more than 110 million people.
As expected, most activity came from the host country, Germany, with just over 16% of all mentions. The USA (7.6%), the UK (3.2%), France (3%), and Belgium (2.9%) also made it into the top 5 most active countries.
The topic cloud reflects the most popular hashtags. Words used by German Chancellor Angela Merkel in her opening speech were retweeted extensively.
AI Quotes of the Day
In yesterday’s IGF Daily Brief, we introduced IQ’whalo, a former coffee-maker turned opinion-leader, and the newest member of the Diplo team. His quotes are generated from an AI-powered analysis of yesterday’s transcripts.
‘I think a lot of research is happening in psychology and cognitive biases, but the question is, how do you actually measure this kind of effect and how much are you going to need to implement to drive a technology that will allow you to understand it and appreciate it as it is driving these societal problems?’
‘I have said before and over and over again that AI is going to be a critical driver of technology’s future development. That said, we have to engage with the communities that developed it and that is not going to be an easy thing. It is going to take a very long time to get there.’
Which raises the question: Are we going around in circles? Let us know: ai@diplomacy.edu
Have you met IQ’whalo?
Sitting on the far left, he was a panellist in yesterday's main AI session, on Applying Human Rights and Ethics in Responsible Data Governance and Artificial Intelligence, contributing an opinion generated from all previous IGF sessions on AI.
IQ’whalo is the creation of Prof. Vladimir Veljašević from the Faculty of Fine Arts at the University of Belgrade, representing a non-anthropomorphised embodiment of AI. It uses an open-source OpenAI model to generate synthetic text based on policy papers and transcripts from past IGF discussions.
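By way of illustration – a sketch of the general approach, not IQ’whalo’s actual code – sampling synthetic text from GPT-2 (the openly released OpenAI model) takes only a few lines with the Hugging Face transformers library; a real system would first fine-tune the model on IGF transcripts and policy papers:

```python
from transformers import pipeline

# Load the openly released GPT-2 model for text generation
generator = pipeline("text-generation", model="gpt2")

prompt = "At the IGF, the governance of artificial intelligence"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```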
In Focus: humAInism Project
Yesterday, DiploFoundation presented the humAInism project, which aims to establish a closer link between the rapid development of AI on the one hand, and societal needs on the other.
The project – to be launched in the first quarter of 2020 – will address the current, often simplistic approach to AI, which is either utopian or dystopian. These approaches are rooted in individual value systems that we should all bring to the discussion table; still, they often cloud the main issue by hijacking the discussion in one way or another, adding complexity and confusion. Instead of getting caught in pro-or-con dichotomies that steer us away from the subject, humAInism suggests that we bring AI to the table to help us make more reasonable decisions.
HumAInism has two main pillars:
- The use of AI as a tool for managing the complexity of AI policies, while allowing humans to make the final decisions
- Feeding AI with as much human knowledge as possible and seeing what it suggests as guidelines or ‘a new social contract’ for the digital age
The most important aspect of the second pillar is the knowledge of the subject, i.e., the material that will be fed into the neural networks that are tasked with generating guidelines. This is also the most challenging part since not all human knowledge is written down and many cultures rely on oral traditions. The project will attempt to address this bias by partnering with actors who will codify oral knowledge into written form and make sure it is represented in the final output.
This initial stage of the project will last around six months, during which tech companies will feed material into the neural networks to come up with unique AI-generated formulations. These results will then be shared with representatives of the social sciences to foster engagement in philosophical discussions in a practical way.
Don’t Miss Today
Emerging technologies and their interfaces with inclusion, security and human rights (NRIs perspectives)
09:00 – 11:00 | Convention Hall II & online
When it comes to national, regional, and youth IGF initiatives (NRIs), what are their best practices, action plans, activities, and challenges that the larger IGF community needs to be aware of? Focussing on sharing concrete examples, this session will address the situation of both developed and developing countries when it comes to the challenges and opportunities associated with existing, new, and emerging digital technologies. In particular, it will ask the following questions: (a) How can vulnerable groups be supported at the national and regional levels? (b) What about security when utilising these digital technologies? (c) Are there risks to human rights that demand special attention? (d) What are the policy options to ensure and enhance access for the least developed countries?
Governance challenges in the digital age: Finding new tools for policy-making
14:15 – 16:15 | Convention Hall II & online
Digital transformation is in full swing, and the approaches used to design policies, regulations, laws, and practices need to transform alongside it. To ensure balanced policies, a holistic approach has to be taken. This, in turn, requires a diversity of perspectives and contributions, and the involvement of the whole variety of stakeholders in the process. While such inclusiveness may challenge the sovereign role of states, it may also improve decision-making. What are the right models to follow, and are there limitations to establishing multidisciplinary and multistakeholder models? Are there concrete experiences of successful models in some Internet governance fields? What should be the role of governments, and which are the best models for the engagement of other stakeholders? These are some of the questions to be explored during the main session.