/* .igf-ai-report-block h2 {
color: #0693e3;
}
.igf-ai-report-block hr {
background-color: #bcb7b7;
} */
#knowledge-graph-card {
background: #fafafa;
border: none;
box-shadow: 0 0 13px 0 rgba(34, 62, 153, .1);
border-radius: 10px;
padding: 10px 20px;
margin: 20px 0;
}
#knowledge-graph-card iframe {
height: 500px;
}
/* Fullscreen button */
#full-screen-iframe-button {
font-family: "Montserrat", -apple-system, BlinkMacSystemFont, Roboto, Oxygen-Sans, Ubuntu, Cantarell, "Helvetica Neue", sans-serif;
padding: 18px 32px !important;
font-weight: 600;
font-size: 16px !important;
max-width: 225px;
margin: 1em 0;
color: #FFF;
background-color: #b71520 !important;
border: unset;
border-radius: 25px;
position: inherit;
transform: translate(0%, 0%);
cursor: pointer;
float: right;
display: flex;
align-items: center;
}
#full-screen-iframe-button i {
font-size: 32px;
}
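The stylesheet only covers the button's appearance; the fullscreen behaviour itself has to come from a script. A minimal sketch of that wiring, assuming the button expands the `#knowledge-graph-card iframe` (the button/iframe pairing and the function name are assumptions, not part of the original page):

```javascript
// Hypothetical wiring for #full-screen-iframe-button; the element IDs come
// from the selectors above, but the button/iframe relationship is assumed.
function wireFullscreenButton() {
  const button = document.getElementById('full-screen-iframe-button');
  const frame = document.querySelector('#knowledge-graph-card iframe');
  if (!button || !frame) return false; // markup not present on this page
  button.addEventListener('click', () => {
    // Element.requestFullscreen() returns a promise; swallow rejections
    // (e.g. when the call happens outside a user gesture).
    frame.requestFullscreen().catch(() => {});
  });
  return true;
}

// Only run in a browser; `document` does not exist elsewhere.
if (typeof document !== 'undefined') {
  wireFullscreenButton();
}
```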
/* End fullscreen button */
span.click-for-more {
position: absolute;
right: 0;
bottom: 0;
}
div.user-initials {
display: inline-block;
margin: 10px;
border-radius: 50%;
aspect-ratio: 1;
width: 100px;
/* max-width: 100px; */
background-color: #00818c;
margin-bottom: 10px;
}
/* h3.user-fullname {
margin-top: 0;
text-align: center;
} */
p.user-initials-inner {
color: white;
display: table-cell;
vertical-align: middle;
text-align: center;
text-decoration: none;
height: 100px;
width: 100px;
font-size: 38px;
}
/* div.user-speech-stats {
display: grid;
grid-template-columns: 1fr 1fr 1fr;
gap: 10px;
justify-items: center;
} */
.argument-supporting-facts p.supporting-facts,
.supporting-facts {
margin-bottom: 0;
}
.argument-topics,
.argument-sdgs {
display: flex;
flex-wrap: wrap;
align-items: center;
gap: 10px;
}
.argument-topics span,
.argument-sdgs span {
color: #fff;
background-color: #00879a;
border-radius: 50px;
padding: 5px 10px;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.12), 0 1px 2px rgba(0, 0, 0, 0.24);
text-align: center;
/* margin-bottom: 10px; */
/* transition: all 0.3s cubic-bezier(0.25, 0.8, 0.25, 1) transform 0.2s; */
/* cursor: pointer; */
}
/* OLD */
/* TABLE OF CONTENT */
/* div#igf-table-of-content{
border-radius: 13px;
text-align: left;
background-color: #fbf3dd;
padding: 20px;
margin: 0px 0 0px 0;
border: 2px solid #eeeeee;
color: #404040;
font-family: 'Titillium Web', sans-serif;
font-size: 1rem;
line-height: 1.5;
box-sizing: inherit;
flex: 0 0 25%;
padding-right: 2em;
}
div#igf-table-of-content h3{
margin-bottom: 5px;
margin-top: 0;
}
div#igf-anchorList {
color: #404040;
font-family: 'Titillium Web', sans-serif;
font-size: 1rem;
line-height: 1.5;
--mm-sidebar-collapsed-size: 44px;
--mm-sidebar-expanded-size: 440px;
box-sizing: inherit;
padding: 0 8px;
display: flex;
flex-direction: column;
}
div#igf-anchorList ul{
margin-bottom: 15px;
} */
/* a.igf-anchor-link{
font-family: 'Titillium Web', sans-serif;
font-size: 18px;
line-height: 1.5;
--mm-sidebar-collapsed-size: 44px;
--mm-sidebar-expanded-size: 440px;
box-sizing: inherit;
background-color: transparent;
color: #414141;
margin: 4px 0 8px;
padding-right: 16px;
text-decoration: none;
font-weight: 600;
} */
/* Mobile only */
@media all and (max-width: 480px) {
div.user-speech-stats {
grid-template-columns: 1fr;
justify-items: start;
}
span.click-for-more {
display: flex;
justify-content: flex-end;
position: inherit;
}
#knowledge-graph-of-debate iframe {
height: 300px;
}
}
/* NEW */
h2.ai-reporting {
color: #0693e3;
}
/* hr.ai-reporting-separator {
background-color: #bcb7b7;
} */
div#wp-block-themeisle-blocks-advanced-columns-ed298b0c .ai-reporting-table-of-content {
border-radius: 13px;
text-align: left;
background-color: #d3cbc0;
padding: 20px;
margin: 0px 0 0px 0;
border: 2px solid #eeeeee;
color: #404040;
font-family: 'Titillium Web', sans-serif;
font-size: 1rem;
line-height: 1.5;
box-sizing: inherit;
flex: 0 0 25%;
/* position: relative; */
padding-right: 2em;
}
div#wp-block-themeisle-blocks-advanced-columns-ed298b0c .ai-reporting-table-of-content h3 {
margin-bottom: 5px;
margin-top: 0;
}
a.ai-reporting-anchor-link {
font-family: 'Titillium Web', sans-serif;
font-size: 18px;
line-height: 1.5;
--mm-sidebar-collapsed-size: 44px;
--mm-sidebar-expanded-size: 440px;
box-sizing: inherit;
background-color: transparent;
color: #414141;
margin: 4px 0 8px;
padding-right: 16px;
text-decoration: none;
font-weight: 600;
}
.speaker-card {
background: #fafafa;
border: none;
box-shadow: 0 0 13px 0 rgba(34, 62, 153, .1);
border-radius: 10px;
padding: 10px 20px;
margin: 20px 0;
}
.speaker-card:hover {
box-shadow: 0 0 13px 0 rgba(34, 62, 153, .3);
}
.speaker-card-header {
cursor: pointer;
display: grid;
grid-template-columns: 3fr 8fr;
align-items: center;
gap: 10px;
position: relative;
margin-bottom: 0;
}
.speaker-card-body {
margin-bottom: 0;
}
.speaker-card-body h3 {
margin-bottom: 10px;
margin-top: 0;
}
.speaker-profile {
display: grid;
justify-content: center;
justify-items: center;
}
.speaker-profile.center-speaker-profile {
display: flex;
align-content: center;
align-items: center;
}
.speaker-profile h3 {
margin-top: 0;
text-align: center;
color: #2d353d;
}
p.disclaimer-text {
font-size: 17px !important;
line-height: 20px;
}
.user-speech-stats {
display: flex;
align-items: center;
justify-content: center;
}
.user-speech-stats p {
margin: 0;
text-align: end;
}
/* UNGA changes */
.dw-term-description a {
color: #477c94;
}
.dw-term-description a:hover {
color: #477c94;
}
#speakers {
color: #b71520 !important;
}
#dw-sidebar-right {
display: none;
}
/*.innerbanner.contentWidth {
display: none;
}
.newsletter-subscribe-v2.pageWidth {
display: none;
}
#contact-hub {
display: none;
}*/
/* ================ move this to be toggled on and off via JavaScript ================ */
/* .speaker-card-body {
display: none;
}
hr{
display: none!important;
} */
/* Accordion styling */
.wp-block-themeisle-blocks-accordion.has-light-title-bg>.wp-block-themeisle-blocks-accordion-item>.wp-block-themeisle-blocks-accordion-item__title {
font-size: 20px;
background-color: #c3e7f5;
border-radius: 15px;
border: 1px solid #add1fd;
}
.wp-block-themeisle-blocks-accordion.has-light-active-title-bg>.wp-block-themeisle-blocks-accordion-item[open]>.wp-block-themeisle-blocks-accordion-item__title {
border-bottom-left-radius: 0;
border-bottom-right-radius: 0;
}
.wp-block-themeisle-blocks-accordion-item__title:hover {
color: #000000;
background-color: #e4f7ff !important;
transition: all 0.5s;
}
/*Q&A CSS*/
.accordion-title {
padding: 15px 30px;
/* Less padding on top and bottom (15px), more on the sides (30px) */
font-size: 28px;
font-weight: 700;
text-align: center;
/* Centre the title text */
background: #13415b;
color: white;
border-radius: 12px;
box-shadow: 0 12px 24px rgba(0, 0, 0, 0.1);
margin-bottom: 20px;
margin-top: 70px;
width: 100%;
/* Full width of the container or page */
transition: box-shadow 0.3s ease, transform 0.3s ease;
cursor: pointer;
box-sizing: border-box;
/* Ensures padding is included in the width */
}
/* Hover effect through JS (the class will be added via JS) */
.accordion-title.hovered {
box-shadow: 0 15px 30px rgba(0, 0, 0, 0.2);
/* Stronger shadow on hover */
transform: translateY(-5px);
/* Slight lift effect */
}
.accordion {
width: 100%;
padding: 0 10px;
}
/* Accordion Item Styles */
.accordion-item {
margin: 15px 0;
border-radius: 10px;
overflow: hidden;
box-shadow: 0 10px 20px rgba(0, 0, 0, 0.05);
transition: transform 0.3s ease, box-shadow 0.3s ease;
background-color: white;
position: relative;
}
.accordion-item:hover {
transform: scale(1.005);
box-shadow: 0 15px 25px rgba(0, 0, 0, 0.1);
}
/* Accordion Header */
.accordion-header {
padding: 20px;
font-size: 20px;
font-weight: 500;
cursor: pointer;
transition: background-color 0.3s ease;
user-select: none;
display: flex;
justify-content: space-between;
align-items: center;
background-color: #3f4142;
color: #fff;
}
.accordion-header:hover {
background-color: #47494a;
}
/* Accordion Content */
.accordion-content {
max-height: 0;
overflow: hidden;
padding: 0 20px;
background-color: #f4eeee;
transition: max-height 0.5s ease-in-out, padding 0.5s ease-in-out;
}
.accordion-content p {
margin: 15px 0;
color: #333;
line-height: 1.6;
}
.accordion-item.active .accordion-content {
max-height: 500px;
padding-bottom: 30px;
}
.accordion-header::after {
content: '⌵';
/* Down arrow */
font-size: 18px;
transition: transform 0.5s ease-in-out;
}
.accordion-item.active .accordion-header::after {
transform: rotate(-180deg);
/* Up arrow when active */
}
@keyframes fadeSlideDown {
0% {
opacity: 0;
transform: translateY(-10px);
}
100% {
opacity: 1;
transform: translateY(0);
}
}
.accordion-item.active .accordion-content {
animation: fadeSlideDown 0.5s ease-in-out;
}
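These accordion rules only react to an `active` class on `.accordion-item` and, per the comment above, a `hovered` class on `.accordion-title` — both of which a script is expected to toggle. A minimal sketch of that script, assuming the markup matches the selectors in this stylesheet (the helper name and wiring are illustrative, not the site's actual code):

```javascript
// Pure helper: compute a class list after toggling one class.
// Kept separate from the DOM wiring so it can be tested anywhere.
function toggleClass(classes, name) {
  return classes.includes(name)
    ? classes.filter((c) => c !== name)
    : [...classes, name];
}

// Browser-only wiring, guarded so the file also loads outside a browser.
if (typeof document !== 'undefined') {
  // Open/close an item when its header is clicked; the CSS above animates
  // max-height, padding, and the arrow rotation off the "active" class.
  document.querySelectorAll('.accordion-header').forEach((header) => {
    header.addEventListener('click', () => {
      header.closest('.accordion-item')?.classList.toggle('active');
    });
  });
  // Add/remove the "hovered" class that drives the title's lift effect.
  document.querySelectorAll('.accordion-title').forEach((title) => {
    title.addEventListener('mouseenter', () => title.classList.add('hovered'));
    title.addEventListener('mouseleave', () => title.classList.remove('hovered'));
  });
}
```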
.dw-resources-s-wrapper {
display: none;
}
.wp-block-themeisle-blocks-accordion.has-light-title-bg>.wp-block-themeisle-blocks-accordion-item>.wp-block-themeisle-blocks-accordion-item__title {
background-color: #EDEAE6 !important;
border: 1px solid #EDEAE6 !important;
}
#questions-and-answers {
margin-top: 30px;
}
.prefix-section {
padding-top: 30px;
padding-bottom: 30px;
padding-left: 40px;
padding-right: 40px;
margin-top: 25px;
margin-bottom: 25px;
min-height: auto;
--background: #000000;
border-radius: 15px;
}
.statistics-section {
padding-top: 20px;
padding-bottom: 20px;
padding-left: 20px;
padding-right: 20px;
min-height: auto;
--background: #ffe6e6;
border-radius: 15px;
height: 100%;
}
.statistics-text {
line-height: 1.2em;
}
.prominent-sessions-section {
padding-top: 30px;
padding-bottom: 30px;
padding-left: 40px;
padding-right: 40px;
margin-top: 25px;
margin-bottom: 25px;
min-height: auto;
--background: #c9e9f6;
border-radius: 15px;
}
.prominent-sessions-link {
text-decoration: none;
color: #477C94;
font-size: 20px;
}
h3.fastest-speakers-text {
line-height: 1.2em;
margin-bottom: 15px !important;
}
.prefixes-text {
line-height: 1.2em;
}
.fastest-speakers-column {
align-self: flex-start;
}
.accordion-title h6 {
font-size: 36px;
font-family: 'Montserrat', sans-serif;
color: #fff;
margin: 0;
}
.accordion-title {
padding: 30px 40px;
}
Thematic summaries and recommendations
Digital cooperation and internet governance: The path forward
The IGF about the IGF
The future of the IGF, particularly in light of the upcoming WSIS+20 review, was a prominent theme in discussions. The IGF itself was created as a multistakeholder platform for internet governance, and the value of the multistakeholder approaches to governance has been reiterated over the years. The importance of multistakeholder collaboration in internet governance was emphasised this year as well, with a focus on issues such as stakeholder inputs, facilitating dialogue, and youth integration. Discussions also explored the potential contribution of national, regional, and youth IGF initiatives (NRIs) in shaping the future of the IGF post-2025. Furthermore, several speakers called for strategic thinking about the next 20 years of IGF.
A number of suggestions to improve the IGF were put forward during the discussions, such as strengthening the IGF’s ability to communicate its messages to relevant policymaking spaces, and better organising and using the wealth of information from past IGFs. Alongside calls for developing mechanisms for year-round engagement and earlier consultations, there were also calls for making the IGF a permanent institution within the UN system.
Many also emphasised the need to improve the IGF’s financial sustainability. There were also suggestions that the forum could (and should) serve as a vehicle for facilitating the implementation of the Global Digital Compact (GDC) and contribute to the WSIS+20 review process.
The implementation of GDC and WSIS Action Lines
The Global Digital Compact (GDC), adopted in September 2024, emerged as a central theme across multiple sessions. This landmark agreement was hailed as a comprehensive framework for addressing the multifaceted challenges of the digital age, from bridging digital divides to fostering safe and inclusive digital spaces. The importance of stakeholder partnerships and collaboration when it comes to transposing GDC commitments and calls into real action, the need to allocate sufficient resources to follow-up activities, the complementarity between the GDC and existing frameworks like WSIS and the need to ensure alignment between them were among the issues raised.
The interplay between GDC and WSIS processes came up in several discussions. There were reflections on how the GDC builds on the WSIS legacy and the need for meaningful synergies between the GDC and WSIS was highlighted, with calls for the IGF to serve as a flexible and ongoing mechanism for stakeholder engagement in addressing critical digital issues. There were also suggestions to explore the possibility of integrating GDC objectives into the existing WSIS framework, and to integrate GDC follow-up in the WSIS follow-up and review process.
From frameworks to action
A strong message coming out from several discussions was the need to translate global digital governance frameworks, like the WSIS outcome documents and the GDC, into actionable policies at the national and local levels. The adoption of such documents at the UN level – while an achievement in itself – needs to be followed up by concrete measures and actions if we are to achieve the vision for a ‘people-centred, inclusive and development-oriented information society’ (agreed at WSIS) and an ‘inclusive, open, sustainable, fair, safe and secure digital future for all’ (outlined in the GDC).
The need to better understand the local and regional digital realities and challenges and take them into account in global digital governance and cooperation processes was highlighted several times. There were also calls for promoting cross-regional collaboration and alignment in addressing digital challenges, strengthening regional coordination and representation in global debates, and addressing capacity constraints in developing countries.
Recommendations by DiploAI based on the discussions
- Use the WSIS Forum 2025 as a catalyst to bring together various processes and visions.
- Focus each process (WSIS Forum, IGF, etc.) on its unique characteristics to avoid duplication.
- Update WSIS Action Lines to incorporate new emerging issues raised by the GDC.
- Develop concrete targets for GDC implementation through the IGF.
- Ensure IGF discussions and outputs reach relevant government stakeholders.
- Improve cataloguing and accessibility of IGF archives and data.
- Discuss potential improvements to the IGF mandate and structure at the next IGF in Norway.
AI governance: Balancing innovation and responsibility
Defining the scope of AI governance
The discussions revealed a growing consensus on the need for comprehensive AI governance frameworks. Yet, participants grappled with defining the boundaries and scope of these frameworks.
The EU AI Act emerged as a potential North Star for global AI governance, with its risk-based approach and emphasis on fundamental rights protection. However, speakers cautioned against direct replication, recognising the need for context-specific adaptations. As one participant noted, ‘The EU AI Act may not be suitable for direct replication in other regions due to differing regulatory contexts.’
Interoperability emerged as a key theme, with discussions focusing on how to harmonise various governance approaches. Discussions explored this concept beyond technical aspects, encompassing legal, semantic, and policy dimensions. One speaker likened the challenge to ‘creating a universal language for AI governance that can be understood and implemented across different cultural and regulatory landscapes.’
The conversations highlighted the challenge of developing governance models that can keep pace with rapid technological advancements. This was likened to trying to build a runway while the plane is already in flight, emphasising the need for agile and adaptive regulatory approaches.
A significant focus was placed on developing risk-based approaches to AI governance. The importance of flexible, context-based risk assessment was emphasised, highlighting ongoing efforts to create risk-based governance frameworks. The discussions revealed the complexity of categorising AI risks across different jurisdictions and use cases. Additionally, the importance of cultural considerations in risk perception was highlighted, as different societies may have varying tolerances for risk.
Balancing innovation and regulation
A central tension in the discussions was how to foster innovation while ensuring responsible AI development. This challenge was aptly described as ‘trying to nurture a garden of innovation while building protective fences around it.’
The concept of regulatory sandboxes emerged as a potential solution, allowing for controlled experimentation with AI technologies. These sandboxes were described as ‘safe playgrounds for innovation,’ allowing developers to test new ideas while regulators observe and learn.
Transparency and explainability emerged as critical components of responsible AI development. The two are quite distinct: Transparency relates to how AI systems are designed and deployed, while explainability concerns justifying AI decisions. This nuanced understanding sets the stage for more effective governance strategies.
Open-source AI models were also discussed as a means to democratise access and foster innovation, particularly in the Global South. The session on open-source large language models likened this approach to ‘planting seeds of AI knowledge that can grow and adapt to local ecosystems.’
Ethical considerations and human rights
The ethical implications of AI development and deployment were a recurring theme across sessions. A human rights-based approach to AI was strongly advocated, emphasising the need for accountability, remedy, and reparation when violations occur.
The Council of Europe’s AI treaty was highlighted as a significant step towards embedding human rights considerations in AI governance. As one speaker poignantly stated, ‘We all see the upsides of AI. We all see the benefits for development, for economic opportunity. But with every new phase of digital technology, we’ve seen human rights, the rights of women and girls, the rights of freedom of speech, democracy jeopardised.’
Discussions also touched on the philosophical implications of AI, with some participants proposing novel concepts like the ‘right to be humanly imperfect’ in contrast to AI’s pursuit of optimisation. This idea, introduced in the session on intelligent machines and society, challenges us to consider what it means to be human in an AI-driven world.
Global South perspectives and inclusive development
The growing influence of AI on global development demands inclusive governance that prioritises voices from the Global South. Participants emphasised the risk of AI exacerbating digital divides and stressed the need to address disparities in computing power, data access, and algorithmic development. One speaker poignantly compared AI to water—an essential resource that can nourish and help societies flourish but, if mismanaged, can lead to floods or droughts. This metaphor captured the dual potential of AI: its ability to drive sustainable development while also posing risks if left unchecked.
Initiatives like the Hamburg Declaration on AI and SDGs aim to bridge these gaps by focusing on sustainability, equitable infrastructure, and governance frameworks that support developing countries, with plans for presentation at key global forums in 2025, including IGF.
The cultural context in AI governance was another critical theme, challenging the notion of universal ethics. A session on fairness in Asia highlighted how concepts of justice vary across regions, reinforcing the need for localised approaches. Together, these discussions point to a path forward: investing in local AI ecosystems, ensuring equitable participation, and fostering global collaboration to align AI with sustainable and inclusive development goals.
AI in specific domains
Several sessions focused on AI applications in specific domains, each presenting unique challenges and opportunities:
Education: Discussions explored how AI can foster cross-cultural understanding while raising concerns about overreliance on AI tools.
Warfare: The discussion on AI in warfare grappled with the ethical implications of autonomous weapons systems and the need for human control in military applications.
Government services: The discussions explored how AI can enhance trust and improve public services, while addressing concerns about data privacy and algorithmic bias.
Recommendations by DiploAI based on the discussions
- Develop flexible, adaptive AI governance frameworks that can evolve with technological advancements while considering local contexts and cultural differences.
- Prioritise human rights and ethical considerations in AI development and deployment, ensuring that AI systems respect fundamental rights and values.
- Foster international collaboration and knowledge sharing, particularly between the Global North and South, to address the AI divide and promote inclusive AI development.
- Implement regulatory sandboxes and other innovative approaches to balance innovation with responsible AI development.
- Invest in AI literacy and capacity-building programs to empower diverse stakeholders to participate in AI governance discussions and implementation.
- Develop interoperable governance tools and reporting frameworks to facilitate global coordination while respecting regional differences.
- Prioritise transparency and explainability in AI systems to build public trust and enable effective oversight.
Infrastructure: The foundation for the connected future
In an increasingly digital world, the infrastructure that underpins our online interactions is more crucial than ever. Discussions on digital infrastructure covered a wide range of topics, from the cutting-edge development of interplanetary networks to the grassroots efforts of local network operator groups.
The concept of digital infrastructure is expanding beyond traditional notions of cables and servers. This year’s discussions revealed a growing recognition of Digital Public Infrastructure (DPI) as a critical component of national development strategies. However, the definition of DPI varies depending on the viewer’s perspective.
Some experts described DPI as the ‘minimum necessary infrastructure to protect free internet, market and democracy’, while others defined it as ‘digital systems built on open standards that are interoperable and secure to provide services’. This diversity of definitions highlights the need for a more precise and value-driven conceptualisation of DPI to guide policy and implementation efforts.
The potential benefits of DPI were widely recognised, with examples such as Brazil’s PIX payments system demonstrating how DPI can break monopolies and increase competition. However, concerns were raised about the risks of centralisation and the potential for DPI to be used for control rather than public benefit, depending on the context.
From local to interplanetary
While discussions of advanced technologies like interplanetary networks captured imaginations, the importance of addressing basic connectivity issues remained a central theme. Network Operator Groups (NOGs) emerged as unsung heroes in this effort, playing a crucial role in maintaining and developing internet infrastructure at local and regional levels.
These volunteer-driven communities, where competing companies collaborate to solve technical problems, are essential for ensuring internet stability and promoting capacity building. However, NOGs face challenges in sustainability, community engagement, and adapting to evolving needs.
Interestingly, technologies developed for space communication, such as Delay Tolerant Networking (DTN), are finding unexpected applications in addressing terrestrial connectivity challenges. This cross-pollination of ideas between space and terrestrial applications demonstrates the unexpected ways in which innovation in one area can benefit another.
Governance and standards
As digital infrastructure becomes increasingly critical, the need for robust governance frameworks and universal standards has come to the forefront. Discussions emphasised the importance of multi-stakeholder collaboration in developing these frameworks, whether for terrestrial or interplanetary networks.
For terrestrial infrastructure, there was a strong call for universal standards that are flexible enough to be adapted to different contexts. As was emphasised, universal standards are ‘not negotiable’ but implementation may require customisation. This approach recognises the need for global coherence while allowing for local adaptation, much like how a river follows a common course while adapting to the unique contours of its landscape.
As for the interplanetary networks, experts stressed the importance of adopting governance models and technical standards similar to those used in the terrestrial internet. This approach aims to ensure interoperability and open participation as space activities become increasingly commercialised.
A recurring theme across discussions was the need to bridge the gap between technical and policy communities. As one speaker described, there is often a ‘0 degrees to 180 degrees’ gap between the technical community and the governance community. Addressing this disconnect is crucial for developing effective policies that support robust and inclusive digital infrastructure.
Recommendations by DiploAI based on the discussions
- Develop a clear, value-driven definition of Digital Public Infrastructure to guide policy and implementation efforts.
Cybersecurity: Strengthening digital defences
Fortifying the digital citadel: Critical infrastructure protection
As our societies become increasingly digitalised, protecting critical infrastructure has become paramount. Like a digital citadel, our interconnected systems require robust defences against an ever-growing array of cyber threats.
Cyber norms and international cooperation play an important role in critical infrastructure protection, but challenges remain in implementation. The idea of building universal standards for digital infrastructure resilience was also explored, with consensus on the need for flexibility to accommodate diverse contexts.
The discussions also highlighted the vulnerability of transnational infrastructure, such as subsea cables and satellite systems, to cyberattacks. International cooperation and information sharing are necessary to protect these vital assets.
Several issues remained unresolved, including how to develop standards that remain current given rapid technological changes, how to address economic and technological disparities between countries in implementing standards, and how to establish common definitions and language around digital infrastructure resilience.
Developing economies face unique challenges in this area. For these countries, the importance of capacity building, international cooperation, and strategic resource allocation cannot be overstated. Enforcing existing laws and building capacity, rather than hastily creating new policies, was recommended.
AI: The double-edged sword
Artificial intelligence emerged as both a powerful tool for cybersecurity and a potential threat. AI was likened to a Swiss Army knife in the cybersecurity toolkit – versatile and powerful, but potentially dangerous if misused.
AI can enhance threat detection and response capabilities, but AI-generated threats like deepfakes and adversarial attacks are a matter of concern. An interesting analogy compared the development of AI to the Wright brothers’ plane, emphasising the need for continuous improvement and refinement.
In resource-constrained environments, AI can bring immense opportunities for improving cybersecurity and critical infrastructure security. AI enhances threat detection, automates data analysis, and addresses language barriers, making solutions more accessible. A unique advantage that AI has is in overcoming language barriers, as AI could make cybersecurity solutions available in multiple languages, thereby increasing accessibility for developing countries. The idea of AI as a ‘digital guardian angel’ for critical infrastructure in developing nations was therefore proposed. However, AI systems also face risks such as adversarial attacks, data poisoning, and privacy vulnerabilities. Additionally, AI-driven security must balance technical advancements with ethical considerations.
Safeguarding the digital playground: Online safety for children and youth
We tend to overprotect children in the offline world, and we underprotect them in the virtual world, it was underlined. Speakers advocated for a multistakeholder approach to child safety online involving governments, tech companies, educators, parents, and children themselves. The inclusion of children’s perspectives in the development of safety features and policies was identified as a vital component. Education emerged as a central theme, with experts calling for media literacy programs in schools.
The concept of ‘digital advocates’ – students who could support their peers in navigating online safety – was also introduced. This peer-to-peer approach was likened to a digital neighbourhood watch, empowering young people to look out for each other in the online world.
However, there should be a balance between the protection of children and their privacy online. Another challenge is addressing online safety for children from different socioeconomic backgrounds, who have varying levels of vulnerability to online risks. Some of the solutions suggested include implementing safety by design – embedding safety measures into products from the outset, updating laws to address online violence, and strengthening social services for at-risk children.
An interesting metaphor compared online safety education to teaching children to swim – a vital life skill in the digital age.
The human element: Cybersecurity capacity building
Across various sessions, the importance of human capacity in cybersecurity was consistently emphasised, as was the critical need for tailored capacity-building approaches and international cooperation. Additionally, women are underrepresented in the cybersecurity field; this has to be addressed through a holistic approach combining training, mentorship, role modelling, community building, and real-world exposure.
Recommendations by DiploAI based on the discussions
- Pursue regional cooperation and agreements on critical infrastructure protection as a stepping stone to broader international cooperation.
Human rights in the digital age
Bridging the gender divide
The digital divide remains a significant barrier to the realisation of human rights in the online world. Discussions at the IGF emphasised the persistent gender gap in technology-related fields, with women representing only 19% of entry-level positions and 10% of executive-level positions in tech. This disparity is even more pronounced in developing countries, where access to technology and digital skills training is limited.
To address this issue, speakers proposed various strategies, including targeted education programs, flexible work policies, and initiatives to promote women’s entrepreneurship in the digital sector.
The discussions also highlighted the importance of inclusive design in emerging technologies. The digital architecture must be built with ramps and elevators, not just stairs, to ensure accessibility for all users.
Children’s rights in the digital age
The protection of children’s rights in the digital world emerged as a critical theme across multiple sessions. Experts grappled with the challenge of balancing children’s right to privacy with the need for online safety measures.
The discussion on children in the metaverse revealed that 51% of metaverse users are under the age of 13, highlighting the urgency of addressing children’s rights in virtual environments. Speakers emphasised the need for age-appropriate design, child-friendly reporting mechanisms, and effective remedies in digital spaces.
Discussions also explored the impact of AI on children, highlighting risks such as privacy violations, data exploitation, and exposure to harmful content, and the lack of systems designed with children’s best interests in mind. Participants called for global standards, youth-inclusive policymaking, and increased awareness to ensure AI systems prioritise children’s rights and well-being.
A novel idea proposed was the development of an ‘AI code for children’ by the Five Rights Foundation, aimed at providing practical guidance on designing AI systems with children’s rights in mind. This code could serve as a ‘digital compass’ guiding developers and policymakers through the complex terrain of children’s rights in AI.
Encryption and privacy: The digital fortress
The debate over encryption and privacy rights versus public safety and law enforcement needs continues to be contentious. Experts discussed the challenges posed by end-to-end encryption in investigating crimes against children, while also acknowledging the crucial role of encryption in protecting user privacy and security.
The concept of ‘hotness’ of keys was introduced, proposing systems that would make key theft detectable and thus deter abuse. This innovative approach could be likened to a ‘digital alarm system’ that alerts users when their privacy has been compromised.
Recommendations by DiploAI based on the discussions
– Develop comprehensive, multi-stakeholder approaches to address the digital gender divide, including targeted education, supportive ecosystems, and inclusive policies.
Sociocultural: Navigating the misinformation maze
Misinformation has become an ever-changing maze, where truth and lies twist and tangle like vines in a dense forest. Speakers at the IGF highlighted how social media platforms have become the primary conduits for the rapid spread of misinformation, acting as both facilitators of information exchange and amplifiers of falsehoods. It was noted that these platforms are now the main source of misinformation spread, creating a digital ecosystem where fact and fiction coexist in an uneasy balance.
The impact of this digital misinformation ecosystem extends far beyond the virtual world. A speaker’s poignant observation that ‘misinformation kills’ underscores the real-world consequences of online falsehoods, particularly in conflict zones. This stark reality serves as a reminder that the battle against misinformation is not just about preserving truth, but about safeguarding lives and societies.
Technological solutions and challenges
In the arms race between misinformation creators and fact-checkers, technology plays a dual role – both as a weapon and a shield. On one hand, AI-driven fact-checking tools offer the potential to rapidly identify and flag misleading content. On the other, AI-generated deepfakes and sophisticated misinformation campaigns pose new threats to information integrity.
The rise of generative AI in electoral contexts has added another layer of complexity to this issue. While the catastrophic impact once feared has not fully materialised, the presence of AI-generated content in elections has raised new concerns about the manipulation of public opinion and the integrity of democratic processes.
Multistakeholder approaches and collaborative efforts
A recurring theme throughout the discussions was the need for multi-stakeholder collaboration to effectively combat misinformation. This approach was likened to building a robust immune system for the digital information ecosystem, where various actors – from tech companies and governments to civil society organisations and academia – work in concert to identify and neutralise misinformation threats.
Election coalitions emerged as a promising model for such collaboration. These coalitions bring together diverse stakeholders to monitor and address misinformation during critical electoral periods, acting as a collective defence mechanism against the spread of false narratives.
The role of digital literacy and user empowerment
Amidst the technological solutions and regulatory debates, the importance of digital literacy emerged as a crucial component in the fight against misinformation. Speakers emphasised the need to empower users with the skills to critically evaluate online information, likening this process to teaching digital self-defence in an age of information warfare.
Pre-bunking strategies, which aim to inoculate users against misinformation before they encounter it, were highlighted as a proactive approach to building societal resilience against false narratives. This strategy can be compared to a mental vaccine, preparing individuals to recognise and resist misinformation attempts.
Balancing regulation and freedom of expression
The discussion on regulatory approaches to combating misinformation highlighted the delicate balancing act between safeguarding truth and protecting free speech. Policymakers and platforms face a challenge akin to walking a tightrope, striving to address harmful content without undermining fundamental rights.
As one participant aptly noted during the session Navigating the Misinformation Maze: ‘If a government or regulatory authority decides to step in and determine what constitutes misinformation, then who moderates the regulator?’ This highlights the need for transparent, accountable, and inclusive approaches to content moderation and regulation.
Recommendations by DiploAI based on the discussions
– Develop and implement comprehensive digital literacy programs to empower users to critically evaluate online information.
– Foster multi-stakeholder collaborations, such as election coalitions, to collectively address misinformation challenges.
– Invest in AI-driven fact-checking tools while simultaneously developing ethical guidelines for AI use in content creation and moderation.
– Encourage platforms to increase transparency in their content moderation policies and algorithmic decision-making processes.
– Explore pre-bunking strategies as a proactive measure against misinformation spread.
– Balance regulatory efforts with robust protections for freedom of expression and access to information.
– Support independent journalism and fact-checking organisations to maintain a diverse and reliable information ecosystem.
Legal: Data governance and sovereignty dilemma
As the digital world continues to evolve at lightning speed, data governance becomes a key issue. A recurring theme across many sessions was the tension between data localisation and cross-border data flows – a tug-of-war between protectionism and globalisation in the digital space.
The critical importance of cross-border data flows for the global economy, innovation, and development was emphasised in discussions on data governance. However, speakers also acknowledged the challenges posed by increasing mistrust and restrictions. The discussions highlighted the need for a nuanced approach that goes beyond binary perspectives on data sharing.
Similarly, the complexities of implementing data localisation policies were explored. Speakers emphasised the importance of data classification to determine what information should be localised and what can be stored internationally. This approach recognises that digital sovereignty extends beyond data to include operations, infrastructure, and talent.
In the African context, a similar struggle was revealed. Speakers stressed the need to balance data localisation with enabling cross-border data flows, particularly in the context of implementing the African Continental Free Trade Area (AfCFTA). This balancing act was again likened to walking a tightrope, where too much restriction could hinder economic growth, while too little could compromise data security and sovereignty.
Harmonising diverse voices in data governance
Across various sessions, there was a strong consensus on the need for multi-stakeholder approaches to data governance.
To this end, speakers emphasised the importance of involving various stakeholders, including governments, industry, academia, and civil society, in shaping data governance for AI and other data-oriented technologies. This collaborative approach was seen as crucial for addressing the complex challenges posed by emerging technologies.
Similarly, the African data governance discussion highlighted the importance of a multi-stakeholder and multi-sectoral approach to address the challenges of data governance on the continent. This approach was seen as essential for building trust between governments and businesses regarding data sharing.
The regulatory sandbox: Nurturing innovation while ensuring protection
An approach that emerged in several discussions was the concept of regulatory sandboxes. These can be likened to a controlled playground where new technologies can be tested and refined before being released into the wild of the open market.
Speakers addressing the concept discussed the value of regulatory sandboxes as a means of fostering innovation while ensuring compliance with regulations. This approach allows companies to test new products or services in a controlled environment, under the supervision of regulators, helping to identify potential risks and regulatory issues before full market deployment.
Bridging the gap in data governance capabilities
A recurring theme across sessions was the challenge posed by varying levels of digital readiness and infrastructure across countries and regions.
To this end, speakers highlighted significant infrastructure challenges facing many African countries, including unreliable electricity supply and limited local technical expertise. Similarly, the discussions emphasised the critical need for capacity building in data governance across the continent.
Recommendations by DiploAI based on the discussions
– Develop data classification frameworks to determine appropriate levels of localisation.
– Explore the use of privacy-enhancing technologies to enable secure cross-border data flows.
– Harmonise data protection laws across regions to facilitate cross-border data sharing while maintaining adequate protections.
– Develop targeted capacity-building programs for policymakers and regulators in developing countries.
– Invest in data infrastructure development, including regional data centres and reliable energy sources.
– Create mentorship and knowledge-sharing programs between countries with advanced data governance frameworks and those still developing them.
– Implement regulatory sandboxes for emerging technologies in data governance.
– Develop clear guidelines and processes for companies participating in regulatory sandboxes.
– Use insights from sandboxes to inform the development of flexible, adaptive regulatory frameworks.
Economic: Empowering the digital economy
Data flows and digital trade
Just as rivers and oceans have shaped global trade for centuries, data flows are now the lifeblood of the digital economy. However, increasing restrictions on these flows threaten to dam this vital resource. Discussions emphasised that data localisation undermines the open nature of the internet by hindering global connectivity, enabling surveillance, and creating barriers for both large corporations and small network operators. Speakers also noted that trade agreements increasingly lack protections for cross-border data flows, further endangering the free flow of information.
The negative impacts of restricting data flows include economic fragmentation, hindered access to information, and human rights concerns. A growing reliance on national security justifications for these restrictions, particularly in the context of geopolitical tensions, was also discussed.
To address these challenges, several solutions were proposed. These included standardising global data protection practices, conducting comprehensive research on the economic effects of data localisation, and involving diverse stakeholders, including small businesses, in policy discussions. Awareness-raising and collaboration with international organisations were highlighted as critical for ensuring that data flows remain open and sustainable for the future of the internet and digital trade.
Breaking internet monopolies through interoperability
The digital economy, while offering immense opportunities, has also led to the rise of tech giants with unprecedented market power. Big tech companies maintain their dominance through strategic acquisitions, leveraging legal power, and creating barriers to competition. Interoperability was presented as a potential solution, with historical examples like IBM’s decline in the PC market due to increased interoperability.
Youth voices in the digital economy
The digital economy’s rapid evolution demands fresh perspectives, and young people, as early adopters and innovators of digital technologies, are well placed to provide them. They are catalysts for innovation and social movements, bringing creativity and new thinking to the table.
However, the path to meaningful youth engagement is not without obstacles: the digital divide, digital literacy gaps, lack of youth representation in important forums, and varying definitions of ‘youth’ across cultures all limit youth participation in shaping the data economy and digital policies. Diverse youth voices need broader inclusion, and the needle must move beyond tokenism to meaningful engagement.
Recommendations by DiploAI based on the discussions
– Enable greater collaboration between the digital and trade communities to address misconceptions about data flows and sovereignty.
– Raise awareness with national decision-makers about the importance of cross-border data flows for the internet.
– Gather more evidence on the economic impacts of data localisation, especially in developing countries.
– Advocate collectively for better interoperability.
– Institutionalise youth participation in governance structures and include youth in multistakeholder forums and consultations.
Development: Connectivity as the foundation
At the heart of digital inclusion lies the fundamental issue of connectivity. Speakers emphasised that meaningful access goes beyond mere internet availability, encompassing factors like affordability, digital literacy, and relevant content. While 67% of the world’s population uses the internet, affordability remains a significant barrier in many countries, with entry-level broadband costs exceeding 20% of GNI per capita in some regions.
The discussions highlighted innovative approaches to expanding connectivity, particularly in underserved areas. From community networks to low Earth orbit (LEO) satellite technology, diverse solutions are being explored to bridge the digital divide. These technologies are like digital bridges, spanning geographical and socioeconomic gaps to connect the unconnected.
However, the journey to universal connectivity is not without its challenges. Issues such as the high costs of infrastructure development, regulatory hurdles, and the need for sustainable business models in rural and remote areas have been highlighted. The metaphor of ‘leapfrogging’ was frequently invoked, suggesting that developing countries could skip intermediate stages of technological development to adopt more advanced solutions directly.
Empowering the marginalised
A recurring theme across sessions was the need to focus on marginalised and underrepresented groups in digital inclusion efforts. Discussions highlighted persistent gender gaps in digital access and usage, with women facing unique barriers such as affordability, digital skills gaps, and social norms that limit their participation in the digital economy.
Efforts to address these disparities were likened to cultivating a diverse digital ecosystem, where each group’s unique needs and perspectives contribute to a richer online environment. Initiatives like Pakistan’s Digital Gender Inclusion Strategy and Lithuania’s ‘No One Left Behind’ program for elderly digital literacy were highlighted as examples of targeted interventions to promote inclusivity.
The power of data and AI
The role of data and AI in development was a hot topic, with speakers emphasising both the potential and pitfalls of these technologies. Citizen-generated data was highlighted as a powerful tool for inclusive development, enabling marginalised communities to participate in decision-making processes and fill critical data gaps.
However, concerns were raised about the ‘data divide’ and ‘algorithm divide’ between developed and developing countries.
Balancing innovation and its environmental impact
Sustainability and environmental challenges took centre stage in discussions, emphasising the dual role of digital technologies as both contributors to and solutions for environmental issues. The digital sector accounts for approximately 4% of global greenhouse gas emissions, with hardware production alone making up 80% of its footprint. Concerns about e-waste were also raised, as projections suggest it could reach 82 million tonnes by 2030 without intervention. Speakers called for comprehensive policies, circular economy practices, and sustainable IT procurement to address these pressing issues.
AI emerged as a key technology with the potential to accelerate progress on SDGs by up to 70%. From real-time policymaking and disaster response to climate modelling and sustainable resource management, AI can drive significant change. However, its growing energy demands, alongside those of data centres and algorithms, underscore the need for global frameworks to optimise resource use and mitigate its environmental impact.
The discussions also highlighted the role of emerging technologies like Geographic Information Systems (GIS) and big data in supporting sustainable agriculture and environmental monitoring. Collaborative efforts across stakeholders and sectors were deemed essential to align digital and green transformations, ensuring digitalisation supports sustainability while minimising its environmental footprint.
Recommendations by DiploAI based on the discussions – Invest in diverse connectivity solutions, including community networks and innovative technologies like LEO satellites, to reach underserved areas. – Implement targeted interventions to address specific barriers faced by marginalised groups, such as women, the elderly, and rural communities. – Promote digital literacy and skills development programs tailored to different demographics and contexts. – Foster multi-stakeholder collaboration and public-private partnerships to accelerate digital infrastructure development and adoption. – Develop comprehensive data governance frameworks that balance innovation with privacy and ethical considerations. – Leverage citizen-generated data and participatory approaches to ensure inclusive digital development. – Integrate digital technologies into various sectors, such as agriculture, education, and healthcare, to maximise development impact. |
Event Statistics
Total session reports: 236
Unique speakers: 1,319
Total speeches: 1,865
Total time: 1,016,626.87 seconds (11 days, 18 hours, 23 minutes, 47 seconds)
Total length: 2,189,267 words, or 3.73 ‘War and Peace’ books
Total arguments: 3,994
Agreed points: 554
Points of difference: 257
Thought-provoking comments: 1,340
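As a sanity check on the total-time figure above, the raw value must be in seconds, not minutes: 1,016,626.87 minutes would be roughly 706 days, while seconds matches the human-readable breakdown exactly. A short script (an illustrative sketch, not part of the original report) performs the conversion:

```python
def split_duration(total_seconds: float) -> tuple[int, int, int, int]:
    """Split a duration in seconds into (days, hours, minutes, seconds)."""
    total = round(total_seconds)          # round fractional seconds
    days, rem = divmod(total, 86_400)     # 86,400 seconds per day
    hours, rem = divmod(rem, 3_600)       # 3,600 seconds per hour
    minutes, seconds = divmod(rem, 60)
    return days, hours, minutes, seconds

print(split_duration(1_016_626.87))  # → (11, 18, 23, 47)
```

The output matches the report's "11 days, 18 hours, 23 minutes, 47 seconds", confirming the seconds interpretation.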
Prominent Sessions
Explore sessions that stand out as leaders in specific categories. Click on links to visit full session report pages.
Longest session
Session with most speakers: 26 speakers
Session with most words: 16,999 words
Fastest speakers
1. Gabriel Kaptchuk – 211.34 words/minute
2. Fiona Alexander – 206.69 words/minute
3. Matilda Moses-Mashauri – 206.18 words/minute
Most Used Prefixes and Descriptors
digital – 7,254 mentions during Internet Governance Forum 2024. Most mentioned in: High-Level Session 4: From Summit of the Future to WSIS+ 20 (144 mentions)
ai – 6,854 mentions during Internet Governance Forum 2024. Most mentioned in: Day 0 Event #183 What Mature Organizations Do Differently for AI Success (219 mentions)
internet – 5,623 mentions during Internet Governance Forum 2024. Most mentioned in: Main Session | Policy Network on Internet Fragmentation (120 mentions)
online – 3,135 mentions during Internet Governance Forum 2024. Most mentioned in: Open Forum #29 Multisectoral action and innovation for child safety (72 mentions)
cyber – 2,611 mentions during Internet Governance Forum 2024. Most mentioned in: Open Forum #46 Africa in CyberDiplomacy: Multistakeholder Engagement (143 mentions)
Questions & Answers
Why do humans tend to be obsessed with building AI that matches human intelligence and has human attributes?
Throughout various discussions, the question of why humans are obsessed with building AI that matches human intelligence and has human attributes was notably addressed in the session WS #78 Intelligent machines and society: An open-ended conversation. During this session, Sorina Teleanu questioned the human tendency to assign human attributes to AI, such as reasoning and understanding. She pointed out the obsession with developing AI that mimics human intelligence, asking if we could instead develop intelligent machines that act more like other forms of intelligence found in nature, such as octopuses or fungi.
The discussion highlighted that this obsession could stem from a natural human inclination to understand the world through a human-centric lens, leading to the aspiration of creating AI that replicates human thought processes and behaviors. Furthermore, the potential of AI to solve complex problems and enhance human capabilities makes the pursuit of human-like AI appealing for technological and economic advancements.
While the other sessions listed did not specifically address this question, the overall discourse at the forum emphasized the importance of considering diverse perspectives and ensuring that the development of artificial intelligence reflects a broad range of intelligences, not solely human-like attributes.
In a world driven by economic growth and efficiency, can humans compete with machines? Should they? Is there space to advocate for a right to be humanly imperfect?
The question of whether humans can and should compete with machines in a world driven by economic growth and efficiency, and whether there is space to advocate for a right to be humanly imperfect, was addressed in the session titled “Intelligent Machines and Society: An Open-Ended Conversation”. During this session, Jovan Kurbalija argued for the right to be imperfect and questioned the focus on efficiency. He suggested that humans should not compete with machines but embrace their imperfections, which historically have led to breakthroughs. Kurbalija also mentioned the concept of a ‘right to be humanly imperfect’ as potentially gaining traction in future discussions.
His argument emphasizes that while machines excel in efficiency and precision, human imperfections are valuable as they lead to creativity and innovation. This notion challenges the prevailing narrative of economic growth driven solely by technological advancement and efficiency. The discussion opens a dialogue on the need for a balanced coexistence where both human and machine strengths are acknowledged and leveraged.
What unintended consequences might arise from the rush to come up with new regulations for AI, and how can we proactively address them?
The rush to implement new regulations for AI can lead to several unintended consequences, as was highlighted in the Main Session | Policy Network on Artificial Intelligence. Brando Benifei emphasized the importance of transparency in regulatory processes and warned against stifling innovation, particularly for smaller market players.
In another session, WS #78 Intelligent Machines and Society: An Open-Ended Conversation, Sorina Teleanu expressed concerns about the hasty regulatory approach, highlighting the need for more nuanced discussions to fully grasp AI’s benefits and risks.
These discussions underscore the necessity for a balanced approach to AI regulation. Key strategies include ensuring regulatory transparency, fostering innovation through supportive policies, and engaging a wide range of stakeholders in the policy-making process. This approach can help mitigate potential negative impacts, such as hindering technological advancement or disproportionately affecting smaller entities.
Could the push for global AI governance standards inadvertently stifle innovation in developing countries?
The concern about whether the push for global AI governance standards might inadvertently stifle innovation in developing countries was discussed in a few sessions during the Internet Governance Forum 2024. Notably, in WS #82 A Global South perspective on AI governance, Jenny Domino highlighted the potential risks of the “race to regulate AI” and its geopolitical implications, indicating that some stakeholders, particularly those from developing countries, might be left out.
In the Main Session | Policy Network on Artificial Intelligence, Yves Iradukunda stressed the importance of considering local contexts when implementing AI governance frameworks. He advocated for capacity building and partnerships to mitigate potential inequities that might arise due to global governance standards.
Similarly, in WS #100 Integrating the Global South in Global AI Governance, Martin Roeske argued for a balance between regulation and innovation to ensure that innovation is not hindered by excessive regulatory measures.
Lastly, in WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World, Ananda Gautam pointed out the challenges faced by developing countries in creating their own AI legislation and suggested that developed nations could assist in capacity building to help address these challenges.
These discussions collectively underscore the need for a nuanced approach to AI governance that takes into account the unique challenges faced by developing countries, ensuring that global standards do not act as barriers to innovation.
What are the implications of treating algorithms as ‘black boxes’ beyond human comprehension? How might this opacity erode public trust in AI?
The issue of treating algorithms as ‘black boxes’ and its implications on public trust was discussed in several sessions during the Internet Governance Forum 2024. The discussions highlighted the challenges posed by the opacity of AI systems and the necessity for transparency and explainability to maintain trust and accountability.
In the Main Session | Policy Network on Artificial Intelligence, Jimena Viveros emphasized the “issues of opacity and the need for transparency in AI systems,” describing ‘black box’ algorithms as a substantial challenge in “assigning liability” and ensuring accountability.
In the session DC-CIV & DC-NN: From Internet Openness to AI Openness, Renata Mielli and Sandrine Elmi Hersi discussed the need for transparency in AI systems to maintain trust and accountability. Sandrine highlighted the “lack of transparency in AI interfaces” and the risk of “hallucinations,” which can lead to user distrust.
During WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy, Monica Lopez pointed out that treating algorithms as black boxes “limits transparency and can perpetuate societal disparities.” She stressed the importance of transparency and explainability to “improve trust and understanding.”
In the session WS #31 Cybersecurity in AI: balancing innovation and risks, Dr. Alison highlighted the “lack of transparency and replicability” in AI models as significant challenges to trust, noting the difficulty of understanding the inner workings of these models once they’re operational.
Another critical perspective was shared by Yasmin Afina in WS #184 AI in Warfare – Role of AI in upholding International Law, where she discussed the challenges of ‘black box’ algorithms in military AI and the difficulty in conducting investigations into International Humanitarian Law (IHL) violations when systems are not transparent.
Lastly, in High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative, Amal El Fallah Seghrouchni mentioned that AI systems are often seen as ‘black boxes’ with inputs and outputs, and this lack of transparency can lead to a lack of accountability and trust in AI systems.
These discussions underscore the urgent need for transparency and explainability in AI systems to preserve public trust and ensure accountability in the technology’s deployment.
How can we address the potential conflict between calls for data minimisation and the data-hungry nature of AI development?
The potential conflict between calls for data minimization and the data-hungry nature of AI development is a significant topic in the realm of digital governance. In the High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative, Amal El Fallah Seghrouchni highlighted the “importance of using well-calibrated data sets rather than huge amounts of data”. This suggests that specific and specialized data may be sufficient for particular sectors, aligning with the principles of data minimization.
Although the issue was not directly addressed in other sessions, the mention by Amal El Fallah Seghrouchni provides a valuable perspective on balancing the need for large datasets in AI with privacy concerns and data minimization efforts. The broader context of digital governance and AI development continues to explore how to responsibly harness data while respecting privacy rights and minimizing unnecessary data collection.
How can we address the potential conflict between calls for algorithmic transparency and the protection of trade secrets?
The issue of balancing algorithmic transparency with the protection of trade secrets was notably mentioned in two sessions during the Internet Governance Forum 2024.
In the Main Session | Policy Network on Artificial Intelligence, Anita Gurumurthy highlighted the challenge posed by trade secrets to algorithmic transparency. She pointed out that trade secret claims can limit the disclosure of information necessary for accountability, thus posing a barrier to achieving transparency.
Additionally, during the DC-CIV & DC-NN: From Internet Openness to AI Openness session, Vint Cerf addressed the proprietary nature of AI models. He emphasized the need for accountability and transparency, even in the face of trade secret protections.
These discussions underscore the tension between the need for transparency in artificial intelligence systems and the desire to protect proprietary information. The speakers suggest that while maintaining trade secrets is important, there must be mechanisms in place to ensure that algorithmic accountability is not compromised.
How do we reconcile the need for global AI governance with the vastly different cultural and ethical perspectives on AI across regions?
The challenge of aligning global AI governance with diverse cultural and ethical perspectives was a pertinent topic discussed in several sessions during the Internet Governance Forum 2024. The discussions highlighted the complexity of creating universal AI governance frameworks that respect regional differences while maintaining global standards.
In the session on A Global South perspective on AI governance, Jenny Domino emphasized that “human rights can provide a common language to address diverse perspectives,” suggesting that a human rights framework might serve as a bridge among different regions.
During the Main Session on Policy Network on Artificial Intelligence, Brando Benifei and others acknowledged “the challenge of aligning global governance with local cultural and ethical perspectives,” underscoring the importance of establishing common standards while respecting local nuances.
In the session on AI Regulation Unveiled, Lisa Vermeer noted “the cultural and regulatory diversities that might make adopting a universal standard challenging,” pointing to the intricacies of harmonizing AI regulations across different jurisdictions.
Dr. Alison, in the session on Cybersecurity in AI, discussed “the complexity of harmonizing AI regulations across different jurisdictions due to cultural and ethical differences,” while Melodina emphasized the importance of considering cultural nuances.
The session on From Internet Openness to AI Openness featured Yik Chan Chin, who discussed the importance of “respecting regional diversity in AI governance while establishing compatibility mechanisms” to reconcile differences.
Tejaswita Kharel, in the session on Fostering EthicsByDesign, emphasized that “ethics is subjective and varies across contexts,” highlighting the need for collaboration among stakeholders to bridge these differences.
In the session on Interoperability of AI Governance, Sam Daws mentioned the necessity of “balancing regional variations with global governance approaches,” advocating for cross-regional forums to reduce regional siloing of AI approaches and enhance interoperability.
Jovan Kurbalija, during the session on Intelligent Machines and Society, stressed the importance of “considering different philosophical and cultural perspectives,” including Arab, Asian, and African philosophies, in the global governance of AI.
Finally, in the session on Contextualising Fairness, speakers discussed the need for “contextualizing AI ethics to different cultural and national contexts,” emphasizing the importance of understanding regional needs and priorities.
The discussions across these sessions underscore the necessity of creating adaptable AI governance frameworks that respect cultural diversity while striving for global coherence.
What are the potential unintended consequences of the push for 'ethical AI' in perpetuating certain cultural or philosophical worldviews?
The topic of potential unintended consequences of the push for ‘ethical AI’ in perpetuating certain cultural or philosophical worldviews was discussed in a few sessions at the Internet Governance Forum 2024. Key points from these discussions highlight the complexity and diversity of perspectives on this issue.
During the session “From Internet Openness to AI Openness”, Anita Gurumurthy emphasized the need for a societal and collective rights approach to AI, which might address certain cultural biases in ‘ethical AI’ discussions. This point underscores the importance of considering diverse cultural contexts when developing ethical frameworks for AI, to avoid reinforcing existing biases and power imbalances.
In another session titled “Intelligent Machines and Society: An Open-Ended Conversation”, Jovan Kurbalija cautioned against the overemphasis on AI ethics, warning that it could become an ideological narrative. He suggested that ethical discussions should be more practical and grounded in real-world implications. This highlights a concern that the discourse on ethical AI might become detached from practical realities and serve more as a theoretical exercise rather than addressing tangible issues.
These discussions reflect a broader debate on how to implement ethical AI principles in a way that respects and incorporates a variety of cultural and philosophical perspectives, ensuring that AI technologies benefit all members of society equitably.
What concrete actions need to be taken to address the long-term societal implications of the increasing use of AI in judicial systems, immigration and border control, and government decision-making?
Despite the importance of the topic, the issue of addressing the long-term societal implications of the increasing use of AI in judicial systems, immigration and border control, and government decision-making was not directly discussed in any of the sessions at the Internet Governance Forum 2024. The extensive list of sessions, including WS #97 Interoperability of AI Governance: Scope and Mechanism, WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance, and WS #98 Towards a Global, Risk-Adaptive AI Governance Framework, did not touch upon this critical question.
While these sessions may have discussed related issues such as AI governance, ethical implications, and trust, there were no direct contributions or quotes addressing the specific societal concerns within judicial, immigration, and governmental decision-making contexts. The absence of discussion on this question highlights a potential gap in the forum’s agenda that warrants future exploration and dialogue to ensure comprehensive policy development and stakeholder engagement in these critical areas.
How can synthetic data be leveraged to improve machine learning models while addressing concerns around data privacy, bias, and representativeness? What governance frameworks are needed to regulate the use of synthetic data?
In the session WS #100 Integrating the Global South in Global AI Governance, the use of synthetic data was discussed as a potential solution to address issues of data representativity and bias. Jill highlighted that synthetic data can be generated to fill gaps in datasets, ensuring that machine learning models are trained on more comprehensive and representative data. This approach can help mitigate biases that arise from underrepresented groups in real-world data, thus enhancing the fairness and accuracy of AI models.
While the advantages of synthetic data in improving machine learning models are evident, the session also emphasized the need for robust governance frameworks to regulate its use. Such frameworks would need to ensure that synthetic data is used ethically and that privacy concerns are adequately addressed. This includes establishing guidelines on the creation, validation, and application of synthetic data to prevent misuse and ensure compliance with data protection laws.
Overall, synthetic data presents a promising avenue for advancing AI technologies while addressing critical concerns around privacy, bias, and representativeness. However, as underscored in the session, its successful integration into AI systems requires careful consideration of governance and ethical implications.
How can international law obligations be effectively translated into technical requirements for AI systems in military applications? And how can liability be determined when AI systems are involved in military actions that violate international law?
The topic of translating international law obligations into technical requirements for AI systems in military applications and determining liability when such systems violate international law was discussed in two sessions at the Internet Governance Forum 2024.
In the Main Session | Policy Network on Artificial Intelligence, Jimena Viveros emphasized the importance of accountability in the military use of AI and highlighted the significance of state responsibility in international law.
In another session, WS #184 AI in Warfare – Role of AI in upholding International Law, Yasmin Afina focused on the challenge of translating international law into technical requirements and ensuring compliance by design. Anoosha Shaigan provided insights into the complexities of determining liability in military AI usage, discussing various forms of liability that could arise in these contexts.
These discussions underscore the need for robust frameworks that integrate international legal principles with the technical design of AI systems in military contexts, ensuring accountability and compliance with international norms.
Are multilateral and multistakeholder approaches to internet and digital governance in opposition to each other? How to move away from this dichotomy and see the two as complementary, rather than competing?
The discussion surrounding whether multilateral and multistakeholder approaches to internet and digital governance are in opposition, and how to see them as complementary, was highlighted in several sessions at the Internet Governance Forum 2024. Participants emphasized the necessity of integrating both approaches to achieve effective governance.
In WS #209 Multistakeholder Best Practices: NM, GDC, WSIS & Beyond, the importance of seeing these approaches as complementary was noted. Despite challenges, it was argued that integration is key to achieving balanced and effective governance.
WS #278 Digital Solidarity & Rights-Based Capacity Building highlighted the value of multistakeholderism and its role in fostering digital solidarity, emphasizing how these approaches complement multilateral processes.
During WS #206 Evolving the IGF: cooperation is the only way, a speaker compared these approaches to two sides of the same coin, emphasizing that they are not in opposition but are complementary.
In the Main Session 4, Timea Suto articulated the necessity for multistakeholder and multilateral approaches to ‘hold hands and work together,’ rather than being seen as oppositional, to effectively address challenges.
Similarly, in DC-CIV & DC-NN, Alejandro Pisanty emphasized the need for multistakeholder governance in AI, drawing parallels to internet governance and suggesting that these approaches can be complementary.
During Open Forum #11, Christine Arida highlighted the need to move away from the false dichotomy between multilateralism and multistakeholderism, emphasizing how processes like the Arab IGF can support multilateral decisions.
In WS #194, Ayman El-Sherbiny noted that intergovernmentalism and multistakeholderism can coexist smoothly as two sides of a coin. He highlighted how multistakeholder dialogues shape ideas, which are then taken to governmental decision-making processes.
Additionally, Open Forum #12 emphasized the need for a multistakeholder approach to complement multilateral processes, particularly in the context of the WSIS+20 review and Global Digital Compact implementation.
Finally, in WS #97, Xiao Zhang and others discussed the necessity of both multilateral and multistakeholder approaches to AI governance, highlighting the importance of multilateral engagement while acknowledging the role of multistakeholder involvement.
Given the upcoming WSIS+20 review process, where a renewal of the IGF mandate will be up for discussion, what does the IGF we want look like? What lessons have we learned from 19 years of the forum, and how can we build on them moving forward?
The WSIS+20 review process presents a crucial opportunity to reassess and potentially renew the mandate of the Internet Governance Forum (IGF). Discussions across various sessions at the IGF 2024 focused on the future of the IGF, emphasizing its role as a central platform for multistakeholder dialogue and digital cooperation.
Day 0 Event #166 emphasized the importance of renewing the IGF mandate and improving its inclusivity and participation. Timea Suto and Melanie Kaplan highlighted the need for operational stability through better funding and staffing. Meanwhile, in High-Level Session 4, the IGF was recognized as a crucial platform for multistakeholder dialogue, with calls to strengthen its mandate while maintaining its role as a platform for discussion rather than creating new arenas.
In WS #278, panelists underscored the need for the IGF to remain a central platform for multistakeholder engagement, with Susan Mwape expressing hope for the renewal of the IGF mandate, highlighting its continued relevance.
WS #209 discussed the need for the IGF to be more inclusive and responsive to all stakeholders’ needs, with lessons learned highlighting the importance of diversity and balancing multilateralism with multistakeholderism.
The Main Session 4 was central to the theme of empowering the IGF’s role in Internet Governance, discussing adaptation to new challenges, enhancing outputs, increasing inclusivity, and the need for institutional improvements. The importance of its multistakeholder nature and integration within the WSIS framework was emphasized.
In WS #206, the focus was on evolving the IGF to maintain relevance, improve stakeholder engagement, and enhance funding mechanisms, with lessons highlighting the need for strategic focus, integration with regional and national IGFs, and addressing language barriers.
During the NRI Main Session, the importance of continuing and strengthening the IGF mandate was discussed, with calls for a longer mandate and emphasizing the role of NRIs in bolstering the multi-stakeholder process. Bertrand La Chapelle suggested engaging NRIs in a consultation during 2025 to discuss the next institutional steps for the IGF.
Finally, in Open Forum #66, Rudolf Gridl and others stressed the IGF as a cornerstone of multistakeholder internet governance, with discussions on the need for the IGF to evolve and suggestions for updating its mandate and institutional structure.
Overall, the discussions highlighted the IGF’s pivotal role in fostering inclusive and multistakeholder dialogue on internet governance, underscoring the need for its mandate renewal, structural evolution, and enhanced inclusivity and funding mechanisms to address emerging digital challenges.
What are the risks and challenges of having two parallel processes for the implementation, review, and follow-up of GDC and WSIS outcomes?
The discussions across various sessions at the Internet Governance Forum 2024 highlighted concerns about the risks and challenges associated with having two parallel processes for the implementation, review, and follow-up of the Global Digital Compact (GDC) and the World Summit on the Information Society (WSIS) outcomes. The primary issues raised include potential overlap, inefficiencies, and the added complexity these separate processes could introduce.
In the session on harmonizing strategies towards coordination, there was a significant focus on the need to avoid fragmentation and duplication between GDC and WSIS processes. Jason Pielemier and others emphasized the importance of the Internet Governance Forum (IGF) in ensuring effective coordination between these processes.
During the discussion on the 20 years of implementation of WSIS, Valeria Betancourt from the Association for Progressive Communications highlighted the potential for confusion and the need for integrated processes to avoid added complexity and burden due to having two separate processes.
Anriette Esterhuysen, in Open Forum #42 on Global Digital Cooperation, expressed concern about the potential overlap and inefficiencies of having separate processes for GDC and WSIS, urging for integration to facilitate country-level action.
Moreover, Filippo Pierozzi in Open Forum #12 emphasized the need for concerted efforts to ensure that the parallel processes of GDC and WSIS are complementary rather than divergent, with a focus on building from existing agreements.
These discussions underline the importance of coordination and integration to avoid redundancy and facilitate effective implementation and follow-up of digital governance initiatives.
How can we ensure the GDC doesn't become another well-intentioned but poorly implemented framework for digital cooperation?
The discussions on ensuring that the Global Digital Compact (GDC) does not become another well-intentioned but poorly implemented framework for digital cooperation were sparse across the sessions at the Internet Governance Forum 2024. However, a few sessions did address the topic, providing valuable insights and recommendations.
In WS #143 From WSIS to GDC-Harmonising strategies towards coordination, the emphasis was on the importance of coordination and effective implementation, with speakers like David Fairchild and Jorge Cancio discussing the need for strategic alignment with existing processes like WSIS.
During Day 0 Event #98 Discussing multistakeholder models in the Digital Society IWW, Amrita Choudhury criticized the GDC process for not being truly multistakeholder, highlighting a lack of transparency and inclusivity. She suggested that a more open and consultative approach is necessary for effective implementation.
In WS #213 Hold On, We’re Going South: beyond GDC, Vadim Glushenko expressed concerns about the implementation of the GDC being influenced by the interests of global digital platforms rather than achieving digital cooperation.
At the Open Forum #15 Digital cooperation: the road ahead, the discussion touched on the importance of partnerships and the need for alignment of interests and funding to ensure effective implementation of the GDC.
Finally, in Open Forum #12 Ensuring an Inclusive and Rights-Respecting Digital Future, the importance of stakeholder engagement and capacity building to ensure effective implementation of the GDC was highlighted. Emilar Gandhi and Sabhanaz Rashid Diya emphasized the need for inclusive processes and meaningful participation.
Overall, the discussions underscore the necessity for transparency, inclusivity, strategic coordination, and alignment of interests to prevent the GDC from being another ineffective framework. By incorporating these elements, the GDC can potentially achieve its goals for digital cooperation on a global scale.
Who needs to do what to ensure that the commitments and calls outlined in the Global Digital Compact have a meaningful and impactful reflection into local and regional realities? Are there lessons learnt from the implementation of WSIS action lines that could be put to good use?
Ensuring that the commitments and calls of the Global Digital Compact (GDC) are meaningfully reflected in local and regional realities requires a coordinated effort involving various stakeholders, including governments, businesses, civil society, and academia.
In the WS #143 From WSIS to GDC-Harmonising strategies towards coordination, the importance of local and regional implementation of GDC commitments was emphasized. Speakers noted the need for leveraging lessons from WSIS action lines, and Jason Pielemier mentioned the role of national and regional IGFs in facilitating these discussions.
During the Main Session 4: Looking back, moving forward – how to continue to empower the IGF’s role in Internet Governance, the roles of NRIs (National and Regional IGFs) and the need for better outreach and connection to local and regional realities were emphasized. Lessons from WSIS implementation include the importance of communication and collaboration among stakeholders.
In the Day 0 Event #98 Discussing multistakeholder models in the Digital Society IWW, Amrita Choudhury emphasized the importance of buy-in from all relevant stakeholders, including nation states, companies, and civil society, to ensure effective implementation of the GDC.
The Open Forum #12 Ensuring an Inclusive and Rights-Respecting Digital Future featured several speakers, including Sabhanaz Rashid Diya and Adeboye Adegoke, who emphasized the role of local and national networks in translating commitments into action and the importance of capacity building to enable effective participation.
Finally, the Open Forum #15 Digital cooperation: the road ahead highlighted the necessity for broad stakeholder engagement, including governments, businesses, civil society, and academia, to implement GDC objectives effectively.
How can we address the tension between the drive for digital sovereignty and the need for a globally interoperable internet?
The tension between digital sovereignty and the need for a globally interoperable internet was discussed in several sessions during the Internet Governance Forum 2024. The discussions highlighted the importance of balancing national interests with the benefits of global connectivity.
In WS #278 Digital Solidarity & Rights-Based Capacity Building, Jennifer Bachus emphasized the risks of protectionism inherent in digital sovereignty, while underscoring the importance of cross-border data flows for maintaining a globally interoperable internet. This perspective suggests that while countries might seek to protect their digital infrastructure and data, it is crucial to engage in cross-border cooperation to ensure the seamless flow of information.
In WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs, Dr. Jimson Olufuye and other speakers highlighted the importance of collaboration and partnership to ensure both digital sovereignty and interoperability. They argued that cooperation among nations and stakeholders is vital to achieving a balance where countries can maintain control over their digital space while not isolating themselves from the global internet community.
Additionally, Alejandro Pisanty, in DC-CIV & DC-NN: From Internet Openness to AI Openness, suggested that countries should aim for digital agency through collaboration rather than strict sovereignty. This approach advocates for a focus on digital empowerment and agency, allowing countries to benefit from global technological advancements while maintaining autonomy over their digital strategies.
Furthermore, in WS #213 Hold On, We’re Going South: beyond GDC, Vadim Glushenko addressed digital sovereignty by highlighting how Russian IT solutions offer non-politicized and customizable digital solutions that respect digital sovereignty while promoting global cooperation. This indicates a model where digital solutions are tailored to national needs while being open to international collaboration.
In conclusion, the discussions emphasized the need for a balanced approach that respects digital sovereignty while fostering an environment of global cooperation. This balance can be achieved through cross-border data flows, collaborative partnerships, and digital agency, ensuring that the internet remains a unified and interoperable global resource.
What could be the potential long-term impacts of the differing approaches to tech regulation adopted by China, EU, and USA?
The potential long-term impacts of the differing approaches to tech regulation adopted by China, the EU, and the USA were discussed in some sessions at the Internet Governance Forum 2024. In the session WS #213 Hold On, We’re Going South: beyond GDC, Milos Jovanovic highlighted the comparison between these regions, emphasizing that “different regions have varying stances on technology and data control, which could impact global cooperation.” This suggests that the divergent regulatory frameworks might lead to fragmentation in global tech standards and could pose challenges in achieving unified international cooperation on technology issues.
In another session, WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder, Ahmad Bhinder spoke about the “different approaches to AI regulation by various countries, such as the EU and the US,” pointing out the differences without delving into the specific long-term impacts. This discussion underscores the complexity and potential for conflict when harmonizing global AI regulation, as each region has its own priorities and regulatory philosophies.
The insights from these sessions suggest that while the differing approaches to tech regulation reflect the unique socio-political and economic contexts of each region, they also highlight potential obstacles to achieving cohesive global governance in technology. The need for dialogue and cooperation remains crucial to mitigate fragmentation and ensure that technological advancements benefit all regions equitably.
How do we balance the need for global coordination on tech governance with the importance of context-specific, localised approaches?
The balance between global coordination on technology governance and the necessity for context-specific, localized approaches was discussed in several sessions at the Internet Governance Forum 2024.
During WS #82: A Global South perspective on AI governance, Advocate Lufuno Tshikalange and others emphasized the importance of developing regulatory frameworks that are responsive to local challenges. They argued against the mere replication of frameworks from other regions like the EU, advocating instead for solutions that consider the unique challenges and needs of different regions.
In the Main Session on Policy Network on Artificial Intelligence, Yves Iradukunda and others highlighted the importance of partnerships and adapting global governance frameworks to local contexts. This approach ensures that governance mechanisms are both globally aligned and locally relevant.
Additionally, WS #143: From WSIS to GDC – Harmonising strategies towards coordination emphasized the need for global coordination while respecting local contexts. Anita Gurumurthy stressed the importance of international solidarity and understanding among nations to address digital governance challenges effectively.
Furthermore, WS #98: Towards a global, risk-adaptive AI governance framework discussed the necessity for adaptable frameworks that respect cultural differences. Speakers like Paloma Villa Mateos and Thomas Schneider supported a balanced approach that considers regional diversity while aligning with global standards.
In DC-CIV & DC-NN: From Internet Openness to AI Openness, Yik Chan Chin addressed the need for interoperability and respecting regional differences in AI governance, further emphasizing the importance of creating frameworks that are not only globally coordinated but also locally relevant.
Finally, in the Open Forum #61: WSIS to WSIS+20, Raquel Gatto highlighted the importance of both global and local discussions in the IGF process. She emphasized that local actions should inform global discussions and vice versa, ensuring that governance is both comprehensive and context-specific.
This ongoing dialogue underscores the importance of striking a balance between global coordination and local specificity, ensuring that governance frameworks are effective, inclusive, and practical across different contexts.
How can we create more effective mechanisms for civil society participation in tech policy-making that go beyond token consultations?
The discussions at the Internet Governance Forum 2024 highlighted the pressing need for more effective mechanisms for civil society participation in tech policy-making, emphasizing the importance of moving beyond token consultations. Several sessions delved into this topic, proposing actionable strategies and identifying challenges.
In Workshop #51, Jasmine Ko emphasized the necessity for inclusive and participatory processes involving diverse stakeholders, including civil society, to address digital governance issues. This highlights the need for frameworks that genuinely incorporate civil society voices in decision-making processes.
In Workshop #278, Susan Mwape highlighted the importance of genuine civil society engagement and the role of networks in promoting participation. This underscores the need for creating platforms where civil society can engage constructively and influence policy-making.
During Workshop #209, the need for meaningful engagement of civil society in policy-making processes was stressed, ensuring their voices are considered in decision-making. This reflects the growing recognition of the critical role civil society plays in shaping equitable and effective tech policies.
Day 0 Event #98 featured Avri Doria and Amrita Choudhury, who both emphasized the need for meaningful participation, transparency, and accountability in multistakeholder processes, beyond mere tokenism. This points to a need for reforming participation mechanisms to ensure they are impactful and representative.
In Workshop #266, the discussion emphasized the need for civil society organizations to unify in requesting unrestricted funding for core issues like research and upskilling. Nana Wachako, Stephanie Borg Psaila, and Rosemary Koech-Kimwatu noted the importance of regional collaborations and leveraging local knowledge for policy influence, highlighting the need for CSOs to be proactive and engage constructively in policy-making processes.
Tejaswita Kharel, in Workshop #45, criticized the current tokenistic involvement of civil society in tech policy discussions, suggesting the need for genuine engagement. This critique aligns with the broader call for systemic changes to ensure civil society’s voices are not only heard but acted upon.
In Open Forum #44, Gbenga Sesan emphasized the need for prioritization and participation from civil society to bring knowledge and different perspectives to standards development, noting the challenges faced due to resource and time constraints.
Amrita Choudhury, in Open Forum #66, highlighted the importance of genuine participation beyond tokenism, particularly for youth and civil society, suggesting capacity building and accountability as means to enhance participation.
The discussions collectively underscore a critical shift towards more inclusive, impactful, and representative participation of civil society in tech policy-making. Implementing these recommendations will require concerted efforts from all stakeholders to ensure that civil society’s contributions are not only acknowledged but also integrated into policy frameworks.
What are the implications of developed countries exporting their digital governance models to the Global South through development aid and capacity building programmes?
The topic of developed countries exporting their digital governance models to the Global South was touched upon in a few sessions at the Internet Governance Forum 2024. While it was not a central theme in most discussions, a couple of notable mentions provide insights into the implications of such practices.
In “A Global South perspective on AI governance”, the discussion centered around the EU AI Act and its potential extraterritorial impact. There was a concern about the “possible emulation of such frameworks by other countries, including those in the Global South,” highlighting the influence that developed countries’ regulations can have beyond their borders.
Similarly, in “Beyond Borders – NIS2’s Impact on Global South”, Emily Taylor discussed the “democratic deficit of states affected by legislation without having input in the process,” indicating concerns about the imposition of governance models that do not consider local contexts and the lack of agency for Global South nations in these legislative processes.
Lastly, during “Next Steps in Internet Governance: Models for the Future”, Amrita Choudhury emphasized “capacity building efforts and the need for developing countries to adapt governance models to their own contexts,” highlighting the importance of local adaptation rather than direct adoption.
Overall, the discussions underscore the potential risks associated with the export of digital governance models, including the risk of undermining local governance structures and the importance of tailoring such models to fit specific regional needs and contexts.
What is missing in our current approaches to addressing digital divides and why are we not there yet?
The discussions on addressing digital divides during the Internet Governance Forum 2024 highlighted several key areas where current approaches are lacking. A recurring theme was the need for a comprehensive strategy that addresses fundamental issues such as access, infrastructure, and digital literacy, particularly in the Global South.
- During the session on AI governance from a Global South perspective, Jenny Domino emphasized the importance of addressing underlying digital infrastructure and inequality issues before focusing solely on AI development.
- In the Policy Network on Artificial Intelligence session, Yves Iradukunda highlighted existing inequities in AI adoption and underscored the importance of capacity building and partnerships to bridge digital divides.
- The discussion in High-Level Session 4 acknowledged the substantial portion of the global population that remains unconnected, emphasizing the need for inclusive digital transformation.
- In the session on AI regulation, Ananda Gautam discussed the digital divide in developing nations, citing challenges such as internet access and the capacity to build AI models.
- The session titled “From WSIS to GDC” featured Gitanjali Sah, who highlighted ongoing challenges in digital inclusion and the necessity of leveraging past achievements to continue these efforts.
- In the NRI Main Session, the discussion highlighted the fact that 2.6 billion people remain unconnected, especially in Africa and Asia-Pacific, with challenges in access, infrastructure, and multilingualism.
- The Open Forum on Internet Governance Models highlighted issues such as access, infrastructure, and literacy as key challenges, with Keith Andere and Amrita Choudhury pointing out the need for addressing these particularly in the Global South.
- The discussion on ensuring an inclusive digital future emphasized the importance of local context and capacity development, with Sabhanaz Rashid Diya and Adeboye Adegoke discussing the importance of diverse stakeholders in addressing digital divides.
- In the session on empowering end-users, Olga Cavalli emphasized the importance of communication and information to enable meaningful participation, which is crucial for bridging the digital divide.
Overall, the discussions underscore the urgent need for comprehensive, inclusive strategies that tackle the root causes of digital divides, such as infrastructure deficits, access challenges, and lack of digital literacy. These strategies must be informed by local contexts and involve a diverse range of stakeholders to ensure a meaningful impact globally.
Given the slow progress in addressing digital divides despite years of effort, what fundamental assumptions about digital inclusion might we need to challenge or rethink to make meaningful progress in the coming decade?
Across various sessions at the Internet Governance Forum 2024, a recurring theme was the slow progress in addressing digital divides and the need to challenge fundamental assumptions about digital inclusion to make meaningful progress in the coming decade. Although direct discussions on this question were not prevalent, some insights can be derived from related topics.
During the session on Sharing and Exchanging Compute: New Digital Divisions, the need to localize infrastructure efforts to rural areas was emphasized. It was pointed out that current efforts often do not reach those most in need, such as women in rural Tanzania, challenging the assumption that digital inclusion efforts are universally effective.
These discussions suggest that to address digital divides effectively, there must be a shift from broad, generalized solutions to more tailored, context-specific approaches. Recognizing the unique challenges faced by different communities, especially in rural and underserved areas, is crucial for developing strategies that truly bridge the digital divide.
Overall, while the specific question on rethinking assumptions about digital inclusion was not directly addressed in many sessions, the insights from related discussions suggest a need to rethink the current strategies to be more inclusive and equitable.
What are the risks of over-emphasising quantitative metrics in measuring digital inclusion, potentially overlooking qualitative aspects of meaningful connectivity like empowerment, digital literacy, etc.?
The risks of over-emphasizing quantitative metrics in measuring digital inclusion, at the expense of qualitative aspects such as empowerment and digital literacy, were not directly discussed in any of the provided sessions from the Internet Governance Forum 2024. Therefore, no specific insights or quotes from the events can be referenced or hyperlinked in this summary.
This highlights a potential gap in the discussions at the forum, suggesting a need for future sessions to address how digital inclusion metrics can be balanced to better capture qualitative aspects. These aspects are crucial for understanding the full scope of digital inclusion, as they encompass the empowerment of individuals, the enhancement of digital literacy, and the fostering of meaningful connectivity. Without addressing these qualitative dimensions, there is a risk of misrepresenting the true state of digital inclusion and failing to identify areas that require more nuanced policy interventions.
Future dialogues at such forums could benefit from incorporating these considerations, ensuring that digital inclusion metrics are comprehensive and truly reflective of individuals’ experiences and capabilities within the digital landscape.
How do we balance the growing emphasis on AI divides and governance with the need to address broader issues of digital inequality and infrastructure gaps, ensuring that the focus on AI does not overshadow other critical areas of digital policy that require attention?
Within the discussions at the Internet Governance Forum 2024, the need to balance the focus on AI governance with broader digital inequality and infrastructure concerns was notably addressed in two sessions.
In WS #82: A Global South perspective on AI governance, Jenny Domino highlighted the risk of focusing on AI governance while neglecting broader digital infrastructure and inequality issues. She emphasized that while AI is a critical area of development, attention must also be directed towards ensuring equitable access to digital infrastructure, especially in the Global South. This perspective stresses the importance of integrating AI governance within a broader framework that addresses the digital divide.
Similarly, during the Main Session | Policy Network on Artificial Intelligence, Yves Iradukunda and others emphasized the need to integrate AI governance with broader issues of digital inclusion and infrastructure development. This session underscored the necessity for AI policies that not only regulate technology but also promote inclusivity and equitable resource distribution across different regions.
These discussions underscore the ongoing challenge of ensuring that the advancements in AI governance do not come at the expense of addressing fundamental issues of digital inequality and infrastructure gaps. The consensus is clear: a holistic approach that considers both AI and digital infrastructure is essential for sustainable and inclusive digital development.
What are the implications of the growing role of satellite internet providers in shaping global internet access?
The growing role of satellite internet providers, such as Starlink and OneWeb, is significantly reshaping the landscape of global internet access. These providers are increasingly vital for delivering connectivity to remote areas and during disaster relief situations. During the Day 0 Event #154: Last Mile Internet: Brazil’s G20 Path for Remote Communities, the use of Starlink and satellite internet in rural areas was highlighted, emphasizing their importance in regions lacking traditional infrastructure.
Furthermore, the NRI Main Session on the Evolving Role of NRIs in Multistakeholder Digital Governance included discussions on the role of Starlink in providing connectivity during disasters, underscoring its significance in resilience and disaster response.
In WS #19: Satellites, Data, Action: Transforming Tomorrow with Digital, there was an in-depth discussion on how satellite internet providers are transforming global internet access. The session highlighted the benefits of expanding connectivity into remote and underserved areas. However, it also raised concerns about regulatory challenges, market concentration, environmental impacts, and the potential for digital colonialism.
Overall, while satellite internet providers offer promising solutions for global connectivity issues, they also pose significant challenges that must be addressed through thoughtful regulation and international cooperation.
How can we ensure that efforts to promote digital financial inclusion don't expose vulnerable populations to new forms of exploitation?
The theme of promoting digital financial inclusion without exposing vulnerable populations to new forms of exploitation was touched upon in a few sessions during the Internet Governance Forum 2024.
In Open Forum #66: Next Steps in Internet Governance: Models for the Future, Amrita Choudhury highlighted the importance of ensuring that financial inclusion efforts are implemented carefully to prevent potential exploitation of vulnerable groups through digital services. She emphasized that these initiatives must be designed to protect and empower users rather than expose them to risks.
The discussion acknowledged the potential risks associated with digital financial inclusion and underscored the necessity of designing systems that prioritize user protection and empowerment. This includes implementing robust regulatory frameworks, enhancing digital literacy, and ensuring transparent data practices. By focusing on these measures, stakeholders can work towards safeguarding vulnerable populations from exploitation.
The general consensus across these discussions is that while digital financial inclusion can offer significant benefits, it is crucial to approach its implementation with a strong focus on ethics, security, and inclusivity to truly serve the needs of those it aims to benefit.
What are the risks of over-emphasising STEM education at the expense of humanities and social sciences in preparing for the digital future? And how can we address them?
The issue of over-emphasizing STEM education at the expense of humanities and social sciences in preparing for the digital future was not broadly addressed in the sessions of the Internet Governance Forum 2024. However, the importance of a balanced educational approach was briefly touched upon in the session titled DC-Inclusion & DC-PAL: Transformative digital inclusion: Building a gender-responsive and inclusive framework for the underserved. In this session, speakers Sarah Birungi Kaddu and Onica Makwakwa emphasized the importance of including digital literacy and humanities to empower women and marginalized communities.
While the specific risks of over-emphasizing STEM were not directly discussed, the session highlighted the need for diverse educational frameworks that integrate both technical and humanistic knowledge. This balance is essential for fostering critical thinking, ethical understanding, and inclusive growth in the digital age.
How can we better coordinate capacity building efforts among development agencies and partners to avoid duplication and maximise impact?
During the Internet Governance Forum 2024, the topic of coordinating capacity building efforts among development agencies and partners to avoid duplication and maximize impact was addressed in several sessions.
In the session WS #51 Internet & SDG’s: Aligning the IGF & ITU’s Innovation Agenda, Umut Pajaro Velasquez emphasized the importance of coordination and collaboration among various stakeholders, including governments, NGOs, and the private sector, to enhance digital skills and capacity building.
Yves Iradukunda, in the Main Session | Policy Network on Artificial Intelligence, highlighted the importance of partnerships and coordinated efforts to address capacity building and digital divides.
Orhan Osmani, in the Open Forum #53 Safeguarding Critical Infrastructure Beyond Borders, suggested the need for stakeholders to sit down and decide who will do what to avoid duplication of efforts.
During the Main Session | Best Practice Forum on Cybersecurity, Teresa Horejsova emphasized the importance of cooperation and integration between different portals and resources. Brendan Dowling highlighted the need for mechanisms of coordination and urged stakeholders to use existing processes effectively.
In WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs, Dr. Martin Koyabe emphasized the need for harmonization of policies and capacity building efforts across regions to maximize impact.
Keith Andere and Amrita Choudhury, in Open Forum #66 Next Steps in Internet Governance: Models for the Future, touched on capacity building, highlighting its importance and suggesting coordinated efforts to maximize impact, particularly in developing regions.
Mauricio Gibson, in WS #97 Interoperability of AI Governance: Scope and Mechanism, emphasized the importance of coordinating capacity building efforts to avoid duplication and maximize impact. He highlighted the UK’s AI for Development program as an example of investing in skills and compute in Africa and Asia.
Who can do what to achieve the desired interoperability of data systems and data governance arrangements, considering the fact that there are different interests and priorities among and between countries, companies, and other stakeholders?
The discussion on achieving interoperability of data systems and data governance across varied stakeholders, including countries and companies with differing priorities, was scarcely addressed in the sessions at the Internet Governance Forum 2024. However, a few sessions touched upon related aspects.
In the Main Session | Policy Network on Artificial Intelligence, Brando Benifei emphasized the importance of transparency and safeguards in achieving interoperability, highlighting the need for establishing common standards as a critical step.
Additionally, during the Day 0 Event #61, the necessity for interoperability of policy and regulatory approaches was underscored, suggesting that collaboration among stakeholders could be instrumental in achieving this objective.
Overall, while direct discussions on achieving interoperability were limited, the sessions underscored the importance of transparency, common standards, and collaborative policy efforts as potential pathways to harmonize data systems and governance across diverse interest groups.
How can we move away from the rather false dichotomy between data localisation and cross-border data flows, and focus on different approaches that combine localisation and free flows depending on the types of data?
The discussion on moving beyond the dichotomy between data localisation and cross-border data flows featured prominently in several sessions of the Internet Governance Forum 2024. A nuanced approach that considers the types of data and the context is essential for balancing data localisation with free data flows.
In the session “WS #102 Harmonising Approaches for Data Free Flow with Trust”, the panelists emphasized the importance of balancing data localisation with cross-border data flows. Maiko Meguro from Japan highlighted the necessity of understanding the real bottlenecks and challenges in data transfer and working on interoperability solutions. Bertrand de La Chapelle pointed out the need for a more innovative approach to data sharing, dispelling misconceptions surrounding data localisation.
Similarly, in the session “WS #278 Digital Solidarity & Rights-Based Capacity Building”, Jennifer Bachus discussed the risks associated with data localisation and underscored the importance of cross-border data flows for security and innovation.
The “WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs” session also touched upon the approach of data classification and collaboration to ensure both localisation and free data flows, as highlighted by Dr. Jimson Olufuye and others.
Furthermore, in the session “Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa”, the focus was on balancing data localisation within an African context with cross-border data flows to facilitate trade and maintain economic growth, as discussed by Thelma Quaye and Paul Baker.
These discussions collectively underscore the importance of a balanced and contextually informed approach to data governance, ensuring that different types of data are handled in a way that promotes both security and innovation.
What are the implications of framing digital sovereignty primarily in terms of data control, while paying less attention to arguments related to technological capacity building?
The concept of digital sovereignty, when primarily framed around data control, can have several implications. This approach often overlooks the critical aspect of technological capacity building, which is essential for sustainable development and innovation. During the discussions at various sessions of the Internet Governance Forum 2024, this topic was not extensively covered. However, some insights were shared in specific sessions.
In the session DC-CIV & DC-NN: From Internet Openness to AI Openness, Alejandro Pisanty suggested moving from digital sovereignty to digital agency, implying a broader approach beyond just data control.
Further, in the session WS #213 Hold On, We’re Going South: beyond GDC, Vadim Glushenko discussed the importance of digital sovereignty, emphasizing that Russian IT solutions respect digital sovereignty while allowing for technological capacity building.
The lack of focus on technological capacity building could result in a dependency on foreign technologies, limiting a country’s ability to innovate and adapt to new digital challenges. A more balanced approach that includes both data control and the development of technological capabilities is crucial for ensuring comprehensive digital sovereignty.
To what extent does the current approach to promoting certain digital public infrastructure initiatives risk creating new forms of digital colonialism?
The risk of digital colonialism through the promotion of digital public infrastructure initiatives was not extensively discussed at the Internet Governance Forum 2024. However, some aspects of the concern were touched upon in WS #19 Satellites, Data, Action: Transforming Tomorrow with Digital. During this session, it was highlighted that the dominance of developed countries or companies in satellite internet could lead to the imposition of their own data governance standards, exacerbating global inequalities. This could potentially create new forms of digital colonialism, where less developed regions become dependent on external infrastructure and policies.
Overall, while the specific topic of digital colonialism was not a primary focus in most sessions, the potential for inequalities emerging from digital infrastructure initiatives was acknowledged. The discussions imply a need for more inclusive and equitable governance frameworks that consider the interests and needs of all regions involved, particularly those in the Global South.
Could the push for digital identity systems exacerbate existing forms of discrimination and exclusion?
The question of whether the push for digital identity systems could exacerbate existing forms of discrimination and exclusion was not directly addressed in the sessions of the Internet Governance Forum 2024 as per the provided transcripts. Despite the wide range of topics covered, including Africa in CyberDiplomacy, Designing Digital Future for Cyber Peace & Global Prosperity, and Gender Prioritization through Responsible Digital Governance, this specific issue was not a focal point.
While the sessions tackled various aspects of digital governance, AI, cybersecurity, and inclusion, the potential for digital identity systems to contribute to discrimination and exclusion remains an area that requires further exploration and discussion in future forums. It is crucial for stakeholders to ensure that digital identity systems are designed and implemented in a way that is inclusive and equitable, preventing any form of marginalization or bias.
For more detailed insights, you may explore the session transcripts through the respective links provided above.
What are the risks of relying too heavily on public-private partnerships in developing digital infrastructure, particularly in terms of accountability and public interest?
The topic of public-private partnerships (PPPs) in developing digital infrastructure and the associated risks, particularly regarding accountability and public interest, was highlighted in a few sessions at the Internet Governance Forum 2024. Notably, Susan Mwape emphasized the critical role of PPPs while cautioning on the necessity of maintaining accountability. Similarly, in Dr. Martin Koyabe’s session, he acknowledged the significance of these partnerships but stressed the importance of ensuring accountability and protecting public interest.
In another discussion, Kendi Kosa from Mozambique brought up the need for PPPs and regulatory frameworks to tackle infrastructure challenges in underserved areas, pointing out both the opportunities and potential risks involved. These discussions suggest that while PPPs can be instrumental in digital infrastructure development, there are inherent risks related to accountability and public interest that must be addressed through effective governance and regulatory measures.
When tackling dis/misinformation and other types of harmful content, how do we move away from over-emphasising technical solutions, and focus more on addressing underlying societal issues fueling the spread of such content?
The discussions from the IGF 2024 highlighted the need to address societal issues contributing to the spread of dis/misinformation, moving beyond solely technical solutions. In the session titled “From Internet Openness to AI Openness”, Alejandro Pisanty emphasized the importance of understanding the human factors behind misinformation and addressing them directly rather than focusing solely on technological solutions.
In another session, “Hold On, We’re Going South: beyond GDC”, Pavel Zakharov suggested moving from confrontation to cooperation and resilience in tackling disinformation, emphasizing the need to address societal issues and not just rely on technical solutions.
During the “Digital for Development: UN in Action” session, Dr. Hoda A. Alkhzaimi mentioned the importance of creating platforms that incorporate real-time reliability checks, while Monica Gizzi highlighted the role of fact-checking agencies to combat misinformation and disinformation. This session emphasized the need for a holistic approach involving multiple stakeholders, including civil society, governments, and technology companies.
Viktoriia Romaniuk, in the “Transformative digital inclusion: Building a gender-responsive and inclusive framework for the underserved” session, emphasized the need for media literacy and cooperation among governments, organizations, and tech companies to combat disinformation.
Finally, the “The Internet Governance Landscape in The Arab World” session discussed regulatory frameworks and the importance of maintaining communication channels between governments and platforms as part of the efforts to tackle misinformation and disinformation.
What are the risks of over-relying on AI-powered content moderation systems in diverse cultural contexts? And how can they be addressed?
During the WS #82 A Global South perspective on AI governance, Jenny Domino raised concerns about the reliance on AI-powered systems for content moderation. She emphasized the “importance of human rights and the potential misuse of AI in such contexts.” The discussion highlighted the risks associated with these systems, particularly in diverse cultural contexts where there is a high potential for AI to misunderstand or misrepresent cultural nuances and contexts.
AI systems often lack the contextual understanding needed to accurately moderate content across different cultures, leading to the potential for both over-censorship and under-censorship. This can result in the suppression of legitimate expression and the persistence of harmful content. Furthermore, there is a risk that these systems may be used to reinforce existing biases or perpetuate misinformation.
To address these risks, it is crucial to integrate human oversight in the moderation process. Human moderators, who understand the cultural contexts, can provide the necessary nuance that AI systems currently lack. Moreover, there should be a continuous effort to improve the cultural sensitivity of AI algorithms through better training datasets that reflect a wide range of cultural contexts.
Ensuring transparency and accountability in AI operations is also vital. Stakeholders should work collaboratively to establish guidelines and ethical standards for AI use in content moderation, particularly in areas with diverse cultural backgrounds. By doing so, the balance between automation and human insight can be achieved, leading to more effective and culturally sensitive content moderation.
For more detailed discussions and quotes from the sessions, please refer to the original transcript of WS #82 A Global South perspective on AI governance.
What are the long-term implications of the growing role of private digital platforms in shaping public discourse and democratic processes?
The long-term implications of the growing role of private digital platforms in shaping public discourse and democratic processes were not directly discussed in any of the sessions at the Internet Governance Forum 2024, as indicated by the provided list of sessions. This suggests a possible gap in the discussions at the forum, considering the significant impact that private digital platforms have on public discourse and democratic processes globally. Given the absence of direct discussions on this topic, it could be beneficial for future forums to address the implications of digital platforms on democracy, including issues such as content moderation, misinformation, and the influence of algorithms on public opinion. Considering the influence these platforms wield, it is crucial to explore how governance frameworks could ensure they support democratic ideals rather than undermine them.
How can we create more effective mechanisms for addressing cross-border content moderation issues without creating global content standards?
In the discussions held during the Internet Governance Forum 2024, the question of how to address cross-border content moderation issues without implementing global content standards was not explicitly mentioned or discussed in any of the sessions. The absence of this topic suggests that it might not have been a primary focus during the event. Participants may have concentrated on other pressing issues related to digital governance, cybersecurity, and digital inclusion. For more information on the specific sessions, you can visit the individual session pages linked in the provided list.
What frameworks can be developed – and by whom – to ensure the well-being of content moderators, addressing their mental health, ethical challenges, and the need for continuous support in working for a healthier digital environment?
The question of which frameworks could be developed, and by whom, to support content moderators in their mental health, ethical challenges, and need for continuous support was not directly addressed in any of the documented sessions at the Internet Governance Forum 2024.
The absence of this specific topic across numerous sessions suggests a potential gap in the current discourse on digital governance and well-being. While many sessions focused on related issues such as digital peace and prosperity, combating illegal content, and AI and disinformation, none explicitly tackled the well-being of content moderators.
To address this gap, future discussions could benefit from a dedicated focus on content moderation, perhaps by drawing insights from other sessions on mental health and workplace support. Establishing a multistakeholder framework, involving tech companies, mental health professionals, and policy-makers, could provide the necessary support structure. This could also involve developing ethical guidelines and continuous training programs to better equip content moderators in handling the psychological impacts of their work.
While the IGF 2024 sessions did not cover this topic, it remains a critical area for future exploration to ensure a healthier digital environment and the well-being of those who maintain it.
What are the implications of over-emphasising the role of technology in achieving the sustainable development goals? How to ensure that the broader systemic challenges (social and cultural) are not neglected in the pursuit of technological advancements?
The role of technology in achieving the Sustainable Development Goals (SDGs) is an area of significant interest and debate. Although technology can act as a powerful catalyst for progress, over-emphasizing its role can lead to the neglect of broader systemic challenges, such as social and cultural issues, which are equally critical in achieving holistic and sustainable development.
During the session WS #51 Internet & SDG’s: Aligning the IGF & ITU’s Innovation Agenda, Umut Pajaro Velasquez stressed the importance of aligning technology with sustainable development goals by ensuring that diverse stakeholder perspectives are considered, including social and cultural factors. This highlights the need for a multidimensional approach that balances technological advancements with attention to human-centered issues.
Furthermore, in High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative, Doreen Bogdan Martin emphasized that while AI can accelerate progress towards the SDGs, it is crucial to ensure that technology does not exacerbate existing inequalities. This underscores the importance of integrating ethical considerations and addressing social disparities alongside technological innovations.
The discussions across these sessions suggest that achieving the SDGs through technology requires a balanced approach that equally prioritizes social inclusion and cultural sensitivity. This calls for policies and frameworks that incorporate a holistic view of development, addressing both technological and non-technological factors to foster sustainable progress.
What is missing in our approaches to addressing the environmental impact of digital technologies?
The environmental impact of digital technologies is a crucial concern, yet it was only sparsely addressed across the sessions at the Internet Governance Forum 2024. Notably, in the Main Session | Policy Network on Artificial Intelligence, Meena Lysko emphasized the importance of implementing sustainable practices in AI development, addressing both the environmental implications of resource extraction and the necessity for comprehensive governance frameworks. This discussion was one of the few instances where the environmental impact of digital technologies was highlighted.
Another session where this issue was acknowledged was the High-Level Session 4: From Summit of the Future to WSIS+ 20. The session underscored environmental sustainability and the impact of digital technologies on the environment as critical priorities, specifically pointing out concerns such as e-waste and carbon emissions from data centers.
Despite these mentions, the overall discourse at the forum lacked a unified strategy or comprehensive discussion on the environmental implications of digital technologies, indicating a significant gap in the approach to integrating environmental considerations into digital governance frameworks. This absence highlights a need for more focused dialogue and actionable strategies at future forums to address this pressing issue.
What innovative strategies (e.g. viral social media campaigns, influencer collaborations, etc.) could be used to raise public awareness about the environmental and health impacts of e-waste and encourage more responsible disposal practices?
Unfortunately, none of the sessions at the Internet Governance Forum 2024 explicitly discussed the question of innovative strategies to raise public awareness about the environmental and health impacts of e-waste and encourage more responsible disposal practices.
While this important topic was not covered, it remains crucial to explore various methods such as viral social media campaigns, influencer collaborations, and community engagement efforts to address this pressing issue. Future forums and discussions could greatly benefit from integrating these strategies, recognizing the significant role digital platforms and influential figures can play in promoting sustainability and responsible e-waste management.
For more comprehensive discussions on related topics, you may explore the sessions listed in the original document, though they do not cover the specific question of e-waste awareness.
How can we ensure that efforts to create safe online spaces for children don't infringe on their rights to privacy and free expression?
The discussion on how to create safe online spaces for children without infringing on their rights to privacy and free expression was addressed in DC-IoT & DC-CRIDE: Age aware IoT – Better IoT and Open Forum #60 Safe Digital Space for Children.
In the session on Age aware IoT, Sonia Livingstone emphasized the importance of designing with a child rights approach, considering both privacy and safety in relation to children’s broader rights.
During Open Forum #60, Jutta Kroll from the German Digital Opportunities Foundation noted that measures like parental controls and monitoring could potentially infringe on children’s privacy rights as outlined by the UN Convention on the Rights of the Child.
The sessions highlighted the need for a balanced approach that prioritizes child safety while respecting their rights to privacy and free expression, ensuring that digital tools and policies are designed to support children’s overall well-being and development.
How can we design and enforce gender-responsive laws and legal frameworks that effectively protect women from online harm while promoting their digital rights and participation?
The question of designing and enforcing gender-responsive laws and legal frameworks to protect women from online harm while promoting their digital rights was highlighted in Day 0 Event #142. During this session, Dr. Maha Abdel Nasser and Noha Ashraf Abdel Baky emphasized the crucial role of awareness and civil society efforts in safeguarding women from online harm, while simultaneously promoting their digital rights. They stressed that creating a safe online environment for women involves a multilayered approach that includes not only legislative measures but also societal and educational initiatives.
Additionally, the importance of gender equality being mainstreamed across all frameworks and processes was acknowledged in Day 0 Event #61. This session pointed out that ensuring gender equality in digital spaces requires comprehensive strategies that integrate gender perspectives into policy-making and implementation at all levels.
The need for an action line on gender within the WSIS processes was briefly mentioned in the session titled “20 Years of implementation of WSIS and the vision beyond 2025”. While this session did not delve deeply into specific legal frameworks, it highlighted the ongoing need to address gender issues within the broader context of digital governance.
Moreover, Day 0 Event #55 featured Matilda Moses-Mashauri, who underscored the gender inequality in digital access, particularly in rural areas. She advocated for equitable access to technology for young girls and women, which is an essential component of creating a gender-responsive digital environment.
What are the potential negative consequences of framing digital rights primarily in terms of individual liberties, potentially overlooking other rights and responsibilities?
During the Internet Governance Forum 2024, the potential negative consequences of framing digital rights primarily in terms of individual liberties were not specifically addressed in any session. This oversight can lead to several challenges that deserve attention.
One of the main concerns is that emphasizing individual liberties might overshadow collective rights and responsibilities. Individual-centric frameworks could neglect the societal impacts of digital rights, such as community safety, collective privacy concerns, and the equitable distribution of digital resources. This approach might also fail to consider the responsibilities that come with digital freedoms, potentially leading to issues like misinformation, digital exclusion, or cybersecurity threats.
Moreover, focusing solely on individual liberties might inadequately address the needs of marginalized or vulnerable groups who require additional protections and considerations beyond individual rights. A balanced approach that integrates both individual and collective perspectives is necessary to ensure comprehensive digital rights governance that aligns with broader societal goals and responsibilities.
Although the specific topic was not discussed in the sessions, the general theme of balancing rights and responsibilities in digital governance remains crucial for future discussions.
How can we move beyond the binary framing of ‘digital rights vs. security’ in discussions about encryption and lawful access?
The discussion on moving beyond the binary framing of ‘digital rights vs. security’ in the context of encryption and lawful access was addressed in Workshop #199. The panelists highlighted the tension between privacy rights and security imperatives, particularly in the context of combating child exploitation online.
Different viewpoints were presented, emphasizing the importance of adopting a balanced approach that considers both privacy rights and security concerns. This requires moving away from a binary perspective and instead embracing a framework that can accommodate the complexities and nuances of the issue.
The conversation stressed the necessity of collaboration between various stakeholders, including governments, tech companies, civil society, and the public, to develop strategies that protect both individual privacy and the collective security of society. There was a consensus on the need for transparent and accountable processes that ensure lawful access is conducted in a manner that respects human rights.
The session concluded with a call for ongoing dialogue and cooperation to address the challenges and opportunities presented by encryption and lawful access, ensuring that solutions are both effective and respectful of fundamental rights.
How can we create comprehensive and effective governance frameworks for brain-computer interfaces and neurotechnology that adequately address ethical and privacy concerns? And how can we ensure that such frameworks are diligently implemented?
The topic of developing comprehensive and effective governance frameworks for brain-computer interfaces and neurotechnology, focusing on ethical and privacy concerns, was scarcely addressed in the Internet Governance Forum 2024 sessions listed, receiving only a single mention. During Open Forum #44: Building Trust with Technical Standards and Human Rights, Marek Janovský highlighted the urgency of addressing emerging technologies like neuroscience, emphasizing attention to their inception and systemic properties in order to guarantee safety and responsibility. He underscored the necessity of incorporating a human rights-based approach from the outset.
Despite its significance, the issue of governance frameworks for neurotechnology did not find substantial discourse across the other sessions. The discussions mainly revolved around digital governance, AI, cybersecurity, and related topics, leaving a gap in addressing the specific requirements and ethical considerations for brain-computer interfaces and neurotechnology.
How can we enhance data collection efforts to better capture the diversity among persons with disabilities, ensuring the development of more accurate and inclusive policies and interventions?
The question of how to enhance data collection efforts to better capture the diversity among persons with disabilities, ensuring the development of more accurate and inclusive policies and interventions, was not specifically discussed in any of the sessions of the Internet Governance Forum 2024.
An examination of various sessions, such as Disability & Data Protection for Digital Inclusion and Her Data, Her Policies: Towards a Gender Inclusive Data Future, indicates that while related topics were addressed, the specific focus on enhancing data collection concerning persons with disabilities was notably absent.
This lack of discussion highlights an opportunity for future forums to explore this essential topic, which could lead to the development of more targeted strategies and technologies aimed at improving inclusive data collection methodologies for persons with disabilities.
Could a middle-ground solution be found between the efforts to advance global digital trade agreements and the call to address more immediate challenges, such as bridging digital divides and promoting data fairness?
During the Internet Governance Forum 2024 events, the specific question of finding a middle-ground solution between advancing global digital trade agreements and addressing immediate challenges, such as bridging digital divides and promoting data fairness, was not directly discussed in any session. However, the overarching themes of digital inclusion, data governance, and equitable access were recurrent topics throughout the forum.
While no discussions specifically addressed the interplay between global trade agreements and bridging digital divides, various sessions highlighted the importance of deepening cooperation on governance to bridge the digital divide, emphasizing multistakeholder collaboration as a crucial approach. Additionally, there was a focus on addressing gender inequality in meaningful access, which aligns with the goals of promoting data fairness and inclusivity.
The forum also explored aligning innovation agendas with sustainable development goals, indirectly supporting the idea of balancing technological advancements with equitable access. Moreover, digital innovation and transformation were highlighted as key drivers for global well-being, which could serve as a bridge between trade agreements and digital inclusivity efforts.
Overall, while the specific question was not explicitly tackled, the discussions at the IGF 2024 underscore the need for harmonizing efforts in digital trade and inclusivity initiatives, ensuring that technological progress benefits all communities equitably.
How can we create meaningful accountability mechanisms for big tech companies that go beyond fines and actually drive changes in corporate behaviour?
The topic of creating meaningful accountability mechanisms for big tech companies that go beyond fines was addressed in a few sessions during the Internet Governance Forum 2024. In the session WS #82 A Global South perspective on AI governance, Jenny Domino emphasized the importance of companies considering human rights impacts and moving beyond mere regulatory compliance. This perspective highlights the necessity for tech companies to integrate human rights considerations into their operational models to ensure responsible business conduct.
Similarly, in WS #209 Multistakeholder Best Practices: NM, GDC, WSIS & Beyond, concerns were raised about the effectiveness of current accountability mechanisms. The session underscored the necessity for mechanisms that truly drive changes in corporate behavior rather than just imposing financial penalties.
In the session WS #213 Hold On, We’re Going South: beyond GDC, Alexandra Kozina highlighted the challenges of holding big tech accountable. She suggested that more meaningful rules and regulations are required to ensure compliance and promote ethical behavior among tech companies.
Additionally, during the DC-IoT & DC-CRIDE: Age aware IoT – Better IoT session, there were discussions about the need for effective liability and accountability mechanisms. Suggestions included imposing personal liability on executives to ensure adherence to ethical standards and responsible business practices.
Moreover, in Open Forum #44 Building Trust with Technical Standards and Human Rights, the role of tech companies in setting standards and exercising human rights due diligence was discussed. Yoo Jin Kim stressed the importance of engaging with tech firms to operationalize human rights principles, thereby fostering responsible conduct within the industry.
Can digital trade provisions in international agreements be designed in a way that facilitates international trade while also preserving domestic policy space for regulating the digital economy?
In reviewing the sessions of the Internet Governance Forum 2024, the specific question of whether digital trade provisions in international agreements can facilitate international trade while preserving domestic policy space for regulating the digital economy was not directly addressed in any of the discussions. The topic was notably absent from various sessions covering a range of issues related to digital governance, cybersecurity, AI, and more.
Given the absence of direct discussions on this issue, it remains a critical area for future exploration and dialogue. As nations continue to navigate the complexities of the digital economy, balancing international trade facilitation with domestic policy autonomy will be essential. The development of strategies and frameworks that can achieve this balance without compromising national interests or international collaboration is a key challenge for policymakers and stakeholders worldwide.
For those interested in related topics, sessions such as WS #180: Protecting Internet Data Flows in Trade Policy Initiatives might provide some contextual insights into the broader discourse around digital trade and regulation.
How do we ensure that efforts to regulate the digital economy don't inadvertently entrench the market power of dominant platforms?
The topic of ensuring that regulatory efforts do not inadvertently strengthen the market power of dominant digital platforms was sparsely addressed during the sessions at the Internet Governance Forum 2024. In WS #213 Hold On, We’re Going South: Beyond GDC, Alexandra Kozina discussed the challenges of regulating big tech. She emphasized the necessity of effective legal frameworks to prevent these companies from abusing their market power.
Despite the limited discussion on this specific question across the sessions, the mention in WS #213 underscores the broader challenge of balancing regulation with the need to foster a competitive digital economy. The discussion highlighted the importance of creating frameworks that can adapt to the evolving digital landscape without stifling innovation or inadvertently consolidating power within a few major players.
Overall, while the sessions did not extensively cover this topic, the insights provided suggest an awareness of the potential risks associated with digital economy regulation and a call for nuanced approaches that consider both market competition and consumer protection.
Are there risks associated with relying too heavily on self-regulation and corporate social responsibility in addressing tech-related societal challenges? If so, how do we address them?
The discussions around the risks of relying too heavily on self-regulation and corporate social responsibility (CSR) in addressing tech-related societal challenges were touched upon in a few sessions during the Internet Governance Forum 2024.
In the session A Global South perspective on AI governance, Jenny Domino emphasized the importance of corporate accountability and the need for companies to go beyond self-regulation by aligning with human rights frameworks. This underscores the viewpoint that self-regulation alone may not suffice in ensuring ethical governance, especially in regions like the Global South, where additional oversight may be necessary.
Similarly, in the Main Session | Policy Network on Artificial Intelligence, Jimena Viveros highlighted the insufficiency of self-regulation and the need for enforceable governance frameworks for AI. This reflects a call from experts for regulatory frameworks that are legally binding, to complement voluntary corporate efforts, thereby ensuring that AI technologies are developed and deployed responsibly.
Moreover, during the session DC-IoT & DC-CRIDE: Age aware IoT – Better IoT, there was a discussion on the need for a mix of self-regulation and formal regulation, with emphasis on ensuring industry responsibility. This approach advocates for a balanced model where self-regulation is supported by external regulations to ensure a comprehensive framework for addressing societal issues stemming from technological advancements.
Overall, while self-regulation and CSR play crucial roles in addressing tech-related challenges, these discussions highlight a consensus on the need for stronger, enforceable regulations that can hold corporations accountable and align their operations with global human rights and ethical standards.
What are the implications of the growing role of military and national security interests in shaping global cybersecurity norms?
The issue of military and national security interests shaping global cybersecurity norms was not explicitly discussed in any of the sessions listed. Consequently, there are no direct references or quotes available from the Internet Governance Forum 2024 sessions. This absence suggests that the topic might not have been a focal point during the event, or that it may have been addressed in broader contexts not captured within the session summaries provided.
Given the significance of military and national security considerations in cybersecurity, it is crucial for future forums to address these implications. Such discussions could explore how national security priorities influence international cybersecurity policies and norms, potentially affecting global cooperation, trust, and the balance between security and privacy.
What can be done to improve communication and coordination between technical and diplomatic communities in the cybersecurity domain?
The topic of enhancing communication and coordination between technical and diplomatic communities in the cybersecurity domain was extensively discussed in Open Forum #53 Safeguarding Critical Infrastructure Beyond Borders. During this session, speakers underscored the necessity for better communication and coordination, specifically highlighting the importance of joint training initiatives and establishing clearer role definitions at national levels. These measures were seen as critical steps to bridge the gap between the technical and diplomatic spheres and ensure a more cohesive approach to cybersecurity challenges.
Additionally, the session emphasized the importance of crafting policies that align the interests and expertise of both communities to facilitate seamless collaboration. By fostering mutual understanding and cooperation, these initiatives aim to enhance the collective ability to respond to cyber threats and safeguard critical infrastructure on a global scale.
Given the increasing use of AI in cybersecurity, how can we ensure that AI-driven security measures don't inadvertently create or exacerbate vulnerabilities?
The growing integration of artificial intelligence in cybersecurity presents both opportunities and challenges. During the workshop on AI as a Guardian for Critical Infrastructure in the Developing World, Hafiz Muhammad Farooq highlighted the potential of AI to “identify anomalies in real-time operations and augment detection and response mechanisms.” This underscores the proactive role AI can play in enhancing security measures.
However, the session also discussed the inherent risks associated with AI systems. Daniel Lohrmann emphasized concerns such as “data poisoning and adversarial attacks,” stressing the importance of robust data governance and secure model development. Continuous monitoring is vital to mitigate these risks and ensure that AI does not exacerbate vulnerabilities.
In another related session, Cybersecurity in AI: Balancing Innovation and Risks, participants discussed potential AI vulnerabilities, highlighting the need for continuous monitoring and layered protection. These discussions illustrate the critical balance between leveraging AI for cybersecurity advancements and safeguarding against the creation of new vulnerabilities.
Overall, the discussions from these sessions emphasize a dual approach: harnessing AI’s capabilities for enhancing cybersecurity while implementing stringent safeguards against its misuse or failure.
As end-to-end encryption becomes more widespread, how can we balance the need for privacy and security with the challenges it poses for combating child exploitation online? Are current proposals for 'client-side scanning' a viable solution or a dangerous precedent?
The topic of balancing privacy and security with the challenges posed by combating child exploitation online in the context of end-to-end encryption was addressed in WS #125 Balancing Acts: Encryption, Privacy, and Public Safety. During this session, Andrew Campling highlighted that client-side scanning for known child sexual abuse material (CSAM) images can reduce the problem without breaking encryption or privacy. He suggested that scanning images before they are uploaded and then encrypting them could help in detecting CSAM without privacy implications unless a match is found.
Additionally, in WS #199 Ensuring the online coexistence of human rights & child safety, the challenges of balancing privacy and security with combating child exploitation online were debated. The session included a discussion on the viability and implications of client-side scanning as a solution to detect child exploitation material, with some panelists expressing concerns about privacy and security risks.
With the increasing complexity of supply chains in technology manufacturing, how can we effectively implement ‘security by design’ principles when multiple actors across various jurisdictions are involved in the production process?
During the Internet Governance Forum 2024, the question of implementing ‘security by design’ principles in complex technology manufacturing supply chains was not discussed explicitly in any of the documented sessions. Despite the increasing complexity of these supply chains and the involvement of multiple actors across various jurisdictions, the topic remains largely unaddressed in the specific sessions mentioned.
However, the importance of such discussions is underscored by the need for collaborative approaches to cybersecurity and supply chain security across national boundaries. While this particular question was not tackled, sessions like WS #202 The UN Cybercrime Treaty and Transnational Repression and WS #137 Combating Illegal Content With a Multistakeholder Approach highlight the broader context of cybersecurity and international cooperation, which are crucial to addressing such challenges.
The absence of a direct dialogue on this topic indicates a potential area for future focus, emphasizing the need for comprehensive frameworks that integrate security considerations at every stage of the technology manufacturing process, involving all stakeholders. As the landscape of technology and regulation continues to evolve, fostering discussions on these themes at forums like the IGF could be instrumental in setting global standards for security by design.
How can we operationalise international norms on cybersecurity and critical infrastructure protection?
During the Internet Governance Forum 2024, the topic of operationalising international norms on cybersecurity and critical infrastructure protection was specifically discussed in the session Open Forum #53: Safeguarding Critical Infrastructure Beyond Borders. In this session, Rashida Syed Othman highlighted the crucial role that national and regional frameworks play in guiding efforts to protect critical infrastructure. She emphasized that while international norms provide a framework for cooperation, the implementation must be adapted to local contexts to be effective.
The discussion underscored the importance of integrating international norms into national strategies, ensuring that they are not only adopted but also operationalized in a way that addresses the specific challenges and threats faced by different regions. This includes building capacity at the local level and fostering collaboration across borders to enhance the security and resilience of critical infrastructure.
While other sessions at the forum did not directly address this question, the overarching theme of international cooperation and multi-stakeholder engagement was prevalent throughout the discussions, highlighting the interconnected nature of cybersecurity challenges and the need for a coordinated global response.
How can we responsibly deploy emerging technologies like AI and quantum computing in critical infrastructure while addressing potential vulnerabilities?
After reviewing the transcripts of various sessions from the Internet Governance Forum 2024, it appears that the question of how to responsibly deploy emerging technologies like AI and quantum computing in critical infrastructure while addressing potential vulnerabilities was not specifically discussed in any of the sessions. Consequently, there are no direct references or quotes available from the sessions that address this particular question.
While the topic of emerging technologies and their responsible use is a critical issue, it seems that these specific discussions were either not prioritized or not recorded in the available transcripts from the sessions at the IGF 2024. It is possible that related topics were discussed in a broader context, but without specific mention of AI and quantum computing in relation to critical infrastructure and potential vulnerabilities.
For those interested in this topic, it may be beneficial to explore other forums or discussions where these emerging technologies are the focal point, or to review future sessions at similar events where this subject might be covered more extensively.
How to establish universal baseline or minimum cybersecurity requirements for critical infrastructure protection across jurisdictions?
The challenge of establishing universal baseline cybersecurity requirements for critical infrastructure protection across jurisdictions was acknowledged in the session titled “Securing critical infrastructure in cyber: Who and how?” at the IGF 2024. The discussion highlighted the complexity of this endeavor due to different national legal frameworks. Participants emphasized the necessity for more transparent governance and international efforts to understand cross-jurisdictional interdependencies.
One of the key suggestions was to develop a common baseline understanding and interoperable legal systems to ensure security across supply chains. This approach would help in aligning different regulatory requirements and facilitate mutual recognition of cybersecurity standards among countries.
Moreover, the session underscored the importance of engaging multiple stakeholders, including governments, private sector, and civil society, to collaboratively define and implement these standards. This multi-stakeholder approach is crucial for achieving a comprehensive and effective cybersecurity framework that can be universally applied.
How can we ensure that provisions of the UN cybercrime convention are not misused for political prosecution? And how can future protocol negotiations be used to strengthen human rights safeguards while maintaining core provisions for addressing cybercrime?
The discussion on ensuring that the provisions of the UN cybercrime convention are not misused for political prosecution and how future protocol negotiations can be used to strengthen human rights safeguards while maintaining core provisions for addressing cybercrime was notably addressed in WS #202 The UN Cybercrime Treaty and Transnational Repression. During this session, panelists expressed significant concerns about the potential for misuse of the convention, particularly in terms of political repression. Deborah Brown and other speakers emphasized the need for stronger human rights safeguards, highlighting the dangers posed by vague definitions and the lack of enforceable limitations within the current framework. They stressed that the convention should not be ratified without substantial improvements.
In another session, WS #278 Digital Solidarity & Rights-Based Capacity Building, Jason Pielemeier reiterated the potential for the convention’s misuse for political repression and the critical importance of implementing rights-respecting measures.
These discussions underscore the necessity of addressing the identified gaps in human rights safeguards within the convention’s framework. Future protocol negotiations are seen as an opportunity to address these issues, with a strong emphasis on not proceeding with ratification without ensuring significant enhancements to protect against potential abuses.