Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality

Jovan Kurbalija
Published on 5 January 2025
Read why clarity will define AI and digital developments in 2025. Consult main trends and list of events in digital governance and much more...

On 5 January, I chose the word ‘clarity’ to describe AI developments in 2025. At the end of the year, we have more clarity and less AI hype. It is clear that AI is not a magic wand that will solve all of humanity’s problems, as many argued in previous years. Yet, the AI scene remains foggy, mainly due to the continuation of the AI hype narrative. While 2025 gave us the contours of AI reality, clarity will increase in 2026 as we face a sharper contrast between, on the one hand, the inflated expectations and valuations of the AI industry and, on the other, the still-limited use of AI by companies and society.

Last year was shaped by two main tech developments, both occurring on 20 January: President Trump was inaugurated, and DeepSeek released an open-source AI reasoning model. The remainder of 2025 was largely spent addressing these two developments.

Trump’s marriage of convenience with Silicon Valley removed regulatory guardrails for AI, content policy, and cryptocurrency. AI investment bolstered Trump’s economic revival narrative, contributing between 34% and 50% of US GDP growth in 2025. Much of that investment came from abroad, especially from oil-rich Gulf countries. The result is an inflated AI bubble, which the world will have to cope with in 2026.

DeepSeek shook the AI space in three main ways. First, it challenged the simplified formula behind the AI bonanza: more Nvidia GPUs = better AI. Second, its open-source gambit challenged the dominant proprietary players: OpenAI, Anthropic, and DeepMind. Third, it reshaped AI geopolitics by positioning Chinese companies as shapers of AI development through open-source solutions, the approach Western actors used in establishing internet, web, and Linux standards.

Trump’s presidency had more immediate and visible impacts, while DeepSeek’s ‘moment’ created a structural shift in AI development.

Here is the Scorecard of predictions from January 2025: Technology | Geostrategy | Governance | Security | Human Rights | Economy | Standards | Content | Development | Environment

Introduction to January 2025 Forecast

Clarity is the keyword for AI and digital developments in 2025.

It follows the hype of 2023 and the grounding of 2024.

In 2025, we will better understand AI’s risks, opportunities, and policy issues that must be regulated. By clarity, we also mean a return to digital basics. It’s easy to forget that even the most cutting-edge AI is built on decades-old foundations—like the humble TCP/IP protocol that underpins our digital reality.

Our 10 forecasts for 2025 begin with the evolution of AI technology itself, exploring how geostrategic interests and positions are shaping its development. From there, we delve into governance, where these interests crystallise into policies and regulations. With the stage set, we turn to key issues: security, human rights, the economy, standards, content, the environment, and global development.

The first reality check for our 10 forecasts comes on 20 January, when we will test our predictions about President Trump’s tech priorities, followed by an outlook for the rest of the year.

Throughout the year, we will continuously monitor forecasts around the monitoring questions listed below in each section. You can also submit your questions and topics on AI and digitalisation that you want us to monitor in 2025.

Wishing you all the best for 2025!

Jovan Kurbalija


Be careful with AI predictions and forecasts!

Any AI prediction, including this one, should be approached with caution. The history of AI predictions is riddled with inaccuracies. Take Geoffrey Hinton, the 2024 Nobel Prize Laureate, who declared back in 2016:

‘We should stop training radiologists now. It’s completely obvious that within five years, deep learning will outperform radiologists.’

Yet, radiology—like many other professions—remains alive and well.

The list of flawed AI predictions is extensive, ranging from exaggerated risks of open-source AI to the impact of AI on elections and so on.

Why are Hinton’s and other AI predictions often false?

Latest updates (January – March 2025)

Recent AI Predictions and Their Realism

  • Geoffrey Hinton’s Prediction on Human Extinction: Hinton estimated a 10-20% chance that AI could lead to human extinction within the next 30 years. While this highlights the importance of AI safety, such predictions are speculative and reflect broader concerns rather than imminent realities.​ Read more
  • Demis Hassabis on AGI Development: Demis Hassabis, CEO of Google DeepMind, predicts that Artificial General Intelligence (AGI) could emerge within the next 5 to 10 years. Given the current state of AI, this timeline may be optimistic, as achieving AGI involves overcoming substantial technical and ethical challenges.​ Read more

Hinton’s false prediction illustrates a common misconception about AI’s capabilities and limitations. Here’s why radiology wasn’t the ‘low-hanging fruit’ for AI that many thought it would be—and what we can learn from this miscalculation:

Quality of data: AI thrives on vast amounts of high-quality, annotated data. But in radiology, getting that data is no mean feat. Medical images are sensitive, requiring strict privacy protections and expert labelling. Plus, the diversity of images—based on patient demographics, diseases, and imaging techniques—makes it hard for AI models to generalise effectively. What works in one scenario often fails in another.

Lack of ‘ground truth’: Unlike identifying cats or dogs, interpreting medical images is complex. Radiologists often disagree on findings, and images can contain multiple abnormalities that need precise detection and analysis. This lack of a clear ‘ground truth’ makes it tough to evaluate AI’s performance.

Workflow woes: Even if an AI model performs well in a lab, integrating it into a real-world radiology workflow is a whole other challenge. Radiologists need trustworthy, explainable, and seamlessly integrated tools in their systems. Add to that ethical, legal, and regulatory hurdles and it’s clear why AI adoption in radiology has been slower than expected.

So, what’s the takeaway? Hinton’s false prediction reminds us that AI’s potential is vast, but its path is riddled with complexities. Radiology isn’t disappearing—it’s evolving, with AI as a powerful tool rather than a replacement (read more here). 

Perhaps the biggest lesson of all is that predicting AI’s impact is as much about understanding human systems as it is about the technology itself. The future of AI isn’t just about what it can do—it’s about how we choose to use it.

Making incorrect predictions might seem harmless, yet it can have real-world consequences. As an example, the overblown fears of existential risks from AI have led to a tsunami of AI governance initiatives, some of which are premature and/or misdirected.

How can we protect ourselves from the risks of inaccurate predictions?

We have three suggestions on reviewing predictions about AI developments:

Avoid overreacting: Governance and business initiatives should not be based on highly uncertain predictions. The past two years have seen numerous AI governance initiatives driven by speculative risks, which may have diverted resources from more pressing issues.

Develop a balanced outlook: A balanced combination of tools and approaches is essential to address various risks while focusing on the key ones. While many have come to understand that the 2023 emphasis on existential risk was overinflated, it is crucial not to swing to the opposite extreme and disregard existential risks entirely.

Include non-technical expertise: While technical expertise on how AI functions is important, it must be paired with deep insights into how societies adopt, adapt to, and integrate new technologies. Balancing these perspectives will be key to shaping AI’s role in a way that is both innovative and socially responsible.

What can we learn from Diplo’s last 14 annual predictions?

Diplo’s experience in predicting digital governance trends since 2011 shows that change often occurs much more slowly than perceived. Areas like data governance and content policies are examples of glacial progress in technology regulation.

In hindsight, the ‘slow governance for fast technology’ approach has, despite its challenges, facilitated technological growth.

Consult Diplo’s previous predictions for the last 14 years: 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024

Trump and Tech: More of the same, but with a twist

As President Trump prepares to take office on 20 January, we can expect a mix of continuity and subtle policy shifts in the tech realm, as outlined here:

🔹 Historical continuity: Trump’s pro-business stance aligns with the long-standing US tradition of private sector-led innovation, resisting international regulations that could constrain American tech businesses.

🔹 Content regulation: Expect a softer approach on combating misinformation, as already signalled by policy changes in Musk’s X and Zuckerberg’s Meta. Key issues will be the future of Section 230, the US law that provides immunity for online platforms from liability for third-party content, and the trend of stricter content regulation worldwide.

🔹 AI policy: Trump will likely scrap Biden’s Executive Order on AI safety, focusing instead on innovation, workforce upskilling, and global competitiveness. This will sync with a broader global shift from AI safety narratives to those of opportunity and development.

🔹 Geostrategy: Relations with China will continue with more export restrictions on advanced technology and the solidifying of a tech bloc of like-minded countries.

🔹 Digital taxes: Unresolved tensions over tech taxation will resurface after the failure of the OECD to introduce global solutions. Countries like Germany and France will revisit digital tax policies, potentially clashing with US tech giants and the Trump administration.

🔹 Cryptocurrencies: The crypto industry stands to benefit from the expectation of Trump’s introduction of crypto-friendly regulations, including a strategic crypto reserve and improved access to banking services.

🔹 TikTok’s future: Trump will be open to a TikTok deal, avoiding major internal disruptions and a risky precedent that can expose US tech companies to ‘nationalisation’ pressures in other jurisdictions.



From bigger is better to smaller is smarter

DeepSeek challenged the dominant assumption that more Nvidia GPUs automatically mean better AI. Throughout 2025, each new model demonstrated that LLMs are reaching a scalability plateau, where additional computing power yields only limited advances. AI has entered an era of diminishing returns. This trend coincided with continued massive investment in AI hardware, sharpening the contrast between inflated hype and reality.

AI Becomes a Commodity
Large Language Models became commoditised in 2025, with hundreds launched into the market. The significance of this shift was crystallised in the reaction to Apple’s cautious approach to AI investment, as highlighted in a November Bloomberg analysis:

Apple Inc. has faced plenty of criticism from Wall Street for not spending as aggressively on artificial intelligence as its Big Tech rivals. But that strategy is suddenly a blessing for the iPhone maker. Investors are beginning to scrutinize the huge sums companies like OpenAI, Meta Platforms Inc. and Microsoft Corp. are spending on AI, leading to heavy volatility in what had been some of the year’s biggest momentum plays. As a result, Apple’s position is being re-evaluated.

The Rise of Bottom-Up AI
With AI now a commodity, powered by open-source models, development is shifting toward smaller, specialised models tailored to individual companies, communities, and even countries. A growing number of organisations realise that the key to effective AI lies not in accessing the most cutting-edge LLM, but in leveraging the unique knowledge and data they already possess.

January 2025 Forecast

In 2025, the ‘bigger is better’ pattern in AI will be challenged. Smaller models, such as DeepSeek, are increasingly outperforming larger ones, as they are much more cost-effective to train and use. For instance, the cost of training DeepSeek was USD 5.6 million—about 1% of the cost of training Claude 3.5 Sonnet or Meta’s Llama 3, and 500 times less than Elon Musk’s Grok, which uses 100,000 Nvidia H100 GPUs.

The image shows a table comparing the reasoning power of different AI models, such as Claude 3.5 Sonnet, GPT-4o, and DeepSeek R1.
Survey of the reasoning power of AI models (10 January 2025)

DeepSeek provides better reasoning for a fraction of the cost. For example, its inference (answering questions) costs about 3% of OpenAI’s. Think of buying an iPhone for USD 30 instead of USD 1,000!
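The ratios above are easy to sanity-check. Here is a quick back-of-the-envelope sketch: only the USD 5.6 million DeepSeek training figure comes from the text; every other total is merely implied by the quoted ratios (1%, 500x, 3%) and is illustrative, not an independent estimate.

```python
# Back-of-the-envelope check of the cost ratios quoted above.
# Only the USD 5.6 million DeepSeek figure is reported; the rest are implied.
deepseek_training_usd = 5.6e6

# 'About 1% of the cost of training Claude 3.5 Sonnet or Llama 3' implies:
frontier_training_usd = deepseek_training_usd * 100    # roughly USD 560 million

# '500 times less than Grok' implies:
grok_training_usd = deepseek_training_usd * 500        # roughly USD 2.8 billion

# Inference at ~3% of OpenAI's price is the iPhone analogy:
openai_bill_usd = 1000.0                 # stand-in inference bill (the 'iPhone')
deepseek_bill_usd = openai_bill_usd * 0.03             # about USD 30

print(f"Implied frontier training cost: ~USD {frontier_training_usd / 1e6:.0f} million")
print(f"Implied Grok training cost: ~USD {grok_training_usd / 1e9:.1f} billion")
print(f"A USD {openai_bill_usd:.0f} inference bill becomes ~USD {deepseek_bill_usd:.0f}")
```

The implied totals are of the right order of magnitude for frontier-model training runs, which is what makes the 1% figure so striking.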

Additionally, smaller models require less energy for training and operations, making them more environmentally friendly. This shift from ‘bigger is better’ to ‘small is beautiful’ in AI will have several significant impacts:

Reality check for the artificial general intelligence (AGI) narrative: Since the launch of ChatGPT in November 2022, there has been widespread speculation about the arrival of AI that can think and act like humans in any field. As AGI is not likely to emerge soon, there is an increasing narrative shift towards AI agents (see below) as a substitute for the envisaged AGI. This reframing of the AGI discussion will accelerate in 2025. 

The rise of AI agents: An AI agent is a system or program capable of autonomously performing tasks on behalf of a user or another system by designing its own workflow and utilising available tools. This encompasses a wide range of functionalities beyond mere natural language processing, including decision-making, problem-solving, interacting with external environments, and executing actions (IBM).
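The definition above can be illustrated with a minimal tool-using loop. Everything in this sketch (the stand-in tools, the fixed two-step plan) is hypothetical; a real agent would generate its plan with an LLM and call real services.

```python
# Minimal sketch of an AI agent: it runs a workflow of tools to complete a
# task on the user's behalf. The tools below are trivial stand-ins.

def summarise(text: str) -> str:
    """Stand-in tool: truncation pretending to be a summary."""
    return text[:40] + "..."

def draft_reply(summary: str) -> str:
    """Stand-in tool: turn a summary into a draft reply."""
    return f"Dear colleague, regarding: {summary}"

TOOLS = {"summarise": summarise, "draft_reply": draft_reply}

def agent(task: str, document: str) -> str:
    """Execute a tiny workflow: summarise the input, then draft a reply."""
    plan = ["summarise", "draft_reply"]   # a real agent would generate this plan
    result = document
    for step in plan:                     # run each chosen tool in sequence
        result = TOOLS[step](result)
    return result

reply = agent("answer the meeting report",
              "The committee met on Tuesday to discuss the draft agenda and next steps.")
print(reply)
```

The point of the sketch is the loop: the agent, not the user, decides which tools to run and in what order, which is what separates agents from plain chatbots.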

A combination of AI, human expertise, and specific use cases will converge around AI agents, which will dominate the AI landscape in 2025. As in previous technological phases, there will be a ‘tech hype’ or tsunami of AI agents promising solutions to all our problems. The best way to navigate this hype is to choose AI agents based on practical needs—whether a new tool is merely nice to have or genuinely helpful in tasks ranging from drafting diplomatic agreements to arranging protocol details for official dinners or summarising meeting reports.

Growth of bottom-up AI: Affordable and smaller AI models will facilitate the growth of bottom-up AI, addressing the specific needs of communities, companies, and countries. This approach is both technically more feasible and financially more viable. Moreover, it grounds increasingly abstract discussions about ethics and biases in local cultural contexts. Local communities can develop AI that reflects their unique cultural and ethical values while protecting data and knowledge in simple yet effective ways.

Open-source AI wins: The open-source approach has won the competition against closed models. The 2023 narrative that open-source AI poses unique dangers has proven unfounded, and the digital world is returning to openness as the preferred approach, even though the AI industry initially pushed for closed models, citing concerns about misuse. In addition to platforms in the USA and Europe, China is becoming a major player in open-source AI: Qwen 2.5 is currently the most powerful open-source model available, and open source will be established as the dominant approach.

Necessity is the mother of AI invention as well: DeepSeek developed one of the most powerful AI models using a fraction of the funding available to major companies and older-generation Nvidia GPUs, which are still exportable to China. These limitations were overcome thanks to the innovative approach and creativity of the development team.

AI is becoming a commodity: Advances in technology have made it possible to develop large language models (LLMs) and AI platforms with limited resources. The almost daily development of new LLMs has led to what is described in China as the ‘war of a hundred models’.  Affordable AI is growing rapidly worldwide.

AI transformation is a complex task: In 2025, many businesses and organisations will search for a formula for shortcuts along the Gartner hype cycle towards the plateau of productivity.

AI transformation: The commodity within, the cultural shift beyond

AI is an affordable commodity, but AI transformation is very ‘expensive’.

At first glance, this statement seems paradoxical. How can something so readily available come at such a high price? Yet it captures the dilemma facing businesses, governments, and organisations today.

AI has become accessible to many—you can create an AI chatbot in hours—but unlocking its potential requires far more than technology. It demands a shift in professional cultures, a break from old routines, and embracing new ways of thinking and problem-solving.

Effective AI adoption isn’t about purchasing the latest software or algorithms; it’s about reshaping how we work, collaborate, and innovate.

AI transformation requires challenging the status quo, rethinking long-held practices, and fostering a culture of continuous learning and adaptability. But the rewards are immense. 

The good news? This journey isn’t just about efficiency, profit, or technological shift; it’s a cultural and philosophical evolution of contributing to the well-being of our communities, countries, and humanity as a whole. Moreover, AI nudges us to reflect on what it means to be human—both ontologically and spiritually.

As we use AI effectively, we may find ourselves closer to answering some of humanity’s eternal questions of purpose and happiness.


The NVIDIA monopoly loses its grip

NVIDIA’s dominance is not as unshakeable as it appeared at the start of the year, due to several key developments. First, computing power is becoming less critical for AI performance compared to proprietary knowledge and data. Second, China has successfully developed domestic GPU technology sufficient for most of its AI needs, reducing its reliance on NVIDIA. Third, competitors like Google have unveiled new AI chips that directly challenge NVIDIA’s market hold.

Geoemotions take centre stage in AI developments

As the AI revolution shifts from raw computation to broad societal adoption, public attitudes and sentiment have become critical factors. This ‘geo-emotional’ divide gained prominence in 2025, with Asian societies showing greater excitement about AI, while the Anglosphere remains more cautious. At opposite ends of the spectrum are China, where 80% of the population is excited about AI developments, and the United States, where only 34% share that enthusiasm. This readiness—or reluctance—to embrace AI will fundamentally shape the transformation of businesses and societies in the years ahead.

January 2025 Forecast

In 2025, geography will play an increasingly significant role in shaping global politics and economies. The strength of national borders as barriers to the flow of capital, people, and goods will intensify. A central question will revolve around the global flow of data. So far, internet traffic has resisted significant fragmentation, but will this remain the case in 2025?

As digital geostrategy gains prominence, the influence of China and the USA in the digital realm will grow. However, the geostrategic landscape will not be purely bipolar. In certain sectors, such as digital trade, new power centres are emerging, particularly in the Global South, reflecting a more multipolar digital economy.

Geopolitics often dominate media coverage, focusing on the use of technology to advance security and other non-economic interests. Geoeconomics, on the other hand, centres on accumulating wealth and expanding markets. Meanwhile, emotions and perceptions help to explain why societies embrace or resist technology, reflecting a spectrum of enthusiasm, bias, and fear. 

Together, geopolitics, geoeconomics, and geoemotions shape the complex interplay between technology, society, and global power dynamics in the 21st century.


Geopolitics

Digital networks and AI developments are critical assets for countries worldwide. Thus, they become central to national security, the projection of power, and the protection of national interests. Over the last few years, political considerations have been prioritised over economic interests. This is particularly noticeable in various export restriction regimes on semiconductors, limited market access, the deployment of submarine cables, and the launching of satellites. 

Semiconductors

The growth of the semiconductor industry will slow to 12.5% in 2025, down from 16% in 2024, according to World Semiconductor Trade Statistics (WSTS).

New AI models that require less processing power for training could reduce demand for GPUs.

The USA is starting to bring the semiconductor industry back home with the opening of a fabrication plant (or ‘fab’) in Arizona. As the USA restricted semiconductor exports, China invested heavily in its local industry. China will focus on producing less advanced but essential, so-called ‘mature-node’ chips with wider economic use.

Until now, AI has been processed in powerful data centres. In 2025, we will see the emergence of ‘AI factories’, purpose-built to train AI models. Nvidia is preparing a new Blackwell GPU for AI processing.

EVENTS

First meeting of the International Advisory Body for Submarine Cable Resilience | late February 2025, Abuja

IPCC Plenary Meeting 15-17 April 2025, Montreal


Geoeconomics

In 2025, the trend of ‘securitisation’ of the economy will significantly impact the tech sector, compelling companies to align their economic interests more closely with their home countries’ political and security priorities. This shift is driven by the growing recognition of technology as a strategic asset in geopolitics. 

Currently, tech companies are wielding unprecedented power, often surpassing the GDPs of entire nations. For instance, Apple’s market capitalisation in January 2025 was US$3.524 trillion, Nvidia’s was US$3.262 trillion, and Microsoft’s was US$3.101 trillion, each of which is comparable to the total 2023 GDP of the entire African continent (US$3.1 trillion) and close to the GDPs of the UK (US$2.27 trillion), France (US$3.03 trillion), and India (US$3.57 trillion). Other tech giants like Amazon, Meta, Alphabet, Alibaba, and Tencent have similar income and profit levels, further emphasising the economic clout of the tech industry.

The power of tech companies extends far beyond technology, permeating various aspects of global society and governance, including social influence, data centralisation, and political impact. No company in the history of humanity, including the East India Company, has had such combined power extending beyond the economy to social and political realms.

In 2025, tech giants will likely face pushback from national governments and local companies. For example, Flipkart in India, which controls one-third of the market, and Mercado Libre, an Argentinian firm, are challenging global tech giants in their respective regions. According to The Economist, this trend is also supported by bottom-up financing by local financial initiatives such as M-Pesa in Africa and Nubank in Brazil and South America. 

Furthermore, governments worldwide are increasingly imposing ‘data localisation’ requirements, which will significantly impact the business models of tech companies that rely on the free flow of data across national borders. 

In summary, 2025 will be a pivotal year for the tech sector as it navigates the pressures of aligning with national security priorities, adapting to regulatory changes, facing competition from regional players, and addressing the financial inclusion needs of underserved populations in the Global South.


Geoemotions

In 2025, geoemotions will affect the acceptance and growth of AI. If societies fear AI, the use of the technology will be limited. A 2024 Ipsos study positions countries along two dimensions: nervousness and excitement about AI.

The image shows a scatter graph depicting the nervousness and excitement about AI of a number of countries. The Anglosphere is more nervous than excited, Europe is roughly equally excited and nervous, and Asia is more excited than nervous.

Generally speaking, societies from the Anglosphere, including the USA, are highly nervous and display low excitement about AI. This view is probably shaped by the strong media campaign about AI risks in 2023. On the opposite side are Asian societies with high excitement and low nervousness about AI. 

Similar YouGov research from September 2024 showed that the most positive attitude towards AI is in the UAE (60%), while the least enthusiastic is in the USA (17%). 

The image shows a bar chart depicting responses from a number of countries about perceptions of AI tools. UAE has the most positive response whilst the USA has the most negative response about perception of AI tools.

In 2025, these trends will likely change as Trump’s administration shifts focus from safety issues towards AI opportunity narratives. This political shift is likely to influence media and academic coverage of AI.

Are we close to a significant fragmentation of the internet?

Monitoring update will be provided in February 2025.

What consequences will the ongoing chip war between the world’s largest technological powers have?

Monitoring update will be provided in February 2025.

Beyond statements and principles, which concrete actions (and by whom) will help strengthen the resilience of submarine cables?

Monitoring update will be provided in February 2025.

How much power is too much? How can we keep the power held by big tech companies in check?

Monitoring update will be provided in February 2025.


Temporary consensus and the postponement of important decisions

The UN negotiations on WSIS+20, which determined the future of the World Summit on the Information Society, successfully produced a consensus report. While any agreement in the current geopolitical climate is a success, the outcome was less a policy breakthrough and more a search for the lowest common denominator. Critically, WSIS+20 failed to resolve the risk of creating two parallel policy processes—the Global Digital Compact (GDC) and WSIS itself—addressing similar issues, leaving the search for a unified solution to future negotiations.

A shift in focus: From existential to existing and exclusion risks

Media focus has shifted away from existential risks, as speculative timelines for an AI superintelligence are constantly pushed back. Instead, existing risks—particularly to education and employment—have gained prominence. Meanwhile, the exclusion risk of AI monopolisation by a few corporations is being counteracted by the proliferation of the technology itself. The rise of bottom-up, open-source AI development is diminishing the relevance of such monopolies. In response, entities like OpenAI are pushing back, arguing for the necessity of centralised, proprietary models on grounds of security and geopolitical competition, notably with China.

January 2025 Forecast

After a year of intense negotiations on AI and digital governance in 2024, 2025 will shift the focus to implementation as the world works to put resolutions, agreements, and treaties into action. This should be a year of clarity and consolidation, with two key themes echoing across the AI and digital realms:

  • Avoiding governance duplication by synchronising the Global Digital Compact (GDC) and the World Summit on the Information Society (WSIS) framework, particularly in the context of the WSIS+20 review.
  • Deflating the ‘AI governance bubble’ that has ballooned in recent years.

Syncing GDC and WSIS

The WSIS framework, shaped between 2003 and 2005, has been tested and refined over two decades. The GDC, introduced in 2024, represents a fresh, dynamic approach to digital governance. Metaphorically, WSIS is a marathon, while the GDC is a sprint.

The main challenge in 2025 will be to sync the experience and expertise of the WSIS framework with the new energy of the GDC. This alignment is crucial to avoid duplication and ensure that both frameworks complement each other. The WSIS+20 review in 2025 will provide the perfect context for this synchronisation, offering an opportunity to integrate lessons from the past with the urgency of the present.


Sorina Teleanu provides a detailed analysis of the Global Digital Compact, including:

  • Main issues
  • Negotiating history
  • Links to other governance processes
  • Actors
  • Implementation and follow-up
  • AI governance initiatives



Hanoi: A new toponym in cyber politics

For years, ‘Budapest’ has been a key toponym in cybersecurity, named for the Hungarian city where the 2001 Council of Europe Cybercrime Convention was adopted. This October, Hanoi emerged as a new landmark, hosting the signing of the UN Cybercrime Convention. The treaty will enter into force 90 days after its 40th ratification.

January 2025 Forecast

The Hanoi Cybercrime Convention

Following Budapest, host of the 2001 Council of Europe Cybercrime Convention, Hanoi will emerge as the next toponym in cybercrime parlance. The Vietnamese capital will host the signing ceremony for the new UN Cybercrime Convention, which will remain open for signature until 31 December 2026. The convention will enter into force 90 days after the 40th ratification.

At the same time, the Ad Hoc Committee on Cybercrime decided that it would complete its work on the convention by holding a session in Vienna, lasting up to five days, one year after the convention’s adoption. Since the convention was adopted on 24 December 2024, this follow-up may occur by the end of 2025. During this session, the committee will draft the rules of procedure for the Conference of the States Parties and other rules outlined in Article 57 of the convention. 

We expect the convention to enter into force in 2025, with 40+ ratifications. Though this might be tricky, there is considerable diplomatic and political momentum, a need to address growing cybercrime, and relatively broad satisfaction with the truly global nature of this binding agreement, adopted by consensus in the UN (which is not common these days).

UN Cybersecurity partnership framework

2025 will be an important year for cybersecurity negotiations—the mandate of the UN Open-Ended Working Group (OEWG) on the security of and the use of information and communications technologies is ending in July 2025 with its 11th session. What will follow is a new mechanism for dealing with cybersecurity under the UN auspices. 

Currently, states disagree on the scope of thematic groups in the future mechanism: while some countries insist on keeping the traditional pillars of the OEWG agenda (threats, norms, international law, confidence-building measures, and capacity building), others advocate for a more cross-cutting and policy-oriented nature for such groups. There is also uncertainty regarding the modalities of multistakeholder engagement in the future mechanism. Agreement on these issues is key if states want to hit the ground running and not get tangled in red tape at the start of the next mechanism.

In the OEWG report, which should be adopted by July 2025, there are a few points where we can expect consensus. Confidence-building measures (CBMs) are gaining wider support, including the establishment of the Global Points of Contact (POC) Directory and a capacity-building portal. We may also have some implementation checklists for the existing 11 cyber norms. Support for a voluntary fund for capacity building is not yet certain. The protection of critical infrastructure and the impact of AI are likely to feature highly in follow-up processes.

The main controversies in the remaining OEWG negotiations will be around modalities for the future cybersecurity process at the UN (i.e. institutional dialogue): should a future cybersecurity architecture deal mainly with implementing existing norms or with negotiating new norms and legally binding instruments? Other open issues include the choice of topics, the number of thematic groups, and the inclusion of other actors through multistakeholder provisions (which will remain a central stumbling block). One thing appears clear: the process will continue, and the next mechanism will likely be permanent rather than time-limited.

By July, the following outcomes are possible:

  • Consensus around a least-common-denominator vision of the future process, especially on implementing existing mechanisms vs. negotiating new (and binding) ones, which will be open to interpretation by future negotiators.
  • In the absence of such consensus, two resolutions might be tabled, as happened a few years ago: one (sponsored by the US, the EU, and their partners) establishing a POA-like mechanism focused on norms implementation and capacity building; the other (sponsored by Russia and its partners) establishing a continuous OEWG focused on negotiating binding norms and agreements.

Although time is not on the negotiators’ side, there will be quite a few activities in the next six months, giving states more opportunities for discussion. An informal town hall meeting to discuss the next mechanism will be held before the tenth substantive session scheduled for February. The OEWG’s schedule for the first quarter of 2025 includes the Global POC Directory simulation exercise, an example template for the Global POC Directory, and reports on the Global ICT Security Cooperation and Capacity-Building Portal and the Voluntary Fund. Further, the chair can schedule additional intersessionals if deemed necessary.

Encryption

The race between cryptography and quantum computing

In 2025, governments and companies will ramp up preparations for quantum computing, a technology poised to render current encryption obsolete. To address this, the US National Institute of Standards and Technology (NIST) introduced a post-quantum cryptography standard in 2024, featuring algorithms designed to withstand quantum attacks. This proactive policy approach demonstrates how society can tackle the uncertainties of emerging technologies like quantum computing.

Meanwhile, the EU’s ‘Chat Control’ initiative dominates current encryption policy debates. It would require tech platforms to scan content for illegal activities, but it faces strong opposition over privacy and human rights concerns. In December 2024, 10 member states rejected the proposal. Efforts to revive it through ‘Chat Control 2.0’, which introduces ‘upload moderation’ requiring user consent for message scanning, are unlikely to succeed. Critics argue it fails to address the core problem: undermining encryption and creating security vulnerabilities. Major platforms like WhatsApp, Signal, Telegram, and Threema have threatened to exit the EU market if forced to weaken encryption protections.

Military uses of AI and LAWS

The implications of AI for international peace and security are typically tackled separately from the broader discussions on AI governance at the UN level. In December 2024, the UN General Assembly adopted the first-ever resolution on AI in the military domain, affirming the applicability of international law to AI-enabled military systems and encouraging member states to convene exchanges—at multilateral and multistakeholder levels—on the responsible application of AI in the military domain.

In 2025, the UN Secretary-General will have to follow up on this resolution, as he was requested to ‘seek the views of member states and observer States on the opportunities and challenges posed to international peace and security by the application of AI in the military domain, with specific focus on areas other than lethal autonomous weapons systems (LAWS)’; a substantive report summarising those views and cataloguing existing and emerging normative proposals will have to be submitted to the General Assembly at its 80th session, starting in September 2025.

The Group of Governmental Experts (GGE) on emerging technologies in the area of LAWS (convened yearly since 2017) will continue its work in 2025, with two meetings planned for March and September. The group is tasked with ‘considering and formulating, by consensus, a set of elements of an instrument, without prejudging its nature, and other possible measures to address emerging technologies in the area of LAWS’.

In 2024, the GGE worked on a so-called ‘rolling text’, which outlines provisional rough consensus on several issues, including:

  • the characterisation of LAWS;
  • the applicability of international humanitarian law;
  • human control and judgement as essential with regard to the use and effects of LAWS;
  • prohibitions on the use of LAWS, including a prohibition on employing LAWS that operate without context-appropriate human control and judgement;
  • obligations for states prior to potential employment and, as applicable, throughout the entire life cycle of LAWS;
  • obligations for states to ensure human responsibility and accountability.

International humanitarian law

With the increasing ‘digitalisation’ of ongoing conflicts, the applicability of international humanitarian law (IHL) in the cyber realm is set to become more prominent. A 2024 report by the International Committee of the Red Cross (ICRC) highlights two key challenges:

  1. Clarifying legal grey zones, such as those involving hybrid warfare and proxy warfare.
  2. Applying IHL principles to emerging technologies used in warfare.

In 2025, IHL and cyber conflicts are likely to be central to several cases before the International Court of Justice (ICJ), including South Africa vs. Israel, Nicaragua vs. Germany, and Ukraine vs. Russia.


Tech and a global decline of human rights

The protection of tech-related human rights deteriorated in 2025, mirroring a broader global decline. The Trump administration deprioritised inclusive frameworks—including gender and other rights—while maintaining a singular focus on freedom of expression. Concurrently, major technology companies systematically dismantled content governance policies that had been built around human rights principles.

The narrow focus of AI rights discourse

While AI remained prominent in human rights discussions, the debate was largely confined to traditional narratives of bias and algorithmic ethics. However, the human rights community devoted less attention to AI’s profound impact on other fundamental areas—such as education, cognitive development, and social cohesion—that define the very fabric of communities.

January 2025 Forecast

According to all indications, in 2025, Trump’s presidency will deprioritise human rights compared to the Biden administration’s focus. As tech companies retreat from content moderation, they will likely downplay the impact of their platforms on human rights.

EU countries, by contrast, will likely increase their focus on human rights in the digital age, addressing issues such as AI ethics, surveillance, and the impact of technology on privacy and freedom of expression. The EU will aim to ensure the global relevance of its digital regulations: the GDPR, the AI Act, the DSA, etc.

AI will rise on the agenda of the UN Human Rights Council and other initiatives and organisations dealing with human rights. AI will bring new angles to reshaping ‘traditional’ human rights, such as freedom of expression and privacy protection. In addition, it will raise new types of human rights dilemmas. For instance, human identity will become highly relevant as AI is used to impersonate individuals by mimicking their appearance and voice. Here, the dilemma is whether identity can be covered by privacy protection or whether it will require a new set of legal and policy rules.

The rapid development of neurotechnologies, spurred by AI and biotech advancements, will bring neurorights to the forefront of human rights agendas. The 2024 report on neurotechnology and human rights of the UN Human Rights Council Advisory Committee, along with UNESCO’s work on the ethics of neurotechnology, will likely catalyse new international norms and regulations to protect cognitive liberty, mental privacy, and the integrity of the human mind.

While AI will bring risks across human rights, there are areas where AI can help realise human rights. For example, AI can and should play a crucial role in increasing the well-being of people with disabilities. In the governance realm, the main focus should be on developing usability standards for people with disabilities. 

The main regulatory development on technology and disabilities will be the entry into force, on 28 June 2025, of the European Accessibility Act, which will legally oblige businesses to provide equal access to digital products and services.



The AI Bubble: Inflated, but not burst (yet)

The AI bubble has expanded dramatically, fuelled by both soaring expectations and immense investment. By the end of the year, a shared understanding emerged of the significant gap between the capital deployed and the relatively low revenue generated by AI. The critical question of how to manage this financial overhang has been postponed to 2026.

The securitisation of the tech economy

The traditional division between digital technology and national security has continued to blur. ‘Digital economic security’ is now central to policy agendas worldwide, leading to a growing web of sanctions and restrictive regimes governing cross-border technological exchange.

Cryptocurrency’s counterintuitive year

Bitcoin’s value trajectory in 2025 defied easy prediction. Despite the arrival of a crypto-friendly Trump administration and the introduction of favourable US regulations—factors expected to drive a rapid price increase—the value of Bitcoin actually declined, falling from $94,439.99 at the start of the year to $88,854.54 by 31 January 2025. This highlighted the complex and often counterintuitive dynamics of the cryptocurrency market.
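For context, the decline implied by the two figures cited above is modest in percentage terms; a back-of-the-envelope calculation (not from the source):

```python
start, end = 94_439.99, 88_854.54  # USD, the two figures cited above
decline_pct = (end - start) / start * 100
print(f"{decline_pct:.2f}%")  # -5.91%
```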

January 2025 Forecast

In 2025, geopolitical tensions, technological developments, trade barriers, and industrial policies will affect the digital economy. The resilience of the digital economy will face three critical tests in 2025:

1. Can data flow freely in an economically fractured world? So far, the internet and digital networks have resisted significant fragmentation, even as the flow of capital, goods, and services has fractured. The success or failure of this test will shape not only the future of the digital economy but, more importantly, the internet itself.

2. Will the AI bubble burst in 2025? This risk stems from massive investments in AI and its limited impact on businesses and productivity. While significant funding has fuelled the development of AI models, driving the market capitalisation of companies like Nvidia to new heights, real-world adoption of AI in business and productivity remains low. The risk of an ‘AI bubble burst’ grows with the emergence of cost-effective models, such as DeepSeek, which are developed and deployed at a fraction of the cost of those from OpenAI, Anthropic, and other mainstream AI platforms.

3. Will the digital economy become securitised? Current geopolitical trends are increasingly integrating tech companies into nation-states’ security and military frameworks. The growing securitisation of the tech industry will likely trigger pushback worldwide, as the involvement of foreign tech companies in internal markets will no longer be evaluated solely on economic grounds.

Digital taxation 

After the OECD’s failed digital tax negotiations in mid-2024, countries like Canada, India, France, and Germany will likely roll out digital services taxes (DSTs). This patchwork of regulations can spark tensions, especially with the USA, as the Trump administration shields tech giants from foreign taxation. DSTs won’t stand alone; they’ll become bargaining chips in broader trade wars, entangled in Trump’s tariffs and restrictions. The digital economy, once a unifying force, risks becoming a battleground in a fragmented world.

Worth paying attention to in 2025 is an intergovernmental negotiating committee tasked with drafting a UN Framework Convention on International Tax Cooperation and two early protocols. The UN General Assembly decided to establish this committee in December 2024; the committee is to meet in 2025, 2026, and 2027, and have an organisational session in February 2025. 

Its work will be guided by a UNGA-approved terms of reference (ToR) for the UN Framework Convention on International Tax Cooperation developed by a dedicated Ad Hoc Committee. According to the ToR, such a convention is expected to tackle issues highly relevant in the context of the digital economy, such as fair allocation of taxing rights (including equitable taxation of multinational enterprises), and addressing tax avoidance and evasion.

Moreover, the two early protocols that are to be developed simultaneously with the convention are to deal specifically with digital issues: one should address taxation of income derived from the provision of cross-border services in an increasingly digitalised and globalised economy, while the second will have taxation of the digitalised economy among priority areas. 

Digital trade

The Joint Initiative on e-commerce at the World Trade Organisation (WTO) hangs in the balance. After five years of talks, 82 members of the Joint Statement Initiative (JSI) agreed on a stabilised text of the ‘Agreement on Electronic Commerce’. Nevertheless, a final agreement remains elusive. Although some digital economy powerhouses, such as the EU and China, are on board, other countries, such as the USA, Brazil, and Indonesia, have not yet agreed.

Turkey has joined the group of countries that oppose JSIs as a negotiating instrument. In addition, even if Australia, Japan, and Singapore (the three facilitators of the JSI) broker a deal, it will be far from the ambitious vision set years ago, which included topics such as data flows and source code. In 2025, these topics will likely continue to be regulated outside the WTO through preferential trade agreements and Digital Economy Agreements (DEAs). 

A JSI agreement would still be valuable in fostering the harmonisation of global rules, especially when enabling and facilitating e-commerce. In the present fragmentation scenario, an agreement would be a victory of multilateralism amid global failures like the OECD tax collapse.

Antitrust

As antitrust processes take time, most of the trends from last year will continue in 2025.

In the USA, antitrust pressure on tech companies will decrease, as the incoming Trump administration has already hinted.

The main development will be the court ruling by mid-2025 on Chrome and Android divestitures from Google. Google also faces an antitrust investigation mainly centred on the dominance of the Chrome browser and search engines in Japan; Canada’s Competition Bureau initiated a similar case against Google. Google has announced further changes to its search results in Europe in response to complaints from smaller competitors.

Meta’s antitrust lawyers will also have a busy year. The EU hit Meta with a fine of nearly 800 million euros for anti-competitive practices related to its Marketplace features. India’s Competition Commission imposed a US$25.4 million fine and restricted data-sharing between WhatsApp and other Meta-owned applications for five years. Apple also faces an anti-monopoly probe in India.

In the EU, tech companies face antitrust challenges under the Digital Markets Act (DMA).

The EU is expanding anti-monopoly actions in the AI realm by scrutinising two partnerships: Microsoft–OpenAI and Google–Samsung.

Antitrust is also used in battles among tech companies themselves. Elon Musk has expanded his legal battle against OpenAI by adding Microsoft to his lawsuit, accusing both companies of engaging in illegal practices to monopolise the generative AI market.

Crypto and digital currencies

Overall, the digitalisation of currencies and finance will receive a new boost with a ‘crypto-friendly’ administration in the USA. Countries will continue introducing digital versions of their currencies.

Research by the Atlantic Council reveals that all G20 nations are now exploring central bank digital currencies (CBDCs), with 44 countries currently piloting them, up from 36 last year. Authorities are accelerating these efforts in response to decreasing cash usage and the potential threat from cryptocurrencies like Bitcoin and big tech companies.

Notable growth has been observed in the CBDCs of the Bahamas, Jamaica, and Nigeria, while China’s digital yuan (e-CNY) has seen its transaction value almost quadruple to 7 trillion yuan (US$987 billion). The European Central Bank has also launched a multi-year digital euro pilot.

Future of jobs 

So far, AI has led to more jobs being created than displaced. According to the World Economic Forum’s (WEF) Future of Jobs Report 2025, this trend will likely continue, with 170 million new roles set to be created and 92 million displaced, resulting in a net increase of 78 million jobs by 2030. 
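The WEF headline numbers reconcile as straightforward arithmetic:

```python
created, displaced = 170, 92  # millions of roles by 2030, per the WEF report
net_new_jobs = created - displaced
print(net_new_jobs)  # 78 (million net new jobs)
```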

The main challenge in AI transformation will be closing the skills gap with the shift towards AI, big data, and cybersecurity skills. According to WEF, 59% of the workforce will require reskilling and training by 2030, and investment in training and education will be high on the policy agenda of entities such as UNESCO. In addition, the International Labour Organisation (ILO) should prioritise creating standards and policies for AI and automation to ensure that AI technology complements rather than replaces human labourers.


The 2025 prediction was prescient in identifying the key domains. In retrospect, the year confirmed that technical standards are now inextricably linked to geopolitics, economic resilience, and ethical governance. They functioned not just as a ‘safety net’ but as strategic assets and diplomatic tools.

The grand narrative of 2025 was the struggle between forces using standardisation to maintain global interoperability and those using it to construct sovereign digital realms with controlled gateways. The outcome was not a single, unified standards regime, but a more complex, layered landscape where foundational protocols like TCP/IP remain universal, while higher-layer standards increasingly reflect the world’s political and economic divisions.

AI standardisation: The push for open-source AI standards emerged as a particularly vibrant and contentious domain, aiming to prevent the locking down of foundational technology by a few corporate actors.

Digital Public Infrastructure (DPI) standards: Standardisation became the linchpin for donor funding, vendor interoperability, and avoiding vendor lock-in. The ITU and World Bank were deeply involved in creating modular DPI standards frameworks, turning this into one of the most impactful standards stories of the year, directly affecting billions of citizens.


January 2025 Forecast

In 2025, technical standards will become more relevant as a ‘safety net’ for global networks during economic and political fragmentation. Standards ensure the interoperability of apps and services across these divides. TCP/IP (Transmission Control Protocol/Internet Protocol) remains the glue that keeps the internet together despite political and economic fragmentation.

AI standardisation will gain additional momentum in 2025, as the three main international standard development organisations –  the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU) – have announced a set of initiatives including the International AI Standards Summit and creating an AI standards database.

Other notable standardisation initiatives will focus on open-source AI, digital public infrastructure, mobile networks like 6G, and brain-computer interfaces. 

EVENTS

AI Standards Hub Global Summit | 17-18 March 2025, London

International AI Standards Summit | 2-3 December 2025, Seoul

Can technical standardisation processes be kept separate from geopolitical tensions?

Monitoring update will be provided in February 2025.

What does it take to ensure that AI technical standards can serve as a meaningful governance tool?

Monitoring update will be provided in February 2025.


In 2025, social media moderation is undergoing a radical shift as major platforms enthusiastically adopt a “hands-off” approach, replacing centralised fact-checking with crowd-sourced systems like Community Notes. This move is primarily driven by a desire to reduce the substantial costs and liabilities associated with maintaining large trust and safety teams.

In effect, platforms are transferring the burden of content judgement to users, betting that collective wisdom can ensure reliability. However, this experiment raises critical questions about its effectiveness against AI-generated misinformation, coordinated manipulation, and the overall decline in content quality, as platforms effectively abdicate their traditional gatekeeping role.

In stark response to this perceived platform failure, Australia has implemented one of the world’s most restrictive policies: a ban on social media use for under-16s. This drastic measure reflects a complete loss of trust in platforms’ self-regulation and safety tools. The agreement of Meta and TikTok to comply underscores a pragmatic acceptance of state-imposed access restrictions, validating a regulatory model focused on user protection through prohibition rather than improving the online environment itself.

This dynamic fuels the central geopolitical tension in digital governance: the clash between the EU and US-based tech giants. The EU, through fines such as the one levied against Platform X for breaches of the Digital Services Act, is aggressively enforcing a legalistic, rights-based framework that demands robust content moderation. This directly counteracts the platforms’ cost-saving “hands-off” strategy.

The resulting conflict creates a significant regulatory battleground, pitting the EU’s stringent oversight against the US’s more libertarian tech ethos and the platforms’ commercial interests, leading to a fragmented and contentious global internet.

January 2025 Forecast

In 2025, content governance will be altered by the shift from fact-checking to a community notes approach by social media platforms. The content policy landscape will shift from its current heavy focus on content regulation to a more relaxed approach.

The ‘hands off’ approach was enthusiastically adopted by tech companies, as it reduces the cost of maintaining a complex system of policies, organisations, and networks formed over the last 10 years, involving, according to some estimates, close to 30,000 people who monitor content on social media platforms. In addition, social media platforms are transferring moderation responsibilities to users.

The underlying question is whether this shift from fact-checking to a system of community notes will address the problem of quality and reliability of content on social media platforms. We will continue to monitor developments in 2025 around the following questions. 

Are community notes enough to avoid misuse of social media platforms?

Community notes systems, such as X’s (formerly Twitter) Community Notes (formerly Birdwatch), are designed to provide additional context or corrections to potentially misleading content. While they are a step forward in promoting transparency and user-driven moderation, there are views that they are not sufficient on their own to prevent the misuse of social media platforms.

How are the EU and other countries reacting to the changes in social media platforms’ content moderation? 

The EU has a regulatory tool—the Digital Services Act (DSA)—that can be applied to major social media platforms, with potential fines of up to 6% of a platform’s worldwide turnover for breaches of the DSA’s provisions.
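To illustrate the scale of the 6% cap mentioned above, a minimal sketch (the turnover figure is hypothetical, not from the source):

```python
def max_dsa_fine(worldwide_turnover_eur: float) -> float:
    # The DSA caps fines at 6% of a platform's worldwide annual turnover.
    return 0.06 * worldwide_turnover_eur

# A platform with a hypothetical EUR 100 billion turnover could face
# a fine of up to EUR 6 billion:
print(f"{max_dsa_fine(100e9):,.0f}")
```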

The question of content moderation goes beyond immediate policy issues to deeper cultural roots and historical context. For example, EU societies have a lower tolerance for hate speech and disinformation, and European courts addressed the first cases related to online content, including the CompuServe case in Germany and the Yahoo! case in France.

Currently, one of the main focuses is the impact of TikTok on the results of the Romanian elections. The European Commission has initiated formal proceedings to assess whether the Community Notes system provides content moderation effective enough to satisfy the DSA’s provisions on mitigating systemic risks.

What are age verification regulations and policies for access to social media platforms?

1. United Kingdom

  • Current Policy: The UK’s Online Safety Act 2023 mandates ‘highly effective’ age assurance to prevent children from accessing harmful content such as pornography, self-harm, and suicide-related material. Ofcom has issued guidance and statutory codes, with the main implementation date (‘AV-Day’) set for July 2025.

2. USA

  • Current Policy: Several states, including Florida and South Carolina, have enacted age verification laws for pornographic websites. The Supreme Court is reviewing the constitutionality of Texas law HB 1181, with a decision expected by July 2025.

3. EU

  • Current Policy: The Digital Services Act (DSA) regulates very large online platforms (VLOPs) like Pornhub and xVideos, requiring robust age assurance measures. The EU is also piloting the EUDI Wallet for age verification.

4. Canada

  • Current Policy: Canada is reviewing Bill S-210, which mandates age verification for accessing adult content. The Office of the Privacy Commissioner (OPC) supports privacy-preserving age assurance methods.

5. Australia

  • Current Policy: Australia is advancing a landmark law that restricts social media access for children under 16 and is testing age assurance technologies, with results expected by June 2025. The trial evaluates methods like biometrics, parental consent, and app store age checks.

6. International Initiatives

  • Global Standards: The ISO/IEC 27566-1 framework for age assurance is nearing finalisation, with parts 2 and 3 expected in 2025. This standard will provide a unified approach to age verification, estimation, and inference.
  • Collaborative Efforts: The Global Age Assurance Standards Summit and the International Age Assurance Working Group are promoting interoperability, privacy, and regulatory consistency across jurisdictions.

Key Themes for 2025

  1. Legislation Implementation: Jurisdictions like the UK, the USA, and the EU will see new age assurance laws come into force, requiring platforms to adopt robust verification methods.
  2. Interoperability: Initiatives like AgeAware® will enable seamless, privacy-preserving age verification across platforms and borders.
  3. Global Standards: The adoption of ISO/IEC 27566-1 and IEEE 2089 will provide a unified framework for age assurance, ensuring consistency and reliability.
  4. Complementary Measures: Digital literacy, parental controls, and app store age checks will play a crucial role in protecting children online.

  5. Innovation and Advocacy: Continued innovation in age estimation and verification technologies, coupled with advocacy for child safety, will drive progress in 2025.

Who are the national authorities in charge of supervising social media platforms?

National authorities responsible for the supervision of social media platforms vary by country and region. Below is an overview of key authorities:

1. China

  • Central Cyberspace Affairs Commission (CAC): The CAC is China’s top internet watchdog, responsible for regulating online content, including social media platforms. It issues guidelines and enforces rules to ensure compliance with national laws, such as cracking down on misinformation, fake accounts, and illegal content.

2. EU

  • European Data Protection Board (EDPB): The EDPB provides guidelines and oversees the implementation of data protection laws, including those affecting social media platforms, under the General Data Protection Regulation (GDPR).
  • National Regulatory Authorities: Each EU member state has its own regulatory body. For example, Germany’s Federal Network Agency (BNetzA).

3. USA

  • Federal Communications Commission (FCC): While the FCC primarily regulates telecommunications, it also plays a role in overseeing aspects of online communication, including social media platforms, particularly in areas like net neutrality and broadband access.
  • Federal Trade Commission (FTC): The FTC enforces consumer protection laws and addresses issues like privacy violations and deceptive practices on social media platforms.

4. United Kingdom

  • Office of Communications (Ofcom): Ofcom is the UK’s communications regulator, with expanded powers to regulate online harms under the Online Safety Act 2023. It oversees social media platforms to ensure they remove illegal and harmful content.

5. India

  • Ministry of Electronics and Information Technology (MeitY): MeitY oversees the implementation of the Information Technology Act and related rules, including intermediary guidelines for social media platforms. It works with platforms to ensure compliance with content removal and data localisation requirements.

6. Germany

  • Federal Network Agency (BNetzA): BNetzA enforces the Network Enforcement Act (NetzDG), which mandates that social media platforms remove illegal content within strict timeframes. Non-compliance can result in significant fines.

7. Australia

  • Australian Communications and Media Authority (ACMA): ACMA regulates online content, including social media platforms, under the Broadcasting Services Act. It enforces rules related to harmful content and misinformation.

8. Brazil

  • National Telecommunications Agency (Anatel): Anatel oversees telecommunications and internet services, including social media platforms, ensuring compliance with national regulations.

9. Singapore

  • Infocomm Media Development Authority (IMDA): IMDA regulates online content and enforces the Protection from Online Falsehoods and Manipulation Act (POFMA), which targets misinformation on social media platforms.

10. South Africa

  • Independent Communications Authority of South Africa (ICASA): ICASA regulates electronic communications, including social media platforms, to ensure compliance with national laws.

What national policies exist for licensing and legal incorporations of social media platforms?

Several jurisdictions around the world require social media platforms to obtain licenses or establish local legal entities to operate within their borders. These requirements are often tied to national security, content moderation, and data localisation concerns. Below is a summary of key jurisdictions and their specific licensing or local entity requirements:


1. Malaysia

  • Licensing Requirement: Malaysia mandates that social media platforms and messaging services with over 8 million active users obtain an Application Service Provider Class (ASP(C)) licence from the Malaysian Communications and Multimedia Commission (MCMC). This regulation took effect on 1 January 2025 and aims to combat cybercrime and harmful content. Non-compliance can result in fines of up to RM500,000 and/or imprisonment for up to 5 years.

2. India

  • Local Entity Requirement: India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, introduced in 2021, require social media platforms with significant user bases to appoint local compliance officers, establish a physical presence in India, and comply with content removal requests within 36 hours. Platforms must also publish compliance reports every six months.

3. Turkey

  • Local Representation: Turkey requires social media platforms with more than 1 million daily users to appoint a local representative and store user data within the country. Failure to comply can result in fines, bandwidth throttling, or outright bans.

4. Russia

  • Data Localisation and Licensing: Russia mandates that social media platforms store user data locally and comply with content removal requests. Platforms must also register with Roskomnadzor, the federal communications regulator, and face fines or restrictions for non-compliance.

5. China

  • Strict Licensing and Localisation: China requires foreign social media platforms to obtain licenses and establish local entities to operate. However, most Western platforms like Facebook and Twitter are blocked, while domestic platforms like WeChat and Weibo are heavily regulated under China’s strict content moderation laws.

6. EU

  • Conditional Immunity: While the EU does not mandate licensing, its Digital Services Act (DSA) requires platforms to establish local points of contact and comply with strict content moderation and transparency rules. Platforms must also adhere to the General Data Protection Regulation (GDPR) for data handling.

7. Brazil

  • Local Legal Representation: Brazil requires social media platforms to appoint local legal representatives and comply with content removal requests, especially during elections. Failure to comply can result in fines or temporary bans.

8. South Korea

  • Licensing and Compliance: South Korea requires platforms to comply with its Information and Communications Network Act, which includes content moderation and data protection requirements. Platforms must also register with the Korea Communications Commission (KCC).

9. Singapore

  • Licensing for News Platforms: Singapore requires online news sites with significant local reach to be individually licensed under its Broadcasting Act, while the Protection from Online Falsehoods and Manipulation Act (POFMA) empowers authorities to issue content correction orders. While not specific to social media, these rules affect platforms like Facebook and Twitter.

10. Australia

  • Age Verification and Local Compliance: Australia’s Online Safety Act requires platforms to comply with content removal requests and implement age verification systems. Platforms must also appoint local representatives to handle regulatory matters.

What are the liabilities of social media companies for the following types of content: disinformation, incitement of violence, hate speech, pornography, copyright infringement, scams, blasphemy, and impersonation?

More monitoring information will be provided.

What national policies are in place concerning companies paying news platforms for content made available on social media platforms?

More monitoring information will be provided.

What regulations are in place for AI-generated content?

EU: The AI Act requires the labelling of AI-generated content.

USA: The Federal Trade Commission (FTC) has issued guidelines urging companies to disclose when content is AI-generated, particularly in advertising and marketing.

China: China has implemented strict regulations requiring platforms to label AI-generated content, especially deepfakes, and to obtain consent from individuals before using their likenesses.

Which practices do social media companies engage in when using technical tools like geolocation to filter content based on location and jurisdiction?

Social media companies employ various technical solutions, including geolocation, to filter content according to jurisdictional regulations. These practices are essential for complying with local laws, addressing cultural sensitivities, and managing licensing agreements.

For example, social media companies use geolocation to identify users in Germany when they must remove hate speech or other illegal content within 24 hours, as required under the NetzDG law.
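As a simplified illustration of the practice described above, a jurisdiction-aware filter maps a user’s IP-derived country to the content categories that must be restricted under local law. The country codes and rule sets below are hypothetical stand-ins, not actual platform policy:

```python
# Hypothetical sketch of jurisdiction-aware content filtering.
# Each country code maps to content categories that must be hidden
# under local law (illustrative rules only, not real policy).

JURISDICTION_RULES = {
    "DE": {"hate_speech", "illegal_content"},  # NetzDG-style obligations
    "SG": {"misinformation"},                  # POFMA-style correction orders
    "AU": {"harmful_content"},                 # Online Safety Act
}

def is_visible(content_labels: set[str], user_country: str) -> bool:
    """Return True if content may be shown to a user in this country."""
    blocked = JURISDICTION_RULES.get(user_country, set())
    # Hide the content if any of its labels match a locally blocked category.
    return not (content_labels & blocked)

# A post flagged as hate speech is hidden in Germany but remains
# visible in a jurisdiction with no matching rule.
print(is_visible({"hate_speech"}, "DE"))  # False
print(is_visible({"hate_speech"}, "BR"))  # True
```

In practice, platforms derive `user_country` from geo-IP databases or device signals, and the rule sets are far more granular (per-item court orders, time limits, appeal states), but the lookup-and-intersect structure is the same.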


The central policy process was the WSIS+20 review, which assessed two decades of digital progress with mixed results. While billions have been connected – raising all boats, though some faster than others – digital divides have persisted and widened, particularly with recent AI advancements. Alongside this, a major development was the Hamburg Declaration on Responsible AI for the SDGs, aiming to steer global ethical AI development through explicit principles of inclusivity and sustainability. AI commons featured prominently during the AI Summit in Paris (February 2025), which focused on open data in climate, health, and education, and on open-source AI platforms.

Sectoral development activities include:

January 2025 Forecast

In 2025, digital development will remain a central theme in international cooperation, particularly through the WSIS+20 process. The WSIS+20 High-Level Event, scheduled for July 2025 in Geneva, will discuss issues such as bridging the digital divide and advancing the use of digital technologies for development.

The formal WSIS+20 review meeting at the UNGA level (likely in December 2025) will not only assess 20 years of implementation of WSIS action lines in support of an inclusive information society, but will also outline future priorities.

Additionally, the Hamburg Declaration on Responsible AI for the SDGs will introduce new frameworks for ethical AI development, emphasising inclusivity and sustainability.

Inclusion is a cross-cutting development issue and cornerstone of the 2030 Agenda for Sustainable Development. Digital inclusion has the following main aspects:

Access inclusion

In 2025, efforts to ensure equal access to the internet and digital technologies will intensify, particularly in rural and underserved areas. The WSIS+20 process will play a pivotal role in bridging the digital divide by outlining development priorities and concrete actions.

For example, governments and private sector players are expected to invest in affordable connectivity solutions, such as low-cost satellite internet and community Wi-Fi networks, to ensure that marginalised communities can participate in the digital economy.

Financial inclusion

The financial inclusion sector is transforming in 2025, moving beyond mere access to financial services to focus on financial health and well-being. Initiatives like CGAP’s Financial Inclusion 2.0 emphasise integrating resilience, equity, and broader development goals, such as climate change mitigation and gender inclusion.

For example, sustainable finance models, such as green bonds and ESG-linked financial products, are expected to grow significantly, with the green bond market projected to reach US$2 trillion in 2025. However, targeted policies will be crucial in less developed financial systems to prevent digital financial inclusion from exacerbating gender disparities.

Economic inclusion

Economic inclusion in 2025 will focus on enabling full participation in the labour market and entrepreneurship opportunities. Digital platforms will be key in supporting small and medium-sized enterprises (SMEs), particularly in developing regions.

For example, open finance ecosystems will provide SMEs with access to credit, savings, and insurance services, fostering inclusive digital economies. Additionally, competency-based education models will align workforce skills with market demands, ensuring that individuals from diverse backgrounds can thrive in the digital economy.

Work inclusion

Work inclusion efforts in 2025 will prioritise equal access to careers in the tech industry and beyond. According to the WEF 2025 Job Report, 19% of companies are planning to shift from credential-based to skill-based hiring practices. This tendency is particularly noticeable in AI and tech sectors. Reskilling and upskilling will gain momentum, as securing jobs amid the major shift driven by AI and digital advancements will be (or should be) a priority.

Gender inclusion

Educating and empowering women and girls in the digital and tech realms. This includes initiatives to increase female participation in STEM fields, provide digital skills training, and ensure women access digital tools and resources equally.

Policy inclusion

Encouraging the participation of stakeholders in digital policy processes at the local, national, regional, and international levels. This includes fostering multistakeholder collaboration to ensure that digital policies reflect the needs and perspectives of diverse communities. The WSIS+20 process, for example, involves consultations with governments, private sector entities, civil society, and international organisations to shape inclusive digital governance frameworks.

Knowledge inclusion

Contributing to knowledge diversity, innovation, and learning on the internet. The rise of AI brings new relevance to knowledge diversity, as current AI models are often based on limited datasets, primarily from Western sources. In the coming years, communities will aim to develop bottom-up AI solutions that reflect their cultural and knowledge heritage. This includes initiatives to create diverse datasets, promote local AI innovation, and ensure that AI technologies are inclusive and representative of global perspectives.

Digital commons

AI and digital commons will feature prominently in 2025, starting with the AI Summit in Paris in February. Commons could be realised through specific initiatives such as a potential global data governance framework; open data initiatives in climate, health, and education; knowledge inclusion initiatives; and open-source AI platforms.


AI’s explosive energy demand marked last year. With data centre electricity consumption projected to double to 1,000 TWh – equivalent to Japan’s total usage – the industry pursued radical innovation and energy sourcing. Concrete initiatives moved beyond pledges to tangible projects. Microsoft’s plan to restart a nuclear reactor at Three Mile Island and Google’s order of advanced reactors from Kairos Power exemplified the turn to next-generation nuclear to ensure clean, baseload power. Simultaneously, operational tactics like power capping and carbon-aware computing became standard practice, actively shifting workloads to optimise for renewable energy availability.

In parallel, the circular economy transition gained regulatory teeth and technical sophistication. While global e-waste volumes remained alarming, binding frameworks emerged. The updated Basel Convention amendments, effective January 2025, established a stricter global regime for the movement of e-waste across national borders, despite implementation inconsistencies between OECD and non-OECD countries. Technologically, companies like Cisco set a benchmark by reusing or recycling nearly 100% of returned products, and AI itself was deployed to optimise recycling streams and identify reusable components, targeting waste reductions of 16-86%.

The water intensity of AI also became a measurable crisis and a driver of innovation. The statistic that global AI demand could account for half of the UK’s annual water usage underscored the scale. In response, advanced cooling technologies, such as immersion cooling and liquid-to-liquid heat exchangers, saw accelerated adoption, offering up to 55% reductions in water usage.

Regulatory pressure crystallised in mandates like Germany’s Energy Efficiency Act, requiring data centres to achieve 100% renewable energy by 2027, and in disclosure laws in California and the EU forcing transparency on water and energy use.

At the international level, the anticipated advisory opinion from the International Court of Justice (ICJ) on state climate obligations promised to add significant legal and moral weight to the discourse, potentially influencing how environmental harms linked to digital infrastructure are framed as human rights issues.

January 2025 Forecast

AI, digitalisation, and energy

The year 2025 will see digitalisation and environmental sustainability increasingly intertwined, with AI and digital technologies driving innovation while posing new challenges.

Key focus areas will include energy efficiency, circular economy practices, water security, and enhanced sustainability reporting.

Collaboration across sectors, robust governance, and strategic investments will be critical in achieving a sustainable and resilient future.

AI has significantly increased energy consumption, with data centres now consuming approximately 2% of global electricity, a figure comparable to the airline industry. By 2025, the energy demand from data centres is expected to double, reaching 1,000 terawatt-hours (TWh) annually—equivalent to the electricity consumption of Japan. This surge is driven by the exponential growth of AI workloads, particularly generative AI, which requires vast computational resources and energy-intensive cooling systems.

To address this, companies are exploring innovative solutions such as power capping (limiting processor power to 60-80% of capacity) and carbon-aware computing, which shifts workloads to times or locations with lower carbon intensity. Additionally, there is a growing emphasis on renewable energy sources. For instance, Microsoft plans to restart a nuclear power station at Three Mile Island to power its data centres, while Google has ordered advanced nuclear reactors from Kairos Power. These efforts aim to balance AI’s energy demands with sustainability goals.
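The carbon-aware computing approach described above can be sketched as choosing, among candidate data-centre regions (or time windows), the one whose grid currently has the lowest carbon intensity. The region names and gCO2/kWh figures below are invented for illustration:

```python
# Illustrative sketch of carbon-aware workload placement: route a
# deferrable job to the region whose electricity grid currently has
# the lowest carbon intensity (figures below are made up).

def pick_greenest_region(intensities: dict[str, float]) -> str:
    """Return the region with the lowest current carbon intensity (gCO2/kWh)."""
    return min(intensities, key=intensities.get)

# Hypothetical live readings, e.g. from a grid-intensity data feed.
current = {
    "us-east": 410.0,
    "eu-north": 45.0,   # hydro/wind-heavy grid
    "asia-se": 520.0,
}

print(pick_greenest_region(current))  # eu-north
```

Real schedulers combine this signal with latency, data-residency, and capacity constraints, and power capping adds a second lever by limiting processor draw (for example, to 60-80% of capacity, as the paragraph above notes) when grids are dirtiest.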

Circular economy and e-waste management

The adoption of circular economy principles will accelerate in 2025, focusing on product longevity, repairability, and recycling. This shift is critical as global e-waste is projected to reach 82 million tonnes by 2030. Companies like Cisco are leading the way by reusing and recycling nearly 100% of returned products, setting a benchmark for sustainable practices in the tech industry.

Moreover, AI-driven e-waste management strategies are emerging. For example, AI can optimise recycling by identifying reusable components and reducing waste generation. These innovations are expected to reduce e-waste by 16-86% through proactive management and circular economy practices.

However, in 2025, e-waste management remains a major challenge despite updated regulations. New Basel Convention amendments, effective 1 January 2025, require prior consent from importing and transit countries to prevent illegal dumping. OECD countries have also updated their e-waste guidelines to align with circular economy goals, offering a framework for trade with non-Basel parties like the USA. Yet disagreements over adopting the Basel amendments have left OECD countries to choose between existing OECD rules and stricter Basel controls, creating inconsistencies and complicating enforcement. These gaps risk enabling illegal dumping, especially in regions with weaker oversight.

Alarmingly, only about a quarter of e-waste is recycled properly, with most ending up in informal sectors or landfills. There is still a significant amount of work to be done before these regulations become a reality.

Data centres, AI, and water consumption

AI facilities are particularly water-intensive due to the high heat generated by GPUs and other hardware. A small 1-megawatt data centre can consume up to 6.6 million gallons of water annually, primarily for cooling purposes. In 2025, global AI demand could account for the withdrawal of 4.2-6.6 billion cubic metres of water annually – roughly half of the UK’s yearly water usage.

To mitigate this, innovative cooling technologies such as immersion cooling and liquid-to-liquid heat exchangers are gaining traction. These methods can reduce water consumption by up to 55% and improve energy efficiency by 10-20%. Regulatory pressure is also mounting, with the EU’s Energy Efficiency Directive and California’s Climate Disclosure Laws pushing for greater transparency and stricter water and energy use regulations in data centres.

Collaboration and governance

Achieving a sustainable and resilient future in 2025 will require collaboration across sectors, robust governance, and strategic investments. Governments and industry leaders increasingly recognise the need for binding renewable energy and efficiency targets for data centres. For example, Germany’s Energy Efficiency Act mandates that data centres achieve 100% renewable energy reliance by 2027.

Additionally, public-private partnerships are essential for scaling sustainability initiatives. Companies invest in on-site renewable energy generation and advanced energy storage solutions to ensure a stable power supply. Hyperscalers such as Google and Microsoft spearhead this initiative by entering into long-term power purchase agreements (PPAs) with renewable energy providers.

In 2025, the International Court of Justice (ICJ) is expected to issue an advisory opinion on the obligations of states concerning climate change, as requested in a 2023 UNGA resolution. While non-binding, this advisory is expected to tackle issues related to the human right to a clean, healthy, and sustainable environment and all the other human rights and related obligations of states, and it could have an impact on intergovernmental processes dealing with climate change matters.


