The year of CLARITY: 10 Forecasts for AI and digital developments in 2025

Clarity is the keyword for AI and digital developments in 2025. Clarity follows the hype of 2023 and the grounding of 2024. In 2025, we will better understand AI’s risks, opportunities, and the policy issues that require regulation. By clarity, we also mean a return to digital basics. It’s easy to forget that even the most cutting-edge AI is built on decades-old foundations, such as the humble TCP/IP protocol that underpins our digital reality.

Our 2025 forecast begins with the evolution of AI technology itself, exploring how geostrategic interests and positions are shaping its development. From there, we delve into governance, where these interests crystallise into policies and regulations. With the stage set, we turn to key issues: security, human rights, the economy, standards, content, the environment, and global development.

The forecast begins on Sunday, 20 January, with an outline of President Trump’s tech priorities, followed by an outlook for the rest of the year. Throughout the year, AI and digital developments will be continuously monitored around the questions listed in this forecast. You can also submit questions and topics you would like us to follow in 2025.

Best wishes for 2025!

Jovan Kurbalija
Be careful with AI predictions and forecasts! Any AI prediction, including this one, should be read with a critical mind. So far, many AI predictions have failed. Take Geoffrey Hinton, the 2024 Nobel Prize Laureate, who declared back in 2016: ‘We should stop training radiologists now. It’s completely obvious that within five years, deep learning will outperform radiologists.’ Yet, radiology – like many other professions – remains alive and well.
From bigger is better to smaller is smarter
Power of small AI models: Smaller models increasingly outperform larger ones; they are more cost-effective and often more efficient in inference (generating answers).
Bottom-up growth of AI: Affordable and smaller AI models will facilitate the growth of bottom-up AI anchored in the knowledge and needs of communities, companies, and countries.
AI agents between hype and reality: In 2025, there will be an avalanche of AI agents, with only a few proving genuinely useful. The more AI agents are tailored to specific uses and contexts, the more effective they will be.
Open-source AI prevails: The open-source approach will dominate AI growth in 2025. In addition to platforms in the USA and Europe, China is becoming a major player with leading open AI models such as Qwen 2.5.
AI is becoming a commodity: One can create an AI application in a day using open-source tools, LLMs, and similar building blocks (see the sketch after this list). Yet…
AI transformation is a complex task: Proper use of AI will require new skills and changes in organisational culture, which go beyond the technology itself. In 2025, many organisations and businesses will face the challenge of reaching the plateau of productivity with AI.
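To illustrate the two points above about small models and commodity AI, here is a minimal Python sketch of how quickly an AI application can be assembled from open-source parts. It assumes the Hugging Face `transformers` library and uses a small open-weight model ID (`Qwen/Qwen2.5-0.5B-Instruct`) purely as an example; any comparable small model could be swapped in.

```python
# Minimal sketch: running a small open-source model locally for inference.
# Assumptions: `transformers` and a backend such as PyTorch are installed
# (pip install transformers torch), and the example model ID is available.
from transformers import pipeline

# Small instruction-tuned models (well under 1B parameters) can run on a laptop,
# which is why "smaller is smarter" often holds for everyday inference tasks.
generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # example model ID, not an endorsement
)

prompt = "In one sentence, why do small AI models matter in 2025?"
output = generator(prompt, max_new_tokens=60)
print(output[0]["generated_text"])
```

A few lines like these are enough for a working prototype; the hard part, as noted above, is the organisational and cultural change needed to use such tools well.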
GEO politics-economy-emotions in the AI era
Geopolitics: Political and security considerations will dominate semiconductors, submarine cables, satellites and other aspects of AI and digital developments.
Geoeconomics: The power of tech companies extends far beyond technology, permeating various aspects of global society and governance, including social influence, data centralization, and political impact. In 2025, tech giants will likely face pushback from national governments and local companies.
Geoemotions: Nervousness and excitement about AI will affect the growth of AI acceptance in 2025. Asian societies are emotionally the most prepared for AI transformation, as shown below.
From negotiations to implementations
After 2024 – a year of intense negotiations on AI and digital governance – 2025 will shift focus to implementation as the world works to put resolutions, agreements, and treaties into action. This should be a year of clarity and consolidation, with two key themes echoing across the AI and digital realms:
The search for clarity in AI governance includes identifying issues and risks specific to AI and not regulated by existing legal instruments and policies.
In 2025, the focus will be on two layers of the AI governance pyramid: data for developing AI and apps for using AI (see below).
Regulation of AI risks will shift further from extinction risks towards existing and exclusion risks. The risk of an AI investment bubble bursting will become more prominent in 2025.
Hanoi convention – new reference in the cybercrime field
Cybercrime: Following Budapest, host of the 2001 Council of Europe Cybercrime Convention, Hanoi will emerge as the next toponym in the language of cybercrime. The Vietnamese capital will host the signing ceremony for the new UN Cybercrime Convention, which will remain open for signature until 31 December 2026. The convention will enter into force 90 days after the 40th ratification.
Cybersecurity: By July 2025, the UN Open-Ended Working Group (OEWG) should propose a future framework for cybersecurity cooperation. There is consensus on confidence-building and capacity-building measures, but no agreement on whether the future cybersecurity framework should focus on implementing existing norms or negotiating new ones.
Encryption: The race between cryptography and quantum computing will accelerate in 2025. The EU will try to forge consensus on the ‘Chat Control 2.0’ framework, which would require tech platforms to monitor encrypted communications for illegal content.
There will be more discussion of the military uses of AI, lethal autonomous weapons systems, and the impact of cyber conflicts on international humanitarian law.
In 2025, significant political, technological, and societal shifts will shape the landscape of human rights and digitalisation.
According to all indications, Trump’s presidency will deprioritise human rights compared to the Biden administration. As tech companies retreat from content moderation, they are likely to backpedal on addressing the impact of their platforms on human rights.
EU countries will likely increase their focus on human rights in the digital age, addressing issues such as AI ethics, surveillance, and the impact of technology on privacy and freedom of expression. The EU will aim to ensure the global relevance of its digital regulations: the GDPR, the AI Act, the DSA, etc.
In 2025, geopolitical tensions, technological developments, trade barriers, and industrial policies will affect the digital economy. The resilience of the digital economy will face three critical tests in 2025:
First, can data flow freely in an economically fractured world? So far, the internet and digital networks have resisted significant fragmentation in the flow of capital, goods, and services. The success or failure of this test will not only shape the future of the digital economy but, more importantly, the internet itself.
Second, will the AI bubble burst in 2025? This risk stems from massive investments in AI and its limited impact on businesses and productivity. While significant funding has fueled the development of AI models, driving the market capitalisation of companies like Nvidia to new heights, the real-world adoption of AI in business and productivity remains low. The risk of an “AI bubble burst” grows with the emergence of cost-effective models, such as DeepSeek, which are developed and deployed at a fraction of the cost compared to those by OpenAI, Anthropic, and other mainstream AI platforms.
Third, will the digital economy become securitised? Current geopolitical trends are increasingly integrating tech companies into nation-states’ security and military frameworks. The growing securitisation of the tech industry will likely trigger pushback worldwide, as the involvement of foreign tech companies in internal markets will no longer be evaluated solely on economic grounds.
In 2025, technical standards will become more relevant as a ‘safety net’ for global networks amid economic and political fragmentation. Standards ensure the interoperability of apps and services across divides and divisions. TCP/IP (Transmission Control Protocol/Internet Protocol) remains the glue that keeps the internet together despite political and economic fragmentation.
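As a concrete illustration of that ‘glue’, here is a minimal sketch of a TCP echo server in Python. The host, port, and echo behaviour are illustrative assumptions; the point is that any client speaking TCP, on any system, can exchange bytes with it.

```python
# Minimal sketch of TCP/IP interoperability: a tiny echo server that any
# TCP client (netcat, a browser tool, a custom app) can talk to.
import socket

HOST, PORT = "127.0.0.1", 9090  # example address and port, chosen arbitrarily

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind((HOST, PORT))
    server.listen(1)
    print(f"Listening on {HOST}:{PORT} ...")
    conn, addr = server.accept()        # blocks until a client connects
    with conn:
        data = conn.recv(1024)          # read up to 1 KB from the client
        conn.sendall(b"ACK: " + data)   # echo it back with an acknowledgement
```

Connecting with any generic TCP client (for example, `nc 127.0.0.1 9090`) shows how application-level differences disappear behind a shared protocol – the interoperability that standards are meant to preserve.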
AI standardisation will gain additional momentum in 2025, as the three main international standard development organisations – the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU) – have announced a set of initiatives including the International AI Standards Summit and creating an AI standards database.
Other notable standardisation initiatives will focus on open-source AI, digital public infrastructure, mobile networks like 6G, and brain-computer interfaces.
In 2025, content governance will be altered by social media platforms’ shift from fact-checking to a community notes approach. The content policy landscape will move from its current heavy focus on content regulation to a more relaxed one.
The ‘hands-off’ approach has been enthusiastically adopted by tech companies, as it reduces the cost of maintaining the complex system of policies, organisations, and networks built over the last 10 years, involving, according to some estimates, close to 30,000 people who monitor content on social media platforms. In addition, it transfers moderation responsibilities to users.
The underlying question is whether this shift from fact-checking to a system of community notes will address the problem of quality and reliability of content on social media platforms. We will monitor developments in 2025 around the following inquiry questions.
In 2025, digital development will remain a central theme in international cooperation, particularly through the WSIS+20 process. The WSIS+20 High-Level Event, scheduled for July 2025 in Geneva, will discuss issues such as bridging the digital divide and advancing the use of digital technologies for development.
The formal WSIS+20 review meeting at the UNGA level (likely in December 2025) will not only assess 20 years of implementation of WSIS action lines in support of an inclusive information society but will also outline future priorities.
Additionally, the Hamburg Declaration on Responsible AI for the SDGs will introduce new frameworks for ethical AI development, emphasizing inclusivity and sustainability.
The year 2025 will see digitalisation and environmental sustainability increasingly intertwined, with AI and digital technologies driving innovation while posing new challenges.
Key focus areas will include energy efficiency, circular economy practices, water security, and enhanced sustainability reporting.
Collaboration across sectors, robust governance, and strategic investments will be critical to achieving a sustainable and resilient future.
AI is an affordable commodity, but its effective use requires a priceless transformation of our thinking and organisations.
At first glance, this statement seems paradoxical. How can something so readily available demand such profound change? Yet, it captures today’s dilemma facing businesses, governments, and organisations.
AI has become accessible to many – you can create an AI chatbot in hours – but unlocking its potential requires far more than technology. It demands a shift in professional cultures, a break from old routines, and embracing new ways of thinking and problem-solving.
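The ‘chatbot in hours’ claim is easy to test. Below is a minimal command-line chat loop, offered as a sketch only; it assumes the `openai` Python client, an `OPENAI_API_KEY` in the environment, and an example model name, none of which are specific recommendations.

```python
# Minimal sketch of a command-line chatbot.
# Assumptions: `pip install openai`, an OPENAI_API_KEY environment variable,
# and an example model name; any chat-capable model or provider could be used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a concise, helpful assistant."}]

while True:
    user_input = input("You: ").strip()
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name, an assumption
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Bot:", answer)
```

Building this takes minutes; deciding where it genuinely helps, who maintains it, and how it changes daily routines is the transformation this section is about.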
The challenge lies not in the tools but in adapting to them. Effective AI adoption isn’t about purchasing the latest software or algorithms; it’s about reshaping how we work, collaborate, and innovate. The future won’t belong to those with the most powerful AI but to those who can integrate it effectively into their workflows and decision-making processes.
This transformation isn’t easy. It requires challenging the status quo, rethinking long-held practices, and fostering a culture of continuous learning and adaptability. But the rewards are immense.
The good news? This journey isn’t just about efficiency, profit, or technological shift; it’s a cultural and philosophical evolution that contributes to the well-being of our communities, countries, and humanity as a whole. Moreover, AI nudges us to reflect on what it means to be human – both ontologically and spiritually. As we use AI effectively, we may find ourselves closer to answering some of humanity’s eternal questions about purpose and happiness, and to dealing with our predicaments.
Another piece of good news is that AI philosophy is very practical. In preparing these predictions, I have been consulting some of the best minds from the history of philosophy and literature. Their thinking was adjusted to my ‘biases’ and cognitive framings via AI platforms. My reading of their main works was complemented by AI platforms pointing me to the parts most relevant to my work, in this case, the 2025 forecasts.