New Adobe app ensures creator credit as AI grows
Adobe announced it will introduce a free web-based app in 2025 to help creators of images and videos get proper credit for their work, especially as AI systems increasingly rely on large datasets for training. The app will enable users to affix ‘Content Credentials,’ a digital signature, to their creations, indicating authorship and even specifying whether they want their work used for AI training.
Since 2019, Adobe has been developing Content Credentials as part of a broader industry push for transparency in how digital media is created and used. TikTok has already committed to using these credentials to label AI-generated content. However, major AI companies have yet to adopt Adobe’s system, though Adobe continues to advocate for industry-wide adoption.
The initiative comes as legal battles over AI data use intensify, with publishers like The New York Times suing OpenAI. Adobe sees this tool as a way to protect creators and promote transparency, as highlighted by Scott Belsky, Adobe’s chief strategy officer, who described it as a step towards preserving the integrity of creative work online.
UN adopts ‘Pact for the Future’
On 22 September 2024, world leaders convened in New York to adopt the ‘Pact for the Future’ – a comprehensive agreement designed to reimagine global governance in response to contemporary and future challenges.
The ground-breaking Pact includes a Global Digital Compact and a Declaration on Future Generations, aiming to update the international system established by previous generations. The Secretary-General stressed the importance of aligning global governance structures with the realities of today’s world, fostering a more inclusive and representative international system.
The Pact covers many critical areas, including peace and security, sustainable development, climate change, digital cooperation, human rights, and gender equality. It marks a renewed multilateral commitment to nuclear disarmament and advocates for strengthened international frameworks to govern outer space and prevent the misuse of new technologies. To bolster sustainable development, the Pact aims to accelerate the Sustainable Development Goals (SDGs), reform international financial architecture, and enhance measures to tackle climate change by committing to net-zero emissions by 2050.
Digital cooperation is notably addressed through the Global Digital Compact, which outlines commitments to connect all people to the internet, safeguard online spaces, and govern AI. The Compact promotes open-source data and sets the stage for global data governance. It also ensures increased investment in digital public goods and infrastructure, especially in developing countries.
Why does it matter?
The ‘Pact for the Future’ encapsulates a detailed, optimistic vision geared toward creating a sustainable, just, and peaceful global order. The Summit of the Future, which facilitated the adoption of the Pact through an extensively inclusive process, involved millions of voices and contributions from diverse stakeholders. The event was attended by over 4,000 participants, including global leaders and representatives from various sectors, and was preceded by Action Days, which drew more than 7,000 attendees. Such a forum shows firm global commitments to action, including pledges amounting to USD 1.05 billion to advance digital inclusion.
Social media owners, politicians, and governments top threats to online news trust, IPIE report shows
A recent report from the International Panel on the Information Environment (IPIE) highlights social media owners, politicians, and governments as the primary threats to a trustworthy online news landscape. The report surveyed 412 experts across various academic fields and warned of the unchecked power social media platforms wield over content distribution and moderation. According to Philip Howard, a co-founder of the panel, these findings point to a critical threat to the global flow of reliable information.
The report also raised concerns about major platforms like X (formerly Twitter), Facebook, Instagram, and TikTok. Allegations surfaced regarding biased moderation, with Elon Musk’s X reportedly prioritising the owner’s posts and Meta being accused of neglecting non-English content. TikTok, under scrutiny for potential ties to the Chinese government, has consistently denied being an agent of any country. The panel emphasised that these platforms’ control over information significantly impacts public trust.
The survey revealed that around two-thirds of respondents anticipate the information environment will deteriorate, marking a noticeable increase in concern compared to previous years. Experts cited AI tools as a growing threat, with generative AI exacerbating the spread of misinformation. AI-generated videos and voice manipulation ranked as the top concerns, with the impact expected to be more acute in developing countries.
However, not all views on AI are negative. Most respondents also saw its potential to combat misinformation by helping journalists sift through large datasets and detect false information. The report concluded by suggesting key solutions: promoting independent media, launching digital literacy initiatives, and enhancing fact-checking efforts to mitigate the negative trends in the digital information landscape.
EU’s AI Act faces tech giants’ resistance
As the EU finalises its groundbreaking AI Act, major technology firms are lobbying for lenient regulations to minimise the risk of multi-billion dollar fines. The AI Act, agreed upon in May, is the world’s first comprehensive legislation governing AI. However, the details on how general-purpose AI systems like ChatGPT will be regulated remain unclear. The EU has opened the process to companies, academics, and other stakeholders to help draft the accompanying codes of practice, receiving a surge of interest with nearly 1,000 applications.
A key issue at stake is how AI companies, including OpenAI and Stability AI, use copyrighted content to train their models. While the AI Act mandates that companies disclose summaries of the data they use, businesses are divided over how much detail to include: some advocate for the protection of trade secrets, while others, including content creators, demand greater transparency. Major players like Google and Amazon have expressed their commitment to the process, but there are growing concerns about transparency, with some accusing tech giants of trying to avoid scrutiny.
The debate over transparency and copyright has sparked a broader discussion on the balance between regulation and innovation. Critics argue that the EU’s focus on regulation could stifle technological advancements, while others stress the importance of oversight in preventing abuse. Former European Central Bank chief Mario Draghi recently urged the EU to improve its industrial policy to compete with China and the US, emphasising the need for swift decision-making and significant investment in the tech sector.
The finalised code of practice, expected next year, will not be legally binding but will serve as a guideline for compliance. Companies will have until August 2025 to meet the new standards, with non-profits and startups also playing a role in drafting. Some fear that big tech firms could weaken essential transparency measures, underscoring the ongoing tension between innovation and regulation in the digital era.
UN issues final report with key recommendations on AI governance
In a world where AI is rapidly reshaping industries, societies, and geopolitics, the UN advisory body has stepped forward with its final report – ‘Governing AI for Humanity,’ presenting seven strategic recommendations for responsible AI governance. The report highlights the urgent need for global coordination in managing AI’s opportunities and risks, especially in light of the swift expansion of AI technologies like ChatGPT and the varied international regulatory approaches, such as the EU’s comprehensive AI Act and the contrasting regulatory policies of the US and China.
One of the primary suggestions is the establishment of an International Scientific Panel on AI. The body, modelled after the Intergovernmental Panel on Climate Change, would bring together leading experts to provide timely, unbiased assessments of AI’s capabilities, risks, and uncertainties. The International Scientific Panel on AI would ensure that policymakers and civil society have access to the latest scientific understanding, helping to cut through the hype and misinformation that can surround new technological advances.
Another recommendation is the creation of an AI Standards Exchange, which would bring together global stakeholders, including national and international standards organisations, to debate and develop AI standards. It would help ensure that AI systems are aligned with global values like fairness and transparency.
An AI Capacity Development Network is also among the seven key points, aimed at addressing disparities. The UN here proposes building a capacity network that would link centres of excellence globally, provide training and resources, and foster collaboration to empower countries that lack AI infrastructure.
Another key proposal is the creation of a Global AI Data Framework, which would provide a standardised approach to the governance of AI training data. Given that data is the lifeblood of AI systems, this framework would ensure the equitable sharing of data resources, promote transparency, and help balance the power dynamics between big AI companies and smaller emerging economies. The framework could also spur innovation by making AI development more accessible across different regions of the world.
The report further recommends forming a Global Fund for AI to bridge the AI divide between nations. The fund would provide financial and technical resources to countries lacking the infrastructure or expertise to develop AI technologies. The goal is to ensure that AI’s benefits are distributed equitably and not just concentrated in a few technologically advanced nations.
In tandem with these recommendations, the report advocates for a Policy Dialogue on AI Governance, emphasising the need for international cooperation to create harmonised regulations and avoid regulatory gaps. With AI systems impacting multiple sectors across borders, coherent global policies are necessary to prevent a ‘race to the bottom’ in safety standards and human rights protections.
Lastly, the UN calls for establishing an AI Office within the Secretariat, which would serve as a central hub for coordinating AI governance efforts across the UN and with other global stakeholders. This office would ensure that the recommendations are implemented effectively and that AI governance remains agile in the face of rapid technological change.
Through these initiatives, the UN seeks to foster a world where AI can flourish while safeguarding human rights and promoting global equity. The report implies that the stakes are high, and only through coordinated global action can we harness AI’s potential while mitigating its risks.