OpenAI delays Media Manager amid creator backlash

In May, OpenAI announced plans for ‘Media Manager,’ a tool to give creators control over how their content is used in AI training, aiming to address intellectual property (IP) concerns. Seven months later, the project remains unfinished, with critics claiming it was never prioritised internally. The tool was intended to identify copyrighted text, images, audio, and video, allowing creators to include or exclude their work from OpenAI’s training datasets. However, its future remains uncertain, with deadlines missed and no updates since August.

The delay comes amidst growing backlash from creators and a wave of lawsuits against OpenAI. Plaintiffs, including prominent authors and artists, allege that the company trained its AI models on their works without authorisation. While OpenAI provides ad hoc opt-out mechanisms, critics argue these measures are cumbersome and inadequate.

Media Manager was seen as a potential solution, but experts doubt its effectiveness in addressing complex legal and ethical challenges, including global variations in copyright law and the burden placed on creators to protect their works. OpenAI continues to assert that its AI models transform, rather than replicate, copyrighted material, defending itself under ‘fair use’ protections.

While the company has implemented filters to minimise IP conflicts, the absence of a comprehensive tool like Media Manager leaves questions about compliance and compensation unresolved. As OpenAI battles legal challenges, the effectiveness and impact of Media Manager, if it ever launches, remain uncertain in the face of an evolving IP landscape.

German parties outline technology policies ahead of election

As Germany prepares for national elections on February 23, political parties are outlining their tech policy priorities, including digitalisation, AI, and platform regulation. Here’s where the leading parties stand as they finalise their programs ahead of the vote.

The centre-right CDU, currently leading in polls at 33%, proposes creating a dedicated Digital Ministry to consolidate responsibilities currently housed under the Ministry of Transport. The party envisions broader use of AI and cloud technology in German industry while simplifying citizens’ interactions with authorities through digital accounts.

Outgoing Chancellor Olaf Scholz’s SPD, polling at 15%, focuses on reducing dependence on US and Chinese tech platforms by promoting European alternatives. The party also prioritises faster digitalisation of public administration and equitable rules for regulating AI and digital platforms, echoing EU-wide goals of tech sovereignty and security.

The Greens, with 14% support, highlight the role of AI in reducing administrative workloads amid labour shortages. They stress the need for greater interoperability across IT systems and call for an open-source strategy to modernise Germany’s digital infrastructure, warning that the country lags behind EU digitalisation targets.

The far-right AfD, projected to secure 17%, opposes EU platform regulations like the Digital Services Act and seeks to reverse Germany’s adoption of the NetzDG law. The party argues these measures infringe on free speech and calls for transparency in funding non-state actors and NGOs involved in shaping public opinion.

The parties’ contrasting visions set the stage for significant debates on the future of technology policy in Germany.

UN discusses ethical tech and inclusion at IGF 2024

Speakers at IGF 2024 highlighted digital innovation within the United Nations system, demonstrating how emerging technologies are enhancing services and operational efficiency. Representatives from UNHCR, UNICEF, the UN Pension Fund, and UNICC shared their organisations’ progress and collaborative efforts.

Michael Walton, Head of Digital Services at UNHCR, detailed initiatives supporting refugees through digital tools. These include mobile apps for services and efforts to counter misinformation. Walton stressed the importance of digital inclusion and innovation to bridge gaps in education and access for vulnerable groups.

Fui Meng Liew, Chief of Digital Center of Excellence at UNICEF, emphasised safeguarding children’s data rights through a comprehensive digital resilience framework. UNICEF’s work also involves developing digital public goods, with a focus on accessibility for children with disabilities and securing data privacy.

Dino Cataldo Dell’Accio from the UN Pension Fund presented a blockchain-powered proof-of-life system that uses biometrics and AI in support of e-Government for the ageing population. This system ensures beneficiaries’ security and privacy while streamlining verification processes. Similarly, Sameer Chauhan of UNICC showcased digital solutions like AI chatbots and cybersecurity initiatives supporting UN agencies.

The session’s collaborative tone extended into discussions of the UN Digital ID project, which links multiple UN agencies. Audience members raised questions on accessibility, with Nancy Marango and Sary Qasim suggesting broader use of these solutions to support underrepresented communities globally.

Efforts across UN organisations reflect a shared commitment to ethical technology use and digital inclusion. The panellists urged collaboration and transparency as key to addressing challenges such as data protection and equitable access while maintaining focus on innovation.

Irish data authority seeks EU guidance on AI privacy under GDPR

The Irish Data Protection Commission (DPC) is awaiting guidance from the European Data Protection Board (EDPB) on handling AI-related privacy issues under the EU’s General Data Protection Regulation (GDPR). Data protection commissioners Des Hogan and Dale Sunderland emphasised the need for clarity, particularly on whether personal data continues to exist within AI training models. The EDPB is expected to provide its opinion before the end of the year, helping harmonise regulatory approaches across Europe.

The DPC has been at the forefront of addressing AI and privacy concerns, especially as companies like Meta, Google, and X (formerly Twitter) use EU users’ data to train large language models. As part of this growing responsibility, the Irish authority is also preparing for a potential role in overseeing national compliance with the EU’s upcoming AI Act, following the country’s November elections.

The regulatory landscape has faced pushback from Big Tech companies, with some arguing that stringent regulations could hinder innovation. Despite this, Hogan and Sunderland stressed the DPC’s commitment to enforcing GDPR compliance, citing recent legal actions, including a €310 million fine on LinkedIn for data misuse. With two more significant decisions expected by the end of the year, the DPC remains a key player in shaping data privacy in the age of AI.

OpenAI faces lawsuit from Indian news agency

Asian News International (ANI), one of India’s largest news agencies, has filed a lawsuit against OpenAI, accusing it of using copyrighted news content to train its AI models without authorisation. ANI alleges that OpenAI’s ChatGPT generated false information attributed to the agency, including fabricated interviews, which it claims could harm its reputation and spread misinformation.

The case, filed in the Delhi High Court, is India’s first legal action against OpenAI on copyright issues. While the court summoned OpenAI to respond, it declined to grant an immediate injunction, citing the complexity of the matter. A detailed hearing is scheduled for January, and an independent expert may be appointed to examine the case’s copyright implications.

OpenAI has argued that copyright laws don’t protect factual data and noted that websites can opt out of data collection. ANI’s counsel countered that public access does not justify content exploitation, emphasising the risks posed by AI inaccuracies. The case comes amid growing global scrutiny of AI companies over their use of copyrighted material, with similar lawsuits ongoing in the US, Canada, and Germany.

FTC’s Holyoak raises concerns over AI and kids’ data

Federal Trade Commissioner Melissa Holyoak has called for closer scrutiny of how AI products handle data from younger users, raising concerns about privacy and safety. Speaking at an American Bar Association meeting in Washington, Holyoak questioned what happens to information collected from children using AI tools, comparing their interactions to asking advice from a toy like a Magic 8 Ball.

The FTC, which enforces the Children’s Online Privacy Protection Act, has previously sued platforms like TikTok over alleged violations. Holyoak suggested the agency should evaluate its authority to investigate AI privacy practices as the sector evolves. Her remarks come as the FTC faces a leadership change with President-elect Donald Trump set to appoint a successor to Lina Khan, known for her aggressive stance against corporate consolidation.

Holyoak, considered a potential acting chair, emphasised that the FTC should avoid a rigid approach to mergers and acquisitions, while also predicting challenges to the agency’s worker noncompete ban. She noted that a Supreme Court decision on the matter could provide valuable clarity.

Google researchers discover first vulnerability using AI

Google researchers announced a cybersecurity breakthrough, revealing the first vulnerability discovered with the help of a large language model. The flaw, an exploitable memory-safety issue in SQLite, a widely used open-source database engine, marks a significant milestone: it is believed to be the first public instance of an AI tool uncovering a previously unknown bug in real-world software.

The vulnerability was reported to SQLite’s developers in early October, and they addressed it the same day. Notably, the bug was discovered before it appeared in an official release, so SQLite users were never affected. Google emphasised the development as a demonstration of AI’s significant potential for enhancing cybersecurity defences.

The initiative is part of a collaborative project called Big Sleep, which involves Google Project Zero and Google DeepMind, stemming from previous efforts focused on AI-assisted vulnerability research.

Many companies, including Google, typically employ a technique known as ‘fuzzing,’ in which software is tested by feeding it random or invalid data to uncover vulnerabilities. However, Google noted that fuzzing often falls short in identifying hard-to-find bugs. The researchers expressed optimism that AI could help bridge this gap. ‘We see this as a promising avenue to achieve a defensive advantage,’ they stated.
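The core idea of fuzzing can be illustrated with a minimal sketch. The target function and all names below are hypothetical examples, not part of Google’s tooling: a toy function with a planted bug is bombarded with random strings, and any crash is logged as a potential vulnerability.

```python
import random
import string

def head_of_pair(text):
    """Toy target with a planted bug: it assumes every input contains a comma."""
    left, right = text.split(",", 1)  # raises ValueError when no comma is present
    return left.strip()

def fuzz(target, runs=500, seed=1):
    """Feed short random strings to `target` and record inputs that make it crash."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    failures = []
    for _ in range(runs):
        length = rng.randint(0, 10)
        data = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(data)
        except Exception as exc:  # any uncaught exception is a potential bug
            failures.append((data, type(exc).__name__))
    return failures

# Most random strings contain no comma, so the fuzzer quickly surfaces the bug.
failures = fuzz(head_of_pair)
print(f"{len(failures)} crashing inputs found")
```

Real fuzzers such as OSS-Fuzz, mentioned below, go much further, using coverage feedback to steer input generation toward unexplored code paths rather than sampling blindly.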

The identified vulnerability was particularly intriguing because it was missed by existing testing frameworks, including OSS-Fuzz and SQLite’s internal systems. One of the key motivations behind the Big Sleep project is the ongoing challenge of vulnerability variants, with more than 40% of zero-day vulnerabilities identified in 2022 being variants of previously reported issues.

PepsiCo taps data to boost sales amid shifting consumer demand

PepsiCo is enhancing data collaboration with major retailers in response to declining sales and shifting consumer preferences toward budget options. The Lay’s and Tostitos maker has seen recent decreases in snack sales volumes, prompting adjustments to product sizes and a renewed focus on advertising. In early October, PepsiCo revised its annual sales forecast, reflecting the need to adapt to current market dynamics.

The data-sharing initiative, led by PepsiCo’s senior vice president of strategy, Angelika Kipor, enables the company to gain insights into shoppers’ purchasing habits while helping retailers improve their supply chain accuracy. By sharing predictive data, PepsiCo helps retailers optimise product orders, leading to higher sales, as seen recently in its collaboration with Carrefour, where the grocer expanded its PepsiCo product range based on historical data insights.

Retailer partnerships also help PepsiCo make data-driven supply chain adjustments using AI, a practice gaining traction among consumer goods companies looking to streamline operations. Kipor emphasised that while data-sharing strengthens trust with retailers, it remains separate from PepsiCo’s pricing negotiations, which have eased since the company’s commitment last year not to implement further price hikes on snacks and drinks despite ongoing inflation.