Apple to settle Siri privacy lawsuit for $95 million amidst ongoing user consent concerns
Apple has agreed to pay $95 million to settle a class action lawsuit alleging its Siri voice assistant violated users’ privacy. The lawsuit claimed that Apple recorded users’ private conversations without consent when the ‘Hey Siri’ feature was unintentionally triggered. These recordings were allegedly shared with third parties, including advertisers, leading to targeted ads based on private discussions.
The class period for the lawsuit spans from 17 September 2014 to 31 December 2024 and applies to users of Siri-enabled devices like iPhones and Apple Watches. Affected users could receive up to $20 per device. Apple denied any wrongdoing but settled the case to avoid prolonged litigation.
The settlement amount is a small fraction of Apple’s annual profits, with the company making nearly $94 billion in net income last year. While the company and plaintiffs’ lawyers have yet to comment on the settlement, the plaintiffs may seek up to $28.5 million in legal fees and expenses. A similar lawsuit involving Google’s Voice Assistant is also underway in a California federal court.
Anthropic settles copyright infringement lawsuit with major music publishers over AI training practices
Anthropic, the company behind the Claude AI model, has agreed to resolve aspects of a copyright infringement lawsuit filed by major music publishers. The lawsuit, initiated in October 2023 by Universal Music Group, ABKCO, Concord Music Group, and others, alleged that Anthropic’s AI system unlawfully distributed lyrics from over 500 copyrighted songs, including tracks by Beyoncé and Maroon 5.
The publishers argued that Anthropic improperly used data from licensed platforms to train its models without permission. Under the settlement approved by US District Judge Eumi Lee, Anthropic will maintain and extend its guardrails designed to prevent copyright violations in existing and future AI models.
The company also agreed to collaborate with music publishers to address potential infringements and resolve disputes through court intervention if necessary. Anthropic reiterated its commitment to fair use principles and emphasised that its AI is not intended for copyright infringement.
Despite the agreement, the legal battle isn’t over. The music publishers have requested a preliminary injunction to prevent Anthropic from using their lyrics in future model training. A court decision on this request is expected in the coming months, keeping the spotlight on how copyright law applies to generative AI.
OpenAI delays Media Manager amid creator backlash
In May, OpenAI announced plans for ‘Media Manager’, a tool to allow creators to control how their content is used in AI training, aiming to address intellectual property (IP) concerns. The project remains unfinished seven months later, with critics claiming it was never prioritised internally. The tool was intended to identify copyrighted text, images, audio, and video, allowing creators to include or exclude their work from OpenAI’s training datasets. However, its future remains uncertain, with no updates since August and missed deadlines.
The delay comes amidst growing backlash from creators and a wave of lawsuits against OpenAI. Plaintiffs, including prominent authors and artists, allege that the company trained its AI models on their works without authorisation. While OpenAI provides ad hoc opt-out mechanisms, critics argue these measures are cumbersome and inadequate.
Media Manager was seen as a potential solution, but experts doubt its effectiveness in addressing complex legal and ethical challenges, including global variations in copyright law and the burden placed on creators to protect their works. OpenAI continues to assert that its AI models transform, rather than replicate, copyrighted material, defending itself under ‘fair use’ protections.
While the company has implemented filters to minimise IP conflicts, the absence of a comprehensive tool like Media Manager leaves unresolved questions about compliance and compensation. As OpenAI battles legal challenges, the effectiveness and impact of Media Manager, if it ever launches, remain uncertain in the face of an evolving IP landscape.
Plans for major structural change announced by OpenAI
OpenAI has unveiled plans to transition its for-profit arm into a Delaware-based public benefit corporation (PBC). The move aims to attract substantial investment as the competition to develop advanced AI intensifies, and the proposed structure intends to prioritise societal interests alongside shareholder value, setting the company apart from traditional corporate models.
The shift marks a significant step for OpenAI, which started as a nonprofit in 2015 before establishing a for-profit division to fund high-cost AI development. Its latest funding round, which valued the company at $157 billion, necessitated the structural change to eliminate a profit cap for investors, enabling greater financial backing. The nonprofit will retain a substantial stake in the restructured company, ensuring alignment with its original mission.
OpenAI faces criticism and legal challenges over the move. Elon Musk, a co-founder and vocal critic, has filed a lawsuit claiming the changes prioritise profit over public interest. Meta Platforms has also urged regulatory intervention. Legal experts suggest the PBC status offers limited enforcement of its mission-focused commitments, relying on shareholder influence to maintain the balance between profit and purpose.
By adopting this structure, OpenAI aims to align with competitors like Anthropic and xAI, which have similarly raised billions in funding. Analysts view the move as essential for securing the resources needed to remain a leader in the AI sector, though significant hurdles remain.
Google tests Gemini AI against Anthropic’s Claude
Google contractors improving the Gemini AI model have been tasked with comparing its responses against those of Anthropic’s Claude, according to internal documents reviewed by TechCrunch. The evaluation process involves scoring responses on criteria such as truthfulness and verbosity, with contractors given up to 30 minutes per prompt to determine which model performs better. Notably, some outputs identify themselves as Claude, sparking questions about Google’s use of its competitor’s model.
Claude’s responses, known for emphasising safety, have sometimes refused to answer prompts deemed unsafe, unlike Gemini, which has faced criticism for safety violations. One such instance involved Gemini generating responses flagged for inappropriate content. Despite Google’s significant investment in Anthropic, Claude’s terms of service prohibit its use to train or build competing AI models without prior approval.
A spokesperson for Google DeepMind stated that while the company compares model outputs for evaluation purposes, it does not train Gemini using Anthropic models. Anthropic, however, declined to comment on whether Google had obtained permission to use Claude for these tests. Recent revelations also highlight contractor concerns over Gemini producing potentially inaccurate information on sensitive topics, including healthcare.
Data deletion hampers OpenAI lawsuit progress
OpenAI is under scrutiny after engineers accidentally erased key evidence in an ongoing copyright lawsuit filed by The New York Times and Daily News. The publishers accuse OpenAI of using their copyrighted content to train its AI models without authorisation.
The issue arose when OpenAI provided virtual machines for the plaintiffs to search its training datasets for infringed material. On 14 November 2024, OpenAI engineers deleted the search data stored on one of these machines. While most of the data was recovered, the loss of folder structures and file names rendered the information unusable for tracing specific sources in the training process.
Plaintiffs are now forced to restart the time-intensive search, leading to concerns over OpenAI’s ability to manage its own datasets. Although the deletion is not suspected to be intentional, lawyers argue that OpenAI is best equipped to perform searches and verify its use of copyrighted material. OpenAI maintains that training AI on publicly available data falls under fair use, but it has also struck licensing deals with major publishers like the Associated Press and News Corp. The company has neither confirmed nor denied using specific copyrighted works for its AI training.
AI won’t replace actors and screenwriters, says Ben Affleck
Actor and filmmaker Ben Affleck has weighed in on the ongoing debate over AI in the entertainment industry, arguing that AI poses little immediate threat to actors and screenwriters. Speaking to CNBC, Affleck stated that while AI can replicate certain styles, it lacks the creative depth required to craft meaningful narratives or performances, likening it to a poor substitute for human ingenuity.
Affleck, co-founder of a film studio with fellow actor Matt Damon, expressed optimism about AI’s role in Hollywood, suggesting it might even generate new opportunities for creative professionals. However, he raised concerns about its potential impact on the visual effects industry, which could face significant disruptions as AI technologies advance.
Strikes by Hollywood unions last year highlighted fears that AI could replace creative talent. Affleck remains sceptical of such a scenario, maintaining that storytelling and human performance remain uniquely human domains that AI is unlikely to master soon.
OpenAI faces lawsuit from Indian news agency
Asian News International (ANI), one of India’s largest news agencies, has filed a lawsuit against OpenAI, accusing it of using copyrighted news content to train its AI models without authorisation. ANI alleges that OpenAI’s ChatGPT generated false information attributed to the agency, including fabricated interviews, which it claims could harm its reputation and spread misinformation.
The case, filed in the Delhi High Court, is India’s first legal action against OpenAI on copyright issues. While the court summoned OpenAI to respond, it declined to grant an immediate injunction, citing the complexity of the matter. A detailed hearing is scheduled for January, and an independent expert may be appointed to examine the case’s copyright implications.
OpenAI has argued that copyright laws don’t protect factual data and noted that websites can opt out of data collection. ANI’s counsel countered that public access does not justify content exploitation, emphasising the risks posed by AI inaccuracies. The case comes amid growing global scrutiny of AI companies over their use of copyrighted material, with similar lawsuits ongoing in the US, Canada, and Germany.