FBI warns of AI-driven fraud
The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.
Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone as far as generating real-time video calls to enhance their deception.
To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.
Apple to settle Siri privacy lawsuit for $95 million amidst ongoing user consent concerns
Apple has agreed to pay $95 million to settle a class action lawsuit alleging its Siri voice assistant violated users’ privacy. The lawsuit claimed that Apple recorded users’ private conversations without consent when the ‘Hey, Siri’ feature was unintentionally triggered. These recordings were allegedly shared with third parties, including advertisers, leading to targeted ads based on private discussions.
The class period for the lawsuit spans from 17 September 2014 to 31 December 2024 and applies to users of Siri-enabled devices like iPhones and Apple Watches. Affected users could receive up to $20 per device. Apple denied any wrongdoing but settled the case to avoid prolonged litigation.
The settlement amount is a small fraction of Apple’s annual profits, with the company making nearly $94 billion in net income last year. While the company and plaintiffs’ lawyers have yet to comment on the settlement, the plaintiffs may seek up to $28.5 million in legal fees and expenses. A similar lawsuit involving Google’s Voice Assistant is also underway in a California federal court.
Anthropic settles copyright infringement lawsuit with major music publishers over AI training practices
Anthropic, the company behind the Claude AI model, has agreed to resolve aspects of a copyright infringement lawsuit filed by major music publishers. The lawsuit, initiated in October 2023 by Universal Music Group, ABKCO, Concord Music Group, and others, alleged that Anthropic’s AI system unlawfully distributed lyrics from over 500 copyrighted songs, including tracks by Beyoncé and Maroon 5.
The publishers argued that Anthropic improperly used data from licensed platforms to train its models without permission. Under the settlement approved by US District Judge Eumi Lee, Anthropic will maintain and extend its guardrails designed to prevent copyright violations in existing and future AI models.
The company also agreed to collaborate with music publishers to address potential infringements and resolve disputes through court intervention if necessary. Anthropic reiterated its commitment to fair use principles and emphasised that its AI is not intended for copyright infringement.
Despite the agreement, the legal battle isn’t over. The music publishers have requested a preliminary injunction to prevent Anthropic from using their lyrics in future model training. A court decision on this request is expected in the coming months, keeping the spotlight on how copyright law applies to generative AI.
Apheris revolutionises data privacy and AI in life sciences with federated computing
Privacy and regulatory concerns have long hindered AI’s reliance on data, especially in sensitive fields like healthcare and life sciences. Apheris, a German startup co-founded by Robin Röhm, aims to solve this problem using federated computing—a decentralised approach that trains AI models without moving sensitive data.
The company’s approach is gaining traction among prominent clients like Roche and hospitals, and its technology is already being used in collaborative drug discovery efforts by pharmaceutical giants such as Johnson & Johnson and Sanofi. Apheris recently secured $8.25 million in Series A funding led by OTB Ventures and eCAPITAL, bringing its total funding to $20.8 million.
That follows a pivotal shift in 2023 to focus on the needs of data owners in the pharmaceutical and life sciences sectors. The pivot has paid off, quadrupling the company’s revenue since launching its redefined product, the Apheris Compute Gateway, which securely bridges local data and AI models.
With its new funding, Apheris plans to expand its team and refine its AI-driven solutions for complex challenges like protein prediction. By prioritising data security and privacy, the company aims to unlock previously inaccessible data for innovation, addressing a core barrier to AI’s transformative potential in life sciences.
Hackers target Chrome extensions in data breach campaign
A series of intrusions targeting Chrome browser extensions has compromised multiple companies since mid-December, experts revealed. Among the victims is Cyberhaven, a California-based data protection company. The breach, confirmed by Cyberhaven on Christmas Eve, is reportedly part of a larger campaign aimed at developers of Chrome extensions across various industries.
Cyberhaven stated it is cooperating with federal law enforcement to address the issue. Browser extensions, commonly used to enhance web browsing, can also pose risks when maliciously altered. Cyberhaven’s Chrome extension, for example, is designed to monitor and secure client data within web-based applications.
Experts identified other compromised extensions, including those involving AI and virtual private networks. Jaime Blasco, cofounder of Texas-based Nudge Security, noted that the attacks appear opportunistic, aiming to harvest sensitive data from numerous sources. Some breaches date back to mid-December, indicating an ongoing effort.
Federal authorities, including the US cyber watchdog CISA, have redirected inquiries to the affected companies. Alphabet, maker of the Chrome browser, has yet to respond to requests for comment.
ChatGPT search found vulnerable to manipulation
New research by The Guardian reveals that ChatGPT Search, OpenAI’s recently launched AI-powered search tool, can be misled into generating false or overly positive summaries. By embedding hidden text in web pages, researchers demonstrated that the AI could ignore negative reviews or even produce malicious code.
The feature, designed to streamline browsing by summarising content such as product reviews, is susceptible to hidden text attacks—a well-known vulnerability in large language models. While this issue has been studied before, this marks the first time such manipulation has been proven on a live AI search tool.
OpenAI did not comment on this specific case but stated it employs measures to block malicious websites and is working to improve its defences. Experts note that competitors like Google, with more experience in search technology, have developed stronger safeguards against similar threats.
OpenAI services suffer second outage in December
OpenAI’s ChatGPT, Sora, and developer API experienced a significant outage on Thursday, disrupting services for over four hours. The issue began around 11 a.m. PT, with partial recovery reported by 2:05 p.m. PT. By 3:16 p.m. PT, OpenAI stated that Sora was operational, though ChatGPT users might still encounter issues accessing their chat history.
According to OpenAI’s status page, the outage was caused by one of their upstream providers, but the company did not provide further details. This marks the second major outage for OpenAI’s services in December. Two weeks ago, a similar incident attributed to a telemetry service malfunction resulted in a six-hour disruption, a notably longer downtime than usual.
Interestingly, popular platforms utilising OpenAI’s API, such as Perplexity and Siri’s Apple Intelligence integration, appeared unaffected during the outage, as confirmed by their status pages and independent testing. OpenAI is actively working to ensure full restoration of its services while addressing the root causes behind these recurring disruptions.
Spanish AI satire video imagines political unity for Christmas
A satirical video imagining Spain’s political rivals embracing the festive spirit has captured attention nationwide. The AI-generated clip, created by the collective United Unknown, portrays unlikely moments of reconciliation, such as Prime Minister Pedro Sánchez and conservative leader Alberto Núñez Feijóo sharing a warm hug. Former King Juan Carlos and Queen Sofía are also shown exchanging a kiss, despite their well-documented estrangement.
The video, titled The Magic of Christmas and set to the song Rockin’ Around the Christmas Tree, uses deepfake technology to depict other striking scenes. Far-right Vox leader Santiago Abascal and Catalan separatist Gabriel Rufián are seen laughing together, while Podemos founders Íñigo Errejón and Pablo Iglesias appear to have resolved their differences, chuckling and embracing. Madrid’s conservative leader Isabel Díaz Ayuso and Labour Minister Yolanda Díaz also feature, exchanging smiles and gestures of goodwill.
Since its release on X on 20 December, the video has been viewed over 3.4 million times and received widespread acclaim for its creative ingenuity. Gabriel Rufián, one of the depicted politicians, even retweeted the post. However, not all responses have been positive, with some raising concerns about the growing realism of AI-generated content and its potential to blur the line between reality and fiction.
United Unknown describes itself as a ‘visual guerrilla’ collective, known for satirical deepfakes often targeting Spain’s political scene. While the video has been celebrated as a humorous take on political differences, it also sparks a broader conversation about the implications of AI technology in modern media.