Faculty AI develops AI for military drones
Faculty AI, a consultancy with significant experience in AI, has been developing AI technologies for both civilian and military applications. Known for its close work with the UK government on AI safety, the NHS, and education, Faculty is also exploring the use of AI in military drones. The company has been involved in testing AI models for the UK’s AI Safety Institute (AISI), which was established to evaluate the safety of advanced AI models.
While Faculty has worked extensively with AI in non-lethal areas, its military work raises concerns because of the potential for autonomous systems in weapons, including drones. Though Faculty has not disclosed whether its AI work extends to lethal drones, it continues to face scrutiny over its dual role of advising the government on AI safety while also working with defence clients.
The company has also generated some controversy because of its growing influence in both the public and private sectors. Some experts, including Green Party members, have raised concerns about potential conflicts of interest due to Faculty’s widespread government contracts and its private sector involvement in AI, such as its collaborations with OpenAI and defence firms. Faculty’s work on AI safety is seen as crucial, but critics argue that its broad portfolio could create a risk of bias in the advice it provides.
Despite these concerns, Faculty maintains that its work is guided by strict ethical policies, and it has emphasised its commitment to ensuring AI is used safely and responsibly, especially in defence applications. As AI continues to evolve, experts call for caution, with discussions about the need for human oversight in the development of autonomous weapons systems growing more urgent.
US tech leaders oppose proposed export limits
A prominent technology trade group has urged the Biden administration to reconsider a proposed rule that would restrict global access to US-made AI chips, warning that the measure could undermine America’s leadership in the AI sector. The Information Technology Industry Council (ITI), representing major companies like Amazon, Microsoft, and Meta, expressed concerns that the restrictions could unfairly limit US companies’ ability to compete globally while allowing foreign rivals to dominate the market.
The proposed rule, expected to be released as soon as Friday, is part of the Commerce Department’s broader strategy to regulate AI chip exports and prevent misuse, particularly by adversaries like China. The restrictions aim to curb the potential for AI to enhance China’s military capabilities. However, in a letter to Commerce Secretary Gina Raimondo, ITI CEO Jason Oxman criticised the administration’s urgency in finalising the rule, warning of ‘significant adverse consequences’ if implemented hastily. Oxman called for a more measured approach, such as issuing a proposed rule for public feedback rather than enacting an immediate policy.
Industry leaders have been vocal in their opposition, describing the draft rule as overly broad and damaging. The Semiconductor Industry Association raised similar concerns earlier this week, and Oracle’s Executive Vice President Ken Glueck slammed the measure as one of the most disruptive ever proposed for the US tech sector. Glueck argued the rule would impose sweeping regulations on the global commercial cloud industry, stifling innovation and growth.
While the administration has yet to comment on the matter, the growing pushback highlights the tension between safeguarding national security and maintaining US dominance in the rapidly evolving field of AI.
Meta appoints three new board directors
Meta Platforms has elected three new directors to its board, including Dana White, CEO of Ultimate Fighting Championship (UFC) and a close associate of President-elect Donald Trump. Investor and former Microsoft executive Charlie Songhurst and Exor CEO John Elkann have also joined. Meta CEO Mark Zuckerberg said their expertise would help the company navigate opportunities in artificial intelligence, wearables, and digital connectivity.
White’s appointment strengthens his ties with Zuckerberg, who has become a mixed martial arts enthusiast. The two have shared public exchanges in recent years, with Zuckerberg attending UFC events at White’s invitation. Songhurst has been involved in Meta’s AI advisory group since May, while Elkann holds leadership roles at Ferrari and Stellantis, alongside chairing the Agnelli Foundation.
Zuckerberg has been adjusting Meta’s strategy ahead of a possible second Trump presidency. The company recently promoted Republican policy expert Joel Kaplan and donated $1 million to Trump’s inaugural fund, signalling a shift in its political stance. Meta has also acknowledged past content decisions that were unpopular among conservatives as it prepares for the evolving political landscape.
Apple to settle Siri privacy lawsuit for $95 million amidst ongoing user consent concerns
Apple has agreed to pay $95 million to settle a class action lawsuit alleging its Siri voice assistant violated users’ privacy. The lawsuit claimed that Apple recorded users’ private conversations without consent when the ‘Hey Siri’ feature was unintentionally triggered. These recordings were allegedly shared with third parties, including advertisers, leading to targeted ads based on private discussions.
The class period for the lawsuit spans from 17 September 2014 to 31 December 2024 and applies to users of Siri-enabled devices like iPhones and Apple Watches. Affected users could receive up to $20 per device. Apple denied any wrongdoing but settled the case to avoid prolonged litigation.
The settlement amount is a small fraction of Apple’s annual profits, with the company making nearly $94 billion in net income last year. While the company and plaintiffs’ lawyers have yet to comment on the settlement, the plaintiffs may seek up to $28.5 million in legal fees and expenses. A similar lawsuit involving Google’s Voice Assistant is also underway in a California federal court.
Anthropic settles copyright infringement lawsuit with major music publishers over AI training practices
Anthropic, the company behind the Claude AI model, has agreed to resolve aspects of a copyright infringement lawsuit filed by major music publishers. The lawsuit, initiated in October 2023 by Universal Music Group, ABKCO, Concord Music Group, and others, alleged that Anthropic’s AI system unlawfully distributed lyrics from over 500 copyrighted songs, including tracks by Beyoncé and Maroon 5.
The publishers argued that Anthropic used lyrics from licensed platforms to train its models without permission. Under the settlement approved by US District Judge Eumi Lee, Anthropic will maintain and extend its guardrails designed to prevent copyright violations in existing and future AI models.
The company also agreed to collaborate with music publishers to address potential infringements and resolve disputes through court intervention if necessary. Anthropic reiterated its commitment to fair use principles and emphasised that its AI is not intended for copyright infringement.
Despite the agreement, the legal battle isn’t over. The music publishers have requested a preliminary injunction to prevent Anthropic from using their lyrics in future model training. A court decision on this request is expected in the coming months, keeping the spotlight on how copyright law applies to generative AI.
Run:ai joins Nvidia after $700m deal
Nvidia has completed its $700 million acquisition of Israeli AI software company Run:ai, following unconditional approval by the European Commission. The deal, initially announced in April, underwent antitrust scrutiny over Nvidia’s dominance in GPUs, which are critical for AI applications. Regulators concluded that Run:ai’s minimal current revenues posed no competition concerns.
Established in 2018, Run:ai offers workload management and orchestration software tailored for AI infrastructure. Its platform supports enterprise customers in optimising compute resources across cloud, edge, and on-premises environments. With a focus on managing large-scale GPU clusters, Run:ai is instrumental in deploying AI workloads like generative AI and search engines.
Now integrated into Nvidia, Run:ai aims to expand its offerings and make its software open source, broadening its compatibility beyond Nvidia GPUs. The move aligns with Nvidia’s broader strategy to strengthen its AI ecosystem.
Nvidia plans to continue its advancements in robotics, targeting the release of its next-generation Jetson Thor computer for humanoid robots by early 2025.
FTC investigates Microsoft over antitrust concerns
The US Federal Trade Commission (FTC) has initiated an antitrust investigation into Microsoft, examining its software licensing, cloud computing operations, and AI-related practices. Sources indicate the probe, approved by FTC Chair Lina Khan before her anticipated departure, also covers claims that restrictive licensing terms are being used to limit competition in cloud services.
Microsoft is the latest Big Tech firm under regulatory pressure. Alphabet, Apple, Meta, and Amazon face similar lawsuits over alleged monopolistic practices in markets ranging from app stores to advertising. Penalties and court rulings loom as regulators focus on digital fairness.
The FTC’s probe highlights growing concerns about the influence of Big Tech on consumer choice and competition. As scrutiny intensifies, the outcomes could reshape the technology sector’s landscape, impacting businesses and consumers alike.
Victim warns of deepfake Bitcoin scams
A Brighton tradesman lost £75,000 to a fake Bitcoin scheme that used a deepfake video of Martin Lewis and Elon Musk. The kitchen fitter, Des Healey, shared his experience on BBC Radio 5 Live, revealing how AI had been used to manipulate Martin Lewis’s voice and image to create a convincing endorsement. Des admitted he was lured by the promise of quick returns but later realised the devastating scam had emptied his life savings and forced him into debt.
He explained that the fraudsters, posing as financial experts, gained his trust through personalised calls and apparent success in his fake investment account. Encouraged to invest more, he took out £70,000 in loans across four lenders. Only when his son raised concerns about suspicious details, such as background music on calls, did Des begin to suspect foul play and approach the police.
Martin Lewis, Britain’s most impersonated celebrity in scams, described meeting Des as emotionally challenging. He commended Des for bravely sharing his ordeal to warn others. Martin emphasised that scams prey on urgency and secrecy, urging people to pause and verify before sharing personal or financial details.
Although two banks cancelled loans taken by Des, he still owes £26,000 including interest. Des expressed gratitude for the chance to warn others and praised Martin Lewis for his continued efforts to fight fraud. Meanwhile, Revolut reaffirmed its commitment to combating cybercrime, acknowledging the challenges posed by sophisticated scammers.