FBI warns of AI-driven fraud

The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.

Deepfake scams are becoming more prevalent in the US as generative AI tools become increasingly accessible. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone so far as to generate real-time deepfake video calls to enhance their deception.

To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.

Faculty AI develops AI for military drones

Faculty AI, a consultancy with significant experience in artificial intelligence, has been developing AI technologies for both civilian and military applications. Known for its close work with the UK government on AI safety, the NHS, and education, Faculty is also exploring the use of AI in military drones. The company has been involved in testing AI models for the UK’s AI Safety Institute (AISI), which was established to study the risks posed by advanced AI systems.

While Faculty has worked extensively with AI in non-lethal areas, its military work raises concerns about the potential use of autonomous systems in weapons, including drones. Faculty has not disclosed whether its AI work extends to lethal drones, and it continues to face scrutiny over its dual role of advising the government on AI safety while working with defence clients.

The company has also drawn controversy over its growing influence in both the public and private sectors. Some experts, including Green Party members, have raised concerns about potential conflicts of interest arising from Faculty’s widespread government contracts and its private-sector AI work, such as its collaborations with OpenAI and defence firms. Faculty’s work on AI safety is seen as crucial, but critics argue that its broad portfolio risks biasing the advice it provides.

Despite these concerns, Faculty maintains that its work is guided by strict ethical policies and has emphasised its commitment to ensuring AI is used safely and responsibly, especially in defence applications. As AI continues to evolve, experts are urging caution, and discussions about the need for human oversight of autonomous weapons systems are growing more urgent.

OpenAI confident in AGI but faces safety concerns

OpenAI CEO Sam Altman has stated that the company believes it knows how to build AGI and is now turning its focus towards developing superintelligence. He argues that advanced AI could significantly boost scientific discovery and economic growth. While AGI is often defined as AI that outperforms humans in most tasks, OpenAI and Microsoft also use a financial benchmark—$100 billion in profits—as a key measure.

Despite Altman’s optimism, today’s AI systems still struggle with accuracy and reliability. OpenAI has previously acknowledged that transitioning to a world with superintelligence is far from certain, and controlling such systems remains an unsolved challenge. The company has, however, recently disbanded key safety teams, leading to concerns about its priorities as it seeks further investment.

Altman remains confident that AI will soon make a significant impact on businesses, suggesting that AI agents could enter the workforce and reshape industries in the near future. He insists that OpenAI continues to balance innovation with safety, despite growing scepticism from former staff and industry critics.

US sanctions Iranian and Russian entities over election meddling

The US has imposed sanctions on organisations in Iran and Russia accused of attempting to influence the 2024 presidential election. The Treasury Department stated that these entities, linked to Iran’s Islamic Revolutionary Guard Corps (IRGC) and Russia’s military intelligence agency (GRU), aimed to exploit socio-political tensions among voters.

The Russia-linked group used AI tools to create disinformation, including manipulated videos targeting a vice-presidential candidate. A network of over 100 websites mimicking credible news outlets was reportedly used to disseminate false narratives. The GRU is alleged to have funded and supported these operations.

The Iran-affiliated entity has allegedly been planning influence campaigns since 2023, focused on inciting divisions within the US electorate. Russia’s embassy dismissed the interference claims as unfounded, while Iran’s representatives did not respond to requests for comment.

A recent US threat assessment has underscored growing concerns about foreign attempts to disrupt American democracy, with AI emerging as a critical tool for misinformation. Officials reaffirmed their commitment to safeguarding the electoral process.

Hackers target Chrome extensions in data breach campaign

A series of intrusions targeting Chrome browser extensions has compromised multiple companies since mid-December, experts revealed. Among the victims is Cyberhaven, a California-based data protection company. The breach, confirmed by Cyberhaven on Christmas Eve, is reportedly part of a larger campaign aimed at developers of Chrome extensions across various industries.

Cyberhaven stated it is cooperating with federal law enforcement to address the issue. Browser extensions, commonly used to enhance web browsing, can also pose risks when maliciously altered. Cyberhaven’s Chrome extension, for example, is designed to monitor and secure client data within web-based applications.

Experts identified other compromised extensions, including those involving AI and virtual private networks. Jaime Blasco, cofounder of Texas-based Nudge Security, noted that the attacks appear opportunistic, aiming to harvest sensitive data from numerous sources. Some breaches date back to mid-December, indicating an ongoing effort.
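As a rough, illustrative sketch of how an administrator might check for affected extensions (the profile path and the set of compromised IDs below are placeholders, not details from the article), the following Python snippet enumerates locally installed Chrome extensions and flags any whose ID appears in a list of published indicators of compromise:

    # Illustrative only: list installed Chrome extensions and flag any whose ID
    # appears in a (hypothetical) set of known-compromised extension IDs.
    import json
    from pathlib import Path

    # Default Chrome extensions directory on Linux; adjust for macOS or Windows.
    EXTENSIONS_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

    # Placeholder ID; real indicators would come from vendor or CISA advisories.
    COMPROMISED_IDS = {"aaaabbbbccccddddeeeeffffgggghhhh"}

    def installed_extensions(root: Path):
        """Yield (extension_id, version, name) for each installed extension."""
        if not root.is_dir():
            return
        for ext_dir in root.iterdir():
            if not ext_dir.is_dir():
                continue
            for version_dir in ext_dir.iterdir():
                manifest = version_dir / "manifest.json"
                if manifest.is_file():
                    data = json.loads(manifest.read_text(encoding="utf-8"))
                    yield ext_dir.name, version_dir.name, data.get("name", "unknown")

    for ext_id, version, name in installed_extensions(EXTENSIONS_DIR):
        status = "COMPROMISED" if ext_id in COMPROMISED_IDS else "ok"
        print(f"{ext_id}  {version:<12}  {name}  [{status}]")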

Federal authorities, including the US cyber watchdog CISA, have referred inquiries to the affected companies. Alphabet, Google’s parent company and maker of the Chrome browser, has yet to respond to requests for comment.

OpenAI services suffer second outage in December

OpenAI’s ChatGPT, Sora, and developer API experienced a significant outage on Thursday, disrupting services for over four hours. The issue began around 11 a.m. PT, with partial recovery reported by 2:05 p.m. PT. By 3:16 p.m. PT, OpenAI stated that Sora was operational, though ChatGPT users might still encounter issues accessing their chat history.

According to OpenAI’s status page, the outage was caused by one of its upstream providers, though the company did not provide further details. This marks the second major outage for OpenAI’s services in December: two weeks earlier, a similar incident, attributed to a telemetry service malfunction, resulted in a six-hour disruption, notably longer downtime than usual.

Notably, popular platforms that rely on OpenAI’s API, such as Perplexity and Siri’s Apple Intelligence integration, appeared unaffected during the outage, as confirmed by their status pages and independent testing. OpenAI is working to fully restore its services while addressing the root causes of these recurring disruptions.

India launches AI-driven consumer protection initiatives

The Indian government has launched several initiatives to strengthen consumer protection, focusing on leveraging technology and enhancing online safety. Key developments include the introduction of the AI-enabled National Consumer Helpline, the e-Maap Portal, and the Jago Grahak Jago mobile application, all designed to expedite the resolution of consumer complaints and empower citizens to make informed choices.

The government of India also highlighted the significant progress made through the three-tier consumer court system, resolving thousands of disputes this year. In the realm of e-commerce, major platforms like Reliance Retail, Tata Sons, and Zomato pledged to enhance online shopping security, reflecting the government’s commitment to ensuring consumer confidence in the digital marketplace.

The e-Daakhil Portal has been expanded nationwide, achieving 100% adoption in states such as Karnataka, Punjab, and Rajasthan, making it easier for consumers to file complaints online. The Central Consumer Protection Authority (CCPA) is also drafting new guidelines to regulate surrogate advertising and has already taken action against 13 companies for non-compliance with existing rules.

The importance of these initiatives was underscored at the National Consumer Day event, where key officials, including Minister of State for Consumer Affairs B L Verma and TRAI Chairman Anil Kumar Lahoti, were present. The event highlighted the government’s ongoing efforts to foster a safer and more transparent consumer environment, especially in the rapidly evolving digital landscape.

Google tests Gemini AI against Anthropic’s Claude

Google contractors improving the Gemini AI model have been tasked with comparing its responses against those of Anthropic’s Claude, according to internal documents reviewed by TechCrunch. The evaluation process involves scoring responses on criteria such as truthfulness and verbosity, with contractors given up to 30 minutes per prompt to determine which model performs better. Notably, some outputs identify themselves as Claude, sparking questions about Google’s use of its competitor’s model.
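For illustration only, the sketch below models the kind of side-by-side rating the documents describe; the field names, 1–5 scoring scale, and example values are assumptions rather than Google’s actual tooling:

    # Illustrative only: a minimal record for scoring two model responses
    # side by side on criteria such as truthfulness and verbosity.
    from dataclasses import dataclass, field

    CRITERIA = ("truthfulness", "verbosity")

    @dataclass
    class SideBySideRating:
        prompt: str
        response_a: str                                 # e.g. the Gemini response
        response_b: str                                 # e.g. the comparison model's response
        scores_a: dict = field(default_factory=dict)    # criterion -> score (1-5)
        scores_b: dict = field(default_factory=dict)
        minutes_spent: float = 0.0                      # raters reportedly get up to 30 minutes

        def winner(self) -> str:
            """Return 'A', 'B', or 'tie' based on the total score across criteria."""
            total_a = sum(self.scores_a.get(c, 0) for c in CRITERIA)
            total_b = sum(self.scores_b.get(c, 0) for c in CRITERIA)
            if total_a == total_b:
                return "tie"
            return "A" if total_a > total_b else "B"

    rating = SideBySideRating(
        prompt="Summarise the causes of the 2008 financial crisis.",
        response_a="...",
        response_b="...",
        scores_a={"truthfulness": 5, "verbosity": 3},
        scores_b={"truthfulness": 5, "verbosity": 4},
        minutes_spent=12.0,
    )
    print(rating.winner())  # prints "B"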

Claude, known for emphasising safety, has sometimes refused to answer prompts it deems unsafe, unlike Gemini, which has faced criticism for safety violations. In one such instance, Gemini generated responses that were flagged for inappropriate content. Although Google is a significant investor in Anthropic, Claude’s terms of service prohibit its use to train or build competing AI models without prior approval.

A spokesperson for Google DeepMind stated that while the company compares model outputs for evaluation purposes, it does not train Gemini using Anthropic models. Anthropic, however, declined to comment on whether Google had obtained permission to use Claude for these tests. Recent revelations also highlight contractor concerns over Gemini producing potentially inaccurate information on sensitive topics, including healthcare.