FBI warns of AI-driven fraud
The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.
Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone as far as generating real-time video calls to enhance their deception.
To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.
Faculty AI develops AI for military drones
Faculty AI, a consultancy company with significant experience in AI, has been developing AI technologies for both civilian and military applications. Known for its close work with the UK government on AI safety, the NHS, and education, Faculty is also exploring the use of AI in military drones. The company has been involved in testing AI models for the UK’s AI Safety Institute (AISI), which was established to study the implications of AI safety.
While Faculty has worked extensively with AI in non-lethal areas, its work with military applications raises concerns due to the potential for autonomous systems in weapons, including drones. Though Faculty has not disclosed whether its AI work extends to lethal drones, it continues to face scrutiny over its dual roles in advising both the government on AI safety and working with defense clients.
The company has also generated some controversy because of its growing influence in both the public and private sectors. Some experts, including Green Party members, have raised concerns about potential conflicts of interest due to Faculty’s widespread government contracts and its private sector involvement in AI, such as its collaborations with OpenAI and defence firms. Faculty’s work on AI safety is seen as crucial, but critics argue that its broad portfolio could create a risk of bias in the advice it provides.
Despite these concerns, Faculty maintains that its work is guided by strict ethical policies, and it has emphasised its commitment to ensuring AI is used safely and responsibly, especially in defence applications. As AI continues to evolve, experts call for caution, with discussions about the need for human oversight in the development of autonomous weapons systems growing more urgent.
US tech leaders oppose proposed export limits
A prominent technology trade group has urged the Biden administration to reconsider a proposed rule that would restrict global access to US-made AI chips, warning that the measure could undermine America’s leadership in the AI sector. The Information Technology Industry Council (ITI), representing major companies like Amazon, Microsoft, and Meta, expressed concerns that the restrictions could unfairly limit US companies’ ability to compete globally while allowing foreign rivals to dominate the market.
The proposed rule, expected to be released as soon as Friday, is part of the Commerce Department’s broader strategy to regulate AI chip exports and prevent misuse, particularly by adversaries like China. The restrictions aim to curb the potential for AI to enhance China’s military capabilities. However, in a letter to Commerce Secretary Gina Raimondo, ITI CEO Jason Oxman criticised the administration’s urgency in finalising the rule, warning of ‘significant adverse consequences’ if implemented hastily. Oxman called for a more measured approach, such as issuing a proposed rule for public feedback rather than enacting an immediate policy.
Industry leaders have been vocal in their opposition, describing the draft rule as overly broad and damaging. The Semiconductor Industry Association raised similar concerns earlier this week, and Oracle’s Executive Vice President Ken Glueck slammed the measure as one of the most disruptive ever proposed for the US tech sector. Glueck argued the rule would impose sweeping regulations on the global commercial cloud industry, stifling innovation and growth.
While the administration has yet to comment on the matter, the growing pushback highlights the tension between safeguarding national security and maintaining US dominance in the rapidly evolving field of AI.
Amazon invests $11 billion in Georgia
Amazon Web Services (AWS) has announced a $11 billion investment to build new data centres in Georgia, aiming to support the growing demand for cloud computing and AI technologies. The facilities, located in Butts and Douglas counties, are expected to create at least 550 high-skilled jobs and position Georgia as a leader in digital innovation.
The move highlights a broader trend among tech giants investing heavily in AI-driven advancements. Last week, Microsoft revealed an $80 billion plan for fiscal 2025 to expand data centres for AI training and cloud applications. These facilities are critical for supporting resource-intensive AI technologies like machine learning and generative models, which require vast computational power and specialised infrastructure.
The surge in AI infrastructure has also raised concerns about energy consumption. A report from the Electric Power Research Institute suggests data centres could account for up to 9% of US electricity usage by 2030. To address this, Amazon has secured energy supply agreements with utilities like Talen Energy in Pennsylvania and Entergy in Mississippi, ensuring reliable power for its expanding operations.
Amazon’s commitment underscores the growing importance of AI and cloud services, as companies race to meet the demands of a rapidly evolving technological landscape.
OpenAI confident in AGI but faces safety concerns
OpenAI CEO Sam Altman has stated that the company believes it knows how to build AGI and is now turning its focus towards developing superintelligence. He argues that advanced AI could significantly boost scientific discovery and economic growth. While AGI is often defined as AI that outperforms humans in most tasks, OpenAI and Microsoft also use a financial benchmark—$100 billion in profits—as a key measure.
Despite Altman’s optimism, today’s AI systems still struggle with accuracy and reliability. OpenAI has previously acknowledged that transitioning to a world with superintelligence is far from certain, and controlling such systems remains an unsolved challenge. The company has, however, recently disbanded key safety teams, leading to concerns about its priorities as it seeks further investment.
Altman remains confident that AI will soon make a significant impact on businesses, suggesting that AI agents could enter the workforce and reshape industries in the near future. He insists that OpenAI continues to balance innovation with safety, despite growing scepticism from former staff and industry critics.
US sanctions Iranian and Russian entities over election meddling
Sanctions have been imposed by the US on organisations in Iran and Russia accused of attempting to influence the 2024 presidential election. The Treasury Department stated these entities, linked to Iran’s Revolutionary Guard Corps (IRGC) and Russia’s military intelligence agency (GRU), aimed to exploit socio-political tensions among voters.
Russia’s accused group utilised AI tools to create disinformation, including manipulated videos targeting a vice-presidential candidate. A network of over 100 websites mimicking credible news outlets was reportedly used to disseminate false narratives. The GRU is alleged to have funded and supported these operations.
Iran’s affiliated entity allegedly planned influence campaigns since 2023, focused on inciting divisions within the US electorate. While Russia’s embassy denied interference claims as unfounded, Iran’s representatives did not respond to requests for comment.
A recent US threat assessment has underscored growing concerns about foreign attempts to disrupt American democracy, with AI emerging as a critical tool for misinformation. Officials reaffirmed their commitment to safeguarding the electoral process.
Hackers target Chrome extensions in data breach campaign
A series of intrusions targeting Chrome browser extensions has compromised multiple companies since mid-December, experts revealed. Among the victims is Cyberhaven, a California-based data protection company. The breach, confirmed by Cyberhaven on Christmas Eve, is reportedly part of a larger campaign aimed at developers of Chrome extensions across various industries.
Cyberhaven stated it is cooperating with federal law enforcement to address the issue. Browser extensions, commonly used to enhance web browsing, can also pose risks when maliciously altered. Cyberhaven’s Chrome extension, for example, is designed to monitor and secure client data within web-based applications.
Experts identified other compromised extensions, including those involving AI and virtual private networks. Jaime Blasco, cofounder of Texas-based Nudge Security, noted that the attacks appear opportunistic, aiming to harvest sensitive data from numerous sources. Some breaches date back to mid-December, indicating an ongoing effort.
Federal authorities, including the US cyber watchdog CISA, have redirected inquiries to the affected companies. Alphabet, maker of the Chrome browser, has yet to respond to requests for comment.
ChatGPT search found vulnerable to manipulation
New research by The Guardian reveals that ChatGPT Search, OpenAI’s recently launched AI-powered search tool, can be misled into generating false or overly positive summaries. By embedding hidden text in web pages, researchers demonstrated that the AI could ignore negative reviews or even produce malicious code.
The feature, designed to streamline browsing by summarising content such as product reviews, is susceptible to hidden text attacks—a well-known vulnerability in large language models. While this issue has been studied before, this marks the first time such manipulation has been proven on a live AI search tool.
OpenAI did not comment on this specific case but stated it employs measures to block malicious websites and is working to improve its defences. Experts note that competitors like Google, with more experience in search technology, have developed stronger safeguards against similar threats.