FBI warns of AI-driven fraud

The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.

Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone as far as staging real-time deepfake video calls to enhance their deception.

To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.
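One of those recommendations, two-factor authentication, commonly relies on time-based one-time passwords (TOTP). The sketch below, using the pyotp library, is a minimal illustration of how such codes are generated and checked; the secret here is created on the spot purely for demonstration.

```python
# Minimal TOTP sketch using the pyotp library (pip install pyotp).
# The secret is generated on the spot purely for illustration; in practice
# it is provisioned once (often via a QR code) and stored securely.
import pyotp

secret = pyotp.random_base32()   # shared secret between user and service
totp = pyotp.TOTP(secret)        # 6-digit code that rotates every 30 seconds

code = totp.now()                # what the authenticator app would display
print("Current one-time code:", code)

# Server-side check: valid_window=1 tolerates slight clock drift.
print("Code accepted:", totp.verify(code, valid_window=1))
```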

How AI helped fraudsters steal £20,000 from a UK woman

Ann Jensen, a woman from Salisbury, was deceived into losing £20,000 through an AI-powered investment scam that falsely claimed endorsement by UK Prime Minister Sir Keir Starmer. The scammers used deepfake technology to mimic Starmer, promoting a fraudulent cryptocurrency investment opportunity. After persuading her to invest an initial sum, they convinced her to take out a bank loan, only to vanish with the funds.

The scam left Ms. Jensen not only financially devastated but also emotionally shaken, describing the experience as a “physical reaction” where her “body felt like liquid.” Now facing a £23,000 repayment over 27 years, she reflects on the incident as a life-altering crime. “It’s tainted me for life,” she said, emphasising that while she doesn’t feel stupid, she considers herself a victim.

Cybersecurity expert Dr. Jan Collie highlighted how AI tools are weaponised by criminals to clone well-known figures’ voices and mannerisms, making scams appear authentic. She advises vigilance, suggesting people look for telltale signs like mismatched movements or pixelation in videos to avoid falling prey to these sophisticated frauds.
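Pixelation and blur of the kind Dr Collie describes can, in simple cases, be surfaced with basic image processing. The sketch below is not a deepfake detector; it only illustrates a crude frame-level sharpness check with OpenCV, and the video file name and threshold are chosen purely for illustration.

```python
# Crude per-frame sharpness check with OpenCV (pip install opencv-python).
# This is NOT a deepfake detector; it merely illustrates the kind of
# frame-level artefact (blur/pixelation) experts suggest watching for.
import cv2

def frame_sharpness(path: str, max_frames: int = 300) -> list[float]:
    """Return the Laplacian variance (a simple sharpness score) per frame."""
    cap = cv2.VideoCapture(path)
    scores = []
    while len(scores) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    cap.release()
    return scores

if __name__ == "__main__":
    scores = frame_sharpness("suspect_call.mp4")   # hypothetical file name
    if scores:
        soft = sum(s < 50 for s in scores)          # threshold is illustrative only
        print(f"{soft}/{len(scores)} frames look unusually soft or pixelated")
```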

AI-cloned voices fool bank security systems

Advancements in AI voice cloning have revealed vulnerabilities in banking security, as a BBC reporter demonstrated how cloned voices can bypass voice recognition systems. Using an AI-generated version of her voice, she successfully accessed accounts at two major banks, Santander and Halifax, simply by playing back the phrase “my voice is my password.”

The experiment highlighted potential security gaps, as the cloned voice worked on basic speakers and required no high-tech setup. While the banks noted that voice ID is part of a multi-layered security system, they maintained that it is more secure than traditional authentication methods. Experts, however, view this as a wake-up call about the risks posed by generative AI.
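Voice-ID systems of this kind typically compare a caller's speech against an enrolled voiceprint, for example by measuring the similarity of speaker embeddings and accepting the caller once a threshold is cleared. The sketch below uses synthetic vectors in place of real embeddings (the embedding size, noise levels, and threshold are assumptions for illustration) to show why a sufficiently faithful clone can clear the same bar as the genuine speaker.

```python
# Toy illustration of threshold-based voice verification via cosine similarity.
# Real systems derive embeddings from a speaker-encoder model; here the vectors
# are synthetic and the 0.8 threshold is an arbitrary, illustrative choice.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(claimed: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """Accept the caller if their embedding is close enough to the enrolled voiceprint."""
    return cosine_similarity(claimed, enrolled) >= threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=192)                        # the customer's stored voiceprint
genuine = enrolled + rng.normal(scale=0.1, size=192)   # same speaker, new recording
cloned = enrolled + rng.normal(scale=0.2, size=192)    # a faithful AI clone lands nearby too

print("genuine accepted:", verify(genuine, enrolled))
print("cloned accepted: ", verify(cloned, enrolled))
```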

Cybersecurity specialists warn that rapid advancements in voice cloning technology could increase opportunities for fraud. They emphasise the importance of evolving defenses to address these challenges, especially as AI continues to blur the lines between real and fake identities.

New OSI guidelines clarify open source standards for AI

The Open Source Initiative (OSI) has introduced version 1.0 of its Open Source AI Definition (OSAID), setting new standards for AI transparency and accessibility. Developed over several years in collaboration with academia and industry, the OSAID aims to establish clear criteria for what qualifies as open-source AI. The OSI says the definition will help align policymakers, developers, and industry leaders on a common understanding of ‘open source’ in the rapidly evolving field of AI.

According to OSI Executive Vice President Stefano Maffulli, the goal is to make sure AI models labelled as open source provide enough detail for others to recreate them and disclose essential information about training data, such as its origin and processing methods. The OSAID also emphasises that open source AI should grant users freedom to modify and build upon the models, without restrictive permissions. While OSI lacks enforcement power, it plans to advocate for its definition as the AI community’s reference point, aiming to combat “open source” claims that don’t meet OSAID standards.
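As a rough illustration of how criteria like these might be checked in practice, the sketch below encodes a simplified openness checklist loosely inspired by the points above; the field names and checks are this example's own, not the OSAID's actual text.

```python
# Simplified, illustrative checklist loosely inspired by the OSAID criteria
# described above. Field names and checks are this sketch's own, not OSI's text.
from dataclasses import dataclass

@dataclass
class ModelRelease:
    name: str
    weights_available: bool            # can others obtain and run the model?
    training_data_provenance: bool     # origin and processing of training data disclosed?
    recreation_details_released: bool  # enough detail for others to substantially recreate it?
    licence_allows_modification: bool  # users may modify and build on it without extra permission?
    usage_restrictions: bool           # field-of-use or similar restrictive terms?

def openness_gaps(m: ModelRelease) -> list[str]:
    """Return the list of unmet criteria (an empty list means the checklist passes)."""
    gaps = []
    if not m.weights_available:
        gaps.append("model weights not available")
    if not m.training_data_provenance:
        gaps.append("training data provenance not disclosed")
    if not m.recreation_details_released:
        gaps.append("recreation details not released")
    if not m.licence_allows_modification:
        gaps.append("licence does not allow modification")
    if m.usage_restrictions:
        gaps.append("licence imposes usage restrictions")
    return gaps

release = ModelRelease("example-llm", True, False, True, True, True)  # hypothetical release
print(openness_gaps(release))
```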

The new definition comes as some companies, including Meta and Stability AI, use the open-source label without fully meeting transparency requirements. Meta, a financial supporter of the OSI, has voiced reservations about the OSAID, citing the need for protective restrictions around its Llama models. In contrast, OSI contends that AI models should be openly accessible to allow for a truly open-source AI ecosystem, rather than restricted by proprietary data and usage limitations.

Maffulli acknowledges the OSAID may need frequent updates as technology and regulations evolve. OSI has created a committee to monitor its application and adjust as necessary, with an eye on refining the open-source definition to address emerging issues like copyright and proprietary data.

US finalising rules to curb investment in China’s AI and defence tech

On Monday, the Biden administration announced new rules restricting US investments in specific technology sectors in China, including AI, semiconductors, and quantum computing, citing national security concerns. These rules, effective from 2 January, aim to prevent US capital and expertise from aiding China’s development of military and intelligence capabilities. Issued under an executive order from August 2023, the regulations will be managed by the Treasury’s new Office of Global Transactions.

The targeted technologies are considered crucial to future military and cyber defence. Treasury officials note that US investments often bring more than money, including managerial support, network access, and intellectual expertise, all of which could benefit Chinese advancements in sensitive sectors. A senior Treasury official, Paul Rosen, emphasised that these restrictions curb potential US involvement in developing cutting-edge technologies for adversarial nations.

US Commerce Secretary Gina Raimondo has previously highlighted the importance of these measures, viewing them as essential to slowing China’s progress in military technologies. The new regulations allow for investments in publicly traded Chinese securities; however, existing rules still restrict transactions involving certain Chinese firms deemed to support military development.

Additionally, the rules respond to recent criticism from the House Select Committee on China, which has scrutinised American index providers for funnelling US investments into Chinese companies linked to military advancements. With these regulations, the administration underscores its intent to protect US interests by limiting China’s access to critical technology expertise and capital.

Cybercriminals use AI to target elections, says OpenAI

OpenAI reports cybercriminals are increasingly using its AI models to generate fake content aimed at influencing elections. The startup has neutralised over 20 attempts this year, including accounts producing articles on the US elections. Several accounts from Rwanda were banned in July for similar activities related to elections in that country.

The company confirmed that none of these attempts succeeded in generating viral engagement or reaching sustainable audiences. However, the use of AI in election interference remains a growing concern, especially as the US approaches its presidential elections. The US Department of Homeland Security also warns of foreign nations attempting to spread misinformation using AI tools.

As OpenAI strengthens its global position, the rise in election manipulation efforts underscores the critical need for heightened vigilance. The company recently completed a $6.6 billion funding round, further securing its status as one of the most valuable private firms.

ChatGPT continues to grow rapidly, having amassed 250 million weekly active users since its launch in November 2022, highlighting the platform’s widespread influence.

Mexico emerges as top target for cybercrime in Latin America

Mexico has become the focal point for cybercrime in Latin America, accounting for over 50% of all reported cyber threats in the region during the first half of 2024, according to a study by cybersecurity firm Fortinet. The country recorded 31 billion cybercrime attempts as hackers exploit its strategic ties with the US and booming industries such as logistics and manufacturing, which are targeted for larger ransom payouts.

Fortinet’s report highlighted how cybercriminals are using advanced tools, such as AI, to streamline attacks and focus on specific sectors for maximum impact. The rapid shift of production closer to the US, known as nearshoring, has made Mexico’s electronics and automotive industries prime targets. Despite a slight dip in attack numbers compared to last year, the overall threat level remains significant.

Experts, including Fortinet executives, emphasised the need for Mexico to strengthen its cybersecurity laws. While President Claudia Sheinbaum has pledged to establish a cybersecurity and AI centre, there has been no mention of legal measures yet. Cybersecurity professionals warn that urgent action is needed as Mexico’s role in global supply chains continues to grow.

Forrester: Cybercrime to cost $12 trillion in 2025

Forrester’s 2025 Predictions report outlines critical cybersecurity, risk, and privacy challenges on the horizon. Cybercrime is expected to cost $12 trillion in 2025, with regulators stepping up efforts to protect consumer data. Organisations are urged to adopt proactive security measures to mitigate operational impacts, particularly as AI technologies and IoT devices expand.

Another major prediction is that Western governments will move to prohibit certain third-party or open-source software due to rising concerns over software supply chain attacks, which are a leading cause of worldwide data breaches. Pressure from those governments has already prompted private companies to produce software bills of materials (SBOMs), improving transparency about the components their software relies on.

However, these SBOMs also reveal the reliance on third-party and open-source software in government purchases. In 2025, armed with this knowledge, Forrester says that a government will impose restrictions on a specific open-source component for national security reasons. Consequently, software suppliers will need to eliminate the problematic components and find alternatives to maintain functionality.
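The mechanics here are straightforward: an SBOM is a machine-readable inventory of a product's components, so checking it against a restricted list is largely a parsing exercise. The sketch below works against a CycloneDX-style JSON SBOM; the file name and the restricted component are hypothetical.

```python
# Minimal sketch: scan a CycloneDX-style JSON SBOM for restricted components.
# The file name and restricted list are hypothetical; real policies would match
# on package URLs (purls), version ranges, and licences, not just names.
import json

RESTRICTED = {"example-banned-lib"}   # hypothetical components a government might restrict

def flag_restricted(sbom_path: str) -> list[str]:
    with open(sbom_path) as f:
        sbom = json.load(f)
    hits = []
    for component in sbom.get("components", []):   # CycloneDX lists parts under "components"
        name = component.get("name", "")
        if name in RESTRICTED:
            hits.append(f"{name} {component.get('version', '?')}")
    return hits

if __name__ == "__main__":
    for hit in flag_restricted("sbom.cdx.json"):    # hypothetical SBOM file
        print("restricted component found:", hit)
```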

Among the key forecasts is the EU issuing its first fine under the new EU AI Act to a general-purpose AI (GPAI) model provider. Forrester warns that companies unprepared for AI regulations will face significant third-party risks. As generative AI models become more widespread, businesses must thoroughly vet providers and gather evidence to avoid fines and investigations.

The report also predicts a large-scale Internet of Things (IoT) device breach, with malicious actors finding it easier to compromise common IoT systems. Such breaches could lead to widespread disruption, forcing organisations to engage in costly remediation efforts.

Forrester also anticipates that Chief Information Security Officers (CISOs) will reduce their focus on generative AI applications by 10%, citing a need for measurable value. Currently, 35% of global CISOs and CIOs prioritise AI to boost employee productivity, but growing disillusionment and limited budgets are expected to hinder further AI adoption. The report reveals that 18% of global AI decision-makers already see budget limitations as a major barrier, a figure projected to increase as organisations struggle to justify investment in AI initiatives.

The report also highlights a rise in cybersecurity incidents. In 2023, 28% of security decision-makers reported six or more data breaches, up 16 percentage points from 2022. Additionally, 72% of those decision-makers experienced data breach costs exceeding $1 million. Despite these alarming statistics, only 16% of global security leaders prioritised testing and refining their incident response processes in 2023, leaving many organisations unprepared for future attacks.

Human-related cybersecurity risks, such as deepfakes, insider data theft, generative AI misuse, and human error, are expected to become more complex as communication channels expand. Forrester also explores how generative AI could reshape identity and access management, addressing challenges like identity administration, audit processes, lifecycle management, and authentication. In conclusion, the report urges companies to brace for evolving threats and adopt forward-thinking strategies to protect their assets as cybersecurity landscapes shift.