FBI warns of AI-driven fraud
The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.
Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone as far as generating real-time video calls to enhance their deception.
To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.
Digital Bamboo Diplomacy: Vietnam’s strategic role in the tech geopolitics
The backdrop for the emergence of digital bamboo diplomacy is the deepening of Sino-American techno-decoupling. As tensions rise between the USA and China, many tech companies are seeking to preserve their supply chains by relocating production facilities to other Asian countries, with Vietnam being a primary location. For instance, Google has shifted the production of its latest Pixel smartphones from China to Vietnam. Similarly, microprocessor giants like Qualcomm have opened research and development centres in the country, and Intel has announced a substantial investment of USD 3.3 billion.
Vietnamese diplomacy is crucial in facilitating this digital shift. Traditional bamboo diplomacy, characterised by its flexibility and adaptability, is now infused with a digital edge.
Digital diplomacy featured high during the meeting on 28 November 2024 of newly appointed ambassadors ov Vietnam, which was hosted by the Ministry of Foreign Affairs and Ministry of Information and Communications. Deputy Minister of Foreign Affairs of Vietnam called to active participation on businesses in Vietnam’s digital diplomacy.
An importance of digital diplomacy was highlighted in December during the annual meeting of Vietnamese diplomats. Prime Minister Pham Minh Chinh highlighted the vital role of diplomacy in promoting emerging industries such as semiconductors, big data, AI, cloud computing, blockchain technology, cultural industries, and entertainment.
Vietnam’s diplomacy can foster tech priorities through regional initiatives and agreements such as the Regional Comprehensive Economic Partnership (RCEP) and the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP).
The upcoming UN Cybercrime Convention in 2025, to be signed in Hanoi and named the Hanoi Convention on Cybercrime, is a testament to Vietnam’s growing influence in the digital domain. As Vietnam continues to navigate the complexities of digital diplomacy, it stands poised to play a significant role in shaping the future of technology in Asia.
Hackers target Chrome extensions in data breach campaign
A series of intrusions targeting Chrome browser extensions has compromised multiple companies since mid-December, experts revealed. Among the victims is Cyberhaven, a California-based data protection company. The breach, confirmed by Cyberhaven on Christmas Eve, is reportedly part of a larger campaign aimed at developers of Chrome extensions across various industries.
Cyberhaven stated it is cooperating with federal law enforcement to address the issue. Browser extensions, commonly used to enhance web browsing, can also pose risks when maliciously altered. Cyberhaven’s Chrome extension, for example, is designed to monitor and secure client data within web-based applications.
Experts identified other compromised extensions, including those involving AI and virtual private networks. Jaime Blasco, cofounder of Texas-based Nudge Security, noted that the attacks appear opportunistic, aiming to harvest sensitive data from numerous sources. Some breaches date back to mid-December, indicating an ongoing effort.
Federal authorities, including the US cyber watchdog CISA, have redirected inquiries to the affected companies. Alphabet, maker of the Chrome browser, has yet to respond to requests for comment.
ChatGPT search found vulnerable to manipulation
New research by The Guardian reveals that ChatGPT Search, OpenAI’s recently launched AI-powered search tool, can be misled into generating false or overly positive summaries. By embedding hidden text in web pages, researchers demonstrated that the AI could ignore negative reviews or even produce malicious code.
The feature, designed to streamline browsing by summarising content such as product reviews, is susceptible to hidden text attacks—a well-known vulnerability in large language models. While this issue has been studied before, this marks the first time such manipulation has been proven on a live AI search tool.
OpenAI did not comment on this specific case but stated it employs measures to block malicious websites and is working to improve its defences. Experts note that competitors like Google, with more experience in search technology, have developed stronger safeguards against similar threats.
India launches AI-driven consumer protection initiatives
The Indian government has launched several initiatives to strengthen consumer protection, focusing on leveraging technology and enhancing online safety. Key developments include the introduction of the AI-enabled National Consumer Helpline, the e-Maap Portal, and the Jago Grahak Jago mobile application, all designed to expedite the resolution of consumer complaints and empower citizens to make informed choices.
The government of India also highlighted the significant progress made through the three-tier consumer court system, resolving thousands of disputes this year. In the realm of e-commerce, major platforms like Reliance Retail, Tata Sons, and Zomato pledged to enhance online shopping security, reflecting the government’s commitment to ensuring consumer confidence in the digital marketplace.
The e-Daakhil Portal has been expanded nationwide, achieving 100% adoption in states like Karnataka, Punjab, and Rajasthan, making it easier for consumers to file complaints online. The Consumer Protection Authority (CCPA) is also drafting new guidelines to regulate surrogate advertising and has already taken action against 13 companies for non-compliance with existing rules.
The importance of these initiatives was underscored at the National Consumer Day event, where key officials, including Minister of State for Consumer Affairs B L Verma and TRAI Chairman Anil Kumar Lahoti, were present. The event highlighted the government’s ongoing efforts to foster a safer and more transparent consumer environment, especially in the rapidly evolving digital landscape.
Hidden vulnerabilities in ChatGPT search tool uncovered
OpenAI’s ChatGPT search tool is under scrutiny after a Guardian investigation revealed vulnerabilities to manipulation and malicious content. Hidden text on websites can alter AI responses, raising concerns over the tool’s reliability. The search feature, currently available to premium users, could misrepresent products or services by summarising planted positive content, even when negative reviews exist.
Cybersecurity researcher Jacob Larsen warned that the AI system in its current form might enable deceptive practices. Tests revealed how hidden prompts on webpages influence ChatGPT to deliver biased reviews. The same mechanism could be exploited to distribute malicious code, as highlighted in a recent cryptocurrency scam where the tool inadvertently shared credential-stealing instructions.
Experts emphasised that while combining search with AI models like ChatGPT offers potential, it also increases risks. Karsten Nohl, a scientist at SR Labs, likened such AI tools to a ‘co-pilot’ requiring oversight. Misjudgments by the technology could amplify risks, particularly as it lacks the ability to critically evaluate sources.
OpenAI acknowledges the possibility of errors, cautioning users to verify information. However, broader implications, such as how these vulnerabilities could impact website practices, remain unclear. Hidden text, while traditionally penalised by search engines like Google, may find new life in manipulating AI-based tools, posing challenges for OpenAI in securing the system.
How AI helped fraudsters steal £20,000 from a UK woman
Ann Jensen, a woman from Salisbury, was deceived into losing £20,000 through an AI-powered investment scam that falsely claimed endorsement by UK Prime Minister Sir Keir Starmer. The scammers used deepfake technology to mimic Starmer, promoting a fraudulent cryptocurrency investment opportunity. After persuading her to invest an initial sum, they convinced her to take out a bank loan, only to vanish with the funds.
The scam left Ms. Jensen not only financially devastated but also emotionally shaken, describing the experience as a “physical reaction” where her “body felt like liquid.” Now facing a £23,000 repayment over 27 years, she reflects on the incident as a life-altering crime. “It’s tainted me for life,” she said, emphasising that while she doesn’t feel stupid, she considers herself a victim.
Cybersecurity expert Dr. Jan Collie highlighted how AI tools are weaponised by criminals to clone well-known figures’ voices and mannerisms, making scams appear authentic. She advises vigilance, suggesting people look for telltale signs like mismatched movements or pixelation in videos to avoid falling prey to these sophisticated frauds.
AI cloned voices fool bank security systems
Advancements in AI voice cloning have revealed vulnerabilities in banking security, as a BBC reporter demonstrated how cloned voices can bypass voice recognition systems. Using an AI-generated version of her voice, she successfully accessed accounts at two major banks, Santander and Halifax, simply by playing back the phrase “my voice is my password.”
The experiment highlighted potential security gaps, as the cloned voice worked on basic speakers and required no high-tech setup. While the banks noted that voice ID is part of a multi-layered security system, they maintained that it is more secure than traditional authentication methods. Experts, however, view this as a wake-up call about the risks posed by generative AI.
Cybersecurity specialists warn that rapid advancements in voice cloning technology could increase opportunities for fraud. They emphasise the importance of evolving defenses to address these challenges, especially as AI continues to blur the lines between real and fake identities.