AI model Aitana takes social media by storm

In Barcelona, a pink-haired 25-year-old named Aitana captivates social media with her stunning images and relatable personality. But Aitana isn’t a real person—she’s an AI model created by The Clueless Agency. Launched during a challenging period for the agency, Aitana was designed as a solution to the unpredictability of working with human influencers. The virtual model has proven successful, earning up to €10,000 monthly by featuring in advertisements and modelling campaigns.

Aitana has already amassed over 343,000 Instagram followers, with some celebrities unknowingly messaging her for dates. Her creators, Rubén Cruz and Diana Núñez, maintain her appeal by crafting a detailed “life,” including fictional trips and hobbies, to connect with her audience. Unlike traditional models, Aitana has a defined personality, presented as a fitness enthusiast with a determined yet caring demeanour. This strategic design, rooted in current trends, has made her a relatable and marketable figure.

The success of Aitana has sparked a new wave of AI influencers. The Clueless Agency has developed additional virtual models, including a more introverted character named Maia. Brands increasingly seek these customisable AI creations for their campaigns, citing cost efficiency and the elimination of human unpredictability. However, critics warn that the hypersexualised and digitally perfected imagery promoted by such models may negatively influence societal beauty standards and young audiences.

Despite these concerns, Aitana represents a broader shift in advertising and social media. By democratising access to influencer marketing, AI models like her offer new opportunities for smaller businesses while challenging traditional notions of authenticity and influence in the digital age.

Hackers target Chrome extensions in data breach campaign

A series of intrusions targeting Chrome browser extensions has compromised multiple companies since mid-December, experts revealed. Among the victims is Cyberhaven, a California-based data protection company. The breach, confirmed by Cyberhaven on Christmas Eve, is reportedly part of a larger campaign aimed at developers of Chrome extensions across various industries.

Cyberhaven stated it is cooperating with federal law enforcement to address the issue. Browser extensions, commonly used to enhance web browsing, can also pose risks when maliciously altered. Cyberhaven’s Chrome extension, for example, is designed to monitor and secure client data within web-based applications.

Experts identified other compromised extensions, including those involving AI and virtual private networks. Jaime Blasco, cofounder of Texas-based Nudge Security, noted that the attacks appear opportunistic, aiming to harvest sensitive data from numerous sources. Some breaches date back to mid-December, indicating an ongoing effort.

Federal authorities, including the US cyber watchdog CISA, have redirected inquiries to the affected companies. Alphabet, maker of the Chrome browser, has yet to respond to requests for comment.

ChatGPT search found vulnerable to manipulation

New research by The Guardian reveals that ChatGPT Search, OpenAI’s recently launched AI-powered search tool, can be misled into generating false or overly positive summaries. By embedding hidden text in web pages, researchers demonstrated that the AI could ignore negative reviews or even produce malicious code.

The feature, designed to streamline browsing by summarising content such as product reviews, is susceptible to hidden text attacks—a well-known vulnerability in large language models. While this issue has been studied before, this marks the first time such manipulation has been proven on a live AI search tool.

OpenAI did not comment on this specific case but stated it employs measures to block malicious websites and is working to improve its defences. Experts note that competitors like Google, with more experience in search technology, have developed stronger safeguards against similar threats.

Google tests Gemini AI against Anthropic’s Claude

Google contractors improving the Gemini AI model have been tasked with comparing its responses against those of Anthropic’s Claude, according to internal documents reviewed by TechCrunch. The evaluation process involves scoring responses on criteria such as truthfulness and verbosity, with contractors given up to 30 minutes per prompt to determine which model performs better. Notably, some outputs identify themselves as Claude, sparking questions about Google’s use of its competitor’s model.

Claude’s responses, known for emphasising safety, have sometimes refused to answer prompts deemed unsafe, unlike Gemini, which has faced criticism for safety violations. One such instance involved Gemini generating responses flagged for inappropriate content. Despite Google’s significant investment in Anthropic, Claude’s terms of service prohibit its use to train or build competing AI models without prior approval.

A spokesperson for Google DeepMind stated that while the company compares model outputs for evaluation purposes, it does not train Gemini using Anthropic models. Anthropic, however, declined to comment on whether Google had obtained permission to use Claude for these tests. Recent revelations also highlight contractor concerns over Gemini producing potentially inaccurate information on sensitive topics, including healthcare.

AI discovers how we see flavours

Generative AI, has begun to mimic an intriguing aspect of human perception, the blending of sensory experiences. Research shows that humans naturally associate colours, shapes, and even sounds with flavours a phenomenon known as cross-modal correspondence. For instance, red hues often evoke sweetness, while sharp shapes suggest bitterness. AI systems, trained on human data, appear to be trained to replicate these associations, offering new perspectives on how deeply such connections are embedded in our perception.

This revelation emerged through studies where AI was tasked with answering prompts about the relationships between sensory elements, such as the sweetness of certain shapes or colours. The results closely mirrored human responses, particularly when using advanced models like ChatGPT-4. Researchers believe this reflects the biases in the data the AI was trained on, highlighting how common and universal these sensory links might be.

The potential applications of this technology are vast. Marketing, for example, could use AI to design products and packaging that enhance sensory appeal. However, experts warn that AI’s insights should complement, not replace, human creativity. While AI offers inspiration, the nuances of human perception remain essential for creating designs that resonate deeply with people.

By understanding how AI interprets sensory input, researchers hope to not only enhance technology but also unlock more about the mysteries of the human brain. As AI continues to explore the sensory dimensions, it might pave the way for innovative approaches to art, marketing, and even neuroscience.

UN discusses ethical tech and inclusion at IGF 2024

Speakers at IGF 2024 highlighted digital innovation within the United Nations system, demonstrating how emerging technologies are enhancing services and operational efficiency. Representatives from UNHCR, UNICEF, the UN Pension Fund, and UNICC shared their organisations’ progress and collaborative efforts.

Michael Walton, Head of Digital Services at UNHCR, detailed initiatives supporting refugees through digital tools. These include mobile apps for services and efforts to counter misinformation. Walton stressed the importance of digital inclusion and innovation to bridge gaps in education and access for vulnerable groups.

Fui Meng Liew, Chief of Digital Center of Excellence at UNICEF, emphasised safeguarding children’s data rights through a comprehensive digital resilience framework. UNICEF’s work also involves developing digital public goods, with a focus on accessibility for children with disabilities and securing data privacy.

Dino Cataldo Dell’Accio from the UN Pension Fund presented a blockchain-powered proof-of-life system that uses biometrics and AI in support of e-Government for the aging population. This system ensures beneficiaries’ security and privacy while streamlining verification processes. Similarly, Sameer Chauhan of UNICC showcased digital solutions like AI chatbots and cybersecurity initiatives supporting UN agencies.

The session’s collaborative tone extended into discussions of the UN Digital ID project, which links multiple UN agencies. Audience members raised questions on accessibility, with Nancy Marango and Sary Qasim suggesting broader use of these solutions to support underrepresented communities globally.

Efforts across UN organisations reflect a shared commitment to ethical technology use and digital inclusion. The panellists urged collaboration and transparency as key to addressing challenges such as data protection and equitable access while maintaining focus on innovation.

FTC investigates Microsoft over antitrust concerns

The US Federal Trade Commission (FTC) has initiated an antitrust investigation into Microsoft, examining its software licensing, cloud computing operations, and AI-related practices. Sources indicate the probe, approved by FTC Chair Lina Khan before her anticipated departure, also investigates claims of restrictive licensing aimed at limiting competition in cloud services.

Microsoft is the latest Big Tech firm under regulatory pressure. Alphabet, Apple, Meta, and Amazon face similar lawsuits over alleged monopolistic practices in markets ranging from app stores to advertising. Penalties and court rulings loom as regulators focus on digital fairness.

The FTC’s probe highlights growing concerns about the influence of Big Tech on consumer choice and competition. As scrutiny intensifies, the outcomes could reshape the technology sector’s landscape, impacting businesses and consumers alike.

OLMo 2 models rival Meta’s best in performance

Ai2, a nonprofit AI research group, has introduced OLMo 2, a groundbreaking series of open-source language models designed for transparency and reproducibility. The models, developed using open-access data and tools, align with the Open Source Initiative’s standards for AI, setting them apart from many competitors.

The OLMo 2 series includes two versions: one with 7 billion parameters and another with 13 billion, making them powerful tools for tasks like summarising documents, answering questions, and generating code. Trained on a dataset of 5 trillion tokens sourced from websites, academic papers, and other vetted materials, the models perform competitively against Meta’s Llama 3.1.

While some critics voice concerns about potential misuse of open models, Ai2 argues their benefits outweigh the risks. By making the models freely available under an Apache 2.0 license, the organisation hopes to democratise AI development and promote ethical innovation.