Panasonic reveals AI-powered wellness assistant Umi at CES 2025
Panasonic has introduced Umi, a digital assistant designed to support family wellness, at CES 2025 in Las Vegas. Developed in partnership with AI startup Anthropic, Umi uses the Claude AI model to help families set and achieve personal goals, such as improving fitness or spending more time together. The interactive platform allows users to engage with the AI through voice chat and a mobile app, where they can create routines, manage tasks, and communicate in group chats.
The assistant is also aimed at caregivers looking after aging parents, offering a way to stay informed about their well-being even when living apart. Panasonic has collaborated with AARP to enhance Umi’s ability to support older adults. Additionally, the platform will connect users with wellness experts and integrate with partners such as Calm, Blue Apron, SleepScore Labs, and Precision Nutrition to help families build healthy habits.
Umi is expected to launch in the United States in 2025, with Panasonic positioning it as part of a broader wellness initiative. The partnership with Anthropic extends beyond consumer products, as Panasonic plans to integrate the Claude AI model into its own operations to enhance customer service, marketing, and coding efficiency.
Israeli agri-tech startup Fermata secures funding for AI-powered farming solutions
Israeli startup Fermata, founded in 2020 by bioinformatics expert Valeria Kogan, is using AI and computer vision to monitor greenhouse crops for diseases and pests. The company’s software works with standard cameras, capturing images of plants twice a day and alerting farmers to potential infestations via an app. Initially considering robotic solutions, Kogan shifted focus after consulting with farmers, realising that simpler camera-based monitoring was more effective.
Based in Israel, Fermata has gained traction by prioritising farmer needs and keeping its AI training in-house, improving model accuracy. Partnering with major agricultural firms like Bayer and Syngenta, the company has deployed over 100 cameras and continues to expand. The startup recently secured a $10 million Series A investment from Raw Ventures, its existing investor, to scale operations and work towards profitability by 2026.
Plans for growth include strengthening the sales team and expanding beyond greenhouse tomatoes into new crops. Despite AI’s previous struggles in agriculture, Fermata’s practical approach and farmer-centric model have helped it carve a niche in the industry.
Ryzen AI and Fire Range: AMD’s big CES 2025 reveals
AMD has announced a range of new processors and graphics cards at CES 2025, including high-performance desktop CPUs, energy-efficient laptop chips, and AI-powered processors for next-generation Copilot+ PCs. The company’s latest flagship, the Ryzen 9 9950X3D, targets gamers and creators with 16 cores and speeds of up to 5.7GHz, offering an 8% performance boost in select games compared to its predecessor. AMD also introduced the Fire Range series for laptops and the Ryzen AI 300 and Ryzen AI Max chips, which integrate neural processing units for AI workloads.
The growing market for handheld gaming PCs has led to the release of AMD’s Ryzen Z2 series, optimised for portable devices. Meanwhile, the company’s new Radeon RX 9070 XT and RX 9070 GPUs, built on RDNA 4 architecture, promise improved ray tracing, AI acceleration, and better media encoding. AMD’s FidelityFX Super Resolution 4.0, designed to enhance gaming visuals with minimal latency, was also unveiled.
Expanding beyond hardware, AMD’s Adrenalin software now includes AI-powered features, such as image generation and local AI models for summarising documents. With a strong market presence and increasing demand for AI and gaming solutions, AMD’s 2025 lineup reflects its strategy to remain competitive across multiple segments.
Can AI really transform drug development?
The growing use of AI in drug development is dividing opinions among researchers and industry experts. Some believe AI can significantly reduce the time and cost of bringing new medicines to market, while others argue that it has yet to solve the high failure rates seen in clinical trials.
AI-driven tools have already helped identify potential drug candidates more quickly, with some companies reducing the preclinical testing period from several years to just 30 months. However, experts point out that these early successes don’t always translate to breakthroughs in human trials, where most drug failures occur.
Unlike in fields such as image recognition, AI in pharmaceuticals faces unique challenges due to limited high-quality data. Experts say AI’s impact could improve if it focuses on understanding why drugs fail in trials, such as problems with dosage, safety, and efficacy. They also recommend new trial designs that incorporate AI to better predict which drugs will succeed in later stages.
While AI won’t revolutionise drug development overnight, researchers agree it can help tackle persistent problems and streamline the process. But achieving lasting results will require better collaboration between AI specialists and drug developers to avoid repeating past mistakes.
Google TV introduces AI-powered news summaries with Gemini
Google has announced a major update to its TV operating system at CES 2025, integrating its Gemini AI assistant to deliver personalised news summaries. The new ‘News Brief’ feature will scrape news articles and YouTube headlines from trusted sources to generate a concise recap of daily events. Google plans to roll out the feature to both new and existing Google TV devices by late 2025.
The move marks Google’s deeper foray into AI-generated news, a space that has faced legal challenges from media companies over copyright concerns. While rival firms like OpenAI and Microsoft have been sued over unlicensed content use, Google’s News Brief does not currently display its sources, apart from related YouTube videos. AI-generated news has also faced accuracy issues, with previous AI models producing misleading or entirely false headlines.
Beyond news summaries, Google aims to make TVs more interactive, with Gemini allowing users to search for films, shows, and YouTube videos using natural language. Future Google TVs will include sensors to detect when users enter the room, enabling a more personalised experience. As the company continues expanding AI features in consumer technology, the success of News Brief may depend on how well it addresses content accuracy and transparency concerns.
FBI warns of AI-driven fraud
The FBI has raised alarms about the growing use of artificial intelligence in scams, particularly through deepfake technology. These AI-generated videos and audio clips can convincingly imitate real people, allowing criminals to impersonate family members, executives, or even law enforcement officials. Victims are often tricked into transferring money or disclosing personal information.
Deepfake scams are becoming more prevalent in the US due to the increasing accessibility of generative AI tools. Criminals exploit these technologies to craft realistic phishing emails, fake social media profiles, and fraudulent investment opportunities. Some have gone as far as generating real-time video calls to enhance their deception.
To protect against these threats, experts recommend limiting the personal information shared online, enabling two-factor authentication, and verifying any unusual or urgent communications. The FBI stresses the importance of vigilance, especially as AI-driven scams become more sophisticated and harder to detect. By understanding these risks and adopting stronger security practices, individuals can safeguard themselves against the growing menace of deepfake fraud.
Faculty AI develops AI for military drones
Faculty AI, a consultancy with significant experience in AI, has been developing AI technologies for both civilian and military applications. Known for its close work with the UK government on AI safety, the NHS, and education, Faculty is also exploring the use of AI in military drones. The company has been involved in testing AI models for the UK’s AI Safety Institute (AISI), which was established to study the safety implications of advanced AI systems.
While Faculty has worked extensively with AI in non-lethal areas, its work on military applications raises concerns due to the potential for autonomous systems in weapons, including drones. Though Faculty has not disclosed whether its AI work extends to lethal drones, it continues to face scrutiny over its dual roles in advising the government on AI safety while also working with defence clients.
The company has also generated controversy because of its growing influence in both the public and private sectors. Some critics, including Green Party politicians, have raised concerns about potential conflicts of interest arising from Faculty’s widespread government contracts and its private sector involvement in AI, such as its collaborations with OpenAI and defence firms. Faculty’s work on AI safety is seen as crucial, but critics argue that its broad portfolio could create a risk of bias in the advice it provides.
Despite these concerns, Faculty maintains that its work is guided by strict ethical policies, and it has emphasised its commitment to ensuring AI is used safely and responsibly, especially in defence applications. As AI continues to evolve, experts call for caution, with discussions about the need for human oversight in the development of autonomous weapons systems growing more urgent.
US tech leaders oppose proposed export limits
A prominent technology trade group has urged the Biden administration to reconsider a proposed rule that would restrict global access to US-made AI chips, warning that the measure could undermine America’s leadership in the AI sector. The Information Technology Industry Council (ITI), representing major companies like Amazon, Microsoft, and Meta, expressed concerns that the restrictions could unfairly limit US companies’ ability to compete globally while allowing foreign rivals to dominate the market.
The proposed rule, expected to be released as soon as Friday, is part of the Commerce Department’s broader strategy to regulate AI chip exports and prevent misuse, particularly by adversaries like China. The restrictions aim to curb the potential for AI to enhance China’s military capabilities. However, in a letter to Commerce Secretary Gina Raimondo, ITI CEO Jason Oxman criticised the administration’s urgency in finalising the rule, warning of ‘significant adverse consequences’ if implemented hastily. Oxman called for a more measured approach, such as issuing a proposed rule for public feedback rather than enacting an immediate policy.
Industry leaders have been vocal in their opposition, describing the draft rule as overly broad and damaging. The Semiconductor Industry Association raised similar concerns earlier this week, and Oracle’s Executive Vice President Ken Glueck slammed the measure as one of the most disruptive ever proposed for the US tech sector. Glueck argued the rule would impose sweeping regulations on the global commercial cloud industry, stifling innovation and growth.
While the administration has yet to comment on the matter, the growing pushback highlights the tension between safeguarding national security and maintaining US dominance in the rapidly evolving field of AI.