Panasonic reveals AI-powered wellness assistant Umi at CES 2025

Panasonic has introduced Umi, a digital assistant designed to support family wellness, at CES 2025 in Las Vegas. Developed in partnership with AI startup Anthropic, Umi uses the Claude AI model to help families set and achieve personal goals, such as improving fitness or spending more time together. The interactive platform allows users to engage with the AI through voice chat and a mobile app, where they can create routines, manage tasks, and communicate in group chats.
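Panasonic hasn't published Umi's integration details, but Claude is accessible to developers through Anthropic's public Messages API; the sketch below shows roughly what a wellness-assistant backend built on that API could look like (the model name, prompts, and function are illustrative assumptions, not Panasonic's implementation):

    # Hypothetical sketch of a wellness assistant calling Claude via Anthropic's
    # Messages API (pip install anthropic; ANTHROPIC_API_KEY must be set).
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def suggest_routine(family_goal: str) -> str:
        """Ask Claude for a short, actionable routine toward a family goal."""
        message = client.messages.create(
            model="claude-3-5-sonnet-20241022",  # assumed model choice
            max_tokens=300,
            system="You are a family wellness coach. Suggest a brief daily routine.",
            messages=[{"role": "user", "content": f"Our goal: {family_goal}"}],
        )
        return message.content[0].text

    print(suggest_routine("spend more screen-free time together after dinner"))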

The assistant is also aimed at caregivers looking after ageing parents, offering a way to stay informed about their well-being even when living apart. Panasonic has collaborated with AARP to enhance Umi’s ability to support older adults. Additionally, the platform will connect users with wellness experts and integrate with partners such as Calm, Blue Apron, SleepScore Labs, and Precision Nutrition to help families build healthy habits.

Umi is expected to launch in the United States in 2025, with Panasonic positioning it as part of a broader wellness initiative. The partnership with Anthropic extends beyond consumer products, as Panasonic plans to integrate the Claude AI model into its own operations to enhance customer service, marketing, and coding efficiency.

Israeli agri-tech startup Fermata secures funding for AI-powered farming solutions

Israeli startup Fermata, founded in 2020 by bioinformatics expert Valeria Kogan, is using AI and computer vision to monitor greenhouse crops for diseases and pests. The company’s software works with standard cameras, capturing images of plants twice a day and alerting farmers to potential infestations via an app. Kogan initially considered robotic solutions, but shifted focus after consulting farmers and realising that simpler camera-based monitoring was more effective.
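Fermata hasn't published its model internals, but the capture-classify-alert shape of such a system is simple to sketch; in the toy example below, a crude leaf-discolouration heuristic stands in for the company's proprietary detector (the colour range, threshold, and alert hook are all invented for illustration):

    # Toy capture -> classify -> alert cycle, run twice a day per the article.
    import cv2
    import numpy as np

    ALERT_THRESHOLD = 0.15  # assumed fraction of discoloured foliage that triggers an alert

    def capture_frame(camera_index: int = 0) -> np.ndarray:
        """Grab one frame from a standard, off-the-shelf camera."""
        cap = cv2.VideoCapture(camera_index)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError("camera read failed")
        return frame

    def discolouration_score(frame: np.ndarray) -> float:
        """Fraction of pixels in a yellow/brown band; a stand-in for a trained model."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (10, 60, 60), (35, 255, 255))  # rough yellow/brown range
        return float(np.count_nonzero(mask)) / mask.size

    def run_cycle() -> None:
        score = discolouration_score(capture_frame())
        if score >= ALERT_THRESHOLD:
            print(f"ALERT: possible infestation (score={score:.2f})")  # app push in a real system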

Fermata has gained traction by prioritising farmer needs and keeping its AI training in-house, improving model accuracy. Partnering with major agricultural firms such as Bayer and Syngenta, the company has deployed over 100 cameras and continues to expand. The startup recently secured a $10 million Series A investment from Raw Ventures, its existing investor, to scale operations and work towards profitability by 2026.

Plans for growth include strengthening the sales team and expanding beyond greenhouse tomatoes into new crops. Despite AI’s previous struggles in agriculture, Fermata’s practical approach and farmer-centric model have helped it carve a niche in the industry.

Can AI really transform drug development?

The growing use of AI in drug development is dividing opinions among researchers and industry experts. Some believe AI can significantly reduce the time and cost of bringing new medicines to market, while others argue that it has yet to solve the high failure rates seen in clinical trials.

AI-driven tools have already helped identify potential drug candidates more quickly, with some companies reducing the preclinical testing period from several years to just 30 months. However, experts point out that these early successes don’t always translate to breakthroughs in human trials, where most drug failures occur.

Unlike fields such as image recognition, AI in pharmaceuticals faces unique challenges because high-quality data is scarce. Experts say AI could have a greater impact if it were focused on understanding why drugs fail in trials, such as problems with dosage, safety, and efficacy. They also recommend new trial designs that incorporate AI to better predict which drugs will succeed in later stages.

While AI won’t revolutionise drug development overnight, researchers agree it can help tackle persistent problems and streamline the process. But achieving lasting results will require better collaboration between AI specialists and drug developers to avoid repeating past mistakes.

AI and speed cameras to tackle dangerous Devon road

A notorious stretch of the A361 in Devon will receive £1 million worth of AI and speed camera technology to improve road safety. The investment, part of a £5 million grant from the Department for Transport (DfT), comes after the road was identified as ‘high risk’, with three fatalities and 30 serious injuries recorded between 2018 and 2022. AI-powered cameras will detect offences such as drivers using mobile phones and failing to wear seatbelts, while speed cameras will be installed at key locations.

A pilot scheme last August recorded nearly 1,800 potential offences along the route, highlighting the need for stricter enforcement. The latest plans include three fixed speed cameras at Ilfracombe, Knowle, and Ashford, as well as two average speed camera systems covering longer stretches of the road. AI cameras will be rotated between different locations to monitor driver behaviour more effectively.
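For context, average speed systems work differently from fixed cameras: they read a number plate at two points a known distance apart and divide distance by elapsed time, so briefly slowing at a camera doesn't mask speeding in between. A minimal illustration (the spacing, timing, and limit are invented):

    def average_speed_mph(distance_miles: float, seconds_elapsed: float) -> float:
        """Average speed between two linked cameras, from plate timestamps."""
        return distance_miles / (seconds_elapsed / 3600.0)

    # A car covering the 2.0 miles between cameras in 96 seconds averaged 75 mph,
    # which would be flagged on, say, a 60 mph stretch.
    print(average_speed_mph(2.0, 96.0))  # 75.0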

Councillor Stuart Hughes, Devon County Council’s cabinet member for highways, expressed pride in the region’s adoption of AI for road safety improvements. The remaining £4 million from the DfT grant will be allocated to upgrading junctions and improving access for pedestrians and cyclists along the A361.

Faculty AI develops AI for military drones

Faculty AI, a consultancy with significant experience in artificial intelligence, has been developing AI technologies for both civilian and military applications. Known for its close work with the UK government on AI safety, as well as with the NHS and in education, Faculty is also exploring the use of AI in military drones. The company has been involved in testing AI models for the UK’s AI Safety Institute (AISI), which was established to assess the risks posed by advanced AI.

While Faculty has worked extensively with AI in non-lethal areas, its work on military applications raises concerns because of the potential for autonomous systems in weapons, including drones. Faculty has not disclosed whether its AI work extends to lethal drones, and it continues to face scrutiny over its dual role of advising the government on AI safety while working with defence clients.

The company’s growing influence across the public and private sectors has also generated controversy. Critics, including Green Party members, have raised concerns about potential conflicts of interest arising from Faculty’s widespread government contracts and its private-sector AI work, such as its collaborations with OpenAI and defence firms. Faculty’s work on AI safety is seen as crucial, but critics argue that such a broad portfolio risks biasing the advice it provides.

Despite these concerns, Faculty maintains that its work is guided by strict ethical policies, and it has emphasised its commitment to ensuring AI is used safely and responsibly, especially in defence applications. As AI continues to evolve, experts call for caution, with discussions about the need for human oversight in the development of autonomous weapons systems growing more urgent.

China unveils Rotunbot RT-G: A groundbreaking advancement in robotic policing technology

China has introduced a striking addition to its law-enforcement toolkit: the Rotunbot RT-G, a spherical robot designed to aid police in high-speed chases and across challenging terrain. Developed by Logon Technology, the 125 kg (276 lb) robot can travel at up to 35 km/h (22 mph) on both land and water, navigate mud and rivers, and even withstand drops from ledges. Its rapid acceleration and amphibious capabilities make it a unique asset for pursuit scenarios.

Equipped with advanced technology, the RT-G boasts GPS for precise navigation, cameras, ultrasonic sensors, and systems for tracking and avoiding obstacles. Gyroscopic self-stabilisation ensures smooth operation, while a suite of non-lethal tools—including tear gas dispensers, net shooters, and acoustic crowd dispersal devices—enables it to handle diverse law enforcement tasks humanely and effectively.
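Logon Technology hasn't described the RT-G's control design, but ‘gyroscopic self-stabilisation’ generally means a closed feedback loop that measures tilt and commands a correcting torque; the generic PID sketch below illustrates the idea (the gains and readings are invented, and nothing here is specific to the RT-G):

    # Generic closed-loop stabilisation: a PID controller drives measured tilt
    # back toward upright. All gains and values are illustrative.
    class PID:
        def __init__(self, kp: float, ki: float, kd: float):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error: float, dt: float) -> float:
            """Return a corrective torque command for the current tilt error."""
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=8.0, ki=0.5, kd=1.2)
    tilt_error = 0.0 - 0.3  # target upright (0 rad) minus measured tilt (0.3 rad)
    torque = controller.update(tilt_error, dt=0.01)  # negative torque pushes back upright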

The RT-G is already in use in Wenzhou, in China’s Zhejiang province, where it assists police in commercial zones. While its real-world performance shows promise, limitations such as instability during turns and difficulty navigating stairs leave room for improvement. Despite these challenges, the Rotunbot RT-G represents a significant step forward in robotic policing, blending innovation with practicality.

Apheris revolutionises data privacy and AI in life sciences with federated computing

Privacy and regulatory concerns have long limited the data available to AI, especially in sensitive fields like healthcare and life sciences. Apheris, a German startup co-founded by Robin Röhm, aims to solve this problem with federated computing: a decentralised approach in which AI models are trained where the data lives, so sensitive records never move.
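Apheris hasn't published its training stack, but the core federated idea is easy to illustrate: each site runs training steps on data it never shares, and only model weights travel to a central aggregator. Below is a minimal NumPy sketch of federated averaging over two simulated ‘sites’ (the linear model and data are invented for illustration):

    # Minimal federated averaging: sites train locally on private data; the
    # server only ever sees (and averages) model weights, never raw records.
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One site's gradient steps on its own data (the data never leaves)."""
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    def federated_round(global_w, sites):
        """Server aggregates the sites' updated weights into a new global model."""
        return np.mean([local_update(global_w, X, y) for X, y in sites], axis=0)

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    sites = []
    for _ in range(2):  # two 'hospitals', each with its own private dataset
        X = rng.normal(size=(100, 2))
        sites.append((X, X @ true_w + rng.normal(scale=0.1, size=100)))

    w = np.zeros(2)
    for _ in range(20):
        w = federated_round(w, sites)
    print(w)  # converges near true_w without ever pooling the raw data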

The company’s approach is gaining traction among prominent clients like Roche and hospitals, and its technology is already being used in collaborative drug discovery efforts by pharmaceutical giants such as Johnson & Johnson and Sanofi. Apheris recently secured $8.25 million in Series A funding led by OTB Ventures and eCAPITAL, bringing its total funding to $20.8 million.

The funding follows a pivotal shift in 2023 to focus on the needs of data owners in the pharmaceutical and life sciences sectors. The pivot has paid off: the company’s revenue has quadrupled since the launch of its redefined product, the Apheris Compute Gateway, which securely bridges local data and AI models.

With its new funding, Apheris plans to expand its team and refine its AI-driven solutions for complex challenges like protein prediction. By prioritising data security and privacy, the company aims to unlock previously inaccessible data for innovation, addressing a core barrier to AI’s transformative potential in life sciences.

Debate over AI regulation intensifies amidst innovation and safety concerns

In recent years, debates over AI have intensified, oscillating between catastrophic warnings and optimistic visions. Technologists, once at the forefront of calling for caution, have been overshadowed by the tech industry’s emphasis on generative AI’s lucrative potential.

Dismissed as ‘AI doomers,’ critics warn of existential threats—from mass harm to societal destabilisation—while Silicon Valley champions the transformative benefits of AI, urging fewer regulations to accelerate innovation. 2023 marked a pivotal year for AI awareness, with luminaries like Elon Musk and over 1,000 experts calling for a development pause, citing profound risks.

US President Biden’s AI executive order aimed to safeguard Americans, and regulatory discussions gained mainstream traction. However, 2024 saw this momentum falter as investment in AI skyrocketed and safety-focused voices dwindled.

High-profile debates, like California’s SB 1047—a bill addressing catastrophic AI risks—ended in a veto, highlighting resistance from powerful tech entities. Critics argued that such legislation stifled innovation, while proponents lamented the lack of long-term safety measures.

Amid this tug-of-war, optimistic narratives, like Marc Andreessen’s essay ‘Why AI Will Save the World,’ gained prominence. Advocating rapid, unregulated AI development, Andreessen and others argued this approach would bolster competitiveness and prevent monopolisation.

Yet, detractors questioned the ethics of prioritising profit over societal concerns, especially as cases like AI-driven child safety failures underscored emerging risks.

Why does it matter?

Looking ahead to 2025, the AI safety movement faces an uphill battle. Policymakers hint at revisiting stalled regulations, signalling hope for progress. However, with influential players opposing stringent oversight, the path to balanced AI governance remains uncertain. As society grapples with AI’s rapid evolution, the challenge lies in addressing its vast potential and inherent risks.