AI and speed cameras to tackle dangerous Devon road
A notorious stretch of the A361 in Devon will receive £1 million in AI and speed camera technology to improve road safety. The investment, part of a £5 million grant from the Department for Transport (DfT), comes after the road was identified as ‘high risk,’ with three fatalities and 30 serious injuries recorded between 2018 and 2022. AI-powered cameras will detect offences such as drivers using mobile phones and failing to wear seatbelts, while speed cameras will be installed at key locations.
A pilot scheme last August recorded nearly 1,800 potential offences along the route, highlighting the need for stricter enforcement. The latest plans include three fixed speed cameras at Ilfracombe, Knowle, and Ashford, as well as two average speed camera systems covering longer stretches of the road. AI cameras will be rotated between different locations to monitor driver behaviour more effectively.
Councillor Stuart Hughes, Devon County Council’s cabinet member for highways, expressed pride in the region’s adoption of AI for road safety improvements. The remaining £4 million from the DfT grant will be allocated to upgrading junctions and improving access for pedestrians and cyclists along the A361.
Apple faces backlash over AI-generated news errors
Apple is facing mounting criticism over its AI-generated news summaries, which have produced inaccurate and misleading alerts on its latest iPhones. Media organisations, including the BBC, have raised concerns that the feature, designed to summarise breaking news notifications, has fabricated details that contradict original reports. The National Union of Journalists and Reporters Without Borders have called for the product’s removal, warning it risks spreading misinformation at a time when trust in news is already fragile.
High-profile errors have fuelled demands for urgent action. In December, an Apple AI summary falsely claimed that a murder suspect had taken his own life, while another inaccurately announced Luke Littler as the winner of the PDC World Darts Championship before the event had even begun. Apple has pledged to update the feature to make it clearer that summaries are AI-generated, but critics argue this does not address the root problem.
Journalism watchdogs and industry experts have warned that AI-driven news aggregation remains unreliable. The BBC stressed that the errors could undermine public trust, while former Guardian editor Alan Rusbridger described Apple’s technology as “out of control”. Similar concerns have been raised over generative AI tools from other tech firms, with Google’s AI-powered search summaries also facing scrutiny for producing incorrect responses. Apple insists the feature remains optional and is still in beta testing, with further improvements expected in an upcoming software update.
AI model Aitana takes social media by storm
In Barcelona, a pink-haired 25-year-old named Aitana has captivated social media with her stunning images and relatable personality. But Aitana isn’t a real person—she’s an AI model created by The Clueless Agency. Launched during a challenging period for the agency, Aitana was designed as a solution to the unpredictability of working with human influencers. The virtual model has proven successful, earning up to €10,000 monthly by featuring in advertisements and modelling campaigns.
Aitana has already amassed over 343,000 Instagram followers, with some celebrities unknowingly messaging her for dates. Her creators, Rubén Cruz and Diana Núñez, maintain her appeal by crafting a detailed “life,” including fictional trips and hobbies, to connect with her audience. Unlike traditional models, Aitana has a defined personality, presented as a fitness enthusiast with a determined yet caring demeanour. This strategic design, rooted in current trends, has made her a relatable and marketable figure.
The success of Aitana has sparked a new wave of AI influencers. The Clueless Agency has developed additional virtual models, including a more introverted character named Maia. Brands increasingly seek these customisable AI creations for their campaigns, citing cost efficiency and the elimination of human unpredictability. However, critics warn that the hypersexualised and digitally perfected imagery promoted by such models may negatively influence societal beauty standards and young audiences.
Despite these concerns, Aitana represents a broader shift in advertising and social media. By democratising access to influencer marketing, AI models like her offer new opportunities for smaller businesses while challenging traditional notions of authenticity and influence in the digital age.
Google tests Gemini AI against Anthropic’s Claude
Google contractors improving the Gemini AI model have been tasked with comparing its responses against those of Anthropic’s Claude, according to internal documents reviewed by TechCrunch. The evaluation process involves scoring responses on criteria such as truthfulness and verbosity, with contractors given up to 30 minutes per prompt to determine which model performs better. Notably, some of the outputs under review reportedly identified themselves as Claude, raising questions about Google’s use of its competitor’s model.
Claude, known for emphasising safety, has sometimes refused to answer prompts it deems unsafe; Gemini, by contrast, has faced criticism for safety lapses, including one instance in which it generated responses flagged for inappropriate content. Although Google is a major investor in Anthropic, Claude’s terms of service prohibit using the model to train or build competing AI systems without prior approval.
A spokesperson for Google DeepMind stated that while the company compares model outputs for evaluation purposes, it does not train Gemini using Anthropic models. Anthropic, however, declined to comment on whether Google had obtained permission to use Claude for these tests. Recent revelations also highlight contractor concerns over Gemini producing potentially inaccurate information on sensitive topics, including healthcare.
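The article does not describe Google’s internal tooling, but the kind of pairwise evaluation it reports can be sketched generically: a rater scores each model’s response on criteria such as truthfulness and verbosity, and the response with the better overall score wins the comparison. The names and the simple summed-score rule below are illustrative assumptions, not Google’s actual rubric:

```python
from dataclasses import dataclass

@dataclass
class Rating:
    """A rater's scores for one model response (hypothetical 1-5 scale)."""
    truthfulness: int  # higher = more factually accurate
    conciseness: int   # higher = appropriately brief, not verbose

def preferred(a: Rating, b: Rating) -> str:
    """Decide a pairwise comparison by summing per-criterion scores.

    Real evaluation rubrics typically weight criteria and allow
    more nuanced judgments; this is a minimal sketch.
    """
    score_a = a.truthfulness + a.conciseness
    score_b = b.truthfulness + b.conciseness
    if score_a == score_b:
        return "tie"
    return "a" if score_a > score_b else "b"
```

In practice, many such judgments per prompt are aggregated into win rates, which is one common way labs compare models head to head.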
German parties outline technology policies ahead of election
As Germany prepares for national elections on February 23, political parties are outlining their tech policy priorities, including digitalisation, AI, and platform regulation. Here’s where the leading parties stand as they finalise their programmes ahead of the vote.
The centre-right CDU, currently leading in polls with 33%, proposes creating a dedicated Digital Ministry to consolidate responsibilities currently housed in the Ministry of Transport. The party envisions broader use of AI and cloud technology in German industry while simplifying citizen interactions with authorities through digital accounts.
Outgoing Chancellor Olaf Scholz’s SPD, polling at 15%, focuses on reducing dependence on US and Chinese tech platforms by promoting European alternatives. The party also prioritises faster digitalisation of public administration and equitable rules for regulating AI and digital platforms, echoing EU-wide goals of tech sovereignty and security.
The Greens, with 14% support, highlight the role of AI in reducing administrative workloads amid labour shortages. They stress the need for greater interoperability across IT systems and call for an open-source strategy to modernise Germany’s digital infrastructure, warning that the country lags behind EU digitalisation targets.
The far-right AfD, projected to secure 17%, opposes EU platform regulations like the Digital Services Act and seeks to reverse Germany’s adoption of the NetzDG law. The party argues these measures infringe on free speech and calls for transparency in funding non-state actors and NGOs involved in shaping public opinion.
The parties’ contrasting visions set the stage for significant debates on the future of technology policy in Germany.
ByteDance sues former intern for $1.1 million
ByteDance, the parent company of TikTok, has filed a $1.1 million lawsuit against former intern Tian Keyu, alleging deliberate sabotage of its AI model training infrastructure. The rare legal action, filed in a Beijing court, has attracted significant attention in China amid an intense race in AI development.
According to ByteDance, Tian manipulated code and made unauthorised modifications, disrupting its large language model (LLM) training tasks. While the company dismissed rumours of damages involving millions of dollars and thousands of graphics processing units, it confirmed that the intern was terminated in August.
The case underscores the growing stakes in generative AI, where technologies capable of creating text and images are advancing rapidly. ByteDance declined to comment further, while Tian, reportedly a postgraduate at Peking University, has yet to respond publicly. This lawsuit highlights the high-pressure environment of AI innovation and the risks companies face from internal threats.
Microsoft rejects AI training allegations
Microsoft has refuted allegations that it uses data from its Microsoft 365 applications, including Word and Excel, to train AI models. These claims surfaced online, with users pointing to the need to opt out of the ‘connected experiences’ feature as a possible loophole for data usage.
A Microsoft spokesperson stated categorically that customer data from both consumer and commercial Microsoft 365 applications is not utilised to train large language models. The spokesperson clarified in an email to Reuters that such suggestions were ‘untrue.’
The company explained that the ‘connected experiences’ feature is designed to support functionalities like co-authoring and cloud storage, rather than contributing to AI training. These assurances aim to address user concerns over potential misuse of their data.
Ongoing discussions on social media underscore persistent public worries about privacy and data security in AI development. Questions about data usage policies continue to highlight the need for transparency from technology companies.
AI tools transform daily life for the visually impaired
AI is transforming daily life for visually impaired individuals like Louise Plunkett, who has Stargardt disease, a condition causing progressive vision loss. Apps like “Be My AI” use ChatGPT to generate detailed descriptions of images, helping users identify everyday items, read packaging, and navigate spaces. While Plunkett praises its convenience, she notes that its descriptions can sometimes be overly detailed.
Developed by the Danish firm Be My Eyes, the app initially relied on human volunteers to describe visual elements over video calls. Now, its AI-driven features are expanding, with users increasingly turning to it for tasks such as analysing WhatsApp images. The company envisions future applications like live-streamed AI assistance to describe surroundings in real time.
Other innovations include the AI-powered WeWalk cane, which offers navigation, obstacle detection, and public transit updates through voice commands. Advocates like Robin Spinks of the Royal National Institute of Blind People emphasise AI’s potential to revolutionise accessibility, offering tools that make life easier for those with vision impairments. Despite some scepticism, many find the technology invaluable.