Somerset to introduce AI cameras for road safety

Authorities in South West England are set to introduce new AI cameras on the A361 near Frome, Somerset, in a bid to reduce road deaths after a rise in serious crashes. The technology will be used to detect speeding, mobile phone use, and seatbelt violations. Nine people have died on this road in less than two years.

Avon and Somerset Police have already taken action, deploying unmarked cars and speed detection equipment on the A361. Since the start of 2023, there have been 22 serious or fatal crashes along the route. Officials aim to improve public confidence in road safety measures.

The parents of two sisters killed in a high-speed crash on the A361 have criticised the lack of action. They believe better speed controls could have prevented the deaths of Madison and Liberty North, aged 21 and 17, who died in July 2022.

Local authorities, led by MP Anna Sabine, are also planning further safety measures. These include improving road signage, enhancing visibility, and urging drivers to adopt safer behaviours when navigating these fast A-roads.

Tanzania embraces AI to tackle rising cybercrime

Tanzanian President Samia Suluhu Hassan has called for the integration of AI into the strategies of the Tanzania Police Force to address the escalating threat of cybercrime. Speaking at the 2024 Annual Senior Police Officers’ Meeting and the 60th Anniversary of the Tanzania Police Force, President Samia emphasised that in today’s digital age, leveraging advanced technology is crucial for combating online threats effectively, and that the police must adapt technologically to stay ahead of sophisticated cybercriminals.

In her address, President Samia also drew attention to a troubling surge in cybercrime, with incidents increasing by 36.1% from 2022 to 2023. She noted that crimes such as fraud, false information dissemination, pornography distribution, and harassment have become more prevalent, with offenders frequently operating from outside Tanzania. Her remarks underscore the urgency of equipping the police with advanced technological tools to counter these growing threats.

Furthermore, President Samia emphasised the need to maintain peace and stability during the upcoming local government and general elections. She tasked the police with managing election-related challenges, including defamatory statements and misinformation, without resorting to internet shutdowns. Stressing the importance of preserving national peace amid political activity, she noted that while elections are temporary, a stable environment is essential for ongoing development and progress.

Mistral AI lowers prices and launches free developer features

Mistral AI has launched a new free tier for developers to fine-tune and test apps using its AI models, and has significantly reduced prices for API access to those models, the startup announced on Tuesday. The Paris-based company, valued at $6 billion, is introducing these updates to remain competitive with industry giants such as OpenAI and Google, which also offer limited free tiers for developers. Mistral’s free tier, accessible through its platform ‘la Plateforme’, enables developers to test its AI models at no cost, although paid access is required for commercial production.
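
For developers experimenting with the free tier, access runs through the same REST API as paid plans. Below is a minimal sketch of a chat completion call against la Plateforme, assuming an API key in the MISTRAL_API_KEY environment variable and the ‘mistral-large-latest’ model alias; the request shape follows Mistral’s published chat completions format.

```python
import os
import requests

# Minimal sketch of a chat completion request to Mistral's API.
# Assumes a la Plateforme API key in the MISTRAL_API_KEY env var.
API_URL = "https://api.mistral.ai/v1/chat/completions"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-latest",  # alias for the largest model
        "messages": [
            {"role": "user", "content": "Summarise C2PA in one sentence."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```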

Mistral has reduced the prices of its AI models, including Mistral NeMo and Codestral, by over 50% and cut the cost of its largest model, Mistral Large, by 33%. This decision reflects the increasing commoditisation of AI models in the developer space, with providers vying to offer more advanced tools at lower prices.

Mistral has integrated image processing into its consumer AI chatbot, le Chat, through its new multimodal model, Pixtral 12B. This model allows users to scan, analyse, and search image files alongside text, marking another advancement in the startup’s expanding AI capabilities.
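
The multimodal call differs only in the message payload, where the content field becomes a list of typed parts. A sketch follows, in which the ‘pixtral-12b-2409’ model identifier and the image_url part are assumptions based on Mistral’s published vision API format:

```python
# Sketch of a Pixtral message payload: content becomes a list of typed parts.
# The model name and field names are assumptions from Mistral's vision docs.
payload = {
    "model": "pixtral-12b-2409",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this chart show?"},
            {"type": "image_url", "image_url": "https://example.com/chart.png"},
        ],
    }],
}
# POST this to https://api.mistral.ai/v1/chat/completions as in the sketch above.
```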

Slack to transform into AI-powered work operating system

Slack is undergoing a major transformation as it integrates AI features into its platform, aiming to evolve from a simple messaging service to a ‘work operating system.’ CEO Denise Dresser said Slack will now serve as a hub for AI applications from companies like Salesforce, Adobe, and Anthropic. New, pricier features include AI-generated summaries of conversations and the ability to interact with AI agents for tasks such as data analysis, web searches, and image generation.

This shift follows Salesforce’s 2021 acquisition of Slack and its broader move toward AI-driven solutions. Slack’s AI integration seeks to enhance productivity by offering tools to catch up on team discussions, analyse business data, and create branded content, all within the chat environment. However, questions remain about whether users will embrace and pay for these premium features and how this change aligns with Slack’s core identity as a workplace communication tool.

Concerns around data privacy have also surfaced as Slack leans further into AI. The company faced criticism earlier this year over policy language suggesting customer data could be used for training purposes, but it maintains that it does not use customer messages to train its AI models. As Slack continues integrating AI, it must address growing scepticism around how data is managed and safeguarded.

New Google update will identify AI-edited images

Google is planning to roll out new features that will identify AI-generated or AI-edited images in search results. This update will highlight such images in the ‘About this image’ section across Google Search, Google Lens, and the Circle to Search feature on Android. In the future, this disclosure feature may also be extended to other Google platforms like YouTube.

To achieve this, Google will utilise C2PA metadata developed by the Coalition for Content Provenance and Authenticity. This metadata tracks an image’s history, including its creation and editing process. However, the adoption of C2PA standards is limited, and metadata can be altered or removed, which may impact the reliability of this identification method.
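
C2PA manifests travel with the file itself; in a JPEG, for instance, they are stored as JUMBF data inside APP11 segments. The sketch below is a minimal heuristic presence check along those lines, not a full validator, which would also need to parse the JUMBF boxes and verify the manifest’s signature chain.

```python
import struct

def has_c2pa_segment(path: str) -> bool:
    """Heuristic: scan a JPEG's APP11 segments for C2PA JUMBF data."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                 # SOS: entropy-coded data begins
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:   # APP11 with C2PA label
            return True
        i += 2 + length                    # jump to the next marker segment
    return False

print(has_c2pa_segment("photo.jpg"))
```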

Despite the challenges, Google’s action addresses the increasing concerns about deepfakes and AI-generated content. There have been reports of a significant rise in scams involving such content, and losses related to deepfakes are expected to increase dramatically in the coming years. As public concern about deepfakes and AI-driven misinformation grows, Google’s initiative aims to provide more transparency in digital media.

AI-powered fact-checking tech in development by NEC

Japanese technology company NEC is developing an AI technology designed to analyse and verify the trustworthiness of online information. The project, launched under Japan’s Ministry of Internal Affairs and Communications, aims to help combat false and misleading content on the internet. The system will be tested by fact-checking organisations, including the Japan Fact-check Center and major media outlets, with the goal of making it widely available by 2025.

The AI uses large language models (LLMs) to assess different types of content, such as text, images, video, and audio, detecting whether they have been manipulated or are misleading. The system then evaluates the information’s reliability, checking for inconsistencies and verifying sources, and compiles its findings into reports. These reports allow for user-driven adjustments, such as removing unreliable information or adding new details, helping organisations streamline their verification processes.
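
NEC has not published implementation details, but the workflow described, in which claims are assessed and the results compiled into editable reports, can be outlined. The sketch below is purely illustrative: the prompt, verdict schema, and call_llm hook are invented for the example, not NEC’s design.

```python
import json
from dataclasses import dataclass
from typing import Callable

# Hypothetical illustration of an LLM-based fact-checking pass; the
# prompt, schema, and call_llm hook are invented, not NEC's design.
PROMPT = """Assess the claim below. Reply with JSON only:
{{"verdict": "supported|refuted|unverifiable", "inconsistencies": [...], "sources_needed": [...]}}
Claim: {claim}"""

@dataclass
class Report:
    claim: str
    verdict: str
    inconsistencies: list
    sources_needed: list

def check_claim(claim: str, call_llm: Callable[[str], str]) -> Report:
    """Run one claim through an LLM and parse its structured verdict."""
    raw = call_llm(PROMPT.format(claim=claim))
    parsed = json.loads(raw)
    return Report(claim, parsed["verdict"],
                  parsed["inconsistencies"], parsed["sources_needed"])

# Editors can then adjust the resulting report, e.g. dropping unreliable
# items or adding sources, before it re-enters the verification workflow.
```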

As the project progresses, NEC hopes to refine its AI system to assist fact-checkers more effectively, ensuring that false information can be identified and addressed in real time. The technology could become a vital tool for media and fact-checking organisations, addressing the growing problem of misinformation online.

Facebook and Instagram data to power Meta’s AI models

Meta Platforms will soon start using public posts on Facebook and Instagram to train its AI models in the UK. The company had paused its plans after regulatory concerns from the Irish privacy regulator and Britain’s Information Commissioner’s Office (ICO). The AI training will involve content such as photos, captions, and comments but will exclude private messages and data from users under 18.

Meta faced privacy-related backlash earlier in the year, leading to its decision to halt the AI model launch in Europe. The company has since engaged with UK regulators, resulting in a clearer framework that allows the AI training plans to proceed. The new strategy simplifies the way users can object to their data being processed.

From next week, Facebook and Instagram users in the UK will receive in-app notifications explaining how their public posts may be used for AI training, along with instructions on how to object to the use of their data. Meta has extended the window in which objections can be filed, aiming to address transparency concerns raised by both the ICO and advocacy groups.

In June, Meta’s AI plans faced opposition from privacy advocacy groups like NOYB, which urged regulators to intervene. These groups argued that Meta’s notifications did not fully meet the EU’s privacy and transparency standards. Meta’s latest updates are seen as an effort to align with these regulatory demands.

Experts warn of AI dangers in Oprah Winfrey special

Oprah Winfrey aired a special titled ‘AI and the Future of Us’, featuring guests including OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and FBI Director Christopher Wray. The discussion focused largely on the potential risks and ethical concerns surrounding AI. Winfrey highlighted the need for humanity to adapt to AI’s rapid development, while Altman emphasised the importance of safety regulations.

Altman defended AI’s learning capabilities but acknowledged the need for government involvement in safety testing. However, his company has opposed California’s AI safety bill, which experts believe would provide essential safeguards. He also discussed the dangers of deepfakes and urged caution as AI technology advances.

Wray pointed out AI’s role in rising cybercrimes like sextortion and disinformation. He warned of its potential to be exploited for election interference, urging the public to remain vigilant in the face of increasing AI-generated content.

Offering balance, Bill Gates expressed optimism about AI’s potential in education and healthcare. He envisioned AI improving medical transcription and classroom learning, though concerns about bias and misuse remain.