Pocket FM taps AI tools to expand content library and boost quality

India-based audio platform Pocket FM is leveraging AI to enrich its content offerings and scale its production capabilities. Despite hosting over 200,000 hours of content, CEO Rohan Nayak emphasised the need for deeper genre coverage and original content. The company has partnered with ElevenLabs to convert written stories into audio series, achieving faster production and significant cost savings. AI models are also being used to adapt stories for diverse regions by handling cultural nuances, ensuring broader appeal across geographies.

Pocket FM is testing AI tools to enhance its creative process. These include a writing assistant that provides alternative plot ideas and insights based on platform data, aiming to empower solo writers with a ‘writer’s room’ experience. A ‘blockbuster engine’ is under development to analyse trends and identify potential hit shows, underscoring the platform’s focus on producing popular content. AI has already contributed to more than 40,000 series on the platform, generating $3 million in revenue.

Despite the benefits, Pocket FM acknowledges challenges in maintaining quality while accelerating production. Industry experts caution that reliance on AI might undermine creativity, with artists needing to ensure authenticity in their work. Nayak affirmed that AI tools are intended to complement rather than replace human creativity. Pocket FM, backed by $197 million in funding, competes with platforms such as Audible and Kuku FM while striving to strike a balance between innovation and content excellence.

Microsoft rejects AI training allegations

Microsoft has refuted allegations that it uses data from its Microsoft 365 applications, including Word and Excel, to train AI models. These claims surfaced online, with users pointing to the need to opt out of the ‘connected experiences’ feature as a possible loophole for data usage.

A Microsoft spokesperson stated categorically that customer data from both consumer and commercial Microsoft 365 applications is not utilised to train large language models. The spokesperson clarified in an email to Reuters that such suggestions were 'untrue'.

The company explained that the ‘connected experiences’ feature is designed to support functionalities like co-authoring and cloud storage, rather than contributing to AI training. These assurances aim to address user concerns over potential misuse of their data.

Ongoing discussions on social media underscore persistent public worries about privacy and data security in AI development. Questions about data usage policies continue to highlight the need for transparency from technology companies.

AI partnership drives new opportunities for IMAX

IMAX is adopting AI technology to bring its original content to more global audiences. The company has partnered with Dubai-based Camb.ai to use advanced speech and translation models for content localisation. With non-English content growing in popularity, including in English-speaking markets, the initiative aims to increase accessibility and reduce costs.

Camb.ai’s AI platform, DubStudio, supports over 140 languages, including lesser-known ones. Its specialised models, Boli and Mars, ensure accurate text-to-speech translations while preserving nuances like background audio and tone. The startup’s technology has been previously deployed for live events like the Australian Open and Eurovision Sport, showcasing its ability to handle high-pressure scenarios.

IMAX plans a phased rollout of the AI localisation, starting with widely spoken languages. Early tests of Camb.ai’s technology on IMAX’s original documentaries proved promising. The company expects the collaboration to reduce translation expenses while boosting the global appeal of its immersive experiences.

Camb.ai, founded by former Apple engineer Akshat Prakash and his father, recently raised $4 million and is securing additional funding to expand its team and operations. The startup avoids controversial data scraping methods, relying instead on ethically licensed datasets and input from early partners, positioning itself as a reliable choice for AI-driven content solutions.

Victim warns of deepfake Bitcoin scams

A Brighton tradesman lost £75,000 to a fake Bitcoin scheme that used a deepfake video of Martin Lewis and Elon Musk. The kitchen fitter, Des Healey, shared his experience on BBC Radio 5 Live, revealing how AI manipulated Martin's voice and image to create a convincing endorsement. Des admitted he was lured by the promise of quick returns but later realised the devastating scam had emptied his life savings and forced him into debt.

He explained that the fraudsters, posing as financial experts, gained his trust through personalised calls and apparent success in his fake investment account. Encouraged to invest more, he took out £70,000 in loans across four lenders. Only when his son raised concerns about suspicious details, such as background music on calls, did Des begin to suspect foul play and approach the police.

Martin Lewis, Britain’s most impersonated celebrity in scams, described meeting Des as emotionally challenging. He commended Des for bravely sharing his ordeal to warn others. Martin emphasised that scams prey on urgency and secrecy, urging people to pause and verify before sharing personal or financial details.

Although two banks cancelled loans taken by Des, he still owes £26,000 including interest. Des expressed gratitude for the chance to warn others and praised Martin Lewis for his continued efforts to fight fraud. Meanwhile, Revolut reaffirmed its commitment to combating cybercrime, acknowledging the challenges posed by sophisticated scammers.

Brave combines AI chat with search features

Brave Search has unveiled an AI-powered chat feature that lets users ask follow-up questions to refine their initial search queries. This addition builds on Brave’s earlier ‘Answer with AI’ tool, which generates quick summaries for search queries. Now, users can engage further with a chat bar that appears beneath the summary, enabling deeper exploration without starting a new search.

For instance, a search for 'Christopher Nolan films' will provide an AI-generated list of his notable works. Users can then ask a follow-up question, such as 'Which actors appear most in his films?' The AI will respond with relevant information while citing its sources. Powered by a mix of open and proprietary large language models, the feature seamlessly integrates search and chat for a more versatile user experience.

Unlike Google, which offers AI summaries but lacks a follow-up chat option, Brave is bridging the gap between search engines and chatbots. Brave also emphasises privacy, ensuring that queries are not stored or used to profile users. With over 36 million searches and 11 million AI-generated responses each day, Brave is advancing its commitment to private, user-friendly innovation.

YouTube challenges TikTok with AI video feature

YouTube Shorts has rolled out a new capability in its Dream Screen feature, enabling users to create AI-generated video backgrounds. Previously limited to image generation, this update harnesses Google DeepMind’s AI video-generation model, Veo, to produce 1080p cinematic-style video clips. Creators can enter text prompts, such as ‘magical forest’ or ‘candy landscape,’ select an animation style, and receive a selection of dynamic video backdrops.

Once a background is chosen, users can film their Shorts with the AI-generated video playing behind them. This feature offers creators unique storytelling opportunities, such as setting videos in imaginative scenes or crafting engaging animated openings. In future updates, YouTube plans to let users generate stand-alone six-second video clips using Dream Screen.

The feature, available in the US, Canada, Australia, and New Zealand, distinguishes YouTube Shorts from TikTok, which currently only offers AI-generated background images. By providing tools for creating custom video backdrops, YouTube aims to cement its position as a leader in short-form video innovation.

Irish data authority seeks EU guidance on AI privacy under GDPR

The Irish Data Protection Commission (DPC) is awaiting guidance from the European Data Protection Board (EDPB) on handling AI-related privacy issues under the EU’s General Data Protection Regulation (GDPR). Data protection commissioners Des Hogan and Dale Sunderland emphasised the need for clarity, particularly on whether personal data continues to exist within AI training models. The EDPB is expected to provide its opinion before the end of the year, helping harmonise regulatory approaches across Europe.

The DPC has been at the forefront of addressing AI and privacy concerns, especially as companies like Meta, Google, and X (formerly Twitter) use EU users’ data to train large language models. As part of this growing responsibility, the Irish authority is also preparing for a potential role in overseeing national compliance with the EU’s upcoming AI Act, following the country’s November elections.

The regulatory landscape has faced pushback from Big Tech companies, with some arguing that stringent regulations could hinder innovation. Despite this, Hogan and Sunderland stressed the DPC’s commitment to enforcing GDPR compliance, citing recent legal actions, including a €310 million fine on LinkedIn for data misuse. With two more significant decisions expected by the end of the year, the DPC remains a key player in shaping data privacy in the age of AI.

AI chatbots in healthcare: Balancing potential and privacy concerns amidst regulatory gaps

Security experts are urging caution when using AI chatbots like ChatGPT and Grok for interpreting medical scans or sharing private health information. Recent trends show users uploading X-rays, MRIs, and other sensitive data to these platforms, but such actions can pose significant privacy risks. Uploaded medical images may become part of training datasets for AI models, leaving personal information exposed to misuse.

Unlike healthcare apps covered by laws like HIPAA, many AI chatbots lack strict data protection safeguards. Companies offering these services may use the data to improve their algorithms, but it’s often unclear who has access or how the data will be used. This lack of transparency has raised alarms among privacy advocates.

X owner Elon Musk recently encouraged users to upload medical imagery to Grok, his platform's AI chatbot, citing its potential to evolve into a reliable diagnostic tool. However, Musk acknowledged that Grok is still in its early stages, and critics warn that sharing such data online could have lasting consequences.