AI chatbots mimicking deceased teens spark outrage
The discovery of AI chatbots resembling deceased teenagers Molly Russell and Brianna Ghey on Character.ai has drawn intense backlash, with critics condemning the platform’s lax moderation. Character.ai, which lets users create digital personas, faced criticism after ‘sickening’ replicas of Russell, who died by suicide at 14, and Ghey, who was murdered in 2023, appeared on the platform. The Molly Rose Foundation, a charity named in Russell’s memory, described the chatbots as a ‘reprehensible’ failure of moderation.
Concerns about the platform’s handling of sensitive content have already led to legal action in the US, where a mother is suing Character.ai after claiming her 14-year-old son took his own life following interactions with a chatbot. Character.ai insists it prioritises safety and actively moderates avatars in line with user reports and internal policies. Even so, the Russell and Ghey chatbots were removed only after the company was alerted to them; Character.ai says it strives to protect users but acknowledges the challenges of regulating AI.
Amidst rapid advancements in AI, experts stress the need for regulatory oversight of platforms hosting user-generated content. Andy Burrows, head of the Molly Rose Foundation, argued stronger regulation is essential to prevent similar incidents, while Brianna Ghey’s mother, Esther Ghey, highlighted the manipulation risks in unregulated digital spaces. The incident underscores the emotional and societal harm that can arise from unsupervised AI-generated personas.
The case has sparked wider debates over the responsibilities of companies like Character.ai, which states it bans impersonation and dangerous content. Despite automated tools and a growing trust and safety team, the platform faces calls for more effective safeguards. AI moderation remains an evolving field, but recent cases have underscored the pressing need to address risks linked to online platforms and user-created chatbots.
AI startup Sierra hits $4.5 billion valuation
Sierra, a young AI software startup co-founded by former Salesforce co-CEO Bret Taylor, has secured $175 million in new funding led by Greenoaks Capital. This latest round gives the company a valuation of $4.5 billion, a significant jump from its earlier valuation of nearly $1 billion. Investors such as Thrive Capital, Iconiq, Sequoia, and Benchmark have also backed the firm.
Founded just a year ago, Sierra has already crossed $20 million in annualised revenue, focusing on selling AI-powered customer service chatbots to enterprises. It works with major clients, including WeightWatchers and Sirius XM. The company claims its technology reduces ‘hallucinations’ in large language models, ensuring reliable AI interactions for businesses.
The rising valuation reflects investor enthusiasm for AI applications that generate steady revenue, as attention shifts from expensive foundation models to enterprise solutions. Sierra operates in a competitive space, facing rivals such as Salesforce and Forethought, but aims to stand out through more dependable AI performance.
Bret Taylor, who also chairs OpenAI’s board, co-founded Sierra alongside former Google executive Clay Bavor. Taylor previously held leadership roles at Salesforce and chaired Twitter’s board during its takeover by Elon Musk. Bavor, who joined Google in 2005, played key roles managing Gmail and Google Drive.
UK man sentenced to 18 years for using AI to create child sexual abuse material
In a landmark case for AI and criminal justice, a UK man has been sentenced to 18 years in prison for using AI to create child sexual abuse material (CSAM). Hugh Nelson, 27, from Bolton, used an app called Daz 3D to turn regular photos of children into exploitative 3D imagery, according to reports. In several cases, he created these images based on photographs provided by individuals who personally knew the children involved.
Nelson sold the AI-generated images on various online forums, reportedly making around £5,000 (roughly $6,500) over an 18-month period. His activities were uncovered when he attempted to sell one of his digital creations to an undercover officer, charging £80 (about $100) per image.
Following his arrest, Nelson faced multiple charges, including encouraging the rape of a child, attempting to incite a child to engage in sexual activity, and distributing illegal images. The case is significant because it highlights the dark side of AI misuse and underscores the growing need for regulation of technology-enabled abuse.
US prosecutors intensify efforts to combat AI-generated child abuse content
US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.
Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.
The legal framework is still evolving regarding cases involving AI-generated abuse, particularly when identifiable children are not depicted. Prosecutors are resorting to obscenity charges when traditional child pornography laws do not apply. This is evident in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos into abusive content. Both defendants have pleaded not guilty.
Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.