Musk introduces Aurora image generator to X

Elon Musk’s social media platform X has introduced Aurora, an advanced image generation tool integrated into its Grok AI assistant. Aurora allows users to create photorealistic visuals and explore imaginative concepts. However, some users noted the tool briefly disappeared after its launch.

Aurora, accessible through X’s mobile and web apps, appears to have minimal content restrictions. It can generate images of public figures and copyrighted characters, though explicit and graphic content is reportedly limited. The tool is still in beta, with Musk promising rapid improvements. While Aurora excels at landscapes and still-life depictions, it struggles with more complex details, like human hands, a common challenge for AI-generated visuals.

The release follows X’s decision to make Grok free for all users, enabling broader access to AI-driven features. Meanwhile, Musk’s xAI team, which developed Aurora, recently secured $6 billion in funding and is working on further innovations, including Grok 3 and a standalone app.

Data deletion hampers OpenAI lawsuit progress

OpenAI is under scrutiny after engineers accidentally erased key evidence in an ongoing copyright lawsuit filed by The New York Times and Daily News. The publishers accuse OpenAI of using their copyrighted content to train its AI models without authorisation.

The issue arose when OpenAI provided virtual machines for the plaintiffs to search its training datasets for infringed material. On 14 November 2024, OpenAI engineers deleted the search data stored on one of these machines. While most of the data was recovered, the loss of folder structures and file names rendered the information unusable for tracing specific sources in the training process.

Plaintiffs are now forced to restart the time-intensive search, leading to concerns over OpenAI’s ability to manage its own datasets. Although the deletion is not suspected to be intentional, lawyers argue that OpenAI is best equipped to perform searches and verify its use of copyrighted material. OpenAI maintains that training AI on publicly available data falls under fair use, but it has also struck licensing deals with major publishers like the Associated Press and News Corp. The company has neither confirmed nor denied using specific copyrighted works for its AI training.

Disney launches new AI and augmented reality unit

Disney is establishing a new division, the Office of Technology Enablement, dedicated to advancing the company’s use of AI and extended reality (XR). Led by Jamie Voris, Disney’s former chief technology officer for its film studio, the unit will oversee projects across Disney’s film, television, and theme park segments to leverage these rapidly evolving technologies. This group will focus on coordinating various initiatives without centralising them, ensuring each project aligns with Disney’s broader technological strategy.

The new office, which will ultimately expand to about 100 employees, comes as Disney looks to tap into cutting-edge AI and augmented reality (AR) applications. Disney Entertainment Co-Chairman Alan Bergman emphasised the importance of exploring AI’s potential while mitigating risks, signaling Disney’s intention to create next-generation experiences for theme parks and home entertainment. Voris will be succeeded by Eddie Drake as Disney’s new film studio CTO.

Disney has been actively building expertise in AR and virtual reality (VR) as technology companies like Meta and Apple compete in the emerging AR/VR market. The company also rehired Kyle Laughlin, a specialist in these technologies, as Senior VP of Research and Development for Disney Imagineering, its theme park innovation branch. By assembling a team with expertise in advanced tech, Disney aims to create immersive, engaging experiences for its global audience.

ForceField offers new solution to combat deepfakes and AI deception

ForceField is unveiling its new technology at TechCrunch Disrupt 2024, introducing tools aimed at fighting deepfakes and manipulated content. Unlike platforms that flag AI-generated media after the fact, ForceField authenticates content directly from devices, ensuring the integrity of digital evidence. Using its HashMarq API, the startup verifies the authenticity of data streams by generating a secure digital signature in real time.

The company uses blockchain-based smart contracts to safeguard content, without relying on cryptocurrencies or web3 products. This system authenticates data collected across various platforms, from mobile apps to surveillance cameras. By tracking metadata like time, location, and surrounding signals, ForceField provides insights that aid journalists, law enforcement, and organisations in verifying the accuracy of submitted media.
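ForceField has not published HashMarq’s internals, so the function and field names below are hypothetical; still, the core idea the article describes, binding the media bytes and capture metadata into a single tamper-evident digest at the moment of capture, can be sketched in a few lines:

```python
import hashlib
import json

def fingerprint_media(media_bytes: bytes, metadata: dict) -> str:
    """Hash raw media bytes together with capture metadata.

    Any change to the recorded frames, or to the stored time and
    location, yields a different digest, so a digest saved at capture
    time can later show the file is unaltered. This is an illustrative
    sketch, not ForceField's actual scheme.
    """
    # Canonicalise metadata so dictionary key order cannot change the hash.
    meta_blob = json.dumps(metadata, sort_keys=True).encode("utf-8")
    digest = hashlib.sha256()
    digest.update(media_bytes)
    digest.update(meta_blob)
    return digest.hexdigest()

# Fingerprint a clip at capture time, then verify it later.
clip = b"\x00\x01fake-video-bytes"
meta = {"time": "2024-10-30T12:00:00Z", "location": "40.71,-74.00"}

original = fingerprint_media(clip, meta)
assert fingerprint_media(clip, meta) == original          # untouched file matches
assert fingerprint_media(clip + b"x", meta) != original   # tampered file differs
```

In a production system the digest would also be signed with a device key and anchored (for example, in a smart contract) so that the verifier does not have to trust the submitter’s copy of the hash.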

ForceField was inspired by CEO MC Spano’s personal experience in 2018, when she struggled to submit video evidence following an assault. Her frustration with the justice system sparked the creation of technology that could simplify evidence submission and ensure its acceptance. Now the startup is working with clients such as Erie Insurance and plans to launch commercially by early 2025, focusing initially on the insurance sector but with applications in media and law enforcement.

The company, which is entirely woman-led, has gained financial backing from several angel investors and strategic partnerships. Spano aims to raise a seed round by year’s end, highlighting the importance of diversity in tech leadership. As AI-generated content continues to flood the internet, ForceField’s tools offer a new way to validate authenticity and restore trust in digital information.

AI podcast revives Sir Michael Parkinson

A new podcast titled Virtually Parkinson brings back the voice of Sir Michael Parkinson, using AI technology to simulate the late chat show host. Produced by Deep Fusion Films with support from Parkinson’s family, the series aims to recreate his interview style across eight episodes, featuring new conversations with prominent guests.

Mike Parkinson, son of the late broadcaster, explained that the family wanted listeners to know the voice is an AI creation, ensuring transparency. He noted the project was inspired by conversations he had with his father before his death, saying Sir Michael would have found the concept intriguing, despite being a technophobe.

The release comes amid growing controversy around AI’s role in the creative arts, with many actors and presenters fearing it could undermine their careers. Though AI is often criticised for replacing real talent, Parkinson’s son argued that the podcast offers a unique way to extend his father’s legacy, without replacing a living presenter.

Co-creator Jamie Anderson clarified that the AI version acts as an autonomous host, conducting interviews in a way reflective of Sir Michael’s original style. The podcast seeks to introduce his legacy to younger audiences, while also raising ethical questions about the use of AI to recreate deceased individuals.

Universal Music aims for ethical AI in new KLAY partnership

Universal Music Group (UMG) has announced a partnership with Los Angeles-based AI music company KLAY Vision to create AI tools designed with an ethical framework for the music industry. According to Universal, the initiative focuses on exploring new opportunities for artists and creating safeguards to protect the music ecosystem as AI continues to evolve in creative spaces. Michael Nash, Universal’s chief digital officer, emphasised the importance of ethical AI use for artists’ rights in a rapidly changing industry.

The collaboration comes as Universal Music faces ongoing legal battles with other AI companies, including Anthropic, Suno, and Udio, over the use of its recordings in training music-generating AI models without authorisation. These cases highlight the growing concerns surrounding AI technology’s impact on the creative sector, particularly with respect to artists’ rights and intellectual property.

With this partnership, Universal Music aims to establish AI technologies that support artists’ needs while navigating the complex ethical questions surrounding AI-generated music. By working alongside KLAY Vision, Universal hopes to shape the future of AI in music responsibly and to develop solutions that ensure fair treatment of artists and their work.

Brenda Lee’s iconic song revived in Spanish by Universal Music through AI

Universal Music Group has released a Spanish rendition of Brenda Lee’s 1958 hit ‘Rockin’ Around the Christmas Tree.’ Titled ‘Noche Buena y Navidad,’ the new version was produced using AI technology developed by SoundLabs, with approval from Lee herself and under the guidance of Latin music producer Áureo Baqueiro.

The song preserves the original instrumental and background arrangements while substituting Lee’s English vocals with newly generated Spanish vocals. These vocals were created using SoundLabs’ MicDrop, an AI-powered plug-in that replicates voices. The result aims to deliver a performance that feels as though the 13-year-old Brenda Lee recorded it in Spanish from the start.

Universal Music highlighted that the project illustrates how AI can be ethically integrated into music, with full artist consent and creative control. Recent controversies over AI-generated content in entertainment have raised questions about copyright and authenticity, making authorised projects like this one particularly noteworthy.

In June, Universal partnered with SoundLabs to develop official AI-powered vocal models for artists. This approach ensures musicians retain ownership of their voice data and maintain authority over the final output, promoting responsible use of AI in music creation.

Thousands of artists protest AI’s unlicensed use of their work

Thousands of creatives, including Kevin Bacon, Thom Yorke, and Julianne Moore, have signed a petition opposing the unlicensed use of their work to train AI. The 11,500 signatories believe that such practices threaten their livelihoods and call for better protection of creative content.

The petition argues that using creative works without permission for AI development is an ‘unjust threat’ to the people behind those works. Signatories from various industries, including musicians, writers, and actors, are voicing concerns over how their work is being used by AI companies.

British composer Ed Newton-Rex, who organised the petition, has spoken out against AI companies, accusing them of ‘dehumanising’ art by treating it as mere ‘training data’. He highlighted the growing concerns among creatives about how AI may undermine their rights and income.

The United Kingdom government is currently exploring new regulations to address the issue, including a potential ‘opt out’ model for AI data scraping, as lawmakers look for ways to protect creative content in the digital age.