Elon Musk reignites legal battle with OpenAI over non-profit to for-profit transition
Elon Musk has reignited his legal fight with OpenAI, accusing the company’s co-founders of manipulating him into investing in the nonprofit startup before turning it into a for-profit business. Musk claims they enriched themselves by draining OpenAI’s key assets and technology. OpenAI, however, has dismissed these claims, describing the lawsuit as part of Musk’s efforts to gain a competitive edge.
OpenAI, which created a for-profit subsidiary in 2019, has attracted billions in outside funding, including from Microsoft. Musk argues the company has deviated from its original mission, but OpenAI maintains it remains committed to developing safe and beneficial AI. The startup also suggested Musk’s departure came after his attempt to take control of the organisation failed.
OpenAI has had a turbulent year with leadership changes and rapid growth. The company’s headcount more than doubled, and despite losing key figures, it remains a major player in AI innovation. Recent investments pushed OpenAI’s valuation to $157 billion, underscoring continued investor confidence.
Musk’s ongoing rivalry with OpenAI coincides with his other AI ventures, including xAI, which he launched in 2023. He also faces a Delaware lawsuit in which Tesla shareholders accuse him of diverting talent and resources from Tesla to xAI, potentially harming the carmaker’s investors.
New Adobe app ensures creator credit as AI grows
Adobe announced it will introduce a free web-based app in 2025 to help creators of images and videos get proper credit for their work, especially as AI systems increasingly rely on large datasets for training. The app will enable users to affix ‘Content Credentials,’ a digital signature, to their creations, indicating authorship and even specifying whether they want their work used for AI training.
Since 2019, Adobe has been developing Content Credentials as part of a broader industry push for transparency in how digital media is created and used. TikTok has already committed to using these credentials to label AI-generated content. However, major AI companies have yet to adopt Adobe’s system, though Adobe continues to advocate for industry-wide adoption.
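To make the idea concrete, here is a minimal, purely illustrative Python sketch of the kind of provenance record such a system might attach to a file: an asserted author, the tool used, a training preference, and a hash tying the record to the exact content. The field names and structure are invented for this sketch and do not reflect Adobe’s actual Content Credentials (C2PA) manifest format or APIs.

from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass
class ProvenanceRecord:
    creator: str             # asserted author of the work
    created_with: str        # tool or app that attached the credential
    allow_ai_training: bool  # creator's stated preference on AI training use
    content_sha256: str      # hash binding the record to one specific file

def build_record(content: bytes, creator: str, allow_ai_training: bool) -> dict:
    # Hash the content so the record can later be checked against the file it describes.
    digest = hashlib.sha256(content).hexdigest()
    record = ProvenanceRecord(
        creator=creator,
        created_with="example-credentials-app",  # hypothetical tool name
        allow_ai_training=allow_ai_training,
        content_sha256=digest,
    )
    return asdict(record)

if __name__ == "__main__":
    fake_image = b"\x89PNG...example bytes..."
    print(json.dumps(build_record(fake_image, "Jane Doe", allow_ai_training=False), indent=2))

In a real system the record would be cryptographically signed and embedded in the file’s metadata rather than printed as JSON; the sketch only shows the shape of the information a creator would be asserting.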
The initiative comes as legal battles over AI data use intensify, with publishers like The New York Times suing OpenAI. Adobe sees this tool as a way to protect creators and promote transparency, as highlighted by Scott Belsky, Adobe’s chief strategy officer, who described it as a step towards preserving the integrity of creative work online.
Paul McCartney returns with AI-aided Beatles song on new tour
Sir Paul McCartney has announced his return to the stage with the ‘Got Back’ tour, featuring a highly anticipated performance of the last Beatles song, Now and Then. The song, which includes vocals from the late John Lennon, was completed with the help of AI technology and marks a poignant moment in Beatles history.
Now and Then was built around Lennon’s vocals from an old cassette demo, isolated and cleaned up using AI audio-separation technology. McCartney and fellow Beatle Ringo Starr worked together on the project, adding guitar parts recorded by the late George Harrison. The song, originally left unfinished in 1977, has now been brought to life, with McCartney singing alongside Lennon’s voice.
The tour will kick off in Montevideo, Uruguay, before moving through South America and Europe, with two dates at Manchester’s Co-op Live and two final shows at London’s O2 Arena in December. McCartney, who last played in the UK at Glastonbury in 2022, has expressed excitement about returning to his home country to end the tour.
Despite some complaints from Liverpool fans over the absence of a hometown gig, McCartney remains enthusiastic about his UK shows. He described the upcoming performances as a ‘special feeling’ and looks forward to closing out the year with a celebration on home soil.
New Cloudflare marketplace to help websites profit from AI scraping
Cloudflare is launching a marketplace that will let websites charge AI companies for scraping their content, aiming to give smaller publishers more control over how AI models use their data. AI companies crawl thousands of websites to gather training data, often without compensating the content creators, a practice that could threaten the business models of many smaller sites. The marketplace, launching next year, will allow website owners to negotiate deals with AI model providers, charging them based on how often they scrape a site or setting their own terms.
Cloudflare’s launch of AI Audit is a significant step towards giving website owners better control over AI bot activity on their sites. By providing detailed analytics on which AI bots access their content, the tool lets site owners make informed decisions about managing bot traffic. The ability to block specific bots while allowing others helps curb unwanted scraping, which can degrade performance and drive up operational costs. The tool could prove especially useful for businesses and content creators who rely on their online presence and want to safeguard their resources.
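As a rough illustration of what ‘block some AI bots while allowing others’ can look like in practice, the Python sketch below tallies requests whose User-Agent matches a few known AI crawler names and rejects those on a block list. This is not Cloudflare’s implementation; the crawler list is partial and the blocking policy is an invented example.

from collections import Counter

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot"]  # example identifiers seen in User-Agent strings
BLOCKED = {"CCBot"}                              # hypothetical policy: allow some bots, block others

def classify(user_agent: str) -> str | None:
    # Return the matching AI crawler name, or None for ordinary traffic.
    return next((bot for bot in AI_CRAWLERS if bot.lower() in user_agent.lower()), None)

def handle_request(user_agent: str, stats: Counter) -> int:
    # Return an HTTP status code: 403 for blocked AI bots, 200 otherwise.
    bot = classify(user_agent)
    if bot:
        stats[bot] += 1  # per-bot analytics, in the spirit of an AI Audit-style report
        if bot in BLOCKED:
            return 403
    return 200

if __name__ == "__main__":
    stats = Counter()
    sample_agents = [
        "Mozilla/5.0 (compatible; GPTBot/1.0)",
        "CCBot/2.0 (https://commoncrawl.org/faq/)",
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",  # a regular browser
    ]
    for ua in sample_agents:
        print(ua, "->", handle_request(ua, stats))
    print("AI bot hits:", dict(stats))

A production setup would enforce this at the edge, via firewall or bot-management rules and robots.txt directives, with an up-to-date crawler list, but the classify, count, and allow-or-block flow is the same.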
Cloudflare’s CEO, Matthew Prince, believes this marketplace will create a more sustainable system for publishers and AI companies. While some AI firms may resist paying for currently free content, Prince argues that compensating creators is crucial for ensuring the continued production of quality content. The initiative could help balance the relationship between AI companies and content creators, allowing even small publishers to profit from their data in the AI age.
Celebrity voices of John Cena and Judi Dench coming to Meta’s AI chatbot
Meta Platforms is preparing to introduce a new audio feature for its AI chatbot, allowing users to choose from the voices of five celebrities, including Judi Dench and John Cena. As part of its efforts to boost user engagement, Meta will offer the voice options across its platforms, including Facebook, Instagram, and WhatsApp.
The announcement is expected at Meta’s annual Connect conference, where the company is also set to unveil augmented-reality glasses and provide updates on its Ray-Ban Meta smart glasses. These developments reflect Meta’s push to integrate AI more deeply into everyday interactions through its various products.
Celebrity voices are set to roll out this week in the US and other English-speaking markets. Meta hopes the new feature will appeal to users seeking a more personalised experience with its AI chatbot, as the company positions itself against rivals such as Google and OpenAI.
As part of its broader AI strategy, Meta has shifted focus towards integrating celebrity voices after earlier text-based characters saw limited success. The company is committed to making its chatbot a core feature across its platforms, striving to stay ahead in the competitive AI landscape.
EU’s AI Act faces tech giants’ resistance
As the EU finalises its groundbreaking AI Act, major technology firms are lobbying for lenient regulations to minimise the risk of multi-billion dollar fines. The AI Act, agreed upon in May, is the world’s first comprehensive legislation governing AI. However, the details on how general-purpose AI systems like ChatGPT will be regulated remain unclear. The EU has opened the process to companies, academics, and other stakeholders to help draft the accompanying codes of practice, receiving a surge of interest with nearly 1,000 applications.
A key issue at stake is how AI companies, including OpenAI and Stability AI, use copyrighted content to train their models. While the AI Act requires companies to disclose summaries of the data they use, businesses are divided over how much detail to include: some advocate protecting trade secrets, while content creators and rights holders demand fuller transparency. Major players like Google and Amazon have expressed their commitment to the process, but there are growing concerns about transparency, with some accusing tech giants of trying to avoid scrutiny.
The debate over transparency and copyright has sparked a broader discussion on the balance between regulation and innovation. Critics argue that the EU’s focus on regulation could stifle technological advancements, while others stress the importance of oversight in preventing abuse. Former European Central Bank chief Mario Draghi recently urged the EU to improve its industrial policy to compete with China and the US, emphasising the need for swift decision-making and significant investment in the tech sector.
The finalised code of practice, expected next year, will not be legally binding but will serve as a guideline for compliance. Companies will have until August 2025 to meet the new standards, with non-profits and startups also playing a role in drafting. Some fear that big tech firms could weaken essential transparency measures, underscoring the ongoing tension between innovation and regulation in the digital era.
Runway partners with Lionsgate to revolutionise film-making
Runway, a generative AI startup, has announced a significant partnership with Lionsgate, the studio responsible for popular franchises such as John Wick and Twilight. This collaboration will enable Lionsgate’s creative teams, including filmmakers and directors, to utilise Runway’s AI video-generating models. These models have been trained on the studio’s film catalogue and will be used to enhance their creative work. Michael Burns, vice chair of Lionsgate, emphasised the potential for this partnership to support creative talent.
Runway is considering new opportunities, including licensing its AI models to individual creators, allowing them to create and train custom models. This partnership represents the first public collaboration between a generative AI startup and a major Hollywood studio. Although Disney and Paramount have reportedly been discussing similar partnerships with AI providers, no official agreements have been reached yet.
This deal comes at a time of increased attention on AI in the entertainment industry, due to California’s new laws that regulate the use of AI digital replicas in film and television. Runway is also currently dealing with legal challenges regarding the alleged use of copyrighted works to train its models without permission.
Senators call for inquiry into AI content summarisation
A group of Democratic senators, led by Amy Klobuchar, has called on the United States Federal Trade Commission (FTC) and the Department of Justice (DOJ) to investigate whether AI tools that summarise online content are anti-competitive. The concern is that AI-generated summaries keep users on platforms like Google and Meta, preventing traffic from reaching the original content creators, which can result in lost advertising revenue for those creators.
The senators argue that platforms profit from using third-party content to generate AI summaries, while publishers are left with fewer opportunities to monetise their work. Content creators are often forced to choose between having their work summarised by AI tools or opting out entirely from being indexed by search engines, risking significant drops in traffic.
There is also a concern that AI features can misappropriate third-party content, passing it off as new material. The senators believe that the dominance of major online platforms is creating an unfair market for advertising revenue, as these companies control how content is monetised and limit the potential for original creators to benefit.
The letter calls for regulators to examine whether these practices violate antitrust laws. The FTC and DOJ will need to determine if the behaviour constitutes exclusionary conduct or unfair competition. The push from legislators could also lead to new laws if current regulations are deemed insufficient.