Paul McCartney returns with AI-aided Beatles song on new tour
Sir Paul McCartney has announced his return to the stage with the ‘Got Back’ tour, featuring a highly anticipated performance of the last Beatles song, Now and Then. The song, which includes vocals from the late John Lennon, was completed with the help of AI technology and marks a poignant moment in Beatles history.
Now and Then was created using Lennon’s vocals from an old cassette tape, recovered and refined using AI. McCartney and fellow Beatle Ringo Starr worked together on the project, adding guitar parts from the late George Harrison. The song, originally left unfinished in 1977, has now been brought to life, with McCartney singing alongside Lennon’s voice.
The tour will kick off in Montevideo, Uruguay, before moving through South America and Europe, with two dates at Manchester’s Co-op Live and two final shows at London’s O2 Arena in December. McCartney, who last played in the UK at Glastonbury in 2022, has expressed excitement about returning to his home country to end the tour.
Despite some complaints from Liverpool fans over the absence of a hometown gig, McCartney remains enthusiastic about his UK shows. He said the final dates give him a ‘special feeling’ and that he looks forward to closing out the year with a celebration on home soil.
Google opens Gemini Nano AI to Android developers
Google’s Gemini Nano, a powerful on-device AI model, is now available for developers to integrate into their apps through the newly released AI Edge SDK. Because the model runs locally, it can handle tasks such as text summarisation and image description while keeping user data private, as everything is processed on the device.
Already featured in Google’s Pixel 9 and Samsung’s Galaxy S24 devices, Gemini Nano powers AI functionalities in apps like Pixel Recorder and Google Messages. Developers can now experiment with these tools to bring AI features to their own apps, with Google expanding access to the AI Edge SDK beyond its previous early access programme.
Currently, developers can explore text-to-text prompts, such as smart replies, proofreading, and summarisation. Google plans to add support for other modalities, like image processing, in future updates. This move will enable broader AI integration across third-party apps, offering enhanced user experiences.
By customising Gemini Nano through the AI Edge SDK, developers will have control over how AI processes information, allowing them to adapt responses to suit their app’s needs. This marks a significant step towards more AI-driven apps for Android users.
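For developers wondering what this looks like in code, below is a minimal Kotlin sketch of an on-device summarisation call. The package, class and builder names (com.google.ai.edge.aicore, GenerativeModel, generationConfig) are assumptions based on the experimental AI Edge SDK documentation and may change before general release, so treat this as an illustration rather than a definitive reference.

```kotlin
// Illustrative sketch only: package and class names are assumed from the
// experimental AI Edge SDK and may differ in the released version.
import android.content.Context
import com.google.ai.edge.aicore.GenerativeModel
import com.google.ai.edge.aicore.generationConfig

class NoteSummariser(appContext: Context) {

    // Configure the on-device model. Inference runs locally, so the text
    // being summarised never leaves the device.
    private val model = GenerativeModel(
        generationConfig {
            context = appContext      // lets the SDK reach the on-device runtime
            temperature = 0.2f        // low temperature for focused, factual output
            topK = 16
            maxOutputTokens = 256
        }
    )

    // A text-to-text prompt: summarisation, one of the use cases described above.
    suspend fun summarise(note: String): String? =
        model.generateContent("Summarise the following note in two sentences:\n$note").text
}
```

Because generateContent is a suspending call, an app would invoke it from a coroutine (for example, inside viewModelScope.launch) and fall back gracefully on devices where Gemini Nano is not available.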
AV1 robot bridges gap for children unable to attend school
Children who are chronically ill and unable to attend school can now stay connected to the classroom using the AV1 robot, developed by the Norwegian company No Isolation. The robot serves as their eyes and ears in class, allowing them to follow lessons and interact with friends remotely. Controlled via an app, it sits on a classroom desk, enabling students to rotate its view, speak to classmates, and even signal when they want to participate.
The AV1 has been especially valuable for children undergoing long-term treatment or experiencing mental health challenges, helping them maintain a connection with their peers and stay socially included. In the United Kingdom, schools can rent or purchase the AV1, and the robot has been widely adopted in both the UK and Germany, where over 1,000 units are active. For many students, it has become a lifeline during extended absences from school.
Though widely praised, the AV1 still faces logistical challenges when introduced to schools and hospitals, including administrative hurdles and technical issues such as weak Wi-Fi. Despite these obstacles, teachers and families have found the robot highly effective, with privacy protections and features tailored to students’ needs, including the option not to show their face on screen.
Research has highlighted the AV1’s potential to keep children both socially and academically connected, and No Isolation has rolled out a training resource, AV1 Academy, to support teachers and schools in using the technology effectively. With its user-friendly design and robust privacy features, the AV1 continues to make a positive impact on the lives of children facing illness and long absences from school.
Meta postpones joining EU AI Pact, focuses on compliance
Meta Platforms has announced it will not immediately join the European Union’s voluntary AI Pact, a temporary initiative ahead of the AI Act coming into force. The company is currently focusing on compliance with the forthcoming regulations set out in the act, but may sign the pact at a later stage.
The EU’s AI Act, agreed in May and adopted by the European Council, will introduce strict rules governing the development and use of artificial intelligence. Under these regulations, companies must provide detailed summaries of the data used to train their AI models. The majority of the law’s provisions will take effect from August 2026.
In the interim, the AI Pact encourages companies to voluntarily adopt some of the key requirements of the forthcoming act. Meta has expressed its support for harmonised EU regulations but is prioritising work on meeting the obligations of the AI Act.
James Cameron joins board of AI startup Stability AI
James Cameron, renowned director of films like Titanic and The Terminator, has joined the board of Stability AI, an AI startup based in London. The company, known for its AI-driven image-generation tools, is aiming to transform visual media through innovative technologies.
Stability AI’s CEO, Prem Akkaraju, highlighted the importance of Cameron’s appointment in helping the firm achieve its goal of providing creators with a comprehensive portfolio of AI tools. The company has raised significant funding, including $80 million earlier this year, and is seen as a competitor to AI tools from Google and OpenAI.
Cameron expressed excitement about how generative AI and computer-generated imagery could revolutionise storytelling, offering artists unprecedented ways to bring their ideas to life. Stability AI’s tools include Stable Video Diffusion, a text-to-video generation platform.
While the relationship between AI and Hollywood has grown closer, it has also sparked controversy. In 2023, writers and actors went on strike, pushing for protections against the unregulated use of AI in film and television production. Cameron joins other notable figures on the board, such as former Facebook president Sean Parker.
AI-written police reports spark efficiency debate
Several police departments in the United States have begun using AI to write incident reports, aiming to reduce time spent on paperwork. Oklahoma City’s police department was an early adopter of the AI-powered Draft One software, but paused its use to address concerns raised by the District Attorney’s office. The software drafts reports from bodycam footage and radio transmissions, which could speed up the reporting process but also raises legal questions.
Paul Mauro, a former NYPD inspector, noted that the technology could significantly reduce the burden on officers, who often spend hours writing various reports. However, he warned that officers must still review AI-generated reports carefully to avoid errors. The risk of inaccuracies or ‘AI hallucinations’ means oversight remains crucial, particularly when reports are used as evidence in court.
Mauro suggested that AI-generated reports could help standardise police documentation and assist in data analysis across multiple cases. This could improve efficiency in investigations by identifying patterns more quickly than manual methods. He also recommended using the technology for minor crimes while legal experts ensure compliance with regulations.
The potential for AI to transform police work has drawn comparisons to the initial resistance to bodycams, which are now widely accepted. While there are challenges, the introduction of AI in police reporting may offer long-term benefits for law enforcement, if implemented thoughtfully and responsibly.
US intelligence official says Russia is using AI to influence US election
Russia has been the most active foreign power using AI to influence the upcoming United States presidential election, according to a US intelligence official. Moscow’s efforts have focused on supporting Donald Trump and undermining Kamala Harris and the Democratic Party. Russian influence actors are employing AI-generated content, such as text, images, and videos, to spread pro-Trump narratives and disinformation targeting Harris.
In July, the US Justice Department revealed it had disrupted a Russia-backed operation that used AI-enhanced social media accounts to spread pro-Kremlin messages in the US. Russian actors also staged a video that falsely implicated Harris in a hit-and-run, according to Microsoft research. The intelligence official described Russia as a more sophisticated player than other foreign actors.
China has also been leveraging AI to shape global perceptions, though it is not focused on influencing the US election outcome. Instead, Beijing is using AI to promote divisive political issues in the US, while Iran has employed AI to generate inauthentic news articles and social media posts, targeting polarising topics such as Israel and the Gaza conflict.
Both Russia and Iran have denied interfering in the US election, with China also distancing itself from attempts to influence the voting process. However, US intelligence continues to monitor the use of AI in foreign influence operations as the November 5 election approaches.
Google CEO warns of AI divide and announces $120m education fund
Speaking at the UN Summit of the Future 2024, Google CEO Sundar Pichai described AI as the most transformative technology yet and announced a $120 million Global AI Opportunity Fund, which will provide AI education and training worldwide through partnerships with local NGOs and nonprofits.
Pichai highlighted four key areas where AI can contribute to sustainable development: language accessibility, scientific discovery, climate disaster alerts, and economic progress. He stressed the importance of harnessing AI for global advancement while addressing its risks.
He also warned of the potential for an ‘AI divide,’ in which some regions risk falling behind in access to the technology. To combat this, Pichai called for smart global regulations that mitigate harm without promoting national protectionism, which could limit the benefits of AI.
Although Pichai did not mention the environmental impacts of AI, he emphasised the need for balanced regulation to ensure equal access and opportunities for AI development worldwide.