OpenAI expands AI tools with text-to-video feature

OpenAI has rolled out its text-to-video AI model, Sora, to ChatGPT Plus and Pro users, signalling a broader push into multimodal AI technologies. Initially limited to safety testers, Sora is now available as Sora Turbo at no additional cost, allowing users to create videos of up to 20 seconds in various resolutions and aspect ratios.

The move positions OpenAI to compete with similar tools from Meta, Google, and Stability AI. While the model is accessible in most regions, it remains unavailable in EU countries, the UK, and Switzerland due to regulatory considerations. OpenAI plans to introduce tailored pricing options for Sora next year.

The company emphasised safeguards against misuse, such as blocking harmful content involving child exploitation and deepfake abuse. It also plans to gradually expand features, including uploads featuring real people, as it strengthens protections. Sora marks another step in OpenAI’s efforts to innovate responsibly in the AI space.

Palantir and Anduril team up for defence AI

Palantir Technologies and Anduril Industries have joined forces to optimise defence data for AI training. Palantir’s platform will organise and label sensitive defence data for model training, while Anduril’s systems will manage the retention and distribution of this information for national security applications.

The collaboration highlights challenges in deploying AI for defence, where sensitive data complicates model training. Anduril recently partnered with OpenAI to integrate advanced AI into security missions, underscoring its commitment to autonomous defence solutions.

Palantir, a key player in the AI boom, continues to see robust demand from governments and businesses seeking advanced software solutions.

International Red Cross sets guidelines for AI use

The International Committee of the Red Cross (ICRC) has introduced principles for using AI in its operations, aiming to harness the technology’s benefits while protecting vulnerable populations. The guidelines, unveiled in late November, reflect the organisation’s cautious approach amid growing interest in generative AI, such as ChatGPT, across various sectors.

ICRC delegate Philippe Stoll emphasised the importance of ensuring AI tools are robust and reliable to avoid unintended harm in high-stakes humanitarian contexts. The ICRC defines AI broadly as systems that perform tasks requiring human-like cognition and reasoning, extending beyond popular large language models.

Guided by its core principles of humanity, neutrality, and independence, the ICRC prioritises data protection and insists that AI tools address real needs rather than being solutions in search of a problem. That caution reflects both the risks of deploying technologies in regions poorly represented in AI training data and the lessons of a 2022 cyberattack that exposed sensitive beneficiary information.

Collaboration with academia is central to the ICRC’s strategy. Partnerships like the Meditron project with Switzerland’s EPFL focus on AI for clinical decision-making and logistics. These initiatives aim to improve supply chain management and enhance field operations while aligning with the organisation’s principles.

Despite interest in AI’s potential, Stoll cautioned against using off-the-shelf tools unsuited to specific local challenges, underscoring the need for adaptability and responsible innovation in humanitarian work.

Meta reports minimal AI impact on global misinformation

Meta Platforms has reported that generative AI had limited influence on misinformation campaigns across its platforms in 2024. According to Nick Clegg, Meta’s president of global affairs, coordinated networks spreading propaganda struggled to gain traction on Facebook and Instagram, and AI-generated misinformation was promptly flagged or removed.

Clegg noted, however, that some of these operations have migrated to other platforms or standalone websites with fewer moderation systems. Meta dismantled around 20 covert influence campaigns this year. The company aims to refine content moderation while maintaining free expression.

Meta also reflected on its overly strict moderation during the COVID-19 pandemic, with CEO Mark Zuckerberg expressing regret over certain decisions influenced by external pressure. Looking forward, Zuckerberg intends to engage actively in policy debates on AI under President-elect Donald Trump’s administration, underscoring AI’s critical role in US technological leadership.

Senator Cruz questions foreign influence on US AI policy

Republican Senator Ted Cruz has called for an investigation into whether European governments have improperly influenced US policies on AI. Cruz’s concerns stem from growing international collaborations on AI regulation, including treaties and partnerships initiated by the Biden administration.

Cruz criticised European regulations as overly restrictive, claiming they target American AI companies and could shape US policies unfairly. He also accused the Centre for the Governance of Artificial Intelligence (GovAI), a UK-based nonprofit, of engaging in political activities without registering as a foreign agent, though GovAI has denied any wrongdoing.

The European Union has taken a leading role in AI regulation, recently passing the AI Act, the world’s first comprehensive law governing the technology. Cruz has framed these efforts as part of what he describes as ‘radical left’ interference, urging transparency about foreign involvement in shaping US AI laws.

Meta eyes nuclear energy to power AI and data centres

Meta has announced plans to harness nuclear energy to meet rising power demands and environmental goals. The company is soliciting proposals for up to 4 gigawatts of US nuclear generation capacity, with projects set to commence in the early 2030s. By doing so, it aims to support the energy-intensive requirements of AI and data centre operations.

Nuclear energy, according to Meta, offers a cleaner, more reliable solution for diversifying the energy grid. Power usage by US data centres is projected to triple by 2030, necessitating about 47 gigawatts of new capacity. However, challenges such as regulatory hurdles, uranium supply issues, and community resistance may slow progress.

The tech giant is open to both small modular reactors and traditional large-scale designs. Proposals are being accepted until February 2025, with a focus on developers skilled in community engagement and navigating complex permitting processes. An official statement highlighted nuclear’s capital-intensive nature, which demands a thorough request-for-proposals process.

Interest in nuclear power among tech firms is growing. Earlier agreements by Microsoft and Amazon have set precedents for nuclear-powered data centres. Meta’s latest initiative underscores a broader shift towards innovative energy solutions within the industry.

Australia pushes for new rules on AI in search engines

Australia’s competition watchdog has called for a review of efforts to ensure more choice for internet users, citing Google’s dominance in the search engine market and the failure of its competitors to capitalise on the rise of AI. A report by the Australian Competition and Consumer Commission (ACCC) highlighted concerns about the growing influence of Big Tech, particularly Google and Microsoft, as they integrate generative AI into their search services. This raises questions about the accuracy and reliability of AI-generated search results.

While the use of AI in search engines is still in its early stages, the ACCC warns that large tech companies’ financial strength and market presence give them a significant advantage. The commission expressed concern that AI-driven search could fuel misinformation, as consumers may rely on AI-generated responses that appear useful but are less accurate. In response, Australia is pushing for new regulations, including laws to prevent anti-competitive behaviour and improve consumer choice.

The Australian government has already introduced several measures targeting tech giants, such as requiring social media platforms to pay for news content and restricting social media access for children under 16. A proposed new law could impose hefty fines on companies that suppress competition. The ACCC has called for service-specific codes to address data advantages and ensure consumers have more freedom to switch between services. The inquiry is expected to close by March next year.

AI cloned voices fool bank security systems

Advancements in AI voice cloning have revealed vulnerabilities in banking security, as a BBC reporter demonstrated how cloned voices can bypass voice recognition systems. Using an AI-generated version of her voice, she successfully accessed accounts at two major banks, Santander and Halifax, simply by playing back the phrase “my voice is my password.”

The experiment highlighted potential security gaps: the cloned voice worked when played through ordinary speakers and required no high-tech setup. The banks responded that voice ID is only one layer of a multi-layered security system and maintained that it is more secure than traditional authentication methods. Experts, however, view the demonstration as a wake-up call about the risks posed by generative AI.
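The underlying weakness is easy to illustrate. Voice ID systems of this kind typically map a short recording to a numerical ‘voiceprint’ embedding and accept a caller whose embedding is close enough to the one captured at enrolment, so a sufficiently faithful clone falls inside the same acceptance region as the genuine speaker. The Python sketch below is a toy model of that comparison; the 256-dimensional embeddings, the cosine-similarity metric, and the 0.85 threshold are illustrative assumptions, not details of Santander’s or Halifax’s actual systems.

```python
# Toy model of an embedding-based speaker verifier (illustrative only).
# Embedding size, noise levels, and threshold are invented assumptions,
# not parameters of any real bank's voice ID system.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprint embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(enrolled: np.ndarray, attempt: np.ndarray, threshold: float = 0.85) -> bool:
    """Accept the caller if their voiceprint is close enough to the enrolled one."""
    return cosine_similarity(enrolled, attempt) >= threshold


rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)  # voiceprint captured at enrolment

# Later verification attempts, modelled as the enrolled voiceprint plus noise:
genuine = enrolled + rng.normal(scale=0.10, size=256)   # same speaker, new call
clone = enrolled + rng.normal(scale=0.15, size=256)     # high-fidelity AI clone
imposter = rng.normal(size=256)                         # unrelated speaker

print(verify(enrolled, genuine))   # True
print(verify(enrolled, clone))     # True -- a faithful clone clears the threshold
print(verify(enrolled, imposter))  # False
```

The point the toy makes is that such a verifier measures only acoustic similarity: any audio that sounds close enough to the enrolled voice, whether spoken live or synthesised and replayed, clears the same bar, which is why playing back a cloned phrase can succeed.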

Cybersecurity specialists warn that rapid advancements in voice cloning technology could increase opportunities for fraud. They emphasise the importance of evolving defences to address these challenges, especially as AI continues to blur the lines between real and fake identities.