US sanctions Iranian and Russian entities over election meddling
The US has imposed sanctions on organisations in Iran and Russia accused of attempting to influence the 2024 presidential election. The Treasury Department stated that these entities, linked to Iran’s Islamic Revolutionary Guard Corps (IRGC) and Russia’s military intelligence agency (GRU), aimed to exploit socio-political tensions among voters.
The accused Russian group used AI tools to create disinformation, including manipulated videos targeting a vice-presidential candidate. A network of over 100 websites mimicking credible news outlets was reportedly used to disseminate false narratives. The GRU is alleged to have funded and supported these operations.
Iran’s affiliated entity has allegedly been planning influence campaigns since 2023, aimed at stoking divisions within the US electorate. Russia’s embassy dismissed the interference claims as unfounded, while Iran’s representatives did not respond to requests for comment.
A recent US threat assessment has underscored growing concerns about foreign attempts to disrupt American democracy, with AI emerging as a critical tool for misinformation. Officials reaffirmed their commitment to safeguarding the electoral process.
UK minister warns that NATO must adapt to AI threats
The UK government has announced the launch of the Laboratory for AI Security Research (LASR), an initiative to protect against emerging AI-driven threats and bolster Britain’s cyber resilience. The lab, backed by an initial £8.22 million in government funding, will bring together experts from academia, industry, and government to address the evolving challenges AI poses to national security.
Speaking at the NATO Cyber Defence Conference in London, the Chancellor of the Duchy of Lancaster emphasised that AI is revolutionising national security and noted that ‘[…] as we develop this technology, there’s a danger it could be weaponised against us. Our adversaries are exploring how to use AI on the physical and cyber battlefield’.
LASR will collaborate with leading institutions, including the Alan Turing Institute, Oxford University, Queen’s University Belfast, and Plexal, alongside government agencies such as GCHQ, the National Cyber Security Centre, and the MOD’s Defence Science and Technology Laboratory. Partnerships will extend to NATO allies and Five Eyes countries, fostering an international approach to AI security.
In addition to LASR, the government announced a £1 million incident response project to help allies respond more effectively to cyberattacks. This initiative will further enhance international cooperation in managing cyber incidents.
The official communication highlights that this announcement aligns with the government’s broader agenda, including the forthcoming Cyber Security and Resilience Bill (to be introduced to Parliament in 2025) and the designation of data centres as critical national infrastructure (CNI) to secure the UK’s position as a global leader in cybersecurity and AI innovation.
Biden and Xi reach agreement to restrict AI in nuclear weapons decisions
President Joe Biden and China’s President Xi Jinping held a two-hour meeting on the sidelines of the APEC summit on Saturday. The two leaders reached a significant agreement to prevent AI from controlling nuclear weapons systems and made progress on securing the release of two US citizens wrongfully detained in China. Biden also pressed Xi to use Beijing’s influence to curb North Korea’s support for Russia in the ongoing conflict in Ukraine.
The breakthrough in nuclear safety, particularly the commitment to maintain human control over nuclear decisions, was reported as an achievement for Biden’s foreign policy. Xi, in contrast, called for greater dialogue and cooperation with the US and cautioned against efforts to contain China. His remarks also acknowledged rising geopolitical challenges, hinting at the difficulties that may arise under a Trump presidency. The meeting showcased a shift in tone from their previous encounter in 2023, reflecting a more constructive dialogue despite underlying tensions.
Reuters reported that it remains uncertain whether the statement will result in additional talks or concrete actions on the issue. The US has long held the position that AI should assist and enhance military capabilities, but not replace human decision-making in high-stakes areas such as nuclear weapons control. Last year, the Biden-Harris administration announced the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which more than 20 countries have endorsed. The declaration specifically underlines that “military use of AI capabilities needs to be accountable, including through such use during military operations within a responsible human chain of command and control”.
Chinese military adapts Meta’s Llama for AI tool
China’s People’s Liberation Army (PLA) has adapted Meta’s open-source AI model, Llama, to create a military-focused tool named ChatBIT. Developed by researchers from PLA-linked institutions, including the Academy of Military Science, ChatBIT is built on an earlier version of Llama, fine-tuned for military decision-making and intelligence-processing tasks. The tool reportedly performs better than some alternative AI models, though it falls short of OpenAI’s GPT-4.
Meta, which champions open innovation, prohibits the use of its models for military purposes. However, the open-source nature of Llama limits Meta’s ability to prevent unauthorised adaptations such as ChatBIT. In response, Meta affirmed its commitment to ethical AI use and noted the need for US innovation to stay competitive as China intensifies its investment in AI research.
China’s approach reflects a broader trend, as its institutions reportedly employ Western AI technologies for areas like airborne warfare and domestic security. With increasing US scrutiny over the national security implications of open-source AI, the Biden administration has moved to regulate AI’s development, balancing its potential benefits with growing risks of misuse.
US military explores deepfake use
US Special Operations Command (SOCOM) is pursuing the development of sophisticated deepfake technology to create virtual personas indistinguishable from real humans, according to a procurement document from the Department of Defense’s Joint Special Operations Command (JSOC).
These artificial avatars would operate on social media and online platforms, featuring realistic expressions and high-quality images akin to government IDs. JSOC also seeks technologies to produce convincing facial and background videos, including ‘selfie videos’, to avoid detection by social media algorithms.
US government agencies have previously announced frameworks to combat foreign information manipulation, citing national security threats from these technologies. Despite official recognition of the global dangers posed by deepfakes, SOCOM’s initiative underscores a willingness to use the technology for potential military advantage.
Experts have expressed concern over the ethical implications and the potential for increased misinformation, warning that deepfakes are inherently deceptive, have no legitimate applications beyond deceit, and could encourage further misuse worldwide. Such practices also risk eroding public trust in government communications, a risk compounded by the perceived hypocrisy of deploying the technology.
Why does it matter?
This plan reflects an ongoing interest in leveraging digital manipulation for military purposes, despite previous incidents in which platforms like Meta dismantled similar US-linked networks. It also exposes a contradiction in the US stance on deepfakes, as the country simultaneously condemns similar actions by states such as Russia and China.
Ukraine accuses Russia of intensifying AI-driven disinformation
Russia is using generative AI to ramp up disinformation campaigns against Ukraine, warned Ukraine’s Deputy Foreign Minister, Anton Demokhin, during a cyber conference in Singapore. He explained that AI is enabling Russia to spread false narratives on a larger and more complex scale, making it increasingly difficult to detect and counter. The spread of disinformation is a growing focus for Russia, alongside ongoing cyberattacks targeting Ukraine.
Ukrainian officials have previously reported that Russia’s FSB and military intelligence agencies are behind many of these efforts, with the goal of undermining public trust and spreading confusion. Demokhin stressed that Russia’s disinformation efforts are global, calling for international cooperation to tackle this emerging threat. He also mentioned that Ukraine is using AI to track these campaigns but declined to comment on any offensive cyber operations.
Meanwhile, other Russian cyberattacks are targeting Ukraine’s critical infrastructure and supply chains, seeking to disrupt essential services. Ukraine continues to collaborate with the International Criminal Court on investigating Russian cyber activities as potential war crimes.
Russian forces ramp up AI-driven drone deployment
Russia has announced a substantial increase in the use of AI-powered drones in its military operations in Ukraine. Russian Defence Minister Andrei Belousov emphasised the importance of these autonomous drones in battlefield tactics, saying they are already deployed in key regions and proving successful in combat. Speaking at a centre for next-generation drone technology, he called for more intensive training so that troops can operate these systems effectively.
Belousov revealed that two units equipped with AI drones are currently stationed in eastern Ukraine and along Russia’s Belgorod and Kursk borders, where they are engaged in active combat. The AI technology enables drones to autonomously lock onto targets and continue missions even if control is lost. Plans are underway to form five additional units to conduct around-the-clock drone operations.
Russia’s ramped-up use of AI drones comes alongside a broader military strategy to increase drone production tenfold, with President Putin aiming to produce 1.4 million units by the year’s end. Both Russia and Ukraine have relied heavily on drones throughout the war, with Ukraine also using them to strike targets deep inside Russian territory.
Cybercriminals use AI to target elections, says OpenAI
OpenAI reports cybercriminals are increasingly using its AI models to generate fake content aimed at influencing elections. The startup has neutralised over 20 attempts this year, including accounts producing articles on the US elections. Several accounts from Rwanda were banned in July for similar activities related to elections in that country.
The company confirmed that none of these attempts succeeded in generating viral engagement or reaching sustainable audiences. However, the use of AI in election interference remains a growing concern, especially as the US approaches its presidential elections. The US Department of Homeland Security also warns of foreign nations attempting to spread misinformation using AI tools.
As OpenAI strengthens its global position, the rise in election manipulation efforts underscores the critical need for heightened vigilance. The company recently completed a $6.6 billion funding round, further securing its status as one of the most valuable private firms.
ChatGPT continues to grow rapidly and has reached 250 million weekly active users since its launch in November 2022, underscoring the platform’s widespread influence.