Cybercriminals use AI to target elections, says OpenAI

OpenAI reports cybercriminals are increasingly using its AI models to generate fake content aimed at influencing elections. The startup has neutralised over 20 attempts this year, including accounts producing articles on the US elections. Several accounts from Rwanda were banned in July for similar activities related to elections in that country.

The company confirmed that none of these attempts generated viral engagement or reached sustained audiences. However, the use of AI in election interference remains a growing concern, especially as the US approaches its presidential elections. The US Department of Homeland Security has also warned of foreign nations attempting to spread misinformation using AI tools.

As OpenAI strengthens its global position, the rise in election manipulation efforts underscores the critical need for heightened vigilance. The company recently completed a $6.6 billion funding round, further securing its status as one of the most valuable private firms.

ChatGPT continues to see rapid growth, reaching 250 million weekly active users since its launch in November 2022, emphasising the platform’s widespread influence.

US intelligence official claims that Russia uses AI to influence US election

Russia has been the most active foreign power using AI to influence the upcoming United States presidential election, according to a US intelligence official. Moscow’s efforts have focused on supporting Donald Trump and undermining Kamala Harris and the Democratic Party. Russian influence actors are employing AI-generated content, such as text, images, and videos, to spread pro-Trump narratives and disinformation targeting Harris.

In July, the US Justice Department revealed the disruption of a Russia-backed operation that used AI-enhanced social media accounts to spread pro-Kremlin messages in the US. Additionally, Russian actors staged a fake hit-and-run video involving Harris, according to Microsoft research. The intelligence official described Russia as a more sophisticated actor than other foreign powers.

China has also been leveraging AI to shape global perceptions, though it is not focused on influencing the US election outcome. Instead, Beijing is using AI to promote divisive political issues in the US, while Iran has employed AI to generate inauthentic news articles and social media posts, targeting polarising topics such as Israel and the Gaza conflict.

Both Russia and Iran have denied interfering in the US election, with China also distancing itself from attempts to influence the voting process. However, US intelligence continues to monitor the use of AI in foreign influence operations as the November 5 election approaches.

BlackDice and Bin Omran join forces to boost Qatar’s cybersecurity

BlackDice and Bin Omran Trading and Telecommunication have launched a strategic partnership to significantly enhance Qatar’s cybersecurity infrastructure. The partnership combines their expertise to deliver state-of-the-art cybersecurity solutions, with BlackDice leveraging its AI-powered security and data intelligence to safeguard critical infrastructure and sensitive information.

The collaboration will also focus on strengthening the cybersecurity capabilities of major telecom operators in the region, boosting network resilience and protecting extensive personal and financial data. This comprehensive approach supports DA2030’s goal of creating a secure and resilient digital environment, essential for Qatar’s economic diversification and social development.

By addressing the evolving needs of Qatar’s digital landscape, BlackDice and Bin Omran Trading and Telecommunication are contributing to the nation’s ambition of becoming a global leader in technology and connectivity while ensuring robust protection against emerging cyber threats.

Global AI military blueprint receives support, but China declines

Around 60 nations, including the United States, endorsed a ‘blueprint for action’ on Tuesday to regulate the responsible use of AI in military settings. The blueprint was unveiled at the second Responsible AI in the Military Domain (REAIM) summit in Seoul. However, China was among the countries that declined to support the legally non-binding document.

The blueprint builds on discussions from last year’s summit in Amsterdam and outlines concrete steps, such as risk assessments and ensuring human involvement in decisions related to AI in military operations, including nuclear weapons. It also emphasises preventing AI from being used in weapons of mass destruction (WMD) by non-state actors, such as terrorist groups.

The summit, co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, aims to foster global cooperation without being led by a single entity. Despite this, China and approximately 30 other countries refrained from endorsing the document, highlighting differing views among participants on AI’s military use.

As the international community moves forward, discussions on AI in military contexts are expected to continue at the United Nations General Assembly in October. Experts stress that while the blueprint is a step forward, progress must be made carefully to avoid alienating countries from engaging in future talks.