Researchers are raising concerns about scammers and propagandists worldwide deploying AI tools to create fake news sites. These tools make content production cheaper and faster, accelerating the spread of misinformation, especially during high-stakes elections. One fabricated story that went viral falsely claimed that Israeli Prime Minister Benjamin Netanyahu's psychiatrist had died by suicide.

The proliferation of AI-generated content poses a threat because such sites can appear legitimate to users. NewsGuard, for example, has identified 739 AI-generated news sites operating with minimal human oversight, many bearing generic names such as 'Ireland Top News.'

Furthermore, the use of AI to spread misinformation may escalate during major elections, such as those in the US and India, potentially swaying public opinion. This trend undermines trust in information sources and creates challenges for advertisers, who risk inadvertently funding misleading content.

Why does it matter?

In response to these concerns, leading AI companies, such as Anthropic, OpenAI, Google, and Meta, are implementing measures to prevent their technology from disrupting upcoming elections. OpenAI, for example, is working to prevent its chatbot from impersonating real individuals or organisations, while Meta has committed to improving the labelling of AI-generated content. Whether these measures will effectively address the challenge remains to be seen.