In a recent study published in Science Advances, researchers found that GPT-3, the machine-learning model family behind ChatGPT, can spread disinformation online faster and more persuasively than humans. The study assessed how AI models, particularly GPT-3, affect the information landscape and shape how people perceive and interact with information and misinformation, especially on social media. Focusing on tweets posted on Twitter, the researchers tested participants' ability to distinguish disinformation from accurate information, and found that GPT-3 was highly effective at producing convincing disinformation. According to the findings, respondents were better at identifying false information in tweets written by humans than in tweets generated by GPT-3; in other words, they were less effective at detecting disinformation when it originated from AI.

These findings raise significant concerns: they suggest that GPT-3 could propagate disinformation at an accelerated rate, with profound implications for online discourse and public opinion. It is important to recognise that GPT-3 is not inherently malicious and can be utilised for positive purposes; nonetheless, the study emphasises the need to use AI language models responsibly to combat the dissemination of disinformation on the internet. In conclusion, the study underscores the potential dangers associated with AI-powered text generators like GPT-3 and highlights the urgent need to mitigate the rapid spread of false information.