ICC rolls out AI to combat toxic content on social media

The International Cricket Council (ICC) has introduced a social media moderation programme ahead of the ICC Women’s T20 World Cup 2024. The initiative is designed to protect players and fans from toxic online content. More than 60 players have already joined, with further onboarding expected.

To safeguard mental health and promote inclusivity, the ICC has partnered with GoBubble. Together, they will use a combination of AI and human oversight to monitor social media platforms for harmful comments. The service will operate across Facebook, Instagram, and YouTube, with the option for players to use it on their own accounts.

The technology is designed to automatically detect and hide negative comments, including hate speech, harassment, and misogyny. By doing so, it creates a healthier environment for teams, players, and fans to engage with the tournament, which will be held in Bangladesh.
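In broad terms, this kind of automated moderation pairs a toxicity classifier with a human review queue: confidently toxic comments are hidden automatically, borderline ones go to a moderator. The sketch below is a simplified, hypothetical illustration of that flow; the thresholds and the toxicity_score() placeholder are assumptions for illustration, not GoBubble's actual system.

```python
# Minimal sketch of an "AI + human oversight" comment-moderation flow.
# toxicity_score() is a placeholder; a real deployment would use trained
# hate-speech/harassment classifiers and the platforms' moderation APIs.

from dataclasses import dataclass

HIDE_THRESHOLD = 0.90    # auto-hide comments the model is confident are toxic
REVIEW_THRESHOLD = 0.60  # queue borderline comments for a human moderator


@dataclass
class Decision:
    action: str   # "hide", "human_review", or "allow"
    score: float


def toxicity_score(comment: str) -> float:
    """Placeholder classifier returning a toxicity probability in [0, 1]."""
    toxic_markers = {"hate", "idiot", "go home"}  # illustrative only
    hits = sum(marker in comment.lower() for marker in toxic_markers)
    return min(1.0, hits / 2)


def moderate(comment: str) -> Decision:
    score = toxicity_score(comment)
    if score >= HIDE_THRESHOLD:
        return Decision("hide", score)          # hidden automatically
    if score >= REVIEW_THRESHOLD:
        return Decision("human_review", score)  # sent to human oversight
    return Decision("allow", score)


if __name__ == "__main__":
    for text in ["Great innings today!", "Go home, we hate you"]:
        print(text, "->", moderate(text))
```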

Finn Bradshaw, ICC’s Head of Digital, expressed his satisfaction with the programme’s early success. Players and teams have welcomed the initiative, recognising the importance of maintaining a positive digital atmosphere during the tournament.

X agrees to Brazil court orders amid social media ban

Elon Musk’s social media platform, X, is taking steps to comply with Brazil’s Supreme Court in an effort to lift its ban in the country. The platform was banned in Brazil in August for failing to moderate hate speech and meet court orders. The court had ordered the company to appoint a legal representative and block certain accounts deemed harmful to Brazil’s democracy. X’s legal team has now agreed to follow these directives, appointing Rachel de Oliveira Villa Nova Conceicao as its representative and committing to block the required accounts.

Despite previous defiance and criticism of the court’s orders by Musk and his company, X has shifted its stance. The court gave X five days to submit proof of the appointment and two days to confirm that the necessary accounts had been blocked. Once all compliance is verified, the court will decide whether to extend or lift the ban on X in Brazil.

Additionally, X has agreed to pay fines exceeding $3 million and to begin blocking specific accounts involved in a hate speech investigation. This marks a reversal for the company, which had previously denounced the court orders as censorship. X briefly became accessible in Brazil last week after a network update bypassed the ban, though the court continues to enforce its block until all conditions are met.

AI-powered fact-checking tech in development by NEC

The Japanese technology corporation NEC (Nippon Electric Company) is developing an AI technology designed to analyze and verify the trustworthiness of online information. The project, launched under Japan’s Ministry of Internal Affairs and Communications, aims to help combat false and misleading content on the internet. The system will be tested by fact-checking organizations, including the Japan Fact-check Center and major media outlets, with the goal of making it widely available by 2025.

The AI uses Large Language Models (LLMs) to assess different types of content, including text, images, video, and audio, and to detect whether they have been manipulated or are misleading. The system then evaluates the information’s reliability, flagging inconsistencies and checking whether sources are accurate, and compiles the results into reports. These reports allow for user-driven adjustments, such as removing unreliable information or adding new details, helping organizations streamline their verification processes.
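NEC has not published implementation details, but a verification pipeline of this kind can be pictured as: ask an LLM to extract checkable claims and flag inconsistencies, then return a report that fact-checkers can adjust. The sketch below illustrates that shape under those assumptions; llm_complete() is a stand-in for a real model call, and the report fields are illustrative.

```python
# Rough sketch of an LLM-assisted verification pipeline of the kind
# described above. All function names and fields are assumptions made
# for illustration, not NEC's actual design.

import json
from dataclasses import dataclass, field


def llm_complete(prompt: str) -> str:
    """Stand-in for a call to a Large Language Model endpoint."""
    # Canned answer so the sketch runs end to end.
    return json.dumps({"claims": ["Example claim extracted from the text"],
                       "verdict": "unverified",
                       "inconsistencies": []})


@dataclass
class ReliabilityReport:
    source_text: str
    claims: list = field(default_factory=list)
    verdict: str = "unverified"
    inconsistencies: list = field(default_factory=list)

    def remove_claim(self, claim: str) -> None:
        """User-driven adjustment: drop a claim judged unreliable."""
        self.claims = [c for c in self.claims if c != claim]

    def add_detail(self, note: str) -> None:
        """User-driven adjustment: attach extra context for fact-checkers."""
        self.inconsistencies.append(note)


def assess(text: str) -> ReliabilityReport:
    """Ask the model for checkable claims, inconsistencies, and a verdict."""
    raw = llm_complete(
        "Extract the factual claims in this text, note any internal "
        "inconsistencies, and give a reliability verdict as JSON:\n" + text)
    parsed = json.loads(raw)
    return ReliabilityReport(source_text=text,
                             claims=parsed["claims"],
                             verdict=parsed["verdict"],
                             inconsistencies=parsed["inconsistencies"])


if __name__ == "__main__":
    report = assess("A viral post claims the event was cancelled.")
    report.add_detail("Organiser's website still lists the event as scheduled.")
    print(report)
```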

As the project progresses, NEC hopes to refine its AI system to assist fact-checkers more effectively, ensuring that false information can be identified and addressed in real time. The technology could become a vital tool for media and fact-checking organizations, addressing the growing problem of misinformation online.