UN discusses ethical tech and inclusion at IGF 2024

Speakers at IGF 2024 highlighted digital innovation within the United Nations system, demonstrating how emerging technologies are enhancing services and operational efficiency. Representatives from UNHCR, UNICEF, the UN Pension Fund, and UNICC shared their organisations’ progress and collaborative efforts.

Michael Walton, Head of Digital Services at UNHCR, detailed initiatives supporting refugees through digital tools, including mobile apps for accessing services and efforts to counter misinformation. Walton stressed the importance of digital inclusion and innovation in bridging gaps in education and access for vulnerable groups.

Fui Meng Liew, Chief of Digital Center of Excellence at UNICEF, emphasised safeguarding children’s data rights through a comprehensive digital resilience framework. UNICEF’s work also involves developing digital public goods, with a focus on accessibility for children with disabilities and securing data privacy.

Dino Cataldo Dell’Accio from the UN Pension Fund presented a blockchain-powered proof-of-life system that uses biometrics and AI to deliver e-government services to an ageing population. The system protects beneficiaries’ security and privacy while streamlining verification. Similarly, Sameer Chauhan of UNICC showcased digital solutions, such as AI chatbots and cybersecurity initiatives, that support UN agencies.
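The panel described the system only at a high level. As a rough illustration, not the fund’s actual design, a proof-of-life check of this kind might match a fresh biometric capture against an enrolled template on the beneficiary’s own device and then anchor only a salted digest on an append-only ledger. A minimal sketch in Python, with every name and step assumed:

```python
# Illustrative sketch only: the UN Pension Fund has not published its code.
# All names and steps below are assumptions about how a blockchain-anchored
# proof-of-life check could work in principle.
import hashlib
import json
import time


def template_digest(biometric_template: bytes, salt: bytes) -> str:
    """Hash the biometric template so raw biometrics never leave the device."""
    return hashlib.sha256(salt + biometric_template).hexdigest()


class AppendOnlyLedger:
    """Stand-in for a blockchain: an append-only, hash-chained log."""

    def __init__(self):
        self.blocks = []

    def append(self, payload: dict) -> str:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {"payload": payload, "prev": prev_hash, "ts": time.time()}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(record)
        return record["hash"]


# Annual proof-of-life: the device matches a fresh biometric capture against
# the enrolled template, then anchors only a digest on the ledger.
ledger = AppendOnlyLedger()
salt = b"per-beneficiary-random-salt"   # assumed to be stored client-side
enrolled = template_digest(b"<enrolment template>", salt)

fresh = template_digest(b"<enrolment template>", salt)  # same person re-scans
if fresh == enrolled:  # match decided locally, never on-chain
    ledger.append({"beneficiary": "UNJSPF-12345", "proof_of_life": fresh})
```

The property such a design aims for is that raw biometrics never leave the device; the ledger stores only tamper-evident digests that auditors can verify later.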

The session’s collaborative tone extended into discussions of the UN Digital ID project, which links multiple UN agencies. Audience members raised questions on accessibility, with Nancy Marango and Sary Qasim suggesting broader use of these solutions to support underrepresented communities globally.

Efforts across UN organisations reflect a shared commitment to ethical technology use and digital inclusion. The panellists urged collaboration and transparency as key to addressing challenges such as data protection and equitable access while maintaining focus on innovation.

FTC’s Holyoak raises concerns over AI and kids’ data

Federal Trade Commissioner Melissa Holyoak has called for closer scrutiny of how AI products handle data from younger users, raising concerns about privacy and safety. Speaking at an American Bar Association meeting in Washington, Holyoak questioned what happens to information collected from children using AI tools, comparing their interactions to asking a toy like a Magic 8 Ball for advice.

The FTC, which enforces the Children’s Online Privacy Protection Act, has previously sued platforms like TikTok over alleged violations. Holyoak suggested the agency should evaluate its authority to investigate AI privacy practices as the sector evolves. Her remarks come as the FTC faces a leadership change with President-elect Donald Trump set to appoint a successor to Lina Khan, known for her aggressive stance against corporate consolidation.

Holyoak, considered a potential acting chair, emphasised that the FTC should avoid a rigid approach to mergers and acquisitions, while also predicting challenges to the agency’s worker noncompete ban. She noted that a Supreme Court decision on the matter could provide valuable clarity.

Coventry University project bridges education gap in Vietnam with AI tools

Coventry University researchers are using AI to support teachers in northern Vietnam’s rural communities, where access to technology and training is often limited. Led by Dr Petros Lameras, the GameAid project introduces educators to generative AI, an advanced form of AI that creates text, images, and other materials in response to prompts, helping teachers improve lesson development and classroom engagement.

The GameAid initiative uses a game-based approach to demonstrate AI’s practical benefits, providing tools and guidelines that enable teachers to integrate AI into their curriculum. Dr Lameras highlights the project’s importance in transforming educators’ technological skills, while Dr Nguyen Thi Thu Huyen from Hanoi University emphasises its potential to close the educational gap between Vietnam’s urban and rural areas.

The initiative is seen as a key step towards promoting equal learning opportunities, offering much-needed educational resources to underrepresented groups. Researchers at Coventry hope that their work will support more positive learning outcomes across Vietnam’s diverse educational landscape.

UK man sentenced to 18 years for using AI to create child sexual abuse material

In a landmark case for AI and criminal justice, a UK man has been sentenced to 18 years in prison for using AI to create child sexual abuse material (CSAM). Hugh Nelson, 27, from Bolton, used an app called Daz 3D to turn regular photos of children into exploitative 3D imagery, according to reports. In several cases, he created these images based on photographs provided by individuals who personally knew the children involved.

Nelson sold the AI-generated images on various online forums, reportedly making around £5,000 (roughly $6,494) over an 18-month period. His activities were uncovered when he attempted to sell one of his digital creations to an undercover officer, charging £80 (about $103) per image.

Following his arrest, Nelson faced multiple charges, including encouraging the rape of a child, attempting to incite a minor to engage in sexual acts, and distributing illegal images. The case is significant because it highlights the dark side of AI and underscores the growing need for regulation of technology-enabled abuse.

US prosecutors intensify efforts to combat AI-generated child abuse content

US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.

Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.

The legal framework is still evolving regarding cases involving AI-generated abuse, particularly when identifiable children are not depicted. Prosecutors are resorting to obscenity charges when traditional child pornography laws do not apply. This is evident in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos into abusive content. Both defendants have pleaded not guilty.

Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.

AV1 robot bridges gap for children unable to attend school

Children who are chronically ill and unable to attend school can now stay connected to the classroom using the AV1 robot, developed by the Norwegian company No Isolation. The robot serves as the child’s eyes and ears, allowing them to engage with lessons and interact with friends remotely. Controlled via an app, it sits on a classroom desk, enabling students to rotate its view, speak to classmates, and even signal when they want to participate.

The AV1 has been especially valuable for children undergoing long-term treatment or experiencing mental health challenges, helping them maintain a connection with their peers and stay socially included. Schools in the United Kingdom can rent or purchase the AV1, and the robot has been widely adopted, with more than 1,000 units active in countries such as the UK and Germany. For many students, the robot has become a lifeline during extended absences from school.

Though widely praised, the AV1 faces logistical challenges in schools and hospitals, including administrative hurdles and technical issues such as weak Wi-Fi. Despite these obstacles, teachers and families have found the robot highly effective, with privacy protections and features tailored to students’ needs, including the option not to show the student’s face on screen.

Research has highlighted the AV1’s potential to keep children both socially and academically connected, and No Isolation has rolled out a training resource, AV1 Academy, to support teachers and schools in using the technology effectively. With its user-friendly design and robust privacy features, the AV1 continues to make a positive impact on the lives of children facing illness and long absences from school.

Ello’s new AI tool lets kids create their own stories

Ello, an AI reading companion designed to help children struggling with reading, has introduced a new feature called ‘Storytime’. This feature enables kids to create their own stories by choosing from a range of settings, characters, and plots. Story options are tailored to the child’s reading level and current lessons, helping them practise essential reading skills.

Ello’s AI, represented by a bright blue elephant, listens to children as they read aloud and helps correct mispronunciations. The tool uses phonics-based strategies to adapt stories based on the child’s responses, ensuring personalised and engaging experiences. It also offers two reading modes: one where the child and Ello take turns reading and another, more supportive mode for younger readers.
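Ello has not published the details of its adaptation logic, so the following is only a hypothetical sketch of the general pattern the article describes: pick the hardest story a child can currently decode, then alternate reading turns, with the supportive mode giving the app more of the sentences. The class names, levels, and turn-taking rule are all assumptions:

```python
# Hypothetical sketch: Ello's actual adaptation logic is proprietary.
# This illustrates level-matched story selection and turn-taking in general,
# not the company's implementation.
from dataclasses import dataclass


@dataclass
class Story:
    title: str
    level: int        # assumed phonics stage the text is decodable at
    sentences: list


def pick_story(stories: list, child_level: int) -> Story:
    """Choose the hardest story at or below the child's current level."""
    suitable = [s for s in stories if s.level <= child_level]
    suitable = suitable or [min(stories, key=lambda s: s.level)]  # fallback
    return max(suitable, key=lambda s: s.level)


def shared_reading(story: Story, supportive: bool) -> None:
    """Alternate turns; in supportive mode the app reads most sentences."""
    for i, sentence in enumerate(story.sentences):
        child_turn = (i % 3 == 0) if supportive else (i % 2 == 0)
        reader = "child" if child_turn else "app"
        print(f"[{reader}] {sentence}")


stories = [
    Story("The Cat Naps", 1, ["The cat naps.", "The cat sits."]),
    Story("A Trip to the Shop", 2,
          ["We go to the shop.", "We get a map.", "Back we come."]),
]
shared_reading(pick_story(stories, child_level=2), supportive=True)
```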

The Storytime feature distinguishes itself from other AI-assisted story creation tools by focusing on reading development. The technology has been tested with teachers and children, and includes safeguards to ensure age-appropriate content. Future versions of the product may allow even more creative input from children, while maintaining helpful structure to avoid overwhelming them.

Ello’s subscription costs $14.99 per month, with discounted pricing for low-income families. The company also partners with schools to offer its services for free, and has recently made its collection of decodable children’s books available online at no cost.