Child safety experts are concerned about the spread of AI-generated child sexual abuse images on the dark web. The US non-profit Thorn: Digital Defenders of Children reports that existing child safety tools can block and prevent the sharing of known victims’ content using image hashing and detection. However, these tools are limited in their ability to identify and block newly generated AI images.
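As a rough illustration of how hash-based matching works, the Python sketch below compares a candidate image’s perceptual hash against a database of hashes of known content, using the open-source imagehash library as a stand-in for production systems such as PhotoDNA. The KNOWN_HASHES set, the sample hash value, and the MAX_DISTANCE threshold are illustrative assumptions, not real values.

```python
# Minimal sketch of hash-based matching against known-content hashes.
# Uses the open-source `imagehash` library (pip install imagehash Pillow)
# as a stand-in for production systems such as PhotoDNA.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of already-identified images.
# The hash value here is a made-up placeholder.
KNOWN_HASHES = {imagehash.hex_to_hash("d1c4d1c4d1c4d1c4")}

MAX_DISTANCE = 5  # Hamming-distance tolerance for near-duplicates (assumed).

def is_known_content(path: str) -> bool:
    """Return True if the image perceptually matches a known hash."""
    candidate = imagehash.phash(Image.open(path))
    # ImageHash subtraction yields the Hamming distance between hashes.
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)
```

Because the comparison tolerates small Hamming distances, re-encoded or lightly edited copies of a known image still match; an entirely new AI-generated image, by contrast, produces a hash unlike anything in the database, which is exactly the limitation the experts describe.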

The Federal Bureau of Investigation (FBI) recently issued a public alert warning about the rise of synthetic content, commonly known as ‘deepfakes,’ in which malicious actors manipulate harmless photographs or videos to target victims. Disturbingly, the reports include instances where explicit content was created by altering victims’ images or videos.

AI researchers are also investigating new approaches, such as embedding identifying codes or watermarks into AI-generated photos so that producers remain permanently linked to their material, which could deter the creation of such images. A toy sketch of the idea follows.
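The sketch below illustrates the concept with a simple least-significant-bit scheme: a creator identifier is written into the low bits of an image’s red channel and can be read back out later. The embed_id and extract_id helpers are hypothetical, and real provenance proposals (statistical watermarks, signed C2PA-style metadata) are considerably more tamper-resistant than this.

```python
# Toy sketch of tying an AI-generated image to its producer by embedding
# an identifier in the least-significant bits (LSBs) of the red channel.
# Purely illustrative; real watermarking schemes are far more robust.
import numpy as np
from PIL import Image

def embed_id(image: Image.Image, creator_id: str) -> Image.Image:
    """Write creator_id's bits into the red-channel LSBs."""
    pixels = np.array(image.convert("RGB"))
    bits = np.array(
        [int(b) for byte in creator_id.encode() for b in f"{byte:08b}"],
        dtype=np.uint8,
    )
    flat = pixels[..., 0].flatten()
    assert bits.size <= flat.size, "image too small for this identifier"
    # Clear each target pixel's lowest bit, then set it from the identifier.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    pixels[..., 0] = flat.reshape(pixels[..., 0].shape)
    return Image.fromarray(pixels)

def extract_id(image: Image.Image, length: int) -> str:
    """Read `length` bytes of identifier back out of the red-channel LSBs."""
    flat = np.array(image.convert("RGB"))[..., 0].flatten()
    bits = flat[: length * 8] & 1
    data = bytes(
        int("".join(map(str, bits[i : i + 8])), 2)
        for i in range(0, len(bits), 8)
    )
    return data.decode()
```

An LSB mark like this survives lossless copying but is destroyed by JPEG re-compression or cropping, which is why the approaches researchers are actually pursuing rely on statistical patterns spread across the whole image rather than individual pixel bits.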
