Image: AI-generated with Midjourney by Molly-Anna MaQuirl
On January 31, 2024, the U.S. National Center for Missing and Exploited Children (NCMEC) said it had received 4,700 reports over the past year of AI-generated content depicting the sexual exploitation of children.
Meta's CEO Mark Zuckerberg, TikTok's CEO Shou Zi Chew, X's CEO Linda Yaccarino, Snap's CEO Evan Spiegel, and Discord's CEO Jason Citron were among the invited attendees at the Senate Judiciary Committee hearing on online child sexual exploitation, held that same day at the U.S. Capitol in Washington.
The news has been met with concern across a number of industries as AI deepfakes continue to dominate headlines.
The NCMEC said it expects the problem only to grow as the technology improves. Its warning echoes those of many child safety experts and researchers, who have been vocal about the risks posed by generative AI tools that create text and images from user-typed prompts, arguing that such tools could make exploitation and abuse a bigger problem.
In 2022, the NCMEC received reports covering a total of 88 million files of child abuse content; it has not yet published figures for 2023.
"We are receiving reports from the generative AI companies themselves, platforms and members of the public. It's absolutely happening," warned John Shehan, senior vice president of NCMEC, explaining that the image generation software was already being used for negative and harmful purposes.
The CEOs of Meta, X, TikTok, Snap, and Discord all testified at the hearing and were questioned about what they are doing to protect young people from predators and abuse online.
Fallon McNulty of the NCMEC-run CyberTipline warned that AI deepfakes and generated imagery are becoming "more and more photorealistic," making it harder to determine whether the victim is a real person.
In 2023, a report by the Stanford Internet Observatory warned that AI-generated images could be used to harm real children by creating content matching their likeness, in much the same way as the recent AI deepfakes of Taylor Swift that were circulated.
OpenAI, creator of the popular ChatGPT, now feeds reports to NCMEC, and McNulty said the organization is also in talks with other AI companies in a bid to improve safety, even as the technology driving AI image generation grows more powerful.
Safeguarding must be a priority for governments and large technology companies as this technology becomes more widely available, to ensure children are protected against the risks that AI poses.