A.I.-Generated Child Sexual Abuse Material May Overwhelm Tip Line

A report by Stanford researchers cautions that the National Center for Missing and Exploited Children doesn't have the resources to help fight the new epidemic.

Journalists outside a Senate Judiciary Committee hearing on online child sexual exploitation in January. A new report says A.I. is adding to the problem. Credit: Tom Brenner for The New York Times

By Cecilia Kang

Cecilia Kang reports on internet and A.I. policy from Washington.

A new flood of child sexual abuse material created by artificial intelligence is threatening to overwhelm the authorities already held back by antiquated technology and laws, according to a new report released Monday by Stanford University's Internet Observatory.

Over the past year, new A.I. technologies have made it easier for criminals to create explicit images of children. Now, Stanford researchers are cautioning that the National Center for Missing and Exploited Children, a nonprofit that acts as a central coordinating agency and receives a majority of its funding from the federal government, doesn't have the resources to fight the rising threat.

The organization's CyberTipline, created in 1998, is the federal clearinghouse for all reports on child sexual abuse material, or CSAM, online and is used by law enforcement to investigate crimes. But many of the tips received are incomplete or riddled with inaccuracies. Its small staff has also struggled to keep up with the volume.

"Almost certainly in the years to come, the CyberTipline will be flooded with highly realistic-looking A.I. content, which is going to make it even harder for law enforcement to identify real children who need to be rescued," said Shelby Grossman, one of the report's authors.

The National Center for Missing and Exploited Children is on the front lines of a new battle against sexually exploitative images created with A.I., an emerging area of crime still being delineated by lawmakers and law enforcement. Already, amid an epidemic of deepfake A.I.-generated nudes circulating in schools, some lawmakers are taking action to ensure such content is deemed illegal.

A.I.-generated CSAM is illegal if it depicts real children or if images of actual children were used as training data, researchers say. But synthetically made images that do not draw on real children could be protected as free speech, according to one of the report's authors.

