AI deepfake pornography targeting teens is rising, new survey warns

A disturbing new report is shedding light on just how widespread AI-generated deepfake pornography has become among teenagers.

By the numbers:

A new survey from the nonprofit Thorn found that one in eight young people ages 13 to 20 say they personally know someone who has been targeted by fake explicit images created with artificial intelligence. Even more alarming, one in 17 say they’ve been victims themselves.

Why you should care:

Experts say technology has made it shockingly easy to take a simple photo, often pulled from social media, and manipulate it into realistic fake explicit content.

"It can take just one image online," advocates warn, "for someone to create something that never actually happened."

Attorney and former prosecutor JoDee Neil says what she learned about the technology while prosecuting these cases forced her to rethink her own digital footprint.

"I was so horrified by what I saw that I spent two full days taking down every image I could locate of my children from my social media," Neil said, adding that deepfakes are being created using photos parents often believe are harmless.

Neil calls it "not a tomorrow problem, but a today problem."

Dig deeper:

Researchers also say the issue isn’t limited to outside predators. In some cases, teens themselves are using AI tools to generate fake explicit images of classmates.

"Even other kids at school are taking pictures and using computers to make the pictures look like they're doing things that didn't really happen," Neil said.

Last year, Florida made it a felony to create or possess AI-generated sexual depictions of a child. The state also passed "Brooke’s Law," named after a Jacksonville teen whose fake nude image was spread online. The law requires websites to remove reported deepfake content within 48 hours of a victim’s request.

Brooke Curry, the Jacksonville teen for whom the law is named, says she was fortunate to have strong family support and resources to fight back.

What's next:

Neil argues that stronger accountability is still needed from the tech companies behind the tools used to generate these explicit images of children.

"This is a battle cry to parents," Neil said. "We have got to organize and really interrogate our elected leaders. What are you gonna do with these companies?"

Several lawsuits and investigations are now underway, targeting AI platforms accused of enabling the creation and spread of child sexual abuse material and deepfakes. The outcomes could set a major legal precedent for how these cases are handled moving forward.

The Source: Information for this story was gathered from a survey by the nonprofit Thorn on youth and deepfake abuse, Florida state legislation including "Brooke’s Law," interviews with a deepfake victim and with an attorney and former prosecutor, and past FOX 13 News reporting.
