New York
AI-generated videos showing what appear to be underage girls in sexualized clothing or positions have together racked up millions of likes on TikTok, even though the platform’s rules prohibit such content, according to new research from an online safety non-profit.
Researchers found more than a dozen accounts posting videos featuring AI-generated girls wearing tight clothing, lingerie or school uniforms, sometimes in suggestive positions. The accounts have hundreds of thousands of followers combined. Comments on many of the videos included links to chats on the messaging platform Telegram, which offered child pornography for purchase, according to the report.
Thirteen accounts remained active as of Wednesday evening after 15 of them were flagged through TikTok’s reporting tool last week, according to Carlos Hernández-Echevarría, who led the research as assistant director and head of public policy at Maldita.es. The Spain-based non-profit, which studies online disinformation and promotes media transparency, released the report Thursday.
The report raises questions about TikTok’s ability to enforce its own policies regarding AI content, even when that content appears to show sexualized images of computer-generated children. Tech platforms including TikTok face increased pressure to protect young users as more jurisdictions pass online safety legislation, including Australia’s under-16 social media ban, which went into effect this week.
“This is not nuanced at all,” Hernández-Echevarría told CNN. “Nobody that is, you know, a real person doesn’t find this gross and want it removed.”
TikTok says it has a zero tolerance policy for content that “shows, promotes or engages in youth sexual abuse or exploitation.” Its community guidelines specifically prohibit “accounts focused on AI images of youth in clothing suited for adults, or sexualized poses or facial expressions.” Another section of its policies states that TikTok does not allow “sexual content involving a young person, including anything that shows or suggests abuse or sexual activity,” which includes “AI-generated images” and “anything that sexualizes or fetishizes a young person’s body.”
The company says it uses a combination of vision, audio and text-based tools, along with human teams, to moderate content. Between April and June 2025, TikTok removed more than 189 million videos and banned more than 108 million accounts, according to the company. It says 99% of content violating its policies on nudity and body exposure, including of young people, was removed proactively, and 97% of content violating its policies on AI-generated content was removed proactively.
The findings
Maldita.es discovered the TikTok videos through test accounts it uses to monitor for potential disinformation or other harmful content as part of its work.
“One of our team members started to see that there was this trend of these (AI-generated videos of) really, really young kids dressed as adults and, particularly when you went into the comments, you could see that there was some money incentive there,” Hernández-Echevarría said.
Some of the accounts described their videos in their bio sections as “delicious-looking high school girls” or “junior models,” according to the report. “Even more subtle videos like those of young girls licking ice cream are full of crude sexual comments,” it states.
In some cases, the accountholders used TikTok’s “AI Alive” feature — which animates still images — to turn AI-generated images into videos, Hernández-Echevarría said. Other videos appeared to have been created using external AI tools, he said.