On Saturday, October 7, a TikTok user named @fujitiva48 posted a provocative video with a provocative question: “What are your thoughts on this new toy for little kids?” The more than 2,000 viewers who weighed in had come across what appeared to be a parody of a television commercial, and their verdict was clear. “Hey so this isn’t funny,” wrote one person. “Whoever made this should be investigated.”
It’s easy to see why the video provoked such strong reactions. The fake commercial opens with a photorealistic young girl holding a toy—pink, sparkling, a bumblebee adorning the handle. The voiceover insists the object is a pen as two girls scribble on paper. But the object’s floral design, its buzzing, and its name—the Vibro Rose—look and sound very much like a sex toy. An “add yours” button—the TikTok feature encouraging people to share the video on their own feeds—bearing the words “I’m using my rose toy” removes any remaining doubt. WIRED reached out to the @fujitiva48 account but did not receive a response.
The unsavory video was created with Sora 2, OpenAI’s latest video generator, which launched invite-only in the US on September 30. In just one week, videos like the Vibro Rose clip moved from Sora to TikTok’s For You page. WIRED found several other accounts posting similar videos, including rose- and mushroom-shaped water toys and cake decorators spraying “sticky milk,” “white foam,” and “goo” around images of lifelike children.
If these videos featured a real child rather than a digital creation, authorities in many countries would be investigating. But the law around AI-generated content involving minors remains unsettled. Data from 2025 released by the UK’s Internet Watch Foundation (IWF) shows that reports of AI-generated child sexual abuse material, or CSAM, have doubled in a year, from 199 cases between January and October 2024 to 426 incidents in the same period of 2025. Fifty-six percent of this content falls into Category A—the UK’s most serious classification, covering penetrative sexual activity, sexual activity with an animal, or sadism. Ninety-four percent of the illegal AI images the IWF tracked depicted girls. Sora does not appear to generate Category A content.
“Often, we see real children’s likenesses being commodified to create nude or sexual imagery and, overwhelmingly, we see AI being used to create imagery of girls. It is yet another way girls are targeted online,” Kerry Smith, CEO of the IWF, tells WIRED.
In response to the influx of harmful AI-generated material, the UK introduced a new amendment to its Crime and Policing Bill that would allow “authorized testers” to check that AI models are not capable of generating CSAM. The amendment, as reported by the BBC, would also ensure that models carry safeguards against other categories of imagery, including extreme pornography and nonconsensual intimate images. In the US, 45 states have laws that criminalize AI-generated CSAM, most of them passed in the past two years as AI generators have continued to improve.

