AI-trends.today

Sora II is used to create disturbing videos with AI-generated children

AI | By Gavin Wallace | 22/12/2025 | 3 Mins Read

On Saturday, October 7, a TikTok user named @fujitiva48 posted a provocative video with a provocative question: “What are your thoughts on this new toy for little kids?” More than 2,000 viewers who came across what appeared to be a parody of a television commercial weighed in, and to many the answer was obvious. “Hey so this isn’t funny,” wrote one person. “Whoever made this should be investigated.”

It’s easy to see why the video aroused such strong reactions. The fake commercial opens with a photorealistic young girl holding a toy: pink, sparkling, a bumblebee adorning the handle. The voiceover insists the object is a pen as we watch two girls scribble on paper. But the object’s floral design, its ability to buzz, and its name, the Vibro Rose, look and sound very much like a sex toy. An “add yours” button, the TikTok feature encouraging people to share the video on their own feeds, bearing the words “I’m using my rose toy,” removes any lingering doubt. WIRED attempted to contact the @fujitiva48 account but did not receive a response.

The unsavory video was created with Sora 2, OpenAI’s latest video generator, which was released invite-only in the US on September 30. Within a week, videos like the Vibro Rose clip had moved from Sora to TikTok’s For You page. WIRED found several other accounts posting similar videos, including rose- and mushroom-shaped water toys, as well as cake decorators spraying “sticky milk,” “white foam,” and “goo” onto images of lifelike children.

If a real child were involved rather than a digital fabrication, many countries would treat the situation as a criminal matter. The law on AI-generated content involving minors, however, remains unsettled. Data from 2025 released by the UK’s Internet Watch Foundation (IWF) shows that reports of AI-generated child sexual abuse material (CSAM) have doubled in a year, from 199 cases between January and October 2024 to 426 incidents over the same period in 2025. Fifty-six percent of that content falls into Category A, the UK’s most serious classification, covering penetrative sexual activity, sexual activity with an animal, or sadism. Ninety-four percent of the illegal AI images the IWF tracked depicted girls. Sora does not appear to generate Category A content.

“Often, we see real children’s likenesses being commodified to create nude or sexual imagery and, overwhelmingly, we see AI being used to create imagery of girls. It is yet another way girls are targeted online,” Kerry Smith, chief executive of the IWF, told WIRED.

In response to the influx of harmful AI-generated material, the UK introduced a new amendment to its Crime and Policing Bill that would allow “authorized testers” to check that artificial intelligence models are not capable of generating CSAM. The amendment, as reported by the BBC, would ensure that models have protections against generating specific kinds of imagery, including extreme pornography and nonconsensual intimate images. In the US, 45 states have laws that criminalize AI-generated CSAM, most of them enacted in the past two years as AI generators continue to improve.

Tags: artificial intelligence, children, OpenAI, social media, TikTok