Margolis and Thacker say that even though the data has now been secured, the incident still raises the question of how many employees at companies that make AI toys can access the collected data, whether their level of access is monitored, and how strong their security credentials are. “There are cascading privacy implications from this,” Margolis says. “All it takes is one employee to have a bad password, and then we’re back to the same place we started, where it’s all exposed to the public internet.”
Margolis points out that sensitive data about a child’s thoughts and emotions could be misappropriated or used in horrific ways to abuse or manipulate children. “To be blunt, this is a kidnapper’s dream,” he says. “We’re talking about information that lets someone lure a child into a really dangerous situation, and it was essentially accessible to anybody.”
Margolis and Thacker point out that, beyond its accidental data exposure, Bondu also—based on what they saw inside its admin console—appears to use Google’s Gemini and OpenAI’s GPT-5, and as a result may share information about kids’ conversations with those companies. Anam Rahfid of Bondu responded in an email, saying that the company uses “third-party enterprise AI services to generate responses and run certain safety checks, which involves securely transmitting relevant conversation content for processing.” He added that the company takes precautions to “minimize what’s sent, use contractual and technical controls, and operate under enterprise configurations where providers state prompts/outputs aren’t used to train their models.”
The researchers also warn that AI toy makers may be using AI to build their own tools, products, and infrastructure. They say they believe the Bondu console was itself “vibe-coded”—created with generative AI programming tools that often introduce security flaws. Bondu declined to answer WIRED’s questions about whether AI programming tools were used to build the console.
Warnings about the dangers of AI toys have multiplied in recent months. Most, however, have focused on the possibility that a toy will discuss inappropriate subjects or lead children to engage in dangerous behaviors or harm themselves. NBC News, for example, reported in December that AI toys its reporters tested offered explanations of sexual terms and tips for sharpening knives, and even appeared to echo Chinese government propaganda, saying, for example, that Taiwan is part of China.
Bondu appears to have at least tried to build safeguards into the AI chatbot that its toy gives children access to. It offers a $500 bounty for any report of “an inappropriate response” from the toy. “We’ve had this program for over a year, and no one has been able to make it say anything inappropriate,” reads a statement on the company’s website.
Yet Thacker and Margolis found that Bondu had left its users’ most sensitive information completely exposed. “This is a perfect conflation of safety with security,” says Thacker. “Does ‘AI safety’ even matter when all the data is exposed?”
Thacker admits that before examining Bondu’s security, he had been considering giving an AI-enabled toy to his children, as a neighbor had done. After seeing Bondu’s exposed data, he changed his mind.
“Do I really want this in my house? No, I don’t,” he says. “It’s kind of just a privacy nightmare.”

