(LOOTPRESS) – A leading consumer advocacy organization is sounding the alarm about a new generation of children’s toys that speak, respond, and interact using artificial intelligence — warning that these devices may expose kids to inappropriate content and raise privacy concerns.
The U.S. PIRG Education Fund, in its 40th annual Trouble in Toyland safety report, highlights emerging hazards associated with AI-equipped toys that go beyond traditional mechanical and chemical risks. The group says the latest threats stem not from choking or lead, but from chatbot-driven companions that connect to the internet and engage children in unpredictable conversations.
According to the report, many of today’s AI toys rely on the same large language models used by commercial chatbots, allowing them to generate responses on the fly rather than relying on fixed, age-appropriate scripts. While some devices include basic guardrails to filter content, researchers found that those protections can fail, sometimes resulting in sexually explicit dialogue or instructions involving dangerous household items during extended use.
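To make the distinction concrete, below is a minimal, hypothetical Python sketch contrasting a fixed-script toy with an LLM-driven one. It is not any vendor’s actual code: the prompt, function names, and scripted replies are invented for illustration, and the model name reflects only the report’s observation that the tested toys appeared to use a version of GPT-4o.

```python
# Hypothetical sketch only: not any toy vendor's actual code.
from openai import OpenAI

# A traditional talking toy draws from a fixed, pre-vetted script:
SCRIPTED_REPLIES = {
    "hello": "Hi there, friend! Want to hear a story?",
    "sing": "La la la! That's my favorite song.",
}

def scripted_toy(child_says: str) -> str:
    # Every possible output was written and reviewed in advance.
    return SCRIPTED_REPLIES.get(child_says.lower(), "Let's play a game!")

# An AI toy instead generates replies on the fly from a language model.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a friendly toy for young children. "
    "Only discuss age-appropriate topics."
)

def ai_toy(history: list[dict], child_says: str) -> str:
    history.append({"role": "user", "content": child_says})
    response = client.chat.completions.create(
        # Per the report, the tested toys appeared to use a GPT-4o variant.
        model="gpt-4o",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    # Unlike the scripted toy, this output was never reviewed by a human;
    # the system prompt is a soft instruction, not a hard guarantee.
    return reply
```

The difference is the crux of PIRG’s concern: every scripted reply was approved before shipping, while a generated reply can in principle be anything the model is capable of producing, which is why in-prompt guardrails alone can fail.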
The toys tested included a mix of products with conversational capabilities marketed to children ages 3–12. The report notes that because AI toys can be programmed to describe themselves as a friend or companion, children may form emotional attachments to the technology — a dynamic that consumer advocates say could undermine imaginative play and harm social development.
Beyond content concerns, PIRG researchers also pointed to privacy and security issues: these devices often record children’s speech, capture data through microphones (and in some cases cameras), and transmit information to corporate servers. Experts say this can place sensitive information — including biometric details — at risk of exposure in a future cybersecurity breach.
“We can’t assume these toys are protected just because they’re labeled for kids,” the report states, urging greater transparency and stronger parental controls from manufacturers.
The Trouble in Toyland findings come amid broader national concern about AI’s role in children’s lives. Recent high-profile cases — including an AI teddy bear that reportedly provided detailed information on knives, pills, and sexual topics — have prompted some companies to temporarily halt sales or update safety measures, while advocacy groups are calling for stricter oversight and independent testing of AI toy technology.
PIRG’s guidance for caregivers emphasizes researching toy makers, reviewing safety features, and critically evaluating whether an AI toy is appropriate for a child’s age and developmental needs.
“The biggest red flag from our testing was that both FoloToy’s Kumma and Alilo’s Smart AI Bunny discussed sexually explicit topics with us,” the report’s authors write, adding that the exchanges were too explicit to publish in full.
PIRG says it retested Kumma after FoloToy completed its safety audit and found the toy would no longer discuss the sexual topics it had before.
Both Kumma and Alilo had some guardrails in place, but researchers found those protections began to break down over the course of longer interactions, a known dynamic with chatbots that OpenAI has acknowledged. Both toys appeared to be running on a version of OpenAI’s GPT-4o model during these conversations.
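The report’s observation that guardrails weaken over long sessions has a plausible mechanism: as the conversation history grows, the safety instruction becomes a progressively smaller share of the model’s context. One mitigation that does not depend on conversation length is to screen each generated reply with a separate moderation model. The sketch below assumes OpenAI’s published moderation endpoint (`omni-moderation-latest`); whether any toy maker actually does this is unknown, and the function and fallback text are hypothetical.

```python
# Hypothetical sketch: a post-generation moderation gate a toy could add.
from openai import OpenAI

client = OpenAI()

SAFE_FALLBACK = "Let's talk about something else! Want to hear a story?"

def guarded_reply(candidate: str) -> str:
    # Screen the model's candidate reply with a dedicated moderation
    # model instead of trusting the system prompt alone.
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=candidate,
    )
    if result.results[0].flagged:
        # Never speak flagged content aloud; redirect instead.
        return SAFE_FALLBACK
    return candidate
```

Because each reply is checked independently, this kind of gate does not degrade as a session gets longer, which is precisely the property the in-prompt guardrails lacked in PIRG’s testing.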
While a child may be unlikely to use a term such as “kink,” it’s not entirely out of the question; kids can pick up age-inappropriate terms from older siblings or at school. The report’s bottom line: AI toys shouldn’t be capable of having sexually explicit conversations, period.






