Although they are relatively new, AI chatbots like OpenAI’s ChatGPT and Google’s Gemini have already gained a reputation for being problematic and potentially unsafe. Aside from answering prompts with misinformation, some AI chatbots have become notorious for encouraging criminal or violent behaviors. Even worse, there are already records of AI chatbots telling young children to harm themselves or others.
That last issue is fresh on the minds of the people behind Fairplay, an organization that encourages children to “play offline” for the benefit of their health and development. Fairplay, joined by hundreds of other child safety organizations, child health experts, and educators, has warned that the 2025 holiday season could see store shelves full of unregulated children’s toys powered by AI chatbots. Without realizing it, countless parents could be purchasing toys that look cute but may cause real short-term and long-term harm.
AI Toys Sold as “Educational”
Curio Interactive and Keyi Technologies are two of the biggest toymakers selling AI-powered children’s toys this holiday season. Their products are marketed as “educational” and allow a child to ask the toy anything and receive an AI chatbot response. How that interaction actually makes a toy educational is unclear.
Fairplay warns that encouraging a child to play with an AI chatbot might be devastating to their development. Essentially, no in-depth studies have been conducted to understand how AI interaction in place of human interaction, including ordinary “playing pretend,” could stifle or unhealthily alter a young child’s development of social skills and problem-solving abilities.
Rachel Franz, director of Fairplay’s Young Children Thrive Offline Program, explained, “What’s different about young children is that their brains are being wired for the first time and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters.” In other words, children interacting with an AI toy are likely far more susceptible than adults to believing misinformation and following unsafe suggestions, because they may not even suspect that the toy could be wrong about something.
No Regulations for AI Chatbot Toys
Most parents assume that any toy they pick up as a gift for their child has been designed meticulously and under the guidelines of federal regulations. Yet this is not really the case for toys with integrated AI chatbot features, because AI chatbots themselves are only loosely regulated.
“Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products,” Rachel Franz explained.
Similarly, the U.S. Public Interest Research Group (PIRG) released its annual “Trouble in Toyland” report, which highlights potentially dangerous or problematic products that parents should watch out for. In it, researchers noted that AI chatbot toys designed for children and toddlers seemingly had no child-safety features in place and could be prompted to tell children “about sexually explicit topics” and “where [to] find matches or knives,” among other unsafe topics of conversation. Teresa Murray, a co-author of PIRG’s report, also warned that chatbot toys could be collecting data during conversations with a child without a parent ever knowing about it.
Curio Interactive reportedly responded to PIRG’s report by saying that it is continuously working to make its toys safer for children. Other companies have said much the same. Still, the question remains whether a child should play with an AI-powered toy at all. Many experts say no.
What to Do If You’re Worried About AI Holiday Toys for Kids
Are you worried about what an AI-powered chatbot toy might say to your child? The simplest solution is not to purchase any AI toys for them. This holiday season, you can expect to see plenty of them on shelves, so take a moment to double-check the packaging of any new toy you buy for your children. If it says the toy uses AI or can hold a conversation with your child, you may want to steer clear, at least until safety regulations for such toys are firmly in place.
At Hilliard Law, we know that child safety must always be a top priority, especially in toy design and marketing. We are paying close attention to developing concerns about AI chatbot-powered children’s toys because we have already seen countless stories of AI chatbots causing serious problems and harm to adults. If legal action against AI toymakers becomes warranted, we will be among the first to know, so be sure to check our blog often for important updates.
Do you need the help of our Texas trial lawyers with a personal injury, child injury, or product liability claim? Call (866) 927-3420 at any time to schedule a FREE consultation.