AI “companion” chatbots are being marketed as friendly, supportive, and always available, sometimes even framed in therapeutic terms. For minors and emotionally vulnerable users, that dynamic can become dangerous. And families are now turning to civil courts with a question that goes beyond ethics or tech hype: when an AI product allegedly contributes to serious psychological injury, self-harm, or suicide, who is legally responsible?
At Hilliard Law, we handle high-stakes civil cases involving corporate negligence, including claims tied to online exploitation, video game addiction, and other digital harms. As the legal system catches up to AI products that simulate relationships and influence behavior, the early cases are showing what these claims may look like and what evidence matters most.
The Legal Landscape Is Moving Fast
One of the most closely watched cases involves a Florida mother’s lawsuit alleging that an AI chatbot contributed to the suicide of her 14-year-old son. In May 2025, a federal judge declined to throw out major parts of the case, rejecting arguments that AI-generated chatbot output was automatically protected speech at the motion-to-dismiss stage and allowing claims against Character.AI and Google to proceed.
In January 2026, Reuters reported that Google and Character.AI had agreed to settle the case, though terms were not immediately available. That resolution is an important signal of how seriously these allegations are being treated, even at this early stage of AI harm litigation.
Separately, the Federal Trade Commission launched an inquiry into AI chatbots acting as “companions,” specifically seeking information about what companies have done to evaluate safety, limit risks to children and teens, and warn users and parents about those risks.
What These Lawsuits Typically Allege
These cases tend to focus on product decisions and safety failures, including what was foreseeable, what guardrails existed, and what the company did (or didn’t do) when risk was apparent.
While every case is fact-specific, common allegations and theories in this emerging space include:
- Negligent design / unsafe product features: The product encourages emotionally intense attachment, dependency, or sexualized roleplay without effective safety controls for minors or vulnerable users.
- Failure to warn: Families allege they were not adequately warned about foreseeable psychological risks, including the potential for manipulation, coercion, or self-harm reinforcement. (These kinds of theories appear directly in the pleadings and court analysis in the Florida case.)
- Inadequate guardrails and enforcement: Safety rules “on paper” that do not reliably prevent harmful interactions in real-world use, especially where minors are involved.
- Misrepresentation / deceptive practices: Allegations that the product was presented as safer, more controlled, or more “supportive” than it actually was, including claims that bots can mimic therapy-like or romantic-partner dynamics.
Where Liability Can Attach in AI Harm Cases
A strong civil case generally needs a clear theory of why this harm was preventable and what the company could have done differently. In AI companion/chatbot cases involving minors, liability arguments often center on:
- Foreseeability: was the risk predictable? If a product is designed to simulate intimacy, secrecy, exclusivity, or “always there for you” dependency, especially for minors, plaintiffs will argue the risk of psychological harm is not hypothetical. That foreseeability is part of why regulators are asking companies what safety testing and risk evaluation they performed.
- Guardrails: what safety controls existed, and did they work? The issue isn’t whether a company has some policy. It’s whether the product meaningfully prevents harmful dynamics, including sexual content involving minors, coercive or isolating “relationship” dynamics, reinforcement of self-harm ideation, and escalation patterns that a reasonable safety system should flag.
- Warnings and parental notice: were families informed of the actual risk profile? Many families allege they did not understand how quickly these tools can become emotionally immersive for teens, or how they can blur boundaries in ways that feel “real” to a child. That’s the core of many failure-to-warn and deceptive-practices theories.
- Company response: what happened when risk signals appeared? In litigation, response systems matter: how reports were handled, whether repeat-risk patterns were disrupted, whether the platform tightened access for minors, and whether internal metrics showed known danger zones.
What Evidence Matters in These Cases
Unlike a general safety debate, lawsuits live or die on proof. Depending on the facts, the most important categories of evidence can include:
- Conversation logs and interaction history (what was said, when, escalation patterns)
- Product design and safety documentation (guardrails, moderation rules, testing, known limitations)
- Internal metrics and incident reports (what the company tracked and when)
- Warnings, marketing, and UX prompts (what users and parents were told)
- Account and age controls (what the product did to keep minors from adult-coded interactions)
- Medical and mental-health documentation showing harm and its progression (when relevant)
This is also why these cases often require significant technical and evidentiary work early: the product itself is part of the liability story.
When a Family Should Consider a Confidential Case Review
The cases that typically justify legal review involve serious harm: documented psychological injury, a self-harm attempt, or a wrongful death, paired with facts suggesting the product’s design, warnings, or safeguards were inadequate for foreseeable risk.
If a child was exposed to sexualized interactions, coercive “relationship” dynamics, or content that reinforced self-harm ideation through an AI companion/chatbot product, it may be worth exploring whether civil claims are viable.
Our Firm’s Perspective: “AI Safety” Is Not a Marketing Claim — It’s a Duty Question
As regulators and courts scrutinize AI companions, especially their impact on minors, companies will increasingly be asked to show what they did to evaluate risk, implement guardrails, and warn families. The FTC’s inquiry reflects that direction clearly.
At Hilliard Law, we’re at the forefront of tracking and researching litigation in this emerging area, and we are currently investigating claims involving video game addiction, sexual abuse on Roblox, and other online harms involving children.
If you believe your family’s situation involves serious harm tied to an AI chatbot/companion product or other online platform, or you want to understand what civil options may exist, we can discuss it during a free and confidential consultation. Call (866) 927-3420 or contact us online.