Artificial intelligence chatbots have been implicated in incidents involving multiple fatalities, according to a lawyer representing victims in AI-related harm cases, marking a concerning escalation from isolated incidents to events affecting numerous individuals simultaneously. The disclosure comes as legal and regulatory frameworks struggle to keep pace with the safety implications of conversational AI systems deployed to billions of users.
Matthew Bergman, founding partner of the Social Media Victims Law Center, revealed that his firm is now handling cases involving mass casualty incidents linked to AI chatbot interactions, according to TechCrunch AI. Bergman, who previously represented families in cases where individuals experienced severe psychological harm after using AI companions, stated that the nature of incidents has evolved from single-victim scenarios to events with multiple fatalities.
The attorney declined to provide specific details about the incidents, citing ongoing litigation and client confidentiality. However, the disclosure represents a significant shift in the risk profile associated with consumer-facing AI systems. Previous cases handled by Bergman’s firm involved individuals who developed what he termed “AI psychosis”—severe psychological disturbances allegedly triggered by intensive interactions with chatbot platforms.
The legal developments arrive as major AI companies face mounting scrutiny over safety protocols. Character.AI, a prominent chatbot platform, recently implemented new safety features following a lawsuit filed by Bergman’s firm on behalf of a mother whose son died by suicide after extensive interactions with AI characters on the platform. The company introduced improved detection systems for users in distress and mandatory safety notifications, though critics argue such measures remain inadequate.
The business implications extend beyond individual platform liability. Insurance carriers are reassessing coverage terms for AI companies, whilst investors increasingly factor regulatory risk into valuations. The AI safety sector—encompassing monitoring tools, content filtering systems, and compliance software—stands to benefit as platforms scramble to demonstrate duty of care. Conversely, consumer-facing AI companies may face substantial costs from both litigation and mandated safety infrastructure.
Platform operators could face liability under product liability frameworks, negligence claims, or potentially new AI-specific regulations. The European Union’s AI Act, which categorises certain AI systems as high-risk and imposes stringent requirements, may serve as a template for other jurisdictions. In the United States, however, Section 230 of the Communications Decency Act has historically shielded platforms from liability for user-generated content—a protection that may not extend to AI-generated responses that platforms directly control.
The technical challenge centres on detection and intervention. Unlike social media content moderation, which can flag explicit material, identifying conversations that may lead to harmful real-world actions requires understanding context, intent, and psychological state—areas where current AI systems demonstrate significant limitations. Some platforms have implemented keyword-based triggers and crisis resource prompts, but these approaches can be easily circumvented and may not address gradual psychological deterioration.
Industry responses have varied. OpenAI, Anthropic, and Google have all published safety guidelines and implemented usage policies prohibiting certain interactions. However, enforcement remains inconsistent, and smaller platforms often lack resources for comprehensive safety systems. The decentralised nature of AI deployment—with open-source models enabling anyone to launch chatbot services—further complicates oversight.
Regulatory bodies are beginning to respond. The UK’s Online Safety Act grants Ofcom powers to require platforms to assess and mitigate risks, whilst the US Federal Trade Commission has signalled interest in examining AI safety claims. However, no jurisdiction has yet established comprehensive frameworks specifically addressing AI chatbot safety standards or mandatory incident reporting requirements.
The path forward will likely involve multiple stakeholders. Technical standards organisations may develop safety benchmarks, whilst insurance markets could drive risk management practices through coverage requirements. Legislative action appears increasingly probable, particularly if additional incidents emerge or existing cases proceed to trial and establish precedents for platform liability.
Observers should monitor several developments: the outcome of pending litigation, which may establish liability standards; regulatory proposals in major markets; and whether platforms implement industry-wide safety protocols voluntarily or await mandatory requirements. The tension between innovation velocity and safety assurance—long debated in AI circles—has moved from theoretical concern to urgent policy priority, with legal accountability now driving the conversation.













