AI as an Emerging Tort Risk: Where Mass Torts Could Develop

May 8, 2026

Artificial intelligence is already drawing legal challenges, but the more important question is which issues could develop into broader plaintiff activity over time. Stanford's 2025 AI Index reported 233 AI-related incidents in 2024, a 56.4% increase over 2023.1 That is not a chatbot-litigation count, but it does show that documented AI-related harm events are rising. Four areas stand out:

Early Indicators of AI Litigation Risk and Mass-Tort Potential

Biometric Privacy: the Clearest Current Litigation Signal

Biometric privacy is the strongest current candidate for AI-related mass-tort-style exposure. Many AI-enabled tools rely on voice analysis, facial recognition, speaker identification, or other identity-linked data. That matters because the same alleged collection or use of biometric data can be repeated across large groups of users or meeting participants in a standardized way. Illinois has become the main reference point because the Biometric Information Privacy Act (BIPA) defines a biometric identifier to include a voiceprint and a scan of hand or face geometry.8

A 2025 biometric litigation review reported that at least 100 putative BIPA class actions were filed in 2025.2 That figure is not limited to AI cases, but it still matters because AI note-takers, meeting assistants, speaker-recognition tools, and facial-analysis systems fit naturally into the same plaintiff model. For now, biometric privacy is the best statistical sign that at least one AI-adjacent category already has the structure that can support broader plaintiff activity.

Chatbot Harm: Early, but Harder to Dismiss

Chatbot dependency and psychological-harm claims are less developed numerically, but they are harder to dismiss than they were a year ago.4 The concern is not just that a chatbot gave bad or troubling responses. It is that some systems may encourage emotional dependency or continued engagement despite signs of vulnerability.

There is also evidence that chatbots are already being used in emotionally sensitive settings. Pew reported in February 2026 that 12% of U.S. teens say they have used AI chatbots to get emotional support or advice.3

Incorrect AI Guidance: Exposure Is Rising Faster Than Litigation

Reliance-based harm from incorrect AI guidance is another area to watch, especially as AI moves deeper into medical, financial, and legal settings. The stronger theory may not be that AI simply gave a wrong answer, but that it was used in a setting where people were likely to rely on it without enough warning, escalation, or human review. At the moment, direct tort statistics are thin; the stronger support comes from exposure data showing how widely these tools are already used.

The chart below highlights consumer survey data suggesting that AI use in health and financial settings is already broader than it may appear.5, 6 These figures do not establish a mass tort, but they do show that AI use in high-stakes settings is already widespread enough to support broader plaintiff activity if reliance claims become more standardized.

Voice Cloning and Digital Likeness: Large Harm Environment, Fragmented Claims

Training-data, voice-cloning, and digital-likeness claims are broader and less tidy than the first three categories. Some of the biggest disputes in this area are still copyright or licensing cases.

Even so, the voice-cloning and impersonation side deserves attention because the same tools can be used repeatedly and at scale, with harms that reach beyond intellectual-property disputes. Even without fitting the classic mass-tort model, this category could still create meaningful exposure.

The best number in this section is a harm indicator. The FTC reported that impersonation scams caused $2.95 billion in losses in 2024.7 That figure is not limited to AI-enabled scams, but it is still relevant because AI voice cloning and digital impersonation can intensify exactly this kind of loss.

Today, biometric privacy offers the clearest litigation-volume signal, while chatbot harm, incorrect-guidance claims, and voice-cloning or digital-likeness disputes remain developing areas to watch.

How Alan Gray Can Help

As AI-related litigation risks develop, organizations may need more than legal monitoring alone. They may also need help understanding where exposures are forming, how claims patterns could spread across books of business, and whether internal processes are equipped to respond as these issues move from isolated disputes to broader claims activity. Alan Gray helps clients evaluate emerging liability trends, improve visibility into claim and legal-spend activity, and strengthen decision-making where new risks are beginning to take shape.

  • Emerging liability assessment: Alan Gray can help clients track and assess developing litigation themes, including where claim patterns, severity signals, or repeatable allegations may point to broader exposure.
  • Claims and legal-spend visibility: Through claims review and legal spend management capabilities, Alan Gray can help clients identify how new litigation categories are affecting defense activity, outside counsel use, and cost trends.
  • Data-driven decision support: Alan Gray can help clients use claim, litigation, and portfolio data to support reserve discussions, risk monitoring, and strategic planning as emerging issues evolve.

Citations

  1. Stanford Institute for Human-Centered Artificial Intelligence, "Responsible AI," The 2025 AI Index Report, 2025.
  2. Alan Friel, James Ko, and Kristin Bryan, "2025 Year-In-Review: Biometric Privacy Litigation," Privacy World, Dec. 11, 2025.
  3. Pew Research Center, "How Teens Use and View AI," Feb. 24, 2026.
  4. Reuters, "Lawsuit says Google's Gemini AI chatbot drove man to suicide," Mar. 4, 2026.
  5. KFF, "Tracking Poll on Health Information and Trust: Use of AI for Health Information and Advice," Mar. 25, 2026.
  6. Consumer Financial Protection Bureau, "Chatbots in Consumer Finance," June 6, 2023.
  7. Federal Trade Commission, "FTC Highlights Actions to Protect Consumers from Impersonation Scams," Apr. 4, 2025.
  8. Illinois General Assembly, Biometric Information Privacy Act, 740 ILCS 14/10.

Additional Articles

Private Credit and P&C Insurers: How Weaker Investment Returns Can Pressure Underwriting

April 20, 2026

The bigger risk for many P&C carriers may not be a sudden credit event.

What Makes New York Construction Litigation Different: Falls, Claim Severity, and the Role of Labor Law § 240

April 10, 2026

Even with fewer construction incidents in 2024, severity remains the key issue due to Labor Law § 240, which can significantly affect liability exposure.

GLP-1s as an Emerging Tort Risk: What We Still Don’t Know

April 8, 2026

Even without major verdicts or settlements, GLP-1 litigation already has several markers of an emerging tort risk: rapid filing