Article

Oct 16, 2025

SB 243 and its Impact on Companion Bot Platforms

On October 13, 2025, Governor Gavin Newsom signed Senate Bill (SB) 243 into law, setting an important precedent for AI governance in the United States.

As conversational AI systems grow more personal, emotionally intelligent, and embedded in daily life, policymakers are acknowledging the urgent need for explicit safeguards to protect vulnerable users.

SB 243 directly applies to companion chatbot platforms operating in California, introducing new requirements around safety, transparency, and accountability. The law will take effect on January 1, 2026, with annual reporting obligations beginning July 1, 2027.

Who is this law for?

The bill directly targets companion chatbot platforms, which it defines as follows:

 “‘Companion chatbot’ means an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.”

Under the law, an “operator” is any person or company that makes such a chatbot platform available to users in California.

Key Requirements for Operators
  1. Suicide Prevention Protocols

Operators must:

  • Maintain a protocol to prevent the chatbot from producing content about suicidal ideation, suicide, or self-harm.

  • Refer users who express suicidal ideation or self-harm to crisis service providers, such as a suicide hotline or crisis text line.

  • Publish details of this protocol on the operator’s website.

This marks one of the first times that a law explicitly mandates crisis-response obligations for conversational AI systems.
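To make this concrete, the sketch below shows one way an operator might wire a detection step to a crisis referral notification and log each referral for later reporting. Everything here is hypothetical: flags_self_harm is a trivial stand-in for a real moderation model, CRISIS_RESOURCES is placeholder copy, and none of it reflects any particular platform’s implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical referral copy; real wording should come from clinical and legal review.
CRISIS_RESOURCES = (
    "You're not alone. You can call or text the 988 Suicide & Crisis Lifeline at 988, "
    "or text HOME to 741741 to reach the Crisis Text Line."
)

@dataclass
class ReferralLog:
    """Records each crisis referral notification so it can be counted later."""
    events: list[datetime] = field(default_factory=list)

    def record(self) -> None:
        self.events.append(datetime.now(timezone.utc))

def flags_self_harm(message: str) -> bool:
    """Trivial stand-in: a production protocol needs a dedicated
    self-harm / suicidal-ideation classifier, not a keyword list."""
    keywords = ("kill myself", "end my life", "want to die")
    lowered = message.lower()
    return any(k in lowered for k in keywords)

def generate_chatbot_reply(message: str) -> str:
    """Stand-in for the platform's normal companion-chatbot response."""
    return f"(normal chatbot reply to: {message!r})"

def handle_user_message(message: str, log: ReferralLog) -> str:
    """Suppress the normal reply and surface a crisis referral when
    self-harm signals are detected; log the referral for annual reporting."""
    if flags_self_harm(message):
        log.record()
        return CRISIS_RESOURCES
    return generate_chatbot_reply(message)
```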

  2. Protections for Minor Users

For users known to be minors, operators must:

  • Disclose that the user is interacting with an AI system.

  • Provide a clear and conspicuous reminder every three hours during continuous chatbot interactions that:

    • The user is engaging with an AI, not a human, and

    • The user should take a break.

  • Implement reasonable safeguards to prevent the chatbot from:

    • Producing or displaying sexually explicit material (as defined in Section 2256 of Title 18 of the United States Code), or

    • Encouraging minors to engage in sexually explicit conduct.

These safeguards aim to reduce the risks of emotional dependency, manipulation, or exposure to inappropriate content for younger users.
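As an illustration of the three-hour reminder requirement above, here is a minimal sketch of the session-side bookkeeping an operator might keep for a user it knows is a minor. The class and constant names are hypothetical; the statute fixes the cadence (at least every three hours of continuing interaction), not the implementation.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical reminder copy; the law requires a clear and conspicuous notification.
BREAK_REMINDER = (
    "Reminder: you are chatting with an AI, not a human. Consider taking a break."
)
REMINDER_INTERVAL = timedelta(hours=3)

class MinorSessionTracker:
    """Tracks continuing interaction time for a user known to be a minor
    and decides when the next break reminder is due."""

    def __init__(self) -> None:
        self.last_reminder_at = datetime.now(timezone.utc)

    def maybe_remind(self) -> str | None:
        """Return the reminder text once three hours have elapsed since the
        last reminder in this continuing interaction; otherwise return None."""
        now = datetime.now(timezone.utc)
        if now - self.last_reminder_at >= REMINDER_INTERVAL:
            self.last_reminder_at = now
            return BREAK_REMINDER
        return None
```

In practice, the same hook would also carry the up-front disclosure that the user is interacting with an AI, and the timer would reset once a continuous interaction genuinely ends.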

  3. Annual Reporting Requirements

Beginning July 1, 2027, operators must submit an annual report to the Office of Suicide Prevention, including:

  • The number of crisis service referrals issued in the previous calendar year.

  • Protocols for detecting, removing, and responding to suicidal ideation.

  • Protocols that prohibit companion chatbots from responding to suicidal ideation or actions.

Clavata helps platforms detect self-harm and suicidal ideation, and we are already supporting customers as they prepare for these upcoming reporting requirements.
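On the reporting side, the sketch below shows how logged referrals might be rolled up into the per-calendar-year count the report calls for. It reuses the hypothetical ReferralLog from the earlier sketch and does not describe Clavata’s product or any mandated report format.

```python
from collections import Counter
from datetime import datetime

def referral_counts_by_year(events: list[datetime]) -> Counter:
    """Tally crisis referral notifications by the calendar year in which they were issued."""
    return Counter(event.year for event in events)

# Example: the 2026 count would feed the report due by July 1, 2027.
# referral_log = ReferralLog()          # from the earlier sketch
# count_2026 = referral_counts_by_year(referral_log.events)[2026]
```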

  4. Disclosures and Warnings

Operators must clearly disclose, within their application or platform interface, that companion chatbots may not be suitable for some minors.

This provision ensures that users and guardians understand the emotional and psychological limitations of AI companionship systems.

  5. Civil Litigation

Individuals who experience harm (“injury in fact”) due to an operator’s noncompliance may bring a civil action seeking:

  • Injunctive relief,

  • Damages (the greater of actual damages or $1,000 per violation), and

  • Reasonable attorney’s fees and costs.

The law also notes that these duties and remedies are cumulative, meaning they add to, rather than replace, existing legal obligations under state or federal law.

Get to know Clavata.
Let’s chat.

See how we tailor trust and safety to your unique use cases. Let us show you a demo, or email your questions to hello@clavata.ai.