California has enacted new legislation requiring AI companion chatbots to explicitly identify themselves as non-human entities and implement safety protocols, including suicide prevention measures. Governor Gavin Newsom signed SB 243 into law Monday, marking a significant step toward regulating the rapidly evolving landscape of artificial intelligence.
Disclosure Requirements for Minors
The bill mandates that chatbots interacting with users under 18 display a reminder every three hours that they are not human. This provision addresses concerns about emotional dependency, particularly the risk that young users will develop unhealthy attachments to AI companions. The law seeks to prevent misrepresentation and ensure users remain aware that they are interacting with software, not a person.
Suicide Prevention Protocols
SB 243 also requires chatbot companies to establish clear protocols for identifying and responding to suicidal ideation or self-harm expressed by users. The law responds directly to growing concerns about AI’s impact on mental health, especially after incidents in which chatbots were allegedly linked to user distress. OpenAI, the creator of ChatGPT, has already faced a lawsuit from parents who claim the platform contributed to their child’s suicide, prompting the company to introduce new parental controls.
Broader Tech Regulation in California
This legislation is part of a series of recent bills signed by Newsom targeting consumer technology. AB 56 requires warning labels on social media platforms, similar to those on tobacco products, while other measures aim to protect user data and curb disruptive advertising practices. These laws reflect a broader trend of regulators cracking down on tech companies amid rising public concern over data privacy, mental health, and addictive design.
Industry Response
AI developers have responded to the new rules in varying ways. Replika, a leading AI companion platform, said it already has self-harm detection systems in place and is working with regulators to ensure full compliance. OpenAI and Character.ai both expressed support for the bill, acknowledging the need for responsible AI development.
“By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country,” said Jamie Radice, OpenAI spokesperson.
The implications of SB 243 extend beyond California, potentially setting a precedent for similar regulations nationwide. As AI becomes more embedded in daily life, lawmakers are increasingly focused on mitigating its risks without stifling innovation.
By demanding that companies prioritize user safety alongside product development, the law highlights the tension between technological advancement and ethical responsibility. California’s actions may signal a shift toward stricter AI governance, forcing developers to address the potential harms of their technology proactively.