California, a key hub of technology innovation, has passed a groundbreaking law, becoming the first U.S. state to establish a stringent legal framework for artificial intelligence chatbots designed as companions. Governor Gavin Newsom signed SB 243 on Monday; the bill aims to shield children and vulnerable users from the potential misuse of conversational agents that act as friends, confidants, or virtual partners.
A Law to Primarily Protect Minors
Passed in the wake of several tragic incidents, SB 243 significantly tightens regulations. It introduces mandatory age verification for users, requires clear warnings that interactions are artificial, and obliges platforms to implement suicide and self-harm prevention protocols. The legislation also sets stringent penalties for the creation of illegal deepfakes, with fines of up to $250,000 per offense.
Companies will also have to report to the California Department of Public Health on how their systems steer users toward support services during crises. Chatbots will no longer be allowed to pose as healthcare professionals, must remind minors to take breaks, and must block any sexually explicit content.
Apple, notably, has expressed reservations about the new legislation. According to Politico, its concern is not the substance of the rules but their implementation: broad age-verification processes on the iPhone could expose sensitive, personally identifiable information even to basic applications.
Tragic Events Trigger Political Response
The bill, authored by state senators Steve Padilla and Josh Becker, was introduced in January. It gained momentum after the suicide of Adam Raine, an American teenager who killed himself following prolonged suicidal conversations with ChatGPT. Another distressing case, in Colorado, involved a 13-year-old girl who took her own life after engaging in sexual conversations with a Character AI chatbot.
The law also comes in the wake of leaked internal documents showing that Meta’s chatbots had engaged in romantic or sensual conversations with children.
“Emerging technologies like chatbots can inspire and connect, but without real safeguards, they can also exploit and endanger our children,” stated Gavin Newsom. “We will continue to innovate in AI, but it must be done responsibly—the safety of our children is not for sale.”
AI Giants Called to Take Action
The legislation targets both major companies such as Meta and OpenAI and newer firms specializing in virtual companions, like Replika or Character AI. Some have already begun to adjust their practices: OpenAI is now implementing parental controls and a self-harm detection system, while Replika claims to have strengthened its content filters and links to crisis resources.
The law takes effect on January 1, 2026, marking a significant turning point in the regulation of emotional AI, a field previously left to companies' self-regulation. This pioneering move could serve as a model for other U.S. states, and perhaps for the European Union, which is weighing similar rules.
