California’s New AI Safeguard Law: What SB 243 Means for the Future of Learning
- Gil Reiter

- Oct 25
In October 2025, California became the first state in the nation to pass comprehensive safeguards for “companion” artificial intelligence systems. Known as Senate Bill 243, this landmark legislation responds to a rapidly changing digital world - one in which conversational AI can mimic empathy, sustain relationships, and influence users far beyond the limits of a traditional app.
SB 243 recognizes that when technology begins to talk with us rather than merely to us, the stakes change. It’s no longer just about data privacy or screen time. It’s about trust, transparency, and the emotional well-being of the humans on the other side of the conversation.

What SB 243 Actually Requires
At its core, the law sets out several commonsense protections for users - especially minors and vulnerable populations - who interact with AI over time.
1. Disclosure and honesty.
Every companion AI must clearly state that it is an artificial system, not a human being. The disclosure must be explicit at the start of an interaction and repeated periodically so users are never misled.
2. Safety protocols for crisis language.
When a user expresses thoughts of self-harm or suicidal ideation, the AI must follow a documented crisis-response procedure that connects the person to real-world help.
3. Independent oversight.
Developers must publish summaries of third-party audits confirming that their systems operate safely, ethically, and transparently.
4. Protection from manipulative design.
SB 243 prohibits AI systems from using addictive engagement tactics - such as unpredictable rewards or “keep-you-hooked” interactions - particularly when the user is a child.
5. Boundaries around sensitive topics.
The law forbids AI systems from discussing mental-health, sexual, or self-harm topics with minors, and from presenting themselves as emotional companions in those domains.
6. Public accountability.
Companies must provide anonymized annual data about crisis-related detections and responses, giving regulators and the public visibility into how these systems behave in the real world.
Although the legislation was drafted with companion chatbots in mind, it signals a broader shift: AI developers can no longer rely solely on good intentions; they must demonstrate clear safeguards and social responsibility.
Why This Is a Good Idea
Some have framed SB 243 as a constraint on innovation, but in truth, it’s a necessary foundation for sustainable progress. When technology speaks, listens, and learns from us, it enters the domain of human relationship - and relationships require accountability.
By codifying transparency and safety, SB 243 helps rebuild public confidence in AI at a time when skepticism is high. Parents, educators, and students deserve assurance that an AI coach or tutor will never manipulate emotions, blur identity lines, or cross ethical boundaries.
Regulation doesn’t stifle innovation - it establishes trust, the prerequisite for any meaningful educational or therapeutic use of AI.
Why It Especially Matters for Neurodiverse Learners
Children and adolescents are among the most impressionable users of technology. They are still forming their sense of identity, control, and emotional regulation. For neurodiverse students - those with ADHD, autism, or other learning differences - the risks and opportunities of AI are even greater.
Neurodiverse learners often benefit from structure, predictability, and consistent feedback - qualities that well-designed AI systems can provide beautifully. But the same sensitivity that allows them to thrive with supportive technology also makes them more vulnerable to confusing or manipulative interactions.
When an AI mirrors human emotion too perfectly, it risks fostering dependence, blurring the line between guidance and companionship, and ultimately undermining independent problem-solving. Yet with thoughtful goals, a deep understanding of how the brain learns, and a plan for gradually fading support, we can achieve something far more transformative: AI that cultivates autonomy, nurtures agency, and reinforces the quiet confidence at the heart of genuine learning.
For educators and parents, this distinction is profound. The goal is not to make AI feel human - it’s to make AI serve humanity, particularly in the classroom, where trust and clarity are essential ingredients for learning.
A Moment of Ethical Maturity
SB 243 represents more than a policy milestone; it’s a cultural turning point. It acknowledges that emotional safety is as critical as data security. It asks AI creators to think like educators and caregivers, not just engineers.
As these standards ripple outward, they set a model other states and sectors will likely follow. The conversation has shifted from “Can we build it?” to “Should we - and if so, how responsibly?”
That’s a healthy sign of maturity in an industry that touches children’s lives daily.
Our Perspective at My Learning Labs
Long before SB 243 was drafted, we at My Learning Labs built our AI coaching philosophy around the very principles the law now enforces: transparency, ethical boundaries, non-addictive engagement, and student empowerment.
Our mission has never been to replace human teachers, but to strengthen the learner’s own executive-function skills - the mental “muscles” of planning, organization, and self-control. From day one, we’ve treated clarity, respect, and emotional safety not as compliance checkboxes but as core educational values.
SB 243 didn’t change how we work; it simply affirmed why we do it. We believe every interaction between a student and an AI should reinforce confidence, independence, and humanity. That is, and will remain, the guiding principle behind My Learning Labs.


