California SB-53 Targets Big AI—AgeTech Should Care Too

On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53, the "Transparency in Frontier Artificial Intelligence Act" (TFAIA). The law targets companies developing massive AI systems—specifically those using computing power equivalent to 10^26 operations and generating over $500 million in annual revenue.

At first glance, this law seems irrelevant to companies in most industries, AgeTech included. Your care coordination platform, fall detection system, or medication management app is not coming anywhere close to meeting these thresholds. But three realities make SB-53 matter anyway.

SB-53 regulates "large frontier developers," meaning companies at the absolute cutting edge of AI capability. "Frontier" refers to the most advanced AI systems being developed today, the ones pushing the boundaries of what's technically possible.

Currently, this captures approximately 10-15 companies worldwide, including OpenAI (ChatGPT), Anthropic (Claude), Google DeepMind (Gemini), Meta (Llama), and a handful of other tech giants with the resources to train models at this scale. These are companies meeting both criteria:

  1. They've trained AI models using at least 10^26 operations of computing power

  2. They and their affiliates exceeded $500 million in annual revenue in the preceding year

To put that computing threshold in perspective: 10^26 operations is roughly equivalent to every person on Earth (about 8 billion people) performing one calculation per second for 400 million years. It's the amount of computational work needed to train systems like GPT-4 or Claude.
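If you want to sanity-check that comparison, a few lines of back-of-envelope Python (using rough round numbers for world population and seconds per year) land right at the 10^26 mark:

    # Quick back-of-envelope check of the comparison above (rough approximations)
    people = 8e9               # ~8 billion people
    seconds_per_year = 3.15e7  # ~31.5 million seconds in a year
    years = 4e8                # 400 million years
    total_operations = people * seconds_per_year * years
    print(f"{total_operations:.2e}")  # ~1.01e+26, i.e., on the order of 10^26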

You're probably asking yourself: "In what universe does this law apply to my company?!"

1. You're Building on Regulated Foundations

If your AgeTech company builds on APIs from OpenAI, Anthropic, Google, or similar providers, those vendors are now subject to SB-53's transparency requirements.

The law requires large frontier developers to "write, implement, and clearly and conspicuously publish on its internet website a frontier AI framework" describing their approach to safety, including "cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer" and procedures for "identifying and responding to critical safety incidents." (Section 22757.12)

Your AI vendors will become more cautious and transparent as they implement compliance infrastructure. Expect more detailed terms of service, more frequent safety updates, and potentially higher costs or restrictions on use cases.

2. The Thresholds Will Move

SB-53 requires the California Department of Technology to annually "assess recent evidence and developments relevant to the purposes of this chapter and shall make recommendations about whether and how to update" the definitions of "frontier model," "frontier developer," and "large frontier developer" to "accurately reflect technological developments, scientific literature, and widely accepted national and international standards." (Section 22757.14)

The legislation's findings explicitly acknowledge that "in the future, foundation models developed by smaller companies or that are behind the frontier may pose significant catastrophic risk, and additional legislation may be needed at that time." (Section 1, finding (n))

As AI becomes more accessible and computing power cheaper, today's frontier becomes tomorrow's standard practice. The regulatory floor will rise to meet the technology.

3. The Standards Will Shape Market Expectations

SB-53 establishes transparency practices that will influence what customers expect from all AI systems:

Transparency reports:

Before deploying new frontier models, developers must publish reports including "the intended uses of the frontier model," "any generally applicable restrictions or conditions on uses," and "summaries" of "assessments of catastrophic risks" and their results. (Section 22757.12(c))

This means companies like OpenAI, Anthropic, and Google must publicly document what their AI systems are designed to do, what they're prohibited from doing, and what safety testing they've conducted before release. For example, a transparency report might specify that a model is intended for general-purpose text generation but restricted from providing instructions for creating weapons, along with summaries of testing that evaluated whether the model could bypass these restrictions.

These reports create a public record that regulators, researchers, and customers can review, establishing baseline expectations for what "responsible AI deployment" looks like.

Whistleblower protections:

The law prohibits frontier developers from preventing employees from disclosing information about safety risks and requires large frontier developers to provide "a reasonable internal process through which a covered employee may anonymously disclose information" about safety concerns, with "a monthly update to the person who made the disclosure regarding the status of the large frontier developer's investigation." (Labor Code Section 1107.1(e))

This establishes that AI companies must create cultures where employees can raise safety concerns without fear of retaliation. It matters because the people most likely to spot dangerous AI capabilities or safety shortcuts are the engineers and researchers working directly with these systems.

By protecting and institutionalizing these reporting channels at the frontier AI level, SB-53 signals that safety culture isn't optional—it's a regulatory expectation that will likely extend to healthcare-related AI as regulations evolve.

Incident reporting:

Frontier developers must report "critical safety incidents" to the California Office of Emergency Services within 15 days, or within 24 hours if the incident "poses an imminent risk of death or serious physical injury." (Section 22757.13(c))

When families are selecting monitoring systems for aging parents, or senior living communities are evaluating care coordination platforms, they'll increasingly expect this level of transparency and accountability—regardless of company size.

The Regulatory Context

SB-53 isn't isolated. While the federal government has largely opposed comprehensive AI regulation, states are moving forward aggressively. In 2025, all 50 states (plus D.C., Puerto Rico, and the U.S. Virgin Islands) introduced AI-related legislation, and 38 states adopted or enacted approximately 100 measures across topics like government use, health, workforce, consumer protection, and transparency.

For AgeTech companies—whether two employees or two hundred—this is the moment to adopt AI governance as part of responsible product development. Not because SB-53 requires it, but because trust depends on it.

Where to start? Standards like ISO/IEC 42001 (AI management systems) and ISO/IEC 42005 (AI system impact assessment) provide scaffolding you can use today. They don’t need to be pursued immediately as certifications; instead, they offer practical structures for building processes that protect users, reassure partners, and prepare your organization for the rules already emerging across states and sectors.

Beyond standards, there are other early steps that reinforce trust:

  • Internal safety culture: even small teams can designate someone responsible for monitoring AI risks and user concerns.

  • Documentation discipline: keeping lightweight records of design choices, testing outcomes, and limitations builds habits that scale with the company (see the short sketch after this list).

  • Engagement with users and caregivers: inviting feedback not only surfaces blind spots but signals openness and accountability.

  • Peer benchmarking: watching how other AgeTech and healthtech firms communicate their safeguards can help you anticipate market expectations.

None of these require a compliance department. They simply extend the scaffolding that standards provide into practical ways of showing—internally and externally—that safety and dignity are built into your product’s DNA.
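As one illustration of the documentation-discipline point above, here is a minimal sketch of what such a lightweight record could look like. The structure, field names, and example values are hypothetical and chosen purely for illustration; nothing in SB-53 or the ISO standards prescribes this format.

    # A minimal, hypothetical record for logging AI-related design decisions
    # and testing outcomes. Field names and example values are illustrative only.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIChangeRecord:
        when: date
        component: str                  # which AI feature or model is affected
        decision: str                   # what was changed or chosen, and why
        testing_summary: str            # what was evaluated and the outcome
        known_limitations: list[str] = field(default_factory=list)

    record = AIChangeRecord(
        when=date(2025, 10, 1),
        component="medication-reminder intent classifier",
        decision="Switched to a smaller hosted model to reduce response latency",
        testing_summary="Re-ran 120 caregiver phrasings; accuracy within 1% of baseline",
        known_limitations=["Not yet evaluated on non-English phrasings"],
    )

Whether you keep such records in code, a spreadsheet, or a shared document matters far less than keeping them consistently.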

Users, families, caregivers and care providers will not draw fine distinctions between “frontier” and “non-frontier” when evaluating whether to trust an AI system in care. AgeTech+AI thrives on trust. And trust is built not just on features, but on the safety and accountability your company demonstrates from day one.
