Utah's SB 226 & AgeTech: A Comprehensive Guide
Ezra Schwartz © 2025
What is Utah's SB 226?
Utah's Senate Bill 226, the "Artificial Intelligence Consumer Protection Amendments," will take effect May 7, 2025 (Section 9). This landmark legislation creates specific requirements for any business using generative AI in consumer interactions, with significant implications for the AgeTech sector serving older adults.
Relevance to AgeTech AI-Powered Products and Services
1. Definition of Generative AI
The law specifically defines generative AI as AI systems that:
Are data-trained: The system must be built on training data
Simulate human conversation: The AI must engage with consumers through text, audio, or visual communication methods
Generate autonomous content: The system produces non-scripted outputs resembling human-created content with minimal human oversight
(Section 13-75-101(4), lines 44-53)
This broad definition encompasses many AgeTech solutions including virtual companions, medication reminders, telehealth interfaces, and customer service chatbots designed for older adults.
2. Disclosure Requirements
When implementing generative AI tools for older adults, AgeTech companies must:
Respond truthfully to direct inquiries: If a user specifically asks whether they are interacting with AI or a human, the company must disclose the truth (Section 13-75-103(1), lines 98-102)
Proactively disclose high-risk interactions: Companies must prominently inform users whenever AI is involved in collecting sensitive personal information or providing consequential advice, such as health, financial, or legal guidance
(Section 13-75-101(5), lines 54-67 and Section 13-75-103(2), lines 106-112)
These disclosure requirements are particularly critical for AgeTech, as older adults may have different expectations around human vs. AI interactions in sensitive contexts.
3. No AI Defense for Violations
This pivotal section of Utah's SB 226 eliminates a potential corporate defense strategy by making it impossible to shift blame to the AI system when violations occur:
What It Means in Practice: Companies cannot escape legal responsibility by claiming "the AI did it, not us."
Three Specific Scenarios Where This Applies:
AI-generated false statements: If your AI assistant provides inaccurate or misleading information to an older adult
AI actions violating consumer laws: If your AI system takes actions that breach consumer protections
AI-assisted violations: If you leverage AI technology to facilitate violations of consumer protection laws
Real-World Example for AgeTech: Consider an AI medication management system that incorrectly advises an older adult about prescription dosing or interactions. The company cannot defend itself by arguing, "Our engineers didn't make that recommendation—the AI system did." The law establishes that the company bears full responsibility for its AI's outputs.
Why This Matters for Vulnerable Populations: Older adults may be particularly susceptible to misleading information due to varying levels of technological familiarity. This provision ensures AgeTech companies implement rigorous safeguards and oversight for their AI systems rather than treating them as separate entities.
Bottom Line for Compliance: Companies must treat their AI systems as extensions of their business operations, applying the same standards, review processes, and accountability measures as they would for human employees.
(Section 13-75-102, lines 90-95)
4. Safe Harbor Provision
The law provides an important compliance pathway that can shield companies from enforcement:
A business is protected from enforcement actions related to disclosure requirements if their generative AI system:
Makes clear disclosures at the beginning: The AI must identify itself at the start of any consumer transaction or regulated service
Maintains transparency throughout: The AI must consistently remind users of its non-human nature during the entire interaction
Uses explicit identification language: The AI must clearly state that it is: (1) generative artificial intelligence, (2) not human, or (3) an artificial intelligence assistant
(Section 13-75-104(1), lines 119-127)
For AgeTech companies, this safe harbor offers a straightforward compliance strategy—implementing consistent, clear AI identification practices throughout all user interactions.
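In practice, the safe harbor can be implemented by wrapping every AI response so the session opens with an explicit identification and reminders recur as the conversation continues. The sketch below is illustrative only: `generate_reply` is a hypothetical stand-in for the real model call, and the reminder interval is a design choice, not a statutory number. The statutory disclosure language itself should be confirmed with counsel.

```python
# Illustrative safe-harbor disclosure wrapper (Section 13-75-104(1)).
# `generate_reply` is a hypothetical stand-in for the actual model call.

AI_DISCLOSURE = (
    "I am an artificial intelligence assistant, not a human. "
    "Let me know if you have questions about how I work."
)
REMINDER_EVERY = 5  # remind every N turns (a design choice, not from the statute)


def generate_reply(user_message: str) -> str:
    """Placeholder for the real generative-AI call."""
    return f"(model response to: {user_message})"


class DisclosedSession:
    """Chat session that discloses AI identity at the start and periodically after."""

    def __init__(self) -> None:
        self.turn = 0

    def respond(self, user_message: str) -> str:
        self.turn += 1
        reply = generate_reply(user_message)
        if self.turn == 1 or self.turn % REMINDER_EVERY == 0:
            return f"{AI_DISCLOSURE}\n\n{reply}"
        return reply


# Usage: the first reply begins with the disclosure.
session = DisclosedSession()
first_reply = session.respond("Hello")
```

Anchoring the disclosure in the session layer, rather than in individual prompts, keeps the identification consistent regardless of what the underlying model generates.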
5. Enforcement and Penalties
The law establishes a comprehensive enforcement framework:
Legal classification: Violations constitute breaches of Utah's Consumer Sales Practices Act (Section 13-75-105(1), line 134)
Administrative penalties: The Division of Consumer Protection can impose fines up to $2,500 per individual violation (Section 13-75-105(4)(a), lines 144-145)
Potential court actions: Legal proceedings may result in injunctions, disgorgement of profits, victim compensation, and additional per-violation fines (Section 13-75-105(5), lines 147-154)
Escalated penalties for non-compliance: Companies violating administrative or court orders face elevated penalties up to $5,000 per violation (Section 13-75-105(7)(a), lines 160-162)
For AgeTech companies serving numerous older adults, these per-violation penalties could accumulate rapidly, making compliance a financial imperative beyond just ethical considerations.
Action Items for AgeTech Companies
1. Audit Your AI Systems
Conduct a comprehensive inventory of all customer-facing AI applications, identifying which ones meet the definition of generative AI under Section 13-75-101(4). This should include virtual assistants, chatbots, voice interfaces, and any AI systems that communicate directly with older adults.
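An audit like this can start as a simple structured inventory, with one record per customer-facing system and one boolean per prong of the statutory definition. The sketch below is an assumption about how such a record might be laid out; the field names and example systems are hypothetical, while the three criteria mirror Section 13-75-101(4).

```python
# Illustrative inventory record for auditing customer-facing AI systems.
# The three criteria mirror Section 13-75-101(4); field names are assumptions.
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    name: str
    is_data_trained: bool          # built on training data
    simulates_conversation: bool   # interacts via text, audio, or visuals
    generates_autonomously: bool   # non-scripted output, minimal human oversight

    def in_scope(self) -> bool:
        """True if the system meets all three prongs of the statutory definition."""
        return (
            self.is_data_trained
            and self.simulates_conversation
            and self.generates_autonomously
        )


# Hypothetical inventory: a generative chatbot is in scope; a fully
# scripted phone menu is not.
inventory = [
    AISystemRecord("medication-reminder-chatbot", True, True, True),
    AISystemRecord("scripted-ivr-menu", False, True, False),
]
in_scope_systems = [r.name for r in inventory if r.in_scope()]
```

Even a lightweight inventory like this gives legal and product teams a shared artifact to review, and it becomes the natural input to the documentation processes discussed later.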
2. Update User Interfaces
Redesign your interfaces to incorporate clear, age-appropriate AI disclosures as outlined in Section 13-75-104(1). These should be visually prominent, use plain language, and appear consistently at the start of and throughout all AI interactions to secure safe harbor protection.
3. Enhance Transparency
Implement technical capabilities ensuring your AI systems can recognize and truthfully answer when users ask if they're interacting with AI or a human, as required by Section 13-75-103(1). This may require modifying conversational flows and training models to recognize various phrasings of this question.
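Recognizing the many ways a user might ask "am I talking to a person?" is itself a design task. A minimal keyword-pattern sketch follows; the phrase list is an assumption and far from exhaustive, and a production system would more likely use a trained intent classifier.

```python
# Illustrative detector for "am I talking to AI?" inquiries (Section 13-75-103(1)).
# The pattern list is an assumption, not exhaustive; production systems would
# typically use a trained intent classifier instead.
import re

IDENTITY_PATTERNS = [
    r"are you (a |an )?(human|person|real person|bot|robot|ai|computer)",
    r"am i (talking|speaking|chatting) (to|with) (a |an )?(human|person|bot|machine|ai)",
    r"is this (a |an )?(human|person|bot|ai)",
]

TRUTHFUL_ANSWER = "I am an artificial intelligence assistant, not a human."


def asks_about_identity(message: str) -> bool:
    """Return True if the message appears to ask whether the speaker is AI or human."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in IDENTITY_PATTERNS)


def respond(message: str) -> str:
    """Answer identity questions truthfully; otherwise continue the conversation."""
    if asks_about_identity(message):
        return TRUTHFUL_ANSWER
    return "(normal conversational reply)"
```

Whatever detection approach is used, it should be tested against the varied, often indirect phrasings older adults actually use, which is one more reason for the usability testing recommended below in item 7.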
4. Re-evaluate Risk Levels
Systematically assess your AgeTech applications against the "high-risk AI interactions" criteria in Section 13-75-101(5). Pay particular attention to systems handling health data, providing personalized recommendations, or offering advice that could influence important decisions for older adults.
5. Train Staff
Develop comprehensive training programs explaining these new liability standards to all team members involved in AI development, implementation, and oversight. Emphasize that Section 13-75-102 establishes full company responsibility for AI actions and outputs.
6. Document Compliance
Create robust documentation processes that record how your systems meet the disclosure requirements in Sections 13-75-103 and 13-75-104. This documentation will be essential if your compliance is ever questioned by regulators.
7. Test with Older Users
Conduct specialized usability testing with older adults to verify that your AI disclosures are easily understood across varying levels of technological literacy and cognitive abilities. This is especially critical for high-risk interactions covered by Section 13-75-103(2) where misunderstandings could have serious consequences.
8. Establish an AI Governance Structure
Implement a formal AI governance framework with clear lines of responsibility and accountability:
Executive oversight: Designate C-suite responsibility for AI compliance
Cross-functional committee: Create a team spanning legal, product, engineering, and ethics to review AI applications
Documentation protocols: Establish processes for maintaining records of decisions about AI design, training, and deployment
Regular review cadence: Schedule recurring meetings to assess compliance with SB 226 requirements
Risk assessment framework: Develop a standardized methodology for evaluating AI systems against regulatory requirements
Incident response plan: Create procedures for addressing potential violations or consumer complaints about AI interactions
Compliance monitoring: Implement ongoing testing and verification of AI system disclosures and behaviors
This governance structure will ensure systematic, organization-wide implementation of the requirements in SB 226 while creating clear accountability for AI system compliance.
By taking these proactive steps well before the May 2025 effective date, AgeTech companies can ensure both legal compliance and continued trust among the older adults they serve.
If you need help implementing these and similar AI requirements (the Colorado AI Act, the EU AI Act, etc.), please reach out for personalized guidance.