AI Compliance in Telemedicine and Health Tech: What the 2026 Regulatory Landscape Actually Looks Like
The regulatory environment for artificial intelligence in telemedicine has shifted from policy drafting to active enforcement. Multiple state AI laws took effect on January 1, 2026. Federal agencies have issued substantial new guidance. The first major lawsuits are moving through the courts. And penalty structures — state and federal — are now fully operational.
This is a data-backed breakdown of where things stand, what cases and regulations matter most, and where enforcement is heading for the rest of 2026. If you're a clinician building a practice, a health tech founder, or anyone deploying AI in a healthcare setting, this is the compliance picture you need to understand.
The Numbers Driving Regulatory Attention
Regulatory urgency follows market scale, and the numbers here are hard to ignore.
$26.1 billion — global AI-in-telemedicine market value in 2025, projected to surpass $175 billion by 2034 (Precedence Research)
$160 billion+ — total global telemedicine market in 2025, on track to exceed $800 billion within a decade
1,250+ FDA-authorized AI-enabled medical devices as of mid-2025, up from 950 the year prior (Bipartisan Policy Center)
7 million+ controlled-substance prescriptions issued via telemedicine without a prior in-person visit in 2024 alone (HHS)
250+ healthcare AI bills introduced across 34+ states by mid-2025 (Manatt Health)
That last figure is the one that reframes the compliance conversation entirely. This is no longer a federal-only issue. It's a state-by-state map — and if you're running a multi-state telehealth practice, you're navigating multiple regulatory frameworks simultaneously, determined by where your patients sit, not where you are.
Federal Regulatory Developments
FDA: Risk-Based Oversight of AI-Enabled Software
In January 2026, the FDA published guidance clarifying that low-risk AI-enabled software tools and consumer wearables generally fall outside medical device regulation — specifically when clinicians retain the ability to independently review the AI's clinical recommendations. High-risk products that diagnose or treat disease without clinician intermediation remain subject to full premarket oversight. (Telehealth.org)
The operative phrase is "independently review." If a licensed clinician is evaluating the AI's output, applying their own judgment, and making the clinical decision — the tool likely sits outside FDA device regulation under current guidance. If the workflow looks more like automated approval with a provider signature, the analysis changes significantly.
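What does "independently review" look like operationally? Here is a minimal sketch in Python, assuming a hypothetical EHR integration (the names AIOutput, ReviewRecord, and commit_to_chart are illustrative, not any vendor's API). The design point: AI-drafted content cannot be finalized without an explicit, attributable clinician decision.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIOutput:
    encounter_id: str
    suggestion: str       # the model's draft assessment or plan
    model_version: str

@dataclass
class ReviewRecord:
    encounter_id: str
    reviewer_npi: str     # the licensed clinician who evaluated the output
    decision: str         # "accepted", "modified", or "rejected"
    final_text: str       # what actually enters the chart
    reviewed_at: str      # ISO 8601 timestamp, for the audit trail

def commit_to_chart(output: AIOutput, reviewer_npi: str,
                    decision: str, final_text: str) -> ReviewRecord:
    """Refuse to finalize AI-drafted content without an explicit clinician decision."""
    if decision not in {"accepted", "modified", "rejected"}:
        raise ValueError("Clinician must record an explicit review decision.")
    return ReviewRecord(
        encounter_id=output.encounter_id,
        reviewer_npi=reviewer_npi,
        decision=decision,
        final_text="" if decision == "rejected" else final_text,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )
```

If your workflow cannot produce a record like this for every encounter, it likely resembles the "automated approval with a provider signature" pattern described above.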
The American Hospital Association reinforced this distinction in a December 2025 letter to the FDA, arguing that clinical decision support tools excluded under the 21st Century Cures Act should not be pulled into new AI measurement requirements. (AHA Letter) The tension between "decision support" and "decision-making" is precisely where enforcement risk concentrates.
The FDA also finalized cybersecurity guidance in February 2026, requiring "secure by design" development, Software Bills of Materials (SBOMs), and embedded threat modeling for AI-enabled devices.
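For teams that have not produced one before: an SBOM is a machine-readable inventory of everything a piece of software ships with. Below is a minimal CycloneDX-style fragment assembled in Python; the component names and versions are placeholders, not a real device's dependency list.

```python
import json

# Minimal CycloneDX-style SBOM fragment. All names and versions below are
# placeholder examples, not an actual device's dependencies.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {"type": "library", "name": "inference-runtime", "version": "2.4.1"},
        {"type": "library", "name": "dicom-parser", "version": "1.9.0"},
        {"type": "machine-learning-model", "name": "triage-classifier", "version": "0.7.3"},
    ],
}

print(json.dumps(sbom, indent=2))
```

CycloneDX 1.5 added a machine-learning-model component type, which is why AI-enabled devices increasingly pair a traditional software inventory with a model inventory.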
DEA Telehealth Prescribing: Extended Through 2026
On January 2, 2026, HHS and the DEA jointly extended telemedicine prescribing flexibilities through December 31, 2026 — the fourth such extension. Patients can continue receiving controlled-substance prescriptions without a prior in-person visit while permanent regulations, including the proposed Special Registration for Telemedicine, are finalized. (HHS)
The data driving this decision: when similar flexibilities lapsed in September 2025, fee-for-service telemedicine visits dropped 24% almost immediately. In 2024, over 7 million controlled-substance prescriptions were issued through telehealth without an in-person encounter.
The strategic consideration: these are temporary flexibilities. If your practice model — particularly in MAT, psychiatry, or pain management — depends on them, build your compliance infrastructure as though the stricter permanent rules are already in effect.
HIPAA Security Rule Revision
The proposed HIPAA Security Rule update, published in January 2025, would explicitly establish that ePHI used in AI training data, prediction models, and algorithm development is protected under HIPAA. It's on the HHS regulatory agenda for finalization by mid-2026, with implementation expected within 180 days.
Key provisions include mandatory annual compliance audits, comprehensive asset inventories and network mapping, required multi-factor authentication and encryption, and 24-hour breach notification requirements for business associates. (Healthcare Law Insights)
HIPAA remains technology-neutral by design — there is no "AI section." The Privacy Rule's minimum necessary standard, the Security Rule's full safeguards, and the requirement for Business Associate Agreements all apply to AI systems exactly as they apply to any technology processing PHI. The update makes this more explicit and adds enforcement mechanisms. (HIPAA Journal | Foley & Lardner)
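In practice, the minimum necessary standard means stripping a record down before it ever reaches an AI vendor. A minimal sketch, assuming an illustrative field allow-list (the field names and the minimum_necessary helper are hypothetical):

```python
# Illustrative allow-list: what a hypothetical triage-support task actually needs.
ALLOWED_FIELDS = {"age_band", "chief_complaint", "vitals", "current_medications"}

def minimum_necessary(record: dict) -> dict:
    """Strip a record down to the fields the AI task actually requires."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "name": "Example Patient",          # never needed by the model
    "mrn": "0000000",                   # never needed by the model
    "age_band": "40-49",
    "chief_complaint": "persistent cough",
    "vitals": {"bp": "128/82", "hr": 74},
    "current_medications": ["lisinopril"],
}

payload = minimum_necessary(record)     # name and MRN never leave your environment
```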
The Federal Preemption Question
In late 2025, the White House issued an executive order directing the Attorney General to form an AI Litigation Task Force to challenge state AI laws deemed inconsistent with federal policy. Colorado's AI Act was specifically cited. (Akerman LLP)
This creates genuine uncertainty: states are actively legislating healthcare AI while the federal government signals it may challenge those laws. But no state AI law has been invalidated to date, an executive order cannot by itself preempt state statutes, and all existing state requirements remain enforceable. Plan for the most restrictive standards you face.
State-Level AI Regulation: Where the Real Compliance Risk Lives
With Congress yet to pass comprehensive AI legislation, states have become the primary regulatory actors. Several major laws took effect on January 1, 2026, creating a multi-jurisdictional compliance landscape with distinct enforcement mechanisms and penalty structures.
California
AB 489 (effective January 1, 2026): Prohibits AI developers and deployers from using language, titles, or design elements implying the AI holds a healthcare license. Your chatbot cannot call itself a "virtual physician" or use design cues suggesting licensure. Enforcement through healthcare professional licensing boards.
SB 243 (effective January 1, 2026): Regulates "companion chatbots" with mandatory AI disclosure, self-harm prevention protocols, and crisis service referrals. Establishes a private right of action — individuals can sue for up to $1,000 per violation (or actual damages if higher), plus attorney's fees. (Nixon Law Group)
Texas
TRAIGA, the Texas Responsible Artificial Intelligence Governance Act (effective January 1, 2026): Requires conspicuous written disclosure of AI use in diagnosis or treatment before or at the time of patient interaction. Attorney General enforcement with penalties of $10,000–$200,000 per violation, plus $2,000–$40,000 per day for ongoing non-compliance.
SB 1188 (effective September 1, 2025): Permits AI for diagnostic or treatment purposes only when the practitioner personally reviews all AI-generated content before clinical decisions.
Illinois
The Wellness and Oversight for Psychological Resources Act (effective August 1, 2025) prohibits AI from making independent therapeutic decisions, directly interacting with clients in therapeutic communication, or generating treatment plans without licensed-professional review. AI is limited to administrative and supplementary functions only. (Manatt Health)
Colorado
Colorado AI Act (enforcement begins June 30, 2026): Mandates disclosure when AI is used in high-risk decisions, annual impact assessments, anti-bias controls, and record retention for at least three years. AG enforcement under unfair-trade-practices law.
New York
AI Companion Law (effective November 4, 2025): AG can impose civil penalties up to $15,000 per day per violation.
The multi-state takeaway: if you see patients in any of these states via telehealth, these laws apply to you based on patient location. Build your compliance to the most restrictive applicable standard.
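One way to operationalize that: encode each state's obligations and take the union across every state where you see patients. The sketch below is a deliberately simplified condensation of the statutes above, for illustration only; the flags are not exhaustive and this is not legal advice.

```python
# Simplified AI-compliance flags keyed to patient location. Each entry
# condenses the statutes discussed above into illustrative booleans.
STATE_AI_RULES = {
    "CA": {"disclose_ai": True, "no_licensure_language": True, "private_right_of_action": True},
    "TX": {"disclose_ai": True, "clinician_reviews_all_output": True},
    "IL": {"ai_admin_only_for_therapy": True},
    "CO": {"disclose_ai": True, "annual_impact_assessment": True, "retain_records_years": 3},
    "NY": {"companion_ai_rules": True},
}

def obligations_for(patient_states: set[str]) -> dict:
    """Union of requirements across every state where patients are located."""
    merged: dict = {}
    for state in patient_states:
        merged.update(STATE_AI_RULES.get(state, {}))
    return merged

print(obligations_for({"CA", "TX"}))  # a practice serving California and Texas patients
```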
The Lawsuits Defining AI Liability in Healthcare
Sharp HealthCare: Ambient AI and Patient Consent
This is the case that should be on every clinician's radar. In November 2025, patient Jose Saucedo filed a proposed class action against Sharp HealthCare in San Diego, alleging the health system deployed Abridge's ambient clinical documentation tool to record doctor-patient conversations without consent. (KPBS | Medscape)
The allegations are significant on multiple levels. Audio of clinical encounters was transmitted to a third-party vendor's cloud servers, creating potential unauthorized disclosure under California's Confidentiality of Medical Information Act. AI-generated clinical notes reportedly contained false attestations that patients had been "advised" and had "consented" to the recording — when no consent had actually been obtained. And the health system lacked a functional deletion-on-demand process for vendor-held audio data.
Plaintiffs' attorneys estimate over 100,000 patient encounters were recorded during the rollout. California Penal Code § 637.2 permits statutory damages of $5,000 per violation. Legal analysts at Fisher Phillips have noted that when vendors publicly list their healthcare system customers, they effectively hand plaintiffs' firms pre-built class definitions. (Fisher Phillips)
If you're using any ambient AI scribe — Abridge, Nabla, DeepScribe, Nuance DAX — this case puts the entire category on notice. The legal theory is that AI-driven transcription constitutes electronic eavesdropping if proper consent isn't obtained, regardless of whether a human ever "listens."
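A minimal sketch of what encounter-level consent capture can look like for an ambient scribe, assuming a hypothetical AmbientScribeSession wrapper (no real vendor's SDK works exactly this way). The design point, in light of the Sharp allegations: the consent record is created only by an affirmative patient response, never auto-populated, and recording is blocked without it.

```python
from datetime import datetime, timezone

class AmbientScribeSession:
    """Hypothetical wrapper gating recording on documented, per-encounter consent."""

    def __init__(self, encounter_id: str):
        self.encounter_id = encounter_id
        self.consent = None
        self.recording = False

    def record_consent(self, patient_response: str, obtained_by_npi: str) -> None:
        """Only an explicit 'yes' from the patient creates a consent record."""
        if patient_response.strip().lower() != "yes":
            self.consent = None
            return
        self.consent = {
            "encounter_id": self.encounter_id,
            "obtained_by": obtained_by_npi,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }

    def start_recording(self) -> None:
        if self.consent is None:
            raise PermissionError("No documented consent; recording is blocked.")
        self.recording = True
```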
Algorithmic Coverage Denials
UnitedHealth, Humana, and Cigna face class actions alleging they deployed the nH Predict AI model to override physician recommendations and deny Medicare Advantage coverage despite a known high error rate. In Estate of Gene B. Lokken v. UnitedHealth Group, federal courts in Minnesota and Kentucky allowed claims to proceed in 2025. A January 2026 motion to compel discovery signals the litigation is intensifying. (Georgetown Litigation Tracker)
AI Chatbot Wrongful Death Claims
Multiple families have filed wrongful death actions against AI companies — including Raine v. OpenAI in California and Character Technologies cases in Colorado and New York — alleging chatbots encouraged minors to self-harm. These are considered potential bellwether decisions for generative AI liability in consumer-facing health contexts. (Law360)
HIPAA and AI: The Gap Between Perceived and Actual Compliance
A persistent issue across the industry: organizations assume their AI vendor "handles" HIPAA compliance. A Business Associate Agreement is necessary but not sufficient.
Three AI-specific risk categories require attention:
Re-identification risk. Researchers have demonstrated that AI can restore patient identities from de-identified health data — undermining the framework HIPAA relies on for permissible secondary use. The de-identified health data market currently exceeds $9 billion.
Governance gaps. Approximately 50% of healthcare organizations lack a formal approval process for AI adoption. Only 31% actively monitor their deployed AI systems. Staff entering PHI into consumer-grade tools without BAAs is a live compliance exposure. (Foley Hoag)
Vendor data practices. Beyond the BAA, organizations need to understand what data AI vendors retain, for how long, whether it's used for model training, and who on the vendor side can access it.
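Those three categories lend themselves to a simple automated check. Here is a sketch, with illustrative system names and control flags, that surfaces exactly the gaps the survey data describes: deployed systems with no approval record, no monitoring, or no BAA.

```python
# Illustrative registry of deployed AI systems and their governance controls.
systems = [
    {"name": "ambient-scribe", "approved": True,  "monitored": False, "baa": True},
    {"name": "triage-chatbot", "approved": False, "monitored": False, "baa": False},
]

def governance_gaps(systems: list[dict]) -> list[str]:
    """Flag any deployed system missing an approval record, monitoring, or a BAA."""
    findings = []
    for s in systems:
        for control in ("approved", "monitored", "baa"):
            if not s[control]:
                findings.append(f"{s['name']}: missing '{control}'")
    return findings

for finding in governance_gaps(systems):
    print(finding)
```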
HIPAA civil penalties reach $50,000 per violation, even for violations an organization did not know about. Criminal penalties for knowing violations range from one to ten years of imprisonment and fines up to $250,000.
The "Digital Kickback" Theory
An emerging enforcement development that deserves more attention: federal agencies are now treating AI software that steers patients toward higher-margin products or services not as a neutral tool, but as a potential kickback mechanism embedded in platform architecture. This theory blends Anti-Kickback Statute liability, FDCA misbranding, and consumer protection claims into a single enforcement vector. (White & Case)
In November 2025, executives were convicted in a $100 million scheme using deceptive advertising and auto-refill technology to distribute controlled substances. A parallel $2.7 million genetic testing fraud case involved falsified Medicare enrollment documents. In both cases, the technology architecture was integral to the fraud theory.
Healthcare fraud enforcement did not slow down in 2025 despite expectations of deregulation. It intensified. If you're building algorithms that influence prescribing, product selection, or treatment pathways — and there are financial incentives embedded in those algorithms — regulators will make the connection.
Six Predictions for the Rest of 2026
1. Ambient AI consent litigation will proliferate. The Sharp case is a replicable litigation model. Vendor customer lists serve as pre-built class definitions. Expect similar filings in every two-party consent state.
2. State attorneys general will be the primary enforcement actors. Colorado enforcement begins June 30. Texas has TRAIGA. California has licensing board jurisdiction. State-level enforcement will outpace federal action this year.
3. The HIPAA Security Rule revision will advance. On the regulatory calendar for mid-2026. Even if delayed, the direction is set: explicit AI risk assessment requirements, tighter vendor rules, enhanced technical controls.
4. Documented human-in-the-loop review becomes the primary legal defense. Across FDA guidance, state laws, and litigation risk, the ability to demonstrate that a licensed clinician independently evaluated AI output before clinical action is the single most protective compliance measure available.
5. Algorithmic bias auditing requirements will expand. Colorado's annual impact assessment mandate will serve as a template for other states. Organizations should implement bias monitoring before they're required to.
6. MSO–PC structures will face AI-specific scrutiny. Where management services organizations supply AI tools to physician-owned professional corporations, the allocation of AI compliance responsibility between entities will attract regulatory attention — particularly at the intersection of corporate practice of medicine doctrine and AI governance.
What to Do Now
Conduct an AI tool inventory. Document every AI system in use — clinical, administrative, patient-facing. Map data access, vendor identity, and BAA status. (A structured sketch of one inventory record follows this list.)
Implement encounter-level consent. For ambient AI documentation, obtain explicit verbal consent at the start of each visit. Ensure chart documentation reflects actual patient authorization — not auto-populated language.
Build to the most restrictive standard. Federal requirements are the floor. Identify the most restrictive state law applicable to your patient population and calibrate compliance to that.
Strengthen vendor contracts. BAAs need AI-specific provisions: data retention limits, model training restrictions, deletion timelines and verification, and vendor personnel access controls.
Document clinician review at every encounter. Create a traceable workflow showing licensed-provider independent evaluation of AI output before clinical action. This is your primary defense under every applicable framework.
Maintain active legislative monitoring. The Manatt Health AI Policy Tracker and Georgetown Health Care Litigation Tracker are two of the strongest free resources available.
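Here is the structured inventory record referenced above, combining the inventory, vendor-contract, and clinician-review items into a single sketch. The AIToolRecord fields are illustrative, not a standard schema; adapt them to your own stack.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                     # e.g., "ambient-scribe"
    category: str                 # "clinical", "administrative", or "patient-facing"
    vendor: str
    baa_in_place: bool
    phi_accessed: list[str]       # which data elements the tool can see
    model_training_on_phi: bool   # does the vendor train on your data?
    deletion_verified: bool       # can you prove vendor-side deletion?
    clinician_review_required: bool

inventory = [
    AIToolRecord(
        name="ambient-scribe", category="clinical", vendor="example-vendor",
        baa_in_place=True, phi_accessed=["audio", "visit notes"],
        model_training_on_phi=False, deletion_verified=False,
        clinician_review_required=True,
    ),
]

# Any record with baa_in_place=False or deletion_verified=False is an open
# compliance item, not a footnote.
```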
The Bottom Line
The regulatory environment for AI in telemedicine has moved from development to enforcement. Laws are live. Lawsuits are active. Penalties are real and operational.
For clinician-led practices and health technology companies, AI compliance is a structural requirement of operating in 2026 — not a future consideration. Organizations that build compliance infrastructure now — documented clinician review, layered multi-state compliance, robust vendor governance — will carry lower risk and stronger market positioning as regulatory expectations continue to tighten.
Camino Strategy Group helps clinicians build compliant private practices and navigate healthcare regulatory complexity. Learn more at privatepracticestack.io.

