LegisTrack
S. 2997 · 119th Congress · In Committee

Right to Override Act

Introduced: Oct 9, 2025
Sponsor: Sen. Markey, Edward J. [D-MA]
Standard Summary

The Right to Override Act would shield health care professionals from adverse employment actions when they override AI/CDSS (artificial intelligence clinical decision support systems) outputs in the course of patient care. It requires covered entities (employers, facilities, health plans, and others) to adopt policies that preserve clinicians’ independent judgment, allow timely overrides, and enable feedback on AI/CDSS performance. The bill establishes enforcement through HHS (via the Office for Civil Rights) for Title I and through the Department of Labor for Title II, plus a private right of action. It also mandates training, a governance structure (an AI/CDSS committee), and educational materials, along with whistleblower protections and state enforcement options. Overall, the bill aims to normalize clinician overrides of AI recommendations while imposing standards on how AI/CDSS are used and monitored.

Its impact would include stronger protection for clinicians who rely on their professional judgment over algorithmic outputs, potentially altering how health systems deploy AI tools. Institutions would need formal policies, governance structures, and ongoing training, and they could face penalties for retaliating against clinicians who override AI outputs. The bill also contains detailed provisions on privacy for override data, restrictions on arbitration, and state-level enforcement, and it does not preempt existing state laws or agreements.

Key Points

  1. Policies for using and overriding AI/CDSS: Covered entities must adopt policies that ensure AI outputs are not treated as an unchallengeable substitute for clinicians’ independent judgment, allow timely overrides when appropriate, enable feedback on AI outputs (including bias or inaccuracy), and prohibit sharing override data that could identify individual clinicians or groups. Entities must also inform clinicians and their representatives about the policy and provide training on usage, override circumstances, development inputs, and potential AI biases. An AI/CDSS committee (with labor/employee representation) must be established to advise on policies and meet quarterly.
  2. Enforcement and penalties: Federal agencies enforce violations (HHS OCR for Title I; the Department of Labor for Title II), with civil monetary penalties and the possibility of injunctive relief. A private right of action allows individuals adversely affected by a violation to sue in federal court, with damages (including potential treble damages), statutory damages for specific violations, and attorney’s fees. Pre-dispute arbitration agreements cannot be enforced against claims arising under the title.
  3. Adverse employment actions and whistleblower protections: The bill defines a broad set of adverse actions (termination, suspension, demotion, punitive scheduling, denied promotions or benefits, loss of privileges, reassignment, etc.). It prohibits adverse actions against clinicians who override AI/CDSS outputs and protects individuals who file complaints, seek assistance, participate in investigations, or discuss possible violations.
  4. Education, state roles, and regulatory framework: The Secretary of HHS must develop and disseminate educational materials for covered entities and clinicians within one year. The bill contemplates regulations developed in coordination with other federal agencies (including the Departments of Labor and Justice, the EEOC, and the NLRB) and permits state enforcement actions (parens patriae), with the possibility of federal intervention. State actions are designed to complement, not preempt, federal enforcement.
  5. Definitions and scope: “AI/CDSS” includes systems that produce predictions or recommendations based on clinical guidelines or data-derived models, including unsupervised learning. “Covered entity” includes any party that employs or engages health care professionals (including facilities and health plans), as well as those who grant admitting privileges and others involved in care decisions. The bill also clarifies that it does not bar malpractice claims from being pursued when an override occurs.

Impact Areas

Primary group/area affected: Health care professionals (clinicians who may override AI/CDSS outputs) and the covered entities that employ or engage them (hospitals, clinics, health systems, ancillary facilities, and health plans). These groups would face new policy, governance, training, and oversight requirements, along with protections against retaliation for overriding AI outputs.

Secondary group/area affected: Patients and patient care decisions, which may benefit from clinicians maintaining independent judgment and weighing considerations beyond AI outputs. The bill also touches on patient-informed decision processes in override situations.

Additional impacts: AI/CDSS developers and vendors, who may need to align their tools with the policy framework, provide transparency about inputs and limitations, and support feedback mechanisms. State regulators and federal agencies (HHS OCR, DOL, EEOC, DOJ, NLRB) would have expanded roles in enforcement and rulemaking. Covered entities could face penalties and compliance costs, and disputes over AI-driven care could increasingly be resolved through civil actions, including statutory damages.
Generated by gpt-5-nano on Oct 23, 2025