Introduction: AI in Healthcare - Promise and Peril
The Transformation of Medicine
In a hospital room in Boston, a radiologist reviews a chest X-ray. The image analysis AI has flagged a subtle nodule in the right upper lobe—a finding the human eye might have missed in a routine scan. Biopsy confirms early-stage lung cancer, caught years before symptoms would have appeared. The patient receives treatment and survives.
Across the country in Phoenix, an 82-year-old woman recovering from hip surgery receives a letter. Her insurance company's AI system has determined that she no longer requires skilled nursing care. Her surgeon disagrees, but the algorithm doesn't ask the surgeon. It calculates a predicted recovery date based on diagnosis codes and demographic data, and that date has passed. She must leave the facility or pay $600 per day out of pocket.
These two scenes capture the duality of AI in healthcare: technology with extraordinary power to help and to harm, often deployed with minimal oversight, rarely understood by the patients whose lives it touches.
The Scale of Healthcare AI
Artificial intelligence has penetrated every corner of healthcare delivery. This isn't a future projection—it's present reality:
Diagnostic AI now assists physicians in radiology, pathology, dermatology, ophthalmology, and cardiology. Algorithms read medical images, flag abnormalities, and even provide preliminary interpretations. The FDA has authorized more than 500 AI/ML-enabled medical devices, with authorizations accelerating each year.
Clinical Decision Support systems embedded in electronic health records recommend medications, dosages, diagnostic tests, and treatment pathways. Sepsis prediction algorithms monitor vital signs and trigger early warnings. Drug interaction checkers prevent dangerous prescriptions.
Claims Processing AI at insurance companies determines whether procedures require prior authorization, whether claims meet medical necessity standards, and how much to pay. These algorithms process millions of decisions per day—most never reviewed by a human.
Administrative AI handles scheduling, coding, documentation, and patient communications. AI scribes listen to patient encounters and generate clinical notes. Chatbots answer patient questions and triage symptoms.
Research AI accelerates drug discovery, identifies clinical trial candidates, and analyzes genomic data to match patients with targeted therapies.
The global healthcare AI market reached $28.4 billion in 2025, with projections exceeding $187 billion by 2030. But market size tells only part of the story. What matters is that AI systems now influence—and sometimes determine—who receives care, what kind of care they receive, and whether their insurance covers it.
Why Healthcare AI Ethics Is Different
Every industry deploying AI faces ethical challenges. Healthcare presents unique considerations that elevate the stakes:
Life-and-Death Consequences
A product recommendation algorithm that makes a poor suggestion costs someone money or time. A diagnostic algorithm that misses a tumor costs someone their life. A claims algorithm that wrongly denies coverage forces patients to forgo treatment or face financial ruin.
This isn't hypothetical. Real harms have occurred:
In 2023, a class action lawsuit alleged that UnitedHealthcare's naviHealth nH Predict algorithm was used to deny elderly patients skilled nursing care. The algorithm predicted a standard recovery timeline for each diagnosis, and when patients exceeded the predicted days, coverage was terminated—even when treating physicians recommended continued care. According to the complaint, roughly 90 percent of the algorithm's denials that patients appealed were overturned, yet it was used to cut coverage for patients who desperately needed it.
In 2024, reporting revealed that Cigna physicians were denying claims in bulk—reportedly handling 50 cases in 10 seconds—without opening patient files. An algorithm flagged claims for denial, and the physician review was perfunctory at best.
These aren't edge cases. They represent systematic practices at major insurers, enabled by AI systems designed to reduce costs, not to serve patients.
Vulnerability of the Affected Population
Healthcare AI doesn't affect people at random. It affects them when they're sick, scared, and dependent on systems they don't control. Patients facing serious illness have limited ability to shop for alternatives, negotiate with insurers, or navigate appeals processes. They rely on physicians who are themselves constrained by prior authorization requirements and insurance coverage decisions.
The vulnerability extends to specific populations disproportionately affected by AI limitations:
Patients with darker skin tones face AI systems trained predominantly on images of lighter-skinned patients. Dermatology AI shows accuracy gaps of 15-20% for conditions like melanoma in Black patients. Pulse oximeters—not AI per se but illustrative of the problem—have been found to overestimate oxygen levels in Black patients, leading to delayed treatment during COVID-19.
Patients in rural and community settings are served by AI trained on data from academic medical centers. Practice patterns, patient populations, and available resources differ significantly—but the algorithms don't adjust.
Non-native English speakers struggle with AI chatbots and symptom checkers that don't account for language variation, cultural differences in describing symptoms, or the simple need for translation.
Elderly patients are particularly affected by algorithms that predict recovery timelines based on younger patient data. Recovery takes longer with age, but many algorithms don't adequately account for this biological reality.
Regulatory Complexity
Healthcare is already one of the most regulated industries in America. AI must navigate existing frameworks that weren't designed with algorithms in mind:
HIPAA governs patient data privacy. AI training, deployment, and monitoring must comply with privacy and security rules rooted in a statute enacted in 1996—an era when "artificial intelligence" meant science fiction. The regulatory framework hasn't caught up, leaving gray areas around de-identification, cloud processing, and third-party AI services.
FDA regulates medical devices, and many AI tools qualify as Software as a Medical Device (SaMD). But the regulatory framework assumes devices remain static after approval—a poor fit for AI systems that learn and adapt.
State insurance regulations govern claims processing, prior authorization, and medical necessity determinations. Arizona's HB 2175 represents a new wave of AI-specific requirements in this space.
Anti-discrimination laws prohibit discrimination in healthcare settings, but enforcement agencies are only beginning to examine how algorithms might produce discriminatory outcomes.
State AI laws like Colorado's SB 205 classify AI systems that make or substantially influence healthcare decisions as high-risk, triggering additional compliance obligations.
For healthcare organizations, compliance means navigating all of these requirements simultaneously—often with limited guidance on how they interact.
The Trust Imperative
The physician-patient relationship is built on trust. Patients share intimate information with their doctors because they believe it will be used to help them. When AI intervenes in that relationship—often invisibly—trust can erode.
Patient surveys consistently show concern about AI in healthcare:
- 89% want to know when AI is involved in their care
- 76% want the option to request human-only review
- 68% are concerned about AI errors in diagnosis
- 81% believe doctors should have final say over AI recommendations
These aren't technophobic responses. Patients are appropriately cautious about technology that affects their health without their knowledge or consent. Healthcare organizations that deploy AI without transparency risk damaging the trust that makes care possible.
Who This Track Serves
This learning track is designed for healthcare professionals across the industry:
Healthcare Administrators managing AI deployment must understand both the opportunities and the compliance requirements. This includes hospital executives, practice managers, and health system leaders making decisions about AI adoption.
Clinical Staff using AI tools in practice need to understand how these systems work, what their limitations are, and when to override algorithmic recommendations. This includes physicians, nurses, and allied health professionals.
Health IT Professionals implementing and maintaining AI systems must ensure technical compliance with security, privacy, and regulatory requirements.
Compliance Officers navigating regulatory requirements need frameworks for applying HIPAA, FDA, state insurance laws, and emerging AI regulations.
Payer Staff working for insurance companies or third-party administrators must understand the ethical and legal requirements for AI in claims processing.
What You'll Learn
By completing this track, you will be able to:
- Apply HIPAA requirements to AI systems — You'll understand how privacy and security rules affect AI training, deployment, and monitoring.
- Navigate FDA regulation — You'll know when AI qualifies as a regulated medical device and what that requires.
- Implement the Arizona HB 2175 model — You'll be able to design claims processing systems with appropriate human oversight.
- Build ethical clinical decision support — You'll understand the principles of alert design, explainability, and bias monitoring.
- Create organizational governance — You'll have templates and frameworks for healthcare AI ethics programs.
The Ethical Framework
Throughout this track, we apply five core principles adapted from medical ethics to the AI context:
| Principle | Traditional Meaning | AI Application |
|---|---|---|
| Beneficence | Act in the patient's best interest | AI must improve patient outcomes |
| Non-maleficence | First, do no harm | AI must not cause injury through errors or bias |
| Autonomy | Respect patient self-determination | Patients must understand and consent to AI involvement |
| Justice | Fair distribution of benefits and burdens | AI must serve all populations equitably |
| Transparency | Honest communication | AI decisions must be explainable to patients and clinicians |
These principles aren't abstract ideals. They provide concrete guidance for the decisions you'll face: Should we deploy this AI tool? How do we configure it? What oversight is required? When must we disclose AI involvement to patients?
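One way some organizations make these questions operational is to encode the pre-deployment review as a structured checklist. The sketch below is a minimal Python illustration: the principle names mirror the table above, but the specific questions, the `ReviewItem` structure, and the pass/fail logic are assumptions made for illustration, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class ReviewItem:
    """One pre-deployment review question tied to an ethical principle."""
    principle: str
    question: str
    evidence: str = ""       # note or link documenting how the question was answered
    satisfied: bool = False  # marked True only after governance review

# Illustrative checklist: the principles come from the table above,
# but these specific questions are examples, not a mandated list.
CHECKLIST = [
    ReviewItem("Beneficence",     "Is there evidence the tool improves outcomes in our patient population?"),
    ReviewItem("Non-maleficence", "Have error rates and bias been measured across patient subgroups?"),
    ReviewItem("Autonomy",        "Are patients told when and how the tool influences their care?"),
    ReviewItem("Justice",         "Was the training data representative of the populations we serve?"),
    ReviewItem("Transparency",    "Can clinicians see why the tool made a given recommendation?"),
]

def unresolved(items: list[ReviewItem]) -> list[ReviewItem]:
    """Return review items that still lack a documented, satisfactory answer."""
    return [item for item in items if not (item.satisfied and item.evidence)]

if __name__ == "__main__":
    for item in unresolved(CHECKLIST):
        print(f"[open] {item.principle}: {item.question}")
```

In practice, the same structure could feed a governance committee's agenda, with each open item assigned an owner before the tool goes live.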
Before You Proceed
Prepare for this track by taking inventory of AI in your environment:
- What AI tools are used in clinical care? (diagnostic aids, decision support, risk scoring)
- What AI is embedded in your EHR? (order recommendations, clinical pathways, documentation)
- What AI affects claims and coverage? (prior authorization, medical necessity, coding)
- What AI interacts with patients? (chatbots, patient portals, symptom checkers)
- What AI do your vendors use that you may not see directly?
This inventory is your starting point. You cannot govern what you cannot see, and you cannot make ethical choices about AI you don't know exists.
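One lightweight way to begin is to capture each system as a structured record that clinical, compliance, and IT staff can all read. The sketch below is a hypothetical Python example; the field names and the single sample entry are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    """One row in a healthcare AI inventory. Fields are illustrative, not a standard."""
    name: str
    vendor: str
    category: str             # e.g. "clinical decision support", "claims", "patient-facing"
    decision_influenced: str  # what care or coverage decision the system touches
    phi_used: bool            # does it process protected health information?
    human_review: str         # "always", "on exception", or "none"
    owner: str                # who in the organization is accountable for it

# Hypothetical sample entry; a real inventory would list every system found.
inventory = [
    AISystemRecord(
        name="Sepsis early-warning score",
        vendor="(EHR vendor)",
        category="clinical decision support",
        decision_influenced="triggers rapid-response evaluation",
        phi_used=True,
        human_review="always",
        owner="CMIO office",
    ),
]

# Export so the inventory can be shared with compliance and governance teams.
print(json.dumps([asdict(record) for record in inventory], indent=2))
```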