AI in Healthcare: Ethical Questions Leaders Must Ask

As Artificial Intelligence (AI) rapidly advances in clinical care, healthcare leaders must consider its ethical implications to ensure AI investments align with institutional values and prioritize outcomes, fairness, and patient trust. Before committing to AI technologies, it is essential to ask critical questions to prevent unintended harm.

Rick Newell, MD, MPH

Chief Transformation Officer

Published March 19, 2025

Why Ethics Matter

Despite AI’s potential to enhance diagnostic accuracy, improve workflows, and mitigate workforce shortages, it poses significant risks. Numerous studies have demonstrated how biased algorithms can worsen disparities, especially for minorities, women, and underserved populations.

One case reported in JAMA Network Open involved a chronic disease management algorithm prioritizing patients based on prior healthcare utilization. Because Black patients historically have less access to care, they had fewer healthcare encounters even when they were just as sick as white patients. As a result, Black patients were systematically overlooked for potentially life-saving interventions.
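
To see how a utilization-based proxy can encode this kind of disparity, consider a minimal, hypothetical simulation; the group labels, effect sizes, and cutoff below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Two groups with identical underlying illness severity, but group B has
# historically faced access barriers and so generates fewer encounters.
illness = rng.normal(loc=5.0, scale=1.0, size=n)     # true clinical need
group_b = rng.random(n) < 0.5                        # group membership

# Utilization tracks illness, minus an assumed access penalty for group B.
access_penalty = np.where(group_b, 2.0, 0.0)
utilization = illness - access_penalty + rng.normal(0.0, 0.5, size=n)

# An algorithm ranking patients by utilization (a proxy for need) fills
# its "high-risk" program slots mostly from group A.
cutoff = np.quantile(utilization, 0.90)
selected = utilization >= cutoff

for name, mask in [("Group A", ~group_b), ("Group B", group_b)]:
    equally_sick = illness[mask] >= 6.0              # same illness level
    rate = selected[mask][equally_sick].mean()
    print(f"{name}: {rate:.1%} of equally sick patients flagged as high-risk")
```

Even though both groups are equally sick by construction, the proxy systematically ranks one group lower, which is exactly the failure mode described above.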

It's imperative that healthcare leaders ensure that AI supports equitable, patient-centered care and that physicians play an integral role in AI assessment, selection, and implementation.

Key Ethical Questions to Ask Before Investing in AI

AI can be a powerful tool for innovation, but only if deployed responsibly. Administrators need to ask the right questions—not just about performance and cost but also about ethics, transparency, and impact.

Here are some critical considerations before investing in AI.

1. Who Will Benefit — and Who Might Be Harmed?

Examine how AI impacts different patient populations and understand that making an algorithm more accurate for one group may inadvertently make it less accurate for another. These tradeoffs should be explicitly discussed with input from diverse stakeholders, including patient advocates and frontline clinicians.

2. What Data Is Being Used to Train AI?

AI systems reflect the data on which they are trained. If training datasets underrepresent specific populations, the algorithm’s predictions will be less accurate for those groups. Demand transparency from vendors about the demographics included in AI training — including race, ethnicity, gender, socioeconomic status, and age.
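
One concrete way to act on this is to report model performance by subgroup rather than relying on a single aggregate metric. Below is a minimal sketch using scikit-learn; the scores, outcomes, and group labels are hypothetical.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical validation set: model scores, true outcomes, demographics.
df = pd.DataFrame({
    "y_true":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "y_score": [0.9, 0.2, 0.8, 0.4, 0.3, 0.1, 0.3, 0.5, 0.4, 0.2],
    "group":   ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# A single aggregate AUC can mask poor performance on one group.
print(f"Overall AUC: {roc_auc_score(df['y_true'], df['y_score']):.2f}")

for name, sub in df.groupby("group"):
    auc = roc_auc_score(sub["y_true"], sub["y_score"])
    print(f"Group {name}: n={len(sub)}, AUC = {auc:.2f}")
```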

3. How Transparent Is the Algorithm?

Many AI systems function as “black boxes,” making it difficult for clinicians and administrators to understand how recommendations are made. Vendors should provide clear documentation of how the algorithm works, what data it considers, and what safeguards are in place to avoid bias. Explainability tools should be required to show clinicians why a recommendation was made.
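
As one illustration of the kind of transparency worth requiring, global feature-importance reports show which inputs drive a model's outputs. Here is a minimal sketch using scikit-learn's permutation importance on synthetic data; the feature names are hypothetical stand-ins.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a clinical risk model; features are hypothetical.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
features = ["age", "lab_value", "prior_visits", "vital_trend"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Libraries such as SHAP and LIME go further by explaining individual predictions, which is closer to the per-recommendation explanations clinicians need at the point of care.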

4. How Will We Monitor for Bias Over Time?

Bias is not a one-time problem; it can emerge over time as algorithms learn from new clinical data and their behavior drifts. AI must be subject to continuous monitoring to detect and mitigate emerging biases. Establishing regular audits, combined with feedback from clinicians and patients, can help catch these issues early.
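
A recurring audit can be as simple as recomputing subgroup metrics each reporting period and alerting when they fall below a validated baseline. The sketch below assumes recall as the metric and a fixed alert threshold; both are illustrative choices to adapt locally.

```python
from sklearn.metrics import recall_score

BASELINE_RECALL = {"A": 0.85, "B": 0.84}  # assumed values from validation
TOLERANCE = 0.05                           # assumed alert threshold

def audit_subgroups(y_true, y_pred, groups):
    """Compare this period's per-group recall against the baseline."""
    alerts = []
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        recall = recall_score([y_true[i] for i in idx],
                              [y_pred[i] for i in idx])
        if BASELINE_RECALL[g] - recall > TOLERANCE:
            alerts.append(f"Group {g}: recall {recall:.2f} dropped more "
                          f"than {TOLERANCE} below baseline")
    return alerts

# Hypothetical month of outcomes: the model now misses more group B cases.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
for alert in audit_subgroups(y_true, y_pred, groups):
    print(alert)
```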

5. What Safeguards Are in Place for Human Oversight?

AI should augment, not replace, human decision-making. Clinicians must remain actively involved in interpreting AI recommendations and making final decisions. Clear protocols should be established for resolving conflicts between human clinical judgment and algorithmic recommendations. Because the clinician is ultimately responsible for the care delivered, there must be an accountability and feedback framework for AI “mistakes.”
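
One way to make that accountability concrete is to log every AI recommendation alongside the clinician's final decision, so overrides and errors can be reviewed later. The record structure below is a hypothetical illustration, not a standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """Hypothetical audit entry pairing an AI recommendation with the
    clinician's final decision, so overrides and errors can be reviewed."""
    patient_id: str
    ai_recommendation: str
    ai_confidence: float
    clinician_decision: str
    override_reason: Optional[str] = None
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def overridden(self) -> bool:
        return self.clinician_decision != self.ai_recommendation

# Example: the clinician disagrees with the model and documents why.
record = DecisionRecord(
    patient_id="MRN-0001",
    ai_recommendation="discharge",
    ai_confidence=0.91,
    clinician_decision="admit",
    override_reason="Abnormal vitals not captured in model inputs",
)
print("Override logged:", record.overridden)
```

Reviewing override patterns over time can surface both model failure modes and automation bias.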

6. Are Patients Informed and Consenting to AI Use?

Transparency doesn’t stop with clinicians — patients have a right to know when AI influences their care. Research suggests that patients often want more information when AI is involved, especially around how decisions are made and how AI compares to human clinicians. Patients need to be informed when AI is used and given meaningful opportunities to opt out or request alternative approaches.

7. What Is the Long-Term Cost and Impact?

While AI is often framed as a cost-saving tool, its long-term financial and operational impacts are more complex. Maintaining AI systems requires continuous monitoring, retraining, and oversight, all of which add to long-term costs. It is important to evaluate whether these investments will sustainably improve patient outcomes and system efficiency.

8. Will Patient and Institutional Privacy Be Protected?

AI requires vast amounts of data to function effectively, creating potential privacy risks. AI solutions must comply with HIPAA and other privacy regulations while also setting clear policies on how data is collected, stored, and used. Vendors should demonstrate that they follow the principle of data minimization, retaining only the data necessary for system functionality and for the shortest possible time.

For example, Sayvant, an AI-powered clinical documentation tool, exemplifies this approach by retaining only the minimum necessary data and only for the duration needed to process clinician notes for treatment, payment, and operations.
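
As a hypothetical sketch of how a retention window like this can be enforced mechanically (the schema and 30-day window are illustrative assumptions, not any vendor's actual implementation):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumed policy: keep draft notes only as long as needed

conn = sqlite3.connect("notes.db")
conn.execute("""CREATE TABLE IF NOT EXISTS draft_notes (
    id INTEGER PRIMARY KEY,
    note_text TEXT,
    created_at TEXT)""")

# Purge anything older than the retention window on every run.
cutoff = (datetime.now(timezone.utc)
          - timedelta(days=RETENTION_DAYS)).isoformat()
deleted = conn.execute(
    "DELETE FROM draft_notes WHERE created_at < ?", (cutoff,)).rowcount
conn.commit()
print(f"Purged {deleted} expired draft notes")
```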

Practical Recommendations for AI Adoption

  • Collaborate with diverse stakeholders — Engage developers, clinicians, patient advocates, and community members throughout the AI lifecycle to address ethical concerns early and often.
  • Demand vendor transparency — Work only with vendors who openly share data sources, algorithm design processes, and bias mitigation efforts.
  • Develop internal ethical governance policies — Establish clear guidelines for AI selection, deployment, and monitoring, including specific bias audits and dispute resolution processes.
  • Educate staff and patients — Train clinicians on AI capabilities and limitations while ensuring patients have clear, accessible information about how AI influences their care.

Conclusion

As healthcare leaders, we are all responsible for ensuring that AI enhances equity, quality, and patient trust rather than undermining them. We must diligently shape a future where AI serves all patients fairly and responsibly by asking the right ethical questions up front and maintaining rigorous oversight.

The path forward requires curiosity and caution—a willingness to embrace innovation and a steadfast commitment to ethical diligence. With thoughtful leadership, AI can become a tool for greater equity, transparency, and innovation in healthcare.

About Rick Newell, MD, MPH

Rick Newell, MD, MPH, is the Chief Transformation Officer for Vituity and the CEO of Inflect, Vituity’s innovation hub. In these roles, he is responsible for developing, implementing, and executing strategies, programs, and technologies that enhance and transform enterprise-wide healthcare delivery.

Dr. Newell is board-certified in clinical informatics and emergency medicine. He obtained his Master of Public Health in healthcare management from Harvard University and earned his medical degree at the State University of New York at Buffalo, where he was selected as a junior member of Alpha Omega Alpha. He completed his emergency medicine residency at Harbor-UCLA Medical Center.

As shared owners of their partnership, Vituity clinicians always look for ways to improve care. Many of Vituity's signature solutions originated at single hospitals, with frontline providers leading the way. Dr. Newell is committed to continuing this legacy of physician engagement and innovation through programming, strategic partnerships, and cultural alignment.

Learn more at vituity.com and inflect.health
