Embracing AI Wisely: A Health Plan Leader’s Guide to Benefits, Risks, and Readiness

Commentary

The authors provide a detailed framework for assessing the adoption of artificial intelligence.

We’ve entered an era of artificial intelligence (AI) mania. Health plan executives face a constant stream of vendor solutions claiming to harness AI for innovation and efficiency. At the same time, members, providers, regulators, and shareholders alike are pressing executives to pursue AI adoption while implementing it responsibly.

In some respects, health plans have been using AI for decades through activities such as data processing and predictive analytics to assess population health risks. Today, AI technologies have evolved significantly, offering new opportunities alongside emerging risks. This guide is designed to help health plan leaders navigate the AI market and drill down to what really matters for their organizations and members.

AI readiness: Risks and benefits

While AI has the potential to enhance efficiency, streamline processes, and provide valuable insights, it also comes with risks that require careful management. Leaders must consider data usage, regulatory compliance, cybersecurity vulnerabilities, and potential disruptions to workflows and workforce dynamics. AI can offer significant opportunities to reduce manual processing time, improve decision-making, and scale operations with greater agility. The following chart outlines key risks and benefits to help organizations shape a balanced AI strategy that maximizes value while mitigating potential pitfalls.

Every health plan is unique and will ultimately weigh the risks and benefits of a given use case differently. We developed a robust framework for evaluating organizational readiness, ensuring that AI solutions not only integrate seamlessly into existing workflows but also safeguard member safety and equity. 

AI readiness: Key considerations for implementation

AI adoption in healthcare isn’t one-size-fits-all. Health plans must navigate complex security frameworks, controls, and certifications to meet strict regulatory requirements at both the state and federal levels. Beyond safeguarding protected health information under the Health Insurance Portability and Accountability Act (HIPAA), AI systems must deliver safe, accurate, and policy-compliant interactions, resisting manipulation and preventing misinformation that could lead to serious consequences. Medical advice guardrails, for example, help prevent unauthorized or misleading guidance in this high-risk domain.

Implementing AI requires careful evaluation across multiple dimensions. Beyond the promise of efficiency and cost savings, AI introduces risks related to data security, compliance, clinical accuracy, and ethics. Leaders must take a structured approach to ensure that AI solutions align with organizational goals, regulatory requirements, and operational workflows.

To guide decision-making, the following framework provides key considerations across critical categories: strategic alignment, data integrity, ethical compliance, clinical validation, integration, vendor transparency, financial and legal implications, and change management. By proactively addressing these factors, health plans can develop a responsible AI strategy that maximizes benefits while mitigating risks.

Strategic alignment and business case

Key considerations

  • What problem are we solving with AI, and what are we trying to achieve (e.g., saving costs, expediting processes, improving output)?
  • How does this AI solution align with our healthcare organization's mission, values, and strategic goals?
  • Have we conducted a cost-benefit analysis to justify investment in this AI tool?
  • Does the AI system offer a clear advantage over existing workflows?
  • Are there new capabilities that the AI system would allow us to deliver?
  • What is our organization’s risk tolerance as it relates to AI implementation? Are there certain business units or functions for which we are more risk averse (e.g., member- or provider-facing functions vs. back-office functions)?

Data integrity and AI model performance

Key considerations

  • What datasets were used to train this AI model?
  • Was the AI trained on real-world claims, medical records, and/or other healthcare data sets, or was it trained on general and/or synthetic data?
  • Will the AI model be trained on plan-specific data? If so, what protections are in place to ensure confidentiality is maintained?
  • Is the model's performance validated across diverse patient populations to ensure accuracy?
  • How often is the AI updated or retrained to maintain performance as new medical data emerges?
  • Are there known limitations or biases in the model? If so, how are they addressed?
  • What cybersecurity measures are in place internally and at the AI vendor to protect against breaches?

Ethical and compliance considerations

Key considerations

  • Does the AI comply with HIPAA and other data privacy regulations?
  • Are there any current federal or state regulations that would limit our use of AI for the given function?
  • Can the vendor explain how AI makes its decisions (explainability vs. black-box AI)?
  • How do we ensure the AI system does not introduce bias in patient care recommendations?
  • Does this AI system create equity concerns, potentially disadvantaging certain patient groups?
  • Does the vendor have guardrails and third-party verification of its AI systems to ensure that AI agents and generative AI solutions communicate accurately and without errors?

Clinical validation and patient safety

Key considerations

  • Has the AI solution undergone independent, peer-reviewed validation studies?
  • What is the error rate of the AI? How does it compare to human judgment? 
  • What are the potential risks if the AI system provides incorrect or misleading recommendations?
  • How will human oversight be maintained in AI-driven decision-making?
  • Has the AI vendor conducted clinical trials or pilot studies to demonstrate safety and effectiveness?

Integration and workflow compatibility

Key considerations

  • Can the AI system be easily integrated with existing infrastructure, such as electronic health record and claims processing systems?
  • Will the AI improve workflows, or will it add complexity and burden staff?
  • How will AI fit into our current decision-making processes for physicians, nurses, and administrators?
  • Does the AI provide real-time insights, or does it require post-processing delays?

Vendor transparency and support

Key considerations

  • Is the AI vendor transparent about their technology, methodology, and training data?
  • What customer support is available for troubleshooting and continuous improvement?
  • Does the vendor offer ongoing monitoring and updates for the AI solution?
  • What is the contingency plan if the AI system fails or produces incorrect results?

Financial and legal issues

Key considerations

  • What are the total costs, including licensing, training, and maintenance?
  • What are the anticipated cost savings or other benefits? How will those be measured and monitored?
  • What are the liability concerns if AI recommendations lead to medical errors?
  • Are appropriate accountability measures in place to mitigate AI errors?
  • Do we have a clear contractual agreement with the vendor outlining data ownership, AI performance guarantees, and compliance measures?

Training and change management

Key considerations

  • How will clinicians, nurses, and staff be trained to use the AI effectively?
  • What resistance to AI adoption might we face from staff or members?
  • Do we have a change management strategy to ensure smooth implementation?
  • Will audit processes be layered on top of the AI function to monitor and correct its performance?

As health plan leaders navigate the complex landscape of AI adoption, a balanced and thoughtful approach is essential. Health plan executives should carefully weigh the benefits against the risks to make informed decisions that align with their organization’s mission and values. Prioritizing strategic alignment, data integrity, and patient safety allows health plan leaders to harness AI responsibly, driving innovation while maintaining member-centric values, trust, and ethical integrity in healthcare.

Tom Martin, M.S., MBA, is vice president, client relations at DRG Claims Management. Eric Levine, M.P.H., is an associate principal at Avalere Health. Viju Shamana is vice president of the AI lab at Ushur.

© 2025 MJH Life Sciences. All rights reserved.