The very real cybersecurity risks associated with the use of artificial intelligence demand careful attention both to known vulnerabilities and likely implications.
Joe Oleksak
About three decades of ethical hacking experience, and the hindsight that comes with them, have left me with an impression that remains instructive today. In the collective rush to embrace what was then a very new internet, even risk-averse enterprises placed a treasure trove of information online without considering the danger. It was a grossly irresponsible approach. There was no security built into the internet by design, the very notion of cybersecurity was just emerging, and for a time, any hacker worth their salt could breach most organizations at will.
Today, in a throwback to that time, we are again in the midst of a seismic shift with a technology that promises to change everything. And like the internet was then, artificial intelligence (AI) is all the rage, with the top of the hype cycle still before us. What we need isn’t more hype. What we need is more honesty about artificial intelligence.
AI is already being used in healthcare, but its application remains relatively limited. Much of what is labeled as “AI” is often actually sophisticated automation, not intelligence. Where true AI is being used and showing the most promise is behind the scenes for things like claims processing, physicians’ notes, and streamlining call centers. While we are getting glimpses of its potential to read medical images and diagnose diseases, the technology is still in its early stages. We’re still crawling, not running. Until we understand the numerous implications involved, prudence is key.
AI is powerful, but it’s not magic. Conceptually, it’s just data, math and risk. Like any data-fueled innovation or application, it demands clear and unbiased inputs, thoughtful governance and an operational framework that reflects a deep understanding of the human systems it is supposed to enhance. In the absence of these, AI will not solve healthcare’s most difficult problems; it will simply automate them. It will also create significant security and privacy issues, especially as use cases increase and we begin to realize its enormous potential.
As we navigate the very real questions that accompany AI, we have an opportunity not just to determine what it can do, but more importantly how we can wisely and ethically use it. Much like in “The Matrix,” we must learn to live with the machines. If that sounds existential, it is. The challenge with truly transformative technologies is to make sure they serve us, not the other way around.
For that to occur with AI, we must first come to terms with the cyber threats associated with it and how the reckless embrace of its capabilities can impact proprietary data, electronic protected health information (ePHI), business success and, most importantly, patient outcomes. Current AI use cases in healthcare underscore this reality.
Although we are still crawling with AI, it’s already demonstrating great promise. It is also bringing to light significant security vulnerabilities and the governance and compliance issues associated with them. Consider the following:
Doctors’ notes. AI scribes promise to dramatically decrease the time doctors spend documenting care, particularly in value-based models. For many physicians, “pajama time” spent on documentation is a significant contributor to burnout, and one that AI could help alleviate. However, ambient-listening scribes raise important security and compliance questions. Have patients consented to being recorded? What about doctors and nurses? Who is processing and storing the data? Is it going to a third-party provider? Has that provider been fully vetted for its own governance and security protocols? These are baseline cybersecurity considerations, beyond the significant cultural and operational questions leaders must also ask. Do people want to work in an environment where everything is recorded?
Claims processing. A recent incident involving a major payer’s use of an AI algorithm to review claims garnered significant attention. The algorithm resulted in dramatically higher denial rates, raising questions about its use and the motivations behind it. Was it created to purposefully and unethically increase denials, or simply to process claims faster but too rigidly? The answers remain unclear. Regardless, the episode underscores a fundamental rule of AI and the cybersecurity around it: it needs human oversight. That oversight was clearly lacking here, whether by accident or by design, and the dynamic nature of AI makes ongoing human oversight and governance crucial.
Diagnostics. We know that AI already has great potential for reading medical images and diagnosing some conditions. But AI has a significant vulnerability in how it learns and is trained. How is the training process monitored, and how would it be detected if a bad actor manipulated the data used to train a model? At a time when the ability to create convincing video, audio and text-based deepfakes is advancing, the risk of an attacker impersonating a physician in a telehealth consultation or taking control of a chatbot may seem like science fiction. It’s not. It is a very real concern.
There is no silver bullet for AI-related security risks, nor can it be forgotten that many healthcare organizations already struggle to satisfy the most basic cybersecurity standards. The dynamic nature of AI raises the stakes and calls for immediate action. Here are some basic steps leaders should take:
Hold vendors to higher standards. With the exception of the very largest healthcare organizations, AI is mostly delivered by third-party vendors, and those vendors must be vetted. Do they meet your data security, privacy and compliance requirements? Where is the data kept, and who maintains it? AI providers should be transparent about their data use, model integrity and monitoring. Don’t rely solely on their assurances; demand regular audits by an unbiased, external expert.
Take responsibility for compliance. Organizations should not try to apply a 2003 framework to 2025 problems. The Health Insurance Portability and Accountability Act (HIPAA) was written to protect healthcare data in a static, structured world, not one where data is being fed into black-box models, large language engines, or real-time AI-driven workflows. Until HIPAA catches up, healthcare leaders need to proactively build internal AI risk policies — especially around how electronic protected health information interacts with AI systems — and avoid assuming compliance equals security. Remember, AI doesn’t replace the need for HIPAA. It expands the surface area HIPAA will ultimately cover. Hint: Emerging frameworks like the NIST AI Risk Management Framework and ISO/IEC 42001 can provide a foundation for building responsible AI governance strategies tailored to healthcare.
Embrace a tailored approach to AI security. While AI vendors often include basic safeguards, it is critical to tailor your own security protocols to address AI-specific threats like data poisoning or adversarial attacks. This isn’t just about firewalls and network monitoring. It is a new world that requires organizations to proactively address how AI models are trained and interact with data.
Create an AI team that includes physicians. Assemble a team that includes operational leaders and has board representation, along with information technology and security leaders such as chief information officers and chief information security officers. Legal and compliance must also help shape policies that evolve with both technology and regulation. Perhaps most importantly, make sure physicians, nurses and administrative leaders have a seat at the table. Strong participation by physician leaders is imperative: doctors are the bridge between data and humanity, and they must be involved in the selection, deployment and monitoring of any AI that impacts patient care.
Avoid a plug-and-play approach to AI. It can be a dangerous illusion or a powerful ally, but its adoption should not be driven by peer pressure or trends. We cannot let AI into the exam room until we’ve childproofed the outlets.
Safe AI starts with intentional design, not retroactive damage control. Let’s not repeat the mistakes of the 1990s internet rollout, when innovation outpaced regulation, trust was an afterthought and blind, naïve optimism was the rule. This time, let’s choose the red pill: clearly acknowledge the risks, build guardrails early, and make AI work for healthcare, not at the expense of it.
Joe Oleksak is a partner in Plante Moran’s cybersecurity practice.