Healthcare's Embrace of AI | October 2025 Cover Story


At first, it was tentative. But now almost all parts of the U.S. healthcare system are racing to adopt some form of artificial intelligence. Some leaders caution about using it too widely, too fast.

If you Google the keywords in Kedar Mate, M.D.’s, biography, the search results might lead you to assume he wants nothing to do with artificial intelligence (AI). After all, many of the misgivings about the use of AI in healthcare center on whether it will deepen inequity and social injustice, issues that Mate has spent his career working on. Yet his values have not caused Mate to shy away from AI; if anything, they have deepened his involvement. “For me, this is all about the choices that we decide to make,” he says.

Kedar Mate, M.D.

Mate is the co-founder and chief medical officer of Qualified Health, a digital health company that has embraced the use of AI in healthcare. He also served on the National Academy of Medicine committee that published an AI code of conduct for healthcare earlier this year.

He says concerns about AI in healthcare can be resolved with deliberate, eyes-wide-open actions. “We have options around how to train those tools,” he says. “We can train them on biased information … or we can prune that knowledge base to look for highly reliable, valuable information that we believe is, in fact, factually accurate.”

Mate’s description of choices alludes to the still-unsettled AI landscape in healthcare. Three years after the debut of ChatGPT, AI has become a real and meaningful part of the healthcare ecosystem. As the use of AI increases, the brightest minds in the industry largely agree that AI cannot fully replace humans. The question is where to draw the line between what AI does better than humans and what is best handled by good old-fashioned human gray matter.

Generative AI shift

U.S. healthcare’s rather quick embrace of AI is, to some extent, more appearance than reality and a consequence of branding. Dean Slawson, vice president of advanced technology at PointClickCare, says companies like his have been using predictive analytics and machine learning (ML) since long before anyone had heard of ChatGPT. “We used to say ‘AI-slash-ML,’ because there was a time when I said anything that isn’t [ML] in AI is science fiction,” he says.

Dean Slawson

Slawson says the advent of generative AI was analogous to the invention of the web browser. The internet had existed for many years, but the web browser made it usable in a way that had not previously been possible. “Similarly, various kinds of AI have been around for a long time,” he says, “but suddenly everyone was aware of things you could do with a certain kind of large language model.”

Slawson says generative AI has opened up a number of new uses within the healthcare industry. Mate says the level of interest in those applications has soared. He contrasts the current situation with the 2009 Health Information Technology for Economic and Clinical Health Act, a law designed to promote greater adoption of electronic health records (EHRs). Unlike EHRs, Mate says, the growth of generative AI in healthcare is stoked by demand. “Clinicians want it. They want to experiment with it. They want to use it. They want to try it,” he says. “They see it as a solution to problems that they’ve had for years.”

Hearing what human ears miss

Ben Scharfe

One of the fastest-growing uses of AI in healthcare is ambient listening technology, in which a computer system records the interactions between patients and providers and then generates clinical notes based on those recordings. Ben Scharfe, executive vice president of AI initiatives at Altera Digital Health and one of Managed Healthcare Executive’s 2025 Emerging Leaders in Healthcare, says previous iterations of the technology involved recording patient/provider interactions, encoding the recordings and sending them overseas for humans to transcribe. That process could take hours. “Now, with AI, essentially, it can be almost instantaneous,” he says.


Slawson says many providers have been surprised by how the technology affects their practice. “It may or may not save as much time as they expected, but it does help them engage better with the patient,” he says, by allowing them to focus on the patient. “And people really appreciate that.”

Personalized plans

Scharfe says a next step from ambient listening is to turn those insights into personalized patient-education materials. His company is currently prototyping such a product.

“Essentially, the way we can do this is we pull in official, vetted content … from sources that are curated by the care organization, and then we combine that contextually with the transcript of the encounter, as well as with relevant data from the patient’s chart,” he says.

The result is a document patients can take home that offers highly personalized — and medically valid — advice. Instead of giving a sedentary patient boilerplate advice to increase their moderate physical activity, an ambient listening model might learn that the patient has a pet dog and coach them to take their goldendoodle on longer or more frequent walks. If the patient mentions that they play bingo at their local community center, their instructions might encourage them to walk to bingo one night each week or park a couple of blocks away to get more steps in. “You can really get this hyper-personalized,” Scharfe says, “where right now, a lot of the patient education is very sterile.”
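In rough terms, the pipeline Scharfe describes combines three inputs: curated content, the encounter transcript and chart data. The sketch below illustrates that assembly step in Python; the content library, function names and patient data are all hypothetical, and a real product would pass the assembled context to a vetted language model rather than simply concatenating text.

```python
# Illustrative sketch: assembling context for personalized patient education
# from three sources. All names and data here are invented for demonstration.

CURATED_CONTENT = {
    "physical activity": "Aim for 150 minutes of moderate activity per week.",
    "blood pressure": "Check your blood pressure at the same time each day.",
}

def select_content(transcript: str) -> list[str]:
    """Pick only the curated entries whose topic came up in the encounter."""
    mentioned = {t for t in CURATED_CONTENT if t in transcript.lower()}
    return [CURATED_CONTENT[t] for t in sorted(mentioned)]

def build_education_context(transcript: str, chart: dict) -> str:
    """Combine vetted content, the encounter transcript and chart data."""
    sections = [
        "VETTED GUIDANCE:\n" + "\n".join(select_content(transcript)),
        "ENCOUNTER NOTES:\n" + transcript,
        "CHART DATA:\n" + "\n".join(f"{k}: {v}" for k, v in chart.items()),
    ]
    return "\n\n".join(sections)

context = build_education_context(
    "Patient walks their dog daily; wants to increase physical activity.",
    {"age": 67, "conditions": "hypertension"},
)
```

Because only topics actually mentioned in the encounter are pulled in, the output stays specific to the patient while remaining grounded in content the care organization has vetted.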

Coding accuracy

Chris Rigsby, senior vice president of payer solutions at Omega Healthcare, says AI is also making inroads in medical coding. He says AI can help ensure coding accuracy, but humans still play an important role in the coding process. “What you provide coding-wise is accurate — and you’re seeing a lot of improvement in that space with AI — but it may not be complete,” he says. If an AI system omits a key codable event, it’s important that a human be able to notice the omission and rectify it, he notes.

Jay Anders, M.D., M.S.

Jay Anders, M.D., M.S., the chief medical officer at Medicomp Systems, notes that omissions are not the only problem with AI-generated records. He says ambient listening technologies can sometimes introduce inaccuracies. He has heard of examples of ambient technology referring to a patient by different genders in the same medical note or attributing a family member’s medical condition to the patient. And once a piece of faulty information is buried in a patient’s data, Anders says, “It’s almost impossible to get [it] back out.”

Prior authorization

Another AI use generating attention is prior authorization. Scharfe says AI can help on both the payer and provider sides of the equation. He says payers have been quick to embrace AI to accelerate approval of routine prior authorization requests.

On the provider side, he notes that different insurers have different rules and expectations for how they define medical necessity, among other things.

“AI can help to consume those different rule sets, those different factors, and prepare a prior authorization for submission,” he says.

The result, Scharfe says, is a much smoother system. He sees the potential for same-day treatment approvals if both payer and provider use AI. This could also free up the humans to focus on the most complex cases.

“[We can] take what has really historically been a bit of an adversarial point of friction and turn it into a way to create a bridge between payers and care organizations,” he says.
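The “consuming those different rule sets” step Scharfe describes can be pictured as a lookup against payer-specific requirements. The sketch below is a minimal, hypothetical illustration in Python; the payer names, procedures and documentation fields are invented, and real payer rules are far more elaborate.

```python
# Illustrative sketch: each payer defines the documentation it requires for a
# given procedure, and a checker flags what is missing before submission.
# Payer names, procedures and fields are hypothetical.

PAYER_RULES = {
    "PayerA": {"MRI-lumbar": {"failed_conservative_therapy", "neuro_exam"}},
    "PayerB": {"MRI-lumbar": {"failed_conservative_therapy", "imaging_history"}},
}

def missing_documentation(payer: str, procedure: str, attached: set[str]) -> set[str]:
    """Return the payer-specific requirements not yet in the request packet."""
    required = PAYER_RULES.get(payer, {}).get(procedure, set())
    return required - attached

# The same request packet can be complete for one payer and incomplete for another.
gaps = missing_documentation(
    "PayerB", "MRI-lumbar", {"failed_conservative_therapy", "neuro_exam"}
)
```

Catching these gaps before submission, rather than after a denial, is where the smoother system Scharfe envisions comes from.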

Then again, while using AI to swiftly approve prior authorization requests may decrease friction, the use of AI to deny requests remains highly controversial. A February survey from the American Medical Association showed that 61% of physician respondents feared the use of AI was leading to an increase in prior authorization denials. Meanwhile, UnitedHealthcare is facing a class-action lawsuit from Medicare Advantage members who claim the company is using AI to deny members’ claims.

Don’t go there

Yet even as AI adoption spreads throughout the healthcare industry, some questions from the pre-ChatGPT days continue to linger. Many healthcare organizations initially integrated AI into backend systems, such as scheduling and revenue cycle management, that do not have a direct impact on patient care. They have been slower to use the technology in direct clinical contexts. That is changing, says Slawson, noting the strong evidence that AI provides benefits in diagnostic tasks such as noticing patterns and features in radiologic images that the human eye — and brain — miss.

Slawson says predictive models are also growing in acceptance. PointClickCare, for instance, has a predictive return-to-hospital (pRTH) product that assesses the likelihood that a patient being sent to a skilled nursing facility will be readmitted to the hospital. “Predictive models like pRTH, and others, have a strong application, and they’re fairly straightforward to build and validate if you have the right kind of data and the right kind of expertise,” he says.
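Predictive models of this kind are typically much simpler than generative AI: a weighted combination of risk factors mapped to a probability. The sketch below shows the general shape of such a model in Python; the features, weights and bias are invented for illustration and are not PointClickCare’s actual pRTH model, which would be trained and validated on real facility data.

```python
import math

# Illustrative sketch of a logistic readmission-risk model. The feature
# names, weights and bias are hypothetical, chosen only to show the shape
# of this kind of predictive model.

WEIGHTS = {
    "prior_admissions_12mo": 0.55,
    "num_active_diagnoses": 0.30,
    "recent_ed_visit": 0.80,
}
BIAS = -2.0

def readmission_risk(patient: dict) -> float:
    """Weighted sum of risk factors mapped to a 0-1 probability (sigmoid)."""
    score = BIAS + sum(WEIGHTS[f] * patient.get(f, 0) for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

high = readmission_risk(
    {"prior_admissions_12mo": 2, "num_active_diagnoses": 4, "recent_ed_visit": 1}
)
low = readmission_risk({"prior_admissions_12mo": 0, "num_active_diagnoses": 1})
```

As Slawson notes, the hard part is not the arithmetic but having the right data and expertise to choose and validate the weights.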

Medicomp’s Anders says AI can also indirectly impact patient care. He says AI systems can ensure that surgical instruments and machines are scheduled to maximize their use.

“One of the positives about AI is its eidetic memory,” he says. “It doesn’t forget things, it doesn’t get stressed, [and] it doesn’t have a bad day.”

Who is liable?

AI may be increasing in reliability, but its legal liability remains an open question. Anders says the AI imperative has led to the collection of vast amounts of data. Some providers worry that being in possession of such data might create legal exposure. “There are a lot of data there that’s collected, and if you miss it, or the AI misses it, who’s responsible for that?” he says.

The answers, at least from a legal perspective, are not quite clear. Rigsby says if AI cannot tease out the most relevant information, health systems and providers may decide AI is not worth the risk, “because people will be very, very leery to adopt it, and ultimately have too much information that they can’t absorb,” he says.

Scharfe notes that these questions are not unique to healthcare, although the stakes might be different. One consideration, he says, is whether to keep the original unvetted transcripts created by AI. “I think that the best approach, essentially, is not necessarily to retain the transcript, but rather to focus on the vetted artifact, which is the encounter note as the output,” he says. He adds that there are reasons one might want to keep the original transcript. For instance, such documents might be useful for audits of AI systems, to ensure they’re working as intended.

Not only do healthcare entities of all stripes have to decide whether and where to use AI, they also have to decide which digital health companies to partner with, and potentially when to make their own AI products.

Anders says the attractiveness of the healthcare industry to technology firms is a double-edged sword. On the one hand, it means many companies with exciting solutions are looking to help improve healthcare. On the other hand, it also means a lot of companies without much healthcare expertise are angling to get into a sector teeming with specialized knowledge. He describes a real-life instance in which a friend who runs a small hospital implemented a sepsis-prediction algorithm. In theory, the system could be a significant cost-saver by flagging patients most at risk of sepsis. In reality, though, the system had only a 50% accuracy rate. Among the reasons it failed was that it over-weighted the importance of high heart-rate readings, Anders says. In an emergency setting, he notes, heart rate can be an unreliable metric.

“When you come into an emergency room, do you think your heart rate’s going to be normal?” he says. “You’re there because you’ve got a problem.”
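Anders’ point about over-weighting can be made concrete with a toy example: in an emergency department, elevated heart rate is nearly universal, so a model that leans heavily on it flags almost everyone. The thresholds, weights and patient records below are invented purely to illustrate the failure mode, not to describe the actual algorithm in question.

```python
# Illustrative sketch: a toy risk score where one unreliable feature
# (heart rate in an ED) can dominate the decision. All values hypothetical.

def flags_sepsis(patient: dict, heart_rate_weight: float) -> bool:
    """Flag sepsis risk when the weighted score crosses a fixed threshold."""
    hr_elevated = 1 if patient["heart_rate"] > 90 else 0
    other_signs = patient["abnormal_labs"] + patient["altered_mental_status"]
    return heart_rate_weight * hr_elevated + other_signs >= 2

# In an ED, most patients arrive with elevated heart rates regardless of sepsis.
ed_patients = [
    {"heart_rate": 110, "abnormal_labs": 0, "altered_mental_status": 0},  # likely not septic
    {"heart_rate": 105, "abnormal_labs": 1, "altered_mental_status": 1},  # concerning
]

overweighted = [flags_sepsis(p, heart_rate_weight=2.0) for p in ed_patients]
balanced = [flags_sepsis(p, heart_rate_weight=0.5) for p in ed_patients]
```

With the heart-rate term over-weighted, both patients are flagged and the alert becomes noise; with it down-weighted, only the patient with corroborating signs is flagged.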

Rigsby, at Omega Healthcare, says his company has partnered with Microsoft to leverage the tech giant’s advanced large language models and pair them with Omega Healthcare’s revenue cycle management and healthcare products. It’s a way to quickly scale up the company’s healthcare offerings in a way that builds trust with customers.

At Qualified Health, a major initiative has been to create a self-authoring platform, where clients can build their own AI solutions. Mate says healthcare workers have ideas for ways AI might make their jobs more efficient; he hopes his company’s self-authoring platform will empower those workers to make their ideas a reality.

Anders says that whether AI products are developed by outside firms or in-house, it’s critical that the technology follows the needs of the providers.

“[Clinicians] know what they need help with,” he says. “Work on that first. Don’t rush to a total solution that’s going to solve every problem all the time. Rush to the solution that’s going to augment their ability to deliver that quality patient care.”
