Artificial intelligence technology has reached the point where even early adopters are asking what its limits should be.
Consider this: a patient feels a scratchiness in the back of her throat, so she calls her doctor’s office and speaks with a nurse. The nurse takes a wait-and-see approach, but the following day the patient spikes a fever and makes an appointment to see her primary care provider. A few questions and a positive strep test later, she is given a prescription and heads to her local pharmacy to have it filled.
Now imagine a different scenario. The patient’s throat feels a bit off, so she opens an artificial intelligence (AI)-based symptom checker on her smartphone. The symptom checker details a number of possibilities, but that list gets narrowed down the following day when she records a high fever. She opens her pharmacy’s app and orders an at-home strep test, which is delivered later that day. The test comes back positive, and within an hour the pharmacy is delivering her prescription after being automatically alerted to the positive test.
The first scenario is so routine it’s almost quaint. The second reaches the same outcome and treatment, yet it raises a host of fresh and potentially daunting challenges. In a highly regulated, change-averse industry, AI is forcing healthcare organizations and insurers to face major questions about which technologies to prioritize and how those technologies will affect future business models.
Mahi Rayasam, Ph.D., MBA, a partner at management consulting firm McKinsey & Company who specializes in healthcare systems and analytics, says although fully automated healthcare would raise a lot of concerns, some aspects of it are definitely possible. “I could foresee a bot that will tell you, ‘OK, this seems like something that you should go to an urgent care for or you should schedule a (primary care provider) appointment,’” he says. The bot might then offer a list of nearby providers along with available appointment slots or up-to-date wait times, offering patients a one-click sign-up solution. “I mean, those kinds of applications will definitely happen,” Rayasam says.
Greg Johnsen, CEO of the healthcare chatbot firm Lifelink Systems, is one of the people working on exactly those types of products. Lifelink markets chatbots that automate workflows and provide patients with personalized information and services, using data from their electronic health records, among other sources.
Johnsen notes that many workflows in healthcare require multiple conversations with multiple people, which is precisely the kind of activity that AI can streamline. “These are multiconversational flows, and almost all of them in healthcare are defined by — they have to be defined by — protocol,” he says. And if there is one thing machines can do reliably well, he says, it’s following protocols.
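A protocol-driven workflow of the kind Johnsen describes can be sketched as a simple state machine: a given event in a given state always leads to the same next state, which is what makes it automatable. The states, triggers, and messages below are hypothetical, not Lifelink’s actual product logic.

```python
# Minimal sketch of a protocol-driven patient workflow as a finite state
# machine. All state and event names here are illustrative assumptions.
PROTOCOL = {
    "start":          {"symptom_reported": "triage"},
    "triage":         {"low_risk": "self_care", "high_risk": "schedule_visit"},
    "self_care":      {"symptoms_worsen": "triage", "resolved": "done"},
    "schedule_visit": {"appointment_booked": "done"},
}

def advance(state, event):
    """Follow the protocol; unknown events leave the state unchanged."""
    return PROTOCOL[state].get(event, state)

state = "start"
for event in ["symptom_reported", "low_risk", "symptoms_worsen", "high_risk"]:
    state = advance(state, event)
print(state)  # "schedule_visit"
```

Because every transition is fixed in advance, the bot’s behavior is fully auditable — the property that distinguishes protocol-following tasks from the judgment calls discussed below.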
The limits of protocols
However, in the exam room and hospital wards, healthcare operations are largely guided by best practices and informed best guesses rather than strict protocols. AI can help there, too, but Rayasam says such technologies raise ethical and practical questions such as whether patients will act on recommendations made by bots and whether insurers will reimburse such care. More fundamentally, is it safe to downplay the role of human providers in favor of machines? “I see a lot of patient safety concerns there,” he says.
AI proponents sometimes paint a picture of super-efficient, super-smart AI taking over much of healthcare. But Rayasam says it is important not to remove human clinicians from healthcare decisions. As for insurers, he says, U.S. regulation prohibits care denials made by AI without human review.
“Regulation prevents any denial of care based on (decisions made) without human intervention, i.e., without M.D. intervention,” he says. “You cannot deny care without an M.D. going through that case thoroughly and then determining (whether) this particular care is not medically appropriate.”
But what about cases in which an algorithm recommends a treatment based on an analysis that is simply too complicated for anyone but a computer scientist to understand?
“What you’re hitting on is the topic of interpretability,” says Saahil Jain, M.S., a computer scientist who has worked extensively on AI and machine learning in healthcare.
Jain says much of the debate around AI in healthcare has been focused on the interpretability of the results, meaning that physicians and other providers should be able to understand why a particular recommendation was made. However, the debate is not merely about how to make results interpretable; it’s also about whether interpretability is even that important. Although interpretability enables accountability, Jain notes that other areas of healthcare function just fine with mechanistic ambiguity.
“Honestly, there are a lot of drugs that we already use (for which) we don’t really understand all the mechanisms by which they work,” he says. “So it’s not new for us to use things that are not interpretable.”
What’s more important, he says, is rigorous testing and randomized controlled trials that prove a technology works, as well as careful evaluation of the data used to come to conclusions.
Eye on edge cases
Even with rigorous trials, Johnsen’s point about protocols will come into play. After all, some of the thorniest healthcare decisions relate to patients with rare conditions or rare characteristics. Insofar as AI and machine learning are reliant on big data, what happens if the available data are sparse?
“Solving for the edge cases is what really worries me,” Rayasam says, adding that the problem is exacerbated by data silos — data locked away in proprietary databases that could be put to use if they were set free and mixed and matched with other data. Jain is excited about a concept called federated learning that aims to work around the problem.
“There are different ways of doing it, but one example might be that you have a model and everybody uses their data to train the model without sharing the data, without aggregating the data in one central place,” he says.
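The scheme Jain describes — every site trains the shared model on its own data and only the model parameters travel — is federated averaging. The following is a minimal sketch under toy assumptions (a one-parameter linear model, two hypothetical hospitals); real systems add encryption and handle far more complex models.

```python
# Minimal sketch of federated averaging (FedAvg) on a toy linear model.
# Each "hospital" trains locally and shares only its weights; the raw
# patient records never leave the client.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=50):
    """Run gradient-descent steps on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(weights, clients):
    """Average locally trained weights, weighted by each client's data size."""
    updates = [local_update(weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Two hospitals with private data drawn from the same true relationship y = 3x
rng = np.random.default_rng(0)
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 1))
    y = 3 * X[:, 0] + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

w = np.zeros(1)
for _ in range(20):  # 20 communication rounds
    w = federated_round(w, clients)
print(w)  # converges near [3.] without pooling any raw data
```

The key design point is that only `w` crosses institutional boundaries; each hospital’s `X` and `y` stay in place, which is how federated learning sidesteps the data-silo problem Rayasam raises.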
Jain says some of the problems related to edge cases and data silos might be solved if federated learning became widespread.
In the meantime, Johnsen says, a transformation of the healthcare industry can happen even without solving all of the ethical and technical problems associated with AI taking over clinical decisions. He says automating the parts of the system that need to run exactly the same way, every time, can lead to significant cost savings and efficiencies that lower the cost of healthcare. For instance, efforts to promote value-based care have centered on replacing the fee-for-service model with new models such as fixed-payment regimes where physicians are incentivized to lower the cost of care in order to earn higher profit margins. In theory, such models work by encouraging physicians to avoid unnecessary tests and procedures.
Behind the scenes
However, Johnsen says one of the easiest ways to improve healthcare and glean cost savings through AI would be to optimize operations in ways that improve the patient experience while reducing overhead. “You will see new models of physicians and physician groups, and entirely new models around primary care,” he says. Although AI is sparking conversations about what the future of patient care will look like, Johnsen says much of the impact of AI at present is going on behind the scenes. “It’s always the shiny bright object that gets focused on,” he says, “but there’s some real work to do and (some) real effective uses of AI in doing material things right now.”
Jared Kaltwasser, a frequent contributor to Managed Healthcare Executive®, is a freelance writer in Iowa.