AI: Beyond the Buzz

MHE Publication, September 2021, Volume 31, Issue 9

Artificial intelligence is improving healthcare methods and insights, but deployed incorrectly, it is rife with pitfalls.

A patient rests comfortably in a hospital bed. Everything seems fine. Neither the patient nor his doctors and nurses realize he is about to take a life-threatening turn for the worse. Within hours, he will be septic, and his care team will race to begin aggressive fluid resuscitation and antibiotics.

Yet, although the patient’s doctor and nurses do not see sepsis coming, someone else does — or, more accurately, something else does. The patient’s electronic health record (EHR) includes an artificial intelligence (AI) tool that detects patterns imperceptible to the patient’s human caretakers. It duly sends an alert to his doctor and nurses. What could have been a shocking deterioration is instead averted.

It sounds futuristic, this kind of AI-assisted save. But algorithms are already detecting hard-to-anticipate conditions such as sepsis in hospitals, and lives are already being saved. In fact, the underlying idea — that a set of data, put together in context, can help predict outcomes — is already very much a part of medical thinking.

“A lot of these concepts, they’re not new to medicine,” says Yasir Tarabichi, M.D., MSCR, director of clinical informatics for research support at MetroHealth, a safety net healthcare system in Cleveland.

AI and machine learning based on huge medical data sets are just the latest iteration of risk stratification, Tarabichi says, which is something as old as medicine itself. Before AI, patients could be given risk scores based on simple scoring tools and scales. AI has created the opportunity to score patients using thousands of measurements not previously available, much less analyzable. “It was simpler before, because it was more of a point-based system,” Tarabichi says. “Now it (has become) a lot more complicated.”
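To make the shift Tarabichi describes concrete: a point-based tool adds up a handful of yes-or-no criteria, while a machine-learning model weighs hundreds or thousands of measurements at once. The sketch below is purely illustrative; the thresholds, feature counts, labels and model choice are hypothetical placeholders, not any vendor's actual algorithm.

```python
# Purely illustrative sketch: a made-up point-based risk score next to a
# generic machine-learning model trained on many EHR-derived measurements.
# Thresholds, feature counts and labels are hypothetical, not a real tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

def point_based_risk(resp_rate: float, systolic_bp: float, altered_mentation: bool) -> int:
    """Old-style scoring: one point per threshold crossed; higher means riskier."""
    score = 0
    score += 1 if resp_rate >= 22 else 0
    score += 1 if systolic_bp <= 100 else 0
    score += 1 if altered_mentation else 0
    return score

# ML-style scoring: a model fit on historical encounters, each described by
# far more measurements than a clinician could weigh by hand (synthetic data here).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 1000))   # 2,000 encounters x 1,000 features
y_train = rng.integers(0, 2, size=2000)   # 1 = deteriorated (synthetic labels)
model = LogisticRegression(max_iter=500).fit(X_train, y_train)

new_patient = rng.normal(size=(1, 1000))
print("Point-based score:", point_based_risk(resp_rate=24, systolic_bp=95, altered_mentation=False))
print("Model-estimated risk:", round(model.predict_proba(new_patient)[0, 1], 3))
```

The contrast is in the inputs, not the model choice: the first function is something a clinician can compute at the bedside, while the second depends on data infrastructure, tuning and validation.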

Broad array

With opportunity come challenges. One of the main challenges with AI is dealing with the onslaught of new products. Prem Thomas, M.D., medical director of data analytics and medical information officer at Yale New Haven Health in Connecticut, says he and colleagues get visits from software representatives the way doctors have historically received visits from drug company sales reps. “It’s interesting,” he says. “We have our weekly medical information officers’ meeting, and typically there’s somebody (from a technology vendor who) wants to present to us during that time.”

The companies hawking their AI wares range from entrenched players such as Epic Systems, the largest EHR company in the United States, which markets a sepsis prediction model, to tiny startups that have little, if any, experience in healthcare. Increasingly, tech giants such as Apple and Google have tried to gain a foothold in the healthcare technology sector, often with an AI focus.

The utility of AI in healthcare settings extends beyond clinical decision support tools such as the sepsis model. One area Thomas has been examining uses AI to make operating room scheduling more efficient. Currently, the time slots that surgeons request for routine procedures such as appendectomies are often imprecise.

“The surgeon asked for two hours,” Thomas says, “but based on the data for that surgeon and the location where they are going to be (performing the operation), can we be more accurate in how long it’s going to take and, hopefully, minimize things like delays for other patients and other surgeons?”

Houston Methodist Hospital turned to AI when it was preparing to receive COVID-19 vaccines, according to LeTesha Montgomery, M.H.A., RN, the hospital’s vice president for operations and patient access. The hospital knew it would be facing a “tsunami of phone calls” from patients and members of the public asking questions or wanting to sign up for the vaccines, Montgomery says. “With predictions of increased volume reaching 300% to 400%, hospital leaders knew we needed a solution that would manage the flood of vaccine-related phone calls without impacting usual operations,” she says.

The hospital used Syllable, a conversational AI voice assistant, to automate workflow and conserve resources by making it easier for patients to find a wide array of answers without needing to speak to a human being. It also used AI to handle in-person traffic. Visitors to the facilities were screened by the temperature-monitoring platform Care.ai by standing in front of a tablet equipped to scan for fevers. “This contactless technology not only helped with conserving staff resources, but it also speeds up screening measures at point of entry,” Montgomery says.

High stakes

One reason Houston Methodist was able to implement its COVID-19 strategies quickly is that it had already set up an infrastructure to rapidly review and vet new technologies. The Center for Innovation, launched in 2018, has a group called DIOP, which stands for Digital Innovation Obsessed People, made up of roughly half operational staff and half information technology staff. The group is charged with developing strategies for broad-scale implementation or with “failing fast” — quickly identifying technology that is not a good fit. “The results of focusing on innovation before it was essential to a pandemic such as COVID-19 enabled our hospital system to embrace new technology, including AI, more seamlessly,” Montgomery says.

Still, the stakes for AI implementation vary depending on its proposed use. A temperature-screening device that stops working might mean a staff member gets temporarily reassigned to thermometer duty. An errant AI system designed to help clinicians make treatment decisions could harm patients, even kill them.

Tarabichi says it is important that the public understand that hospitals do not take AI lightly.

“We don’t just blindly adopt new technology,” he says. “We make sure it’s going to work for us.”

For instance, before implementing the Epic sepsis model, MetroHealth ran the program in the background for an entire year, disabling the alerts but collecting data. Before toggling it live, the health system wanted to make sure the model worked and delivered a meaningful benefit. At the end of the year, MetroHealth adopted the technology but limited its use to the emergency department.
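The article does not detail MetroHealth's analysis, but a silent run of this kind generally comes down to logging the model's scores without firing alerts and later checking them against observed outcomes. Here is a minimal sketch of that comparison; the column names, scores and 0.6 alert threshold are hypothetical placeholders, not Epic's or MetroHealth's actual settings.

```python
# Minimal sketch of evaluating a silently logged alert model.
# Column names ("risk_score", "developed_sepsis") and the 0.6 threshold
# are hypothetical; a real evaluation would use the site's own data and cutoffs.
import pandas as pd

log = pd.DataFrame({
    "risk_score":       [0.10, 0.72, 0.55, 0.91, 0.33, 0.68],
    "developed_sepsis": [False, True, False, True, False, False],
})

threshold = 0.6
would_alert = log["risk_score"] >= threshold

tp = (would_alert & log["developed_sepsis"]).sum()    # alerts that matched real cases
fp = (would_alert & ~log["developed_sepsis"]).sum()   # alerts that would have been noise
fn = (~would_alert & log["developed_sepsis"]).sum()   # real cases the model missed

sensitivity = tp / (tp + fn)   # share of sepsis cases the alert would have caught
ppv = tp / (tp + fp)           # share of alerts that would have been true positives
print(f"sensitivity={sensitivity:.2f}, PPV={ppv:.2f}")
```

Moving the threshold trades sensitivity against alert fatigue, which is one reason two health systems can reach different conclusions about the same model.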

Thomas says that kind of slow, evaluative approach is key to AI integration. It means taking the time to understand how an AI system works, getting an idea of the data sets used by the vendor and tailoring the program’s settings to the health system’s needs.

In other words, it takes time and money — and more time. But Thomas says the expense is worth it.

“I would recommend that community hospitals or academic hospitals invest the resources in having a redesign committee review what this model is purported to do for you and then think through how to operationalize it,” he says. Yale New Haven’s redesign committee, for instance, is an interdisciplinary team whose members include intensive-care unit nurses, emergency department doctors, inpatient nurses and other professionals, as well as analytics staff.

But a health system’s ability to carefully evaluate an AI system isn’t just a matter of staff expertise and representation. It also requires access to enough data and software detail to fully understand a product. Because AI products are proprietary software marketed by for-profit companies, there can be tension between a company’s desire to keep its intellectual property private and a healthcare system’s need to fully understand the tools it is using to take care of patients.

An example of that kind of conflict had a public airing this summer when investigators from the University of Michigan reported the findings of their independent evaluation of Epic’s sepsis prediction tool in JAMA Internal Medicine. Their research suggested that the product is less accurate at predicting sepsis than the company claims in its marketing materials. The findings were controversial, in part because the sensitivity of the tool depends on the alert settings a health system puts in place. Epic disputed the findings, saying that it works closely with users to ensure the product meets their needs and to provide access to the data necessary to evaluate the model.

Tarabichi has submitted his own paper, soon to be published, that contradicts some of what the Michigan investigators found. Yet even as experts disagree about this particular product, the dispute raises important questions about the limits of relying on proprietary models.

“The increase and growth in deployment of proprietary models has led to an underbelly of confidential, non-peer-reviewed model performance documents that may not accurately reflect real-world model performance,” wrote the University of Michigan study’s corresponding author, Karandeep Singh, M.D., M.M.Sc., and his colleagues.

In the case of Epic’s sepsis model, both Thomas and Tarabichi said they were satisfied that they had access to the information they needed to evaluate the product, though they both said information from the company was just one component of their implementation process.

Tarabichi cautions hospitals against “jumping on the bandwagon” of popular AI without first validating and testing it in their own systems. In a smaller community hospital, validation may not need to be as complicated as it would be at a top academic medical center with a diverse patient population.

“Randomized controlled trials are the highest level of evidence, but lower down that hierarchy is pre- and post-assessment,” he says. “Run it for a couple of months and then evaluate. What changed? Can you do better?”
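A pre- and post-assessment of that sort can be as simple as comparing one operational metric for a window before go-live with the same metric afterward. The sketch below uses a hypothetical metric and synthetic numbers purely to illustrate the shape of the comparison.

```python
# Sketch of the pre/post comparison Tarabichi describes: compare an outcome
# metric for a window before go-live with the same metric after.
# The metric name and the numbers are synthetic placeholders.
pre_months  = {"time_to_antibiotics_min": [190, 205, 180]}   # 3 months before go-live
post_months = {"time_to_antibiotics_min": [150, 160, 145]}   # 3 months after go-live

def mean(values):
    return sum(values) / len(values)

before = mean(pre_months["time_to_antibiotics_min"])
after = mean(post_months["time_to_antibiotics_min"])
print(f"before: {before:.0f} min, after: {after:.0f} min, change: {after - before:+.0f} min")
```

A real assessment would also account for seasonal variation and case mix before crediting the model with any change.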

Not always the answer

For Thomas, it is not just about choosing the right AI system but also about being thoughtful about when and where AI is actually needed. He and his colleagues reference a “Mad Lib” created by Michael Draugelis, the chief data scientist at Penn Medicine in Philadelphia. It goes like this, Thomas says: “If I were [a physician, a physician’s assistant, a nurse, an operating room scheduler] in the health system and I knew ____, I would do ____ to change ____.” “Before we start on any project now, we say, ‘Fill in the blanks,’” Thomas says. “Because if we don’t have a good answer for those, then we shouldn’t be investing the time and effort to go through with the project.”

Montgomery agrees, saying technology cannot displace patients’ status “at the center of everything we do.”

“AI isn’t meant to solve all the challenges in the healthcare setting,” she says, “but where these applications make sense, where they provide real benefits to our patients, that means it’s reaching its full potential.”

Jared Kaltwasser, a regular contributor to Managed Healthcare Executive®, is a freelance writer in Iowa.
