Artificial intelligence (AI) has largely dominated conversations about the future of healthcare. From improving diagnoses to analyzing EHR data, AI is often touted as the cure for many of healthcare’s technological woes. But there are currently huge barriers to that future.
That’s according to an editorial published in the Nature partner journal npj Digital Medicine that seeks to temper expectations about AI.
The titular truth of “The ‘inconvenient truth’ about AI in healthcare” is “that at present the algorithms that feature prominently in research literature are in fact not, for the most part, executable at the frontlines of clinical practice.”
The authors say that while AI has the potential to help fix many of healthcare’s biggest problems, the hype of AI is just that—at least in the current state of healthcare.
“We are so far away from artificial intelligence becoming a tool to improve the way we deliver care despite all the sensational research publications,” co-author Leo Anthony Celi tells Managed Healthcare Executive.
The biggest problem? According to Celi, “An algorithm that was developed using data from another hospital or clinic cannot be used immediately by another hospital or clinic … There is practically no healthcare organization that has the data infrastructure and skillset to oversee the validation, deployment, and continuous re-calibration of AI algorithms.”
The editorial points out what most in the healthcare industry know all too well: healthcare data is compartmentalized and often inaccessible across organizations, or even within a single organization. Without a stable infrastructure for gathering data, the authors argue, there is little point in bringing in systems to analyze those data: poor data beget poor analysis. “Simply adding AI applications to a fragmented system will not create sustainable change,” the authors conclude.
This is a particular problem for organizations whose infrastructure is too small to train algorithms optimally. If data cannot be shared, smaller organizations’ use of AI will suffer because they cannot access adequate amounts of data. Algorithms trained on those smaller data sets won’t “‘fit’ the local population and/or the local practice patterns, a requirement prior to deployment that is rarely highlighted by current AI publications.”
A lack of resources can also produce AI bias: the authors note, for instance, that an algorithm trained on a sample of primarily Caucasian patients would not perform as well for minority patients.
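The population-fit problem the authors describe can be made concrete with a toy sketch. The snippet below uses entirely synthetic, hypothetical data (the feature values, thresholds, and “hospital” populations are invented for illustration, not drawn from the editorial): a simple one-feature cutoff model is fit on one patient population and then applied to another population in which the same condition presents at a different biomarker level, so accuracy drops even though nothing about the model changed.

```python
# Toy illustration of population mismatch (all data here is synthetic and
# hypothetical): a threshold classifier tuned on one hospital's patients
# degrades when the disease presents differently in another population.

def fit_threshold(xs, ys):
    """Scan candidate cutoffs and keep the one with the best training accuracy."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):
        acc = sum((x >= t) == y for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, xs, ys):
    """Fraction of patients the cutoff classifies correctly."""
    return sum((x >= t) == y for x, y in zip(xs, ys)) / len(xs)

# Hospital A's patients: the condition appears once the biomarker reaches 5.
xs = list(range(10))
ys_a = [x >= 5 for x in xs]
# Hospital B's population: the same condition presents at a higher level, 7.
ys_b = [x >= 7 for x in xs]

t = fit_threshold(xs, ys_a)       # cutoff learned from hospital A alone
print(accuracy(t, xs, ys_a))      # 1.0 on the population it was trained on
print(accuracy(t, xs, ys_b))      # 0.8 on the shifted population
```

The model is not “wrong” in any coding sense; it simply encodes hospital A’s practice patterns and patient mix, which is why the editorial insists on local validation and re-calibration before deployment elsewhere.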
Addressing healthcare’s data problem, the authors say, requires a hard look at whom that data actually belongs to: “who owns health data, who is responsible for it, and who can use it?”
Public discourse and policy intervention, the authors say, are necessary to help answer those questions and move the state of healthcare data forward. Before AI can begin to revolutionize healthcare, healthcare data must be in a state where it can be shared—and controlled—in a way that the public and government sectors approve of.