Existing evidence-evaluation frameworks, the authors argue, are not sufficiently targeted to the factors specific to digital health products.
A working group of more than a dozen digital health experts from industry and academia is proposing a new framework they say can better enable payers, providers, and other organizations to effectively evaluate digital health interventions (DHIs).
“While digital health (DH) holds promise, current practices in DH have been described as the ‘Wild West,’ with misleading claims being common, and clinical evidence quality often poor,” wrote corresponding author Jordan Silberman, M.D., Ph.D., and colleagues. Silberman is the director of clinical analytics and research at Elevance Health.
This is not the first attempt to develop guidelines for evaluating digital health tools. As the digital health sector has evolved from simple step counters to prescription digital therapeutics, regulators and payers have struggled to retrofit existing evaluative frameworks to the new world of digital health. But Silberman and colleagues said many existing frameworks have significant shortcomings.
“Prior DHI evidence assessment frameworks are typically sections of broader DHI assessment guides, often containing just a few, superficial questions, with minimal evaluation of evidence quality or bias,” they said.
Specifically, the authors argue, current frameworks leave out quality criteria that are particular to digital health, and they also fail to specify the evidence-quality criteria that are critical to passing regulatory muster.
In response to those gaps, Silberman and colleagues proposed what they call the Evidence DEFINED framework, which stands for Evidence in Digital health for EFfectiveness of INterventions with Evaluative Depth. The tool is designed to help health systems, payers, pharmacy benefits managers, and other organizations more quickly and effectively decide which DHIs warrant investment and/or promotion.
At the heart of the framework is the DEFINED checklist, a set of 21 items for consideration built specifically for digital health. The items include ensuring that the results are not “cherry picked,” checking that trials were properly pre-registered at clinicaltrials.gov and that results are publicly reported in peer-reviewed journals, and verifying that any modifications to the DHI during or after the trial were clearly documented.
The final step is to make a recommendation based on the checklist items.
The working group said that while they focused on rigor, they also sought to make the assessment faster by avoiding information gathering on items unlikely to significantly affect the ultimate decision. They also sought to develop a framework that carefully assessed evidence and identified key evidentiary flaws that could prove important when a DHI is scaled up for consumer use.
“For some patients, rigorous DHI evidence assessment may mean the difference between medication adherence and nonadherence; between overcoming nicotine dependence and developing lung cancer; between resolution of affective symptoms and chronic emotional struggles,” they wrote.
They warned that because DHIs are scalable, relevant impacts may be magnified.
In addition to helping organizations better vet DHIs, the authors said their framework is also important because it will encourage the developers of digital health tools to hold themselves to high standards.
“We hope this will promote evidence-based decision making, encourage adoption of effective DHIs, and thereby improve health outcomes across a range of conditions and populations,” they concluded.