When addressing errors, researchers focused on retraining the individual who made the error. Lead author Fatima Laher says attention should shift to root cause analysis and process improvement.
To produce quality data that informs valid clinical trial results and withstands regulatory inspection, trial sites must adhere to many complex and dynamic requirements.
With that in mind, Fatima Laher, director of the Perinatal HIV Research Unit at the University of the Witwatersrand in South Africa, led a research study describing protocol deviations in South Africa’s largest HIV vaccine efficacy trial. The results were published in June 2023 in the journal BMC Medical Research Methodology.
“We all want quality, but we also know that humans are not perfect,” says Laher. “Mistakes happen anywhere. This is a rare and honest analysis which reveals the truth behind the scenes—errors occur even in the most rigorous and controlled settings of clinical trials.”
The authors collected data from the HVTN 702 trial using mixed methods, obtaining descriptive statistics on deviations by participant, trial site, and time to site awareness from protocol deviation case report forms collected from 2016 to 2022. They then thematically analyzed text narratives of deviation descriptions and corrective and preventive actions, generating categories, codes, and themes that emerged from the data.
“We analyzed data of protocol deviations and the researchers’ narratives of their corrective actions and preventive actions,” Laher says. “The most common issue was missing data or procedures. When crises like lockdowns or climate issues strike, they disrupt trial conduct.”
Thanks to layers of quality assurance, the vaccine trial had a low rate of protocol deviations, none of which harmed study participants.
“Clinical trials occupy the seat for the highest level of evidence, so the idea that clinical trial data are imperfect can feel upsetting,” Laher says. “I often come across students who are aghast when I ask them to contemplate that ‘perfect’ data is an implausible goal. We can strive toward near-perfection, which still gives a trustworthy analysis.”
A fascinating revelation in the study, she notes, is how researchers addressed errors: primarily by focusing on the individual who made the error and retraining them.
“There’s nothing wrong with that, but I think it’s time for researchers to lean on emerging fields of knowledge; for example, focusing more on root cause analysis and process improvement, which promise broader solutions,” Laher says.
There’s less error when a trial procedure is required for everyone rather than optional, and Laher notes it would help to consolidate the myriad documents that stipulate trial requirements.
“The quality journey starts in trial design, makes its way through trial implementation, and ends with analysis,” Laher says. “In this era, researchers should design for the unexpected: building in a strategy like continuous visit windows, for example, may help. I would also support strategies toward simplicity whenever feasible.”
She also feels researchers should train themselves to think beyond an individual’s contribution to error, toward understanding how the systems and processes we inherit and create contribute to error too.
“It would be great if clinical trials start reporting their data quality metrics,” Laher says. “These results reassure us that existing quality assurance strategies are useful and provide insight to optimize the design and implementation of trials.”