The World Health Organization steps up to the coronavirus plate with what appears to be history’s most ambitious screening experiment.
On Friday, the World Health Organization (WHO) announced a large global trial, called SOLIDARITY, to find out whether any existing drugs can treat infections with the new coronavirus, the cause of the dangerous respiratory disease COVID-19. It’s an unprecedented effort—an all-out, coordinated push to collect robust scientific data rapidly during a pandemic. The study, which could include many thousands of patients in dozens of countries, has been designed to be as simple as possible so that even hospitals overwhelmed by an onslaught of COVID-19 patients can participate.
With about 15% of COVID-19 patients suffering from severe disease and hospitals being overwhelmed, treatments are desperately needed. So rather than coming up with compounds from scratch that may take years to develop and test, researchers and public health agencies are looking to repurpose drugs already approved for other diseases and known to be largely safe. They’re also looking at unapproved drugs that have performed well in animal studies with the other two deadly coronaviruses, which cause severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS).
The statistics of the trial will be simple on the front end but complicated on the back end. There’s no sophisticated algorithm assigning treatments, just randomization among available treatment options, including standard of care. I don’t see how you could do anything else, but this will create headaches for the analysis.
Patients are randomized to whatever treatments are available—what else could you do?—which means the treatment options vary by site and over time. The control arm, standard of care, also varies by site and could change over time as well. And the trial is not double-blind. This is a trial optimized for the convenience of frontline workers, not for the convenience of statisticians.
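The randomization scheme described above can be sketched in a few lines. This is a minimal illustration, not the trial's actual protocol: the site names, arm names, and uniform assignment are all my assumptions, chosen only to show why the set of arms a patient can land in depends on where and when they enroll.

```python
import random

# Hypothetical sketch of SOLIDARITY-style randomization: each site
# randomizes a patient among whichever arms it can actually offer,
# always including standard of care. Sites and arms are illustrative.
SITE_ARMS = {
    "site_A": ["standard_of_care", "drug_1", "drug_2"],
    "site_B": ["standard_of_care", "drug_1"],  # drug_2 unavailable here
}

def assign_treatment(site: str, rng: random.Random) -> str:
    """Uniformly randomize a patient among the arms available at `site`."""
    return rng.choice(SITE_ARMS[site])

rng = random.Random(0)
assignments = [assign_treatment("site_B", rng) for _ in range(5)]
# Every assignment at site_B comes from its two available arms only,
# so site_B patients can never contribute data on drug_2 -- the
# comparison sets differ across sites, which is exactly the
# back-end analysis headache mentioned above.
assert all(a in {"standard_of_care", "drug_1"} for a in assignments)
```

Because each site's arm list can also change over time (drugs run out, new ones arrive), the analysis has to compare treatments only within the subsets of patients who could have received them, rather than pooling everything naively.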
The SOLIDARITY trial is an expedient compromise, introducing a measure of scientific rigor when there isn’t time to be as rigorous as we’d like.
Want more insight? Want to see just how difficult it is to interpret preliminary clinical findings? Andrew Gelman takes a (shallow) dive to see what’s below the tip of this particular iceberg.
Here’s hoping something good comes from this effort. It’s going to be a great case study for statisticians.