The controversial hydroxychloroquine (HCQ) study gave hope to the world. That hope lasted about three seconds before scientists found the trial suffered flaws so grievous as to render the results meaningless. The study could not have shown whether the drug helped people heal faster or protected them from infection.
Looking over the paper, one finds problems so extreme that it’s hard to understand how it got published. How could this research have passed peer review and still come out like this? Therein lies the story.
Ultimately, the study demonstrated one truth to dramatic effect: a study can be so troubled that its publication defies belief and still mislead the public as if its results had been pristine.
An ethics committee green-lighted the 14-day study on March 6th, 2020. If the trial began the next day, then March 20th was the earliest date by which they could have finished. Researchers presented early trial results, not yet reviewed by outsiders, for the first time on March 16th via YouTube.
Also on March 16th, the team submitted the findings to the International Journal of Antimicrobial Agents. The journal accepted the paper by March 17th and published it on March 20th. That leaves a one-to-two-day window for a peer-review process that ordinarily takes months.
The strangest aspect of this paper’s journey to publication may be that once the results were uploaded to preprint servers, where anyone anywhere could access them, no more thorough external review appears to have taken place.
Many of the 40,000+ COVID-19 preprint studies were posted quickly, sought review, and were published formally a month or two later with the needed corrections.
Peer review means someone with expert-level knowledge has read and critiqued a paper. If you’re lucky, it will be someone who disagrees with you; they will see every weakness in your arguments. Asking someone outside the field would be like asking a classical pianist what the strange noises from your car mean. Reviewing it yourself would be like grading your own homework.
If peer review happened as the journal claims, the job was so poor as to be undetectable. That isn’t said metaphorically: the published paper shows no differences from the version originally submitted.
Scientists who began reviewing the document before it was published compared the earliest version to the journal’s published version. The earlier draft came from a Google Drive copy that had circulated among scientists before March 16th. They had this to say:
“These versions of the study report were the same as the one we reviewed, indicating no or limited external peer review for the final published version.”
The International Society of Antimicrobial Chemotherapy stated in April 2020, “ISAC shares the concerns regarding the above article published recently in the International Journal of Antimicrobial Agents (IJAA). The ISAC Board believes the article does not meet the Society’s expected standard, especially relating to the lack of better explanations of the inclusion criteria and the triage of patients to ensure patient safety.”
The ISAC concluded that although “it is important to help the scientific community by publishing new data fast, this cannot be at the cost of reducing scientific scrutiny and best practice.”
The concrete evidence shows that nothing changed. Was the paper given to someone uniquely unskilled at peer review? The evidence suggests not.
The Editor-in-Chief of the publishing journal is also an author of the study. The issue featuring the research paper also included an editorial penned by the study authors, the Editor-in-Chief among them.
The editorial gives a sparkling review, making claims that could not reasonably be drawn from the results of a single study, even if the drug had helped people.
“Its possible use both in prophylaxis in people exposed to the novel coronavirus and as a curative treatment will probably be promptly evaluated by our Chinese colleagues. If clinical data confirm the biological results, the novel coronavirus-associated disease will have become one of the simplest and cheapest to treat and prevent among infectious respiratory diseases.”
Even had the results been confirmed as the quote suggests, a single study would not have shown that the drug would “become one of the simplest and cheapest to treat and prevent among infectious respiratory diseases.”
The study as it was, and not as the authors presented it, never had the potential to deliver firm answers. Part of the controversy comes from discussing two ideas at once: 1) the efficacy of hydroxychloroquine and 2) the quality of this study. The two are distinct issues.
The final draft shows 20 patients receiving treatment, six fewer than at the beginning. Looking at the data analysis, one would assume the missing six patients dropped out of the trial. When people drop out of a trial, we “censor” their data, which stops dropouts from influencing the results.
That is not what happened.
Three ended up in the ICU, one died, one could not tolerate the treatment’s side effects, and the last left the hospital. The authors analyzed the study as if these critically ill or dead patients had simply stopped taking the drug.
Effectively, every patient whose outcome conflicted with the authors’ conclusion ended up excluded from the data. Whether that was the intention is another issue entirely, but it was the result. The percentage of people who died or were hospitalized is almost exactly what one would expect without treatment: roughly 15 to 20% severe cases and a 1% overall death rate.
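To see how exclusions like these shift a result, here is a minimal sketch using only the patient counts given above (26 enrolled, 6 excluded); the number of “successes” is a hypothetical placeholder, not the study’s actual data:

```python
# Illustrative only: how dropping bad outcomes from the denominator
# inflates a reported success rate.
# Patient counts (26 enrolled, 6 excluded) come from the article;
# the "successes" figure is a hypothetical placeholder.

enrolled = 26
excluded = 6                      # 3 ICU, 1 died, 1 intolerant, 1 left
analyzed = enrolled - excluded    # 20 patients remain in the analysis

successes = 14                    # hypothetical count of "cured" patients

# Per-protocol style: excluded patients vanish from the denominator.
per_protocol_rate = successes / analyzed

# Intention-to-treat style: every enrolled patient counts, so ICU
# admissions and deaths register as failures rather than missing data.
itt_rate = successes / enrolled

print(f"per-protocol:       {per_protocol_rate:.0%}")   # 70%
print(f"intention-to-treat: {itt_rate:.0%}")            # 54%
```

With the same hypothetical cure count, the reported rate swings from 70% to about 54% depending solely on whether the six worst outcomes stay in the denominator, which is why pre-specifying the analysis population matters.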
The study bases many of its claims on results taken from the sixth day of treatment, but that was not a planned checkpoint. The authors switched endpoints, citing the promising nature of the results. That’s a problem.
The PLoS Clinical Trials Editorial Board has highlighted the dangers of endpoint switching:
“A fundamental principle in the design of randomized trials involves setting out in advance the endpoints that will be assessed in the trial, as failure to pre-specify endpoints can introduce bias into a trial and create opportunities for manipulation.”
The clinical trial registry says the team planned to check patients on Day 1, Day 4, Day 7, and Day 14. Day 6 does not appear among them, which begs the question: what happened on Day 7? The paper was submitted on March 16th, so it’s reasonable to expect Day 7 results to appear in the preprint.
The results were presented as if a control group took part, but one author fairly openly doesn’t believe in controlled trials. Some patients assigned themselves to a group rather than being randomly sorted, and the groups received treatment at different locations. Any of these differences might explain why one group fared better.
The paper claims that all patients were older than 12 years, yet the report appears to include several under that age. Some patients tested negative before testing positive again, and it’s unclear how such a test shows anything about the drug’s ability to help. The authors neglected to fully describe these aspects or to show that no other factors could explain the fantastic results.
Whether this drug helps matters a great deal, and the consequences reach far beyond the “what have we got to lose?” mentality. The answer is life. That’s what we have to lose.
COVID-19 patients are already at risk for acute cardiac injury, myocarditis, and cardiac arrhythmias — all heart-related afflictions.
The criticism isn’t strictly American. In response, Dutch scientists detailed a litany of valid and serious concerns: “Even if a larger randomized controlled trial (RCT) showed that the combination of (hydroxy)chloroquine and azithromycin would be effective in patients with COVID-19, safety would still be an issue.”
The toxic potential of the drug — in all things, the dose makes the poison — increases the risk of death. While it may not have mattered much in malaria patients, it matters for COVID-19 patients whose hearts may be directly infected.
According to a cardiology report from JAMA, “In patients with coronavirus disease 2019 (COVID-19), cardiovascular involvement occurs frequently.”
The question isn’t, “what have we got to lose?” It’s “How certain are we that a drug with a known ability to harm the heart will help patients infected with a virus that also increases their risk for heart problems?”
The answer is you better be damn sure.
It took less than a day for the too-good-to-be-true claims to catch the eye of elected officials and a world desperate for a cure.
The public heard that a cure was soon to come, leaving scientists in the difficult position of delivering the bad news. Officials and the public may not have understood the study’s problems. This highlights the responsibility scientists have to represent their work honestly, and the responsibility politicians have to defer the analysis of scientific results to qualified parties.
To date, the authors have dismissed contrary studies as “crappy” while not applying those same design standards to their own study. At least one author has a history of outright fabricating data, and insiders have claimed that fear of that author has led to silence.
The dogmatic claims from the authors, taken up by political figures, quickly polarized public opinion on a situation that even those who study the field would need time to fully understand.
Quickly, many dismissed the scientists who raised valid concerns as politically motivated, but evidence for that has yet to be supplied. On the contrary, a welling sea of evidence shows the fears were justified. One cannot dismiss whatever conflicts with one’s chosen perspective by taking any plausible rationale and passing it off as fact. That appears to be the root of this story.
As Upton Sinclair wisely noted, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
Clinging to the belief that a cure was within our grasp holds much more appeal than accepting that the study’s authors had misled us, from the bottom all the way to the top.
Whatever the truth of the study’s results, it no longer mattered once media outlets ran with it. The ill-fated effort to help people understand centered on evidence, and scientists failed to see that opinions formed without evidence are unlikely to be dissuaded by more of it.
Monolithic public pressure meant scientists had to repeat the studies ad nauseam. A lack of coordination and centralized response meant “the effort [was] marked by disorder and disorganization, with huge financial resources wasted.”
Whether or not the drug was helpful, a single study could not and should not have been used to draw so broad a conclusion. There are too many potential variables, even with thorough research. For the maybe-wonder-drug hydroxychloroquine, follow-up studies boomed out of control: STAT reported in July 2020 that 1 in every 6 drug trials was chloroquine-related.