The COVID pandemic is a great opportunity for fraudsters.
With the desperation for new treatments, a tsunami of research crashes onto preprint servers or is hurried into publication, breaching the dam wall of careful peer review.
Surfing that tsunami, fraudsters have seized the opportunity to make a name for themselves, particularly when it comes to publications that tout ivermectin – the worming medication – as a miracle cure for COVID.
With barriers breached, it has been left to self-appointed data detectives to sniff out fishy science.
Now, in a letter published in Nature Medicine, a group of five are pleading with the academic community to fortify the wall around the pinnacle of evidence-based medicine – the “meta-analysis”. This type of study surveys the field of evidence and has the power to sway medical opinion. One ivermectin meta-analysis, by British statistician Andrew Bryant at Newcastle University and colleagues, has already been weaponised this way. The problem is that this meta-analysis included questionable studies, spotted by the data detectives.
The data detectives’ plea is that as a condition of publishing a meta-analysis, journals mandate that the raw data be made available for scrutiny. They acknowledge it won’t be easy. “We recognise… we are calling for change to nearly universally accepted practice over many decades, but the consequent potential for patient harm on a global scale demands nothing less.”
*
Proving anything in medicine is tough. Look at the back-and-forth debates over cholesterol and statins. To test a treatment, many types of studies are done, among them observational studies, in which researchers simply track what happens to patients who did or didn’t receive a treatment, and randomised controlled trials, in which patients are randomly assigned to a treatment group or a control group.
The randomised bit is crucial! If patients are not randomly assigned to get the treatment or the placebo, the results can be misleading. For instance, in some supposedly randomised trials of ivermectin, the control and treated groups were not equally infected with COVID to begin with, which makes any comparison between them meaningless.
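To see why, here is a toy simulation (all numbers invented) of a drug with zero real effect. When milder cases drift into the treatment arm, the useless drug appears to save lives; random assignment removes the illusion.

```python
import random

random.seed(1)

def death_rates(randomised: bool, n: int = 100_000):
    """Simulate a trial of a drug with ZERO real effect; return the
    death rate in the treated arm and in the control arm."""
    deaths = {True: 0, False: 0}
    totals = {True: 0, False: 0}
    for _ in range(n):
        severe = random.random() < 0.5        # half the patients are severely infected
        if randomised:
            treated = random.random() < 0.5   # coin-flip assignment
        else:
            treated = not severe              # milder cases drift into the treatment arm
        died = random.random() < (0.30 if severe else 0.05)
        deaths[treated] += died
        totals[treated] += 1
    return deaths[True] / totals[True], deaths[False] / totals[False]

print("randomised:     treated %.3f vs control %.3f" % death_rates(True))
print("non-randomised: treated %.3f vs control %.3f" % death_rates(False))
# The non-randomised "trial" makes the useless drug look dramatically protective.
```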
At the top of the hierarchy of evidence sits the meta-analysis. It is a “study of the studies”. It surveys everything in the field, collecting studies that show no effect (though these tend to be under-reported) as well as significant findings. It gives the greatest weight to the larger studies and then wields a statistical magic wand to divine the message hidden in the data.
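In practice the “magic wand” is usually inverse-variance weighting: each study’s effect estimate counts in proportion to its precision, which is why large trials dominate. A minimal sketch, with invented numbers standing in for real trial results:

```python
import math

# Hypothetical per-study results: (deaths_treated, n_treated, deaths_control, n_control).
studies = [
    (5, 100, 10, 100),
    (12, 300, 20, 300),
    (2, 50, 4, 50),
]

weighted_sum = weight_total = 0.0
for dt, nt, dc, nc in studies:
    log_rr = math.log((dt / nt) / (dc / nc))   # log risk ratio for this study
    var = 1/dt - 1/nt + 1/dc - 1/nc            # its approximate variance
    w = 1 / var                                # inverse-variance weight: precise studies count more
    weighted_sum += w * log_rr
    weight_total += w

pooled_rr = math.exp(weighted_sum / weight_total)
print(f"Pooled risk ratio: {pooled_rr:.2f}")   # below 1 suggests the treatment reduces deaths
```

(Real meta-analyses layer random-effects models and bias assessments on top of this, but the weighting principle is the same.)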
The most rigorous meta-analyses are those carried out under the umbrella of Cochrane and published in its Cochrane Library. Would-be reviewers submit their proposed research question and must pass an intense multi-stage vetting process to ensure rigour, transparency, reproducibility and lack of bias.
“It is a rigorous and onerous process – hence some researchers will publish their systematic reviews in other peer-reviewed journals,” explains Andrew McLachlan, dean of pharmacy at the University of Sydney, who has authored Cochrane reviews.
Meta-analyses have been throwing their weight around since the 1970s. For instance, after a meta-analysis divined that anti-coagulants like warfarin helped people having a heart attack, they became, for a while, standard medical practice. In the case of Sudden Infant Death Syndrome, parents have been advised since the 1990s that the safest position for an infant to sleep is on their back. It’s often claimed that had a meta-analysis been done in the 1970s, it would have been clear even then that putting babies to sleep face down increases their risk, and tens of thousands of infants would not have died. In more recent times, it was a meta-analysis that established hydroxychloroquine was doing more harm than good in treating COVID.
But a meta-analysis is only as good as the individual studies going into it. As the saying goes: “Rubbish in, rubbish out.” And while those carrying out a meta-analysis take care to weigh studies based on their quality, they are not on the lookout for outright fraud.
It may sound touchingly innocent, but as Paul Garner, co-ordinating editor of the Cochrane Infectious Diseases Group, put it, “everything is based on trust in science”.
*
That’s where the data detectives come in. The fabulous five – including three irreverent Aussies – do not hail from a single institution and have no formal collaboration. Three don’t even have a PhD yet. What unites them is a shared hobby of sleuthing suspicious papers, which is how they all met on Twitter.
“We’re five inherently suspicious people,” explains Kyle Sheldrick – a medical doctor currently doing a PhD on back pain at the University of New South Wales, Australia, and the senior author of the paper.
The two who do have PhDs – Nick Brown, a Brit now based at Linnaeus University in Sweden; and Australian James Heathers, now chief scientific officer at a biotech company called Cipher Skin in Colorado – already enjoyed a reputation as “data thugs” for exposing fraud in psychology papers. One of their sleuthing successes resulted in the retraction of 15 papers.
The other three authors were galvanised by the pandemic: Jack Lawrence, a medical student at the University of London who also moonlights as a “journalist and disinformation researcher”; Gideon Meyerowitz-Katz, an experienced epidemiologist with a master’s degree in Public Health who is now carrying out a PhD in the use of electronic devices in diabetes at the University of Wollongong in Australia; and Sheldrick.
Last November they were mesmerised by a paper uploaded onto the preprint server Research Square. It was by Ahmed Elgazzar, a Professor Emeritus at the University of Benha, in Egypt.
The paper was dazzling for two reasons. It reported that ivermectin reduced mortality by 90%! And with 600 people, it was the largest randomised controlled trial to date.
Though yet to be vetted by peer review, it was clearly a game changer. Lawrence was assigned the paper to analyse as part of his master’s thesis. He accessed the raw patient data from a server by trying the access code “1.2.3.4” and soon noticed glaring problems – such as patients who had died before the study began. So much for informed consent. He connected with Meyerowitz-Katz, a like-minded sleuth he’d met on Twitter, and with the data thugs Brown and Heathers. They found that the Elgazzar data produced means, standard deviations and patient recovery times that were “incredibly unlikely, verging on impossible”.
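The group’s exact battery of tests isn’t detailed here, but Brown and Heathers are known for simple consistency checks such as their GRIM test, which asks whether a reported mean could even arise from whole-number data (ages, say) in a sample of the stated size. A minimal sketch:

```python
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """Could a mean reported to `decimals` places arise from n integer values?

    The true mean of n integers is k/n for some whole number k, so the
    reported (rounded) mean must sit within rounding distance of such a fraction.
    """
    tol = 0.5 * 10 ** -decimals        # half a unit in the last reported digit
    k = round(mean * n)                # nearest achievable integer total
    return abs(mean - k / n) <= tol

# A mean age of 27.43 from 18 patients is impossible for whole-number ages:
print(grim_consistent(27.43, 18))   # False
print(grim_consistent(27.44, 18))   # True (494/18 = 27.444..., rounds to 27.44)
```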
Lawrence also enlisted the media. Together with Melissa Davey at the Guardian, he sent a list of questions about the data to Elgazzar.
Meanwhile Sheldrick had also been captivated by the Elgazzar paper. When he cast his eye down the table of individual patients, he noticed something bizarre. Their descriptors – age, sex, medical conditions – seemed to repeat in blocks of 18. It was, he says, “audacious fraud”. He joined forces with the other sleuths – Nick Brown had also picked up on the repeating blocks – and they contacted Research Square.
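Repetition like that is easy to hunt for once you have the raw table in hand. A toy illustration (the actual spreadsheet’s layout isn’t reproduced here): hash every run of consecutive rows and flag any block that appears twice.

```python
def find_repeated_blocks(rows, block_size):
    """Return pairs of starting indices where a run of `block_size`
    consecutive rows appears more than once in the table."""
    seen = {}
    hits = []
    for start in range(len(rows) - block_size + 1):
        key = tuple(map(tuple, rows[start:start + block_size]))
        if key in seen:
            hits.append((seen[key], start))
        else:
            seen[key] = start
    return hits

# Toy table of (age, sex) records; rows 0-2 repeat at rows 3-5.
table = [(45, "M"), (62, "F"), (38, "M"),
         (45, "M"), (62, "F"), (38, "M"),
         (71, "F")]
print(find_repeated_blocks(table, block_size=3))   # [(0, 3)]
```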
On 14 July 2021, the preprint server withdrew the article citing “ethical concerns” and launched a formal investigation. Colombian paediatrician Eduardo López-Medina captured the response of many doctors around the world: “I was shocked, as everyone in the scientific community probably were. It was one of the first papers that led everyone to get into the idea ivermectin worked in a clinical-trial setting.”
*
While the Elgazzar paper had a major impact in its seven months online, having been downloaded 100,000 times, it had a second life once incorporated into meta-analyses. At least the preprint carried a warning that it was unvetted. Incorporation into a meta-analysis effectively laundered the data, allowing it to skew the final result.
There have been several meta-analyses of ivermectin and its ability to prevent hospitalisations and death.
A Cochrane review, published on 28 July, did not use the Elgazzar data – in part because Elgazzar did not include a true control group. Rather, he compared a group treated with ivermectin to a group treated with hydroxychloroquine. There were also concerns about whether the study participants had been properly randomised, explains review author Stephanie Weibel. She and her co-authors concluded: “We are uncertain whether ivermectin compared to placebo or standard of care reduces or increases mortality.”
Other meta-analyses did use the Elgazzar paper. One was led by statistician Andrew Bryant at Newcastle University, UK, and published in the American Journal of Therapeutics. Their meta-analysis claimed ivermectin reduced the risk of dying of COVID-19 by 62%. One of the co-authors was Tess Lawrie, a doctor and founder of the British Ivermectin Recommendation Development Group (BIRD). While the authors of the meta-analysis claim they have no conflict of interest, Lawrie has a large online following of vaccine sceptics to whom she spruiks the dangers of vaccines while holding up ivermectin as an effective treatment. Indeed, she’s weaponised the meta-analysis as part of her pro-ivermectin, anti-vaccine campaign – views that are at odds with the impartiality expected of someone carrying out a meta-analysis. As Garner put it: “Cochrane has checks and balances to prevent using authors with a particular positionality.”
In August Bryant told Nature that even if the Elgazzar study is removed, the meta-analysis would still show that ivermectin causes a major reduction in deaths from COVID-19.
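That claim amounts to a leave-one-out sensitivity analysis: drop each study in turn and re-pool the rest. A sketch with invented numbers, in which “study 0” stands in for a suspect trial with an outsized effect:

```python
import math

def pooled_log_rr(studies):
    """Fixed-effect, inverse-variance pooled log risk ratio."""
    num = den = 0.0
    for dt, nt, dc, nc in studies:
        log_rr = math.log((dt / nt) / (dc / nc))
        w = 1 / (1/dt - 1/nt + 1/dc - 1/nc)   # inverse-variance weight
        num += w * log_rr
        den += w
    return num / den

# Invented (deaths_treated, n_treated, deaths_control, n_control) tuples.
studies = [(2, 300, 30, 300), (18, 200, 22, 200), (9, 150, 11, 150)]

print("all studies: RR = %.2f" % math.exp(pooled_log_rr(studies)))
for i in range(len(studies)):
    rest = studies[:i] + studies[i + 1:]
    print("without study %d: RR = %.2f" % (i, math.exp(pooled_log_rr(rest))))
# Dropping the suspect study 0 shifts the pooled risk ratio sharply towards 1.
```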
Andrew Hill, a statistician at the University of Liverpool, UK, has come to quite a different conclusion. Hill also used the Elgazzar paper in his meta-analysis, which was published in July in Open Forum Infectious Diseases. It reported a 56% reduction in mortality using ivermectin. But in August, Hill published an “Expression of Concern” in the same journal, which concluded: “Subsequently, we and the authors have learned that one of the studies on which this analysis was based has been withdrawn due to fraudulent data. The authors will be submitting a revised version excluding this study, and the currently posted paper will be retracted.”
Hill was caught quite unawares by the Elgazzar paper. A veteran of meta-analyses in infectious diseases, he wistfully recalls the power of the studies he conducted to show the effectiveness of HIV medications whose rapid rollout saved countless lives: “They were gold-standard studies carried out by pharmaceutical companies.”
His mistake was taking the authors of the papers in his ivermectin meta-analysis at their word. Each author was sent a checklist to verify whether they had, for instance, properly randomised the assignment of their subjects to treated versus untreated arms.
Now, says Hill, “I’ve gone from leading a research group to a detective agency.” He adds: “It’s a tribute to the Australians [Meyerowitz-Katz and Sheldrick] and the other data detectives.”
Based on the revelations of the data detectives, he asked the authors of the papers used in his meta-analysis for their raw patient data. One by one, papers that showed a significant protective effect of ivermectin, including one by Iranian scientist Morteza Niaee reporting a hefty effect, have fallen.
“It seems the whole ivermectin story is a house of cards,” says Hill.
*
The problems with dodgy ivermectin studies are legion. The discredited Chicago-based data company Surgisphere published a preprint purporting to show the beneficial effects of ivermectin in April 2020. Now we know their database was fabricated, but not before it led to the publication of two papers in prestigious journals: a New England Journal of Medicine paper showing that cardiovascular drugs known as ACE inhibitors did not worsen the prognosis for COVID outcomes, and another in the Lancet showing that the drug hydroxychloroquine did. (As it happens, the Surgisphere authors guessed right on these last two.)
It appears there are more retractions to come. In September Buzzfeed revealed problems with a paper by retired Argentinian professor and endocrinologist Hector Carvallo, of the University of Buenos Aires, reporting benefits of ivermectin.
So are unvetted preprints the problem?
Hill points out that preprints came to the fore during the Ebola crisis. “We couldn’t wait four months for a publication in the New England Journal of Medicine – the normal publishing methods were no longer fit for purpose.”
Meyerowitz-Katz agrees preprints are necessary. “I think during the pandemic, they’ve become a vital way of disseminating scientific information quickly. For example, the RECOVERY trial, showing the beneficial effects of dexamethasone, came out as a preprint over a month before the paper was published, and that probably saved thousands of lives.”
For the data detectives, the best way to sandbag the integrity of science against the deluge of fraudulent or inept papers is for the authors of a meta-analysis to request the raw patient data. It was individual patient data that allowed the data detectives to sniff out the problems in the Elgazzar paper. Likewise, the Surgisphere data provided the means of its own undoing.
Explains Sheldrick: “We won’t be able to detect sophisticated fraud but these frauds were blazingly obvious from the raw data.”
Some worry about the logistical and political difficulties of requesting individual patient data from researchers in different countries who may have different processes around research ethics. McLachlan believes it could add months to completing a meta-analysis. “The broader point here is whether researchers will be able to release individual data, whether they choose to release data or if it is in a suitable form to use in a meta-analysis.”
“It’s a good hypothesis to throw out,” says Paul Garner. “But it’s a little unnuanced – the practicalities and politics of doing meta-analyses are very important.”
In his view, the Cochrane approach of setting aside ‘fishy’ studies is the quickest safeguard.
“The Egyptian study was obviously fake. We have never seen that sort of effect size [90%] with an antiviral. The reporting of the methods was terrible. Studies with this degree of ‘smelliness’ should be put on hold.”
Yet Hill did succeed in obtaining raw patient data from the contributors to his meta-analysis. He agrees with the data detectives. In the current era, he says, “medical journals need to be enforcers.”
Some peer-reviewed journals do require the raw patient data. And following the imbroglio with Surgisphere, the Lancet has made changes. In September 2020, they announced in an article titled ‘Learning from a retraction’ that “editors will ensure that at least one peer reviewer is knowledgeable about the details of the dataset being reported and can understand and comment on its strengths and limitations in relation to the research question being addressed”.
But the battle is far from won, says Sheldrick. “It’s a step in the right direction from one journal, but we need whole-of-system change.”
Meyerowitz-Katz has also found requesting individual patient data to be less onerous than many think. “It would add work on the part of the reviewers, certainly, but having actually gone through this process with ivermectin I think that there’s no reason it would enormously delay reviews. Most of the authors we’ve communicated with who have actually conducted studies respond within 1–2 days, and in many cases we’ve gotten data less than an hour after they reply to the first email.”
Sheldrick estimates that working through the individual patient data took him between 40 and 120 hours.
One area that everyone agrees will be difficult is requesting data from pharmaceutical companies, because individual patient data is part of their intellectual property.
However, Meyerowitz-Katz says: “Our suggestion would be that if this was standard practice in all clinical trials, it would not be an issue. It’s only a problem because right now no one does it – you have to start somewhere!”