Putting science under the microscope reveals not all is well

This is one of a five-part series from Cosmos Weekly, investigating the peer review process.

For many people, the image of the scientist or doctor in a white coat represents hope, a clear, questioning mind, and also purity and integrity – freedom from contamination.

The covers of international scientific publications – with their out-of-this-world images and question marks behind every sentence – also suggest accuracy, insight, veracity and substance.

Every so often, news comes along that shakes the foundations of how we see science, scientific endeavour and the researchers themselves.

One of the most obvious and publicly discussed current examples is the alleged fraud in images used for a pivotal Alzheimer’s paper published in 2006 by Sylvain Lesné, originally uncovered by Matthew Schrag and investigated further by Science last year.

The problem of scientific fraud seems to be globally pervasive, with cases of alleged academic misconduct relating to the social lives of spiders in Canada, microplastic consumption in reef fish in Australia and a “Himalayan Fossil Hoax” by an established Indian palaeontologist, but called out by an Australian geology professor.

In fact, history is riddled with examples of the not-so-innocent exploits of unprincipled scientists, who have allowed personal interests to interfere with their better judgement.

How are academic transgressions exposed and what happens when they are?

Two websites – Retraction Watch and PubPeer – have emerged to attempt to plug the holes in the leaky bucket of peer review.

Retraction Watch was co-founded in 2010 as a blog site by Dr Ivan Oransky, distinguished writer-in-residence at New York University’s Arthur Carter Journalism Institute, and Adam Marcus, managing editor of Gastroenterology & Endoscopy News. It was originally prompted by Marcus’s coverage of an internationally infamous case of scientific misconduct and healthcare fraud by an American pain specialist, Scott Reuben.

Ivan Oransky.

“We started trading emails about retractions, and it turned out there were far more stories than we thought there were, and no one was telling them,” says Oransky. “And when we did see the retraction notices, they were often somewhere between opaque and misleading – a big transparency problem for science.”

As of 10 October 2022, the Retraction Watch online database lists 36,104 retracted papers and 2249 expressions of concern (of which 772 have been upgraded to a full retraction). Of these, 475 retractions and 41 expressions of concern (with three updated to retraction), are linked to Australia.

This might sound like a lot, but by Oransky’s reckoning, it barely scrapes the surface. “We estimate that probably one in 50 papers should be retracted. Right now, it’s about eight out of 10,000, so just under one in 1000 is retracted.”

In arriving at that figure, Oransky refers to the work of Dr Elisabeth Bik and other forensic specialists who analyse published papers for areas of concern. These can take the form of duplicated or manipulated images, doctored or misreported data, plagiarism, or poor scientific processes and methods used to gather or analyse the data and form conclusions.


A microbiologist by training, Bik is now world-famous for her skills in detecting image manipulation in scientific papers. Her journey began when she discovered her work had been plagiarised in an online book. As she looked around, she found more and more instances of blatant plagiarism – papers which had stolen text from others. About a year later, in 2014, as Bik was looking through a PhD thesis with plagiarised text, she found a photograph of a ‘western blot’ – identical to another photo included in a previous chapter for a different experiment – that had been cropped and rotated.

A western blot is often used in research to separate and identify proteins.

“I thought, this has been done intentionally. It’s been put upside down to mislead the reader,” explains Bik. “I was very angry that people were cheating in science, because to me, science is about finding the truth. And so, I checked another 100 papers.”

This grew to about 20,000 papers over the next 2–3 years (work she completed in addition to her full-time job), at which point Bik knew she was onto something big.

Fast forward to 2022. Bik has now reviewed more than 100,000 papers, speaks internationally about her work and still manages to carefully trawl through images and data to detect signs of potential manipulation – a job now made easier by software tools such as ImageTwin, which automate some of the key steps.
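Tools of this kind compare images computationally rather than by eye. As a toy illustration only – this is not ImageTwin's actual method, and the panel data below are hypothetical – even a simple "average hash" can flag an exact duplicate once a 180-degree rotation is undone, the kind of reuse Bik describes:

```python
# Toy sketch: flag a duplicated image panel that has been rotated 180
# degrees before being presented as new data. Real tools use far more
# robust perceptual features; this only catches exact copies.

def average_hash(img):
    """Bit pattern: 1 wherever a pixel is above the image's mean value."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def rotate180(img):
    """Rotate a 2D pixel grid by 180 degrees."""
    return [row[::-1] for row in img[::-1]]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Two 4x4 grayscale "panels": panel_b is panel_a rotated 180 degrees,
# mimicking a duplicated western-blot image reused for a different result.
panel_a = [[10, 200, 30, 40],
           [50, 60, 220, 80],
           [90, 100, 110, 240],
           [130, 140, 150, 160]]
panel_b = rotate180(panel_a)

# Undo the suspected rotation, then compare hashes: distance 0 means
# the two panels are pixel-identical copies.
dist = hamming(average_hash(panel_a), average_hash(rotate180(panel_b)))
print(dist)  # 0 -> identical once the rotation is undone
```

Production systems also have to cope with cropping, contrast changes and compression artefacts, and to search for matches across millions of published figures – which is why automation matters.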

Elisabeth Bik. Credit: Clara Mokri, Santa Cruz, CA

“It’s a balance,” she says. “I hate that people cheat, but I also have a drive to warn others, to warn people about papers that might have areas of concern.”

Bik typically reports at least one detection a day, often on social media. She hopes this transparency and openness is helping create a more encouraging environment for whistle-blowers and others who notice discrepancies to come forward.

“I think more and more people are becoming aware that this happens. So, when they see manipulation or something else that looks strange in a paper they are reading, they aren’t just thinking, ‘Oh, I’m probably just imagining it’. Now they feel confident enough to raise it as an issue.”

In a 2016 research paper, Bik reported that almost 4% of a sample of 20,621 papers published in 40 scientific journals between 1995 and 2014 contained some form of inappropriate image manipulation.

Bik and other so-called “sleuths” (such as Matthew Schrag) have emerged to plug at least some of the holes in peer review’s leaky bucket, identifying and calling out examples of shoddy work that undeniably undermine science and the scientific process.

There are some very good reasons why posting findings on social media works.

First, it’s public and transparent, and, as Bik notes, it brings attention to the prevalence of the issue.

“There’s a power in putting these things online,” she says. “Social media has had a role to play in the successes of many campaigns such as Black Lives Matter, Dr David Dao’s forcible removal from United Flight 3411 and other socially important matters and movements.”

Second, sharing concerns publicly enables the scrutiny of many experts globally – far more than just the normal two or three reviewers at the final pre-publication stage.

PubPeer is another site that has emerged organically to address some of the known shortcomings of the publication system: it increases transparency and gives the scientific community a space to voice concerns about data or text in articles, and to provide and respond to feedback. Its article database links to published articles as well as to pre-prints hosted on servers such as arXiv. It also offers a handy browser plug-in that lets readers see comments on a specific article while viewing it on a journal’s website.

Through the site, members of the scientific public – in many cases people like Bik, but also other topic specialists – comment on a specific journal article, identifying areas of concern in its text or images. The comments can be major or minor, negative or positive, and are not moderated for scientific accuracy, “so factual comments conforming to our guidelines may still be wrong, misguided or unconvincing”, explains the PubPeer website.

“Many comments concern research or presentation methods that users consider not to be best practice. Every comment should be evaluated on its merits by the reader.”

Authors are then encouraged, and given the means, to respond to comments – in some cases defending their work, providing extra information, or being alerted to a genuine accidental mistake, such as a mislabelled image leading to a duplication within a journal article.

Typing “Bik” into the search bar on the site quickly reveals, however, that many identified errors are unlikely to be accidental. The sheer number and pace of Bik’s contributions calling out manipulated images in academic journals is shocking – the discoveries are almost daily. What’s worse is how many of them are subsequently marked with a PubPeer tag reading ‘Retracted’ or ‘Expression of Concern’.

It’s not a witch hunt, though, stresses Bik.

“I try not to punish the authors. I try to make it about the papers but raising the issues in public makes it much more difficult to sweep the problems under the rug.”

It seems there are several points in the process at which interested parties can quietly sweep problems under the rug rather than correct the record.

Oransky explains that journals are often reluctant to retract papers, though the reasons are not always clear-cut. “In some cases, journals and publishers are honest about it,” he says. “They say that lawyers are involved.”

It’s not hard to understand why individuals and institutions – whose livelihoods, reputations and research income depend on publication – might fight accusations of misconduct. Arguably, the pressure on individuals from a system based on a “publish at all costs” mentality has contributed to many of the instances of misconduct in the first place.


It also doesn’t look great for a publisher – who trades on the strengths of a peer-review process – to be involved in frequent retractions. “This requires an acknowledgement that peer review is missing an awful lot,” says Oransky. It’s an obvious conflict of interest.

Overall, correcting the record is not high on the to-do list, according to Oransky. “It’s not a priority for journals, it’s not a priority for researchers and it’s not a priority for universities.” Retraction Watch, at least, brings attention to retracted articles and items of concern so they can’t be hidden away.

The publisher problem rings true for Bik, who has sent many letters to editors of various journals highlighting concerns with papers.

“One thing I have learnt,” says Bik, “is that it can take years before an editor even looks at it or starts doing something. In two-thirds of the cases, nothing happens – even after five years have gone past.”

It’s not always clear why responses take so long – that is, if there’s any action at all – but Bik can easily list a number of likely scenarios: a lack of knowledge amongst inexperienced editors, a denial about the reality of manipulation and fraud in science, a general feeling of being overwhelmed by the sheer number of issues being reported and, of course, conflicts of interest.

The problem is, though, scientific fraud affects people, from early career researchers who become scapegoats for their supervisors, to sleuths like Bik who are frequently the target of legal threats and attempts to discredit their own research, to families like the Lonergans, living in false hope for an Alzheimer’s cure “just around the corner”.


This story is part of a five-part series from Cosmos Weekly on peer review.

Next week: Clare Kenyon interviews Elisabeth Bik and Ivan Oransky.
