New publisher creates a revolution in peer review with its focus on importance and scientific rigour.

By Merlin Crossley

Deputy Vice-Chancellor Academic Quality at UNSW.

Peer review protects us from bad science. That’s important.

But what about when peer review meets good or even great science?

Then it slows down publication. This is a problem for society and researchers, especially for early-career academics working to establish themselves and move off short-term contracts.

The visionary next-generation journal eLife is trying to help. They’ve figured something out: when peer review encounters good or great science, its job is to tell us which is which. Their “reviewed preprint” assessments are designed to do exactly that, and to do it quickly.

Let’s first reflect on the importance of peer review. When COVID hit, researchers with varying expertise rushed in. Not all were equal to the task of doing flawless science on infectious diseases at full speed. Some posted preprints quickly to public archives. Later, peer review in scientific journals swung into action (as did trial by Twitter; during COVID, everyone was interested). Myths were busted. Now we know which drugs and vaccines work, and the modes of transmission. Hooray for science and peer review!

But COVID-like crises are rare. Mostly science works in a different way. A lot of science is pretty good or even great to start with.

Really?

Yes. Professional scientists do an undergraduate degree, a doctorate, and four or five years of specialist post-doctoral training; then a few start their own labs and keep in contact with colleagues and peers across the world. They develop expertise and constantly test ideas at conferences.

Many biology projects take about the time of a doctorate or postdoc – three to five years. Manuscripts may have twenty or more authors – all fiercely protective of their scientific reputations. So, when a paper is ready for submission, it has been honed to what the authors believe is perfection. Of course, scientists, like all humans, are not perfect. Some feel huge pressure to exaggerate their work, some are vain, delusional, or hurried, but in most cases the papers are pretty good.

How does peer review of good or great science work?

The authors send their paper to a journal they feel is “appropriate”. This means two things: first, a journal that covers their discipline (or all disciplines); second, one that has the maximal prestige. If the authors think they’ve discovered something big, they go for Nature or Science. If they think they’ve provided a solid brick in the wall of knowledge, but recognise people outside their discipline won’t be interested, they go for a discipline-specific journal (but still one with the highest impact/prestige factor they think matches the importance of their results).

The editors decide whether the authors have aimed correctly. They “desk reject” papers if they think the authors have aimed too high, saying “while your work is sound, it is better suited to a more specialist journal.” These words hurt, but that’s life.

At some point – often after a lot of wasted time – the paper gets to a journal on the ladder where the editors say “yes, this looks OK for our journal”. They send the manuscript to expert reviewers, who assess the rigour, suggest improvements, and offer an independent opinion to the editors about whether the results are solid and significant enough for the journal.

How this process runs depends on the stature of the journal. For lower-ranked journals the reviewers may make just a few suggestions – especially if they previously reviewed the paper for a top journal. But increasingly, reviewers for the top journals will ask for extra experiments to boost the scope and rigour. They may say “this is true in mice and human cells, but what about actual humans?”, or “there are three controls, but it would be even stronger with four”. Reviewers sometimes find it hard to out-think twenty peers who have toiled on the project for five years, but it is easy to ask for more work. Often there are multiple rounds of reviewing, and the extra work exceeds any uplift in the value of the paper.

The two big problems with modern peer review are the slow successive “desk rejections” as a manuscript descends a connoisseur’s ladder of prestige, and the mountains of extra experiments required to prove the work is worthy of the top journals – journals that grow ever more competitive as science expands globally.

It’s a nightmare. Top journals often require years more work. Only the best-funded labs can compete.

What eLife seeks to do is revolutionary and simple. The reviewers will now provide an “assessment” that rates the paper on importance and scientific rigour. Authors can address the comments to the extent they wish, then hit “publish” and let history decide.

Editorial and peer review at eLife will no longer be just about nipping bad science in the bud and setting a hurdle for acceptance. The journal will provide a non-binary, upfront peer rating – not actual stars, but an assessment using formal category wording. This is great because it makes explicit what peer review already does when it meets good or great science: at present it provides a rating of soundness and significance only via the proxy of journal prestige!


Some people think all this will undermine the prestige of eLife as a journal. So what? Well, sadly, we scientists care about journal prestige – our star ratings. Not for their own sake but because we are locked in a career-long competition for resources. The prestige of the journals we publish in is used by independent committees who can’t realistically read and appreciate the significance of our very specialist work – work where we, not the committees, are the world experts. Now committees can refer to eLife assessments, and hopefully eLife will continue to be highly respected.

Some say scientists should just post everything to the open preprint archives and rely on post-publication review. But post-publication review takes too long. This is bad for society if a genie of flawed science gets out of the bottle. It’s also bad for junior researchers building their reputations. If someone posts a flawless manuscript, it may well attract no comments – and no comments means no recognition!

Conversely, trial by Twitter worked during COVID, but in normal times most people just aren’t interested enough or bold enough to provide critiques post-publication, and anonymous critiques can get nasty.

Ultimately, I like the eLife system because authors can decide. They decide whether to go to eLife in the first place. Then, when they see the assessment, they can hit publish if they are happy, revise and resubmit, or go back into the conventional publishing system if they prefer.

eLife is trying to provide an alternative in the internet age. I hope it works. Even if it doesn’t, at least it will stimulate more thinking in this challenging area.

Professor Merlin Crossley is Deputy Vice-Chancellor Academic Quality at UNSW. He has published two papers in eLife.

This article was first published in Campus Morning Mail.
