I’ve been reading a lot about peer review lately.
I’ve previously been of the opinion that peer review is far from a perfect process, and that many (researchers and laypeople alike) put undue trust in the system. Arguments in favour of peer review focus on the idea that it functions as a sort of ‘quality control’ – a ‘gatekeeper’ or ‘checkpoint’ process that separates the bad research from the good, of which only the latter is allowed to be set loose upon the scientific community to form part of the scientific literature and lay the groundwork for future research.
In a 2013 article on the Laboratory News website, the Vice President of Health and Medical Sciences at Elsevier, Peter Harrison, is quoted as saying:
Publishing in high-impact peer-reviewed journals helps researchers’ careers as it puts a ‘quality stamp’ on their research and helps them get visibility and recognition.
It’s not surprising that scientific publishers endorse the idea of ‘peer review as quality control’. Scientific publishing is, after all, the main provider of the service, and it is the industry that needs peer review the most. Since peer review endows research with a mark of approval of its perceived quality, scientific publishers who wish to appear reputable – and to be seen as taking their role as guardians of scientific information seriously – must ensure that the articles they publish are peer reviewed.
However, as efforts such as Retraction Watch have shown, peer review is far from a perfect filter, and even the most prestigious and well-respected journals have failed in their quality control, allowing both since-debunked and outright fraudulent research to bear their name. Still, it is possible that although peer review is not perfect, it manages to keep a lot of sub-par research away from reputable journals.
Schroter et al. (2008), however, report that peer reviewers for the British Medical Journal detected very few of the errors, both major and minor, in the papers they had been sent to review – results suggesting that peer reviewers are, on the whole, unable to find all the errors in the manuscripts they review. This finding is not surprising, since peer reviewers are, as the name suggests, peers. Like the authors of manuscripts, reviewing peers are also hard-working researchers, with all the responsibilities that brings. It has been estimated that a good peer review can take anything from five to eight hours of a reviewer’s time – the equivalent of a large part of a normal working day. It is therefore unlikely that most reviewers, however well-meaning, will be able to catch most errors in the papers they have been sent to review; they simply do not have the time.
So peer review is not perfect, and that is without even going into all the other confounding issues: author and reviewer bias, and deliberate attempts to mislead. This is part of the reason why I have been – and still am – skeptical of peer review: the system is simply not as good as people seem to think it is.
What are the alternatives? Opinions differ. Some argue for a post-publication peer review system, where manuscripts are published first and reviewed second. I have traditionally been a fan of this system, since it is (under ideal conditions) meritocratic: only the best papers will prosper. As initiatives like PubPeer have shown, it can also be very efficient at finding questionable conclusions and deliberate attempts to mislead in the scientific literature. Similarly, as physicists’ and mathematicians’ use of arXiv has shown, pre-publication peer review can also be a very constructive way of improving papers and disseminating scientific results prior to formal publication. These kinds of systems, however, require a culture that is open to them, and I worry that perhaps the life sciences are not.
Nature Publishing Group trialled a pre-publication system of peer review in 2006, but found that enthusiasm was low. While there was sufficient interest for Nature to trial the system, once it was up and running it failed to deliver: potential reviewers seemed reluctant to review papers, or indeed to leave comments to improve the manuscripts, and the comments that were left were of ‘limited use’ according to editors.
In some sense, the results Nature obtained from their experiment make perfect sense. As with the problems with peer review mentioned above, academics are busy. Peer review might work in its current form because manuscripts are sent by the editors to specific reviewers, who thus are ‘assigned’ the job of reviewing the paper. Being personally given responsibility for a paper, a reviewer with some time and/or altruism to spare may be more incentivised to perform a proper review, especially when the editor can send reminders, nudging them to get on with it. Conversely, in an open system these direct assignments no longer exist, and all of a sudden reviewing a paper becomes ‘someone else’s job’ – a problem that each individual potential reviewer can push aside. In an open review system, where anyone can be a reviewer, who is going to step up to the task? Very few, would have been my guess, and, indeed, this is what Nature found. When peer review is everyone’s responsibility, it becomes the responsibility of none.
Similarly, I think this is part of the reason why peer review is seen as a method of quality control. Even if there is little evidence of its success in that capacity, it is a comforting thought. It is easier to open your article of choice, rest assured that its contents have been peer reviewed, and assume that everything it contains is reasonably close to the truth, than to read every article with an unerringly critical eye. It is draining to question everything, especially for long periods of time. In that sense, the idea of peer review as a gatekeeper of truth may be more comforting than true.
Peer review is not perfect, and, so it seems, neither are the alternatives. So where does that leave us? Where does that leave me? As I mentioned at the beginning of this post, I am traditionally very skeptical of peer review, and I want to believe in the merits of pre- or post-publication efforts to source reviews from a larger pool of peers. It would be great if peer review were a community effort, to the same degree that publishing seems to be.
When I first went into science, this was the kind of idealised view I held: that science was something scientists did together, where everyone contributed and received their due. With more experience, I now realise how naïve such idealisations are, and I think ideas about peer review are similarly so. Peer review in its current form is imperfect because scientists are busy, and crowd-sourcing efforts will be brought down for the same reason: scientists are busy. It’s easier for a busy person to deal with one paper being added to their to-do list than to find the time to contribute to a never-ending pool of scientific results awaiting review.
I’m tempted to think that more honour is needed in science, but this, too, would be unrealistically idealistic. As much as I want to propose a new system of peer review – a perfect one – many such ideas have already been proposed, and they all fall foul of the same assumption: that a good idea is all it takes to change the culture and the peer review process. Instead, I will conclude with this insight: the more I read about peer review, the more I realise that I don’t know how its problems should be solved.
But I think a good place to start would be to educate the consumers of scientific literature about the pitfalls and imperfections of peer review – to make them understand that peer review is not a mark of unerring quality. Second, peer review should worry less about impact and more about the methodological soundness of the work. This is what PLOS ONE does, but I am also aware that there are mixed feelings about this in the community, with some people worrying that, without some kind of editorial prediction of impact and importance, the literature will be flooded with inconsequential papers. However, there are two problems with this view. One, there is no information on how many ‘bad’ papers peer review is keeping away from the community; if you want to publish your work, you will always find a journal willing to publish it, so this argument doesn’t hold water. Two, there seems to be some redundancy in the argument about what is ‘important’: on one hand, peer review is supposed to identify important papers and bring them to attention; on the other hand, citations are supposed to do exactly the same. And then there is the additional question of what counts as important. Is everything new important? What about replications? They are important too. And what one scientist finds ‘boring’ is the topic of choice for another.
In fact, I think my readings about peer review have made me realise that when we speak of peer review, its problems and how to fix these – that we’re actually asking the wrong questions. It’s not the peer review system that is flawed; it’s the publishing system itself. The problems with peer review are the symptom, not the cause. Instead of seeing charities and Research Councils spending money on new journals and new initiatives to improve peer review, what I would like to see is an initiative that’s trying to fix the problems at the bottom of it all. And, the way I see it, journals are part of the problem, since they obsess over ‘importance’ and ‘impact’ and worry more about impact factors than about negative results and attempts to replicate previous findings (or so it seems).
It’s important to keep in mind that scientific publishers are businesses, so we cannot expect them to prioritise the needs of the scientific community over their own monetary bottom line. It has been noted that high-profile journals are in the business of publishing ‘impactful’ papers — even though these are not necessarily the most scientifically rigorous. Flawed papers will also attract citations — and incidentally, increase the impact factor of the journal. For example, the ‘arsenic life’ paper has over 400 citations on Google Scholar, despite having been shown to be erroneous.
Indeed, what I would like to see is a system that gets rid of journals altogether. Imagine, and this is a very crude sketch, a centralised system where manuscripts (results) were deposited, and from which they could be retrieved by other researchers – this repository would constitute ‘the scientific literature’. Here, there would be an emphasis on results: this is what these researchers found. When other researchers accessed these results, they could be prompted to share their views (do these results make sense? Have you replicated them yourself?) and comments, so that results become less a section header in stand-alone manuscripts and more building blocks in a larger community endeavour, where individual results are shared and used between researchers and research groups.
Because, naïve as I am, I like the idea that science is about sharing, so that everyone can learn a little bit more about the world – rather than the idea that science is a vehicle for career advancement, which is what it seems to be right now.