Can open science fix the system?
The fragile state of peer review
Six months she had been waiting for a response from the academic journal. Six months of checking her mailbox and hoping her paper on aging in zebra finches would be accepted. If it was, her novel interpretation of the data would be shared with the world for the first time.
But when Marianthi Tangili, a PhD student focusing on the evolutionary biology of aging, finally got her answer, it devastated her. ‘I was so upset. I was actually crying about it’, she recalls.
The person who had reviewed her paper had some valid points, she admits. But the wording they chose was extremely harsh. ‘They were so mean about it.’ It made her doubt everything about her work. Was she crazy for thinking this was something worth publishing? Were she and all her co-authors stupid?
Her supervisor advised her to step away for a couple of days and then look at the comments again with fresh eyes. That helped, she says. ‘You just have to not take anything personally and have trust in your work and that it will find its place. Even if it’s not in a top journal, it will be out at some point.’
Still, she says, the reviewer went about it the wrong way. You should be honest, of course, but you can also be nice. ‘When the first author is a PhD student, being mean just isn’t the way to go.’
Flawed process
It’s just one of many issues with peer review, where researchers evaluate the work of their peers – most often anonymously – and determine whether it’s sound science and worth publishing in whatever journal they are reviewing for. The idea is that feedback from other experts encourages authors to increase the quality of their work, thus resulting in better science. But the process is also severely flawed.
Julien van der Ree, a PhD student at the Zernike Institute for Advanced Materials, can still feel the sting of what happened when he submitted his second paper for publication. Even though one reviewer was quite happy with the paper and simply asked about future work, the second one had much more to say.
When the first author is a PhD student, being mean just isn’t the way to go
‘He was angry’, Van der Ree says. ‘You could feel the poison in his language. He was commenting on all our language issues. I lived in Canada for a few years and my English feels native to me, but this reviewer had over forty complaints on the English alone.’
Van der Ree did address all points in his revision, but the reviewer kept complaining about the language. ‘He was even more venomous and started saying we were making claims without proof. We were asking ourselves who this reviewer might be.’
They then realised there was one researcher mentioned in their paper whose results they criticised as ‘a bit questionable’. ‘So we asked the journal: please don’t let them do the review.’
Van der Ree still doesn’t know whether it was this person. However, the paper did get turned down. ‘The editor saw one review that was in favour of publication and one against and decided to reject. It felt awful.’
Controversies
But are the hurt feelings and the many months of waiting for a response at least worth it? Does peer review really make science better?
Rink Hoekstra, an associate professor at the Faculty of Behavioural and Social Sciences, is not so sure. Much of his work is focused on the robustness and quality of science and its underlying processes. ‘Both the general public and scientists attribute automatic trust to the peer review process’, he says. ‘The assumption seems to be that peer review is something good, so we don’t question it too much. But the essence of being a scientist is to criticise everything, including our own processes.’
Peer review is supposed to prevent poor quality research from being published, but it’s failing at doing this, Hoekstra points out. There are many cases where a paper that came through the peer review system turned out to have serious flaws. ‘Some studies tried to replicate published work, but were unable to do so.’
One famous example of peer review failure is that of Dutch social psychologist Diederik Stapel. In 2011, it came to light that the highly esteemed researcher had made up data in over fifty of his papers, all of which had been reviewed. Stapel, who also worked as a professor at the UG for several years, subsequently relinquished his doctoral title at the University of Amsterdam.
Or take the research of social psychologist Daryl Bem on ‘feeling the future’, which was published in a top journal. Afterwards, it was heavily criticised for its flawed methodology and the study could not be replicated. The controversy prompted a wider debate on the validity of the peer review process.
Stamp of approval
The problem, Hoekstra believes, is that peer reviewing doesn’t protect science as a whole; it just protects one particular journal at a time. ‘Because when an author is rejected by one journal, they just go to the next one, and the next one.’ As many as it takes to get their work published.
It doesn’t help that editors can do whatever they want, he says. They are the ones who choose the reviewers, and authors are incentivised to do whatever reviewers say.
When an author is rejected by one journal, they just go to the next one
Still, to the community – inside and outside of academia – peer review is regarded as a stamp of approval. ‘But that doesn’t necessarily mean the research is good. It just means a couple of people looked at it and didn’t find any flaws’, Hoekstra argues.
Even those who are aware of the system’s flaws believe there is little choice, says psychiatry postdoc Maurits Masselink, board member of the Open Science Community Groningen (OSCG). ‘They feel it’s the only thing we’ve got and want to make it better. But we don’t even know if it works at all.’
The trust people have in the public institution of science is in jeopardy, Masselink and Hoekstra – who is also on the OSCG board – feel. And open science, they believe, is the answer to that problem.
More transparency
Open science is not just about making papers publicly available through open access; it also advocates for more transparency in peer review – disclosing identities, publishing reviewer comments, and enabling public commenting after publication – and proposes open data sets and consequences for scientific dishonesty.
If you know your review will be public, you will probably write it differently
The idea behind such a public discussion is that it would broaden the scope of peer review to encompass more of a back and forth with a wider community. With an open review system, it would have been clear whether there was in fact a conflict of interest with the reviewer who went on and on about Van der Ree’s English. And it might have given pause to the researcher who felt it was okay to address Tangili so harshly.
It could also be the answer to another problem in academia: negative findings and replication studies may be very valuable to share with other scientists, but they hardly ever get published.
It’s not about overthrowing the current peer review system, though, says Hoekstra: it’s about improving it. ‘The idea with open science is that we judge the quality of research in a broader, fairer, and more transparent way.’
Negative impact
But he realises it will be harder to find people willing to review if their comments are made public – something that is supported by research. Already, a mere 10 percent of reviewers are responsible for 50 percent of all reviews. So you have to find a compromise.
‘If you know your review will be public, you will probably write it differently’, he explains. ‘And if you’re an early career researcher, you might fear a negative impact if you were critical of a prominent colleague.’
Masselink thinks this could be mitigated by making reviews public, while letting reviewers decide for themselves whether they want to sign them.
A more extensive review process might also be beneficial to research as a whole, he feels. ‘An author sends in their research plan with the introduction, methods, research questions, and hypotheses. Journals can then provisionally accept it, meaning they will publish regardless of the findings.’
When the researchers then send in the final article, the journal editor only checks if they followed the plan. ‘Openness makes it possible to investigate what happened and whether it improved quality. Now, we don’t know what happened and we don’t know what works and what doesn’t.’