Boost your ranking

Fraud 2.0

A reviewer from Wageningen was reprimanded for secretly pushing her own articles. Is the reviewing system in need of a serious overhaul?
By Christien Boomsma / Translation by Sarah van Steenderen / Illustration by Kalle Wolters

Another research scandal. A researcher from Wageningen was recently reprimanded because she had engaged in citation pushing. Whenever she was offered an article for anonymous review, she would recommend her own published works as articles the author should definitely check out.

Her h-index – a metric that measures a researcher’s output and citation impact – shot up, and with it, her reputation and her chances of getting a grant.
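The h-index itself is simple to state: it is the largest number h such that a researcher has h papers that have each been cited at least h times. A minimal sketch (the function name and sample citation counts are illustrative, not from the case):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # every later paper has even fewer citations
    return h

# Five papers with these citation counts give h = 4:
print(h_index([10, 8, 5, 4, 3]))  # 4
# A few extra citations on the weakest papers push h to 5:
print(h_index([10, 9, 7, 5, 5]))  # 5
```

This also shows why citation pushing pays off at the margin: a handful of coerced citations on a researcher’s least-cited papers is often exactly what moves the index.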

Reprehensible, the university’s integrity committee said. Yet she was not formally reprimanded, much as the committee would have liked to – not because she hadn’t done it, but because the Netherlands Code of Conduct for Scientific Practice has – as yet – no rules covering this new type of fraud.

Accumulation strategy

It is ‘symptomatic of the extreme quantity emphasis in science’, says RUG statistician Rink Hoekstra. ‘It’s all about external things such as impact factors and h-index. People just start behaving accordingly. Especially if their moral compass is a bit off.’

He can’t name any instances of citation pushing from his own field, but he and his fellow RUG researchers know exactly what the word means. It is part of the emphasis on top journals and rankings that dominates science. ‘Young researchers have to come up with some sort of accumulation strategy if they want to make it in this world’, says social geographer Bettina van Hoven.

She, too, knows what it’s like to send in an article for review at a journal, only to receive improvement suggestions that seemed a little off. She is not the only one. ‘Some suggestions just seem very forced’, says neuropsychologist André Aleman. ‘Although you can never really be sure. It is a blind review, after all.’

‘Sometimes you can just sense that someone is pushing their own work’, says evolutionary biologist Gert Stulp. ‘The suggestion will have sort of a vague introduction, which says enough.’

Grey area

But that’s the problem: it’s just a ‘feeling’. The Wageningen case came to light because that researcher wasn’t the only one opportunistically commenting on other people’s work. A whole network of pushers was involved with the Earth Matters journal, where she published a lot. So people started to notice.

But if a single researcher does it? ‘That usually doesn’t actually help all that much’, geneticist Lude Franke says. ‘You need an enormous amount of citations if you want to boost your rankings.’

More importantly, the whole thing is kind of a grey area. After all, the reviewer was selected because he publishes in the same field. As an expert, it is his job to point out mistakes and gaps in the research he is assessing. ‘I can imagine you easily end up using your own articles’, Van Hoven explains.

And besides, reviewers are only human. ‘You know more about your own papers than about those of others’, says Gert Stulp. ‘And it’s only natural to feel that your own work is relevant.’

Pretty controversial

But the question remains: when do you push yourself forward too much? Is that when you recommend one article? Or ten? There are no formal rules, which means each researcher decides their own limit. Some refrain from mentioning their own work altogether, to keep their hands as clean as possible. Others limit themselves to a single reference, but only if they feel they have no choice.

But these are honourable researchers, who are reticent about making suggestions at all. Nevertheless, they, too, can be easily influenced by less honourable colleagues who assess their articles. In most cases, they see no reason to refuse a citation when the reviewers ask for it.

‘You have to pick your battles’, says Jojanneke Bastiaansen. She has been publishing about her research into depression for years, but also studies broader subjects, such as scientists’ tendency to keep citing (limited) research that supports their theories. ‘It can get pretty controversial. That’s when you have to fight your reviewers and take care of those things first.’

The main driver is the pressure to publish. After months or even years of work, a researcher is under great pressure to publish in a journal with as high an impact factor as possible. ‘And people are willing to jump through many hoops to get published’, says Hoekstra.

Needlessly mean

He is worried about how arbitrary the reviewing process can be. ‘Sometimes you have to jump through endless hoops. And sometimes there are so few comments you wonder if they read your article at all.’

This, combined with other grievances – articles that don’t get reviewed for ages because the reviewers want to do some experiments themselves first, reviewers making ridiculous demands or issuing personal attacks – leads to the question of whether blind peer reviewing is in need of a major overhaul.

As far as Hoekstra is concerned, the answer is a resounding ‘yes’.

For this reason, Hoekstra always signs his name under every article he reviews. ‘If you push your own work then, it comes at a price. You just look ridiculous pushing yourself!’ he says.

His colleagues all agree. ‘It’s a great solution’, says Stulp. ‘Besides, it forces reviewers to put more effort into their work and stops them from being needlessly mean.’

Open review

It may also restore peer review to its original purpose: furthering scientific knowledge together.

Several journals have started experimenting with different types of open review.

‘The British Medical Journal, for example, publishes the entire reviewing process’, says Bastiaansen. Van Hoven points to Social Geography, where reviewing is an interactive process and the reviewers respond online. And then there is PLOS ONE, where André Aleman is an editor. It offers reviewers the opportunity to include their name. ‘Although they tend to only do it when their review is positive.’

But the most interesting development may be the publishing of preprints, which natural scientists have been doing since 1991, using the arXiv platform. As soon as you’ve finished an article and submitted it to a journal, you also upload it to arXiv. Not only does this make the results available to the scientific community, but it also leaves no doubt as to who was ‘first’. A definitive, reviewed version is then published in a regular journal. Every month, approximately 10,000 articles are put online.


The life sciences got a similar platform in 2013: bioRxiv. Lude Franke loves it. ‘I upload all my articles to that archive!’ he says. ‘It’s a great way to bypass the whole reviewing process. And if the work is no good and doesn’t get published, then something is obviously off.’

That publishers don’t mind preprints being put out is evidenced by an article that a colleague uploaded to bioRxiv two years ago. ‘It had been cited a thousand times before it was eventually published in Nature.’

As far as Franke is concerned, these archives are the future. ‘The internet is a great means of self-correction. I think there’s a lot that’s about to happen. But I’m hesitant to predict how it’s going to turn out.’

