Girls, Cats, and Problems With Peer Reviews

Image Credit: Kennet Kjell Johansson Hultman, CC0 1.0, Image Cropped

Last week saw the International Day of Women and Girls in Science. It’s an important day, recognising the value of a strong female presence in the scientific community, and how far all scientific disciplines still have to go to achieve gender equality.

So naturally the journal Biological Conservation decided to release a paper entitled Where There Are Girls, There Are Cats*. It’s an ill-informed, ill-conceived paper that essentially blames women for free-ranging cat populations. It is insulting to women, and quite frankly insulting to any scientist who has had a paper rejected in the last year (yes, I’m bitter). It’s also kind of hilarious in all the wrong ways. As such, there was justifiable mockery and jaw-droppage on Twitter. Yet as with the recent #PruittGate debacle, most of the community has veered away from directly attacking the researchers, focussing instead on the real problem here – in this case, the peer review system.

At this stage I would normally not dwell on the paper and move on to the broader issue in science that it brings to light, but I want to spell out exactly why this particular paper sparked the outcry. Plus, it’s too much fun not to dwell on for a little bit. So here are three major issues (and please believe me, there are more).

  1. Sample size. The paper suggests a relationship between cat density on 30 university campuses in Nanjing, China, and the proportion of female students on said campuses. The cat density estimation (mark-recapture modelling based on camera recordings along transects; there’s a sketch of what such an estimate involves after this list) might be fine; it’s not my wheelhouse. But a sample size of 30!?
  2. Methodological ambiguities. The researchers conducted socialisation tests which suggested that cats responded better to females than to males. These involved only 16 students, seven of whom were female, and data from their interactions with only 27 cats was included in the analysis. Sample size aside, things get murky here. It’s unclear whether these 27 cats had contact with all 16 students, or with just one male and one female. That might not seem like an important distinction, but it’s the difference between a very low sample size (fine if it’s presented as a qualitative study I guess, which this is NOT) and a non-existent one. It’s one of several things that are unclear in this paper.
  3. Conflation. Many papers identify trends at one scale and then try to link them to larger-scale patterns. There’s nothing wrong with that, usually. But for a paper to basically say that women might be responsible for increasing outdoor cat density in one city in one country, and then to tie that to outdoor cats being responsible for the extinction of many species and huge population reductions of others, is insulting at best and… well, honestly I’m not qualified to say what it is at worst.

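Since the density estimation method gets a pass above, it’s worth a quick look at what that kind of number actually involves. Below is a minimal sketch of the simplest mark-recapture calculation, the two-survey Lincoln-Petersen estimator with the Chapman correction. To be clear, this is not the authors’ model (the paper relied on camera recordings along transects, which I haven’t tried to reproduce), and every number in it is hypothetical.

```python
# A minimal sketch of the simplest mark-recapture estimate: the two-survey
# Lincoln-Petersen estimator with the Chapman correction. This is NOT the
# model used in the paper; the numbers are hypothetical, purely to show the
# kind of quantity a campus-level "cat density" boils down to.

def chapman_estimate(n_first: int, n_second: int, n_both: int) -> float:
    """Estimated population size from two survey occasions.

    n_first  -- individuals identified on the first survey
    n_second -- individuals identified on the second survey
    n_both   -- individuals identified on both surveys
    """
    return (n_first + 1) * (n_second + 1) / (n_both + 1) - 1


# Hypothetical campus: 18 cats photographed on the first pass, 15 on the
# second, 9 of them recognised on both passes.
population = chapman_estimate(18, 15, 9)   # ~29.4 cats
density = population / 25.0                # cats per hectare, for a made-up 25 ha campus
print(f"Estimated population: {population:.1f} cats ({density:.2f} per hectare)")
```

Thirty numbers like that, one per campus, are the data points behind the headline correlation, which is exactly why the sample size question matters.
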
The authors obviously need to shoulder the blame for some of this. But surely the journal that published the article needs to take some responsibility. The methods should have been more thoroughly questioned, and the actual concept should have been reviewed.

So why weren’t they?

What is the Peer Review System?

When a research manuscript is submitted to a journal, a journal editor forwards the manuscript to (usually) two ‘peers’ – other academics who have relevant experience in the study system or the ecological discipline that the paper deals with. They provide feedback and a recommendation as to whether or not the paper should be published, which the journal editor takes into consideration when making the final decision.

These peer reviewers are generally encouraged to review the aspects of the paper they are familiar with. If a part of the paper they are unfamiliar with strikes them as odd, though, they should of course raise concerns. Yet none of the questionable methods were picked up by this paper’s reviewers, nor was the actual question it was asking disputed.

OK, but this is one paper. Surely some things will always slip through the cracks?

I wish this were a relatively isolated incident, but people have been picking holes in the peer review system for years. In the interview linked below, Richard Smith, former editor of the British Medical Journal, describes how he sent a paper into which he had deliberately introduced eight errors out to 300 peer reviewers. Half of the reviewers identified two errors or fewer, and no-one found more than five.

Slay peer review ‘sacred cow’, says former BMJ chief

One of the problems with the peer-review process is that reviewers work on a volunteer basis, and are often already overworked. I was offered the chance to peer review a paper last year, and although the timing was far from ideal, I didn’t want to pass up the opportunity to add to my experience. That unpaid labour is especially infuriating when it props up journals which keep their research behind paywalls despite their healthy profit margins: read more about our general frustration with paywalls at this link.

There is also the possibility of sabotage – while people who are familiar with a field are better qualified to review a manuscript, there are claims that some reviewers will reject papers out of hand if they feel that the research could contradict, or make redundant, their own research. A study by Stefan Thurner and Rudolf Hanel showed that even 10% of reviewers acting selfishly could significantly decrease a journal’s average paper quality.
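
To get a feel for the mechanism behind that result, here’s a toy simulation in the spirit of, though by no means a reproduction of, Thurner and Hanel’s model. In this sketch, ‘correct’ reviewers accept any paper above an objective quality threshold, while ‘selfish’ reviewers reject anything better than their own work; every number in it is invented.

```python
# A toy illustration of how selfish reviewers can drag down the average quality
# of accepted papers. This is NOT Thurner & Hanel's actual model; the rules and
# numbers below are invented purely for illustration.
import random

random.seed(42)


def average_accepted_quality(frac_selfish: float, n_papers: int = 200_000) -> float:
    accepted = []
    for _ in range(n_papers):
        quality = random.gauss(0, 1)      # true quality of the submitted manuscript
        votes = []
        for _ in range(2):                # two reviewers per manuscript
            if random.random() < frac_selfish:
                own_work = random.gauss(0, 1)
                votes.append(quality < own_work)   # selfish: reject anything better than my own work
            else:
                votes.append(quality > 0)          # correct: accept if above the quality threshold
        if all(votes):                    # the editor requires two positive reviews
            accepted.append(quality)
    return sum(accepted) / len(accepted)


for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} selfish reviewers -> average accepted quality: {average_accepted_quality(frac):.3f}")
```

Even in this crude setup, the average quality of what gets accepted slides as the selfish fraction grows, which is the direction of the effect the actual study reports.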

So What’s The Alternative?

Herein lies the rub; I just don’t know.

While there has been a lot of discussion about the trouble with the peer review process over the last few years (probably longer, I’ve spent a relatively small amount of time on the scene), there don’t seem to be any clear-cut solutions. Smith actually suggested abandoning pre-publication review altogether, letting people make judgements once the research has been published. This seems flawed, as it gives questionable scientific practices an air of authenticity among the general public. General understanding of the scientific review process is already low, and many people out there are prone to trust any research that has been published.

While the problems with peer review are generally acknowledged, an alternative has been difficult to come up with (Image Credit: Dineshraj Goomany, CC BY-SA 2.0)

Stricter standards for editing have also been floated. This is good in theory, but a study by Rafael D’Andrea and James P. O’Dwyer suggested that when confronted with dodgy reviewers, editors can do little to raise average paper quality. Their study did, however, show that the real problem seems to be reviewers who accept low-quality papers.

I would like to see better training for reviewers, at the very least. Heading into my first paper review, I was uncertain of where to start, and only had a few helpful blog posts (which I’ll link at the end of this article) to go on. I feel that offering courses for early-career scientists would be a great help.

Lastly, I should add that there are many out there who feel that the peer review system does not need fixing. I agree to some extent with Georgina Mace, who pointed out that there are plenty of examples of peer review working well, and that good reviewers are a limiting factor. The idea behind the system is certainly solid: experts are there to decry issues and flaws, while applauding good science and methods. It’s one of the reasons I believe training young academics on how to correctly review a paper is a good place to start.

Paper Withdrawn

I’m happy to say that Biological Conservation has (at the time of writing) temporarily withdrawn the paper in question, “pending further evaluation”. They’ve been accused of posting academic clickbait before, and hopefully this incident will be a wake-up call.

But discussions of the peer-review system need to become more widespread. Journals have proven themselves willing to try new approaches, with double-blind review and Open Access being two trends that are already influencing the scientific community. The mantra of “it’s the best we’ve got” can’t be all we have to fall back on.

In the meantime, I hope that we don’t see any more papers this gloriously befuddling published.

The below studies look into the impact of reviewers and editors on journal paper quality:

Peer-review in a world with rational scientists: Toward selection of the average by Stefan Thurner & Rudolf Hanel

Can editors save peer review from peer reviewers? by Rafael D’Andrea and James P. O’Dwyer

*I of course am not suggesting that the journal went out of their way to insult female scientists. But, c’mon, timing.

Sam Perrin is a freshwater ecologist currently completing his PhD at the Norwegian University of Science and Technology who has had way too much coffee today to make up for a lack of sleep. You can read more about his research on his Ecology for the Masses profile here, and follow him on Twitter here.
