Some thoughts about measuring the goodness of peer review…

A few weeks ago I attended a postdoc training about responsible conduct in research. A major focus of the event was an emphasis on being unbiased and avoiding any conflict of interest when reviewing a manuscript or a grant. Naturally, that state seems very desirable. However, some of the case studies we discussed left me with a bad aftertaste: it seemed as if the concern about conflict or bias massively outweighed the fact that peer review can also provide added value to science. In my – limited – personal experience with peer review, I have found reviews that were comprehensive and thoughtful (even if they were negative) much more valuable and constructive for my research than any of the 3-liners declaring my paper to be excellent. This dichotomy got me thinking about the purpose of peer review and its relationship to science and the publishing process. Here are a couple of points I’ve come up with:

Chekhov’s gun

a.k.a. applying the Russian method to scientific writing

About 150 years ago, there lived a man in Russia whose name was Anton Chekhov. He was a doctor by training, but also happened to write some truly amazing fiction (and non-fiction, for that matter). Furthermore, he formulated a dramatic principle called Chekhov’s gun, which states: “Remove everything that has no relevance to the story. If you say in the first chapter that there is a rifle hanging on the wall, in the second or third chapter it absolutely must go off. If it’s not going to be fired, it shouldn’t be hanging there.”

According to the principle of Chekhov’s gun, you should not introduce an element into a narrative unless it is necessary for the story to proceed. Picture courtesy of luckyfish @ flickr.

For the last few weeks I have been thinking about this principle a lot, while rewriting a paper for review.

Added value: how a corrigendum should be

It has been a low-key week: with deadlines rapidly approaching, I’ve been busy writing applications for postdoc fellowships. And while I have been developing some ideas for longer, multi-part blog posts, they are not yet ready to be published. But while going through the literature for my applications, I came across a rather heart-warming example of a great… corrigendum. OK, this might sound strange, but I think a thorough follow-up on critique can be just as essential to a scientific paper as the initial results. Being wrong or making mistakes is OK; not acknowledging it is not. After all, peer review is an essential part of science, and that includes not only the peer review associated directly with the publication process. A really good example of really bad, unscientific behaviour is the “bacterium that can grow on arsenic” story, which was widely contested (see here and here), but – I believe – the authors never officially retracted the paper (?).

So, today’s post is about a corrigendum that is just like my vision of what a corrigendum should be.

The story started about two years ago, when Matthias Selbach’s group in Germany published a research paper in Nature. They described a rather straightforward experiment to test how protein abundance is related to mRNA abundance, and how transcription and translation rates influence this relationship. In essence, they measured the abundance of proteins by mass spectrometry, and the abundance of the corresponding mRNAs by RNA-Seq, in cultured cells. Moreover, by labeling newly synthesized proteins and RNAs they also measured transcription and translation rates. Yet, while conceptually straightforward, the experiment was technically challenging (made possible only by recent advances in sequencing and mass spectrometry technology), and definitely very timely, because previous, similar studies had only ever looked at a selected subset of genes. The Selbach paper found that ~40% of the variance in protein levels could be explained by mRNA levels, but that including translation rates greatly increased the predictive power, indicating that translation rates play an important role in determining protein levels.

As part of their analysis they also calculated absolute protein copy numbers. For this, they mixed proteins of known concentration into their sample and included them in the mass-spectrometry measurement. They then used these known amounts of proteins to calibrate the “iBAQ intensities” (the intensity units obtained in mass spec), and subsequently converted the iBAQ values of the cellular proteins to molar amounts based on this calibration.
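To make the calibration step concrete, here is a minimal sketch in Python of how such a spike-in calibration could work. To be clear: all numbers, the function name, and the cell count are invented for illustration; this is the general idea, not the authors’ actual pipeline.

```python
import numpy as np

# Spiked-in standard proteins: known molar amounts and the iBAQ
# intensities measured for them (all values hypothetical).
known_amount_mol = np.array([1e-13, 1e-12, 1e-11, 1e-10])  # mol loaded
measured_ibaq    = np.array([2.1e5, 1.9e6, 2.3e7, 2.0e8])  # arbitrary MS units

# iBAQ intensity is roughly proportional to molar amount, so fit a line
# in log-log space: log10(amount) = slope * log10(iBAQ) + intercept.
slope, intercept = np.polyfit(np.log10(measured_ibaq),
                              np.log10(known_amount_mol), deg=1)

def ibaq_to_copies_per_cell(ibaq, n_cells=1e6):
    """Convert an iBAQ intensity into protein copies per cell."""
    amount_mol = 10 ** (slope * np.log10(ibaq) + intercept)
    avogadro = 6.022e23
    return amount_mol * avogadro / n_cells

# Example: estimate the copy number of a cellular protein from its iBAQ value.
print(f"{ibaq_to_copies_per_cell(5.0e6):.2e} copies per cell")
```

The fit is done in log-log space because both iBAQ intensities and protein amounts span several orders of magnitude.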

So far, so good.

However, in March this year, they published a corrigendum/erratum. It starts with: “Mark Biggin […] contacted us, noting that our mass-spectrometry-based protein copy number estimates are lower than several literature-based values.” How wonderful! No wishy-washy beating about the bush. Instead: a precise statement of the problem. Next, they explain how they checked their published data and identified a mistake in their calibration (they had used the calibration values from an unrelated experiment in their analysis pipeline). But it gets even better. They state: “To further validate copy numbers”, and describe two more tests they performed for validation: one based on comparing band intensities on Western blots, and the other on an independent mass-spec approach called selected reaction monitoring.
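Incidentally, a calibration mix-up of this kind explains neatly why their estimates came out systematically low rather than randomly wrong: in a log-log calibration, wrong constants shift every protein by the same factor. A tiny hypothetical illustration, continuing the sketch above (all constants invented, not the actual published values):

```python
import numpy as np

# Hypothetical calibration constants; the "wrong" pair stands in for values
# accidentally taken from an unrelated experiment.
right_slope, right_intercept = 1.0, -18.0
wrong_slope, wrong_intercept = 1.0, -18.7

ibaq = np.array([1e5, 1e6, 1e7])  # iBAQ intensities of three cellular proteins

right = 10 ** (right_slope * np.log10(ibaq) + right_intercept)
wrong = 10 ** (wrong_slope * np.log10(ibaq) + wrong_intercept)

print(wrong / right)  # the same ~0.2-fold bias for every protein
```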

So, all in all, I think that’s pretty decent, and – at least for this week – it restored my faith in the scientific community. I wish I would always perform two additional experiments when someone points out a potential flaw in my work. Luckily, their error did not influence the major findings of the original paper, but I hope they would have been just as open about correcting their mistakes even if it had.