Added value: how a corrigendum should be

It has been a low-key week: with deadlines rapidly approaching, I have been busy writing applications for postdoc fellowships. And while I have been developing ideas for longer, multi-part blog posts, they are not yet ready to be published. But while going through the literature for my applications, I came across a rather heart-warming example of a great… corrigendum. OK, this might sound strange, but I think thorough follow-up of critique to a scientific paper can be just as essential as the initial results. Being wrong or making mistakes is OK; not acknowledging it is not. After all, an essential part of science is peer review, and that includes not only the peer review associated directly with the publication process. A really good example of really bad, unscientific behaviour is the “bacterium that can grow on arsenic” story, which was widely contested (see here and here), but, as far as I know, the authors never officially retracted the paper.

So, today’s post is about a corrigendum that matches my vision of what a corrigendum should be.

The story started about two years ago, when Matthias Selbach’s group in Germany published a research paper in Nature. They described a rather straightforward experiment to test how protein abundance relates to mRNA abundance, and how transcription and translation rates influence this relationship. In essence, they measured the abundance of proteins by mass spectrometry, and the abundance of the corresponding mRNAs by RNA-Seq, in cultured cells. Moreover, by labeling newly synthesized proteins and RNAs, they also measured transcription and translation rates. Yet, while conceptually straightforward, the experiment was technically challenging (made possible only by recent advances in sequencing and mass-spectrometry technology) and definitely very timely, because previous, similar studies had only ever looked at a selected subset of genes. The Selbach paper found that ~40% of the variance in protein levels could be explained by mRNA levels, but that including translation-rate data considerably increased the predictive power, indicating that translation rates play an important role in determining protein levels. As part of their analysis they also calculated absolute protein copy numbers. For this, they mixed proteins of known concentration into their sample and included them in the mass-spectrometry measurement. They then used these known protein amounts to calibrate the “iBAQ intensities” (the intensity units obtained from the mass spec), and subsequently converted the iBAQ values of the cellular proteins to molar amounts based on this calibration.
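To make that calibration step a bit more concrete, here is a minimal sketch in Python of how such a spike-in calibration could work. This is my own illustration, not the authors’ actual pipeline: the spike-in amounts, iBAQ values, and cell number are all hypothetical, and I am assuming the calibration is a simple linear fit in log-log space.

```python
import numpy as np

# Hypothetical spike-in standards: known molar amounts and their
# measured iBAQ intensities. All numbers are made up for illustration.
spike_amounts = np.array([1e-13, 1e-12, 1e-11, 1e-10])  # mol
spike_ibaq    = np.array([2.1e5, 1.9e6, 2.3e7, 2.0e8])  # iBAQ intensities

# Calibrate in log-log space:
# log10(amount) ~ slope * log10(iBAQ) + intercept
slope, intercept = np.polyfit(np.log10(spike_ibaq),
                              np.log10(spike_amounts), 1)

def ibaq_to_copies(ibaq, n_cells=1e6):
    """Convert an iBAQ intensity of a cellular protein to copies per cell,
    using the spike-in calibration fitted above (hypothetical numbers)."""
    amount_mol = 10 ** (slope * np.log10(ibaq) + intercept)
    avogadro = 6.022e23  # molecules per mol
    return amount_mol * avogadro / n_cells

# Example: estimate the copy number for one cellular protein.
print(ibaq_to_copies(5.0e6))
```

The key design point is that the conversion is anchored entirely in the spike-in standards: if their calibration values are wrong, every downstream copy number shifts with them.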

So far, so good.

However, in March this year, they published a corrigendum. It starts with: “Mark Biggin […] contacted us, noting that our mass-spectrometry-based protein copy number estimates are lower than several literature-based values.” How wonderful! No wishy-washy beating about the bush; instead, a precise statement of the problem. Next, they explain how they checked their published data and identified a mistake in their calibration: they had used the calibration values from an unrelated experiment in their analysis pipeline. But it gets even better. They write, “To further validate copy numbers”, and describe two additional validation experiments: one based on comparing band intensities on Western blots, and another using a second mass-spec approach called selected reaction monitoring.
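To see why a swapped-in calibration is both serious and cleanly fixable, here is a toy illustration, continuing the hypothetical log-log calibration from the sketch above (again, all numbers are made up): because the calibration acts in log space, using the intercept from an unrelated experiment shifts every estimate by the same multiplicative factor.

```python
import numpy as np

# Toy numbers: iBAQ intensities for three proteins (hypothetical).
log_ibaq = np.log10(np.array([1e5, 1e6, 1e7]))

slope = 1.0
intercept_wrong   = -18.5  # calibration taken from an unrelated experiment
intercept_correct = -17.5  # calibration from the matching spike-in standards

est_wrong   = 10 ** (slope * log_ibaq + intercept_wrong)
est_correct = 10 ** (slope * log_ibaq + intercept_correct)

# The ratio is identical for every protein: a systematic bias,
# not random noise, which is why recalibration can fix it cleanly.
print(est_correct / est_wrong)  # -> [10. 10. 10.]
```

This also explains why Western-blot band intensities and selected reaction monitoring make good validation checks: both are independent of the iBAQ calibration, so they can confirm the recalibrated copy numbers from the outside.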

So, all in all, I think that’s pretty decent, and – at least for this week – it restored my faith in the scientific community. I wish I were always this thorough and performed two additional experiments whenever someone pointed out a potential flaw in my work. Luckily, their error did not affect the major findings of the original paper, but I would like to think they would have been just as open about correcting their mistakes even if it had.