
Whose problem is the "reproducibility crisis" anyway?

Aug 18 2015

One of the absolute best things about Twitter is that there are a bajillion scientists on it, and if I have a random question - what's your favorite c-Fos antibody, what's the best statistical test for ___, what do you think this blob on my micrograph is - I can just throw it out into the ether and am likely to get a number of super helpful responses almost immediately.

Today, however, my random question started quite the firestorm! Let's go to the highlight reel:

(BTW, the reason that the y-axis seems excessively high is that I am comparing the correlation in this experimental group to that of another experimental group whose y values do go that high, and non-matching axes are one of my biggest pet peeves)

I got lots of helpful responses from my lovely Twitter friends, which you can see if you click through to the conversation. My real question was, "do I include or exclude from the analysis this data point that sits 6 SDs away from the mean?" (a sketch of one way to formalize that decision is at the end of this post), but a young mosquito scientist had this recommendation:

After which the following exchange occurred:

Leaving aside the fact that in my mind, a single funky data point out of almost 60 is not a "result," but a...data point...the answer is no, I do not "usually" replicate. Look, I get that in some labs it's super easy to run an experiment in an afternoon for like $5. If this is the situation you're in, by all means replicate away! Knock yourself out, and then give yourself a nice pat on the back. But in the world of mammalian behavioral neuroscience, a single experiment can take years and many thousands of dollars. When you finish an experiment, you publish the data, whatever they happen to be. You don't say, let's spend another couple of years and thousands more dollars and do it all again before we tell anyone what we found! So I thought, OK, this guy runs an insect lab, maybe he doesn't know what's involved.

Well, to borrow from Don Draper,

[screenshot: Don Draper quote]

Jason later doubled down, responding to @neuromagician:

"CORRECT." I cannot even with this. What does it mean? His answer was blissfully, elegantly circular in its logic:

OK, then. Let's leave the tweets there (but grumpy subtweets this way lie).

Here's how I see it: different science fields have different experimental conventions, shaped by things like how much an experiment costs and how long it takes to run. The "reproducibility crisis" may be real, but asking scientists to fix it by doing everything twice (or more) is naive at best and intentionally, self-righteously ignorant at worst.

As I've said before, the data are the data. You report them, along with the methods you used to get them and the stats you used to determine your confidence in them. Someone else reads them and maybe decides to build off of them in their own work, which necessitates trying to replicate yours. Maybe their results are the same, maybe they're not. If they're not, are you a bad scientist? Are they? Obviously not. Part of what makes science exciting and fascinating is figuring out why some things don't always replicate. When we pin down the variables that drive our data, that's when we're truly making progress.
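As for the statistical question that kicked all this off: one conventional way to decide whether a single extreme point is "too extreme" is Grubbs' test, which compares the most deviant point's distance from the mean (in SDs) against a critical value from the t distribution. Here's a minimal sketch in Python; the simulated data are purely for illustration, not my actual data set.

```python
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    """Two-sided Grubbs' test for a single outlier.

    Returns the index of the most extreme point, its G statistic
    (distance from the mean in sample SDs), and the critical value.
    The point is flagged as an outlier when G exceeds the critical value.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = np.abs(x - x.mean()) / x.std(ddof=1)  # distance from the mean in SDs
    idx = int(np.argmax(z))
    g = z[idx]
    # Critical value derived from the t distribution (two-sided test)
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return idx, g, g_crit

# Example: ~60 points with one value far from the rest
rng = np.random.default_rng(0)
data = np.append(rng.normal(10, 2, 59), 25.0)
idx, g, g_crit = grubbs_outlier(data)
print(f"point {idx}: G = {g:.2f}, critical value = {g_crit:.2f}, "
      f"outlier = {g > g_crit}")
```

Whatever the test says, the defensible move is the one I'd make anyway: report the analysis both with and without the point, and say exactly what you did.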

