Fuck "descriptive"

Jun 13 2014 · Published under Uncategorized

It's been a shitty few weeks. After a gazillion paper rejections and two funding "fuck you"s, all whilst working to make a June grant deadline, I do not have the energy to write my full on manifesto right now. But I would absolutely love it if someone could post in the comments a compelling argument for why "descriptive" is considered an a priori flaw in scientific inquiry. Isn't all science descriptive?

If you have ever uttered (or written) the words "You have to have a hypothesis. It doesn't matter if you're right or wrong," then what you are saying is that it is valuable to know the outcome of the experiment being proposed regardless of what that outcome is. And if that's the case, then why does there need to be a hypothesis in the first place? Why is it not enough to say, "I want to see what happens to X when an animal does Y" or "I want to know how groups A and B are different on measure C"?

A hypothesis is supposed to be an educated guess, right? But if there's no data out there to base that guess off of, what's the point of making something up?

45 responses so far

  • Can you call it exploratory in your field? That sucks!

  • I'll take a stab at it. First, I agree: it often feels silly and forced to try to generate a hypothesis when what you really want is to just know stuff. So it sometimes feels false, since reviewers and applicants alike really just want to explore, but Because Hypothesis-Driven Research, everyone plays the game. That said, a clear hypothesis does set up the conditions for statistical inference, which should at least allow a somewhat clearer evaluation of the role of variability in the result. Even if there are no data on the question at all, a hypothesis can be based on theoretical grounds, or on related results. At best, the reader of the hypothesis is looking for facility with concepts and previous literature, as explored in the discussion of the hypothesis. At worst, they are just using 'descriptive' as a hammer to criticize a proposal they dislike for other reasons.

    Yeah, so I wouldn't call that the "compelling argument" you were looking for, but some thoughts nonetheless.

  • drugmonkey says:

    Search me, sister. All of science is really just "this happened under these conditions". The rest is just waving your hands around at velocity.

  • iGrrrl says:

    I'm still snickering at DM's comment, and I'm not sure I can give you a compelling argument. I do know that 'merely descriptive' is a damning comment in review. But without such 'look-to-see' work, we don't have a body of evidence from which to generate hypotheses.

    For a grant application (which I steadfastly view as a good tool to actually think through the work you want to do), a focusing hypothesis can make it easy for the reviewers to follow your thought processes. 'Look-to-see', or descriptive work, gets viewed as a fishing experiment, and easily dismissed as not worth the investment. This usually happens because the reader doesn't see clearly that anything productive will come of it, or sometimes even why the applicant wants to do it.

  • FitAcademic says:


    - Signed,
    Also has a descriptive paper she can't get published.

  • and here I thought this was primarily just a problem in Ecology and Evolutionary Biology.

  • LincolnX says:

    I've definitely published descriptive papers, but I've never gotten a grant funded with a purely descriptive aim. I've just struggled with helping my graduate student formulate hypotheses from what was basically a very descriptive thesis project.

    I generally agree with Michael Carroll regarding the rationale for framing hypotheses.

    I would modify your definition of hypothesis, though. Rather than "educated guess" I'd call it a "testable educated guess." In grant proposals I'd never have a purely descriptive aim; if there is a descriptive piece, I'd combine/subsume it into an aim with a stronger hypothesis-driven component.

    Sounds like you're in a rough patch. It will pass, but seek help from senior investigators you trust. Get them to do a round of review before you submit to make things as tight as possible.

    • Dr Becca says:

      Are you saying that descriptive experiments aren't testing anything?

      Take my example above: I want to know if A and B are different on a given measure C. What would typically constitute a "hypothesis" is "C is bigger in A than it is in B." But if there's no more reason for me to think that C would be bigger in A than B than that it would be smaller, why isn't it enough of a hypothesis to say "I hypothesize that C will be different in A than it is in B"? Is that any less testable?

      • iGrrrl says:

        If I were reviewing your document, I'd ask, "Different how?" I would also ask why we need to know if A and B are different given C. (When I review a grant, I am 3 years old--"Why?")

        • Dr Becca says:

          I have no problem explaining why we need to know how A and B are different. But why do I need to guess the direction of the difference?

          • iGrrrl says:

            Not guess, but rather have sufficient preliminary data to form a hypothesis. ...and this brings up the 'need to have already done the work before submitting the application' problem.

          • katiesci says:

            I'd say you don't. That's the point of a 2-tailed test - to encompass both possibilities.

        • zb says:

          I guess I'd say that the reason you might need to know the direction is that I'd expect the "why" to include a model of the system that you think A, B, and C participate in. So, given C (attention focused on A, as an example), I expect the neural response to A to be bigger than the neural response to B. With a statement that instead says only that something will change about A and B, the underlying mechanisms you think connect all of the pieces together aren't being made explicit.
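katiesci's two-tailed point above can be made concrete with a quick simulation. This is a hypothetical sketch, not anyone's actual study: the groups A and B, the measure C, and all the numbers are invented placeholders from the thread.

```python
# Hypothetical sketch: a two-tailed test encompasses both directions,
# so no directional guess is needed up front. Groups A/B and measure C
# are placeholders from the discussion above, not real data.
import math
import random
import statistics

random.seed(0)
# Simulated values of measure C for groups A and B. (A happens to be
# generated higher here, but the test below doesn't assume a direction.)
A = [random.gauss(10.0, 2.0) for _ in range(50)]
B = [random.gauss(9.0, 2.0) for _ in range(50)]

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    se = math.sqrt(vx / len(x) + vy / len(y))
    return (statistics.mean(x) - statistics.mean(y)) / se

t = welch_t(A, B)
# Two-tailed p-value via the normal approximation (reasonable at n = 50):
# the hypothesis tested is "A and B differ on C", not "C is bigger in A".
p_two_tailed = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
print(f"t = {t:.2f}, two-tailed p = {p_two_tailed:.4f}")
```

The only thing the directional version buys you is half the p-value; the non-directional hypothesis "C will differ between A and B" is every bit as testable.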

  • I never understand this either. You seem to need so much preliminary data that you might as well not get the grant, because you've already done all the things.

  • Kate says:

    OK I'll bite. If your project is purely descriptive then it's along the lines of stamp collecting or bean counting or whatever... which is to say you end up with a bunch of things (possibly quite valuable things) but it's not really clear what they *mean*.

    I agree though that all scientific disciplines have descriptive studies at their base - describing living things led to taxonomy, and thence to Darwin's theory of evolution, etc. But a healthy discipline moves quickly past description to theory, and then descriptive data are collected not in a "let's see what's out there " way so much as a "let's see if our model of how this all works is correct" way.

    I'm sure it's still possible to publish a descriptive paper or fund a descriptive study but I think you need to have a really strong case as to why those data will be useful and how they might inform future theorizing. There are an infinite amount of data out there and funders need to feel secure that the data you propose to collect will take us forward in some way. Science isn't about cataloguing, it's about *understanding*.

    Sorry, I know I'm a mouthpiece for the congealed establishment view 🙁 But I've seen too many grants that didn't have a clear hypothesis and one was left at the end feeling unsure what step forward we would have made for all that money.

    • Dr Becca says:

      "If your project is purely descriptive then it's along the lines of stamp collecting or bean counting or whatever."

      I don't know what field you're in, but there are a lot of things in biomed that could be called "descriptive" that are quite far from cataloging for the sake of cataloging, and that provide important foundational data that's necessary for the field to move forward. In neuroscience, we use lots of different animal models, for example. Wouldn't you say it's important to understand the differences between these models' brains? And yet, people are happy with their current model, and so an attempt to understand another one better is "descriptive" and therefore useless.

      • zb says:

        "Wouldn't you say it's important to understand the differences between these models' brains?"

        Maybe, maybe not. It depends on whether the differences one is proposing to study might be important for the model/theory behind our understanding of the brain based on those different species/animal models.

      • Kate says:

        "Wouldn't you say it's important to understand the differences between these models' brains?"

        I agree with zb - it depends on how much your data will advance our understanding.

        To take an example from my own field (I'm a neuroscientist working in rodent models of spatial encoding) - if you just propose to, say, compare and contrast a rat brain and bat brain then that probably comes into the category of fundable but not competitive - the results may be useful but they may not be. If on the other hand you propose to try and find out why rats have theta and bats don't, by completely characterising the septo-entorhinal-hippocampal networks in the two species to find out how they differ, then that would be stronger - the results may lead to increased understanding of how things work as well as a simple documentation of what is different.

        Even then, reviewers might still grumble that this is a fishing expedition - what if you don't find any differences, or you find differences but it isn't clear that they are relevant? So your strongest proposal would be to already have a hypothesis, perhaps based on previous work. For example maybe someone has found evidence that there might be differences in the long-range septal GABA projections which could explain the difference and you propose to follow this through, figure out the comparative anatomy and perhaps test the findings with some kind of intervention - now you have something that looks much more convincing. You aren't just comparing/contrasting for the sake of it, your work is theory-driven, and there will be a definite outcome (and to be competitive it's important not to have one of your possible outcomes be "no progress made").

        If you really really want to do your descriptive project then all you need to do is to make a convincing case that the end result *will* be (not just *might* be) useful - so find a theoretical framework to situate it in. If you really can't find one, then you need to think hard about why you want to do this at all.

        Hang in there - it's easy to get cynical about the review process, especially in the early stages (I used to grumble bitterly) but my experience on grant panels has actually been very positive. People do want to see things funded, and reviewers are intelligent (believe it or not) and generally conscientious, but it needs to be clear that all that money will definitely produce a step forward in understanding.

    • zb says:

      Nice summary. I've been trying to explain this idea to people who are coming up with Google Science Fair projects along the lines of "I'll measure whether people are happier when I wear blue versus when I wear red." I called that 'marketing' research. It might be interesting to know, because if people really were happier when we wore blue, we could all wear blue and contribute to the happiness of the world. But it is an experiment without an explanation, and, as science, only useful if the next step is to figure out why people are happier when we wear blue.

      (Now, mind you, the fact that this 'experiment' has a two-tailed test in it and a prediction does not make it science rather than marketing.)

  • Isabel says:

    Kate, you make some good points (especially in your third paragraph), but please explain how exploratory or descriptive scientific research is like stamp collecting or bean counting.

    There is no exploration needed, no unknowns, no challenges, no mystery at all in stamp collecting- it is just collecting something that is already completely known. No new data are generated. There are already reference books and everything- you just collect the stamps!

    I am really sick of hearing this hackneyed term used to disparage important and underfunded fields like taxonomy. Which is NOTHING like collecting man-made, completely referenced and characterized objects. Please come up with an analogy that works.

  • Spiny Norman says:

    Descriptive: most grants in X-ray crystallography. The entire Human Genome Project. The ENCODE Project…

  • ecologist says:

    Here are a couple of considerations that may help. First, I have seen many descriptions of projects confuse the scientific hypotheses and the statistical hypotheses of a study. The statistical hypotheses ("I will test the hypothesis that the treatment has no effect on the outcome") are trivial, and of interest only to the extent to which they shed light on the scientific hypothesis ("I hypothesize that this outcome is determined by these factors in this way..."). Many times, studies that could be denigrated as mere description are, in fact, collecting data that shed light on scientific hypotheses.

    Second, a somewhat related point. Hypothesis testing is usually a waste of time. Much more scientifically important is parameter estimation. Knowing that an effect is significantly different from zero is useless; knowing how big the effect is tells you something. Even better is to know the function describing the response of the outcome to different levels of the factors --- that's what is really needed to connect the results to theory. And it's not, except in the most trivial sense, "hypothesis testing" in the naive statistical sense of the word. Lots of recent developments in statistical methodology are devoted to model selection as a genuine alternative to hypothesis testing.

    All of which is to say that there are alternatives to what many people think of as a sharp dichotomy between hypothesis testing and description, but it may take some careful explaining to present those in a convincing way, given the current ideas on the subject.

    • iGrrrl says:

      "I have seen many descriptions of projects confuse the scientific hypotheses and the statistical hypotheses of a study."

      Yes, yes, yes.
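ecologist's estimation-over-testing point is easy to sketch: report how big the effect is, with a confidence interval, rather than only a reject/accept verdict. A minimal sketch with made-up numbers (the 'treated'/'control' groups and every value here are invented for illustration):

```python
# Hypothetical sketch of estimation over testing: report the effect size
# and its 95% CI instead of only a significance verdict. All data invented.
import math
import random
import statistics

random.seed(1)
treated = [random.gauss(5.5, 1.0) for _ in range(40)]
control = [random.gauss(5.0, 1.0) for _ in range(40)]

# Point estimate of the effect and its standard error.
diff = statistics.mean(treated) - statistics.mean(control)
se = math.sqrt(statistics.variance(treated) / len(treated)
               + statistics.variance(control) / len(control))

# 95% CI via the normal approximation: the estimate and its precision,
# which says *how big* the effect is, not merely whether it is nonzero.
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"effect = {diff:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The same data, run through a bare significance test, would collapse all of this into a single yes/no answer about zero.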

  • becca says:

    There are four reasons I know of why "descriptive" gets thrown about as a pejorative:
    1) There actually *is* enough data to know, or at least rationalize a guess, that adding this treatment will increase bunny hopping, or HOW your specialized treatment impacts bunny hopping (*insert molecular signaling linear pathway figure, i.e. LIES, here*), and the reviewers know more than you. Ok, this is actually a theoretical reason- the reviewers never know this much more than you.
    2) The reviewers want you to demonstrate that you can produce hypotheses, irrespective of what your ideas are about. This is common in hazing during prelims/candidacy/comp exams for students.
    3) The reviewers are tired, they can understand the proposal with the linear figure (of LIES) better, there isn't enough money for both of you, and so even though your experiments would be awesome, they won't bother because there is a Stock Critique available with which to shoot down your proposal.
    4) The reviewers are tired, and Really Annoyed by the genome/proteome/interactome/connectome fishing expedition grant that exceeds the modular cap, but is written by such a BSD that it will get funded no matter what, and they want to take that frustration out on you.

    TLDR: all of @bobchickenshit's grants have hypotheses.

  • DrugMonkey says:

    One problem is that "descriptive" versus "mechanistic" hinges on the level of analysis. This makes it inherently subjective and annoying. Because there is *always* a lower level of analysis.

    So when someone is criticizing you, you can always criticize their preferred level of analysis and say "aha, you haven't shown the mechanism for how it *really* works either!".

    So a lot of it comes down to a set of technical preferences. Not any real and objective difference in "descriptive" versus "mechanism". A subset of this is a criticism that really boils down to "do *more*...of something. Anything."

    And of course there is the "I do all this stuff so you should too" which comes in to play.

    • thorazine says:

      Well... yes and no. To me, "descriptive" (when used properly) means that you are working at a given level of analysis and there really is no effort made to go deeper (or, alternatively, the data are so incomprehensible that there is no WAY to go deeper). Whereas "there's no mechanism" basically means that you haven't reached the predefined lower level the reviewer has in their head, which generally they aren't even willing to make explicit.

      Basically, "descriptive" can be a legit criticism, whereas saying there's not enough mechanism is basically always bullshit.

  • Imaginary conversation with an adoring reviewer:

    You: I want money for a descriptive project
    Reviewer: I LOVE descriptive research! Tell me more, and then I will have the details I need to write a review of your proposal with all 1's in every category.
    You: We want to observe bunnies hopping for weeks and weeks at a time!
    Reviewer: Awesome!! There's a non-zero possibility that those observations will yield something cool, right?
    You: I think so.
    Reviewer: Can you just think of an example of something cool that you might observe?
    You: Well, we don't know yet.
    Reviewer: Just one thing, doesn't have to be super likely.
    You: We might observe foo-bar, and that would be cool, but that's kind of a stretch.
    Reviewer: Good enough for me. Instead of "a stretch" we will say "high risk, high payoff."
    You: Most other scientists think we won't see that.
    Reviewer: So you Hypothesize you will see foo-bar, and the Alternative is that you won't. I realize you are writing a descriptive proposal, but let's just reframe it so it's hypothesis driven, and then the higher-ups at NIH won't have even the slightest excuse to push back.

  • girlparts says:

    I'm with Kate here. I've seen student research projects that ask stuff like "how many of the pregnancies in our clinic are unintended?" They generate data, and answer their question, but it is not clear how this in any way produces generalizable knowledge, and I think this is what the "there is no hypothesis" critique is getting at. The project seems likely to generate results, but not conclusions.

    Of course, this may be true whether or not the proposal has the magic words "our hypothesis is....", and some reviewers just look for the magic words.

  • meshugena313 says:

    I completely agree, as I've said before over at DM's place. "Descriptive" as a pejorative means that the reviewers don't like the technical depth at which you've investigated the question, which is insane since as DM notes above there is always something deeper... I think some of it (at least in my corner of the universe) has to do with whether or not you perturb the system in some way to see what happens. In that case, you may find "mechanistic insight" (which is the other bugaboo that I hate). But what happens to the important work in describing when, where, and in what other biological context something happens? What happens if one can't perturb the system, or if perturbing the system doesn't give interpretable, or any, results?

  • AcademicLurker says:

    I tend to think that "descriptive" or "lacks mechanistic insight" often basically mean "I liked some other grants in my pile better, but I can't mention those in my critique and I have to write something..."

  • Jason says:

    Descriptive = thorough. And thorough studies typically reveal that the world is far more complicated than we can comprehend (not cool). The trick is then to do a whole whack of (descriptive) pilot experiments to determine which drug doses, which behavioral tests, which drug-test intervals cure cancer and then publish only those specific instances. Then you get a nice mechanistic answer that pleases everybody (but is impossible for anyone to replicate and is of little long-term relevance for understanding the nature of the disease).

    I think the push to produce high-impact mechanistic work is at the expense of producing work that is actually believable and relevant. For the sake of grant reviewers etc, mechanistic hypotheses could be warranted. For publications though, maybe we all have to be more comfortable with publishing the big picture, the parametric tests, the other conditions where things don't "work out". Would be helpful for all.

  • Leigh says:

    NIH grants: Focus on questions, not hypotheses

    I contend that the insistence of the US National Institutes of Health (NIH) on hypothesis-driven projects in grant proposals could be a factor contributing to irreproducible research reports.

    Isaac Newton argued that “hypotheses … have no place in experimental philosophy”, a view echoed by mathematician Roger Cotes: “Those who assume hypotheses as first principles of their speculations … may indeed form an ingenious romance, but a romance it will still be”.

    Such criticisms recognize the risk that scientists may filter data through their hypotheses, discounting results that do not validate the hypothesis as evidence that the experiment did not work — rather than as evidence that the hypothesis is false.

    The NIH’s funding criteria should instead ensure that a pertinent research question is being asked, and that the applicant has the means to answer it.

  • Dave says:

    Ugh!!! iGrrrl and Kate demonstrate what you are up against, tbh!!!

    I think the word 'descriptive' is thrown around willy-nilly these days, and is often used to describe studies that DO have a clear hypothesis but that might be more 'correlative' and lack, dare I say the magic word, 'mechanisms'. This is quite unfair, because these studies are typically valuable and are often done in higher animals and humans where interventions etc are limited.

    Regardless, you will never change it. It has become fashionable, a key word to make reviewers sound smart, and it is essentially an instant kill for any grant and/or paper. Just find ways to get around it. Throw in some primary culture work, some interventions, some gene knock-downs blah blah blah. In short, just play the game, whether you like it or not.

  • anonymous postdoc (shrewshrew) says:

    Hang. In. There.

    You're in a tough spot. I think some of the most valuable work is descriptive, in the most positive sense: telling us about some difference that we didn't even realize could be a difference. Mechanisms are for people who think everything has already been described. Mechanisms are also bullshit because they rarely prove necessity and sufficiency; they're just carefully disguised correlations, daisy-chained together to look like a mechanism.

    Just remember that the rejection doesn't mean your ideas are bad, that the data is bad, that those assholes are right, all it means is someone else snuck in under the wire, and happened to sing to these particular reviewers the way they like. There are probably other venues for funding and publishing that will be more friendly to your take, and just keep rolling things out until you find them.

  • zb says:

    I would be unhappy with a project in which the proposer said "I want to see what happens to X when an animal does Y," even though that's where many of my experiment ideas start. There are an infinite number of X's I'd like to measure, along with an infinite number of Y's that an animal can do. There has to be some reason why this X and this Y are important, and that's what I'd need to see. It could be important for clinical significance (e.g. "opiate use increases when people are addicted to caffeine") or it could be important for explaining the system, but there has to be something more than just the procedure of an experiment that will measure something.

    I am guessing that you didn't just write a proposal that said "I want to see what happens to X when an animal does Y," and that there was more there; but if that was all there was, I would be worried, unless the field is very new. At first, all the fMRI experiments sounded like that: "I want to see if the brain is different when a person does X instead of Y" -- and, mind you, the answer to that has to be pretty much yes. I saw those studies as being about method and not science (which, I guess, might be another justification for the X/Y experiment).

  • zb says:

    PS: Sorry it's been a terrible week. Venting is fine, and, I think, a necessary first step. Then, if you want to stay in the game, you have to turn towards listening to the people who can help you shape the papers and grants toward the ultimate goal. The alternative, of changing the system, probably isn't in the cards, not until you've acquired a lot more power, and probably, not even then.

  • rxnm says:

    The only glam paper I have is literally "describing" one cool thing. We just used the word mechanism a fucktillion times, and the one reviewer who kept saying it was descriptive was eventually beaten down.

    It's a proxy criticism that means nothing. Reviewers are about as aware of why they like/don't like a paper as humans are at any other task: not very good. They are just very good at selecting from a limited number of post hoc rationalizations.

  • My impression is that descriptive or non-mechanistic is used to describe experimental paradigms in which you manipulate the inputs to the biological system and then measure effects on the system and its outputs, as opposed to paradigms in which you experimentally manipulate internal components or parameters of the biological system and then measure effects on the system and its outputs. So if the system you are studying is the isolated kidney tubule, then varying the flow rate and measuring sodium reabsorption is descriptive, while doing the same experiment with a tubule from a NKCC knock-out mouse is mechanistic.

    Not saying whether this is or isn't a logically coherent distinction, nor whether the latter type of experiment is more important than the former, but I think this is what people mean when they say "descriptive".

  • rxnm says:

    I think CPP's example is the kind of circumstance in which the "descriptive" stock critique is trotted out... but whether you looked at a mutant or a drug or gave some part of the brain epilepsy with channelrhodopsin or not has very little to do with whether or not a reviewer WILL deploy this particular critique. It has to do with whether or not the reviewer wants the paper accepted.

    That is the function of stock critiques...they are so vague that there is always one or more that applies. They are "fuck this thing for free" cards for reviewers.

  • mytchondria says:

    Having had the pleasure of reading your paper in one of its iterations Becca, I'll do the unthinkable here and tell you what I really think. In public and what not. You have an unsolvable problem. You have fan.fucking.tastic data about a finding. A finding NIH is interested in. And, DAMN YOU, you actually go further than describing it....you have a biological basis of a change. Is it THE cause? Who the fuckke knows?
    I do signaling shitte and can turn stuff up and down and tell you how the cells feel afterwards and it sure isn't THE cause. I know there are a lot of intervening things to get my output.
    When you go into neuroanatomical features of behavior we are flipping retarded as a discipline. But I'd wager we are a lot fuckken better off than we were when we were funding RNA and genome-wide screening. Those were 'fishing' exercises, and the only ones who got them published were people who were already BSDs.
    The thing that frustrates me the most is that any Academy member could put a gold star on your crap and it would be snatched up by the second layer of publishing cream. I've been here and done this. I left my BSD lab and had my own awesome finding. And oddly, when my advisor's name wasn't last, interest waned. A lot.
    The thing I did to get my next papers into high-impact journals was suck it up on my first paper, publish it, and write 3 opinion papers and reviews on it. BioEssays is a really fuckken good place to lay your claim. Its IF is high enough to get you noticed, and I sent my paper to all the BSDs, 'thanking' them for looking at earlier versions, which helped me craft my final working model, which forms the basis of my grant. And that BioEssays paper better be a champ, and you need to suck it up and thank anyone and everyone whose work influenced your thoughts.
    I read your stuff and it's killer. Do not settle for a step down the rung. And get pissed and fighty. Your shitte is worth fighting for. And swear more. The lack of fucks in earlier posts was freaking me the fuck out.

