Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


Fratricide and the Ecology of Proposal Reviews

May 4th, 2010 / in Uncategorized / by Ran Libeskind-Hadas

A friend of mine from Field X once served as a program officer at a major research funding agency. (Names changed to protect the innocent.) As part of a quality assurance scheme, he was asked to review the proposal process for Field Y. He was surprised that every proposal he looked at, whether funded or not, was rated very high. He asked the program officer for Field Y how proposals could be ranked if they were all rated so high. He was told to pay no attention to the rating, but to look at what the reviewer said. So my friend looked at a number of highly-rated proposals. He found one where the reviewer said the proposed research had already been done and the results published by a different investigator, concluding, “This is not a good proposal, but this is no time to reduce funding to Field Y.” (Field Y receives considerably more funding than most fields, and has for a long time.)

This story contains a lesson about the ecology of review processes. Reviewers rate proposals to determine which proposals to support, but that’s not the only use for their ratings. Leaders of funding agencies do not allocate funding to fields by reading all the agency’s proposals and reviews. They use summary measures. One of these is “proposal pressure,” meaning the number of highly-rated proposals within a field that cannot be funded because the field’s budget is too small. A field with more highly-rated proposals than it can support is “under-funded.” Right?

We in the computing research field often eviscerate the proposals of our colleagues during proposal reviews. Why are we so fratricidal? Is it to demonstrate how tough we are? If so, we’re hurting ourselves. People from other fields are happy to have fratricidal computing researchers in competition for interdisciplinary grants because there will be more funding for everyone else!

There are two kinds of responsibility in proposal review. One separates good proposals from weak proposals to ensure that good proposals are funded. The other ensures that computing research holds its own in funding with other fields. The computing research field gets better when we criticize weak proposals and recommend improvements. The field does not get better when our criticisms of each other are so harsh that computing researchers get less of the pie.

(Contributed by John Leslie King, University of Michigan)

6 comments

  1. henningschulzrinne says:

    One (completely unsubstantiated) theory: In the computing field, we have evolved towards a culture of extremely selective conferences, publishing only one in ten submitted papers. It is not surprising that the same attitude and mindset gets carried into proposal reviews – after all, it feels like a technical program committee meeting…

  2. billfeiereisen says:

    There is an interesting paper by Virginia Walbot on paper reviews in biology (http://jbiol.com/content/8/3/24). Do her observations point at similar underlying review issues?

  3. davidnotkin says:

    As an EIC of a journal, I have been asking my AEs and reviewers to actually review the paper, rather than to state how they would have done the work instead. I suspect that in many cases we do the opposite, reviewing proposals and papers in light of how we would do it. But that is not the question we are asked, nor should it be.

  4. ruzenabajcsy says:

    I very much agree with the comments, with one caveat: at NSF the reviews are only advisory, so the program director has the flexibility to reverse the recommendations. Too few choose to do so, in part because they may not be confident in their own judgments. It is easier to say the panel said so and so. The implication is that we need to send highly competent, active researchers as program directors.