Computing Community Consortium Blog



“Yes, Computer Scientists Are Hypercritical”

October 7th, 2011 / in Uncategorized / by Erwin Gianchandani

We’ve talked about the notion of hypercriticality in computer science in this space before (see here and here), and now Jeannette M. Wing — the former National Science Foundation (NSF) Assistant Director for Computer and Information Science and Engineering (CISE) and current Department Head of Computer Science at Carnegie Mellon University — has written about it with some hard numbers over on the Communications of the ACM Blog:

[Photo: Jeannette M. Wing, Carnegie Mellon University]

Are computer scientists hypercritical? Are we more critical than scientists and engineers in other disciplines? Bertrand Meyer’s August 22, 2011 blog post, The Nastiness Problem in Computer Science, partially makes the argument, referring to secondhand information from the [NSF]. Here are some NSF numbers to back the claim that we are hypercritical.

This graph plots average reviewer ratings of all proposals submitted from 2005 to 2010 to NSF overall (red line), just Computer & Information Science & Engineering (CISE) (green line), and NSF minus CISE (blue line). Proposal ratings are based on a scale of 1 (poor) to 5 (excellent). For instance, in 2010, the average reviewer rating across all CISE programs is 2.96; all NSF directorates including CISE, 3.24; all NSF directorates excluding CISE, 3.30.

[Figure: Average reviewer ratings of all proposals submitted from 2005 to 2010 to NSF overall (red line), just Computer & Information Science & Engineering (CISE) (green line), and NSF minus CISE (blue line); image courtesy BLOG@CACM.]

The bottom line is clear: CISE reviewers rate CISE proposals on average 0.41 points below the ratings by reviewers of other directorates’ proposals. The gap is a little smaller (0.29 points) for awards and a little larger (0.42 points) for declines.
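For readers who want to check the arithmetic, here is a minimal sketch in Python using the 2010 figures quoted above (the numbers come from the post; the variable names are ours):

    # Average reviewer ratings for 2010, as quoted in Wing's post
    # (ratings run from 1 = poor to 5 = excellent).
    cise = 2.96                # all CISE programs
    nsf_including_cise = 3.24  # all NSF directorates, including CISE
    nsf_excluding_cise = 3.30  # all NSF directorates, excluding CISE

    # Gap between CISE ratings and the rest of NSF for 2010 alone.
    gap_2010 = nsf_excluding_cise - cise
    print(f"2010 rating gap: {gap_2010:.2f} points")  # prints 0.34

Note that Wing’s 0.41-point figure is the average gap over the full 2005-2010 period, so it differs slightly from this single-year value.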

 

Wing goes on to describe how hypercriticality hurts us…

In foundation-wide and multi-directorate programs, CISE proposals compete with non-CISE proposals. When a CISE proposal gets “excellent, very good, very good,” it does not compete well against a non-CISE proposal that gets “excellent, excellent, excellent,” even though a “very good” from a CISE reviewer might mean the same as an “excellent” from a non-CISE reviewer.

…to offer hypotheses for why we do this to ourselves…

I have three hypotheses. One is that it is in our nature. Computer scientists like to debug systems. We are trained to consider corner cases, to design for failure, and to find and fix flaws. Computers are unforgiving when faced with the smallest syntactic error in our program; we spend research careers on designing programming languages and building software tools to help us make sure we don’t make silly mistakes that could have disastrous consequences. It could even be that the very nature of the field attracts a certain kind of personality.

The second hypothesis is that we are a young field. Compared to mathematics and other science and engineering disciplines, we are still asserting ourselves. Maybe as we gain more self-confidence we will be more supportive of each other and realize that “a rising tide lifts all boats.”

The third hypothesis is obvious: limited and finite resources. When there is only so much money to go around or only so many slots in a conference, competition is keen. When the number of researchers in the community grows faster than the budget — as it has over the past decade or so — competition is even keener.

…and to provide suggestions for what we should do about it:

As a start, this topic deserves awareness and open discussion by our community. I’m definitely against grade inflation, but I do think we may be giving the wrong impression about the quality of our proposals, the quality of the researchers in our community, and the quality of our research. For NSF, I have one concrete suggestion. When one looks at reviews for proposals submitted to NSF directorates other than CISE, while the rating might say “excellent,” the review itself might contain detailed, often constructive criticism. When program managers make funding decisions, they read the reviews, not just the ratings. So one idea is for us to realize that we can still be critical in our written reviews but be more generous in our ratings. I especially worry that unnecessarily low ratings or skimpy reviews discourage good people from even submitting proposals, let alone pursuing good ideas.

It’s time for our community to discuss this topic. Data supports the claim that we are hypercritical, but it is up to us to decide what to do about it.

Read Wing’s entire blog post here — and share your thoughts about the issue below.

(Contributed by Erwin Gianchandani, CCC Director)

