
Is it Noise or Euphony?


The book Noise: A Flaw in Human Judgment by Daniel Kahneman, Olivier Sibony, and Cass Sunstein has me thinking deeply about noisy decisions. In this context, noise is defined as undesirable variability in judgments. The authors explain two kinds of noise: level noise (variability in the average level of judgments by different people) and pattern noise. Pattern noise is further broken down into the unique noise individuals bring to any decision and occasion noise, the noise caused by the particular context surrounding a decision. Occasion noise can be influenced by our mood, the interactions with the people we’re deciding with, what we ate for dinner last night, or even the weather.

So when is noise worth reducing?  And what can we do to reduce that noise? How do we know our efforts at noise reduction have the desired effect?

Are there situations where variability might be desirable? I haven’t found a name in the literature for such desirable variation. Perhaps euphony, a harmonious succession of words having a pleasing sound, is one possibility. In these situations we favor finding some euphony over conforming to a rigid, noise-free standard for our judgments.

I’ll use the review of conference submissions (papers, talks, and workshops) as an example of where both noise and euphony play a part in decision-making, as it is a process I am quite familiar with.

One major source of variability is when new reviewers join a review committee. Newcomers often look at submissions differently than experienced reviewers. But not all variability is noise. If some variability is welcomed, expected, and encouraged, the review process greatly benefits from fresh perspectives. This kind of variability adds spice.

And yet, there may be standards (whether formally written down or more loosely held) we’d like to uphold for what we consider a worthy submission. One way to reduce level noise in reviews is to ensure that reviewers understand these expectations. A good way to convey them is to hold a meeting where we present and discuss examples of submissions and exemplary reviews (reviews from prior years are a good source). Newcomers learn what a reasonable proposal looks like and what is expected in a review. They also get to know their peers, ask questions, and, in effect, “calibrate” their expectations for reviewing.

But this meeting is insufficient to remove another major source of noise—occasion noise caused by group interactions. Kahneman, Sibony, and Sunstein state: “Groups can go in all sorts of directions depending in part on factors that should be irrelevant. Who speaks first, who speaks last, who speaks with confidence, who is wearing black, who is seated next to whom, who smiles or frowns or gestures at the right moment—all these factors, and many more, affect outcomes.” Group dynamics introduce noise.

But there are several practical ways to further reduce the noise in group decisions. Oscar Nierstrasz wrote a set of patterns called Identify the Champion for reviewing academic papers. I encourage anyone running a conference to consider a review process along the lines of what Oscar introduces. I’ve adapted these patterns and their process to non-academic conference reviewing with only a few minor tweaks.

The key ideas in these patterns are the roles of champion and detractor, and a structured process for discussing submissions. Champions are strong advocates for a submission who are prepared to discuss its merits; detractors disapprove of a submission and are prepared to discuss its weaknesses.

Submissions are discussed in groups according to their highest and lowest scores. Care is taken to identify proposals with both extremely high and extremely low scores, and not to rank submissions numerically. If a submission has no champion, it isn’t discussed; it is rejected. Ranking and then discussing submissions one by one in order would only add level noise (actually, I find we get numbed by reviewing and tend to reject “lower ranked” submissions without enough consideration).
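
To make the mechanics concrete, here is a minimal sketch in Python of this triage step. The letter grades are loosely inspired by the Identify the Champion scale (an “A” meaning a reviewer will champion the submission, a “D” meaning they will detract); the class, field names, and contested-first ordering are my own illustrative assumptions, not Nierstrasz’s specification.

```python
from dataclasses import dataclass

# Illustrative grades: 'A' = "I will champion this submission",
# 'D' = "I will argue against it". The scale encoding is an assumption.
CHAMPION_GRADE = "A"
DETRACTOR_GRADE = "D"

@dataclass
class Submission:
    title: str
    grades: list[str]  # one letter grade per reviewer

    @property
    def has_champion(self) -> bool:
        return CHAMPION_GRADE in self.grades

    @property
    def is_contested(self) -> bool:
        # Extreme high and low scores together: a champion and a detractor.
        return self.has_champion and DETRACTOR_GRADE in self.grades

def triage(submissions: list[Submission]):
    """Reject unchampioned submissions; queue the rest for discussion.

    Contested submissions (a champion *and* a detractor) surface first
    so they get careful attention. Nothing is ranked numerically.
    """
    rejected = [s for s in submissions if not s.has_champion]
    to_discuss = sorted(
        (s for s in submissions if s.has_champion),
        key=lambda s: not s.is_contested,  # contested first
    )
    return to_discuss, rejected
```

The point is only that the reject-without-a-champion rule and the grouping by extreme scores are mechanical; the judgment happens in the discussion that follows.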

The review committee is asked to suspend final judgments until all championed submissions have been presented. The champion is first invited to introduce the submission and explain why it should be accepted. Then detractors are invited to state their reasons. At the end of all the presentations, discussion is opened to everyone and the committee tries to reach consensus.
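
As a sketch of that ordering (again with hypothetical names and structure), the protocol reads as a fixed agenda: champions and detractors speak in turn per submission, and open discussion waits until everything has been presented.

```python
def meeting_agenda(presentations):
    """Yield speaking turns for the whole meeting. `presentations` is a
    list of (submission, champion, detractors) tuples; final judgments
    are suspended until every championed submission has been heard.
    """
    for submission, champion, detractors in presentations:
        yield f"{champion} introduces {submission!r} and explains why to accept it"
        for detractor in detractors:
            yield f"{detractor} states reasons against {submission!r}"
    yield "open discussion for all; the committee tries to reach consensus"
```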

In practice, following this discussion protocol, it is easy to accept outstanding submissions—they typically have plenty of champions. This leaves the bulk of our time for digging into the strengths and weaknesses of those championed submissions that have mixed reviews.

The Identify the Champion process forces me to hit “pause” on my judgments and not jump to premature conclusions. And the first things we hear about any submission are its positive aspects. When detractors speak, I get a richer understanding of the submission. Although I might have had some initial impressions, I find they can and do change.

Sometimes I warm up to a submission. At other times, detractors’ perspectives grab my attention and make me revisit whether the submission is as strong as I had initially thought.

The cumulative weight of all this discussion has an even more profound effect. I find I am much more accepting of the outcome: what will happen will happen. Yes, there is unpredictability in this decision-making process. But we’re all trying to make reasonable decisions as a group. I end up actively engaged in making the outcome the best it can be and supportive of our collective decisions.

Although the Identify the Champion review process still has noise (it is hard to eliminate noise caused by group dynamics entirely), I believe it to be less noisy than most other review processes I’ve participated in.

One downside, however, is that it can be exhausting. To avoid having occasion noise creep back in, it’s good to ensure that reviewers get sufficient breaks to meet their personal needs and don’t get too tired, cranky, or hungry.

One place I’ve applied my adaptation of the Identify the Champion patterns is to Agile Alliance experience report submissions. Experience report submissions are “pitches” for written experience reports. Only after a submission has been accepted does the actual writing begin. So as reviewers, we’re judging not only the topic of the pitch but also whether the submitter will be able to write a compelling report. Champions of experience reports also commit to shepherding the writing of the reports. These shepherd-champions commit to reviewing and commenting on drafts of reports as they are written over a period of several weeks. Now that’s real commitment! Frequently we have more championed submissions than room in the conference program. So our judgments come down to some difficult choices.

Before we hold our review meeting, we ask reviewers to give us two lists: submissions they’d like to shepherd and an optional list of submissions they’d like to see on the program (but do not want to shepherd). At our meeting, we then have a lively discussion where champions forcefully advocate for their proposals and gain others’ support. Once again, I find we spend most of our time discussing those submissions that have mixed reviews. But we also spend a lot of time listening to champions and then, as a group, making tradeoffs between submissions (remember, we have more good submissions than we have capacity to accept). The message we convey to all reviewers is that if you really want to shepherd a submission, we as a group will support your decision to be a shepherd-champion. But let’s discuss first.
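
As an illustration of how those two lists might be tallied before the meeting, here is a small sketch; the data layout and names are hypothetical, and the code only surfaces where tradeoffs loom. The group still makes the actual choices.

```python
from collections import defaultdict

def summarize_preferences(preferences, capacity):
    """Tally the two pre-meeting lists from each reviewer.

    `preferences` maps a reviewer to {"shepherd": [...], "support": [...]}:
    submissions they'd champion and shepherd, and submissions they'd like
    on the program but won't shepherd (hypothetical structure). Returns
    each championed submission with its would-be shepherds and supporters,
    plus a flag for whether the group must make tradeoffs because there
    are more championed submissions than program slots.
    """
    shepherds, supporters = defaultdict(list), defaultdict(list)
    for reviewer, lists in preferences.items():
        for sub in lists.get("shepherd", []):
            shepherds[sub].append(reviewer)
        for sub in lists.get("support", []):
            supporters[sub].append(reviewer)
    championed = {
        sub: {"shepherds": names, "supporters": supporters[sub]}
        for sub, names in shepherds.items()
    }
    return championed, len(championed) > capacity

# Example: three championed submissions but only two program slots.
prefs = {
    "Ana": {"shepherd": ["Scaling Stories"], "support": ["Mob Programming"]},
    "Raj": {"shepherd": ["Scaling Stories", "Legacy Rescue"], "support": []},
    "Kim": {"shepherd": ["Remote Mobs"], "support": ["Legacy Rescue"]},
}
championed, needs_tradeoffs = summarize_preferences(prefs, capacity=2)  # True
```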

We can’t guarantee the quality of any final report. We base our judgments both on what the submitters have written (in many cases there has been a visible back-and-forth between submitters and reviewers that has led submitters to reshape and refine their proposals) and on the convincing arguments of champions.

Judging conference submissions is subjective. Our process acknowledges that. We accept the risk of selecting a less-than-stellar report proposal over missing an opportunity for a novel or insightful report.

Is it our goal to eliminate noise in our decision-making? Where we can, yes. But that isn’t our only goal. If we tried to eliminate noise entirely, we might end up establishing standards for experience report submissions that would inadvertently filter out novelty. In our search for a bit of euphony, we stretch to accept a submission if there is a convincing champion. Consequently, we accept a little variability (and unpredictability) in our decision-making. However, at the end of our review process, reviewers are generally happy with the proposals we accept, happy with their shepherding assignments, and eager to begin working with their experience report authors. An important aspect of our process, which cannot be overstated, is that we also work hard to make good matches between each champion-shepherd and the prospective authors. Not only do reviewers buy into the review process, they also commit to being ongoing champions.

Noise reduction is important in many situations, especially in group decisions. Paying careful attention to how a group is informed, discusses, and then decides can reduce noise. Paying attention to the voices of champions is one way to turn up euphony. By tuning your decision-making processes you can achieve both goals.