
We’ve all seen things that aren’t there.  The stick that looks like a snake.  Shadows that move in the darkness that look eerily like the monsters of our childhood.  Sometimes, we’ve also failed to see what is clearly there.  We’ve missed stop signs and warnings that can keep us out of danger.  Noise: A Flaw in Human Judgment is a study of the things that we do see, those we don’t, and how we can get to a more reliable understanding of the world around us.

Two of the authors of Noise, Daniel Kahneman (Thinking, Fast and Slow) and Cass Sunstein (Nudge), are highly respected, including by me.  It was in an email conversation with Kahneman that I realized that Noise had been published and that I had to read it.

Bias and Noise

A lot of attention has been focused on bias in recent years.  We look for bias in artificial intelligence, in our hiring practices, in how we promote, and in a million other ways where we may subtly (or not so subtly) favor particular outcomes.  Bias, from a statistical point of view, is a systematic deviation from the truth in a consistent direction.  Noise, on the other hand, is random scatter around the truth.

What makes noise interesting is that, in some cases, it may account for more of the overall error than bias.  This means that if we want to move towards better justice, we may be better served by addressing the noise than by attempting to correct the biases that we may be facing.  That isn’t to say that biases aren’t important or that we shouldn’t seek to eliminate them, but our experience is that biases are remarkably persistent, and it may be easier to reduce noise than to address bias.
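
To make the distinction concrete, here’s a minimal sketch (in Python, with invented numbers) of the error decomposition that Noise builds on: overall error, measured as mean squared error, splits into the square of the bias plus the square of the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

truth = 100.0                                     # the true value being judged
judgments = truth + 5 + rng.normal(0, 10, 1_000)  # +5 bias, sd-10 noise

bias = judgments.mean() - truth          # systematic deviation from the truth
noise = judgments.std()                  # scatter around the judges' own mean
mse = np.mean((judgments - truth) ** 2)  # overall error

# MSE = bias^2 + noise^2, so random scatter can contribute as much
# to the total error as a systematic lean does.
print(f"bias={bias:.2f}  noise={noise:.2f}  mse={mse:.2f}")
print(f"bias^2 + noise^2 = {bias**2 + noise**2:.2f}")
```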


Finding the Target

Perhaps the most challenging places where noise appears are where we have the greatest trouble defining the target.  For instance, psychiatry is notoriously noisy.  While the American Psychiatric Association (APA) has invested great effort in the development of the Diagnostic and Statistical Manual, now in its fifth edition (DSM-5), the criteria for diagnosing a disorder are sufficiently vague that it’s difficult to get agreement between clinicians on a diagnosis.  More disturbing, it’s difficult to get the same professional to make a consistent diagnosis when presented with the same facts.

While there are techniques that can systematically reduce noise, judgments that lack a clear target will always involve some noise.

Predictions are Noisy

Other places where professional judgment is involved are also necessarily noisy.  In Superforecasting, Phil Tetlock explains how forecasts (predictions) are difficult to get right and identifies the factors that allow some forecasters to be more effective than others.  Predictions, by their nature, don’t involve clear criteria, and cause-and-effect relationships are inherently noisy.  Some forecasters over-weight a single factor and therefore swing their projections too far in response to it.

Errors Don’t Cancel

In The Wisdom of Crowds, James Surowiecki explains how, in many situations, the errors (noise and bias) cancel themselves out.  From the weight of an ox to the number of jellybeans in a jar, the average of the guesses is often very close to the actual number.  Certainly, Enrico Fermi’s technique of breaking down a problem into numbers that can be easily guessed or estimated stands as testimony that people can work together to come up with accurate answers.  His classes at the University of Chicago famously estimated the number of piano tuners in Chicago.  (See How to Measure Anything for more.)  However, this stands in contrast to the Drake equation, which is designed to estimate the number of detectable extraterrestrial intelligent species.  Because it’s a straight multiplication of a large number of factors that are not knowable, the results are wildly divergent, from countless extraterrestrial intelligences to zero.
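
A toy simulation (the factors below are invented stand-ins, not the Drake equation’s real terms) shows why the two cases behave so differently: independent errors around a single quantity cancel in the average, while errors in multiplied factors compound.

```python
import numpy as np

rng = np.random.default_rng(42)

# Case 1: average many independent guesses at one quantity.
# The individual errors largely cancel in the mean.
truth = 1000
guesses = truth + rng.normal(0, 300, 10_000)
print(f"average of guesses: {guesses.mean():.0f}")  # lands near 1,000

# Case 2: multiply several uncertain factors, Drake-style.
# Each factor here is uncertain by roughly a factor of ten,
# and the uncertainties compound instead of canceling.
factors = rng.lognormal(mean=0.0, sigma=np.log(10), size=(10_000, 7))
estimates = factors.prod(axis=1)
print(f"estimates span {estimates.min():.1e} to {estimates.max():.1e}")
```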

The key thing to recognize about noise is that it often does not cancel itself out – and it certainly doesn’t cancel out when a single, noisy decision is unjust to the person affected.  Consider mental health diagnoses (or missed diagnoses), child protection, child custody decisions, and criminal judgments.  Each decision impacts the individuals involved in irreversible ways.  There’s no solace for an innocent person who is convicted in the fact that some guilty person elsewhere has been set free.  Though the mathematical average may be right, that’s not fair to the wrongly convicted – nor to the victim of the person who goes free.

The Noise Audit

In recent years, with the advent of big data, statistics, and computing power, we’ve seen more and more datasets processed to reveal the noise and biases that had gone undetected.  Daniel Pink, in When, explains that you want to come in front of a judge after lunch rather than before, because you’re much more likely to be paroled.  While these studies often operate at the scale of massive datasets, it’s possible to do a more focused examination of how individuals behave at different times or how one individual compares to others.  Once you have “enough” data, you can see how one individual may be overly harsh or overly lenient compared to the average.  (How to Measure Anything is a good resource for knowing how much is “enough.”)
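
The core computation of a noise audit is simple enough to sketch.  Here’s a minimal Python version with hypothetical ratings, where several judges score the same set of cases:

```python
# Hypothetical audit data: three judges score the same three cases
# on a 1-10 severity scale (all numbers invented for illustration).
scores = {
    "Judge A": [6, 4, 7],
    "Judge B": [9, 8, 10],
    "Judge C": [3, 2, 5],
}

all_scores = [s for case_scores in scores.values() for s in case_scores]
grand_mean = sum(all_scores) / len(all_scores)

# Each judge's gap from the grand mean is "level noise" -- one judge
# being systematically harsher or more lenient than their peers.
for judge, vals in scores.items():
    gap = sum(vals) / len(vals) - grand_mean
    print(f"{judge}: {gap:+.2f} vs. the average judge")
```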

Jerry Muller called it The Tyranny of Metrics, yet metrics and measurement are the only way that we can know what is and what is not working.  The Heart and Soul of Change laments the lack of quality and consistency in psychotherapy, which it attributes largely to a lack of consistent measurement.  So, while it’s possible to overdo the desire to measure what is happening, it’s often the opposite problem that people find themselves fighting.

Systems and Cognitive Biases

Sometimes the systems we build and our cognitive biases play into our inability to detect noise.  Even the detection of noise creates conflict.  After all, if there are multiple perspectives on the same situation, there is necessarily disagreement – and conflict.  An increasing number of people are conflict-avoidant – particularly in groups and committees.  Collectively, these forces push us away from gathering the data and awareness that we need to discover and minimize the noise in our decisions.

We build systems and metrics that lead us away from an awareness of where our opinions differ – and that minimize the data that could surface the fact that there is noise in our systems.  Metrics that are easy to measure and evaluate get selected, because difficult metrics simply won’t be collected – or, when they are collected, won’t be evaluated.

The Uncomfortable Truth

In How We Know What Isn’t So, Thomas Gilovich explains the persistent delusions that we all have.  Whether it’s an impossible number of college professors who believe they’re better than average or students who believe in their leadership abilities more than they should, we systematically believe we’re better than we really are.  Our ego actively deflects the feedback that could allow us to calibrate and reset our expectations.  Believing that we’re better than we are allows us to feel safer in a world of volatility, uncertainty, complexity, and ambiguity (VUCA).  As Alan Deutschman explains in Change or Die, there is always the possibility that the world as we know it will be wiped out by an asteroid – but we don’t think about it, because to do so would immobilize us in fear.  (See Emotion and Adaptation for how fear develops – and, indirectly, why a fear of asteroids destroying the planet is hard to avoid.)

Values, Perceptions, and Facts

Much of the problem with building systems to detect noise comes in the form of confusion about what causes conflict and why it can be a good thing.  Conflict, in my opinion, arises for only two reasons.  The first is that the other person holds a different set of values.  The second is that the other person has a different perspective.

There are many ways of assessing the values of another person.  Steven Reiss’ work on the sixteen basic motivators, as shared in Who Am I? and The Normal Personality, provides one way of seeing what others value.  A more fundamental model of moral motivation comes from Jonathan Haidt’s work in The Righteous Mind.  It’s the interaction of these factors that can lead us to different conclusions even when we have the same data.

The second reason is our perspective, which is shaped by our experience and by whatever we pull up as relevant or salient to the topic.  These perspectives aren’t facts, but we often trust them like facts, because we hear them in our own voice.  We believe that we wouldn’t lie to ourselves – but we do.  In Telling Lies, Paul Ekman concludes that we must know something is false for it to be a lie.  When we’re talking to ourselves, we don’t know when we’re lying.

Of course, there are some verifiable facts – things that can’t be refuted, like the fact that the Sun rises in the east.  Unfortunately, these irrefutable facts are few and far between.  We often find conflicts where the perspectives are different, but both perspectives are treated like facts.  Values, too, can be treated like facts – like universal constants – even though everyone’s values are different.

Even parents find that their children hold different values than they do.  Some of those differences are likely generational (see America’s Generations for more).  However, many of them are due to the experiences the child has irrespective of the parents’ guidance.  (See No Two Alike and The Nurture Assumption for more about the impact of parents and others on a child’s values.)

I Contain Multitudes

In “Song of Myself,” section 51, Walt Whitman writes, “Do I contradict myself? / Very well then I contradict myself, / (I am large, I contain multitudes.)”  However, most people don’t recognize their own contradictions.  They’ll decide A in some circumstances and B in others – even when faced with the same data.  Objectively, we should make the same decision irrespective of the time of day or the degree of our hunger, but in reality, we don’t.  Instead, we make ad-hoc decisions based on little more than whim – and, when asked, we justify them.

When the bridge between the two hemispheres of the brain – the corpus callosum – was surgically severed, patients could see things in one visual field yet be unable to consciously explain what they saw.  However, they’d still act on this information – and make up stories to explain their actions.  (See Incognito for more.)

The Wisdom of One

Even though noise doesn’t always average out, sometimes it does.  The conditions under which the wisdom of crowds works best – primarily independence of the guesses – aren’t always strictly necessary.  It’s even possible for one person to harness Whitman’s multitudes and make two guesses whose average reduces the noise.  This is the same sort of result that Phil Tetlock found in Superforecasting: the best forecasters intentionally looked at problems from multiple points of view so they could average out the errors in their own estimates.

So, it turns out that with the right prompting – and even without complete independence – it’s possible to get better answers.  It may be that people who have a range of skills – the foxes – are better at this than others.  (See Range for more on foxes vs. hedgehogs.)
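
A small simulation (invented numbers again) shows this “crowd within” effect: a second guess from the same person is correlated with the first rather than independent, yet the average of the two still tends to land closer to the truth.

```python
import numpy as np

rng = np.random.default_rng(7)
truth = 500

# A first guess with independent error, and a second guess whose
# error is partly correlated with the first (same person, same head).
first = truth + rng.normal(0, 50, 10_000)
second = truth + 0.5 * (first - truth) + rng.normal(0, 45, 10_000)

def mean_abs_error(g):
    return np.mean(np.abs(g - truth))

# The average beats either guess alone, even without full independence.
print(f"first guess error:    {mean_abs_error(first):.1f}")
print(f"second guess error:   {mean_abs_error(second):.1f}")
print(f"average-of-two error: {mean_abs_error((first + second) / 2):.1f}")
```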

Integrating Information

The more expertise you amass, the more you believe that you can integrate information – but experts are “distressingly weak” in this regard.  At some level, this makes sense.  Gary Klein’s work shows that we build mental models in which we simulate our situations, so as long as the information we’re taking in is congruent with our model, all is well.  (See Seeing What Others Don’t and Sources of Power for more on Klein’s work.)  Efficiency in Learning calls these mental structures “schemas.”

While many experts believe that they’re good at integrating information, we have to recognize that most are not – it’s only those who focus on remaining open to new ideas and new perspectives that can continue to integrate new information – and adapt when things change.

Frugal Rules of Mechanical Aggregation

It was the year 2000, and I was a small part of an effort to improve care for patients with diabetes.  Primary care providers weren’t specially trained in caring for these patients, so care was spotty at best.  The solution I developed applied a set of rules to risk-stratify patients and went so far as to recommend actions for the providers based on best-practice thinking.  It wasn’t complicated, and it didn’t have any artificial intelligence in it.  However, it produced a statistically significant reduction in the key lab metric for diabetes care – it worked.

The rules were “frugal rules” – simple guidelines and thresholds that could guide behavior without being overly complex – and they worked.  Research shows that mechanical rules outperform clinical judgment in most cases.  No one wants to trust the computer to predict the best care – but it’s what they should do.  This isn’t to discount advanced AI techniques – it’s to say that you can get close to the best results with some simple guidelines.
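
To give a flavor of what frugal rules can look like, here’s a hypothetical sketch.  The thresholds and actions are invented for illustration – they are not the actual rules from that project, and certainly not clinical guidance.

```python
# A "frugal rules" risk stratifier: a handful of thresholds, no AI.
# All thresholds and recommended actions are hypothetical examples.

def stratify(a1c: float, ldl: float, days_since_visit: int) -> str:
    """Assign a risk tier and a suggested action from simple rules."""
    if a1c >= 9.0 or days_since_visit > 365:
        return "high risk: schedule a visit and review therapy"
    if a1c >= 7.0 or ldl >= 100:
        return "moderate risk: recheck labs in three months"
    return "low risk: routine annual follow-up"

print(stratify(a1c=9.5, ldl=80, days_since_visit=120))  # high risk
print(stratify(a1c=6.4, ldl=90, days_since_visit=200))  # low risk
```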

Even individuals armed with simple guidelines and checklists perform better.  It’s not the automation that does it, it’s removing some of the ambiguity around the correct thresholds and actions.  (See The Checklist Manifesto for more about the value of checklists.)

Similarity and Probability

When we’re estimating the probability of something happening, in many cases, our brains silently swap the question for one that’s easier to process.  (See Thinking, Fast and Slow for more on how System 1 makes this substitution so blindly that we’re not even aware of it.)  We trade the question of how probable something is for the question of how similar the situation is to something else that happened – and what happened in those circumstances.  The result is that we systematically make errors when we’re asked to predict probability, because we answer with similarity instead.
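
One place the substitution bites is base-rate neglect.  In this sketch (all numbers invented), a description fits a librarian nine times better than a farmer, yet because farmers are far more common, the farmer is still the better bet:

```python
# Similarity says "librarian"; probability must also weigh base rates.
# All numbers below are invented for illustration.

p_librarian = 0.01             # base rate: 1 person in 100 is a librarian
p_farmer = 0.20                # base rate: 20 people in 100 are farmers
p_desc_given_librarian = 0.90  # the description fits a librarian well
p_desc_given_farmer = 0.10     # ...and fits a farmer poorly

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
posterior_odds = (p_librarian * p_desc_given_librarian) / (
    p_farmer * p_desc_given_farmer
)
print(f"odds librarian vs. farmer: {posterior_odds:.2f}")  # ~0.45 -- still the farmer
```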

I Don’t Like It, So I Won’t Believe It

“I reject your reality and substitute my own” is a popular quip in modern culture, most recently associated with Adam Savage of MythBusters.  It’s what happens when someone doesn’t like the reality that they’re presented with and, as a result, refuses to believe it.  While on the surface it sounds ludicrous, it happens more often than one might imagine.

It’s hard to believe that people believe the Earth is flat – and yet that’s exactly what the International Flat Earth Research Society believes.  It’s founded on the premise that we’ve all been lied to, and the Earth is really flat – not round.  There are a number of other things you have to start believing for this to be true.  They are, however, things that members of the society seem to have no trouble accepting.

Many believed all sorts of crazy stories about the COVID-19 vaccines.  Everything from magnetism to superpowers and tracking devices was supposedly associated with the vaccines.  As of this moment, none of these things has been proven true – though I’m looking forward to super-strength if that particular story pans out.  The point is that people will hold on to what they believe so firmly that no amount of dissuasion will break them free of their beliefs.  (There are still plenty of people who believe that ivermectin and hydroxychloroquine are treatments for COVID-19, despite both having been thoroughly debunked by research.)

Diagnostically, people who completely refuse to accept reality may be described as experiencing psychosis (a detachment from reality) or, in some cases, schizophrenia (a different interpretation of reality).  Neither is helpful when you’re trying to have a rational conversation about how to reach a common understanding.

Decision Hygiene

Noise ends with a call to decision hygiene based on six principles:

  • The goal of judgment is accuracy, not individual expression.
  • Think statistically, and take the outside view of the case.
  • Structure judgments into several independent tasks.
  • Resist premature intuitions.
  • Obtain independent judgments from multiple judges, then consider aggregating those judgments.
  • Favor relative judgments and relative scales.

In short, use the structure of the way you approach decisions to help reduce noise – rather than create it.  The first step is to find a place to study the noise.
