Book Review: How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life

It’s what we know that ain’t so that gets us in trouble. Whether you prefer that Artemus Ward quote from To Kill a Mockingbird or you attribute the saying to Mark Twain, the sentiment is the same. Knowing something rarely gets us in trouble. Thinking that we know something we don’t can have consequences ranging from bad to tragic. Understanding how our thinking goes wrong is at the heart of How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life. Thomas Gilovich’s work has come up in several places in my reading. He’s always associated with startling findings, like that 94% of college professors felt they were better than their average peer, or that 70% of high school students thought their leadership ability was above average. So, reading his work was a walk down the winding path of how we see ourselves – and how we tend to see ourselves in distorted ways.

Believing What We Want – Until We Can’t

The funny thing is that, unbounded by reality, we’ll believe some crazy things. Without measurement, we can believe we’re the best physician, architect, developer, or whatever our career may be. Without specific, tangible, and irrefutable evidence, our ego can make up any story it likes. We’ll emphasize the characteristics we’re good at and ignore the ones where we don’t feel we excel. We’ll use whatever reference point makes us feel better about ourselves. “At least I’m not as bad as they are” is a common internal monologue. (See Superforecasting for more about the importance of preselecting measures.)

In the land of beliefs, we fall victim to numerous cognitive biases and errors. The fundamental attribution error leads us to judge others harshly for their dispositions while explaining away our own failures as products of circumstance. (See Thinking, Fast and Slow for more on the fundamental attribution error.) We can make up our minds – and be completely incorrect. Incognito exposes some of the ways we fool ourselves through simple but effective optical illusions.

The truth is that we will believe what we want to believe – about ourselves and others – until some inescapable truth forces us to acknowledge that our beliefs were wrong. Even then, we’re likely to minimize the error, as explained in Mistakes Were Made (But Not By Me). We are amazed by the wives who report that they trusted their husbands, only to realize after the fact that their blind trust was misplaced. They had every reason to suspect there was a problem, but they refused to see it – with disastrous consequences. (See Trust => Vulnerability => Intimacy, Revisited and its references for a detailed conversation about trust.)

Benevolent Dolphins

Sometimes the problems with our rationality are driven by the biased way we receive information. If someone is led back to shore by a dolphin, we have evidence that dolphins are benevolent. However, if dolphins were to lead someone out to sea to their death, that person wouldn’t be alive to report it, so the information we get is biased. This is just one way our reasoning about things can be skewed or distorted. We’re also more strongly aware of things that happen than of things that don’t.

Say I have a weird dream, and it seems oddly like a premonition. I know that dreams are most frequently recombinations of the things I experienced during the day, the product of the brain’s natural drive to impose order and sense on my world. But suppose in my dream there’s a helicopter crash. The next day, the news reports that there was one – and it causes me to remember the dream. If there’s no crash, I don’t remember the dream, so I never count it as a failed premonition. Even just through the vagaries of my memory, I can manufacture a higher perceived rate of success and start to believe in my premonition capabilities.

I suppose that this might explain why more people believe in extrasensory perception (ESP) than believe in evolution – they get positive hits for the random premonition and few memories of failures. There aren’t many opportunities to see evolution in our daily lives.

Similarly, the ability of astrologers to make seemingly accurate predictions has led to there being 20 times as many astrologers as astronomers. (See the Forer effect in Superforecasting for more about the perception of accuracy in astrology.)

Molehills and Mountains

We tend to believe that things come from other things of roughly the same size. While we can accept that an oak tree grows from an acorn, we generally expect that something large came from something large. (See Bohm’s On Dialogue for more about acorns being the aperture through which the oak tree comes.) Judith Rich Harris explains in No Two Alike how even small differences between twins may get amplified over time. These small differences become large ones, much as two cars heading off in slightly different directions can end up a long way apart if they travel long enough.

That’s what happens with self-fulfilling prophecies. They create their big effects by subtly shaping the results bit by bit. Say that you believe a child is good at math, so you encourage or praise them just a bit more. Over time, these small biases add up to a very large difference. Self-fulfilling prophecies can’t start from nothing. There must always be some kernel of truth, something to get the process rolling. Once the process has started, it reinforces itself.

As a result, when we emphasize molehills, they become mountains. Instead of staying small or getting smaller, they get larger as we keep adding bias toward seeing them as bigger, more annoying, or more challenging. Consider a married couple in which the wife becomes progressively more frustrated with her husband for not refilling the ice cube trays. In the grand scheme of their lives together, is filling or not filling ice cube trays important? Probably not, but many arguments have started over sillier things – like which way the toilet paper roll should be placed on the holder.

Everybody Wins

I’m willing to bet – at better than even odds – that your favorite candidate will win the next presidential debate, at least in your mind. That’s what researchers found when they asked who won after several presidential debates. The actual performance of the candidates didn’t matter. What mattered was whom someone supported before the debate started. Two rational people (if we want to call people rational) can listen to the same arguments and, because of their prior beliefs, come away with opposite conclusions about the same events. They feel their favorite candidate nailed the key issues, while the opponent only revealed the weaknesses of their own positions.

This is sort of the way that gamblers rewire their brains to think about losses as “near wins.” There’s always a reason. They could have drawn another card, or someone else could have done things differently. While they may accept their wins, they’re not good at counting their losses with equal weight.

Listening to the Opposition

While it’s commonly believed that we fail to listen to opposing points of view, that’s not always true. Often, we will listen to the opposing point of view more carefully than we listen to the points of view that support our position. The opposing points of view are scrutinized more carefully. We’re looking for flaws in the arguments. We’re looking for something that just isn’t right. When we find it, we latch on to it like it’s the only thing that matters – though, in truth, it may not matter at all.

We treat our desired conclusions differently than those we oppose. For our desired conclusions, we ask ourselves, “Can I believe this?” For those we oppose, the question is “Must I believe this?” The standard of evidence is much, much higher for the things we must believe. So, while it may appear that we don’t look at opposing points of view – and that is sometimes true – the truth is that we often pursue them with greater fervor, looking for reasons to justify why we don’t have to believe them.

Hand-Me-Down Stories

So much of what we know about the world today doesn’t come through direct experience; it’s shaped by the experiences of others. As stories are retold, the evidence that isn’t consistent with the theme is removed or neglected. We do this in our own minds to produce the consistency we desire. (See The Tell-Tale Brain and Incognito for more.) The more powerful version is what happens when others recount a story for us.

Take, for example, some of the experiments we’ve all heard about. There’s The Marshmallow Test, which was supposed to predict future success through the skill of delayed gratification. That’s likely true – but what most people don’t recognize is that marshmallows were only one of the sugary treats used to entice the children. Nor is it generally recognized that the follow-up connecting the results to later success wasn’t originally planned.

There’s been a great deal of discussion about Milgram’s experiments on obedience to authority and the willingness of a person to subject another to what was perceived as a potentially fatal shock. What’s not often recognized is that when the experiments were run away from Yale’s main campus, with all its trappings of authority, in an off-campus office complex instead, the results couldn’t be replicated.

Take the Stanford Prison Experiment. Philip Zimbardo’s famous experiment is supposed to show how the setup of prisons inherently leads to inhumane treatment. He’s personally made a career out of the experiment and its conclusions. However, many people are questioning how well controlled the experiment was. If the “guards” were coached to be cruel, or the reports about the “inmates” weren’t factual, the whole thing falls apart. (For more on the experiment, see The Lucifer Effect, and for more about the criticisms, see “The Lifespan of a Lie.”)

That brings us to the more obscure story of Little Albert, which shaped the profession. In 1920, John Watson and Rosalie Rayner published “Conditioned emotional reactions” in the Journal of Experimental Psychology. The research was widely quoted in psychology textbooks – which attributed conclusions and outcomes to the experiment that the research doesn’t support. The problem is that most textbook authors didn’t read the original research: they relied on someone else’s work, and they further summarized and simplified it. (If you’re interested, a separate recounting of this research is available in “Studies in Infant Psychology.”)

The result of the way we acquire information today is that we often reach the wrong conclusions, because essential details and limitations of the research have been removed. (This is one of the reasons why I’m so particular about reading source materials when I can find them – and why I go out of my way to find them.)

Improvements in Longevity

Medical science is amazing. Our life expectancy is now roughly double what it was a century ago. Paired together, these statements make it appear that our improvements in medicine are responsible for the increase in life expectancy. However, that’s not the case. Most of the increase is the result of sewage disposal, water purification, pasteurization of milk, and improved diet. Our chances of surviving childhood are now much greater – admittedly in part due to vaccines – and every person who would have died as a child but instead lives raises the average life expectancy dramatically.
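To see how much childhood survival alone can move the average, here’s a toy calculation in Python. The cohort numbers are made up purely for illustration; they’re not from the book.

```python
# Toy cohort illustrating how reducing child mortality raises average life
# expectancy even if nobody lives any longer as an adult. Numbers are made up.

def life_expectancy(cohort):
    """cohort: list of (age_at_death, fraction_of_population) pairs."""
    return sum(age * fraction for age, fraction in cohort)

before = [(2, 0.30), (70, 0.70)]   # 30% die in early childhood, the rest at 70
after  = [(2, 0.02), (70, 0.98)]   # childhood deaths mostly eliminated

print(f"Before: {life_expectancy(before):.1f} years")   # ~49.6
print(f"After:  {life_expectancy(after):.1f} years")    # ~68.6
```

No adult lives a day longer in the second scenario, yet the average jumps by almost twenty years.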

What we believe about the change in life expectancy – that it’s a slow climb driven by individuals living to older ages than they used to – isn’t correct. Nor is our assessment of the reasons for the increase. For all the great things medical science has done, it pales in comparison to the advances we’ve made in systematically protecting ourselves from the factors that used to kill us in childhood.

Blaming the Victim

Faith healers may be your only option when traditional medicine gives up. However, faith healers come at a cost – and it’s more than the money they charge. They often transfer the blame for their failure to you or to your god. The advent of statistical measures in healthcare has improved the quality of care. Non-traditional healers may stand up to statistical validation – or they may not. We don’t know, because there’s been very little study of these kinds of healing. What we do know is that faith healers have a propensity to explain their failures away either as a lack of faith on the part of the patient or as God’s will being against the healing.

The first is a direct blaming of the victim of the illness: if they’d just had more faith, they could have been healed. The second is a more indirect blaming of the victim. If you believe that God wants ill for you only when you’ve done something wrong, then either explanation lands the blame back on you – and costs you belief in yourself.

Converting the Improbable to the Inevitable

Statistics don’t come easily to humans. While we develop rules of thumb based on our experiences, when it comes to a hard understanding of the facts, we fail to recognize how even the improbable becomes inevitable given enough time. Flipping ten heads in a row is improbable for any one person – about one chance in a thousand – but if you have enough people tossing coins for enough time, it’s all but certain to happen to someone.
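Here’s a minimal Python sketch of that idea – my own illustration, not from the book – assuming a fair coin and exactly one ten-flip attempt per person:

```python
# How an improbable event becomes near-inevitable when enough people try.
# Assumes a fair coin and one ten-flip attempt per person (illustrative numbers).

p_streak = 0.5 ** 10          # one person's chance of ten heads in a row: ~1 in 1,024
people = 1_000_000            # hypothetical crowd size

p_nobody = (1 - p_streak) ** people   # chance that no one in the crowd succeeds
p_someone = 1 - p_nobody              # chance that at least one person succeeds

print(f"One person's chance:     {p_streak:.6f}")
print(f"Chance someone succeeds: {p_someone:.15f}")
```

With a million people each trying just once, the chance that nobody pulls it off is effectively zero – which is exactly the sense in which the improbable becomes inevitable.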

How many people need to be in a group for there to be a 50-50 chance that two of them share the same birthday? It turns out that a group of 23 contains 253 different pairs of people, each pair with a 1-in-365 chance of matching – enough to give a slightly better than even chance that some pair in the group shares a birthday. Of course, this mathematical fact doesn’t feel right to our brains.
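If the 23-person result seems implausible, here’s a short Python check – again my own sketch, not the book’s – that computes the exact probability by multiplying out the chance that every birthday in the group is distinct, ignoring leap years and assuming all 365 days are equally likely:

```python
# Exact birthday-problem probability: chance that at least two people in a
# group of n share a birthday, assuming 365 equally likely birthdays.

def shared_birthday_probability(n: int) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365   # person i+1 must avoid all earlier birthdays
    return 1.0 - p_all_distinct

for n in (10, 23, 30, 50):
    print(f"{n:>2} people: {shared_birthday_probability(n):.3f}")
```

At 23 people the probability comes out to about 0.507 – just past the even-odds mark – and it climbs quickly from there.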

It also doesn’t feel right that we know things that aren’t so – and, as Mark Twain reportedly said, that’s what gets us in trouble. If you want to avoid that trouble, How We Know What Isn’t So is worth the trouble to read.
