
Mindreading – it’s the stuff of comic books and science fiction. Meanwhile, in the real world, Dr. Paul Ekman struggles with the implications of his discovery of micro-expressions and the emotions they reveal (see Nonverbal Messages and Telling Lies), and Jonathan Haidt believes our ability to read others’ intentions is the point at which we became the truly social and cooperative species we are today. (See The Righteous Mind.)

Somewhere between superhero powers and the reality of our evolution lie the real questions. How does it work? How is it that we have any capacity to read another’s mind? What allows us to “know” what is in someone else’s head? These are the questions that plunged me into the academic writing of Mindreading.

Making Models

Other than Steven Pinker, I don’t know anyone who claims to know exactly How the Mind Works. In his book of the same title, Pinker attempts to walk through the topic, but my initial journey through the material was called on account of boredom. It’s back on my list to try again, but I can’t say that I’m looking forward to it. The neurology books I’ve read can describe the firing of neurons and their structure – but not how they work together to produce consciousness. (See Emotional Intelligence, Incognito, and The End of Memory.)

Psychology has its problems, too. Science and Pseudoscience in Clinical Psychology and The Heart and Soul of Change are both clear that psychology doesn’t have all the answers for how the mind works. The DSM-5 is a manual of the manifestations of psychological problems without any understanding of what’s broken or what to do about it. It’s sort of like a categorized list of all the complaints people have had about their cars when they take them to an auto mechanic. Warning: Psychiatry Can Be Hazardous to Your Mental Health speaks of the rise of drugs with limited, if any, efficacy, and of how we still don’t really know how to treat mental health problems effectively.

With all these problems, one might reasonably wonder why we bother making models at all. The answer lies in a simple statement from the statistician George Box: “All models are wrong, but some models are useful.” We make models because each one moves us closer to an approximation of reality. Much of Mindreading is spent exploring the authors’ model of mindreading, comparing it to the models that others have proposed, and showing how it builds on them.


Telling Lies

I ended my review of Telling Lies with the idea of stealing the truth: that is, how the detection of lies could be used to take the truth from those who wished to keep it secret. This is, for me, an interesting moral dilemma. Our ability to read minds, to have shared intentionality, allowed us to progress as a species. It was an essential difference, just as our ability to use tools was. At the same time, we believe that we should have a right to keep our thoughts private.

Mind reading, or shared intentionality, has been one of the greatest factors in our growth as a species, and at the same time we struggle with what it means.

Understanding Beliefs

Show a small child of one or two years old what’s in a box, and close it. Watch as their playmate enters the room, ask the child what their playmate will believe is in the box, and they’ll confidently name the item you showed them. Of course, the playmate has no idea what’s in the box. Young children are unable to comprehend that the beliefs they have aren’t the beliefs that everyone has. They believe the illusion that their brains are creating. (See Incognito for more.) However, somewhere around three years old, if you revisit this test, you’ll find the child identifies that their perceptions and their playmate’s are not the same.

There’s a transition from the belief that everything is the same for everyone to a more nuanced understanding that your beliefs and others’ are different. However, differentiating between you having a belief and someone else not having it, or having a different one, doesn’t help you understand their desires.

Reading Desire

Understanding desires is something else entirely. It’s one thing to understand that someone else doesn’t know what’s in a box but something entirely different to understand that not everyone loves Brussels sprouts. Young children tilt their heads like a confused puppy when you tell them that you don’t desire something that they do.

Soon after they’re able to accept that you don’t have the same desires they have, they start to try to figure out what your desires are. They begin looking for markers in behavior that either confirm or disconfirm that your desires match theirs. They look for whether you take the Brussels sprouts from the buffet.

Children infer desire from behavior, or the lack of it, in more or less the same way that adults assess others’ desires. The models in our heads and the number of markers we’re able to use expand, but, fundamentally, it’s the same process. Where we more frequently get off track is in reading intentions.

Reading Intention

“Fundamental attribution error” is the name psychologists give to our tendency to explain others’ behavior by their character and intentions rather than their circumstances. (See Thinking, Fast and Slow.) It’s our tendency to leap to conclusions, and, too often, to make the wrong leap about what other people were intending.

When it comes to leaping, Chris Argyris has a ladder. His Ladder of Inference describes how we make assumptions and draw conclusions about other people and what is going on inside of them. Most of the time, when we talk about the Ladder of Inference, we’re talking about the problems it causes. (See Choice Theory.) We’re talking about where it misses the mark. However, the inference machinery that lets us read someone else’s intentions is a marvelous piece of mental equipment.

Consider Gary Klein’s work in Sources of Power and Seeing What Others Don’t, which lays out the mental models we use to simulate the world around us. Reading intentions means that we model the mental processing of other people. This sort of box within a box is something virtualization software has since mastered, but it wasn’t practical for the first several decades of computing. We know that a mind can simulate the processing of another mind – but how?

What’s the Harm in a Thought?

Research has shown that thoughts can be harmful. They can lead to stress responses and the damage that follows. (See Why Zebras Don’t Get Ulcers.) A thought or belief can even rewrite history. People struggle with the curse of knowledge (see The Art of Explanation for more). We simply can’t see how people could ever have failed to realize that the round wheel is best. Our awareness of the current state shapes our perception of the past.

Andrew Carnegie is perhaps my best example of a man who understood the power of a thought. In his time, he was called a “robber baron.” He was reviled. However, through his gift of public libraries, he shaped people’s perceptions of him – for generations. The thought that he is a benefactor of public knowledge pushes out the incompatible robber baron thought.

Thoughts are substantially more powerful than we give them credit for. They can change our biology. They can change our world, and, ultimately, they can change the world. Incompatible thoughts wage a war inside our heads, duking it out to see which one gets to survive. F. Scott Fitzgerald described the test of a first-rate intelligence as the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function.

The harm in a thought can be how it pushes out other thoughts – necessary thoughts. (See Beyond Boundaries for more on confirmation bias.)

Possible World Box – The Heart of Simulation

At the heart of our ability to project the future and to simulate situations is the possible world box. In this box, the bounds of our perception of reality are weakened. We copy our thoughts and expectations into this box from our belief box – but inside the possible world box, anything is possible. We can overwrite our beliefs. We can change our world view – at least for a moment. The possible world box is where we simulate. We simulate the future. We simulate other people and other situations.

Without the possible world box (or some equivalent), we would not be able to simulate at all. We’d be limited to the experiences that are directly within our perception. With a possible world box, we can create flights of fancy and any sort of world or simulation we might like – including what might be going on inside another human.
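
To make the architecture concrete, here’s a toy sketch in Python. It’s my own illustration, not the formalism from Mindreading, and the names (PossibleWorldBox, suppose, simulate) are invented for the example. The point is the copy-in semantics: the simulation is seeded from our real beliefs but can’t alter them.

```python
from copy import deepcopy

class PossibleWorldBox:
    """Toy sandbox: seeded from real beliefs, free to diverge from them."""

    def __init__(self, beliefs):
        # Copy the beliefs in, so nothing we do here touches the originals.
        self.world = deepcopy(beliefs)

    def suppose(self, proposition, value):
        # Inside the box, anything is possible: overwrite freely.
        self.world[proposition] = value

    def simulate(self, proposition):
        # Read off what holds in this hypothetical world.
        return self.world.get(proposition)

beliefs = {"the box contains": "a toy car"}   # the "belief box"
sandbox = PossibleWorldBox(beliefs)
sandbox.suppose("the box contains", "candy")  # a flight of fancy

print(sandbox.simulate("the box contains"))   # candy
print(beliefs["the box contains"])            # a toy car, unchanged
```

The deep copy is the design choice that matters: the sandbox starts from reality, but edits made inside it never reach the original store.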

It’s this ability to simulate that is unique to our human existence, and it’s one fraught with problems. Many of these problems revolve around the challenge of cognitive quarantine.

Cognitive Quarantine

It’s great that we have a possible world to run simulations in, but what do we do with the results of those simulations? If we had complete cognitive quarantine, there would be no way to migrate the output of our simulations into our belief system. So, we clearly need a way to take things from the possible world box, the output of the simulations we run there, back to our beliefs. This is where we get into trouble.

Suddenly, it’s possible to get things from the possible world box – which aren’t constrained by reality – into our belief system. The mental mechanisms that regulate this process are far from perfect. In fact, we know through research that the introduction of information into a simulation can bleed into beliefs about the real world.
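
Continuing the toy sketch from above (again, my illustration rather than the book’s mechanism), quarantine amounts to a gate between the sandbox and the belief store, and the trouble described here is that the gate leaks:

```python
import random

def write_back(beliefs, simulated, keep, leak_rate=0.05):
    """Toy quarantine gate: copy back only the conclusions the `keep`
    predicate approves, except that, with some small probability,
    hypothetical content leaks through anyway."""
    for proposition, value in simulated.items():
        if keep(proposition) or random.random() < leak_rate:
            beliefs[proposition] = value

beliefs = {"doors keep us safe": True}
simulated = {
    "conclusion: check the lock at night": True,  # meant to keep
    "the monster under the bed is real": True,    # meant to stay hypothetical
}
write_back(beliefs, simulated, keep=lambda p: p.startswith("conclusion:"))
```

With leak_rate at zero we’d have complete cognitive quarantine, and simulation would teach us nothing; with any nonzero rate, content unconstrained by reality can land among our real beliefs.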

I wonder whether schizophrenia as we understand it is really a failure of the mechanisms designed to limit, regulate, and control the flow of information out of the possible world box, such that the possible world leaks into our real world and our real beliefs. Once that happens, it becomes fascinatingly hard to loosen the belief. (See Change or Die for more.)

Displacing the False Belief

Let’s say you are placed in a situation of seeing a set of suicide notes, some fake and some real. You’re asked to sort them into fake and real. You’re told that your sorting is very good, much better than chance. Later, you’re told that the feedback was wrong. In truth, all the suicide notes were fake. The whole experiment wasn’t about sorting suicide notes. It was about the persistence of beliefs. Then you’re asked whether you’re good at sorting suicide notes, the fake from the real.

Your perception will have changed. You’ll believe that you’re good (or better than average) at sorting real suicide notes from fake ones. You’ve been told, by the same researcher who told you that you were good, that they were lying. If you were completely rational, you would hold no belief about your ability to sort suicide notes. However, the research shows that you will. You’ll hang on to the lingering belief that you are good at this sorting.

In this very controlled experiment, you received direct evidence that you are not good at the task, and yet the belief persists. What does this say for the beliefs that leak out of the possible world box? How difficult would it be to displace a bad belief if you don’t have direct, disconfirming evidence? Would it even be possible? In many cases, it isn’t.

Inference Mechanisms

We’ve got finely tuned inference engines. We ascribe our thoughts to others. In fact, this is something all young children do. Shortly after they discover object permanence, that is, that an object doesn’t disappear when it moves out of their field of view, they start to expect that what they know is what everyone knows. If they see an object moved behind another until it’s hidden, they expect that other children who didn’t see it get hidden will still know where it is. They infer that, because they know it, everyone should know it.

As we get older, our inferences get more complex. We move from being able to identify the number missing in a series to being able to infer what someone else believes based on their behaviors. We test possible beliefs in the possible world box until we can find a belief set that could create the behaviors we’re observing.
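
That search can be written down as plain generate-and-test. This is a toy rendering of the idea with invented names, not the book’s algorithm:

```python
from itertools import product

def infer_beliefs(observed, candidates, predict):
    """Toy generate-and-test: try candidate belief sets in the 'possible
    world box' until one reproduces the observed behavior."""
    for belief_set in candidates:
        if predict(belief_set) == observed:
            return belief_set
    return None

# Hypothetical scenario: they skipped the sprouts. Do they dislike
# them, or did they just not see them?
candidates = [
    {"likes_sprouts": likes, "saw_sprouts": saw}
    for likes, saw in product([True, False], repeat=2)
]

def predict(b):
    return "takes sprouts" if b["likes_sprouts"] and b["saw_sprouts"] else "skips sprouts"

print(infer_beliefs("skips sprouts", candidates, predict))
# {'likes_sprouts': True, 'saw_sprouts': False}
```

Notice that several belief sets can explain the same behavior; the sketch simply returns the first match, which hints at why the failures of prediction discussed below are inevitable.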

Behavior Prediction

In many ways, our mental systems evolved to allow us to predict the behaviors of others. That is, we want to know what to expect from others. We predict behaviors because, as social animals, we know that our safety depends on how others behave.

Our behavior prediction engine is fed information through play and through our experiences. (See Play for more on the role of play.) As we amass more data, we expect that our ability to predict others’ behaviors improves. We do this because, by predicting the behavior of others, we can learn to work together and stay safe.

Failure of Prediction

Though we’re good at predicting other people’s behavior, failures of prediction are inevitable.

The more certain we are of how we believe someone will behave, the more hurt and betrayed we feel when they don’t meet our expectations. (See Trust => Vulnerability => Intimacy for more clarity.) In our evolutionary history, we needed to know how someone would behave, because it quite literally could mean the difference between life and death.

Kurt Lewin proposed a simple model for behavior prediction. Behavior, he said, is a function of both the person and the environment: B = f(P, E). So, it’s not possible to predict behavior without considering both the person and the environment. Folks like Steven Reiss have worked to characterize the personal factor of behavior by isolating and identifying 16 basic motivators, sort of like a periodic table of elements for motivation. (See The Normal Personality.) Others have proposed other ways of categorizing people to make the explicit prediction of behaviors easier. (You can find more in The Cult of Personality Testing.)

Despite all of these tools and models, we still fail to predict others’ behavior. Julius Caesar’s “Et tu, Brute?” is perhaps the most historic example of a betrayal that cost a life. The good news is that not every failure to predict is a life-or-death situation. Sometimes it’s trivial.

Pretense – Something and Not at the Same Time

Have you ever picked up a banana, held it to your head, and started to talk into it like a phone? Or have you seen a child pick up a block and talk into it like a cell phone? These are examples of pretense. Pretense is the basic forerunner of our ability to simulate the minds of others and the start of the possible world box. We can accept that what we’re “talking on” can’t make calls and at the same time pretend to be doing just that.

The interesting part is that we can imbue the attributes of the target item, the phone, onto the source item, the banana, while at the same time recognizing that the banana is still a banana. This bit of cognitive distinction is why the possible world box makes so much sense. We can pin pretend beliefs into a possible world while still recognizing the beliefs that are “real.”
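
In code terms, and this is a loose analogy of my own rather than anything from the book, pretense looks like layering the target’s attributes over the source without ever modifying the source:

```python
class Banana:
    kind = "banana"

    def edible(self):
        return True

class PretendPhone:
    """Toy pretense: phone behavior layered over a base object
    that keeps its real identity."""

    def __init__(self, prop):
        self.prop = prop  # the real banana, unchanged

    def dial(self, number):
        # An attribute of the target item, imbued onto the source item.
        return f"pretend-dialing {number} on a {self.prop.kind}"

banana = Banana()
phone = PretendPhone(banana)
print(phone.dial("555-0100"))  # the pretend layer
print(banana.edible())         # the banana is still a banana
```

The base object is never mutated; the pretend behavior lives in a separate layer, which mirrors the way the possible world box keeps pretend attributes apart from real beliefs.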

So, we start by pretending one thing is another. And we end up with a way that we can read other people’s minds. It may not be the stuff of comic books. However, Mindreading is pretty cool – and something worth learning more about.
