Book Review – Complications: A Surgeon’s Notes on an Imperfect Science

Complications impact every aspect of our lives. We believe that we’ve got life all figured out, but then come the pesky complications to our orderly, perfect world. Atul Gawande speaks about medical complications in Complications: A Surgeon’s Notes on an Imperfect Science while simultaneously exposing the inner conflict that surgeons – and, indeed, anyone who provides care to another person – must wrestle with. I’ve reviewed two of Gawande’s more recent books, The Checklist Manifesto and Being Mortal – both are good, and they’re different from each other. They’re the reason I picked up Complications.

Imperfect Science

We’re wired by our nature to crave understanding of our world. We want to believe that we have it all figured out – or at least, if we don’t have it all figured out, someone else does. Someone else who will tell us the answers to the questions that we don’t even understand yet. In this yearning, we’re willing to overlook what we know to be reality.

Medical errors are the third leading cause of death in the United States. (See commentary about the research on NPR.) Let that sink in for a second. Only heart disease and cancer are more likely to cause death. Given our concern for healthcare-associated infections, it’s interesting to me that the researchers didn’t include them in the count. Though unwanted, they’re not considered 100% preventable and thus didn’t make the list – including them would add about 100,000 more deaths and move the needle from 250,000 to 350,000 deaths every year, all attributable to medical error.

However, most of us don’t think of this when we go to the doctor to ask them to evaluate our condition, adjust our medicines, or operate on us. We don’t consider that, if we stay in a hospital, the chances are good there will be some sort of error – whether consequential or not, it’s likely to be there. Gawande pushed for a solution to some of these errors in The Checklist Manifesto and made a compelling case that aviation doesn’t suffer from the same failure rates as medicine. Terri and I wrote a chapter in Information Overload that speaks to the unmanageable level of information that nurses must cope with.

The problem is that we speak of medicine as a practice and rarely pause to think that this means everyone is practicing. They’re practicing to become good, but they aren’t good to start with. (See Peak for more on how to become the best at anything.) Medicine isn’t nearly as much science as we’d all like it to be. The complexity of human systems means that knowing how best to support people is rarely easy.

Systems

Just as we pass over “practice” and rarely pause to consider that it means no one has mastered it, we similarly toss out the word “system” in medicine like it’s well understood. We have the cardiovascular system, the digestive system, the endocrine system, the nervous system, and more. As we zoom into any one of those systems, there’s a set of loops that keeps the system running. Some of those are internal, and some are provided by outside systems. For instance, the nervous system relies upon the circulatory system to provide the neurons with glucose and oxygen. These, of course, come from the pulmonary and digestive systems. Technically, glucose is managed by the endocrine system, which is, in turn, fed by the digestive system.
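
To make “a set of loops” a little more concrete, here’s a toy negative-feedback loop in Python. It’s a sketch only – the numbers are invented, and real physiology is vastly more complicated – but it shows the pattern each of these systems relies on: sense a deviation from a setpoint, then push back against it.

```python
# Toy negative-feedback loop, loosely inspired by glucose regulation.
# All constants are invented for illustration; this is not a physiological model.

target = 90.0   # setpoint the loop defends (think: mg/dL of glucose)
level = 140.0   # starting level after a disturbance (think: a meal)
gain = 0.2      # how strongly the corrective response answers the error

for step in range(10):
    error = level - target      # the loop senses deviation from the setpoint
    level -= gain * error       # a correcting signal pushes back toward the target
    print(f"step {step}: level = {level:.1f}")
```

Every physiological system runs many such loops at once, and – as the nervous-and-circulatory example shows – the loops cross system boundaries, which is exactly what makes full understanding so elusive.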

I think you see the point. Even if you were to fully understand one of the systems – a feat in and of itself – it’s unrealistic to expect that anyone would fully understand every system and the interactions between them. There’s a great discussion of the fundamentals of systems in Thinking in Systems, and of problems made unsolvable by the inherent instability of systems – “wicked problems” – in Dialogue Mapping. We do a lot of writing about how systems are unknowable and uncontrollable – but we still expect that they are both knowable and controllable. (Something that is covered in more detail in The Black Swan and Antifragile.)

Systems are a simplification (something we humans are prone to make). They allow us to manage the fact that we can’t hold everything in our heads and simulate it all. We use our understanding to create schemas, which allow us to simplify our thinking into forms that (hopefully) we can manage. (See The Art of Explanation for more on schemas.)

Rational Decisions and Irrational Intuition

Gary Klein’s work with firefighters helped me see that not everything we know is rational and explicit. Instead, we have intuition that is developed from seeing things and making models in our thinking. (See Sources of Power and Seeing What Others Don’t for more on his work.) Works like The Paradox of Choice and Lost Knowledge helped me realize that teasing out tacit knowledge is difficult and potentially disruptive to the professional who holds it.

However, converting tacit knowledge into explicit knowledge dramatically increases its usefulness. Codifying what is and isn’t best practice through research and validation makes it possible to leverage the hard work and learning of a few and apply it to the many. In 2001, the research was published for a study that I supported. The Diabetes Advantage Program, as it was called, was a grand experiment to see whether a set of agreed-upon standards could inform the care of patients in a primary care setting. (The research was published as “A Systematic Approach to Risk Stratification and Intervention Within a Managed Care Environment Improves Diabetes Outcomes and Patient Satisfaction.”) My responsibility was to take the protocols that were finally agreed upon and put them into a system that would identify opportunities for improvements in care, which the primary care physicians could choose whether to implement. The system created a pretty report with model orders that the physician could accept or reject. The nurse typically made recommendations on the report before handing it to the physician, and, after appropriate consideration, the physician often signed it, allowing the nurse to complete the required orders.

The project was successful. I believe a large part of this was striking the right balance between physician intuition and systematic support to help the physicians make the right decisions based on the available research.
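
For readers who like to see the shape of such a system, here’s a minimal sketch in Python of the rule-based pattern described above. To be clear, this is not the actual Diabetes Advantage Program logic – the field names, thresholds, and rules are hypothetical stand-ins – but it illustrates the division of labor: codified rules surface opportunities, and the physician decides.

```python
# Hypothetical sketch of rule-based clinical decision support.
# Field names, thresholds, and rules are illustrative, not actual protocol values.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    a1c: float              # most recent HbA1c result (%)
    months_since_a1c: int   # months since that test was run
    on_statin: bool

@dataclass
class ModelOrder:
    description: str
    accepted: Optional[bool] = None   # None until the physician decides

def evaluate(patient: Patient) -> list[ModelOrder]:
    """Apply codified protocols and return suggested orders.

    The system only suggests; the physician accepts or rejects each order.
    """
    orders = []
    if patient.months_since_a1c >= 6:
        orders.append(ModelOrder("Order HbA1c test (last result over six months old)"))
    if patient.a1c >= 9.0:
        orders.append(ModelOrder("Consider therapy intensification (HbA1c >= 9%)"))
    if not patient.on_statin:
        orders.append(ModelOrder("Evaluate statin therapy per lipid protocol"))
    return orders

# The report a nurse might annotate and a physician might sign.
for order in evaluate(Patient(a1c=9.4, months_since_a1c=7, on_statin=False)):
    print(order.description)
```

The important design choice is that nothing here acts on its own – the output is a recommendation for a human to sign, which is exactly the balance the project struck.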

Accepting Input

Strangely enough, there’s a central paradox when it comes to physicians. They must acknowledge that they’re sometimes wrong – and simultaneously be confident that every decision they make is the right one. On the one hand, they know that they are just as fallible as anyone else. On the other hand, they must behave as if they know the right path forward. President Truman famously said, “Give me a one-handed economist” – one who couldn’t hedge with “on the other hand.” Dealing with uncertainty is never fun, and, when it comes to medicine, there’s always uncertainty.

The problem is that patients and their families want – or perhaps need – to feel like they’re doing the right thing. That means the physician must appear confident even when they’re not. You might protest that physicians sometimes offer options and admit they don’t know which way to go. Even then, they must appear confident in the diagnosis – or the possible diagnoses – and the list of options they put on the table.

Gawande’s next book, The Checklist Manifesto, speaks of the power gradient that exists in healthcare, with the physician leading and the rest of the team following. More importantly, he explains how the humble checklist and a prior agreement about how things will work are designed to subtly shift the balance of power back to a more neutral state where physicians – and particularly surgeons – are still in power but not so much that the rest of the team can’t verify and even question the course being charted.

Beyond the land of questioning what is currently going on and what is about to happen is a place where a physician can accept input about the case, the alternatives, and their performance. The morbidity and mortality conferences that are regularly held in most acute care settings are designed to gently remind physicians of the mistakes that are made and what can be done to mitigate them in the future. This is feedback from peers that isn’t intended to shame. Its design is to create the expectation that you’ll accept feedback – and somehow remain confident at the same time.

Great Idea, But You Go First

There are other conflicts in the medical system. We know that, in general, more experienced physicians have better outcomes. We also know that, to get that experience, they must have the chance to practice – thus some people must accept the care of relative novices so that those novices can learn. There are, of course, protections built into the system so that less experienced physicians are guided, mentored, trained, and supported by more experienced physicians. However, it’s not the same, and everyone knows it. When your loved ones go into the operating room, do you want the 20-year veteran or the resident? Most would say they want the experience.

That’s the rub in this conflict. We know that, for medicine to improve overall, physicians need to get more and better practice. However, when it comes to our loved ones, we get a bit squeamish. How would we feel if a mistake were made and there were consequences – including death – that could have been prevented?

Eliminating Humans

We know that human beings are finicky creatures. We have systematic biases that prevent us from seeing things clearly. (See Thinking, Fast and Slow and Predictably Irrational for more about our biases and their predictability.) For instance, Willpower explains that, if you are up for parole, you want a hearing in the morning instead of the afternoon, because it doubles your chances of parole. It shouldn’t matter whether your case is heard in the morning or the afternoon; after all, the facts are the same. However, somehow it does. Judges – who pride themselves on their ability to be impartial – feel and act differently in the morning than in the afternoon.

So, if humans are the problem, why not eliminate the humans? As my story above illustrated, I don’t believe that’s the answer. Instead, I believe the answer lies in supporting the humans with systems designed to identify for the physician what may be wrong or what may be indicated by the best research.

The EKG is a frequently used test to assess heart function. It works by measuring the electrical currents that drive the heart, producing the familiar bobbing line that we all expect to see thanks to television medical dramas. It’s also very difficult to read. However, with practice, experts can tell a healthy heart rhythm from one that is a signal for danger. The problem is that, even nearly 30 years ago, computers could do it better. Fed with enough data about what was right and what was wrong, a computer identified issues better than a top cardiologist – by 20 percent.
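
As a hedged illustration of what “fed with enough data” looks like, here’s a minimal Python sketch of training a classifier on labeled waveform features. Everything in it is an assumption for illustration – the features, values, and model choice are synthetic stand-ins, and the study Gawande describes used its own methods – but the workflow is the same idea: labeled examples in, a learned discriminator out.

```python
# Illustrative only: separating "normal" from "abnormal" waveform features.
# The data is synthetic; a real EKG system would use validated recordings
# and far richer features than these three numbers.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Fake feature vectors: heart rate, QRS duration (ms), ST-segment deviation.
normal = rng.normal(loc=[70, 90, 0.0], scale=[8, 10, 0.05], size=(500, 3))
abnormal = rng.normal(loc=[95, 130, 0.2], scale=[15, 20, 0.10], size=(500, 3))

X = np.vstack([normal, abnormal])
y = np.array([0] * 500 + [1] * 500)   # 0 = healthy rhythm, 1 = danger signal

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The point isn’t the particular model; it’s that once enough right and wrong answers are codified, a machine can apply them tirelessly and consistently.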

Even though nearly 30 years have passed since that study was published, and newer research supports that computers can evaluate various data sets, including EKGs, with better detection accuracy, they’re rarely used. Even devices that can continuously monitor patients, and devices designed for patients to purchase themselves (e.g., Kardia), aren’t in widespread use.

We’re resistant to the idea that computers can do our jobs better than we can – even when there’s evidence that this is the case. We can’t accept that we’re turning our fate over to a machine – and we shouldn’t. However, at the same time, it’s foolish not to leverage the tools that we have to improve our own performance. Instead of eliminating the humans, perhaps we can find strategies that allow the humans to focus on the things that are consistent with our unique capacities – things machines currently cannot, and may never, do. “Doctors,” comments Gawande, “can be stubborn about changing the way we do things.” That stubbornness can be an asset, fending off new fads and lending the confidence to keep doing what works – but it can have tragic consequences when it prevents us from moving forward.

We Know So Little

Gawande moves from topic to topic like a gazelle, using stories to quickly explain how poorly we understand pain or vomiting before moving on to more ethical issues, like the degree of control a patient should have in their care. In all of it, there’s the clear sense that we know so little. We can only see into the forest as far as our flashlight will shine. Technology may give us stronger and stronger flashlights, but, fundamentally, we will always only see so far.

Behind every certainty, there seem to lurk complications. Behind every diagnosis of a cause of death is an autopsy ready to contradict it. (In 40% of cases, research says.) In life and in medicine, it seems like we should be prepared for Complications. Maybe reading a surgeon’s notes on an imperfect science can help.
