
Book Review – SuperCooperators: Altruism, Evolution, and Why We Need Each Other to Succeed

It’s time to connect the dots.  SuperCooperators: Altruism, Evolution, and Why We Need Each Other to Succeed is the final missing piece that connects Richard Dawkins’ The Selfish Gene, The Evolution of Cooperation, Does Altruism Exist?, and Adam Grant’s Give and Take.  It’s the bit that explains how givers – cooperators – can end up at both the top and the bottom.  It’s the part that explains how defectors can get the best of cooperators – or be rooted out by them, depending upon the conditions.

Mutation and Selection

All the way back to Darwin, we’ve believed that the survival of the fittest driving evolution rests on a pair of twin ideas.  On the one hand, we have mutation – changes from a single standard into multiple variants.  On the other hand, we have selection, pruning away the variants that aren’t the most adaptive and effective in a given environment.  Where mutation diverges, selection converges.  It’s an elegant expression of a fascinatingly complex process that plays out over generations – but it’s incomplete.  If we’re left with only these two forces, then we’re stuck with Dawkins’ Selfish Gene.  There’s no room for cooperation.

That’s why we need to accept cooperation as a third principle alongside the first two.  It drives evolution as well, but in a subtler way.


Survival of the Fittest Group

To explain how evolution might have favored cooperators, we’ve got to think on multiple scales.  We must think in terms of groups of cooperators succeeding or failing.  We start with the prisoner’s dilemma and the understanding that the best overall scenario is for both parties to cooperate with each other.  From there, we must admit that the defector has the upper hand when dealing with a cooperator.  Left unchecked, defectors will eventually come to dominate a group unless they’re detected and removed by other means.

Consider two groups: one consisting mostly of cooperators, where the defectors have largely been discovered and removed (expelled); and another, where the cooperators never developed this capacity and were therefore all but eliminated.  The group with an abundance of cooperators will likely win a competition against the group of defectors because of its enhanced productivity and capacity.  It’s a case of to the victor go the spoils.  (See Human Capital for more.)

Detection

This, of course, relies on the idea that the cooperators have learned how to detect cheating.  As I mentioned in Does Altruism Exist?, the odds of learning to detect defectors may be long, but they’re not impossible.  There are two ways that this detection can function: the first is memory, and the second is reputation.  Direct reciprocity requires that players remember who has defected on them and who has not, so they can predict whether the other person will defect again.

Reputation requires a social capacity where someone can learn about another’s standing – that is, the aggregate of their interactions with others.  If I can assess reputation, then I can use it as a proxy for my prediction of the other person’s behavior.
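As a rough sketch of the difference between the two mechanisms (my own illustration, not code from the book), direct reciprocity consults your own memory of a partner, while reputation aggregates what you’ve heard about interactions you never saw.  The names and scoring rule here are purely hypothetical:

```python
from collections import defaultdict

class Player:
    def __init__(self, name):
        self.name = name
        self.memory = defaultdict(list)      # direct reciprocity: my own history with each partner
        self.reputation = defaultdict(int)   # reputation: what I've heard about others

    def record(self, partner, cooperated):
        # Remember how this partner treated me directly.
        self.memory[partner].append(cooperated)

    def hear_gossip(self, about, cooperated):
        # Reputation aggregates interactions I never witnessed myself.
        self.reputation[about] += 1 if cooperated else -1

    def will_cooperate_with(self, partner):
        history = self.memory[partner]
        if history:                           # I've dealt with them before: trust my memory
            return history[-1]
        return self.reputation[partner] >= 0  # otherwise, fall back on their reputation

alice = Player("Alice")
alice.record("Bob", cooperated=False)         # Bob defected on Alice directly
alice.hear_gossip("Carol", cooperated=True)   # Alice knows Carol only by reputation
print(alice.will_cooperate_with("Bob"))       # False: memory says he defected
print(alice.will_cooperate_with("Carol"))     # True: her reputation is good
```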

It’s important to pause here to say that these reputational forces are woven deeply into humans.  They’re at the heart of Diffusion of Innovations and the power of social marketing (see Guerrilla Marketing and The New Rules of Marketing and PR).  Since we’re using this information to predict behavior, we can’t ignore people’s ability to manipulate our prediction processes, as explained in Predictably Irrational, Noise, The Hidden Persuaders, and Influence.  Detection is hard, because defectors get better at hiding their defection.

Punishment

However, there’s another evolutionary issue that must be addressed: once a defector has been detected, they must be punished.  In the indirect sense, their reputation does that – it prevents them from taking advantage of others – but that’s not enough.  Here, the research around the ultimatum game is instructive.  Two people are given $10 to split.  The first one gets to determine the split, and the second one decides whether both parties will – or will not – receive the money.  Consistently, when the first person splits the money unevenly, at about 7/3 or 8/2, the second person decides to punish the first’s greediness by denying both of them the money.

From a strictly economic standpoint, this makes no sense.  However, it makes perfect sense from an evolutionary perspective, where defectors – the greedy – need to be taught a lesson.  It’s generally accepted that the ratio of the punishment to the cost borne by the punisher needs to be about 3:1 – which lies between these two splits.
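To make that arithmetic concrete, here’s a quick sketch (my own, assuming the $10 stake described above) of the punishment-to-cost ratio for each split:

```python
# When the responder rejects an offer, the proposer loses their share (the punishment)
# and the responder loses theirs (the cost of punishing).
for proposer_share, responder_share in [(8, 2), (7, 3)]:
    ratio = proposer_share / responder_share
    print(f"{proposer_share}/{responder_share} split rejected -> punishment:cost = {ratio:.2f}:1")

# 8/2 split rejected -> punishment:cost = 4.00:1
# 7/3 split rejected -> punishment:cost = 2.33:1
# The roughly 3:1 threshold falls between the two.
```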

With detection and punishment, we have the possibility of preventing defectors from overtaking a generally generous group.  Vengeful punishment can pave the road to amicable cooperation.

The Makeup of Groups

This all presumes something about the makeup of groups.  First, there must be groups rather than one big mass of interacting actors.  Second, the groups must be sufficiently long-lived to allow memory and reputation to take hold and defectors to be punished appropriately – that is, until they change their ways or are expelled from the group.  The key here is that cooperators need to be able to defend themselves against the defectors.

It’s also necessary to recognize that being a defector isn’t necessarily a persistent trait.  Punishment from the cooperators can convert a defector into a cooperator – exactly as we’d expect from behavior modification through reinforcement.

Mistakes and Generosity

The problem in any real-world situation is that we can never be sure of the other person’s intent, nor can we always assign reputation to the right parties.  We attribute character flaws to the other party when they were simply making a mistake or still learning.  In short, in the real world, we have to tolerate mistakes – our own and the other party’s.  As it turns out, even in the purity of computer simulations, you’re better off occasionally forgiving an offense.  Generous tit-for-tat beats tit-for-tat and other strategies, because it occasionally forgives someone who defects against it.  It never defects on a cooperator but will occasionally forgive a defector.

The simulation result is that this prevents “death spirals,” where, after a single slip, the two programs lock into retaliation, alternating between cooperating and defecting.  By occasionally giving an extra bit of trust, generous tit-for-tat stops the cycle and allows both parties to get the greatest benefit.
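Here’s a minimal simulation sketch along those lines (my own illustration, not code from the book); the noise rate and the forgiveness probability are assumed parameters chosen just to show the effect:

```python
import random

# Prisoner's dilemma payoffs: mutual cooperation 3, mutual defection 1,
# defecting against a cooperator 5 (the cooperator gets 0).
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strategy, rounds=10_000, noise=0.01, seed=42):
    """Play a strategy against itself with occasional accidental defections."""
    rng = random.Random(seed)
    last_a, last_b = 'C', 'C'
    total = 0
    for _ in range(rounds):
        move_a, move_b = strategy(last_b, rng), strategy(last_a, rng)
        # Noise: an intended cooperation occasionally comes out as a defection.
        if move_a == 'C' and rng.random() < noise:
            move_a = 'D'
        if move_b == 'C' and rng.random() < noise:
            move_b = 'D'
        pa, pb = PAYOFFS[(move_a, move_b)]
        total += pa + pb
        last_a, last_b = move_a, move_b
    return total / (2 * rounds)   # average payoff per player per round

def tit_for_tat(opponent_last, rng):
    return opponent_last          # simply copy the opponent's previous move

def generous_tit_for_tat(opponent_last, rng, forgiveness=0.3):
    # Like tit-for-tat, but forgive a defection some fraction of the time.
    if opponent_last == 'D' and rng.random() < forgiveness:
        return 'C'
    return opponent_last

print("tit-for-tat:         ", play(tit_for_tat))
print("generous tit-for-tat:", play(generous_tit_for_tat))
```

Even with a 1% error rate, the generous variant should recover from a slip and keep the average payoff near full cooperation, while plain tit-for-tat keeps echoing the mistake back and forth.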

Simple Math, Complex Concept

The simulations and the work on mathematical formulas revealed one consistent truth: cooperation will flourish when – and only when – the benefit of cooperating divided by its cost is greater than one plus the group size divided by the number of groups.  Let’s tear that apart.
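Written out, with b for the benefit of cooperating, c for its cost, n for the group size, and m for the number of groups, the rule is:

b / c > 1 + n / m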

The ratio of benefit to cost must be greater than one as a baseline; there has to be some innate value in cooperating in the first place.  In a traditional prisoner’s dilemma, with the payoff matrix below, the ratio of benefits to cost works out to 1.2.  That comes from comparing the 12 total years in prison (for both parties, summed over both of the other party’s possible choices) that go with defecting against the 10 total years that go with cooperating.

                 Other cooperates    Other defects
You cooperate    2 / 2               5 / 1
You defect       1 / 5               3 / 3

(Years in prison, shown as your years / the other party’s years.)

Plugging that 1.2 into the rule, cooperation should flourish when the ratio of group size to number of groups is less than 0.2.
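A quick sketch of that arithmetic (my own check, using the payoff matrix above and the rule as stated):

```python
# Years in prison, keyed by (my move, other's move) -> (my years, other's years).
YEARS = {('C', 'C'): (2, 2), ('C', 'D'): (5, 1),
         ('D', 'C'): (1, 5), ('D', 'D'): (3, 3)}

def total_years(my_move):
    # Sum the years for both parties over both of the other party's possible choices.
    return sum(sum(YEARS[(my_move, other)]) for other in ('C', 'D'))

benefit_to_cost = total_years('D') / total_years('C')   # 12 / 10 = 1.2
print("benefit/cost:", benefit_to_cost)

# The rule b/c > 1 + n/m rearranges to n/m < b/c - 1.
print("group size / number of groups must be below", benefit_to_cost - 1)   # 0.2
```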

While all of this is quite abstract, it says that when group sizes are small, and there are many groups, the benefits of cooperation will likely cause it to flourish – in part because finding defectors is easier and because there are opportunities for inter-group competition.

Virus in our Genes

Evolution isn’t tidy.  In fact, it’s quite messy.  If we go back to the primordial soup that existed on the planet Earth, there were plenty of building blocks from which things could start to replicate into patterns – that is, until those building blocks were consumed.  That required a different kind of replication approach – one that was more complicated.

The line between inanimate matter and animate life in the history of replicators isn’t clear.  However, we do know that the formation of the programming language of genetics – RNA and eventually DNA – crossed over into the realm of individual cells, which contained all the pieces they needed to replicate on a whole new level.  The leap to this level required several different replicating molecules to come together and work together, and we’re not exactly clear how that happened.

It’s presumed that multiple replicators found themselves working together – because cooperation was good for their ability to survive and continue replicating.  These eventually became enclosed inside a membrane in what we would today think of as a cell.  While we think of viruses as invaders today, it could be that these very same chunks – or ones just like them – became a part of us and the rest of the animal kingdom.

Bacteria in our Bodies

Most of the cells on the planet – and even in our bodies – are bacteria.  The truth is that our bodies aren’t purely human.  Our bodies are constantly trying to keep the bacteria in check in a delicate dance of cycles, rhythms, and defenses.  This is one of the reasons why the tendency of stress to turn down or turn off our immune system often spells disaster.  When the natural systems that help us maintain the balance get out of whack, it’s very difficult for us to recover.  (See Why Zebras Don’t Get Ulcers for more on the impacts of stress.)

Many of the most challenging diseases of our times are those that are classified as autoimmune.  Those are the ones where our own immune system starts attacking parts of the body, and the results are devastating.

Optimum Mobility

One of the challenges in determining the success of cooperators is their ability to address or avoid defectors, and that requires a level of mobility that is neither too low – where they’re trapped – nor too high – where they cannot discover who the defectors are.  There’s a delicate balance between too much and too little.  It’s much like Richard Hackman’s explanation in Collaborative Intelligence of the need for groups to have a certain level of permeability – neither too little nor too much.

Levels of Religion

One of the most fundamental premises here is that evolution operates at multiple levels.  Cooperation is beneficial, so it’s no surprise, given Richard Dawkins’ discussion of memes in The Selfish Gene, that the world’s religions are by and large recipes for creating greater cooperation.  They encourage us to work together and help us become better SuperCooperators.
