What does a Michigan computer modeller have to say to superpowers, the UN and warring Yugoslav factions? Quite a lot, it turns out – and it's not all good. Peter Hulm reports on the work of Robert Axelrod.
Robert Axelrod's The Evolution of Co-operation (1984) was one of the classics of Cold War political science.
The Michigan professor of political science found himself on the US National Academy of Sciences Committee on International Security and Arms Control, working with a counterpart Soviet group to explore new initiatives.
"One of its primary motivations was to help promote cooperation between the two sides of a bipolar world," he explains. "My hope was that a deeper understanding of the conditions that promote cooperation could help make the world a little safer."
"On the Soviet side," he recalls, "several senior defence intellectuals and scientists involved in arms-control policy reported that they read the book with interest, and had passed it around to their friends."
A political science colleague told Axelrod his book helped her negotiate with her husband during their divorce.
Then in 1987 a biologist showed how sticklebacks used the strategy Axelrod outlined to achieve cooperation with other fish. It was a finding that struck at the heart of biological thinking of the time. E.O. Wilson's fashionable theories of sociobiology suggested we must look in our genes for the guide to our behaviour. Axelrod offered a strategic rather than genetic explanation.
In 1988 and 1989 Axelrod took part in meetings on conflict reduction in Uzbekistan and Estonia. In 1990 he received the NAS's new Award for Behavioral Research Relevant to the Prevention of Nuclear War.
Then in 1995, as Yugoslavia fell apart, he addressed representatives of the warring factions at the invitation of the United Nations.
The reason for this sudden fame and personal recognition is what Axelrod did with a classic conundrum in game theory known, appropriately for the Cold War days, as the Prisoner's Dilemma: what should you do if you are one of two friends arrested and held incommunicado, then told you will get a reduced sentence by confessing to a crime first and implicating your friend, but double the punishment if your friend caves in before you?
Axelrod extended this conundrum to explore answers to the question: when should a person co-operate in a relationship?
This is known as the iterated Prisoner's Dilemma because the same challenge faces both people repeatedly.
"Should a friend keep providing favours to another friend who never reciprocates?" Axelrod asked. "How intensely should the United States try to punish the Soviet Union for a particular hostile act, and what pattern of behaviour can the United States use to best elicit co-operative behaviour from the Soviet Union?"
The answer, surprisingly, was "TIT FOR TAT": "cooperating on the first move and then doing whatever the [other] did on the previous move."
The originality of Axelrod's procedure was to challenge economists, psychologists, sociologists, political scientists and mathematicians to develop a general rule. "To my considerable surprise, the winner was the simplest of all the programs submitted, TIT FOR TAT."
The mathematician Anatol Rapoport of the University of Toronto won the tournament with it. The strategy was then tested in a second round, which attracted 62 entries from computer hobbyists, evolutionary biologists, physicists and computer scientists. All failed to beat Rapoport's formula.
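The mechanics of the tournament are simple enough to sketch in a few lines. This is a toy reconstruction, not Axelrod's tournament code; the payoff values (T=5, R=3, P=1, S=0) are the ones conventionally used for the Prisoner's Dilemma, and the strategy names here are illustrative.

```python
# Conventional Prisoner's Dilemma payoffs: (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the other's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """A hypothetical all-out defector, for comparison."""
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Total scores for both players over repeated rounds."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Against another TIT FOR TAT, both sides collect the full mutual reward; against an unconditional defector, TIT FOR TAT loses only the first round and then matches defection, so it can never be beaten by more than one round's worth of points.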
Axelrod extended his research to how co-operation emerges without a central authority, then collaborated with a biologist on the implications for the life sciences, producing a paper that won a prize from the American Association for the Advancement of Science.
He reported on co-operation that developed without friendship or foresight in the trenches of World War I and showed how it operated among bacteria.
He also set out five recommendations to promote cooperation:
1. Enlarge the shadow of the future: that is, extend the relationship.
Why? To reduce the risks of exploitation of your cooperation in short-term relationships. "As the shadow of the future becomes smaller, it stops paying to be cooperative with another player – even if the other player will reciprocate your cooperation."
2. Change the payoffs (even prisoners arrested once might not confess if they are part of a gang that punishes defections). This is part of what governments do when they pass laws to punish those who do not pay their taxes or honour contracts.
3. Teach people to care about each other: "Without doubt, a society of [...] caring people will have an easier time attaining co-operation among its members, even when caught in an iterated Prisoner's Dilemma."
4. Teach reciprocity. But note that this is not the same as the Golden Rule of "do unto others as you would have them do unto you", which seems to recommend always cooperating.
5. Improve recognition abilities. Birds, he noted, can develop non-conflictual relations with several other birds because they can distinguish individual birds within a group by their songs.
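The first recommendation can be illustrated with a little discounted-payoff arithmetic. This is a sketch, assuming the conventional payoffs (T=5, R=3, P=1, S=0) and a discount factor w giving the weight of each successive move; the specific numbers are assumptions, not figures from the article.

```python
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

def cooperate_forever(w):
    """Discounted payoff of unbroken mutual co-operation: R every move."""
    return R / (1 - w)

def defect_against_tft(w):
    """Defecting forever against TIT FOR TAT: one temptation payoff T,
    then mutual punishment P on every later move."""
    return T + w * P / (1 - w)

def cooperation_pays(w):
    """True when the shadow of the future is long enough for
    co-operation to beat permanent defection."""
    return cooperate_forever(w) >= defect_against_tft(w)
```

With these payoffs, co-operation pays whenever w is at least 0.5: when the future weighs heavily, co-operating beats grabbing the one-off temptation; as the shadow of the future shrinks, defection starts to pay, just as Axelrod warns.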
The Evolution of Co-operation was the sort of book you loaned out and never got back. Richard Dawkins, biologist author of The Selfish Gene, praised it for showing how "deep selfishness, pitiless indifference to suffering, ruthless heed to individual success" could lead to "something that is in effect, if not necessarily in intention, close to amicable brotherhood and sisterhood".
He advised anyone who would listen: "The world's leaders should all be locked up with this book and not released until they have read it."
For those who wanted morally based foreign policy, however, TIT FOR TAT was shocking. It had, Axelrod admits, a "slightly unsavory taste" because the strategy was so close to an eye for an eye. For others, it offered the bracing prospect of an analysis into which no moral issues were injected.
It also offered a foreign policy rationale that was not based on getting an advantage over others. In the tournaments, the strategy never won more points for its user than the other player. "Indeed, it can't possibly score more than the other player in a game because it always lets the other player defect first, and it will never defect more times than the other player does. It won, not by doing better than the other player, but by eliciting cooperation. [...] TIT FOR TAT does well by promoting the mutual interest rather than by exploiting the other's weakness."
And the lesson for moralists was finally cheering: "A moral person couldn't do much better."
Nevertheless, TIT FOR TAT had some built-in limitations. In situations where a central authority could enforce community standards, there are alternatives to TIT FOR TAT, and once a feud starts it can continue indefinitely. In the real world, Axelrod recommended looking at a more forgiving formula than an eye for an eye.
He even offered four simple suggestions for doing well in TIT FOR TAT situations:
1. Don't be envious.
2. Don't be the first to defect.
3. Reciprocate both co-operation and defection (you choose the rate).
4. Don't be too clever.
Another shortcoming of TIT FOR TAT is that the strategy applies only in a two-person game, whose relevance to real life is not always apparent. Axelrod knew this and used it to provoke ideas rather than prove theories about how processes work.
One of the major questions the formula failed to tackle was how to read the signals from the other side. Or as Axelrod put it: "What happens when a player might misunderstand what the other did on the previous move or might fail to implement the intended choice?"
Some theorists said strategists should be generous. Others said players should demonstrate contriteness (a variation of the earlier concern with automatic response). Still others suggested switching strategies to reward and punishment.
Running his computer models with a Chinese postdoctoral student, Axelrod showed that generosity or contrition worked better than a Pavlovian strategy.
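The effect of noise can be sketched in a few lines. This is a toy illustration, not Axelrod's models: the 10 per cent noise rate and the 30 per cent forgiveness rate are assumed values chosen only to show why plain TIT FOR TAT suffers when moves can be misimplemented.

```python
import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(their_history, rng):
    return 'C' if not their_history else their_history[-1]

def generous_tit_for_tat(their_history, rng):
    """Like TIT FOR TAT, but forgives a defection 30% of the time
    (an illustrative rate, not a recommended one)."""
    if not their_history or their_history[-1] == 'C':
        return 'C'
    return 'C' if rng.random() < 0.3 else 'D'

def noisy_match(strategy, rounds=2000, noise=0.1, seed=0):
    """Average per-move score when two copies of `strategy` play each
    other and each intended move is flipped with probability `noise`."""
    rng = random.Random(seed)
    hist_a, hist_b = [], []
    total = 0
    for _ in range(rounds):
        move_a = strategy(hist_b, rng)
        move_b = strategy(hist_a, rng)
        if rng.random() < noise:  # misimplementation of player A's choice
            move_a = 'D' if move_a == 'C' else 'C'
        if rng.random() < noise:  # misimplementation of player B's choice
            move_b = 'D' if move_b == 'C' else 'C'
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        total += pay_a + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total / (2 * rounds)
```

A single accidental defection sends two plain TIT FOR TAT players into a long echo of alternating retaliation; the generous variant breaks the echo and recovers mutual co-operation, so its average score under noise comes out higher.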
Perhaps the key problem with TIT FOR TAT was the reverse side of its advantages: its mathematical underpinning. True, this showed how a simple formula could beat all other strategies in repeated tournaments, and indicated how cooperation could develop without anyone taking any conscious decisions or developing a long-term strategy, no matter what political journalists say about the problems of "failed states."
But TIT FOR TAT's emphasis on mathematical/logical/rational "motivations" ignored the social issues that Axelrod remembers running up against at 14 in carrying out a school project on cross-training of police officers and firefighters in Evanston, Illinois.
Told by a firefighter at the local station, "I don't know much about it, but we're all against it down here," Axelrod recalls, "the response made a deep impression on me: it was possible to have an opinion based upon social influence rather than independent analysis."
Facing up to these issues has been a major stream of Axelrod's work since 1984. He developed a "norms game" for more than two players that allowed them to punish individuals who did not cooperate. "It turned out that another twist was needed, lest all the cooperators be tempted to let someone else be the one to bear the costs of disciplining the non-cooperators," Axelrod reports. He set off on a wide-ranging study on how to promote norms.
Working on his dissertation in the late 1960s, Axelrod was struck that Italian coalition parties wanted to work with others who were similar to themselves. Two decades later, this suggested another form of cooperation: "choosing sides based upon affinity rather than strategic advantage." The model he developed with a graduate student predicted both how European countries aligned in World War II and how computer companies took sides in developing standards for the UNIX operating system.
He also grew interested in how cooperators can be willing to give up almost all their independence – whether multicellular organisms or large businesses. Why people become more alike so that they find it easier to work together also puzzled him, leading to a study of social influence and the emergence of shared culture.
Axelrod, who spent his honeymoon in Yugoslavia in 1982, developed a "protection racket" model (my term) for studying groupings that live by extortion (perhaps a sign of the times).
In this "tribute model" (his term) players with the same amount of initial resources can basically tell neighbours: pay or fight. They then collect tribute or fight and win or lose resources (the computer rule is that a player will resist if it would cost less than paying the extortionist).
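The pay-or-fight choice can be sketched in a few lines. The numbers here – a demand of 250 and battle damage equal to a quarter of the attacker's resources – are illustrative assumptions for this sketch, not necessarily Axelrod's exact parameters.

```python
TRIBUTE = 250        # the tribute demanded (assumed value)
DAMAGE_RATE = 0.25   # damage as a fraction of the attacker's resources (assumed)

def fight_cost(own_resources, attacker_resources):
    """Damage suffered if the target resists: proportional to the
    attacker's strength, capped at everything the target has."""
    return min(own_resources, DAMAGE_RATE * attacker_resources)

def target_pays(own_resources, attacker_resources):
    """The computer rule from the article: the target pays tribute
    only when fighting would cost more than paying."""
    cost_of_paying = min(own_resources, TRIBUTE)
    return cost_of_paying < fight_cost(own_resources, attacker_resources)
```

Under this rule a weak target facing a strong extortionist pays up, while a target facing a weaker challenger fights: `target_pays(1000, 2000)` is true, `target_pays(1000, 400)` is false.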
Clusters of associated players then develop around those who demand tribute and those who pay. What Axelrod found is that quite different histories resulted from simulations run under the same conditions: from no fights after 200 of the 1,000 years, to a major collapse into multiple battles after the loss of resources in one run out of five, with between two and five dominant actors emerging from the original ten players.
It's a result that should make those who see inevitability and superpower domination as the key elements of world history think twice about their assumptions.
What else did he find?
Once he developed this model, the most complex of all his simulation programs, Axelrod tweaked it to answer some "what if" questions.
He found that the model does not settle down even after thousands of years of simulation: "Even in runs up to 10,000 years there are repeated large wars, and strong actors continue to rise and fall."
He also found some better decision rules. By adding a constraint that players would not make demands of a target that would find it cheaper to fight than pay (is this the Yugoslav situation?), he found that players using the rule did better than the others; and if everyone except one player uses the rule, the hold-out fares worse than the others (perhaps an explanation for the growth of non-violent relations).
When everyone can interact with everyone else, however, there tends to be one dominant actor (as in the growth of 19th century sea power, Axelrod suggests). And if demands for tribute are made without regard to wealth, that is – if everyone fights (Yugoslavia again?) – two distinct clusters develop, one with almost all the wealth. But if two adjacent players just make a 10 percent commitment to each other, they develop complete commitment (if they have no others), and both tend to prosper (as Ricardo suggested in economics).
Axelrod's toughest battle to get his ideas across came with his simplest model of social interaction. He tried to study how culture spread on the basis of a standard theory about diffusion of innovations: that transfer of ideas occurs most frequently between individuals who are similar in certain attributes such as beliefs, education, social status and the like.
This might seem unexceptional stuff. Nevertheless, his simulation of the idea that cultural features will spread from one individual or group in relation to the number of features they already have in common offered some surprising results (so surprising that Axelrod thought he might have made a computer programming error).
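The mechanism described above can be sketched compactly: each site on a grid holds a list of cultural features, neighbours interact with probability equal to the share of features they already have in common, and an interaction copies one differing feature across. The grid size, five features and ten traits per feature below are illustrative choices for this sketch, not a claim about Axelrod's published runs.

```python
import random

SIZE, FEATURES, TRAITS = 10, 5, 10  # illustrative parameters

def similarity(a, b):
    """Share of cultural features two sites have in common."""
    return sum(x == y for x, y in zip(a, b)) / FEATURES

def step(grid, rng):
    """One interaction: a random site may adopt a feature of a neighbour,
    with probability equal to their existing similarity."""
    x, y = rng.randrange(SIZE), rng.randrange(SIZE)
    dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    nx, ny = x + dx, y + dy
    if not (0 <= nx < SIZE and 0 <= ny < SIZE):
        return
    site, neighbour = grid[x][y], grid[nx][ny]
    if rng.random() < similarity(site, neighbour):
        differing = [i for i in range(FEATURES) if site[i] != neighbour[i]]
        if differing:
            i = rng.choice(differing)
            site[i] = neighbour[i]

def run(steps=200_000, seed=0):
    """Start from random cultures and count distinct cultures at the end."""
    rng = random.Random(seed)
    grid = [[[rng.randrange(TRAITS) for _ in range(FEATURES)]
             for _ in range(SIZE)] for _ in range(SIZE)]
    for _ in range(steps):
        step(grid, rng)
    return len({tuple(grid[x][y]) for x in range(SIZE) for y in range(SIZE)})
```

Note the twist that makes the model interesting: sites with nothing in common never interact at all, so complete local convergence and lasting global polarization can emerge from the same rule. Counting distinct cultures, as `run` does, is a rough proxy for counting the uniform regions the article describes.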
It also gave Axelrod numerous problems in getting it published by political scientists, sociologists or even conflict resolution specialists. One political science reviewer complained of Axelrod's model: "No one makes choices. No one seeks to influence anyone else. Change has no costs, politically or economically. Cultural change occurs all-together in a community, with no leaders or laggards. In sum, politics are absent. It is [a model] that political scientists were educated to hate."
For Axelrod, of course, this was the whole interest of the simulation. Almost all previous models, mathematical or sociological, treated each feature of a culture independently of the others. He also worked on a model that did not require top-down cultural enforcement (such as the church, Webster or Napoleon).
Unfortunately his results did not produce more encouraging forecasts. Starting with units that had little in common with their neighbours, he found them settling down into a few (usually three) uniform regions that had nothing in common with the others. In retrospect, the origins of these "stable regions" (I think uniform rather than stable is the best description of these regions) could be seen early on in the simulation.
However, it was impossible to predict which of the many cultural regions would survive to term. In 14 percent of the runs he ended up with a single uniform region, and in 10 percent the result was more than six.
The most surprising result was that large territories end up with fewer regions than moderate-sized territories, and this has nothing to do with the existence of boundaries.
Also, larger regions are more likely to "eat" smaller regions than the other way round (while in his earlier models, a small non-cooperative group can play havoc with cooperative arrangements unless the society practises TIT FOR TAT).
Even increasing the number of territories and the time scale of the simulation did not change the pattern: the number of uniform regions still declined, from 10,000 at the start down to two.
Most of the time the struggle was between many compatible cultures for survival within their individual zones. Likewise, even when two cultural practices are equally attractive, the less common one is likely to vanish over time.
This pessimistic result does not require any theory of conscious effort at cultural domination to produce its figures. It also offers a caution against arrogant assumptions about surviving cultural practices: "The mere observation that a practice followed by a few people was lost does not necessarily mean either that the practice had less intrinsic merit or that there was some advantage in following a more common practice."
Another major counter-intuitive result was that the number of uniform regions decreases with the greater variety of cultural features (i.e. some seem doomed to extinction).
Axelrod showed that polarization occurs even when the model's mechanism for change is convergence towards a neighbour. "Likewise, when cultural traits are highly correlated in geographic regions, one should not assume that there is some natural way in which those particular traits go together," he observes. "Intuition is not a very good guide to predicting what even a very simple dynamic model will produce."
Axelrod has finally solved another puzzle that has faced enthusiasts of his approach for the past 14 years. Within four years of publication, The Evolution of Co-operation had inspired more than 250 other works, yet Axelrod's own follow-up research was noted only in a 1988 paper. In fact, most of his articles appeared in widely varying professional journals and specialist books, and as he observes, few people keep up with cutting-edge biology, computer algorithms, management and political science journals all at once.
At last, Axelrod has brought his articles from 1986-1996 together. He has provided a beautifully organized introduction and commentary, and extensive help for those who want to do their own modelling. "It includes an analysis of strategies that evolve automatically, rather than by human intervention," he explains. "It also considers strategies designed to cope with the possibility of misunderstandings between the players or misimplementation of a choice. It then expands the basis of cooperation to more than a choice with a short-run cost and a possible long-run gain. It includes collaboration with others to build and enforce norms of conduct, to win a war or to impose an industrial standard, to build a new organization that can act on behalf of its members, and to construct a shared culture based on mutual influence. [...] It includes the conflicts between violators and enforcers of a norm, the threats and wars among nations, competition among companies, contests among organizations for wealth and membership, and competing pulls of social influence for cultural change."
The book is The Complexity of Cooperation: Agent-based Models of Competition and Collaboration (Princeton, 1997), ISBN 0-691-01567-8 (US$18.95, but cheaper at amazon.com). A website at the University of Michigan offers source code and documentation for the models (in Visual Basic and Pascal).
As the title indicates, Axelrod is now working with the models of behaviour that form the new mathematical theory of complexity (the Santa Fe model). In contrast to his earlier book, this collection is more technical, but as always, Axelrod writes a marvellously clear explanatory prose (as the quotations here show) and the math rarely gets in the way of an innumerate reader.
Short of hiring him to work on your own organization's approach to conflict and cooperation problems, you could get a long way towards working on the issues yourself with the inclusion of his "short course in agent-based modelling in the social sciences" in the book, and a guide to computer simulation of such modelling.
Axelrod even offers some exercises for extending the social influence model, such as looking at whether early geographic differences led to a North-South dichotomy (are things really different in the South?), and studying why Arabic numerals were more likely to be adopted by people using Roman numerals than the other way around. "I do not have good answers to the questions," he admits.
Robert Axelrod took his bachelor's degree in maths at Chicago and his Ph.D. in political science at Yale. He was named a MacArthur Foundation fellow.