Free Choice + Punishment = Cooperation
Making participation voluntary may discourage cheating
To get people to cooperate in a venture, make participation voluntary. That’s the advice from researchers whose recent study offers a solution to one of the oldest problems in game theory: How can cooperation develop if individuals can do better for themselves by cheating?
In a community garden, for example, the lazy gardener who does nothing may reap as big a share of the produce as the hardest worker.
Such antisocial behavior is reduced if cheaters face consequences. An industrious gardener may deny the slacker his share of the harvest, for example. But that raises another issue. Gardeners who pitch in but don’t punish freeloaders may get just as much produce as those who punish, without the risk and trouble of punishing someone.
Short-term self-interest seems to encourage an individual either to cheat or to cooperate but not to punish. In the long run, however, everyone is better off if most people both cooperate and punish. Then cheaters don’t profit, the burden of punishing is light, and many people reap the benefits of cooperation. The challenge is at the beginning: How can collective ventures get started if people can’t rely on one another to cooperate?
Karl Sigmund of the University of Vienna and his colleagues have now shown that if participation in a joint venture is voluntary rather than mandatory, cooperation is far more likely to emerge and persist. They published their findings in the June 29 Science.
Sigmund and his team created a computer simulation in which computer “agents” act as individuals trying to maximize their profits. Each agent begins with a pot of money and then receives a small fixed income at each step of the game.
Agents may either participate in a risky cooperative venture or sit out. In each round, every participating agent contributes a set amount to a common pool. The program then adds up the total, increases it by a certain percentage, and splits the money equally among the participants. The catch is that the simulation also allows agents to “cheat” by contributing nothing while still receiving a share of the pool. Another agent may punish the cheaters by forcing them to pay a fine to the computer. However, the agent imposing the fine incurs some expense in doing so.
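In Python-like terms, a single round of the game might look like the sketch below. The specific numbers, such as the contribution size, the pool multiplier, the fine, and the punisher’s cost, are illustrative assumptions, not values reported in the study.

```python
# Illustrative parameters; the study's actual values may differ.
CONTRIBUTION = 1.0   # amount each cooperator or punisher pays into the pool
MULTIPLIER = 3.0     # factor by which the program grows the common pool
FINE = 1.0           # amount a cheater loses to each punisher that fines it
FINE_COST = 0.3      # expense a punisher incurs for each fine it imposes

def play_round(strategies):
    """Return each agent's payoff from one round of the joint venture.

    strategies: a list of 'cooperator', 'punisher', 'cheater',
    or 'nonparticipant'. The small fixed income every agent
    receives at each step is handled outside this function.
    """
    payoffs = [0.0] * len(strategies)
    players = [i for i, s in enumerate(strategies) if s != 'nonparticipant']
    if len(players) < 2:              # no venture without co-players
        return payoffs

    # Cooperators and punishers contribute; cheaters contribute nothing.
    contributors = [i for i in players if strategies[i] != 'cheater']
    pool = MULTIPLIER * CONTRIBUTION * len(contributors)
    share = pool / len(players)       # cheaters still collect a share

    for i in players:
        payoffs[i] += share
        if strategies[i] != 'cheater':
            payoffs[i] -= CONTRIBUTION

    # Every punisher fines every cheater, at a cost to itself.
    punishers = [i for i in players if strategies[i] == 'punisher']
    cheaters = [i for i in players if strategies[i] == 'cheater']
    for p in punishers:
        for c in cheaters:
            payoffs[c] -= FINE
            payoffs[p] -= FINE_COST
    return payoffs
```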
At the beginning of the game, the researchers randomly assign each agent one of four roles: cheater, punisher, cooperator who doesn’t punish, or non-participant. At each new round, every agent either keeps its previous strategy or imitates other agents that are faring better, and occasionally the computer gives an agent a randomly chosen new strategy.
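Continuing the sketch above, and reusing its play_round function, the update step might look like the following. The copy-if-better imitation rule and the mutation rate here are simplifying assumptions, not the exact rule in the paper.

```python
import random

STRATEGIES = ['cooperator', 'punisher', 'cheater', 'nonparticipant']
MUTATION_RATE = 0.01   # assumed chance of trying a random new strategy

def update_strategies(strategies, payoffs):
    """Each agent keeps its strategy, copies a random agent that is
    faring better, or, rarely, switches to a random strategy."""
    new = list(strategies)
    for i in range(len(strategies)):
        if random.random() < MUTATION_RATE:
            new[i] = random.choice(STRATEGIES)     # occasional random switch
        else:
            j = random.randrange(len(strategies))  # pick a role model
            if payoffs[j] > payoffs[i]:
                new[i] = strategies[j]             # imitate the better earner
    return new

# Stringing the two pieces together for many rounds:
agents = [random.choice(STRATEGIES) for _ in range(100)]
for _ in range(10_000):
    agents = update_strategies(agents, play_round(agents))
```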
Over time, the researchers discovered, cheating becomes more and more prevalent and ruins the investment for everyone. Nearly all the agents stop participating.
But from this state of near-total non-participation, a few agents will occasionally begin to cooperate simultaneously, with no freeloaders. These groups start making more money than everyone else, and their success leads the non-participants to imitate their strategy. The small groups grow, producing a large group of punishers or a large group of non-punishing cooperators.
Big groups of non-punishing cooperators are an easy target for cheaters. One agent randomly tries cheating and makes a load of cash, and then other agents imitate the strategy, soon making it unprofitable for anyone to cooperate. But if the group consists primarily of punishers, an agent who tries cheating loses money to numerous fines, which discourages others from copying it. Groups with plenty of punishers therefore tend to be stable and long-lasting, because cheating never gains a foothold in them.
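A back-of-the-envelope calculation with the illustrative parameters above shows why. A lone cheater saves its contribution but forfeits a slice of the pool and pays one fine per punisher, so even a couple of punishers turn cheating into a losing move.

```python
def gain_from_cheating(n_players, n_punishers):
    """Extra payoff an agent gets by cheating instead of contributing,
    when every other participant contributes (parameters as above)."""
    honest = MULTIPLIER * CONTRIBUTION - CONTRIBUTION   # full share minus contribution
    cheat = (MULTIPLIER * CONTRIBUTION * (n_players - 1) / n_players
             - n_punishers * FINE)
    return cheat - honest

print(gain_from_cheating(100, 0))   # about +0.97: unpunished cheating pays
print(gain_from_cheating(100, 2))   # about -1.03: fines make it a losing move
```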
If participation were mandatory, the state of near-total non-participation could never occur. A small group of cooperators arising amid a sea of cheaters would be exploited before its success could spread, so it wouldn’t have enough influence to make cooperation the norm. The only way cooperation could evolve in that case would be for nearly all the participants to begin cooperating simultaneously, and that is very unlikely.
Sigmund says the study offers insight into the early evolution of cooperation. He is skeptical, though, that game theory can lead to new strategies with powerful applications. He chuckles at claims made during the 1950s that game theory could be used to win the Cold War. “What is most important,” he says, “is that this gives you insight into some elements of human behavior.”