Can the following be applied to the current revelations of the unregulated financial activities of Goldman Sachs...or is this just a simple matter of greed?
"To build a cooperative society, is it better to punish or reward?"
by
Lisa Zyga
April 19th, 2010
PhysOrg.com
One of the basic components of a functional, cooperative society is a code of law, where the laws are usually enforced by some kind of incentive. Social incentives can be either positive (rewards) or negative (punishments), and a society must decide which combination to use to achieve the greatest efficiency: the highest level of cooperation at the lowest cost. Using a game-theoretic model, a new study has analyzed this social dilemma to investigate how individuals are swayed by incentives, and how cooperation can emerge under various incentive strategies.
Christian Hilbe and Karl Sigmund, mathematicians from the University of Vienna, have published the study, called “Incentives and opportunism: from the carrot to the stick,” in a recent issue of the Proceedings of the Royal Society B. Overall, their results show how a population can evolve to become dominated by individuals who cooperate by default (that is, they cooperate unless they know they can get away with uncooperative behavior) when faced with negative incentives.
As the researchers explain in their study, the efficiency of the two types of incentives, measured as a benefit-to-cost ratio, depends on the circumstances. In a society where most people cooperate, it is costly to reward them all, while a society in which most people defect would pay a high price for trying to punish them all. So the obvious way to transform an uncooperative population into a cooperative one would be to first provide positive incentives, and later punish the few remaining individuals who refuse to be swayed.
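As a back-of-the-envelope illustration of this trade-off, the price of each policy simply scales with how many people it must target. The per-person costs and population size in this sketch are invented for illustration, not taken from the study:

```python
# Illustrative only: compare the cost of rewarding every cooperator
# versus punishing every defector as cooperation spreads.
# REWARD_COST, PUNISH_COST, and POPULATION are made-up values.
REWARD_COST = 1.0   # cost of rewarding one cooperator
PUNISH_COST = 1.0   # cost of punishing one defector
POPULATION = 100

for coop_fraction in (0.1, 0.5, 0.9):
    cooperators = coop_fraction * POPULATION
    defectors = POPULATION - cooperators
    print(f"{coop_fraction:.0%} cooperation: "
          f"rewarding all costs {cooperators * REWARD_COST:.0f}, "
          f"punishing all costs {defectors * PUNISH_COST:.0f}")
```

At 10% cooperation, rewarding is cheap (10) and punishing expensive (90); at 90% cooperation the figures reverse, which is exactly why the reward-first, punish-later sequence is attractive.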
“In the last 10 years, there has been an intensive discussion about whether and how (human) cooperation can be promoted by offering incentives,” Hilbe told PhysOrg.com. “Especially the effect of punishment is heavily disputed; some researchers argue that the extensive use of punishment could lead to a downfall of overall welfare (for example, as punishment might provoke counter-punishment). Our study is one of the first examining the interplay of both types of incentives. We found that opportunism makes both types of incentives profitable, but they have different effects. In our model, rewards are very effective in increasing cooperation but, ironically, increased cooperation makes rewards expensive. At some point punishment might be more efficient.”
The researchers capture this dynamic in a game broadly similar to the Prisoner’s Dilemma or the ultimatum game, except that here only the first player chooses whether to cooperate or defect, while the second player chooses how to respond with incentives, and each player receives a corresponding pay-off. More specifically, the first player can choose one of four strategies: always cooperate (cooperation comes at a small cost), always defect, cooperate unless they know they can defect without being punished, or defect unless they know that their co-player rewards cooperation or punishes defection. The last two strategies are opportunistic, meaning that players use them to take advantage of a possible incentive, regardless of whether cooperating or defecting is what earns it. The second player then responds with one of four strategies: offer no incentive, use only punishment, use only rewards, or use both incentives. In any interaction between two random players, there is only a limited probability that player one knows player two’s strategy.
The pay-off values are arranged so that the first player gains the most by being rewarded for cooperating. And although the second player gets a slight benefit from rewarding cooperation, they gain even more if the first player cooperates for no reward (which can occur because the first player does not always know whether their cooperation will be rewarded).
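A rough sketch can make this setup concrete. The structure below follows the article's description, but the numeric parameters (B, C, BETA, GAMMA, P_INFO) and the strategy encodings are assumptions invented for illustration, not the values used by Hilbe and Sigmund:

```python
# A minimal sketch of the one-shot incentive game described above.
# All numeric parameters are made up for illustration.
B, C = 3.0, 1.0    # benefit to player 2 / cost to player 1 of cooperating
BETA = 2.0         # size of the reward (or fine) applied to player 1
GAMMA = 0.5        # cost to player 2 of administering an incentive
P_INFO = 0.6       # probability that player 1 learns player 2's strategy

P1_STRATEGIES = ("always_coop", "always_defect", "opp_coop", "opp_defect")
P2_STRATEGIES = ("none", "punish", "reward", "both")

def cooperates(s1, known_s2):
    """Does player 1 cooperate, given what it knows about player 2?"""
    if s1 == "always_coop":
        return True
    if s1 == "always_defect":
        return False
    if s1 == "opp_coop":   # cooperate unless defection is known to go unpunished
        return known_s2 is None or known_s2 in ("punish", "both")
    # opp_defect: defect unless an incentive is known to be in place
    return known_s2 is not None and known_s2 != "none"

def payoffs(s1, s2):
    """Expected (player 1, player 2) pay-offs over the information lottery."""
    total1 = total2 = 0.0
    for known, prob in ((s2, P_INFO), (None, 1.0 - P_INFO)):
        coop = cooperates(s1, known)
        p1 = -C if coop else 0.0
        p2 = B if coop else 0.0
        if coop and s2 in ("reward", "both"):
            p1 += BETA      # cooperation is rewarded...
            p2 -= GAMMA     # ...at a cost to the incentivizer
        if not coop and s2 in ("punish", "both"):
            p1 -= BETA      # defection is fined...
            p2 -= GAMMA     # ...also at a cost to the incentivizer
        total1 += prob * p1
        total2 += prob * p2
    return total1, total2

for s1 in P1_STRATEGIES:
    for s2 in P2_STRATEGIES:
        p1, p2 = payoffs(s1, s2)
        print(f"{s1:>13} vs {s2:>6}: p1 = {p1:+.2f}, p2 = {p2:+.2f}")
```

Note how opportunism enters only through P_INFO: an opportunist behaves differently depending on whether the information lottery reveals the co-player's incentive scheme.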
Hilbe and Sigmund found that, as the frequencies of the different strategies change over time, a wide variety of evolutionary dynamics can occur. Some strategy pairs tend to be displaced by others, so the population evolves away from one pair and toward another, while other pairs are stationary and change only through small random shocks. Further, one pair of strategies tends to be the ultimate evolutionary outcome: player one uses opportunistic cooperation (i.e., they cooperate unless they know they can defect without being punished) and player two uses only punishment. The mathematicians call this pair of strategies a Nash equilibrium, since neither player can benefit by changing their strategy while the other player keeps theirs unchanged.
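Under the toy parameters above, one can check this equilibrium directly by testing every unilateral deviation. This snippet reuses the payoffs function and strategy tuples from the previous sketch:

```python
# Verify that (opp_coop, punish) is a Nash equilibrium under the toy
# parameters above: no unilateral deviation raises the deviator's payoff.
s1_eq, s2_eq = "opp_coop", "punish"
base1, base2 = payoffs(s1_eq, s2_eq)

no_better_1 = all(payoffs(s1, s2_eq)[0] <= base1 for s1 in P1_STRATEGIES)
no_better_2 = all(payoffs(s1_eq, s2)[1] <= base2 for s2 in P2_STRATEGIES)
print("Nash equilibrium:", no_better_1 and no_better_2)  # True with these numbers
```

Whether the pair remains an equilibrium for other parameter choices depends on the relative sizes of the costs, the incentives, and the information probability.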
While many populations evolve toward this Nash equilibrium, the researchers identified one essential step in this evolution: the moment when player one transitions from opportunistic defection to opportunistic cooperation. Moreover, they found that the time until this transition occurs is greatly reduced if player two's strategy involves rewarding, which entices player one to become more cooperative. In other words, the model reproduces the two-step incentive strategy described earlier, in which step one is rewarding and step two, the more lasting one, is punishment. In this way, the model may help determine the effectiveness of incentives in social programs by providing a glimpse into the future.
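One crude way to explore this transition is to simulate it. The sketch below uses simple imitation dynamics (a pairwise-comparison rule with mutation, an assumption of this sketch rather than the exact dynamics analyzed in the paper) and measures how long the population takes to reach opportunistic cooperation met by punishment, with and without rewarding strategies available. It again reuses the payoffs function and strategy tuples from the model sketch:

```python
import math
import random
from collections import Counter
from itertools import product

# Precompute the expected pay-off table for all strategy pairs.
PAYOFF = {pair: payoffs(*pair) for pair in product(P1_STRATEGIES, P2_STRATEGIES)}

def steps_to_transition(p2_set, n=40, mu=0.01, s=5.0, max_steps=200_000, seed=1):
    """Imitation steps until opportunistic cooperation meets punishment."""
    rng = random.Random(seed)
    pop1, pop2 = ["always_defect"] * n, ["none"] * n
    for step in range(max_steps):
        c1, c2 = Counter(pop1), Counter(pop2)
        # Expected pay-off of a strategy against the opposite population.
        def f1(x): return sum(c2[y] * PAYOFF[(x, y)][0] for y in p2_set) / n
        def f2(y): return sum(c1[x] * PAYOFF[(x, y)][1] for x in P1_STRATEGIES) / n
        pop, fit, strats = ((pop1, f1, P1_STRATEGIES) if rng.random() < 0.5
                            else (pop2, f2, p2_set))
        i = rng.randrange(n)
        if rng.random() < mu:                   # rare random exploration
            pop[i] = rng.choice(strats)
        else:                                   # imitate a fitter role model
            j = rng.randrange(n)
            p = 1 / (1 + math.exp(-s * (fit(pop[j]) - fit(pop[i]))))
            if rng.random() < p:
                pop[i] = pop[j]
        if c1["opp_coop"] > n // 2 and c2["punish"] + c2["both"] > n // 2:
            return step
    return max_steps  # no transition observed within the budget

for label, p2_set in (("rewards available", P2_STRATEGIES),
                      ("punishment only  ", ("none", "punish"))):
    print(label, steps_to_transition(p2_set))
```

This is meant to show the kind of experiment the paper's result suggests; the measured hitting times depend heavily on the made-up parameters and the chosen update rule.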
“At the moment, the discussion about the evolution of (human) cooperation is on a rather theoretical level,” Hilbe explained. “The main aim is to understand under which circumstances individuals tend to cooperate with each other and to which extent they behave selfishly. But the knowledge about the nature of human altruism might eventually lead to optimally adapted incentive schemes (for example, for increasing worker motivation).
“However, we don’t expect our study to be the final say on this topic. It is a delicate matter to capture the complexity of human interactions in game theoretic models and usually those models are very sensitive to the underlying assumptions. It will take much further research to get a conclusive understanding of the effects of incentives.”