Séminaires internes
Eco-lunch

Aidas Masiliunas

AMSE
Learning in contests with payoff risk and foregone payoff information
Venue

Îlot Bernard du Bois - Room 21

AMU - AMSE
5-9 boulevard Maurice Bourdet
13001 Marseille

Date(s)
Thursday, November 16, 2017 | 12:30 - 13:15
Contact(s)

Ugo Bolletta : ugo.bolletta2[at]unibo.it
Mathieu Faure : mathieu.faure[at]univ-amu.fr

Abstract

We test the hypothesis that deviations from Nash equilibrium in rent-seeking contests are caused by the slow convergence of a boundedly rational learning process. We identify two elements of the game that slow down payoff-based learning and eliminate them in an experiment. First, the distribution of payoffs generated by each action depends on the opponent's action, which varies over time. We eliminate this source of payoff variability by providing foregone payoff information, allowing all actions to be evaluated against the same sequence of the opponent's actions. The second element is payoff risk, which slows down learning by reducing the correlation between realized and expected payoffs. We manipulate payoff risk using a 2x2 design: payoffs from contest investments are either risky (as in standard contests) or safe (as in proportional contests), and payoffs from the part of the endowment not invested in the contest can be either safe (as in standard contests) or risky. We find that Nash equilibrium rates rise to 100% when payoff risk is absent and foregone payoff information is available, but remain at most 20% in all other cases. This result can be explained by payoff-based learning, but not by other theories that might interact with payoff risk (a non-monetary utility of winning, risk-seeking preferences, spitefulness, probability weighting, QRE). We propose a hybrid learning model that combines reinforcement and belief learning with preferences, and we show that it fits the data well, mostly because of the reinforcement learning component.
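For readers unfamiliar with this class of models: a hybrid of reinforcement and belief learning can be illustrated with an experience-weighted-attraction-style update, in which foregone payoff information allows unchosen actions to be reinforced as well. The sketch below is purely illustrative and is not the authors' specification; the function names, the parameters delta (weight on foregone payoffs), phi and kappa (decay rates), and lam (logit precision), and the softmax choice rule are all assumptions made for this example.

```python
import numpy as np

def ewa_update(attractions, n_obs, chosen, payoffs,
               delta=0.5, phi=0.9, kappa=0.0):
    """One illustrative experience-weighted attraction (EWA) update.

    attractions : array of attraction values, one per action
    n_obs       : experience weight N(t-1)
    chosen      : index of the action actually played
    payoffs     : payoff of every action against the opponent's current
                  action (observable here because the experiment provides
                  foregone payoff information)
    delta       : weight on foregone payoffs (delta=0 is pure reinforcement
                  learning; delta=1 approximates belief learning)
    phi, kappa  : decay of past attractions and of the experience weight
    """
    n_new = phi * (1 - kappa) * n_obs + 1
    # Chosen action gets full weight; unchosen actions get weight delta.
    weights = delta + (1 - delta) * (np.arange(len(attractions)) == chosen)
    new_attr = (phi * n_obs * attractions + weights * payoffs) / n_new
    return new_attr, n_new

def choice_probs(attractions, lam=1.0):
    """Logit (softmax) choice rule over attractions."""
    z = lam * (attractions - attractions.max())
    p = np.exp(z)
    return p / p.sum()

# Hypothetical demo: three investment levels, arbitrary payoffs.
A, N = np.zeros(3), 1.0
A, N = ewa_update(A, N, chosen=0, payoffs=np.array([2.0, 1.0, 0.5]))
print(choice_probs(A))
```

In this parameterization, setting delta = 0 shuts off learning from foregone payoffs and leaves pure payoff reinforcement of the chosen action, which is one way to see why providing foregone payoff information can speed up convergence.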