
Since Shapley's pathbreaking paper Stochastic Games (1953), people have analyzed stochastic games and their deterministic counterpart, dynamic games, by examining Markov Perfect Equilibria: equilibria that condition only on the state and are subgame perfect. These are essentially all games with observable actions. I would like to know whether there are analogous equilibrium concepts for games with persistent incomplete information. By persistent, I mean that private information is not independent across periods, so that players actually have to learn.

Edit:

Motivation: I have written a paper on a certain conceptual issue of Markov Perfect Equilibrium (the definition of the state space). Several applied economists have asked me if a similar analysis can be done for MPE in incomplete information games. So I would like to know how the notion is applied in the literature.

  • This paper seems related: https://www.aaai.org/ocs/index.php/WS/AAAIW11/paper/viewFile/3958/4285 (2016-08-25)

3 Answers


To analyze dynamic games with persistent information, standard equilibrium concepts still apply. Markov equilibrium is obviously ruled out if you want strategies to have memory, but any Nash equilibrium or Bayesian equilibrium will suffice.

If you want to capture learning dynamics, those would be captured by the strategies themselves. Maynard Smith and Price (1973) define Evolutionarily Stable Strategies (ESS). You may also be interested in the model of fictitious play by Brown (1951), in which each player best-responds to the empirical distribution of the opponent's past actions.
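To make the fictitious-play idea concrete, here is a minimal sketch for matching pennies. This is an illustrative implementation of Brown's procedure, not anything from the papers cited above: each player tracks a count of the opponent's past actions and plays a pure best response to that empirical mixture. In matching pennies the empirical action frequencies converge to the mixed equilibrium (1/2, 1/2), which is the kind of learning behavior the answer refers to.

```python
import numpy as np

# Row player's payoffs in matching pennies (row player wants to match);
# the game is zero-sum, so the column player's payoffs are -A.
A = np.array([[1, -1],
              [-1, 1]])
B = -A

def fictitious_play(A, B, T=10_000):
    """Run T rounds of fictitious play; return empirical action frequencies."""
    counts_of_2 = np.ones(2)  # player 1's pseudo-counts of player 2's actions
    counts_of_1 = np.ones(2)  # player 2's pseudo-counts of player 1's actions
    freq1 = np.zeros(2)
    freq2 = np.zeros(2)
    for _ in range(T):
        # Each player best-responds to the opponent's empirical mixture.
        belief_about_2 = counts_of_2 / counts_of_2.sum()
        belief_about_1 = counts_of_1 / counts_of_1.sum()
        a1 = int(np.argmax(A @ belief_about_2))   # expected payoff per row
        a2 = int(np.argmax(belief_about_1 @ B))   # expected payoff per column
        counts_of_2[a2] += 1
        counts_of_1[a1] += 1
        freq1[a1] += 1
        freq2[a2] += 1
    return freq1 / T, freq2 / T

f1, f2 = fictitious_play(A, B)
# f1 and f2 approach the mixed equilibrium (0.5, 0.5) as T grows.
```

The realized play cycles deterministically, but the time-averaged frequencies converge to the mixed equilibrium; this convergence holds for all two-player zero-sum games (Robinson, 1951).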


My understanding is that this has not been worked out, and doing so would be very valuable, particularly for empirical applications. There is a citation floating around to some work by Maskin and Tirole, but I asked Tirole and the cited paper does not exist.


More of an extended comment: I strongly suggest Mailath and Samuelson's "Repeated Games and Reputations: Long-Run Relationships". See the discussion of Section 5.6.3, Markov Perfect Equilibrium, on page 190.