Aditya Makkar
Equilibria in Static Games
  1. INTRODUCTION
    1. What Is An Equilibrium?
    2. Classification of Games
    3. Notation
  2. GAMES WITH COMPLETE INFORMATION
    1. Normal-Form Games and Nash Equilibrium
      1. Computing Nash Equilibria
    2. Correlated Equilibrium
      1. A game with correlated equilibrium but no Nash equilibrium
    3. Properties of correlated equilibrium
  3. GAMES WITH INCOMPLETE INFORMATION
    1. Belief Spaces
      1. Belief Operators
    2. Modeling Games with Incomplete Information
      1. Bayesian Equilibrium
    3. Belief Hierarchies
  4. REFERENCES

This blog post is a lightly edited version of the project report I submitted for a course on multi-agent learning taught by Professor Eugene Vinitsky.

INTRODUCTION

The importance of equilibria in game theory cannot be overstated. Arguably, the first major result in game theory came in 1928, when John von Neumann showed that any two-person zero-sum finite game has an equilibrium—a consequence of the well-known von Neumann’s minimax theorem (von Neumann, 1928). His ideas built on general foundations of game theory first laid down by Émile Borel in 1921 (Myerson, 2021). The next major result came in 1950, when John Nash showed that every \(n\)-player non-cooperative finite game has at least one equilibrium of the kind we now call a Nash equilibrium: a tuple of mixed strategies in which each player's strategy maximizes his payoff when the strategies of the others are held fixed (Nash, 1950), (Nash, 1951).

Since then, different game settings and desired properties have engendered a zoo of equilibrium concepts. These range from relatively well-known ones, like correlated equilibrium and subgame perfect equilibrium, to esoteric ones, like self-confirming equilibrium (a generalization of Nash equilibrium for extensive-form games) and M equilibrium (which relaxes the assumptions of perfect maximization and perfect beliefs).

In this blog post I aim to discuss a few notions of equilibrium in static game settings for non-cooperative game theory. These are game settings in which the players make simultaneous moves with the goal of maximizing their own utility. The players may have only partial information about the data of the game and about the other players. If everything is common knowledge (defined below) then the setting reduces to the special case of normal-form games (also called strategic-form games). After discussing the notions of Nash equilibrium and correlated equilibrium, most of the post focuses on games of incomplete information.

This is an expository post whose intended audience need not be experts in game theory. Although the prerequisites from game theory are minimal, the post develops the ideas in as general and abstract a setting as possible, and thus presupposes some mathematical maturity in analysis and probability theory. The mathematician Gustave Choquet used to exhort other mathematicians to always consider a problem in the most general setting where it makes sense. As we will see, this abstraction allows us to subsume many of the game settings studied in the literature.

What Is An Equilibrium?

What even is a game? For our purposes, a game is a mathematical model for decision making characterized by a formal structure governing the behavior of the players. The precise formal structure defines a particular game setting. Aumann (Aumann, 1974) defines the notion of an equilibrium in a game as a self-enforcing agreement. Intuitively, it is a tuple of strategies, one for each player, such that no player can win more by unilaterally switching to a different strategy. In general, it can help to think of equilibria as fixed points of the best response functions. The notion of an equilibrium is intimately tied to the game setting, and it gets called by a different name depending on the setting.

An equilibrium need not always exist. Here are three examples (taken from a Quora answer by Erik Madsen):

  1. Consider a very simple single-player game, where the strategy space is \((0,1)\) and the utility is the number you choose. No matter what \(s \in (0,1)\) you choose, \((s + 1)/2 \in (0,1)\) is always better. This example highlights that non-compact strategy spaces are not nicely behaved.

  2. But even compact strategy spaces are not sufficient to ensure an equilibrium. For example, consider again a single-player game where the strategy space is \([0,1]\) and the utility is \(s\) if \(s \in [0,1)\) and \(0\) otherwise. This example highlights that discontinuous utility functions are not nicely behaved.

    These two examples highlight that even without strategic interaction, when the game theory problem is simply an optimization problem, an equilibrium may not exist.

  3. As expected, when there are multiple players, the complications of the single-player setting only get exacerbated. The previous game can be generalized to a two-player setting as follows: the strategy space for both players is \([0,1]\) and the utility of each player is their choice, unless both players choose \(1\), in which case both of them get \(0\). As can be easily checked, there is no equilibrium, not even a randomized one.

Classification of Games

First, let us mention again that in this post we are studying game theory in the flavor of non-cooperative game theory as studied in economics, computer science, and, now more and more commonly, machine learning. There is another subject, combinatorial game theory, with a very different focus and very different tools and techniques. See the book (Conway, 2001) by John Conway for an introduction to that subject.

There are two common ways to classify games. One is static vs dynamic games, and the other is games of complete information vs games of incomplete information.

We already mentioned static games above. Dynamic games are those games in which there are multiple moves involved, and the game makes explicit the order in which the players move and what each player knows (using information sets) when making each of their moves. They can be described using a game tree. Each vertex of this tree represents a position in the game. The game starts at the root of this tree and moves to the next vertex depending on the moves made by the players. Extensive-form games are an example of this.

The distinction between games of complete and incomplete information is precisely what their names suggest. In games of complete information, all aspects of the game are common knowledge, which means that everybody knows everything about the game, everybody knows that everybody knows everything about the game, and so on. Games that don’t meet this (stringent) criterion are called games of incomplete information.

In this post, our focus is on static games with complete or incomplete information.

Notation

I will be using sans-serif typestyle for spaces, for example \(\mathsf{S}, \mathsf{X}, \mathsf W\) etc., and calligraphic typestyle for \(\sigma\)-algebras, for example, \(\mathcal S, \mathcal X, \mathcal W\) etc. For finite sets the \(\sigma\)-algebra will be the power set, and for topological spaces it will be the Borel \(\sigma\)-algebra.

GAMES WITH COMPLETE INFORMATION

The setting studied in this section is a special case of the more general setting considered in the next section on games with incomplete information. But the intuition developed here will be useful. We start by defining normal-form games; these games are also called strategic-form games.

Normal-Form Games and Nash Equilibrium

Definition 1 (Normal-Form Games): There are \(N\) players, denoted \([N] := \left\{ 1, \ldots, N\right\}\). Each player \(i \in [N]\) has a strategy set \(\mathsf S_i\), and all the players simultaneously choose an element from their strategy sets. The decisions of the players are motivated by their utility functions, i.e., for each player \(i \in [N]\), there is a utility function \(u_i \colon \mathsf{S} \to \mathbb R\), where \(\mathsf{S} := \mathsf{S}_1 \times \cdots \times \mathsf{S}_N\), that depends on their own decision and the decision of other players and which they seek to maximize.

This structure of the game is common knowledge, i.e., each player knows this structure, each player knows that the other players know this structure, and so on. This can be achieved by reading aloud the above paragraph in a room where all the players are present (assuming none of them are deaf, or are distracted, etc); then all the players will hear it, all the players will know that all the other players heard it, and so on.

Given a strategy tuple \(s = (s_1, \ldots, s_N) \in \mathsf{S}\) and a player \({i \in [N]}\), we will often write \(s\) as \((s_i, s_{-i})\) when the focus is on \(s_i;\) here \(s_{-i} = (s_1, \ldots, s_{i-1}, s_{i+1}, \ldots, s_N) \in \mathsf{S}_{-i} := \mathsf{S}_1 \times \cdots \times \mathsf{S}_{i-1} \times \mathsf{S}_{i+1} \times \cdots \times \mathsf{S}_N\).

Definition 2 (Nash Equilibrium): Given a game, a strategy tuple \(s^* = (s_1^*, \ldots, s_N^*) \in \mathsf{S}\) is a Nash equilibrium if for each player \({i \in [N]}\) and each strategy \(s_i \in \mathsf{S}_i\), we have
\[\begin{aligned} u_i(s^*) \ge u_i(s_i, s^*_{-i}).\end{aligned}\]

There may be multiple Nash equilibria. The definition of Nash equilibrium simply states that, given that everyone plays \(s^*\), no player has an incentive to unilaterally change their strategy. The game of rock, paper and scissors for two players has no Nash equilibrium in the sense of Definition 2, as can be easily checked (the sketch below confirms it by brute force). Therefore, it makes sense to allow players to play randomized or mixed strategies where, instead of choosing one element from their strategy set, they assign a probability distribution on their strategy set and draw an element from this distribution.
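Since the game is finite, the condition in Definition 2 can be checked by brute force over all strategy tuples. Here is a minimal sketch in Python (numpy assumed; the helper name pure_nash_equilibria and the encoding 0 = rock, 1 = paper, 2 = scissors are my own choices for illustration):

```python
# A minimal sketch: brute-force search for pure Nash equilibria in a
# finite game. utilities[i] is player i's payoff array, one axis per player.
import itertools
import numpy as np

def pure_nash_equilibria(utilities):
    shape = utilities[0].shape
    equilibria = []
    for s in itertools.product(*(range(n) for n in shape)):
        # s is an equilibrium iff no player gains by a unilateral deviation.
        def gains_by_deviating(i):
            return any(
                utilities[i][s[:i] + (d,) + s[i + 1:]] > utilities[i][s]
                for d in range(shape[i])
            )
        if not any(gains_by_deviating(i) for i in range(len(utilities))):
            equilibria.append(s)
    return equilibria

# Rock, paper, scissors (0 = rock, 1 = paper, 2 = scissors); zero-sum.
u1 = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
print(pure_nash_equilibria([u1, -u1]))  # [] -- no pure Nash equilibrium
```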

Definition 3 (Mixed Strategy): The setting is that of Definition 1. We assume that the sets \((\mathsf{S}_i)_{i \in [N]}\) have \(\sigma\)-algebras associated with them and the functions \((u_i)_{i \in [N]}\) are measurable. This is trivially true if the strategy sets are finite. For each player \({i \in [N]}\), define \(\Sigma_i := \mathscr{P}(\mathsf{S}_i)\) to be the set of all probability measures on \(\mathsf{S}_i\), and extend \(u_i\) to \(\Sigma := \Sigma_1 \times \cdots \times \Sigma_N\) as follows
\[\begin{aligned} \Sigma \ni \sigma = (\sigma_1, \ldots, \sigma_N) \mapsto u_i(\sigma) = \int_{\mathsf{S}_1} \cdots \int_{\mathsf{S}_N} u_i(s_1, \ldots, s_N) \, \mathrm{d}{\sigma_N(s_N)} \cdots \, \mathrm{d}{\sigma_1(s_1)} \in \mathbb R.\end{aligned}\]

Note that we could have written the integral above as

\[\begin{aligned} \int_{\mathsf{S}} u_i(s) \, \mathrm{d}{\sigma(s)},\end{aligned}\]
where, abusing notation, we write \(\sigma\) to mean the product measure \(\sigma_1 \otimes \cdots \otimes \sigma_N\) on \(\mathsf{S}\).
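For finite strategy sets this integral is a plain multilinear sum, which is easy to compute directly. A small sketch (Python with numpy; the helper name expected_utility is my own):

```python
# Expected utility u_i(sigma) in a finite game: u is player i's payoff
# array with one axis per player, sigmas is a list of probability vectors
# (mixed strategies), one per player, in the same player order.
import numpy as np

def expected_utility(u, sigmas):
    out = np.asarray(u, dtype=float)
    for s in sigmas:
        # Average out the first remaining axis against that player's mixture.
        out = np.tensordot(out, s, axes=([0], [0]))
    return float(out)

# Rock, paper, scissors with both players playing uniformly at random.
u1 = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
print(expected_utility(u1, [np.ones(3) / 3, np.ones(3) / 3]))  # 0.0
```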

We can again define a notion of equilibrium here. These equilibria are called mixed Nash equilibria and the notion of equilibria defined in Definition 2 are called pure Nash equilibria. Without a qualifier, the term ‘Nash equilibrium’ typically refers to a mixed Nash equilibrium—the context should make it clear.

Definition 4 (Mixed Nash Equilibrium): Under the setting of Definition 3, a tuple \(\sigma^* = (\sigma_1^*, \ldots, \sigma_N^*) \in \Sigma\) is a mixed Nash equilibrium if for each player \({i \in [N]}\) and each mixed strategy \(\sigma_i \in \Sigma_i\), we have
\[\begin{aligned} u_i(\sigma^*) \ge u_i(\sigma_i, \sigma_{-i}^*).\end{aligned}\]

Nash in (Nash, 1950) and (Nash, 1951) showed that, for finite games, i.e., when the strategy sets are finite, a Nash equilibrium always exists. We mention a generalization of this result by Glicksberg from 1952 (Glicksberg, 1952).

Theorem 5 (Existence of Nash Equilibrium): Suppose the strategy sets \((\mathsf{S}_i)_{i \in [N]}\) are compact Hausdorff spaces and the utility functions \((u_i)_{i \in [N]}\) are continuous. Then a (mixed) Nash equilibrium always exists.

See the paper (Reny, 1999) for a discussion on existence of Nash equilibria when the utility functions may be discontinuous.

Computing Nash Equilibria

For this section we assume that the game is finite, since computing a Nash equilibrium in the full generality of Theorem 5 can only be done approximately, and our focus here is exact computation.

We first consider the simple setting of two-player zero-sum games. Here we just need one utility function \(u \colon \mathsf{S} \to \mathbb R\), since if player 1 gets \(u(s_1, s_2)\) for strategies \((s_1, s_2)\), then player 2 gets \(-u(s_1, s_2)\). We can extend \(u\) to the space \(\Sigma\) of mixed strategies, and this extension is multilinear because the strategy sets are finite. Player 1 will play one of

\[\begin{aligned} \argmax_{\sigma_1 \in \Sigma_1} \min_{\sigma_2 \in \Sigma_2} u(\sigma_1, \sigma_2),\end{aligned}\]
and player 2 will play one of
\[\begin{aligned} \argmin_{\sigma_2 \in \Sigma_2} \max_{\sigma_1 \in \Sigma_1} u(\sigma_1, \sigma_2).\end{aligned}\]
von Neumann’s minimax theorem (von Neumann, 1928) shows that
\[\begin{aligned} \max_{\sigma_1 \in \Sigma_1} \min_{\sigma_2 \in \Sigma_2} u(\sigma_1, \sigma_2) = \min_{\sigma_2 \in \Sigma_2} \max_{\sigma_1 \in \Sigma_1} u(\sigma_1, \sigma_2),\end{aligned}\]
and thus an equilibrium always exists. Of course, this is a special case of Theorem 5. But the optimization problem above is a linear optimization problem, and can be easily solved.
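Here is a minimal sketch of that linear program using scipy; the encoding of the decision variables (player 1's mixture together with the game value \(v\)) is one standard choice, not the only one.

```python
# Maximin strategy for player 1 in a finite zero-sum game, by linear
# programming: maximize v subject to (U^T x)_j >= v for every column j,
# with x a probability vector. numpy and scipy are assumed available.
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(U):
    m, n = U.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # minimize -v = maximize v
    A_ub = np.hstack([-U.T, np.ones((n, 1))])      # v - (U^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum(x) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]      # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]                    # equilibrium mixture, value

# Rock, paper, scissors: the unique equilibrium is uniform, with value 0.
U = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
x, v = solve_zero_sum(U)
print(np.round(x, 3), round(v, 3))  # [0.333 0.333 0.333] 0.0
```

By symmetry, player 2's equilibrium strategy solves the analogous program with \(-U^\top\) in place of \(U\).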

There is no known computationally efficient algorithm for general-sum games, even for two players. In fact, finding a Nash equilibrium is PPAD-complete (Daskalakis, Goldberg and Papadimitriou, 2009). But for simple enough games we can compute an equilibrium by hand directly from the definitions, as illustrated below.
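For instance, in a \(2 \times 2\) game, an equilibrium in which both players fully mix must make each player indifferent between their two pure strategies, and the two indifference equations can be solved in closed form. A sketch (the formulas assume the denominators are non-zero, i.e., that a fully mixed equilibrium actually exists):

```python
# Fully mixed equilibrium of a 2x2 bimatrix game (A, B) via indifference:
# player 1 plays (p, 1-p) so that player 2's two columns pay the same,
# and player 2 plays (q, 1-q) so that player 1's two rows pay the same.
import numpy as np

def mixed_2x2(A, B):
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    return p, q

# Matching pennies: the unique Nash equilibrium mixes 50-50 for both.
A = np.array([[1, -1], [-1, 1]])
p, q = mixed_2x2(A, -A)
print(p, q)  # 0.5 0.5
```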

Correlated Equilibrium

Correlated equilibrium was introduced by Robert Aumann in 1974 (Aumann, 1974). The intuition behind this equilibrium is that mediation by a third party can often improve the outcome of a game. We start with the same setting as in Definition 1. We need to add some structure on top of it to be able to talk about correlating mechanisms.

We have a probability space \((\Omega, \mathcal F, \mathbb P)\) and, for each player \({i \in [N]}\), a sub-\(\sigma\)-algebra \(\mathcal I_i \subseteq \mathcal F\). The set \(\Omega\) can be thought of as the states of the randomizing mechanism (the outcomes of a coin toss or a die roll, for example), and the \(\sigma\)-algebras \(\mathcal I_i\) formalize the information revealed to each player. Let me expand on this notion a bit more; also see the discussion in the subsection ‘‘Subfields’’ of Section 4 of (Billingsley, 1995) on how sub-\(\sigma\)-algebras correspond heuristically to partial information.

First, when we talk about information in \(\sigma\)-algebras, we are not talking about information in the technical sense of entropy; rather, we use the word heuristically, while the math itself is formal. Suppose a point \(\omega \in \Omega\) is sampled from \(\mathbb P\), i.e., \(\omega\) lies in \(A \in \mathcal F\) with probability \(\mathbb P(A)\). What can you, who doesn't directly observe \(\omega\) but has access to the \(\sigma\)-algebra \(\mathcal F\), say about \(\omega\)? You can say for each \(A \in \mathcal F\) whether \(\omega \in A\) or \(\omega \notin A\). Thus, to you \(\omega\) and \(\omega'\) are equivalent if, for every \(A \in \mathcal F\), either both are in \(A\) or both are not in \(A\). More intuitively, \(\omega\) and \(\omega'\) are equivalent if the \(\sigma\)-algebra \(\mathcal F\) is too coarse to distinguish between them. This is an equivalence relation, called \(\mathcal F\)-equivalence, on \(\Omega\), and thus creates a partition of \(\Omega\). The finer the partition, the more informative the \(\sigma\)-algebra used to create it. Now, if instead of having access to \(\mathcal F\) you only have access to a sub-\(\sigma\)-algebra \(\mathcal I \subseteq \mathcal F\), then you have only partial information, since \(\mathcal F\)-equivalence of \(\omega\) and \(\omega'\) also implies \(\mathcal I\)-equivalence of \(\omega\) and \(\omega'\). In the setting described in the previous paragraph, different players have access to different partial information given by their \(\sigma\)-algebras.

Coming back to the game, it proceeds as follows:

  1. A point \(\omega \in \Omega\) is sampled according to the probability measure \(\mathbb P\).

  2. The players do not explicitly see \(\omega\), but each player \({i \in [N]}\) is informed whether \(\omega \in A\) or \(\omega \notin A\) for all \(A \in \mathcal I_i\).

  3. Each player privately chooses a pure strategy \(s_i \in \mathsf{S}_i\).

  4. The utility of player \(i\) is \(u_i(s_1, \ldots, s_N)\).

We don’t have to limit players to pure strategies. For player \(i\), a mixed strategy is an \(\mathcal I_i\)-measurable function \(\sigma_i \colon \Omega \to \mathsf{S}_i\). Note that this induces a probability measure \(\mathbb P \circ \sigma_i^{-1}\) on \(\mathsf{S}_i\), the distribution of \(\sigma_i\), which is what we were calling \(\sigma_i\) in Definition 3; explicitly, \(\mathbb P \circ \sigma_i^{-1}(A) = \mathbb P\left( \left\{ \omega \in \Omega \,:\, \sigma_i(\omega) \in A\right\} \right)\). A pure strategy is simply a constant mixed strategy.

Because information is revealed differently to each player, we can define a notion of a subjective probability measure \(\mathbb P_i\) for each player \(i\). One way would be \(\mathbb P_i(\cdot) := \mathbb P(\cdot \mid \mathcal I_i)\).

Definition 6 (Correlated Equilibrium): The tuple \(\sigma^* = (\sigma_1^*, \ldots, \sigma_N^*) \colon \Omega \to \mathsf{S}\) of player strategies is called a correlated equilibrium if for each player \({i \in [N]}\) and for each strategy \(\sigma_i \colon \Omega \to \mathsf{S}_i\), we have
\[\begin{aligned} \int_\Omega u_i(\sigma^*(\omega)) \, \mathrm{d}{\mathbb P_i(\omega)} \ge \int_\Omega u_i(\sigma_i(\omega), \sigma_{-i}^*(\omega)) \, \mathrm{d}{\mathbb P_i(\omega)}.\end{aligned}\]

A game with correlated equilibrium but no Nash equilibrium

This example is taken from this StackExchange answer by Michael Greinecker. There are three players \(1, 2\) and \(3\). The strategy space of players \(1\) and \(2\) is \([0,1] \times \mathbb N\), and the strategy space of player \(3\) is \([0, 1]\). Denote with \((a, m)\) the choice of player \(1\), with \((b, n)\) the choice of player \(2\), and with \(c\) the choice of player \(3\).

The utilities are defined as follows: for player \(3\) it equals \(1\) if \(a = c\), and equals \(0\) otherwise. For players \(1\) and \(2\) we have three cases

  1. it equals \(2\) for both if \(a = b \neq c\),

  2. it equals \(-2\) for both if \(a = c\), and

  3. finally in the case where \(a \neq b\) and \(a \neq c\) it depends on the value of \(m\) and \(n\): player \(1\) gets \(1\) and player \(2\) gets \(-1\) if \(m > n\), player \(2\) gets 1 and player \(1\) gets \(-1\) if \(m < n\), and both get \(0\) if \(m=n\).

Note that the situation for players \(1\) and \(2\) is not symmetric.

This game has no Nash equilibrium. To see this, first note that the way utilities in this game are defined, player 1 must play a randomized strategy to choose \(a\) because if he does not then player 3 will always play \(c = a\). Now if player 1 is playing a randomized strategy to choose \(a\), then with non-zero probability we will have case 3 above, but then no choice of \(m\) and \(n\) can be in equilibrium.

The game, however, has a correlated equilibrium. The correlating mechanism uniformly samples a point from \([0,1]\), and reveals the result only to players \(1\) and \(2\), who both play it (so \(a = b\)). Player 3 guesses this uniformly chosen value correctly with probability \(0\), and thus we are almost surely in case 1 above, where players \(1\) and \(2\) get a utility of \(2\) and player \(3\) gets a utility of \(0\).

Properties of correlated equilibrium

We briefly list some properties of correlated equilibria. More details can be found in Chapter 8 of the book (Maschler, Solan and Zamir, 2020).

  1. If a two-player zero-sum game has a unique Nash equilibrium then it is also the unique correlated equilibrium by a general property of two-player zero-sum games (Forges, 1990).

  2. Every Nash equilibrium is also a correlated equilibrium.

  3. In fact, every point in the convex hull of Nash equilibria is a correlated equilibrium.

  4. A correlated equilibrium can lie outside the convex hull of the Nash equilibria. For this it is necessary that the randomizing mechanism not be public, so that different players receive different partial information. (For finite games, correlated equilibria can be computed by linear programming, as sketched below.)
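For finite games, Definition 6 specializes nicely: by a revelation-principle style argument one can take \(\Omega = \mathsf{S}\), let each player be told only their own coordinate of \(\omega\), and characterize correlated equilibria as the distributions \(\mu\) on \(\mathsf{S}\) satisfying \(\sum_{s_{-i}} \mu(s_i, s_{-i})\left[ u_i(s_i, s_{-i}) - u_i(s_i', s_{-i}) \right] \ge 0\) for every player \(i\) and pure strategies \(s_i, s_i'\). These constraints are linear in \(\mu\), so a correlated equilibrium can be found by linear programming. Below is a hedged sketch for the game of Chicken; the payoff numbers are a common textbook choice, and maximizing total utility is just one way to select among the feasible equilibria.

```python
# Correlated equilibrium of a finite two-player game by linear programming.
# mu is a distribution over strategy profiles, flattened so that
# mu[i*n + j] = mu(row i, column j). numpy and scipy are assumed available.
import numpy as np
from scipy.optimize import linprog

# Chicken: 0 = dare, 1 = chicken; U1 is the row player's payoff matrix.
U1 = np.array([[0, 7], [2, 6]])
U2 = U1.T
m, n = U1.shape

rows = []
for si in range(m):        # row player: deviating from si to ti never helps
    for ti in range(m):
        if ti != si:
            r = np.zeros(m * n)
            for j in range(n):
                r[si * n + j] = U1[ti, j] - U1[si, j]
            rows.append(r)
for sj in range(n):        # column player: deviating from sj to tj
    for tj in range(n):
        if tj != sj:
            r = np.zeros(m * n)
            for i in range(m):
                r[i * n + sj] = U2[i, tj] - U2[i, sj]
            rows.append(r)

res = linprog(
    c=-(U1 + U2).reshape(-1),                        # maximize total utility
    A_ub=np.array(rows), b_ub=np.zeros(len(rows)),   # incentive constraints
    A_eq=np.ones((1, m * n)), b_eq=np.array([1.0]),  # mu is a distribution
    bounds=[(0, None)] * (m * n),
)
print(res.x.reshape(m, n))  # a correlated equilibrium over (dare, chicken)^2
```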

GAMES WITH INCOMPLETE INFORMATION

In modeling real-life situations using games it is unreasonable to assume that every aspect of the game is common knowledge among the players. For now we work with the following informal definition of common knowledge. A generalization of this notion, called common belief, is defined in Definition 14.

Definition 7 (Common Knowledge (Informal)): A fact \(F\) is common knowledge among the players of a game if all the players know \(F\), all the players know that all the players know \(F\), all the players know that all the players know that all the players know \(F\), and so on for every finite number of levels.

This is informal because we haven’t defined what a ‘‘fact’’ is or what ‘‘knowing’’ means. An example of a fact that is common knowledge is a fact that is publicly announced in a room where every player is present, as we discussed previously.

As a motivating example consider the following game.

Example 8: We have two players, player 1 and player 2. A three-sided die with faces \(\left\{ A, B, C\right\}\) (they do exist, or simply label the faces of a regular die as follows: \(\left\{ 1,2\right\} \to A\), \(\left\{ 3,4\right\} \to B\) and \(\left\{ 5,6\right\} \to C\)) is rolled. A fair coin with faces \(\left\{ H, T\right\}\) is tossed once. The outcomes of the roll and the toss are revealed to the players as follows.

The outcome of the die roll is revealed to player 1 if it is an \(A\) and otherwise it isn’t revealed. The outcome of the die roll is revealed to player 2 depending on the outcome of the coin toss—if the result is \(H\), then player 2 is informed of the outcome of the roll, otherwise he isn’t informed; the outcome of the coin toss is kept hidden from player 1. Both of these facts are common knowledge.

The utility of each player is 2 if he correctly guesses the outcome of the die roll and correctly guesses the other player's guess, is 1 if he correctly guesses the outcome of the die roll but guesses the other player's guess wrong, and is 0 otherwise.

In this game, each player has an incentive to correctly guess the outcome of the die roll and to anticipate the other player's guess, but they have incomplete information. The terminology that we will use going forward is as follows. There are three possible states of the reality, \(\mathsf{R} = \left\{ A, B, C\right\}\). Because of the structure of the game, the players may not directly observe which element of \(\mathsf{R}\) is the truth. Each player instead has a belief about the states of the ‘‘world’’, which in turn describes the states of the reality, beliefs about other players' beliefs, and so on.

Since this game is simple, we can verify that it has six possible states of the world, which we denote with \(\mathsf{W} = \left\{ \omega_{A,H}, \omega_{B,H}, \omega_{C,H}, \omega_{A, T}, \omega_{B, T}, \omega_{C, T}\right\}\). Player 1 never knows if player 2 knows the outcome of the die roll since the coin toss was hidden from him. The description of these states of the world is as follows.

  1. The state \(\omega_{A,H}\) is associated with the situation in which the outcome of the die roll was \(A\) and the coin toss was \(H,\) in which case both player 1 and player 2 know the outcome of the die roll, player 1 doesn't know if player 2 knows the outcome [of the die roll], player 2 knows that player 1 knows the outcome, player 1 doesn't know if player 2 knows that player 1 knows the outcome, and so on.

  2. The state \(\omega_{B, H}\) is associated with the situation in which the outcome of the die roll was \(B\) and the coin toss was \(H,\) in which case player 1 only knows that the outcome was either \(B\) or \(C\), player 2 knows the outcome, player 1 doesn't know if player 2 knows the outcome, player 2 knows that player 1 knows that the outcome is either \(B\) or \(C,\) player 2 knows that player 1 doesn't know if player 2 knows the outcome, and so on.

  3. The state \(\omega_{C, H}\) is exactly the same as the state \(\omega_{B, H}\), except interchange the role of \(B\) and \(C\).

  4. The state \(\omega_{A, T}\) is associated with the situation in which the outcome of the die roll was \(A\) and the coin toss was \(T\), in which case player 1 knows the outcome of the die roll, player 2 doesn't know the outcome, player 1 doesn't know if player 2 knows the outcome, player 2 doesn't know if player 1 knows the outcome, player 1 doesn't know if player 2 knows that player 1 knows the outcome, and so on.

  5. The state \(\omega_{B, T}\) is associated with the situation in which the outcome of the die roll was \(B\) and the coin toss was \(T\), in which case player 1 only knows that the outcome was either \(B\) or \(C\), player 2 doesn't know the outcome, player 1 doesn't know if player 2 knows the outcome, player 2 doesn't know if player 1 knows the outcome, player 2 doesn't know if player 1 doesn't know if player 2 knows the outcome, and so on.

  6. The state \(\omega_{C,T}\) is exactly the same as the state \(\omega_{B, T}\), except interchange the role of \(B\) and \(C\).

We first notice that by associating each state of the world with a state of the reality we have created a mapping \(\mathfrak{r} \colon \mathsf{W} \to \mathsf{R}\). Here the mapping is trivial: \(\mathfrak{r}(\omega_{X,Y}) = X\) for \(X \in \left\{ A,B,C\right\}\) and \(Y \in \left\{ H,T\right\}\). We next notice that player 1 cannot distinguish between \(\omega_{A,H}\) and \(\omega_{A,T}\), and between \(\omega_{B, H}\), \(\omega_{B, T}\), \(\omega_{C, H}\) and \(\omega_{C, T}\). Player 2, on the other hand, cannot distinguish between \(\omega_{A,T}\), \(\omega_{B,T}\) and \(\omega_{C,T}\). Recalling the idea of using \(\sigma\)-algebras to encode information from the section on correlated equilibrium, we have \[\begin{aligned} \mathcal I_1 &= \sigma(\left\{ \left\{ \omega_{A,H}, \omega_{A,T}\right\}, \left\{ \omega_{B, H}, \omega_{C, H}, \omega_{B, T}, \omega_{C, T}\right\}\right\}), \text{ and}\\ \mathcal I_2 &= \sigma(\left\{ \left\{ \omega_{A,H}\right\}, \left\{ \omega_{B, H}\right\}, \left\{ \omega_{C, H}\right\}, \left\{ \omega_{A,T},\omega_{B, T}, \omega_{C, T}\right\}\right\}) \end{aligned}\tag{1}\] as their information \(\sigma\)-algebras on \(\mathsf{W}\).

We need a mathematical structure that allows us to model players’ beliefs about the game, players’ beliefs about the beliefs of other players, and so on, that lets us study this infinite hierarchy without having to explicitly write it every time. Situations where players do not necessarily have the full information about the game, or are uncertain about the other players’ beliefs about the game, and so on, are called games with incomplete information. These ideas were developed by John Harsanyi and Robert Aumann in the 1960s and 1970s, and later refined by many other mathematicians and economists.

Belief Spaces

Taking inspiration from Example 8, we define the following. The definition here generalizes the modeling of beliefs we did in the section on correlated equilibrium because, instead of having one prior distribution \(\mathbb P\), we have a different stochastic kernel for each player.

At this point it might be useful to look at the definition of stochastic kernels. I use the notation defined there.

Definition 9 (Belief Space): A belief space is a tuple \(\left( N, (\mathsf{R}, \mathcal{R}), (\mathsf{W}, \mathcal W), \mathfrak{r}, \left( \kappa_i \right)_{i \in [N]} \right)\) with the following definitions:

  1. \(N\) is the number of players, denoted \([N] := \left\{ 1, \ldots, N\right\}\),

  2. \((\mathsf{R}, \mathcal R)\) is the measurable space of the states of reality,

  3. \((\mathsf{W}, \mathcal W)\) is the measurable space of the states of the world,

  4. \(\mathfrak{r} \colon \mathsf{W} \to \mathsf{R}\) is a measurable map associating with each state of the world a state of reality, and

  5. for each player \({i \in [N]}\), \(\kappa_i\) is a stochastic kernel from \((\mathsf{W}, \mathcal W)\) to \((\mathsf{W}, \mathcal W)\); writing \(\kappa_i^\omega := \kappa_i(\omega, \cdot)\) for the belief of player \(i\) at the world state \(\omega\), we require the coherency condition

\[\begin{aligned} \kappa_i\left( \omega, \left\{ \omega' \in \mathsf{W} \,:\, \kappa_i^{\omega'} = \kappa_i^\omega\right\} \right) = 1.\end{aligned}\]

We immediately explain the mysterious coherency condition. For any \({i \in [N]}\), define a relation \(\equiv_i\) on \(\mathsf{W}\) by letting \(\omega \equiv_i \omega'\) iff \(\kappa_i^\omega = \kappa_i^{\omega'}\). This is an equivalence relation on \(\mathsf{W}\), as can be easily checked. Thus, we get a partition \(\left\{ \mathsf{W}_{i,\alpha}\right\}_{\alpha \in A_i}\), for some indexing set \(A_i\), of \(\mathsf{W}\). That is, \(\left\{ \mathsf{W}_{i,\alpha}\right\}_{\alpha \in A_i}\) is a disjoint class of non-empty subsets of \(\mathsf{W}\) whose union is \(\mathsf{W}\). Player \(i\) cannot distinguish between the world states within an element of this partition—all of them induce the same distribution on world states. Take an element of this partition, say \(\mathsf{W}_{i,\alpha}\) for some \(\alpha \in A_i\). The coherency condition states that there is one probability measure \(\kappa_i^\alpha\) such that \(\kappa_i^\alpha = \kappa_i^\omega\) for every \(\omega \in \mathsf{W}_{i,\alpha}\), and that this measure assigns probability \(1\) to \(\mathsf{W}_{i,\alpha}\). Therefore, the coherency condition just formalizes the reasoning that the player cannot distinguish between states in \(\mathsf{W}_{i,\alpha}\) and that he believes the true world state is one of \(\mathsf{W}_{i,\alpha}\).

Let us go back to Example 8 and write it down in the language just developed. We already did most of the heavy lifting by defining \(\mathsf{R}\), \(\mathsf{W}\) and \(\mathfrak{r}\). It is now a simple exercise to compute \(\kappa_1\) and \(\kappa_2\); see the table below.

| \(\mathsf{W}\) | \(\mathsf{R}\) | \(\kappa_1\) | \(\kappa_2\) |
| --- | --- | --- | --- |
| \(\omega_{A,H}\) | \(A\) | \(\kappa_1^{\omega_{A,H}} = \frac{1}{2}\delta_{\omega_{A,H}} + \frac{1}{2}\delta_{\omega_{A,T}}\) | \(\kappa_2^{\omega_{A,H}} = \delta_{\omega_{A,H}}\) |
| \(\omega_{A,T}\) | \(A\) | \(\kappa_1^{\omega_{A,T}} = \frac{1}{2}\delta_{\omega_{A,H}} + \frac{1}{2}\delta_{\omega_{A,T}}\) | \(\kappa_2^{\omega_{A,T}} = \frac{1}{3} \delta_{\omega_{A,T}} + \frac{1}{3} \delta_{\omega_{B,T}} + \frac{1}{3} \delta_{\omega_{C,T}}\) |
| \(\omega_{B,H}\) | \(B\) | \(\kappa_1^{\omega_{B,H}} = \frac{1}{4}\delta_{\omega_{B,H}} + \frac{1}{4}\delta_{\omega_{B,T}} + \frac{1}{4}\delta_{\omega_{C,H}} + \frac{1}{4}\delta_{\omega_{C,T}}\) | \(\kappa_2^{\omega_{B,H}} = \delta_{\omega_{B,H}}\) |
| \(\omega_{B,T}\) | \(B\) | \(\kappa_1^{\omega_{B,T}} = \frac{1}{4}\delta_{\omega_{B,H}} + \frac{1}{4}\delta_{\omega_{B,T}} + \frac{1}{4}\delta_{\omega_{C,H}} + \frac{1}{4}\delta_{\omega_{C,T}}\) | \(\kappa_2^{\omega_{B,T}} = \frac{1}{3} \delta_{\omega_{A,T}} + \frac{1}{3} \delta_{\omega_{B,T}} + \frac{1}{3} \delta_{\omega_{C,T}}\) |
| \(\omega_{C,H}\) | \(C\) | \(\kappa_1^{\omega_{C,H}} = \frac{1}{4}\delta_{\omega_{B,H}} + \frac{1}{4}\delta_{\omega_{B,T}} + \frac{1}{4}\delta_{\omega_{C,H}} + \frac{1}{4}\delta_{\omega_{C,T}}\) | \(\kappa_2^{\omega_{C,H}} = \delta_{\omega_{C,H}}\) |
| \(\omega_{C,T}\) | \(C\) | \(\kappa_1^{\omega_{C,T}} = \frac{1}{4}\delta_{\omega_{B,H}} + \frac{1}{4}\delta_{\omega_{B,T}} + \frac{1}{4}\delta_{\omega_{C,H}} + \frac{1}{4}\delta_{\omega_{C,T}}\) | \(\kappa_2^{\omega_{C,T}} = \frac{1}{3} \delta_{\omega_{A,T}} + \frac{1}{3} \delta_{\omega_{B,T}} + \frac{1}{3} \delta_{\omega_{C,T}}\) |

We observe that the partition of \(\mathsf{W}\) by player 1 is \(\left\{ \left\{ \omega_{A,H}, \omega_{A,T}\right\}, \left\{ \omega_{B, H}, \omega_{C, H}, \omega_{B, T}, \omega_{C, T}\right\}\right\}\), and by player 2 is \(\left\{ \left\{ \omega_{A,H}\right\}, \left\{ \omega_{B, H}\right\}, \left\{ \omega_{C, H}\right\}, \left\{ \omega_{A,T},\omega_{B, T}, \omega_{C, T}\right\}\right\}\). Note that these partitions are exactly the ones used to generate the \(\sigma\)-algebras in equation (1), and it isn't difficult to convince yourself why this is the case.

Let us discuss two more examples.

Example 10: We have just one player who wants to guess the outcome of a coin toss. The problem is he has no idea about the probability \(p\) of the coin being heads, so he assumes a uniform distribution over possible values of \(p\). We can model this using belief spaces as follows. The set of states of the reality is \(\mathsf{R} = [0,1]\), the possible values of \(p\). The set of states of the world is also \(\mathsf{W} = [0,1]\), with \(\mathfrak{r}(\omega) = \omega\) for \(\omega \in \mathsf{W}\). Finally, \(\kappa^\omega_1 = U[0,1]\) for every \(\omega \in \mathsf{W}\), where \(U[0,1]\) denotes the uniform probability measure on \([0,1]\).

Example 11: There are two players. A point \((x,y)\) is uniformly chosen from the unit square \([0,1]^2\). The first coordinate is revealed to player 1 and the second coordinate is revealed to player 2.

The fact that the point \((x,y)\) is drawn from a uniform distribution is not known to either player. The fact that the first coordinate is revealed to player 1 is common knowledge, and so player 2 knows that player 1 knows the first coordinate, player 1 knows that player 2 knows that player 1 knows the first coordinate, and so on. The fact that the second coordinate is revealed to player 2 is known only to player 2, and so player 1 doesn't know if player 2 knows the second coordinate, and player 2 knows that player 1 doesn't know if player 2 knows the second coordinate. The facts that player 1 doesn't know the second coordinate and that player 2 doesn't know the first coordinate are common knowledge.

Player 1 believes that the point \((x,y)\) was drawn uniformly from the diagonal, and so if he is given \(x\) he thinks \(y=x\). Player 1 believes that player 2 believes that the point \((x,y)\) was drawn uniformly from the unit square. Player 1 believes that player 2 believes that player 1 believes that the point \((x,y)\) was drawn uniformly from the unit square, and so on. Player 2 believes that the point \((x,y)\) is drawn from a uniform distribution over the unit square. Player 2 believes that player 1 believes that the point \((x,y)\) is drawn from a uniform distribution over the unit square, and so on.

How do we model this? Although the setting above took many words to describe, the belief space formulation is almost trivial. The reality state space \(\mathsf{R}\) is simply the unit square \([0,1]^2\). The world state space \(\mathsf{W}\) is also just the unit square, with the added context that at each world state the players' beliefs are as described above. Nothing in this setting makes the players revise the form of their beliefs, and so no matter which coordinate they observe, they update in the same way. The mapping \(\mathfrak{r}(\omega) = \omega\) is the identity. The stochastic kernels encode the beliefs described above: \(\kappa_1^{(x,y)} = \delta_{(x,x)}\), since player 1, upon seeing \(x\), is certain the point is \((x,x)\); and \(\kappa_2^{(x,y)} = U[0,1] \otimes \delta_y\), since player 2, upon seeing \(y\), believes \(x\) is uniformly distributed. The complication will arise after defining a game with its strategy sets and utilities.

Belief Operators

These operators act on \(\mathcal W\) and tell us about world states where a player believes an event happens almost surely (a.s.).

Definition 12 (Belief Operator): For each player \({i \in [N]}\), let \(\mathtt{B}_i \colon \mathcal W \to \mathcal W\) be defined as
\[\begin{aligned} \mathtt{B}_i A := \left\{ \omega \in \mathsf{W} \,:\, \kappa_i(\omega,A) =1\right\}.\end{aligned}\]

Note that \(\mathtt{B}_i A = \kappa_i(\cdot, A)^{-1}(\left\{ 1\right\}) \in \mathcal W\) follows from the measurability of the map \(\omega \mapsto \kappa_i(\omega, A)\) (condition 2 of the definition of stochastic kernel here).
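For a finite belief space the belief operator is straightforward to compute: store each kernel \(\kappa_i\) as a row-stochastic matrix and check which rows put probability \(1\) on the event. A sketch using the kernels of Example 8 (the state ordering and helper names are my own):

```python
# Belief operator B_i on the finite belief space of Example 8. Each kernel
# is a row-stochastic matrix over W = (AH, BH, CH, AT, BT, CT), with rows
# and columns in that order and entries read off the table above.
import numpy as np

W = ["AH", "BH", "CH", "AT", "BT", "CT"]
K1 = np.array([                   # player 1's kernel
    [1/2, 0, 0, 1/2, 0, 0],       # AH
    [0, 1/4, 1/4, 0, 1/4, 1/4],   # BH
    [0, 1/4, 1/4, 0, 1/4, 1/4],   # CH
    [1/2, 0, 0, 1/2, 0, 0],       # AT
    [0, 1/4, 1/4, 0, 1/4, 1/4],   # BT
    [0, 1/4, 1/4, 0, 1/4, 1/4],   # CT
])
K2 = np.array([                   # player 2's kernel
    [1, 0, 0, 0, 0, 0],           # AH
    [0, 1, 0, 0, 0, 0],           # BH
    [0, 0, 1, 0, 0, 0],           # CH
    [0, 0, 0, 1/3, 1/3, 1/3],     # AT
    [0, 0, 0, 1/3, 1/3, 1/3],     # BT
    [0, 0, 0, 1/3, 1/3, 1/3],     # CT
])

def believe(K, A):
    """B_i A: the states where kernel K puts probability 1 on the event A
    (given as a set of state indices)."""
    mask = np.zeros(len(W))
    mask[list(A)] = 1.0
    return {w for w in range(len(W)) if np.isclose(K[w] @ mask, 1.0)}

# The event "the die shows B or C" is {BH, CH, BT, CT} = indices {1, 2, 4, 5}.
print({W[w] for w in believe(K1, {1, 2, 4, 5})})  # {'BH', 'CH', 'BT', 'CT'}
print({W[w] for w in believe(K2, {1, 2, 4, 5})})  # {'BH', 'CH'}
```

Player 1 believes ‘‘the die shows B or C’’ exactly at the states where the roll was not \(A\), whereas player 2 believes it only at the states where he was told the roll.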

The belief operators satisfy certain intuitive properties. The proofs involve simple definition chasing.

Proposition 13: For each player \({i \in [N]}\), the belief operation \(\mathtt{B}_i\) satisfies:

  1. \(\mathtt{B}_i \mathsf{W} = \mathsf{W}\) (in words, at each state of the world, player \(i\) believes that \(\mathsf{W}\) is the set of states of the world),

  2. \(\mathtt{B}_i A \cap \mathtt{B}_i C = \mathtt{B}_i (A \cap C)\) (in words, if player \(i\) believes event \(A\) happens a.s. and believes event \(C\) happens a.s., then he believes that event \(A \cap C\) happens a.s.),

  3. \(\mathtt{B}_i \mathtt{B}_i A = \mathtt{B}_i A\) (in words, if player \(i\) believes that he believes that event \(A\) happens a.s., then he believes that event \(A\) happens a.s.),

  4. \(\mathtt{B}_i(\mathtt{B}_i A)^\mathsf{c} = (\mathtt{B}_i A)^\mathsf{c}\) (in words, if player \(i\) believes that he does not believe that event \(A\) happens a.s., then he does not believe that event \(A\) happens a.s.),

  5. if \(A \subseteq C\), then \(\mathtt{B}_i A \subseteq \mathtt{B}_i C\) (in words, if player \(i\) believes that event \(A\) happens a.s., then he also believes that every event containing \(A\) happens a.s.).

Note that a player may believe an event to be happening a.s., but may be totally wrong since it may never happen. That’s why we use the word ‘‘belief’’ and not ‘‘knowledge’’. The belief operators allow us to formalize the concept of common beliefs.

Definition 14 (Common Belief): Let \(A \in \mathcal W\) be an event and \(\omega \in \mathsf{W}\) be some world state. The event \(A\) is common belief among the players at the state of the world \(\omega\) if at state \(\omega\) every player believes that \(A\) happens a.s., every player believes that every player believes that event \(A\) happens a.s., and so on.

In the language of belief operators, the event \(A\) is common belief among the players at the state of the world \(\omega\) if for every \(\ell \in \mathbb N\) and every \(i_1, i_2, \ldots, i_\ell \in [N]\), we have

\[\begin{aligned} \omega \in \mathtt{B}_{i_1} \mathtt{B}_{i_2} \cdots \mathtt{B}_{i_\ell} A.\end{aligned}\]
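In a finite belief space, common belief can be computed by iterating the operator \(E(F) := \bigcap_{i} \mathtt{B}_i F\): by property 2 of Proposition 13 the compositions of length \(\ell\) collapse into \(E^\ell(A)\), and by properties 3 and 5 the sets \(E^\ell(A)\) are decreasing, so the iteration reaches a fixed point. A sketch, reusing W, K1, K2 and believe from the earlier snippet (so it is not standalone):

```python
# The event "A is common belief", on the finite belief space of Example 8.
def common_belief_event(kernels, A):
    E = set.intersection(*(believe(K, A) for K in kernels))
    while True:
        nxt = set.intersection(*(believe(K, E) for K in kernels))
        if nxt == E:      # the decreasing sequence E^l(A) has stabilized
            return E
        E = nxt

# "The die shows B or C" is never common belief: player 1 never knows
# whether player 2 knows the outcome, and the iteration empties out.
print(common_belief_event([K1, K2], {1, 2, 4, 5}))  # set()
```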

Modeling Games with Incomplete Information

We already have most of the formal structure in place; we just need to augment the belief spaces with strategy spaces and utilities for each player to make them games. It would be more accurate to call this game setting games with possibly incomplete information, because this very general setting includes games of complete information as a special case.

Definition 15 (Game with incomplete information): A game with incomplete information is a tuple \(\left( N, (\mathsf{R}, \mathcal{R}), (\mathsf{W}, \mathcal W), \mathfrak{r}, \left( \kappa_i \right)_{i \in [N]}, \left( \mathsf{S}_i, \mathcal S_i \right)_{i \in [N]}, (\mathfrak{s}_i)_{i \in [N]}, (\mathfrak{u}_i)_{i \in [N]} \right)\) with the following definitions and conditions:

  1. \(\left( N, (\mathsf{R}, \mathcal{R}), (\mathsf{W}, \mathcal W), \mathfrak{r}, \left( \kappa_i \right)_{i \in [N]} \right)\) is a belief space as in Definition 9,

  2. for each player \({i \in [N]}\), \((\mathsf{S}_i, \mathcal S_i)\) is the measurable space of pure strategies of player \(i\),

  3. for each player \({i \in [N]}\), \(\mathfrak{s}_i\) associates with each state of reality \(r \in \mathsf{R}\) a non-empty set \(\mathfrak{s}_i^{r} \subseteq \mathsf{S}_i\) of the pure strategies available to player \(i\) at \(r\); we write \(\mathfrak{s}^{r} := \mathfrak{s}_1^{r} \times \cdots \times \mathfrak{s}_N^{r}\),

  4. for each player \({i \in [N]}\), \(\mathfrak{u}_i\) associates with each state of reality \(r \in \mathsf{R}\) a measurable utility function \(\mathfrak{u}_i^{r} \colon \mathfrak{s}^{r} \to \mathbb R\), and

  5. the maps \((\mathfrak{s}_i)_{i \in [N]}\) and \((\mathfrak{u}_i)_{i \in [N]}\) are assumed measurable in the appropriate sense, so that the payoffs defined below are well-defined.

The raison d'être for the last assumption in Definition 15 is to avoid measurability issues. This is the topic of measurable selection theorems. See the review papers (Wagner, 1977), (Wagner, 1980) or Chapter 18 of the book (Aliprantis and Border, 2006). Similar concerns arise in defining Markov decision processes; see the discussion here.

The players may choose a mixed strategy. We define it as follows.

Definition 16 (Mixed Strategy): The setting is that of Definition 15. A mixed strategy for player \(i\) is a measurable function \(\sigma_i \colon \mathsf{W} \to \mathscr{P}(\mathsf{S}_i)\), with the constraints that for all \(\omega, \omega' \in \mathsf{W}\),
\[\begin{aligned} \sigma_i(\omega) \in \mathscr{P}\!\left( \mathfrak{s}_i^{\mathfrak{r}(\omega)} \right) \text{ and } \omega \equiv_i \omega' \implies \sigma_i(\omega) = \sigma_i(\omega').\end{aligned}\]
We denote \(\sigma_i(\omega)\) with \(\sigma_i^\omega\) for notational convenience. We denote the space of all mixed strategies for player \(i\) with \(\Sigma_i\). Also define \(\Sigma := \Sigma_1 \times \cdots \times \Sigma_N\). Note that we have overloaded our notation; compare \(\Sigma\) in Definition 3.

In Definition 16, the first constraint is added because the player \(i\) can only choose from \(\mathfrak{s}_i^{\mathfrak{r}(\omega)}\), and the second constraint is added because if the player \(i\) cannot distinguish between \(\omega\) and \(\omega'\) then his distribution over available pure strategies should be the same.

Next, we need to extend the domain of utilities to mixed strategies.

Definition 17 (Payoff Function): For each player \({i \in [N]}\), a payoff function \(\gamma_i \colon \mathsf{W} \times \Sigma \to \mathbb R\) is defined as follows: for a world state \(\omega \in \mathsf{W}\) and a mixed strategy tuple \(\sigma = (\sigma_i)_{i \in [N]} \in \Sigma\),
\[\begin{aligned} \gamma_i(\omega, \sigma) = \int_{\mathfrak{s}^{\mathfrak{r}(\omega)}} \mathfrak{u}_i^{\mathfrak{r}(\omega)}(s) \, \mathrm{d}{\sigma^\omega(s)},\end{aligned}\]
where \(\sigma^\omega = \sigma_1^\omega \otimes \cdots \otimes \sigma_N^\omega\) is the product measure on \(\mathfrak{s}^{\mathfrak{r}(\omega)}\).

Bayesian Equilibrium

We are finally ready to define a notion of equilibrium!

Definition 18 (Bayesian Equilibrium): Under the setting of Definitions 15, 16 and 17, a mixed strategy tuple \(\sigma = (\sigma_i)_{i \in [N]} \in \Sigma\) is called a Bayesian equilibrium if for every player \({i \in [N]}\), every world state \(\omega \in \mathsf{W}\), and every mixed strategy \(\tau_i \in \Sigma_i\), we have
\[\begin{aligned} \gamma_i(\omega, \sigma) \ge \gamma_i(\omega, (\tau_i, \sigma_{-i})).\end{aligned}\]
Here \((\tau_i, \sigma_{-i})\) is the mixed strategy tuple that is the same as \(\sigma\), except that we have replaced \(\sigma_i\) with \(\tau_i\).

The setting considered in this section and the notion of equilibrium defined above is a significant generalization of the setting considered in the section on games with complete information.

There are no fully general results about the existence of Bayesian equilibria, but if all the sets \(\left( \mathsf{R}, \mathsf{W}, (\mathsf{S}_i)_{i \in [N]} \right)\) are finite, then a Bayesian equilibrium always exists. The proof is very similar to that of the existence of a Nash equilibrium.

Belief Hierarchies

Until now we haven’t precisely and succinctly characterized the beliefs of players about the true state of reality, their beliefs about the beliefs of other players about the state of reality, and so on. Let us do that now. We define this hierarchy inductively, from the viewpoint of a single player. At the first order, a player has beliefs (which are just probability measures) over the states of reality \(\mathsf{R}\). At the second order, a player has beliefs over the states of reality, and beliefs over the beliefs of the other players about the state of reality. At the third order, we have the beliefs of orders 1 and 2 just discussed, together with beliefs about the second-order beliefs of the other players. And so on for higher orders. More formally, we have the following definition.

Definition 19 (Belief Hierarchies): The space of belief hierarchies of order \(k\) of a set of \(N\) players on the set of states of reality \(\mathsf{R}\) is the space \(\mathsf{X}_k\) defined inductively as follows:
\[\begin{aligned} \mathsf{X}_1 = \mathscr{P}(\mathsf{R}) \text{ and } \mathsf{X}_k = \mathsf{X}_{k-1} \times \mathscr{P}\!\left( \mathsf{R} \times \mathsf{X}_{k-1}^{N-1} \right) \text{ for } k \ge 2.\end{aligned}\]
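For concreteness, with \(N = 2\) players the first two levels unroll to
\[\begin{aligned} \mathsf{X}_1 = \mathscr{P}(\mathsf{R}), \qquad \mathsf{X}_2 = \mathscr{P}(\mathsf{R}) \times \mathscr{P}\!\left( \mathsf{R} \times \mathscr{P}(\mathsf{R}) \right),\end{aligned}\]
so a second-order belief consists of a belief about the state of reality together with a joint belief about the state of reality and the other player's first-order belief.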

There is an extensive literature on this, but since the focus of this blog post is the study of equilibria in various game settings, we stop here; see Chapter 11 of the book (Maschler, Solan and Zamir, 2020) for a detailed treatment.

REFERENCES