In game theory, we are usually looking for one or more equilibria (ideally only one), which we regard as representing the likely outcome of a particular situation. The principal criteria which an equilibrium is expected to satisfy are the Nash equilibrium condition and 'subgame perfection'.

Dominant strategies

We can write the strategy of player *i* as s_{i}. By strategy we mean a particular move or policy, e.g. “produce low output” (collude), or “cooperate if and only if the other player did so the previous round” (trigger).

We can write the payoff to player 1, if player 1 plays s_{1} and player 2 plays s_{2}, as *u*_{1}(s_{1},s_{2}).

If *u*_{1}(s_{1}^{A},s_{2}) < *u*_{1}(s_{1}^{B},s_{2}) for all possible s_{2} and some s_{1}^{B} (i.e. whatever player 2 does, it is possible for player 1 to do better than s_{1}^{A}), then s_{1}^{A} is a *dominated* strategy.

If a strategy remains after iterative removal of all dominated strategies, it is a *rationalisable* strategy.

If a player has only one rationalisable strategy, it is his *dominant* strategy (e.g. “defect” in the Prisoner's Dilemma game).
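The iterative-removal procedure can be sketched in a few lines of Python. A minimal sketch: the payoff numbers are the standard Prisoner's Dilemma and are illustrative, not taken from the text above.

```python
# Sketch of iterated removal of strictly dominated strategies,
# using illustrative Prisoner's Dilemma payoffs.

def dominated(u, mine, theirs):
    """Strategies in `mine` strictly dominated by another pure strategy.

    u[(a, s)] is this player's payoff when he plays a and the
    opponent plays s."""
    return {a for a in mine
            if any(all(u[(a, s)] < u[(b, s)] for s in theirs)
                   for b in mine if b != a)}

# u1 from the row player's view, u2 from the column player's view
u1 = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
      ("defect", "cooperate"):    5, ("defect", "defect"):    1}
u2 = dict(u1)  # the game is symmetric

s1 = {"cooperate", "defect"}
s2 = {"cooperate", "defect"}
while True:
    d1, d2 = dominated(u1, s1, s2), dominated(u2, s2, s1)
    if not (d1 or d2):
        break
    s1, s2 = s1 - d1, s2 - d2

print(s1, s2)  # {'defect'} {'defect'}
```

Here “cooperate” is removed for both players on the first pass, leaving “defect” as the unique rationalisable (hence dominant) strategy.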

Nash Equilibrium

For *n* players, (s_{1}*,s_{2}*,…,s_{n}*) is a Nash equilibrium (NE) if and only if:

*u*_{i}(s_{1}*,s_{2}*,…,s_{i}*,…,s_{n}*) ≥ *u*_{i}(s_{1}*,s_{2}*,…,s_{i},…,s_{n}*) for all s_{i} and all *i*.

All NEs consist of rationalisable strategies, but not all combinations of rationalisable strategies are NEs.

All combinations of dominant strategies are NEs, but not all NEs consist of dominant strategies.
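The NE condition can be checked by brute force over pure strategies: a profile is a NE iff neither player gains from a unilateral deviation. A minimal sketch, again with illustrative Prisoner's Dilemma payoffs (row payoff, column payoff):

```python
# Brute-force search for pure-strategy Nash equilibria in a
# two-player game (illustrative Prisoner's Dilemma payoffs).

payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def pure_nash(payoffs, s1, s2):
    # (a, b) is a NE iff a is a best reply to b and b is a best reply to a
    return [(a, b) for a in s1 for b in s2
            if all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in s1)
            and all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in s2)]

print(pure_nash(payoffs, ["cooperate", "defect"], ["cooperate", "defect"]))
# → [('defect', 'defect')]
```

Consistent with the claims above, the unique NE here is the combination of dominant strategies.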

Mixed strategy equilibrium

A mixed strategy is a probability distribution over strategies: a given player plays each of his possible strategies with some probability. For example, firm A produces high output with 40% probability and low output with 60%, and firm B similarly mixes these strategies, but in the ratio 70:30. The game may be one-shot, so the point isn’t necessarily that the players alternate between strategies.

In this case the NE consists of optimal probability choices by each player *given* the probability choices of other players.

Once you allow for mixed strategies then every (finite) game has at least one NE.
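For a 2×2 game, the mixed-strategy NE can be found from the indifference condition: each player's mixing probability is chosen to make the *other* player indifferent between his two pure strategies. A minimal sketch, using Matching Pennies as the illustrative game (it has no pure-strategy NE, so mixing is essential):

```python
# Mixed-strategy NE of a 2x2 game via the indifference condition.
# u1[i][j]: row player's payoff; u2[i][j]: column player's payoff.
# The game here is Matching Pennies (illustrative).

u1 = [[1, -1], [-1, 1]]
u2 = [[-1, 1], [1, -1]]

# p = P(row plays strategy 0), chosen so the column player is
# indifferent: p*u2[0][0] + (1-p)*u2[1][0] == p*u2[0][1] + (1-p)*u2[1][1]
p = (u2[1][1] - u2[1][0]) / (u2[0][0] - u2[1][0] - u2[0][1] + u2[1][1])

# q = P(column plays strategy 0), chosen so the row player is
# indifferent: q*u1[0][0] + (1-q)*u1[0][1] == q*u1[1][0] + (1-q)*u1[1][1]
q = (u1[1][1] - u1[0][1]) / (u1[0][0] - u1[0][1] - u1[1][0] + u1[1][1])

print(p, q)  # 0.5 0.5 for Matching Pennies
```

Each player mixing 50:50 is exactly the "optimal probability choices *given* the probability choices of other players" described above.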

Non-cooperative game

In a cooperative game, the rules permit binding agreements prior to play. (Hence collusion would be possible in a one-shot cooperative game.) In practice, we are usually concerned with games in which this is not possible, i.e. a non-cooperative game.

Games of complete information

• Players’ payoffs as functions of other players' moves are common knowledge.

• Each player knows that other players are 'rational' (i.e. payoff-maximising), and knows that they know that *he* is rational.

Static and dynamic games

In a static (or ‘one-shot’) game, players move simultaneously and only once. In a dynamic game, players either move alternately, or more than once, or both. Bertrand and Cournot models of oligopoly competition are both static games, while the Stackelberg model (firm B moves after firm A) is a dynamic game.
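The static/dynamic distinction shows up in the outputs. A minimal sketch of Cournot versus Stackelberg, assuming a linear inverse demand P = a − b(q1 + q2) and constant marginal cost c (these functional forms and numbers are illustrative, not from the text):

```python
# Cournot (simultaneous) vs Stackelberg (sequential) outputs under
# assumed linear demand P = a - b*(q1 + q2) and marginal cost c.

a, b, c = 100.0, 1.0, 10.0

# Cournot: each firm best-responds to the other; symmetric NE output
q_cournot = (a - c) / (3 * b)

# Stackelberg: firm B's best response to q1 is q2 = (a - c - b*q1)/(2b);
# firm A maximises profit anticipating this, giving the leader output
q_leader = (a - c) / (2 * b)
q_follower = (a - c - b * q_leader) / (2 * b)

print(q_cournot, q_leader, q_follower)  # 30.0 45.0 22.5
```

Moving first lets the leader commit to a larger output than in the simultaneous game, and the follower rationally produces less.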

An equilibrium for a dynamic game must satisfy subgame perfection.

Normal and extensive forms

A game expressed in ‘normal form’ is in the form of a payoff matrix, as shown below for the Prisoner's Dilemma game.

A game expressed in ‘extensive form’ shows the ‘tree’ of the possible move paths depending on what each player does at each stage. A dynamic game is naturally shown in extensive form, since a payoff matrix cannot capture the order of moves.

Repeated games

When the same agents repeatedly play a given one-shot game (the “stage game”) in sequence, the result is called a supergame. A supergame can consist of either a finite or an infinite repetition of the stage game. Or we could have a game with a certain probability p of being repeated, i.e. a probability 1 – p of breakdown.

A *strategy* in this context can be contingent on what other players have done in previous moves.
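A contingent strategy of this kind can sustain collusion. A minimal sketch, assuming grim-trigger play of an infinitely repeated Prisoner's Dilemma with illustrative payoffs (collude → 3 per period, one-off deviation → 5, punishment → 1 forever after) and discount factor d (a continuation probability p plays the same role):

```python
# When does a grim-trigger strategy sustain collusion in an infinitely
# repeated Prisoner's Dilemma? Illustrative payoffs: collude = 3 per
# period, deviate once = 5, then punishment = 1 forever.

def collusion_sustainable(d, u_collude=3.0, u_deviate=5.0, u_punish=1.0):
    # present value of colluding forever vs deviating once then
    # being punished forever
    v_collude = u_collude / (1 - d)
    v_deviate = u_deviate + d * u_punish / (1 - d)
    return v_collude >= v_deviate

print(collusion_sustainable(0.4))  # False
print(collusion_sustainable(0.6))  # True
```

With these payoffs the threshold is d = 0.5: patient enough players collude, impatient ones defect.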

Subgame perfection

For a dynamic game, some Nash equilibria are not acceptable as solutions because one or more players will want to, and be able to, avoid those outcomes. The subgame perfection criterion demands that, at each stage of the game, the strategy followed is still optimal from that point on.

Non-credible threat

A 'non-credible threat' is a strategy that one player uses to try to manipulate the behaviour of another (usually via the second move in a sequential game). It forms part of a Nash equilibrium, but one that is not subgame perfect.

The strategy is one that the threatening player claims in some way (e.g. by signalling that he is capable of using it) but which is not credible: although the threatened player’s optimal response to the strategy would be to do what the threatening player wants, the threatened player knows that if he instead moves first in a different way, the threatening player will adopt another strategy, producing a different Nash equilibrium.

In a sense, there is no ‘credible threat’: the term ‘threat’ implies that player 1 *will* do something specifically designed to harm player 2 *if* player 2 doesn’t comply, but such a threat would never be carried out in a finite game with perfect information, because when the time came, carrying it out would not be optimal for player 1. Player 1 will always ‘accommodate’ when it comes to it.
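Backward induction makes this concrete. A minimal sketch of the textbook entry game (payoffs are illustrative): the entrant moves first, the incumbent then fights or accommodates. "Always fight" supports a Nash equilibrium in which the entrant stays out, but it is not subgame perfect.

```python
# Non-credible threat in an entry game, solved by backward induction.
# Payoffs are (entrant, incumbent) and illustrative.

payoffs = {
    ("stay out", "fight"):       (0, 10),
    ("stay out", "accommodate"): (0, 10),
    ("enter",    "fight"):       (-1, 2),
    ("enter",    "accommodate"): (3, 5),
}

# Backward induction: after entry, the incumbent picks its best reply
best_reply = max(["fight", "accommodate"],
                 key=lambda r: payoffs[("enter", r)][1])

# Anticipating that reply, the entrant picks its best first move
entrant_move = max(["stay out", "enter"],
                   key=lambda m: payoffs[(m, best_reply)][0])

print(entrant_move, best_reply)  # enter accommodate
```

Because fighting pays the incumbent 2 against accommodation's 5, the threat to fight would never be carried out, so the entrant enters and the incumbent accommodates, exactly as the argument above says.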

[Next week: applying game theory to Northern Rock & the Bank of England]
