Economic Fundamentals and Bank Runs
Huberto M. Ennis
Recently there has been a renewed discussion in the literature
about the determinants of bank runs. Two alternative theoretical
explanations are usually provided. According to the first theory, bank
runs are exclusively driven by changes in economic fundamentals, such as
a deterioration in the return on investment. The second theory views
bank runs as a consequence of the existence of multiple equilibria. In
the latter case, which equilibrium obtains depends on the realization of
an extrinsic random variable, often called "sunspots."
Extrinsic uncertainty is uncertainty in economic outcomes that does not
originate directly in changes of economic fundamentals (see Shell and
Smith [1992]). The word "sunspots" is intended to convey the
idea that these random variables do not directly influence the economic
fundamentals of the economy. (1) However, sunspots can still influence
economic outcomes to the extent that people believe they do. In this
sense, sunspots can be viewed as coordination devices for agents'
expectations in decentralized market economies. This is the view
adopted in the bank-run literature and in this paper.
Some scholars have recently argued that the
multiple-equilibria-plus-sunspots explanation of bank runs is
inconsistent with available evidence showing that bank runs have
historically been strongly correlated with deteriorating economic
fundamentals (see Gorton [1988]; Allen and Gale [1998]; and Schumacher
[2000]). In this paper I will argue that such a conclusion is not well
justified. More specifically, I will show that the multiple-equilibria
model of bank runs, combined with a reasonable (and well-accepted)
equilibrium selection concept, can provide theoretical justification for
the correlation observed in the data. In other words, the presence of an
empirical correlation between bank runs and poor economic fundamentals
cannot be used to discriminate between the two competing theories.
Furthermore, the equilibrium selection story presented here strongly
accords with the long-standing belief that some bank runs can be
characterized as events resulting from exogenous waves of pessimism and
that those mood shifts are more likely when economic conditions are bad
or deteriorating.
The empirical evidence that links bank runs to economic conditions
has been well documented. Gorton (1988) discusses what he calls the
"recession hypothesis," according to which bank panics are
closely associated with the business cycle. In a related paper, Miron
(1986) presents evidence in favor of the "seasonal
hypothesis," which is that bank runs tend to be correlated with
seasonal fluctuations in the liquidity needs of depositors. Saunders and
Wilson (1996) and Schumacher (2000) discuss evidence on the selectivity of depositors: not all banks are equally likely to experience a run
during a panic, and in particular a questionable solvency position prior
to the run tends to increase the probability of depositors running on a
particular bank. (2)
Gorton (1988) studies bank panics during the National Banking Era
(1865-1914). Using data for national banks, Gorton investigates whether
the model and variables that explain the behavior of depositors during
no-panic situations also explain their behavior during panics. In this
sense, panics would not be purely random events; rather, they would be
directly correlated with the arrival of new information that determines
depositors' desire to withdraw funds from the bank. Gorton finds no
evidence for something special happening during panics that cannot be
explained by the model that describes the behavior of depositors in
no-panic situations. Instead, the evidence seems to suggest that panic
events are just the consequence of extreme realizations of the
circumstances that explain behavior during normal times. It is important
to note, however, that Gorton finds examples in which shocks of equal
magnitude to those usually associated with runs did not cause a panic
(for example, the November 1887 spike in the liabilities of failed
businesses did not induce a panic, while the smaller increase in June
1884 did). Finally, in Tables 1 and 2 we can see that there is some
disagreement as to what constitutes a panic. For example, Gorton does
not consider the episodes of May 1901 and March 1903 as panics.
Furthermore, and more germane to this paper, Tables 1 and 2 suggest that
there were several bank panics in periods with no economic recession. Of
course, seasonality may be part of the answer in those cases (as
discussed by Miron [1986]). (3)
These are interesting findings, but they are not enough to rule out
the possibility that, in some cases, banking panics are associated with
the existence of multiple equilibrium outcomes (that is, situations
where both the panic and the no-panic outcomes are possible). These
stylized facts refute only the simplest way of modeling multiple
equilibria and even then only under fairly specific conditions. Showing
that reasonable theories of multiple-equilibria bank runs are not
refuted by the available evidence is important since policy
prescriptions depend on the assessment of the economic conditions that
generate those bank runs. It would be helpful for policymakers to be
able to conclude that multiple-equilibria bank runs are not the norm.
However, as I will show here, the evidence discussed above does not
allow us to reach that conclusion.
The paper is organized as follows. In the next section I discuss a
simple model of bank runs that is now standard in the economic
literature. I then study the conditions under which multiple equilibria
arise, and I review different theories of how an equilibrium is selected
in those cases. I show that some of the more appealing equilibrium
selection mechanisms are indeed compatible with the available evidence.
Finally, in the conclusion I discuss some policy implications.
1. MODELING BANK RUNS
The Environment
The environment is similar to that in Diamond and Dybvig (1983),
except that the return on investment is stochastic. There are two time
periods, t = 1, 2, and a large number of ex ante identical agents (a
continuum of agents with unit mass). Each agent is endowed with a
consumption good at the beginning of date 1 and none after that. Agents
are uncertain about their preferences: some will be impatient and will
need to consume at the end of period 1; the rest will be patient and can
wait to consume in period 2. At the beginning of period 1 agents do not
know whether they will be patient or impatient, but they know that the
probability of being impatient at the end of the period is u.
Preferences are represented by the following utility function:
$$\nu(c_1, c_2) = \begin{cases} \frac{1}{\gamma}\,(c_1)^{\gamma} & \text{with probability } u \\ \frac{1}{\gamma}\,(c_1 + c_2)^{\gamma} & \text{with probability } 1 - u, \end{cases}$$
where c1 is consumption at the end of period 1, c2 is consumption
at period 2, and [gamma] < 1. The realization of preference types is
independent across agents, implying that u will also be the fraction of
the population that becomes impatient. Agents' types are not
observable and hence patient agents can always pretend to be impatient
if they wish to do so (impatient agents could pretend to be patient, but
this is never the case for the contracts studied below).
There are two saving technologies available: storage and
investment. One unit of consumption placed in storage yields one unit of
consumption at any future time. For the investment technology, one unit
of consumption placed in investment at the beginning of period 1 yields
R units in period 2. The return on investment R is a random variable
taking values greater than unity and with a probability density function given by f(R). Note that the expected value of R is necessarily greater
than one and hence investment is a better technology than storage to
save consumption for the second period (that is, for funds that are
needed with certainty in the second period). If investment is liquidated
early (at the end of period 1), then it yields x < 1 units of
consumption per unit invested. Hence, investment is an illiquid asset
that yields a higher return than storage if held to maturity, but a
lower return if liquidated early.
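As a concrete illustration of these primitives, the following Python
sketch encodes the preferences and the two saving technologies. The
parameter values are hypothetical and chosen purely for illustration;
the model itself does not pin them down.

```python
# Model primitives; all numerical values are illustrative assumptions.
u = 0.5        # probability of becoming impatient
gamma = 0.5    # curvature of utility, gamma < 1
x = 0.5        # early-liquidation return on investment, x < 1

def util(c):
    """Period utility (1/gamma) * c**gamma."""
    return c**gamma / gamma

def storage_payoff(amount):
    """Storage: one unit stored yields one unit in either period."""
    return amount

def investment_payoff(amount, R, liquidated_early=False):
    """Investment: yields R > 1 per unit at date 2, or x < 1 if liquidated at date 1."""
    return amount * (x if liquidated_early else R)
```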
Timing
Since agents do not know their preferences until after the
opportunity to invest has passed, they pool their endowments in banking
coalitions. These banks then allocate some resources into the illiquid
investment and provide insurance to their members in case they happen to
become impatient at the end of period 1.
Competition in the banking industry drives the banks to offer the
best possible available contract to consumers. I restrict the type of
contracts that banks can offer to simple deposit contracts that are
subject to a sequential service constraint (Wallace [1988]). Under this
type of deposit contract, an agent gets the right to either a fixed
payment at the end of period 1 (as long as the bank has funds) or a
contingent payment in period 2. The sequential service constraint
prevents the bank from adjusting the payment to early withdrawers
according to the number of agents that decide to withdraw early. The
bank must pay a fixed amount until it runs out of funds. This kind of
contract is in the tradition of Diamond and Dybvig (1983) and Cooper and
Ross (1998). I use it here mainly because of its simplicity and
potential descriptive content. (4)
The timing of events is as follows. At the beginning of period 1,
the bank, without knowing the value of R, chooses a deposit contract and
a portfolio of assets (investment is possible only at this point). This
choice can be summarized by the pair (a, [eta]), where a is the payment
that the bank will give to depositors if they decide to withdraw early
and [eta] is the proportion of total deposits that the bank decides to keep in
storage (with (1 - [eta]) being the proportion that the bank puts in the
illiquid investment technology). Also at this time, agents decide
whether or not to deposit their funds in the bank. At the end of period
1, the uncertainty about preferences and technology is resolved: agents
find out whether or not they are impatient and the value of R is
revealed. (5) At this time, then, agents decide whether or not to go to
the bank to withdraw their deposits. Impatient agents have no choice but
to withdraw early. Patient agents, however, could choose to wait until
period 2, which they will do if they are not better off imitating the
impatient agents. Whether a patient agent would be better off
withdrawing his or her deposits early depends, in general, on what all
the other patient agents are doing. Hence, patient agents play a
strategic game at the end of period 1. Following Peck and Shell (2003),
I shall call it the "post-deposit game." In period 2, the
return on the illiquid technology is realized and those agents that did
not withdraw their deposits early (at the end of period 1) go to the
bank and share the total remaining resources equally.
2. THE POST-DEPOSIT GAME
The source of multiplicity of equilibria in the model lies in the
post-deposit game played by patient agents. The expected outcome of this
game will determine the bank's investment decisions and the
willingness of agents to make deposits in the bank. The details of those
problems are presented in Section 4. What is important here is to
understand that solving those problems requires knowing what could
happen in the post-deposit game. For this reason, I turn next to the
study of this game.
I concentrate only on symmetric pure strategy equilibria. (6) At
the end of period 1, the patient agents are faced with the decision of
whether to withdraw their deposits early or leave them in the bank until
period 2. Let r denote the decision to go to the bank to withdraw (i.e.,
to run) and n the decision to wait until the next period (i.e., not to
run). Let us define as [P.sub.ij] (R; a, [eta]) the payoff to a patient
agent following action i (i = r, n) given that all other patient agents
are following action j (j = r, n). We need only to consider those
payoffs because we are looking at symmetric equilibria, where all
patient agents act in the same manner. The normal form of the
post-deposit game played by patient agents is given by the following
matrix:
$$\begin{array}{llcc}
 & & \multicolumn{2}{c}{\text{Other Patient Agents}} \\
 & & \text{Run} & \text{No Run} \\
\text{Patient} & \text{Run} & P_{rr}(R; a, \eta) & P_{rn}(R; a, \eta) \\
\text{Agent} & \text{No Run} & P_{nr}(R; a, \eta) & P_{nn}(R; a, \eta)
\end{array}$$
Note that the payoff [P.sub.ij] (R; a, [eta]) depends on the return
on investment R and on the deposit contract chosen by the bank (a,
[eta]). (Note also that deviations by a single player do not change the
payoff to the rest of the players because we are assuming that there is
a large number of players.)
It is easy to state conditions under which this game has multiple
equilibria. In particular, if [P.sub.rr](R; a, [eta]) > [P.sub.nr](R;
a, [eta]) and [P.sub.nn](R; a, [eta]) > [P.sub.rn](R; a, [eta]), then
running to the bank at the end of period 1 and waiting until period 2 to
withdraw are both equilibria of the game. To see this, note that when
[P.sub.rr](R; a, [eta]) > [P.sub.nr](R; a, [eta]) holds, if the
patient agent thinks that all other patient agents will run to the bank,
then it is in her best interest to run as well. Therefore, if all
patient agents believe that a run will occur, the run does occur and
running is a Nash equilibrium of the game. Likewise, when [P.sub.nn](R;
a, [eta]) > [P.sub.rn](R; a, [eta]) holds, if the patient agent
thinks that no other patient agent will run to the bank, then it is in
her best interest not to run. Therefore, if all patient agents believe
that there will be no run, there is indeed no run and not running is a
Nash equilibrium of the game. In equilibrium, then, all players play
the same strategy, and I will denote each equilibrium by the strategy
being played in it. Thus, I call the run equilibrium (if it exists)
"equilibrium r," and the no-run equilibrium "equilibrium
n."
Another important characteristic of this post-deposit game is that
the multiple equilibria are usually Pareto-ranked. (7) One equilibrium
is better than another equilibrium in the Pareto sense if all players in
the former receive a payoff at least as high as in the latter and one or
more players receive a strictly higher payoff. In the game studied here,
if [P.sub.nn] (R; a, [eta]) > [P.sub.rr] (R; a, [eta]), then the
no-run equilibrium n is Pareto-preferred to the run equilibrium r.
Given the possibility of multiple equilibria, the natural next step
is to ask, how does one of the equilibria get selected? I will discuss
the answer to this question in the next section.
Before going into the equilibrium selection issue, it is worth
noting that we can further characterize the payoff matrix of the
post-deposit game. Studying these payoffs will give us a better idea of
the conditions that determine the existence of multiple equilibria in
the game.
Since the bank chooses the contract before observing the return R,
the values of [eta] and a depend only on the probability distribution of
R and not on the particular realizations of R. The bank will never
choose a contract such that ua > [eta] holds. With such a contract, the
bank would be certain to have to liquidate some of the investment early
in order to pay depositors (even if no patient agent runs). Since early
liquidation is costly, this contract is never optimal. I will study the
problem of the bank later, but for now let us assume that the
distribution of R is such that the bank chooses a contract (a, [eta])
that satisfies [eta] + x (1 - [eta]) < a. This inequality implies
that if every agent goes to the bank early, then the bank would run out
of resources before being able to pay the promised amount a to each
withdrawer. Furthermore, if the inequality does not hold, then there
would be no runs in equilibrium. These two inequalities allow us to
determine the value of waiting when there is a run, [P.sub.nr](R; a,
[eta]), and the value of running when there is no run, [P.sub.rn](R; a,
[eta]). First, we have that [P.sub.nr](R; a, [eta]) = 0 because if
(almost) every agent goes to the bank to withdraw early, then the bank
will run out of funds and no payments will be made in the second period.
Second, we have that [P.sub.rn](R; a, [eta]) = [P.sub.rn](a) =
[a.sup.[gamma]]/[gamma] because when only impatient agents withdraw
early, total withdrawals are equal to ua and the bank has access to
enough liquid funds, [eta] + x(1 - [eta]), to cover that amount.
Let us now define $\hat{u} \equiv [\eta + x(1 - \eta)]/a < 1$
as the probability of being paid when every agent goes to the bank
early. This formula is a direct consequence of assuming that agents take
random positions in the line formed at the bank's window and that
there is a sequential service constraint. Thus, we have that
$P_{rr}(R; a, \eta) = P_{rr}(a, \eta) = \hat{u}\, a^{\gamma}/\gamma$.
It is important to note that [P.sub.nr],
[P.sub.rn], and [P.sub.rr] are not functions of the particular
realization of R. The only payoff that is a direct function of the
realization of R is that for late withdrawals when there is no run, that
is,
$$P_{nn}(R; a, \eta) = \frac{1}{\gamma} \left( \frac{R(1 - \eta) + \eta - ua}{1 - u} \right)^{\gamma}.$$
Note that [P.sub.nn](R; a, [eta]) is a continuous, increasing, and
unbounded function of R. Hence, there exists a threshold value [R.sup.*]
such that if R > [R.sup.*], we have that [P.sub.nn](R; a, [eta]) >
[P.sub.rn](a) = [a.sup.[gamma]]/[gamma] and the post-deposit game is a
multiple-equilibria coordination game. If R < [R.sup.*], the
post-deposit game has a unique equilibrium in which all agents withdraw
their deposits at the end of period 1. In summary, the payoff matrix for
the post-deposit game is:
$$\begin{array}{llcc}
 & & \multicolumn{2}{c}{\text{Other Patient Agents}} \\
 & & \text{Run} & \text{No Run} \\
\text{Patient} & \text{Run} & \hat{u}\,\frac{a^{\gamma}}{\gamma} & \frac{a^{\gamma}}{\gamma} \\
\text{Agent} & \text{No Run} & 0 & \frac{1}{\gamma}\left(\frac{R(1-\eta)+\eta-ua}{1-u}\right)^{\gamma}
\end{array}$$
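To make the payoff matrix concrete, the following sketch continues the
one above (reusing u, gamma, x, and util) and computes the four payoffs
for a hypothetical contract (a, [eta]) satisfying [eta] + x(1 - [eta]) <
a and ua [less than or equal to] [eta]. Given the expressions in the
text, the multiple-equilibria threshold solves $P_{nn}(R^*; a, \eta) =
P_{rn}(a)$, which works out to $R^* = (a - \eta)/(1 - \eta)$.

```python
# Post-deposit game payoffs, following the expressions in the text.
# The contract below is a hypothetical example, not an optimal choice.
a, eta = 1.05, 0.55

def P_rr(a, eta):
    """Run when everyone runs: paid a with probability u_hat, nothing otherwise."""
    u_hat = (eta + x * (1.0 - eta)) / a
    return u_hat * util(a)

def P_rn(a):
    """Run when nobody else runs: the promised early payment a."""
    return util(a)

def P_nr(R, a, eta):
    """Wait when everyone else runs: the bank is emptied, so the payoff is zero."""
    return 0.0

def P_nn(R, a, eta):
    """Wait when nobody runs: share of matured investment plus leftover storage."""
    return util((R * (1.0 - eta) + eta - u * a) / (1.0 - u))

# Threshold R*: P_nn(R*; a, eta) = P_rn(a)  =>  R* = (a - eta) / (1 - eta).
R_star = (a - eta) / (1.0 - eta)

for R in (R_star - 0.05, R_star + 0.3):
    multiple = P_rr(a, eta) > P_nr(R, a, eta) and P_nn(R, a, eta) > P_rn(a)
    print(f"R = {R:.3f}: do the run and no-run equilibria coexist? {multiple}")
```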
3. EQUILIBRIUM SELECTION IN THE POST-DEPOSIT GAME
There is an extensive literature on equilibrium selection in games.
This literature has concentrated some attention on 2 x 2 games with
multiple equilibria. The post-deposit game of the previous section can
be thought of as just an example of a 2 x 2 symmetric game with the
potential for multiple equilibria (i.e., a 2 x 2 symmetric coordination
game). (8) In this section, I will review some of the basic ideas from
this literature and discuss how they apply to the bank-run problem at
hand.
It is useful at this point to introduce the concept of equilibrium
selection mechanism (ESM). An ESM is a probability distribution that
assigns, to each equilibrium of the game, a probability indicating how
likely it is to be the result of play. For the post-deposit game under
consideration, an ESM is a function that for each possible triplet (R,
a, [eta]) assigns a probability [pi] to the run equilibrium (r) and a
probability (1 - [pi]) to the no-run equilibrium (n). These
probabilities must be feasible in the sense that, for given values (R,
a, [eta]), if the run equilibrium does not exist, then [pi] = 0, and if
a run is the only equilibrium, then [pi] = 1. It is important to note
that there is a degree of coordination being assumed from the outset:
agents know that the only possible outcomes are those where all the rest
of the agents play in the same manner (and this coordination is common
knowledge). The ESM provides some structure to the coordination problem
but does not explain why and how coordination arises. In this sense,
the concept of an ESM can be thought of as a generalization of the
traditional sunspot approach: there is still in place an exogenous
coordination device on which all agents base their actions. The
innovation is that the general ESM allows for the probability of each
equilibrium to depend on exogenous and endogenous variables in the
model.
The next natural question is, where does the function [pi](R, a,
[eta]) come from? In the traditional sunspot approach, the function
[pi] is a constant between zero and one when feasible (i.e., when both
equilibria exist). Another commonly used criterion for equilibrium
selection is to assume that the best equilibrium (in the Pareto sense)
will be selected. In this case, the ESM is such that the probability
[pi] is equal to zero as long as the no-run equilibrium exists and
switches to unity when only the run equilibrium exists. Yet there are
other possible forms that the function [pi] may take and that can be
reasonably justified. I review some of these forms next.
Let us start by defining the risk factor of equilibrium j, for j
[member of] {r, n} as the smallest probability p such that if a player
believes that with probability strictly greater than p all the other
players are going to play action j, then action j is the unique optimal
action to take (see, for example, Young [1998]). Hence, the risk factor
of the run equilibrium (r) is given by the solution to the following
equation: (9)
$$p_r P_{rr} + (1 - p_r) P_{rn} = p_r P_{nr} + (1 - p_r) P_{nn}.$$
Therefore,
$$p_r = \frac{P_{nn} - P_{rn}}{(P_{rr} - P_{nr}) + (P_{nn} - P_{rn})}$$
is the risk factor of the run equilibrium. When both equilibria
exist (run and no-run), the only payoff that depends on R is [P.sub.nn], and this
payoff is increasing in R. Hence, [p.sub.r] is an increasing function of
R. This result is rather intuitive. It says that the higher the return
on investment R, the higher the belief probability of a run p must be in
order to induce a patient agent to run on the bank.
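Continuing the numerical sketch above (same hypothetical contract and
payoff functions), the risk factor of the run equilibrium can be
computed directly from this formula; a few evaluations confirm that it
rises with R.

```python
def risk_factor_run(R, a, eta):
    """p_r = (P_nn - P_rn) / ((P_rr - P_nr) + (P_nn - P_rn)), using the payoffs above."""
    gain_from_waiting = P_nn(R, a, eta) - P_rn(a)
    gain_from_running = P_rr(a, eta) - P_nr(R, a, eta)
    return gain_from_waiting / (gain_from_running + gain_from_waiting)

for R in (1.2, 1.5, 2.0):
    print(f"R = {R:.1f}: risk factor of the run equilibrium p_r = {risk_factor_run(R, a, eta):.3f}")
```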
An equilibrium j is p-dominant if the equilibrium action j is the
unique best response to any belief of the player that puts probability
at least p [member of] [0, 1] on the other players playing action j (see
Morris, Rob, and Shin [1995]). Hence, the run equilibrium is
[p.sub.r]-dominant.
If the risk factor of the run equilibrium [p.sub.r] is less than or equal
to one-half, then the run equilibrium is risk dominant (Harsanyi and
Selten [1988]). Risk dominance has been used as a criterion for
equilibrium selection: the risk-dominant equilibrium will be the one
selected and played. This criterion has an appealing interpretation. If
each player is uncertain about the action of the other players, it is
plausible that he or she would assign equal probability to each of the
possible outcomes (a flat or diffuse prior). If the risk factor of
equilibrium j is less than one-half, that is, if equilibrium j is risk
dominant, and if players have flat priors about the actions of the other
players, then equilibrium j will be the one played. In the post-deposit
game, when each player assigns equal odds to all of his or her opponents
playing either action r or n, then the players will choose to play the
action of the risk-dominant equilibrium. In terms of the definition of
ESM, the risk dominance criterion assigns probability one to the
risk-dominant equilibrium.
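In the ESM notation introduced above, the risk dominance criterion can
be written as a simple step function of the risk factor. The sketch
below continues the previous one; the cutoff at one-half follows the
definition in the text, while the contract and the evaluated values of
R remain illustrative.

```python
def esm_risk_dominance(R, a, eta):
    """Probability of a run: one when only the run equilibrium exists or when the
    run equilibrium is risk dominant (p_r <= 1/2), zero otherwise."""
    if R < (a - eta) / (1.0 - eta):          # below R*, a run is the only equilibrium
        return 1.0
    return 1.0 if risk_factor_run(R, a, eta) <= 0.5 else 0.0

for R in (1.2, 2.0, 4.0):
    print(f"R = {R:.1f}: probability of a run = {esm_risk_dominance(R, a, eta):.0f}")
```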
Another way of motivating an equilibrium selection rule in games
with multiple equilibria is to study learning dynamics under repeated
iterations of the static (stage) game. See, for example, Kandori,
Mailath, and Rob (1993); Young (1998); and Matsui and Matsuyama (1995).
These papers concentrate on games with two players and assume that there
are frictions limiting the ability of agents to adjust their strategies.
Kandori, Mailath, and Rob also assume bounded rationality on the part of
the agents playing the dynamic game (in the form of myopic behavior and
some propensity to make mistakes). It is interesting to note that the
learning dynamics under these assumptions tend to select (as the
frictions or the probability of mistakes vanish) the risk-dominant
equilibrium as the one most likely to be played. Temzelides (1997)
extends this work and applies it to the bank-run model.
Ennis and Keister (2003a) study a learning model that induces a
probability distribution over the possible equilibria of a 2 x 2
macroeconomic coordination game. We show that the probability of
equilibrium j induced by this learning process is strictly decreasing in
the risk factor of equilibrium j and can take values strictly lower than
one even when equilibrium j is risk dominant. In terms of the previous
ESM terminology, we have that the function [pi] is a decreasing function
of [p.sub.r] and may take values strictly between zero and one. Since [p.sub.r] is an
increasing function of R (the fundamentals), we have that the
probability of a run [pi] is a decreasing function of R. That is, the
better the fundamentals (R), the less likely is a bank-run event. In
Ennis and Keister (2003b) we apply these ideas to study the effect of
bank runs on economic growth.
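The learning model of Ennis and Keister (2003a) is not reproduced here.
The sketch below, continuing the earlier snippets, only illustrates the
qualitative property just described, using an arbitrary decreasing
function of the risk factor; the functional form and the exponent are
assumptions made purely for illustration.

```python
def esm_learning_style(R, a, eta, theta=3.0):
    """Illustrative ESM: the run probability is strictly between zero and one when
    both equilibria exist and decreases with the risk factor p_r (hence with R).
    The exponent theta is an arbitrary illustrative parameter."""
    if R < (a - eta) / (1.0 - eta):          # below R*, a run is the only equilibrium
        return 1.0
    return (1.0 - risk_factor_run(R, a, eta)) ** theta

for R in (1.2, 1.5, 2.0):
    print(f"R = {R:.1f}: probability of a run ~ {esm_learning_style(R, a, eta):.3f}")
```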
Let us now go back to the case of equilibrium selection based on
the traditional sunspot approach. Assume that the return on investment R
takes values only in the interval (R*, [infinity]), where R* is the
threshold such that for values of R greater than R* there are multiple
equilibria of the post-deposit game. In other words, assume that the
contract is such that the no-run equilibrium exists for every possible
value of R. Assume also that a binomial sunspot random variable
determines which equilibrium is selected. Because both equilibria exist
for every value of R, the probability of a bank run is always given by
the constant probability associated with the sunspot realization that
coordinates agents to "run" to the bank. This is the sense in
which the previous literature on bank runs has dismissed the sunspot
explanation for not conforming with the observed correlation of bank
runs with economic fundamentals.
However, note that if R can be below R* with positive probability,
then for those realizations, regardless of the sunspot variable, the
probability of a run will be equal to unity. In such a case, even though
sunspots still play an important role in coordinating the agents when
there are multiple equilibria, the probability of bank runs will be
higher for lower values of R, and indeed the probability of observing a
bank run will be the highest (equal to one) when the fundamentals
deteriorate sufficiently (that is, when R < R*). In this sense, even
the traditional sunspot approach can account for some of the correlation
of bank runs with economic fundamentals. Economic fundamentals determine
whether multiple equilibria exist, and then probabilities have to adjust
to reflect this fact. (10)
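The point can be checked numerically. The simulation below continues
the earlier sketch (same hypothetical contract) and adds two
assumptions of its own: a uniform distribution for R and an arbitrary
sunspot probability q. Even with a constant q, the average value of R
conditional on a run turns out to be lower than conditional on no run,
because runs are certain whenever R falls below R*.

```python
import numpy as np

# Traditional sunspot ESM: run with probability q when both equilibria exist,
# with probability one when R < R*. The distribution of R and the value of q
# are assumptions made only for this illustration.
rng = np.random.default_rng(0)
q = 0.10
R_draws = rng.uniform(1.05, 2.0, 100_000)
R_star = (a - eta) / (1.0 - eta)

run_prob = np.where(R_draws < R_star, 1.0, q)
run = rng.uniform(size=R_draws.size) < run_prob
print(f"mean R conditional on a run:  {R_draws[run].mean():.3f}")
print(f"mean R conditional on no run: {R_draws[~run].mean():.3f}")
```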
Furthermore, the traditional sunspot approach seems too simplistic for this environment, and the risk-dominance-based selection mechanism
appears to be a reasonable extension. We can think that the risk
dominance ESM is the case where the particular sunspot variable that
coordinates patient agents to run to the bank is correlated, in a
specific way, with the stochastic variable R determining fundamentals.
Risk dominance provides discipline and intuition to this correlation.
In particular, the risk dominance criterion divides the support of
the distribution of R into two sets: the set where R < $\bar{R}$, in which
the run equilibrium is risk dominant, and the set where R > $\bar{R}$, in
which the no-run equilibrium is risk dominant (where $\bar{R}$ is the value
of R at which the risk factor [p.sub.r] equals one-half). We can think that there
is an associated sunspot random variable s, perfectly correlated with R,
such that whenever R takes values in the interval [1, $\bar{R}$], the variable s
takes the value r, and whenever R takes values in the interval ($\bar{R}$,
[infinity]), the sunspot variable s equals n. If agents associate values
of s = r with a run situation and values of s = n with a no-run
situation, the equilibrium selection process is still driven by sunspots
(the variable s), but it generates a correlation of bank runs with the
behavior of fundamentals. It is worth noting that for most values of R,
both equilibria still exist, even though one of them is risk dominant.
What determines which equilibrium will be played is a matter of how
agents get coordinated. Coordination is driven by the sunspot variable
s. Risk dominance can be thought of as the justification for why the
particular sunspot random variable s has been selected as a coordination
device over all possible variables that may be available. Note that
there is a higher level of coordination among agents in the choice of
the relevant sunspot variable. This interpretation of sunspots is in
fact associated with another argument that has been used to explain the
appearance of such coordination devices: sunspots can be viewed as the
limiting case of situations in which agents are overreacting to some
small movement in economic fundamentals. Manuelli and Peck (1992)
formalize this argument.
Finally, it should be clear at this point that the more general ESM
approach (Ennis and Keister [2003a]), in which the probability of a bank
run [pi] is a decreasing function of R, is also consistent with both the
multiplicity of equilibria and the correlation of bank runs with
economic fundamentals. In fact, with this approach the probability [pi]
can be strictly between zero and unity and at the same time be dependent
on R. This feature seems very appealing, since the historical
correlation was never perfect: sometimes bank runs did not occur even
though economic fundamentals were as bad as or worse than in periods
where a bank run did occur (see Gorton [1988]).
4. THE BANK'S PROBLEM
In Section 2 we assumed that agents would be willing to deposit
their funds in the bank and that the bank would choose a contract with
some specific properties. This section provides the justification for
those assumptions.
Given that the banking system may be subject to runs, agents might
choose not to participate in the banking system. (11) In that case,
their payoff would be given by the following "autarky" problem:
$$V_A \equiv \max_{\eta} \int \left[ u\, \frac{(\eta + x(1 - \eta))^{\gamma}}{\gamma} + (1 - u)\, \frac{(\eta + R(1 - \eta))^{\gamma}}{\gamma} \right] f(R)\, dR,$$
subject to $0 \le \eta \le 1$. At the beginning of period 1, the agent decides how to
split the endowment between storage ([eta]) and investment (1 - [eta]).
At the end of period 1, the agent finds out whether she is patient or
impatient. If she is impatient, then she liquidates the investment and
consumes (funds are useless for her in the second period). If she is
patient, then she stores the liquid funds and consumes in the second
period both the liquid funds and the return on investment (recall that
we are assuming that R > 1 > x).
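A numerical version of the autarky problem, continuing the sketch
above, is given below. The return distribution (R uniform on [1.05,
2.0]) is an assumption for illustration only; the integral is
approximated by averaging over a grid of equally likely draws.

```python
import numpy as np

R_grid = np.linspace(1.05, 2.0, 400)   # equally weighted points approximate f(R)

def autarky_value(eta_):
    """Expected utility in autarky for a given storage share eta_."""
    early = util(eta_ + x * (1.0 - eta_))          # impatient: liquidate and consume
    late = util(eta_ + R_grid * (1.0 - eta_))      # patient: storage plus matured investment
    return u * early + (1.0 - u) * late.mean()

etas = np.linspace(0.0, 1.0, 201)
values = [autarky_value(e) for e in etas]
best = int(np.argmax(values))
print(f"autarky: best eta ~ {etas[best]:.2f}, V_A ~ {values[best]:.4f}")
```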
A bank could always choose a contract that eliminates the
possibility of experiencing a run. I will call the best contract with
this property the "run-proof contract." A contract is run-proof
if there is enough liquidity in the bank to pay all agents the amount a
at the end of period 1. But because the contract is run-proof, patient
agents actually wait until the second period to withdraw. The problem of
a bank choosing the run-proof contract is the following:
$$V_{RP} \equiv \max_{a, \eta} \int \left[ u\, \frac{a^{\gamma}}{\gamma} + (1 - u)\, \frac{1}{\gamma} \left( \frac{R(1 - \eta) + \eta - ua}{1 - u} \right)^{\gamma} \right] f(R)\, dR,$$
subject to
$$a \le \eta + x(1 - \eta), \qquad a \ge 0, \qquad 0 \le \eta \le 1.$$
The first constraint is the run-proof constraint. It says that even
if all agents go to the bank in the first period (i.e., early), the bank
will not run out of funds.
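A grid-search version of the run-proof problem, continuing the
previous sketches (same preferences and the same assumed distribution
of R), is given below; the grids and their ranges are arbitrary.

```python
def expected_utility_no_run(a_, eta_):
    """Expected utility when impatient agents get a_ and patient agents split the rest."""
    late = (R_grid * (1.0 - eta_) + eta_ - u * a_) / (1.0 - u)
    if np.any(late <= 0.0):
        return -np.inf
    return u * util(a_) + (1.0 - u) * util(late).mean()

V_RP, best_rp = -np.inf, None
for eta_ in np.linspace(0.0, 1.0, 101):
    for a_ in np.linspace(0.5, 1.5, 101):
        if a_ <= eta_ + x * (1.0 - eta_):          # run-proof constraint
            v = expected_utility_no_run(a_, eta_)
            if v > V_RP:
                V_RP, best_rp = v, (a_, eta_)
print(f"run-proof contract: a ~ {best_rp[0]:.2f}, eta ~ {best_rp[1]:.2f}, V_RP ~ {V_RP:.4f}")
```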
Finally, after having studied equilibrium selection in the
post-deposit game, we are now in a position to write down the problem
faced by the bank at the beginning of period 1. It is important to note
that the probability of a run may depend on the contract chosen by the
bank and hence the bank will take this effect into account when
determining the best possible contract. Formally, the bank's
problem is given by
$$V \equiv \max_{a, \eta} \int \Big[ \pi(R, a, \eta)\, P_{rr}(a, \eta) + (1 - \pi(R, a, \eta)) \Big( u\, \frac{a^{\gamma}}{\gamma} + (1 - u)\, P_{nn}(R; a, \eta) \Big) \Big] f(R)\, dR,$$
subject to $a \ge 0$ and $0 \le \eta \le 1$. Note that [P.sub.nr] does not enter
the problem directly. It may, however, enter the problem indirectly
through the determination of [pi](R, a, [eta]), as in the case of an ESM
based on risk dominance or adaptive learning.
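Putting the pieces together, the sketch below (continuing the previous
snippets) evaluates the bank's objective under one particular assumed
ESM: the simple sunspot rule with a constant probability q whenever
both equilibria exist. The run-admitting and run-proof branches are
handled separately, and all numbers remain illustrative. Comparing the
resulting V with V_RP and V_A in this way mirrors the participation
and contract-choice conditions discussed next.

```python
q = 0.05   # assumed sunspot probability of a run when both equilibria exist

def bank_value(a_, eta_):
    """Expected utility delivered by contract (a_, eta_) under the assumed sunspot ESM."""
    if a_ <= eta_ + x * (1.0 - eta_):               # run-proof contract: no run is possible
        return expected_utility_no_run(a_, eta_)
    u_hat = (eta_ + x * (1.0 - eta_)) / a_          # probability of being paid in a run
    R_star_ = (a_ - eta_) / (1.0 - eta_)            # multiple-equilibria threshold
    pi = np.where(R_grid < R_star_, 1.0, q)         # pi(R, a, eta) under the assumed ESM
    run_payoff = u_hat * util(a_)
    late = (R_grid * (1.0 - eta_) + eta_ - u * a_) / (1.0 - u)
    no_run_payoff = u * util(a_) + (1.0 - u) * util(late)
    return (pi * run_payoff + (1.0 - pi) * no_run_payoff).mean()

V, best_c = -np.inf, None
for eta_ in np.linspace(0.0, 0.99, 100):
    for a_ in np.linspace(0.5, 1.5, 101):
        v = bank_value(a_, eta_)
        if v > V:
            V, best_c = v, (a_, eta_)
print(f"bank's contract: a ~ {best_c[0]:.2f}, eta ~ {best_c[1]:.2f}, V ~ {V:.4f} (V_RP ~ {V_RP:.4f})")
```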
When we have [V.sub.A] < max{V, [V.sub.RP]}, the agents will
choose to deposit their funds at the bank. When we have V >
[V.sub.RP], the bank will choose the contract that allows for the
possibility of bank runs according to the ESM that is operating in the
economy (that is, according to the given function [pi](R, a, [eta])). It
is important to note that if there exist values of R such that R <
[R.sup.*] and f(R) > 0, then for those values of R we must have that
[pi](R, a, [eta]) = 1 because the post-deposit game has a unique (run)
equilibrium for those values of R.
Diamond and Dybvig (1983) show that when the return on investment R
is not stochastic (and greater than unity) and the probability [pi] is
arbitrarily set at zero, the bank chooses a contract (a, [eta]) for
which a bank run is a possible equilibrium of the post-deposit game
played by the patient agents. Hence, using arguments of continuity, it
can be shown that there exist functions f(R) and [pi] (R, a, [eta]) >
0 such that a bank solving the problem V described above will also
choose a contract that admits runs (that is, a contract such that [eta]
+ x (1 - [eta]) < a holds).
5. CONCLUSION
I have shown that even when bank runs are driven by self-fulfilling
expectations in environments with multiple equilibria, the historical
correlation of bank runs with poor economic fundamentals can still be
accounted for. More evidence would be necessary to reject the case of
bank runs originating in situations with multiple equilibria. For now,
when we observe a bank run, we cannot in principle confidently discard
the possibility that another equilibrium with no bank run was also
possible. This conclusion is important from a policy standpoint. In some
cases, multiple-equilibria bank runs can be avoided by the design of
off-equilibrium policies that are hence never observed. For example, the
suspension of convertibility could make the run situation I have
presented no longer an equilibrium of the post-deposit game (as proposed
by Diamond and Dybvig in their original paper). But because suspension
would occur only when there is a run and runs are not equilibrium
outcomes anymore, the suspension of payments will not be observed. An
important qualification is that, like many other off-equilibrium
threats, this policy entails a certain ability of the bank to commit to
actually implementing the policy if it becomes necessary.
There is another important policy implication of the ideas
presented here. In the multiple-equilibria case, bank runs are usually
not optimal and in general the policymaker would like to avoid them (or
at least lower their probability). Contrary to this position, Allen and
Gale (1998) present the case of bank runs that are not the consequence
of a coordination failure and that are in fact part of the optimal
arrangement for risk sharing in the economy. The policymaker would not
want to avoid the Allen-Gale type of bank runs. Determining which of the
two cases is driving a particular episode is an important issue that the
policymaker would need to carefully evaluate.
Table 1
Financial Panics, 1890-1908 (Miron, 1986)
Major Panics September 1890
May 1893
December 1899
May 1901
March 1903
October 1907
Minor Panics February 1893
September 1895
June 1896
December 1896
March 1898
September 1899
July 1901
September 1901
September 1902
December 1904
April 1905
April 1906
December 1906
March 1907
September 1908
Table 2
Business Cycle and Bank Panics (Gorton, 1988)
NBER Cycle (Peak-Trough) Panic Date
October 1873 - March 1879 September 1873
March 1882 - May 1885 June 1884
March 1887 - April 1888 No panic
July 1890 - May 1891 November 1890
January 1893 - June 1894 May 1893
December 1895 - June 1897 October 1896
June 1899 - December 1900 No panic
September 1902 - August 1904 No panic
May 1907 - June 1908 October 1907
January 1910 - January 1912 No panic
January 1913 - December 1914 August 1914
Table 3 Notation

u            Probability of being impatient
[gamma]      Coefficient of relative risk aversion
R            Return on the risky investment
x            Return from early liquidation
f(R)         Probability distribution of R
a            Bank payment for early withdrawal
[eta]        Proportion of total deposits held in storage
$\hat{u}$    Probability of getting paid in case of a run
[R.sup.*]    Multiple-equilibria threshold for R
$\bar{R}$    Risk-dominance threshold for R
[p.sub.r]    Risk factor of the bank-run equilibrium
[pi]         Probability of a bank run
Research Department, Federal Reserve Bank of Richmond,
huberto.ennis@rich.frb.org. Some of the ideas discussed in this article
are the product of my joint work with Todd Keister from the Instituto
Tecnologico Autonomo de Mexico. I would like to thank Emilio Espino,
Tom Humphrey, Ned Prescott, John Weinberg, Alex Wolman, and especially
Todd Keister for comments on an earlier draft. All errors are of course
my own. The views expressed here do not necessarily reflect those of the
Federal Reserve Bank of Richmond or the Federal Reserve System.
(1.) Shell and Smith (1992, 602) write: "The
'sunspot' terminology is a bit of a spoof on the work of
Jevons (1884) and his followers, who related the business cycle to the
cycle of actual sunspots. To the extent that actual sunspots do affect
economic fundamentals this is intrinsic uncertainty, but the overall
effects of actual sunspots on economic fundamentals are probably not
major. Then, if actual sunspot activity does have substantial impacts on
the economy, it must be that it serves a role beyond its effects on
fundamentals. Cass-Shell (1983) sunspots are highly stylized; by
definition, they represent purely extrinsic uncertainty."
(2.) Calomiris and Mason (1997) find evidence of depositors'
confusion during the June 1932 bank panic, but they also find that
solvent banks were able to support each other to avoid failure.
(3.) Gorton (1988) finds no evidence of seasonal effects as causes
for panics using his definition.
(4.) See also Ennis and Keister (2003b). In this environment, there
are potential gains from making the early payments contingent on the
realization of the return on investment R. The contracts studied here do
not allow for this possibility. Gale and Vives (2002) and Allen and Gale
(1998) do not assume sequential service, but the optimal contract has a
structure similar to the deposit contract in the sense that for high
values of R the payoff to early withdrawers is not contingent. This is
because investment cannot be liquidated (it has zero liquidation value),
and for high enough values of R (so that late consumers get more than
early consumers), early consumers just divide the available liquid funds
among them, resulting in a fixed quantity for each, independent of the
value of R. The costly state verification literature provides another
justification for the debt contracts (see, for example, Williamson
[1986]).
(5.) This value of R is common to all investment in the economy. No
diversification is possible.
(6.) Symmetry implies that in equilibrium all impatient agents play
the same strategy and all patient agents play the same strategy (but
perhaps different from the one played by the impatient agents). Pure
strategies are those strategies that do not involve randomization over
different possible actions (each agent plays a single action with
probability one).
(7.) Games with multiple Pareto-ranked equilibria are called
"coordination games" in the literature (for a general review,
see Cooper [1999]).
(8.) Usually we refer to a 2 x 2 game as a game that is played by
two individuals who each have two possible pure strategies that they can
choose to play. In the post-deposit game, agents play a "game"
against the population that is often called a "macroeconomic
game." See Cooper (1999) for an extensive discussion on the
subject.
(9.) The payoffs are still a function of the triple (R, a, [eta]),
but I choose not to explicitly write this dependence in order to
simplify notation.
(10.) Ironically, the model in the second part of the paper by
Allen and Gale (1998) can be used to provide a good example of this
situation. For some parameter values their model has multiple
equilibria. Their equilibrium analysis delineates three relevant regions
for the possible realization of the return on the risky asset R. When R
is very low, the equilibrium has a bank run; when R is very high, there
are no bank runs in equilibrium; and for intermediate values of R, there
are multiple equilibria: both having a bank run and not having a bank
run are possible equilibrium outcomes. Therefore, just using a simple
sunspot variable to determine which of the two equilibria will be
observed in the intermediate region of R would deliver the historical
correlation: as fundamentals deteriorate (as R goes from high to low
levels), the probability of bank runs first goes from zero to positive
(the value associated with the sunspot) and then to unity when
fundamentals are so poor that a bank run is unavoidable.
(11.) For the sake of simplicity, I am restricting agents to
deposit either all their resources in the bank or nothing at all. Ennis
and Keister (2003b) consider the case where agents can deposit just part
of their initial resources in the bank. This is an important extension
in environments where bank runs can happen with positive probability, as
is the case in this paper.
REFERENCES
Allen, Franklin, and Douglas Gale. 1998. "Optimal Financial
Crises." Journal of Finance 53 (August): 1245-84.
Calomiris, Charles, and Joseph Mason. 1997. "Contagion and
Bank Failures During the Great Depression: The June 1932 Chicago Banking
Panic." American Economic Review 87 (December): 863-83.
Cass, David, and Karl Shell. 1983. "Do Sunspots Matter?"
Journal of Political Economy 91 (April): 193-227.
Cooper, Russell. 1999. Coordination Games. Cambridge: Cambridge
University Press.
-----, and T. W. Ross. 1998. "Bank Runs: Liquidity Costs and
Investment Decisions." Journal of Monetary Economics 41 (February):
27-38.
Diamond, Douglas, and Philip Dybvig. 1983. "Bank Runs, Deposit
Insurance, and Liquidity." Journal of Political Economy 91 (June):
401-19.
Ennis, Huberto, and Todd Keister. 2003a. "Government Policy
and the Probability of Coordination Failures." Centro de
Investigacion Economica Discussion Paper 03-01, ITAM (February).
-----. 2003b. "Economic Growth, Liquidity, and Bank
Runs." Forthcoming in the Journal of Economic Theory.
Gale, Douglas, and Xavier Vives. 2002. "Dollarization,
Bailouts, and the Stability of the Banking System." Quarterly
Journal of Economics (May): 467-502.
Gorton, Gary. 1988. "Banking Panics and Business Cycles."
Oxford Economic Papers 40 (December): 751-81.
Harsanyi, John, and Reinhard Selten. 1988. A General Theory of
Equilibrium Selection in Games. Cambridge: MIT Press.
Jevons, W. Stanley. 1884. Investigations in Currency and Finance.
London: Macmillan.
Kandori, Michihiro, George Mailath, and Rafael Rob. 1993.
"Learning, Mutation, and Long Run Equilibria in Games."
Econometrica 61 (January): 29-56.
Manuelli, Rodolfo, and James Peck. 1992. "Sunspot-Like Effects
of Random Endowments." Journal of Economic Dynamics and Control 16
(April): 193-206.
Matsui, Akihiko, and Kiminori Matsuyama. 1995. "An Approach to
Equilibrium Selection." Journal of Economic Theory 65 (April):
415-34.
Miron, Jeffrey. 1986. "Financial Panics, the Seasonality of
the Nominal Interest Rate, and the Founding of the Fed." American
Economic Review 76 (March): 125-40.
Morris, Stephen, Rafael Rob, and Hyun Song Shin. 1995.
"p-Dominance and Belief Potential." Econometrica 63 (January):
145-57.
Peck, James, and Karl Shell. 2003. "Equilibrium Bank
Runs." Journal of Political Economy 111 (February): 103-23.
Saunders, Anthony, and Berry Wilson. 1996. "Contagious Bank
Runs: Evidence from the 1929-1933 Period." Journal of Financial
Intermediation 5 (October): 409-23.
Schumacher, Liliana. 2000. "Bank Runs and Currency Run in a
System Without a Safety Net: Argentina and the 'Tequila'
Shock." Journal of Monetary Economics 46 (August): 257-77.
Shell, Karl, and Bruce Smith. 1992. "Sunspot
Equilibrium." In The New Palgrave Dictionary of Money and Finance.
Vol. 3. New York: Macmillan, 601-7.
Temzelides, Ted. 1997. "Evolution, Coordination, and Banking
Panics." Journal of Monetary Economics 40 (September): 163-83.
Wallace, Neil. 1988. "Another Attempt to Explain an Illiquid
Banking System: The Diamond and Dybvig Model with Sequential Service
Taken Seriously." Federal Reserve Bank of Minneapolis Quarterly
Review 12 (Fall): 3-16.
Williamson, Stephen. 1986. "Costly Monitoring, Financial
Intermediation, and Equilibrium Credit Rationing." Journal of
Monetary Economics 18 (September): 159-79.
Young, Peyton. 1998. Individual Strategy and Social Structure.
Princeton, N.J.: Princeton University Press.