Behavioral foundations of reciprocity: experimental economics and evolutionary psychology.
Hoffman, Elizabeth; McCabe, Kevin A.; Smith, Vernon L.
I. INTRODUCTION
Theorists have long studied the fundamental problem that cooperative,
socially efficient outcomes generally cannot be supported as equilibria
in finite games. The puzzle is the occurrence of cooperative behavior in
the absence of immediate incentives to cooperate. For example, in
two-person bargaining experiments, where noncooperative behavior does
not result in efficient outcomes, we observe more cooperative behavior
and greater efficiency than such environments are expected to produce.
Similarly, in public good experiments with groups varying in size from
four to 100 people, the participants tend to achieve much higher payoff
levels than predicted by noncooperative theory. Moreover, examples of
cooperative behavior achieved by decentralized means have a long history
in the human experience. Anthropological and archaeological evidence
suggests that sharing behavior is ubiquitous in tribal cultures that lack
markets, monetary systems, or other means of storing and redistributing
wealth (see, e.g., Cosmides and Tooby [1987; 1989]; Isaac [1978]; Kaplan
and Hill [1985]; Tooby and DeVore [1987]; Trivers [1971]).
In this paper we draw together theoretical and experimental evidence
from game theory, evolutionary psychology, and experimental economics to
develop a reciprocity framework for understanding the persistence of
cooperative outcomes in the face of contrary individual incentives. The
theory of repeated games with discounting or infinite time horizons
allows for cooperative solutions, but does not yield conditions for
predicting them (Fudenberg and Tirole [1993]). Recent research in
evolutionary psychology (Cosmides and Tooby [1987; 1989; 1992]) suggests
that humans may be evolutionarily predisposed to engage in social
exchange using mental algorithms that identify and punish cheaters.
Finally, a considerable body of research in experimental economics now
identifies a number of environmental and institutional factors that
promote cooperation even in the face of contrary individual incentives
(Davis and Holt [1993]; Isaac and Walker [1988a,b; 1991]; Isaac, Walker
and Thomas [1984]; Isaac, Walker and Williams [1991]). Moreover, these
experimental results indicate that trust and trustworthiness play a much
greater role than the evolutionary psychologists' punish-cheaters
model would suggest. We hypothesize that humans' ability to read
one another's minds (Baron-Cohen [1995]) in social situations
facilitates reciprocity.
II. REPEATED GAMES
Repeated-game theory offers two explanations of cooperation based on
self-interest: self-enforcing equilibria and reputations. Self-enforcing
equilibria are based on the idea that players can credibly punish
noncooperative defections. The nagging problem with self-enforcing
cooperative equilibria is that there are many equilibria in such games
with cooperation being only one possibility.
Experiments demonstrating that subjects cooperate in games with
repeated play and relatively short finite horizons (Selten and Stoecker
[1986]; Rapoport [1987]) suggest reputations are important in games with
incomplete information (Kreps et al. [1982]). The idea is that if
players are uncertain about other players' types, then the
possibility emerges that players will mimic (develop a reputation as) a
type different from their own. In circumstances where cooperation is
mutually beneficial players have an incentive to mimic cooperative
behavior.
In the examples given by Kreps et al. [1982], players rationally
compute strategies based on (utility or payoff) type uncertainty. They
cooperate from the beginning until near the end of the game, and then
defect. This is not, however, the pattern observed in experiments, where
it is common for cooperation to develop out of repeated interactions;
also, defection near the end is often not observed.
The strength of the theory is that it is based on individual (but
longer run) self-interest, and is parsimonious. Its weakness is that it
admits many possible equilibria without suggesting why cooperation is
the most likely outcome. Moreover, for reputation-based equilibria,
people must entertain beliefs about the types of other players.
But where do these beliefs come from? We introduce the hypothesis
that types emerge from the evolutionary fitness of certain cognitive
abilities which predispose many people towards reciprocity. Actual
circumstances and experiences may lead to reciprocal behavior by many
persons. Not everyone has to be a particular type; variability is the
stuff from which selection occurs and which allows nature to adapt to
change. But the type must exist in sufficient numbers for people to
believe that reciprocity pays. And if reciprocity pays, culture and
norms develop to specify the forms that reciprocity will take.
III. MENTAL ALGORITHMS FOR SOCIAL EXCHANGE: STRATEGIES IN HUMAN
COGNITION THAT SUPPORT COOPERATION
The complex organization of the human mind is thought to be the
product of at least a few million years of evolutionary adaptation to
solve the problems of hunting and gathering.(1) Evolutionary
psychologists hypothesize that these problems were solved not only by
neurobiological adaptations, but also by adaptations in human social
cognition (see Cosmides and Tooby [1992], hereafter CT, and the
references therein). The idea is that humans have special and highly
developed cognitive mechanisms for dealing with social exchange
problems: that is, mental modules for solving social problems are as
much a part of the adapted mind as our vision and hearing-balance
faculties.(2)
Examples of mental "computational" modules that solve
specialized design problems include vision, language and "mind
reading." The mechanism which constitutes vision involves neural
circuits whose design solves the problem of scene analysis (Marr
[1982]). The solution to this problem employs specialized computational
machinery for detecting shape, edges, motion, bugs (in frogs), hawks (in
rabbits), faces, etc. Just as we learn by exposure to see and interpret
scenes without being taught, we learn to speak without formal training
of any kind.
Although culture is known to operate on our mental circuitry for
language learning, the deep structure of language is common across
cultures (Pinker [1994]). Normal English-speaking preschoolers can apply
mental algorithms to root words to form regular noun plurals by adding
"s" and the past tense of regular verbs by adding
"ed" (Pinker [1994, 42-43]). The preschooler even
"knows" that you can say that a house is mice-infested but
never that it is rats-infested, that there can be teethmarks but never
clawsmarks - the mental algorithm here allows compound words to be
formed out of irregular plurals but never out of regular plurals. This
is because of the way the unconscious brain works: regular plurals are
not stem words stored in the mental inventory, but words derived
algorithmically by the inflectional rule to add "s."
Preschoolers in all languages automatically make these kinds of
distinctions without being taught by their mothers or their teachers
(Pinker [1994, 146-47]).
That the mind contains blueprints for grammatical rules is further
indicated by a language disorder that runs in families, with a pedigree
pattern consistent with a single dominant gene. English speakers
afflicted with this disorder are unable to inflect root words to form
derivatives, for example by applying the "s" rule to obtain plurals.
"Mind reading" - the process of inferring the mental states
of others from their words and actions - facilitates "social
understanding, behavioral predictions, social interaction, and
communication" (Baron-Cohen [1995, 30]). Autism in children makes
them mind blind - they are not automatically aware of mental phenomena
in others, and cannot "mind read".(3) A genetic basis is
suggested by its greater risk in identical twins and biologically
related siblings. Baron-Cohen [1995, 88-95] implicates the amygdala and
related areas of the brain as jointly controlling the ability to detect
eye direction in others and to interpret mental states (have a theory of
mind) in others. Other detector mechanisms appear to include
"friend or foe" - cooperation is not automatic for foes - and
the "fight or flight" response to sudden danger.
The hypothesis that our minds are also predisposed to learn
behavioral responses that promote cooperative outcomes does not mean
that we are born with such behavioral responses. We only need to be born
with the capacity to learn such responses developmentally from social
exposure, much as we are born with the capacity to learn any language
but not with the ability to speak any particular one. A capacity for the
natural learning of strategies that induce cooperation in social
exchange has fitness value. But the implementational form of what is
learned varies widely, depending upon the environment, accidents of
nature, and how parental, familial, and societal units organize exchange
processes. Consequently, "culture" is endlessly variable, but,
functionally, reciprocity is universal.
Naturally selected fitness strategies are hypothesized to be embodied
in the designs that modulate reasoning about social exchange. An
analysis of these strategies allows one to deduce the behavioral
characteristics of the associated mental algorithms. This analysis also
allows predictions about human responses in reasoning experiments of the
kind that we summarize below. These psychology experiments are of
particular interest to experimental economists because they complement
subject behavior in many games of strategic interaction.
Consider the standard two-person Prisoner's Dilemma (PD) game,
but think of the entries corresponding to C (cooperate) or D (defect)
for the row and column players as net benefits and net costs measured in
units that increase (or decrease) the individual's inclusive
fitness. C might represent the strategy "trade," while D might
represent "steal." As discussed above, game theory predicts
that mutual cooperation will not emerge in a single-move game - people
are all self-interested "foes."
Imagine a tournament that matches pairs from a large population of
organisms so that the same two individuals are never matched a second
time. Each member is matched, reproduces itself, and dies. The offspring
inherits the strategy choice propensity of the parent, and the number of
offspring is proportional to the payoff gains of the parent in its
matched plays of the game. Each generation repeats this process.
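To make the selection dynamic concrete, here is a minimal simulation sketch (ours, in Python; the payoff values are illustrative assumptions, not taken from the paper) of such a tournament, in which each strategy's share of offspring is proportional to the payoff it earns:

    import random

    # Illustrative one-shot PD fitness payoffs: (my move, other's) -> my payoff.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def next_generation(pop, n_pairs=10000):
        """One tournament generation: random, never-rematched pairings;
        offspring shares are proportional to total payoff earned."""
        earned = {"C": 0.0, "D": 0.0}
        for _ in range(n_pairs):
            a, b = random.choices(["C", "D"], weights=[pop["C"], pop["D"]], k=2)
            earned[a] += PAYOFF[(a, b)]
            earned[b] += PAYOFF[(b, a)]
        total = earned["C"] + earned["D"]
        return {s: earned[s] / total for s in earned}

    pop = {"C": 0.9, "D": 0.1}   # start with mostly unconditional cooperators
    for _ in range(30):
        pop = next_generation(pop)
    print(pop)                   # defectors approach fixation

Even starting from 90% cooperators, D earns more in every pairing, so its share grows each generation; in this sense unconditional cooperation cannot survive one-shot matching.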
Repeated-game principles can be used to analyze equilibrium outcomes
in such a game. Repeat interaction is a prominent characteristic of
social exchange. Needs are rarely simultaneous. But, long before human
societies invented a generally accepted medium of exchange, various
cultural mechanisms provided social adaptations which allowed delayed
mutual benefits to be gained: I share my meat with you when I am lucky
at the hunt, and you share yours with me when you are lucky. Although
this is commonly referred to as reciprocal altruism, we prefer to call
it reciprocity. I am not altruistic if my action is based on my
expectation of your reciprocation.
Reciprocity leads naturally to property rights. If I grow corn and
you grow pigs, and we exchange our surpluses, then we each have an
interest in the other's property right in what is grown. If either
of us plays "steal," that ends the trading relationship.
Hence, mutual recognition and defense of informal property right systems
need not require the pre-existence of a Leviathan.
But how might such mutual cooperation emerge in a repeated PD game?
We know from the work of Axelrod and Hamilton [1981] that strategy C
cannot be selected for in repeated play, but that the contingent
cooperative strategy, T (tit-for-tat), can be selected for. In general
any strategy, including T, can successfully invade a population of
defectors if (and only if) it cooperates with cooperators and punishes
defectors (Axelrod [1984]). As noted by CT [1992, 176-77], it is an
empirical issue to determine which strategy, out of this admissible set,
is actually embodied in human cognitive programs.
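The invasion logic can be checked in a few lines. A sketch (ours), reusing the illustrative payoff matrix above, shows that tit-for-tat sustains cooperation with itself while conceding a defector only the first round:

    # Repeated PD; the payoff values are the same illustrative matrix as above.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def play(strat_a, strat_b, rounds=100):
        """Total payoffs when two strategies (opponent history -> move) meet."""
        hist_a, hist_b, pay_a, pay_b = [], [], 0, 0
        for _ in range(rounds):
            ma, mb = strat_a(hist_b), strat_b(hist_a)
            pay_a += PAYOFF[(ma, mb)]
            pay_b += PAYOFF[(mb, ma)]
            hist_a.append(ma)
            hist_b.append(mb)
        return pay_a, pay_b

    tft = lambda opp: "C" if not opp else opp[-1]   # cooperate first, then mirror
    alld = lambda opp: "D"                          # unconditional defection

    print(play(tft, tft))    # (300, 300): cooperation sustained
    print(play(tft, alld))   # (99, 104): the defector gains only round one
    print(play(alld, alld))  # (100, 100): mutual defection

In a population of tit-for-tat players, an always-defect mutant earns 104 per pairing while residents earn 300 against one another, so it cannot invade: cooperating with cooperators and punishing defectors is self-protecting.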
The need to solve the PD problem to achieve cooperation provides an
abstract schema for organizing our thoughts about cooperation beyond
immediate kin. However, simply referring to the motivating example of
the PD will not carry us to a full understanding of human social
exchange. In particular, it will not help us understand cooperative
behavior toward anonymous strangers when there is no prospect for
punishment. This is an anomaly in the CT evolutionary paradigm.
An important question for the evolutionary paradigm is whether the
mental algorithms for social exchange consist of a few content-free
generalized rules of reasoning, or whether they consist of designs
specialized for solving social exchange problems. Economic/game theory
is driven by the principle that humans naturally use content-free
generalized rules of reasoning in solving decision problems. If this is
so, why is economics so hard to teach? If these rules come only from
culture, where does culture come from?
CT [1992] argue that the evolutionary perspective favors specialized
over generalized rules. General rules, applicable to any subject matter,
"will not allow one to detect cheaters ... because what counts as
cheating does not map onto the definition of violation imposed by the
propositional calculus. Suppose we agree to the following exchange:
'If you give me your watch then I'll give you $20.' You
would have violated our agreement - you would have cheated me - if you
had taken my $20 but not given me your watch. But according to the rules
of inference of the propositional calculus, the only way this rule can
be violated is by your giving me your watch but my not giving you
$20" (CT [1992, 179-80]). That is, the way you falsify "if P,
then Q," statements is to look for "P, not Q," evidence.
In this example, giving me your watch is the P statement; my not giving
you $20 is the not-Q statement. If such rules were the only ones
contained in our minds, we would have no special ability to detect
cheating.(4)
One theme in the CT research program is to design experiments that
will test these kinds of propositions (CT [1992, 181-206]). The
selection task that CT employ was developed by Wason [1966], whose
motivation was to inquire as to whether the ordinary learning
experiences of people reflected the Popperian hypothesis-testing logic
outlined above. The procedure uses four cards, each carrying one of the
labels P, not-P, Q, or not-Q on the side facing up, and another of the
four labels on the side facing down. Each card corresponds to some
situation with one of the labeled properties, and the rule under test has
the form "if P, then Q." The rule is violated only by a card that has a P
on one side and a not-Q on the reverse side.
Subjects are asked to indicate only the card(s) that definitely need
to be turned over in order to see if any cases violate the rule. The
correct answer is to indicate the cards showing P (to see if there is a
not-Q on the other side) and not-Q (to see if there is a P on the other
side). In one example, a secretary's task is to check student
documents to see if they satisfy the rule: If a person has a
"D" rating, then his document must be marked code
"3." Four cards show D, F, 3, and 7, and subjects should
indicate the cards showing the letter D and the numeral 7. Fewer than 25%
of college students choose both of these cards correctly.
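The normative rule is mechanical, which is what makes the low success rate striking. A sketch (ours) of the correct selection for the document-checking example:

    # To test "if P then Q," turn over exactly the cards that could reveal
    # a "P and not-Q" case: those showing P and those showing not-Q.
    def cards_to_turn(faces, is_p, is_not_q):
        return [f for f in faces if is_p(f) or is_not_q(f)]

    # "If a document has a 'D' rating, then it must be marked code '3'."
    faces = ["D", "F", "3", "7"]
    print(cards_to_turn(faces,
                        is_p=lambda f: f == "D",
                        is_not_q=lambda f: f.isdigit() and f != "3"))
    # -> ['D', '7']: the pair that fewer than 25% of students select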
Now consider a law which states that "If a person is drinking
beer, then he must be over 20 years old." The four cards show
"drinking beer," "not drinking beer," "25 years old," and "16 years
old"; the correct response is to choose the card "drinking beer" and
the card "16 years old." In this experiment about 75% of
college students get it right. Why the difference from the previous
example?
People do somewhat better with more familiar examples, such as
"If a person goes to Boston, then he takes the subway," but fewer
than half get these right. A survey of this literature (Cosmides [1989])
suggests that "Robust and replicable content effects were found
only for rules that related terms that are recognizable as benefits and
cost/requirements in the format of a standard social contract" (CT
[1992, 183]). Sixteen out of 16 experiments using social contracts
showed large content effects. Fourteen out of 19 experiments which did
not use contract rules produced no content effect, two produced a weak
effect, and three produced a substantial effect.
These findings launched a number of studies designed to separate the
social contract hypothesis from confounding interpretations, such as
familiarity, or that the social context merely facilitates Popperian
reasoning. CT report that the alternative hypotheses have not survived
experiments designed to separate them from the cheater-detection
hypothesis.(5)
IV. OBSERVABILITY, COMMUNICATION, AND INTENTIONALITY SIGNALING
If humans are preprogrammed to learn to achieve cooperative outcomes
in social exchange, then factors that facilitate the operation of these
natural mechanisms should increase cooperation even in the presence of
contrary individual incentives. For example, cooperation should increase
if individuals can observe and monitor one another's behaviors,
even if there are no direct mechanisms for enforcing specific behaviors.
In Baron-Cohen's [1995] model of mind reading, the eye direction,
shared attention, and intentionality detectors are used to identify and
ratify the volitional states of others. Observation and monitoring
activate one or more of these detectors. Moreover, if it is possible for
agents to directly punish cheating by other agents, cooperation should
increase even further.
Similarly, if agents can communicate with one another, they can frame
a group decision as a social exchange problem and ratify one
another's volitional states, thus activating natural inclinations
to cooperate for increased individual gain. Thus, communication can
increase cooperation even if there are no effective mechanisms for
monitoring and punishing cheaters.
Voluntary Contribution Experiments
The standard environment for studying the free rider problem in the
allocation of public goods is the voluntary contribution mechanism
(VCM), extensively studied by Isaac and Walker, and their coauthors
(Isaac, McCue and Plott [1985]; Isaac, Schmitz and Walker [1989]; Isaac
and Walker [1988a,b]; Isaac, Walker and Thomas [1984]; Isaac, Walker and
Williams [1991]). In a VCM experiment, each subject is given a set of
tokens at the beginning of each period. The subject may invest tokens in
an individual exchange, with a fixed monetary return per token, and/or a
group exchange, which returns money to the subject as a function of the
total contributions of all the subjects in the experiment.
Typically the individual incentives are designed to make strong free
riding, or zero contributions to the group exchange, the dominant
strategy for each subject. On the other hand, the highest joint payoff
for all subjects is achieved when all subjects contribute 100% of their
tokens to the group exchange.
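The structure is easy to state as a payoff function. A sketch (ours; the endowment, private return, and marginal per capita return are illustrative assumptions, since the cited experiments vary them across treatments):

    def vcm_payoff(my_contribution, total_contributions,
                   endowment=10, private_rate=1.0, mpcr=0.3):
        """Tokens kept earn `private_rate` each; every token anyone places
        in the group exchange pays `mpcr` to every subject."""
        return (private_rate * (endowment - my_contribution)
                + mpcr * total_contributions)

    # With n = 4 subjects: free riding is dominant (mpcr < private_rate),
    # yet full contribution maximizes joint payoff (n * mpcr > private_rate).
    print(vcm_payoff(0, 30))    # 19.0: keep everything while others give 30
    print(vcm_payoff(10, 40))   # 12.0: everyone, including you, gives all 10

Each token contributed costs the contributor 1.0 - 0.3 = 0.7 but creates 4 x 0.3 = 1.2 for the group, which is the dilemma in miniature.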
Isaac and Walker and their coauthors, as cited above, find that
contributions to the group exchange are sensitive to differences in the
rules of message exchange that relate to our previous discussion of
cognitive mechanisms for social exchange. With groups of four or
ten subjects, if contributions are made in private, if there is no
identified target level of contributions, and if they do not communicate
with one another at any time during the experiment, then contributions
to the group exchange decline from about 40% of tokens in period one to
about 10% of tokens in period 10 (Isaac and Walker [1988a]; Isaac,
Walker and Thomas [1984]). These results extend to large groups of 40 or
100 people, but per capita contributions actually increase relative to
groups of size four or ten in some treatments.
In the same experimental environment, however, if subjects can talk
with one another for a short period before each decision, contributions
to the group exchange quickly rise to almost 100% of tokens, even if
actual investment decisions are made in private (Isaac and Walker
[1988b]). These results illustrate the importance of "cheap
talk" communication in creating an environment in which agents
expect one another to behave cooperatively and abide by the
reinforced norm, even when all decisions are made in private and no
individual's defection can be detected by others.
The results can also be interpreted in a signaling context. During
the communication phase, individuals verbally signal that they will
behave cooperatively and that they expect others to reciprocate. During
the decisionmaking phase, individuals generally abide by the norm
reinforced by the signal, and a cooperative outcome is achieved. While
no direct punishment can be inflicted on a particular defector, other
subjects can exact generalized punishment by defecting themselves in
future rounds.
In other experiments (Isaac, Schmitz and Walker [1989]), the
experimenter establishes a minimum provision-point contribution to the
group investment. Comparing results with and without a provision point,
and allowing no communication, contributions to the group account
increase with the provision point. When the provision point is 100% of
tokens, contributions rise even further, although many groups fail to
attain it.
From a signaling perspective, the provision point signals an expected
joint level of contribution to the group account, and helps to induce
common expectations of substantial contributions to the group account.
With equal endowments the implied signal is that each subject should
contribute (1/n)th of the announced provision point.
Ultimatum and Dictator Experiments
Ultimatum and dictator experiments illustrate the importance of
observability, shared expectations of social norms, punishment, and
signaling in enforcing reciprocity behavior. In an ultimatum game,
player 1 makes an offer to player 2 of $X from a total of $M. If player
2 accepts the offer, then player 1 is paid $(M - X) and player 2
receives $X; if player 2 rejects the offer, each gets $0. In the
dictator game, player 2 must accept player 1's offer.
Under the usual rationality assumptions the noncooperative
equilibrium of the ultimatum game is for player 1 to offer player 2 the
smallest dollar unit of account, and for player 2 to accept the offer.
In the dictator game player 1 offers player 2 nothing. In the ultimatum
game, however, player 2 can punish player 1 for "cheating" on
an implied social norm of reciprocal sharing across time, in social
exchange, by rejecting player 1's offer. Viewed in isolation, rejection
is a dominated strategy, since accepting even a vanishingly small offer
would leave both players financially better off. But, in the absence of
common knowledge of self-interested behavior, the possibility
of punishment may change player 1's equilibrium strategy.
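Under these assumptions the backward induction is short; a minimal sketch (ours, taking $1 as the smallest unit of account for illustration):

    M = 10  # dollars to divide

    def accepts(x):        # a strictly self-interested player 2
        return x > 0       # any positive offer beats the $0 from rejecting

    # Player 1 keeps M - X, so the equilibrium offer is the smallest accepted one.
    offer = min(x for x in range(M + 1) if accepts(x))
    print(M - offer, offer)   # 9 1: player 1 keeps $9 and offers $1; in the
                              # dictator game acceptance is automatic, so $10 and $0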
In Kahneman, Knetsch and Thaler [1986] (hereinafter KKT), players 1
and 2 in an ultimatum game are "provisionally allocated" $10
and player 1 is asked to make an initial offer to "divide" the
$10 between the two players. Player 2 may veto the division, in which
case they both get $0. Kahneman and his coauthors find that most often
player 1 offers $5 to player 2; offers of less than $5 are sometimes
rejected. Although there are some differences, the general features of
these results have been replicated in cross-cultural comparisons
suggesting that the results are not strongly culture-specific (Roth,
Prasnikar, Okuno-Fujiwara and Zamir [1991]). This suggests that the
explanation may transcend culture.
Forsythe, Horowitz, Savin and Sefton [1994] (hereinafter FHSS)
replicate KKT's results from the ultimatum game, and also study the
dictator game. They find that about 20% of dictator player 1s offer
nothing to their player 2 counterparts, as noncooperative game theory
would predict; however, it is more common for player 1 to offer $5 than
to offer nothing, and offers of $1, $2, $3, and $4 are approximately
evenly distributed. Thus, removing the threat of punishment reduces
sharing behavior, but not by as much as game theory predicts.
Recognizing that the prospect of punishment might create expectations
that change player 1's behavior, Hoffman, McCabe, Shachat and Smith
[1994] (hereafter HMSS) consider experimental treatments explicitly
designed to affect subject expectations about operating norms of social
exchange. The experimental instructions that describe the different
treatments might be viewed as signals to the subjects of the expected
social norm operating in each experiment.
Brewer and Crano's [1994] recent social psychology textbook lists
three norms of social exchange that may apply in ultimatum games. From
our perspective, norms are the product of culture interacting with
mental modules in order to solve specific problems of social exchange.
Such norms can then inform a theory of mind mechanism as to
another's volitional state. Equality implies that gains should be
shared equally in the absence of any objective differences between
individuals suggesting another sharing rule. Equity implies that
individuals who contribute more to a social exchange should gain a
larger share of the returns. Reciprocity implies that if one individual
offers a share to another individual, the second individual is expected
to reciprocate within a reasonable time. We distinguish negative
reciprocity - the use of punishment strategies to retaliate against
behavior that is deemed inappropriate - and positive reciprocity - the
use of strategies that initiate or reward appropriate behavior.
The designs of KKT and FHSS invoke the equality norm. No distinction
is made between the two individuals "provisionally allocated"
$10, and they are told to "divide" the money. Hence,
deviations from equal division are more likely to be punished as
"cheating" on the social exchange. Using the same task
description, HMSS replicate the FHSS results in a "random/divide
$10" treatment.
To invoke equity, HMSS explore two variations on their random/divide
$10 treatment in a 2x2 experimental design. First (the exchange
treatment), without changing the reduced form of the game, HMSS describe
it as a market in which the "seller" (player 1) chooses a
price (division of $10) and the "buyer" (player 2) indicates
whether he or she will buy or not buy (accept or not accept). From the
perspective of social exchange, a seller might equitably earn a higher
return than a buyer. Second (the contest treatment), they make each
seller earn the property right to be a seller by scoring higher on a
general knowledge quiz than buyers. Winners are then told they have
"earned the right" to be sellers. Going back to Homans [1967],
equity theory predicts that individuals who have earned the right to a
higher return will be socially justified in receiving that higher
return.
Figure 1 reproduces HMSS's random/divide and contest/exchange
experimental results. Social exchange theory predicts that, in a
situation in which it is equitable for player 1 to receive a larger
compensation than player 2 (i.e., contest/exchange), (a) player 1 will
offer significantly less to player 2; while (b) player 2 will accept any
given offer with higher probability. The data in Figure 1 are consistent
with prediction (a) and not inconsistent with prediction (b). Player 1s
offer significantly less to player 2s, while rejection rates are
statistically indistinguishable. These results suggest that the change
from random/divide to contest/exchange alters the shared expectations of
the two players regarding the social exchange norm operating to
determine an appropriate sharing rule. Finally, the difference between
random/divide and contest/exchange carries over to dictator experiments
as well, indicating that the change in expectations takes place even
when there is no threat of punishment from player 2.
But why do these treatments reduce offers without causing an increase
in the rejection rate? One hypothesis is that both players infer one
another's mental states - in this case expectations - from relevant
information in the experiment. "Mind reading" implies the
ability to take the perspective of another person who has common
information. In this experiment, player 1 expects player 2 to find a
lower offer acceptable, while player 2 expects, and is prepared to
accept, a lower offer. At minimum, this involves a shared attention
mechanism.
Observability is potentially powerful in the enforcement of social
norms. Thus, FHSS recruited Player 1s and Player 2s in separate rooms,
and the players were anonymous with respect to one another. However,
subject decisions were not anonymous with respect to the experimenter.
Someone was still "watching"; hence player 1s were still not
entirely removed from a social exchange setting where reciprocity norms
might unconsciously apply.
This led HMSS to design a "double-blind" dictator
experiment, with several features that were later changed one or two at
a time, to investigate the role of social isolation in extinguishing
behavior reflecting social norms (Hoffman, McCabe and Smith [1996a],
hereafter HMS). In
the double-blind treatment, 64% of the Player 1s take all $10; about 90%
take at least $8.(6)
These results are strikingly different from the dictator results in
FHSS, and from the HMSS random/divide and contest/exchange dictator
experiments in which subjects were observed by the experimenters. Next,
in three stages, HMS vary each of the elements of the double-blind
dictator experiment in ways intended to reduce the "social
distance" between the subjects and anyone who might see their
choices. The experimental results form a predicted ordered set of
distributions. As the social distance between the subject and others
decreases, the cumulative distribution of offers to Player 2s increases.
These results demonstrate the power of isolation from implied
observability in the enforcement of norms of equality, equity and
reciprocity.
Signaling, Trust, and Punishment in Bargaining Experiments
In this section we review the results of two-person extensive form
bargaining/trust experiments in which players move sequentially, and one
player can choose to play - signal - cooperatively. Berg, Dickhaut and
McCabe [1995] have adapted the double-blind procedure to study trust and
reciprocity in a two-stage dictator game. In stage one player 2 decides
how much of $10 to send to player 1, and how much to keep. The amount
sent is tripled, to $M, before reaching player 1. In stage two player 1,
acting as a dictator, decides how to split the M dollars. Since the
amount to be split is endogenous, the two players now share a common
history before the dictator game is played. If reciprocity plays a
significant role in promoting social exchange, then their common history
should reduce the "social distance" between subjects in a
two-stage dictator game. While Berg, Dickhaut and McCabe find
significant use of trust and reciprocity, subjects in their experiments
had no alternative except to rely on trust for mutual gain.
McCabe, Rassenti and Smith [1996a] study an extensive form game in
which a player can choose between two subgames, each of which can result
in mutual gain. In one subgame mutual gain can be achieved using
reciprocity incentives, while in the other subgame mutual gain is
achieved using self-interested incentives. By choosing the reciprocity
subgame the subject signals a desire to cooperate, and each subject can
earn 50. By choosing the self-interested subgame the subject signals a
desire to play noncooperatively, and each subject earns 40. In some of
these experiments, the signaling player, at a cost to himself or
herself, can directly punish the other player for "cheating"
on the implied social exchange. In the other "trust"
experiments, there is no direct opportunity to retaliate against
defection from a signal to cooperate.
The Constituent Games: Payoffs. Figure 2 shows the extensive form
bargaining tree for these two constituent, or stage, games played by two
persons. Player 1 begins with a move right or down at node x1. A
move right terminates play with payoffs (35, 70), in cents, in
repeat play (multiplied by 20 in single play), respectively for Players
1 and 2. If the move is down, then Player 2 moves left or right at node
x2, and so on. Play ends with any move that terminates at a
payoff box on the right or the left of the tree. Game 1 shows the
baseline payoff structure used; Game 2 is the same except for the
payoffs in the boxes corresponding to plays left at nodes x3 and
x5. McCabe, Rassenti and Smith [1996a] have studied behavior in
these games under a variety of matching protocols and information
treatments.
In both Games 1 and 2 the right side of the tree contains the subgame
perfect (SP) noncooperative outcome (40, 40), where Player 2 moves right
at x6. This outcome is reached by simple dominance once Player 2
moves right at x2: it is in Player 1's interest to play down at x4,
and then in Player 2's interest to play right at x6.
In Game 1, cooperative actions by the players can lead to the largest
symmetric (LS) outcome (50, 50), achieved if Player 1 moves left at
x3. Under complete payoff information, a move left at x2
by Player 2 can be interpreted as a signal to Player 1 that Player 1
should go left at x3. (This is because 50 at LS is clearly better
than 40 at SP for Player 2, allowing Player 1 to infer Player 2's
reason for playing left at x2.) Player 1, however, can defect,
move down at x3, and force Player 2, in his or her own interest,
to move left at x5, giving Player 1 a payoff of 60. In fact this
is the game theoretic prediction if play occurs on the left side of the
tree in Game 1. In a single play, Player 2 should see this, and the
theoretical prediction becomes Selten's SP outcome on the right.
But a move left at x2 in Game 1 is more than a signal that
Player 2 wants to achieve the LS outcome (50, 50). It can also be
interpreted as a potential threat to play down at x5, punishing
Player 1 if Player 1 defects or "cheats" by playing down at
x3. This action, however, is costly to Player 2, since each
player gets 20 if Player 1 moves left at x7. But, given the way
subjects behave in ultimatum games, it is not unreasonable to assume
that some subjects will move left at x2 and then punish
defections at x3.
Game 2 contrasts with Game 1 in that LS is reached by Player 2
moving left at x5, so Player 1 must resist the temptation to move
left at x3. In Game 2, Player 1 can "cheat" on the
invitation to cooperate by choosing (60, 30) without any prospect that
Player 2 can punish Player 1. Thus, Game 2 allows signaling, but not
punishment; it is a game of trust.
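The dominance argument can be traced mechanically. Below is a backward-induction sketch (ours) over a simplified encoding of the Game 1 tree; only the branches and payoffs described in the text are included, so it approximates the full Figure 2 game form:

    # Each node is (player, {move: subtree}); each leaf is (payoff1, payoff2).
    def solve(node):
        """Path and payoffs chosen by strictly self-interested players."""
        if not isinstance(node[1], dict):
            return [], node                      # leaf
        player, moves = node
        options = {m: solve(sub) for m, sub in moves.items()}
        best = max(options, key=lambda m: options[m][1][player - 1])
        path, payoffs = options[best]
        return [best] + path, payoffs

    game1 = (1, {"right": (35, 70),
                 "down": (2, {"left": (1, {"left": (50, 50),     # LS
                                           "down": (2, {"left": (60, 30),
                                                        "down": (1, {"left": (20, 20)})})}),
                              "right": (1, {"down": (2, {"right": (40, 40)})})})})

    print(solve(game1))   # (['down', 'right', 'down', 'right'], (40, 40)): SP

The solver reproduces the argument in the text: Player 1 would defect at x3 for 60, so Player 2, anticipating 30 rather than 40, goes right at x2, and play ends at the SP outcome (40, 40).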
Experimental Design. Table I shows four treatments that vary the
protocol for matching pairs in each experiment. An experiment consists
of groups of 8-16 subjects who are randomly assigned to pairs. In Repeat
Single we begin the session with 16 subjects, and each person plays
every other counterpart once, with their roles alternating between
Player 1 and 2. Under Contingent each player indicates her choice at
each node. Then the computer executes the play. Single means that all
pairs play the constituent game exactly once for a multiple of 20 times
the payoffs shown in the boxes of Figure 2.
Summary of Results. Table II lists the conditional outcome
frequencies for each payoff box. Reading across data row 1 for Single 1,
we observe that 13 of 26 Player 2s moved left at x2, indicating
cooperation; 10 of the 13 left plays ended with Player 1 choosing (50,
50); three Player 1s defected by playing down at x3; two of these
Player 2s accepted the defection and responded with (60, 30), while one
played down at x5 to punish Player 1, who then chose (20, 20). In
the right subgame, played by the other 13 Player 2s, 12 of 13 plays
ended at the SP outcome (40, 40); one play ended at (15, 30). The column
labeled E(π2 | Left) gives the expected profit, 44.6 cents, to Player
2 from playing left at x2, based on the relative frequencies of
subsequent play by both players. E(π1 | Down) is the
expected profit, 46.7 cents, to Player 1 from defecting at node
x3. Efficiency is the percentage of the cooperative total payoff
at (50, 50) that is realized by all players. Thus in Single 1, 85.5% of
the cooperative surplus is collected by all pairs. At SP, efficiency is
80%, so any greater efficiency implies a net social benefit from
cooperative initiatives.
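The reported expectations and efficiency follow directly from these counts; a short arithmetic check (our reconstruction from the frequencies quoted above):

    left  = {(50, 50): 10, (60, 30): 2, (20, 20): 1}   # 13 left-subgame plays
    right = {(40, 40): 12, (15, 30): 1}                # 13 right-subgame plays

    e_pi2_left = sum(p2 * n for (p1, p2), n in left.items()) / 13
    print(round(e_pi2_left, 1))    # 44.6 cents to Player 2 from left at x2

    defect = {(60, 30): 2, (20, 20): 1}   # plays following defection at x3
    e_pi1_down = sum(p1 * n for (p1, p2), n in defect.items()) / 3
    print(round(e_pi1_down, 1))    # 46.7 cents to Player 1 from playing down

    total = sum((p1 + p2) * n
                for d in (left, right) for (p1, p2), n in d.items())
    print(round(100 * total / (26 * 100), 1))   # ~85.6; the reported 85.5%,
                                                # up to rounding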
[TABULAR DATA FOR TABLE I OMITTED]
Result 1. Game theory predicts that in Single 1 all plays will be in
the right subgame. In fact half are in the left subgame. In Repeat
Single 1, we observe that experience does not help to achieve SP; now
58% play the left subgame. Contrary to the theory, we observe both too
much attempted cooperation and too few defections on these attempts.
Conditional on right-branch play, however, game theory does very well in
predicting the SP outcome for both Game 1 and Game 2 in all treatments.
Result 2. In all treatments it is (weakly) advantageous in the
expected payoff sense to play in the left subgame. This is indicated by
the fact that the expected profit to Player 2 of left-branch play is at
least 40.0 cents in all treatments, and 40 is the payoff to Player 2 at
SP. Thus, the right subgame play that we observe is no more profitable
than left subgame play in either Game 1 or Game 2.
Result 3. Defections by Player 1 at node x3 of Game 1 are not
profitable under the Single 1 and Repeat Single 1 treatments: the
expected profit from playing down is always less than 50, the payoff to
Player 1 from not defecting. Thus, the "punish cheaters" mental
module hypothesized by Cosmides [1985] is alive. Moreover, it is used
just enough to be effective, but not so much that efficiency is
badly compromised.
Result 4. Single 1 Contingent converts Game 1 from the extensive to
the normal form by requiring each player's choices at all nodes of
the tree to be made in advance for simultaneous play. It is equivalent
to expressing all payoff path outcomes in matrix form for simultaneous
choice by both players. Game theory hypothesizes that the normal and
extensive forms are equivalent, but previous research has shown that
this is not generally the case (Schotter, Weigelt and Wilson [1994]).
Comparing Single 1 with Single 1 Contingent, we see that left play
declines (right play increases) in the latter. Why? Our hypothesis is
that the extensive form, with sequential turn-taking moves, allows the
players to engage in a move-interpreting conversation. Thus, at node
x2, Player 2 has just received the message, "I moved down at
x1 because I want to do better than receive 35," from Player
1. If Player 2 now moves left, the message is "I am playing left
because I want to forgo the (40, 40) on the right in favor of (50, 50),
which is better for both of us. Also, note that if you respond by
playing down at x3, then I have the option of punishing you with
(20, 20)." This hypothetical dialogue is disrupted with
simultaneous play, although under strict rationality it is irrelevant:
Player 2's message is not credibly self-enforcing. But as we have
seen (Baron-Cohen [1995]), mind reading allows players to infer mental
states from actions and, as shown by these results, may lead them to
play differently in the extensive form than in the normal form.(7)
[TABULAR DATA FOR TABLE II OMITTED]
Result 5. The failure of the SP predicted outcome (Result 1)
motivated the study of Game 2 in which the cooperative (50, 50) outcome
cannot be supported by the prospect of punishment. Comparing Single 2
with Single 1 (rows 2 and 1 of Table II), we see a slight reduction in
left moves by Player 2s in Game 2. Play in left subgame 2 produces fewer
(50, 50) outcomes (50%) than in Game 1 (76.9%). This reduces the
expected profit of left play from 44.6 cents in Game 1 to a break-even
40 cents in Game 2. Clearly, the strategic difference between the two
games is making a difference in the game theoretic predicted direction.
The more interesting observation is that the trust element in Game 2 is
sufficient to yield cooperation for half of the pairs who play the left
subgame. This is consistent with results reported by Fehr, Kirchsteiger
and Riedl [1993] in labor market experiments, and by Berg, Dickhaut and
McCabe [1995] in investment dictator games. In these studies first
movers trusted second movers to reciprocate with no possibility of
punishment.
If you think of noncooperative game theory as applying to
"foes," in these extensive form experiments the theory
accurately predicts behavior in up to half the observations. The
relevance of traditional game theory for a large segment of this
population cannot be dismissed. However, the other half, who persist in
cooperation, also need to be explained and modeled. Their behavior is
not extinguished with experience: in Repeat Single 1, the percentage of
play in the left reciprocity branch increases to 58%. We conjecture that
minimal elements for a complete theory of mental phenomena in games of
strategy should include: (1) a friend-or-foe detection mechanism, and
(2) an intentionality detector mechanism, where the latter requires
extensive form play to achieve its full scope.
V. WHEN DO PEOPLE ABANDON RECIPROCITY IN FAVOR OF NONCOOPERATIVE PLAY?
The above examples illustrate a model of a mixture of individuals,
some of whose play reflects game theoretic principles, while
others' play reflects learned or innate responses involving
signaling, trust, punishment and other ingredients of reciprocity
behavior. In the latter case, this play serves the typical subject
well: such subjects outperform strict game-theoretic players, attaining
surplus-improving cooperative outcomes more often than theory predicts.
In this section we consider a contrary example to those above, one in
which subjects begin with their intuitive automatic responses, discover
that these responses cannot sustain good performance, then adjust in the
direction of the noncooperative rational expectations outcome predicted
by theory. In this case subjects are given common information, but this
is not sufficient to induce common knowledge in the sense of
expectations. (Also see Smith, Suchanek and Williams [1988] and Harrison
and McCabe [1992]). This, we argue, is because common information leaves
behavioral or strategic uncertainty unresolved. The latter is resolved
over time as subjects, in successive extensive form rounds, come to have
common expectations that predicted equilibrium outcomes will prevail.
McCabe [1989] reports a six person, six period, extensive form game
experiment using fiat money. In successive periods subjects use buy,
sell and null messages to trade, or not trade, a unit of fiat money
against cash dividend paying bonds. In the last period a bond holder
should not sell since he or she is left with worthless fiat money.
Similarly, the money should not be accepted on the penultimate round,
and by backward induction should not be accepted in the first period.
Although subjects have complete information on this payoff structure,
trade in the first play of the sequence yields trade in each period
until the last one. Repeating this constituent game 10 times (common
information) causes some, but not complete, unravelling backward from
the final trial. When subjects return for a second, 15-trial session,
the slow unravelling process continues, but trade persists, especially
in the early trials. In a third, 20-trial session, trade diminishes
further and is virtually eliminated by the 15th trial.
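The induction argument that trade should never start can be written in a few lines; a sketch (ours):

    T = 6                     # trading periods in the constituent game
    accept = {T: False}       # period T: leftover fiat money is worthless
    for t in range(T - 1, 0, -1):
        # Money is worth accepting now only if someone will accept it later.
        accept[t] = accept[t + 1]
    print([accept[t] for t in range(1, T + 1)])   # all False: no trade, in theory

The experiments show how slowly actual behavior converges to this theoretical fixed point.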
These results can be understood in terms of a model in which people
have been strongly conditioned by reciprocity experience to accept fiat
money in trade because they expect others to accept money when they
offer it in trade. This expectation is unconscious; they never ask
themselves why they and others accept money. It is a conditional
reciprocity response, which serves them effectively in daily life. They
are recruited to the laboratory where the conditions for ongoing
repeated exchange are not satisfied; in the end-game intrinsically
worthless money is refused in trade. This failure experience induces
them to reevaluate their unconscious, accustomed response to money. Very
slowly, in the limit, as play is repeated in the finite horizon
environment, trade converges to zero.(8)
VI. CONCLUSIONS
The experimental game results summarized in this paper suggest that
people invoke reward/punishment strategies in a wide variety of group
interactive contexts. These strategies are generally inconsistent with,
but more profitable than, the noncooperative strategies predicted by
game theory. However, in contrast to CT's emphasis on punishing
cheaters, we observe substantial use of positive as well as negative
reciprocity strategies, even in single-play games. Hence behavior is
much richer and more trusting than CT's model would predict.
A punish-cheaters mechanism has the advantage, as in tit-for-tat,
that it can sustain cooperation. But is a pure trust/trustworthy
mechanism sustainable? Recall that the "cooperate" strategy C
in the PD game cannot resist invasion by defectors. This is still an
open question, but Carmichael and MacLeod [1997] offer a model which is
encouraging. They analyze gift exchange showing that a stable
gift-giving custom, which does not depend upon the use of punishment
strategies, may emerge.
Consider the following hypothetical model of the mind for human
decision making. We inherit a circuitry which is modularized for solving
social exchange problems. But the switches are not set; that occurs
sometime in our maturation, requires no formal instruction, and is not a
self-aware process. In this sense it is like the way we
"learn" natural language without being taught. The switches
are set differently in different cultures, but the results are
functionally equivalent across cultures; in particular there is a
propensity to be programmed to try cooperation in dealing with other
people who are not detected as foes. But there is variation, so that we
can talk about population distributions of P, the probability that a
person will initiate cooperation; of Q, the probability that a person
will defect on an offer to cooperate; of R, the probability a defection
will be punished; of S, the probability that a person will be trusting;
of T, the probability that a person will be trustworthy; and so on.
These distributions of player types are an adaptation capable of
changing slowly over time.
Formal education is hard because it is concerned with conscious
learning, expression, and action, and does not come naturally, just as
written language is unnatural and hard to learn. When people are exposed
to economic principles, most find it extremely hard to learn about
comparative advantage, opportunity cost, gains from exchange, and Nash
equilibria. Many give up, but it does not follow that they will perform
poorly in an economics experiment, because they may be good at reading
other minds and at relying on their unconscious natural mental
mechanisms. These mechanisms help to define reputations that are applied
repeatedly across different life and laboratory games. A one-shot game
in the laboratory is part of a life-long
games. A one-shot game in the laboratory is part of a life-long
sequence, not an isolated experience that calls for behavior that
deviates sharply from one's reputational norm. Thus, we should
expect subjects to rely upon reciprocity norms in experimental settings,
unless they discover in the process of participating in a particular
experiment that reciprocity is punished and other behaviors are
rewarded. In such cases they abandon their natural instincts, and
attempt other strategies that better serve their interests.
We are grateful to the National Science Foundation for research
support under NSF #SBR-9210052 to the University of Arizona.
1. But see Rice [1996] for an experiment in which female fruit flies
are prevented from coevolving with males. After only 41 generations male
adaptation leads to a reduction in female survivorship in the genetic
battle of the sexes.
2. Research by neuroscientists on the amygdala, an almond-sized
structure deep in the temporal lobe of the brain, has shown that it is
directly involved in the perception of social signals. That the amygdala
participates in the social cognition and behavior of animals has been
known for many years, but recent studies have shown that these findings
extend to humans (Allman and Brothers [1994]; Adolphs et al. [1994]).
Thus, subjects with damaged amygdalas are unable to recognize or
distinguish expressions such as fear, surprise and anger on faces in
photographs of people. In one study, the subject had great difficulty
determining whether individuals were looking at her or away from her.
The amygdala operates preconsciously: "the evidence ... clearly
indicates that the amygdala is involved in the evaluation of complex
stimuli long before they are completely analyzed cognitively, and
probably long before they enter awareness" (Halgren [1992, 194]).
3. Pinker [1994, 227] for example provides the following exchange:
Woman: "I'm leaving you." Man: "Who is he?" If
you are not autistic you know what this conversation means.
4. Unlike deductive logic, a cheater detection mechanism must account
for intentionality. In the CT experiments exchange is sequential: first,
I give you the watch, then only later do you pay the $20. Here the clear
interpretation is that the second mover has cheated if he or she does
not pay the $20. This rules out the use of the biconditional statement,
"You give me your watch if and only if I give you $20,"
as a substitute for the conditional. Since the biconditional has an
ambiguous intertemporal interpretation, it is less clear that a contract
is implied. Suppose I give you $20, but you don't give me your
watch. The biconditional is clearly false even if I haven't cheated
you; when, for example, I give you the $20 altruistically. Note we can
write the more complicated logical statement, if (we agree to P iff Q),
then (P iff Q), to give the biconditional the correct intertemporal
interpretation without committing to the order of trade.
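The footnote's contrast between the conditional and the biconditional is a truth-table fact; a sketch (ours):

    from itertools import product

    # p = "watch given", q = "$20 paid"
    for p, q in product([False, True], repeat=2):
        conditional = (not p) or q    # "if P then Q": false only at (P, not-Q)
        biconditional = (p == q)      # "P iff Q": also false at (not-P, Q),
                                      # e.g., $20 given altruistically, no watch
        print(p, q, conditional, biconditional)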
5. Other experiments have examined violations of social contracts
when they do not involve cheating (Gigerenzer and Hug, cited in CT
[1992, 195]). Only 44% correctly solve the no-cheating version, while 83%
get the cheating version correct. Cosmides and Tooby (in preparation,
cited in CT [1992, 198]) have examined social contract problems which
distinguish violations due to cheating from violations due to innocent
mistakes. The cheating version is correctly solved by 68% of the
subjects, but the mistake version is only solved by 27% of the subjects.
Other social contract reasoning tasks asked subjects to detect altruists
instead of cheaters. People are not good at detecting altruists. In fact
where the rule was a social law (public good) more people detected
cheaters than altruists (CT [1992, 193-95 and footnote 17]).
6. Bolton, Katok and Zwick [1993], using a different version of the
dictator game and different double-blind procedures, find no
difference between their double-blind and single-blind treatments. The
results from such treatment variations are always of interest, but
claims that the experiments show that the results of HMSS do not
replicate exceed what is demonstrated. When examining treatment
variations on an earlier study, a second experimenter must first show
that he/she can duplicate the original results using the same treatment
and procedures, establishing that the results replicate with different
subjects and different experimenters. Only then can the results using
the new treatment, if different, be attributed to these conditions and
not to the subjects, experimenter, or procedures used. Thus, HMSS
replicated the procedures and results of Forsythe et al. [1994] before
attempting to compare them with the results from new treatments. Given
the sensitivity of the dictator game to procedures and instructions, it
is important that other researchers be able to replicate such findings
before changing the treatment. Eckel and Grossman [1996] replicated the
HMSS double-blind experiments before conducting their interesting new
treatment in which the recipient was the American Red Cross instead of
another subject. Terry Burnham also replicated the HMSS double-blind
experiments in a study currently in process (private communication).
7. Additional tests of the reciprocity hypothesis based on
comparisons of the extensive form with matrix normal form are reported
in McCabe, Smith and Lepore [1997]. The reciprocity hypothesis also
implies that SP outcomes will predominate under private information.
This prediction is strongly supported in McCabe, Rassenti and Smith
[1996b].
8. Similarly, Camerer and Weigelt [1988] report very slow convergence
in a sequential equilibrium reputation model.
REFERENCES
Adolphs, R., D. Tranel, H. Damasio and A. Damasio. "Impaired
Recognition of Emotion in Facial Expressions Following Bilateral Damage
to the Human Amygdala." Nature, 15 December 1994, 669-72.
Allman, John and Leslie Brothers. "Faces, Fear and the
Amygdala." Nature, 15 December 1994, 613-14.
Axelrod, Robert. The Evolution of Cooperation. New York: Basic Books,
1984.
Axelrod, Robert and William D. Hamilton. "The Evolution of
Cooperation." Science, 211, 1981, 1390-96.
Baron-Cohen, Simon. Mindblindness: An Essay on Autism and Theory of
Mind. Cambridge, Mass.: MIT Press, 1995.
Berg, Joyce, John Dickhaut and Kevin McCabe. "Trust, Reciprocity
and Social History." Games and Economic Behavior, 10(1), 1995,
122-42.
Bolton, Gary, Elena Katok and Rami Zwick. "Dictator Game Giving:
Rules of Fairness versus Random Acts of Kindness." Working Paper,
University of Pittsburgh, 1993.
Brewer, Marilyn and William Crano. Social Psychology. St. Paul,
Minn.: West Publishing Co., 1994.
Camerer, Colin and Keith Weigelt. "Experimental Tests of a
Sequential Equilibrium Reputation Model." Econometrica, January
1988, 1-36.
Carmichael, Lorne and W. Bentley MacLeod. "Gift Giving and the
Evolution of Cooperation." International Economic Review, 1997,
forthcoming.
Cosmides, Leda. "The Logic of Social Exchange: Has Natural
Selection Shaped How Humans Reason? Studies with the Wason Selection
Task." Cognition, 31(3), 1989, 187-276.
Cosmides, Leda and John Tooby. "From Evolution to Behavior:
Evolutionary Psychology as the Missing Link," in The Latest and the
Best: Essays on Evolution and Optimality, edited by John Dupre.
Cambridge, Mass.: MIT Press, 1987, 277-306.
-----. "Evolutionary Psychology and the Generation of Culture,
Part II." Ethology and Sociobiology, 10(13), 1989, 51-97.
-----. "Cognitive Adaptations for Social Exchange," in The
Adapted Mind, edited by Jerome Barkow, Leda Cosmides and John Tooby. New
York: Oxford University Press, 1992.
Davis, Douglas D. and Charles A. Holt. Experimental Economics.
Princeton, N.J.: Princeton University Press, 1993.
Eckel, Catherine and Philip Grossman. "Altruism in Anonymous
Dictator Games." Games and Economic Behavior, 16(2), 1996, 181-91.
Fehr, Ernst, George Kirchsteiger and Arno Riedl. "Does Fairness
Prevent Market Clearing: An Experimental Investigation." Quarterly
Journal of Economics, May 1993, 437-59.
Forsythe, Robert, Joel Horowitz, N. Eugene Savin and Martin Sefton.
"Replicability, Fairness and Pay in Experiments with Simple
Bargaining Games." Games and Economic Behavior, 6(3), 1994, 347-69.
Fudenberg, Drew and Jean Tirole. Game Theory. Cambridge, Mass.: MIT
Press, 1993.
Halgren, Eric. "Emotional Neurophysiology of the Amygdala within
the Context of Human Cognition," in The Amygdala: Neurobiological
Aspects of Emotion, Memory and Mental Dysfunction, edited by John
Aggleton. New York: Wiley-Liss, 1992.
Harrison, Glenn and Kevin McCabe. "Testing Noncooperative
Bargaining Theory in Experiments," in Research in Experimental
Economics, vol. 5., edited by R. Mark Isaac. Greenwich, Conn.: JAI Press, 1992, 137-69.
Hoffman, Elizabeth, Kevin McCabe, Jason Shachat and Vernon Smith.
"Preferences, Property Rights and Anonymity in Bargaining
Games." Games and Economic Behavior, 7(3), 1994, 346-80.
Hoffman, Elizabeth, Kevin McCabe and Vernon Smith. "Social
Distance and Other Regarding Behavior in Dictator Games." American
Economic Review, 86(3), 1996a, 653-60.
-----. "Trust, Punishment, and Assurance: Experiments on the
Evolution of Cooperation." Paper presented at the Economic Science
Association Annual Meeting, October, 1996b.
Homans, George C. The Nature of Social Sciences. New York: Harcourt,
Brace and World, 1967.
Isaac, Glynn L. "The Food-sharing Behavior of Protohuman Hominids." Scientific American, 238, 1978, 90-108.
Isaac, R. Mark, Kenneth F. McCue and Charles R. Plott. "Public
Goods Provision in an Experimental Environment." Journal of Public
Economics, February 1985, 51-74.
Isaac, R. Mark, David Schmitz and James M. Walker. "The
Assurance Problem in a Laboratory Market." Public Choice, September
1989, 217-36.
Isaac, R. Mark and James M. Walker. "Group Size Effects in
Public Goods Provision: The Voluntary Contributions Mechanism."
Quarterly Journal of Economics, February 1988a, 179-200.
-----. "Communication and Free-Riding Behavior: The Voluntary
Contributions Mechanism." Economic Inquiry, October 1988b, 585-608.
-----. "Costly Communication: An Experiment in a Nested Public
Goods Problem," in Contemporary Laboratory Research in Political
Economy, edited by Thomas Palfrey. Ann Arbor: University of Michigan Press, 1991, 269-86.
Isaac, R. Mark, James M. Walker and Susan H. Thomas. "Divergent
Evidence on Free Riding: An Experimental Examination of Possible
Explanations." Public Choice, 43(2), 1984, 113-49.
Isaac, R. Mark, James M. Walker and Arlington Williams. "Group
Size and the Voluntary Provision of Public Goods: Experimental Evidence
Utilizing Large Groups." Indiana University Working Paper, 1991.
Kahneman, Daniel, Jack Knetsch and Richard Thaler. "Fairness and
the Assumptions of Economics." Journal of Business, October 1986,
S285-S300.
Kaplan, Hillard and Kim Hill. "Food Sharing Among Ache Foragers:
Test of Explanatory Hypotheses." Current Anthropology, March 1985,
223-46.
Kreps, David, Paul Milgrom, John Roberts and Robert Wilson.
"Rational Cooperation in the Finitely Repeated Prisoners'
Dilemma." Journal of Economic Theory, 27(2), 1982, 245-52.
Marr, David. Vision: A Computational Investigation into the Human
Representation and Processing of Visual Information. San Francisco:
Freeman, 1982.
McCabe, Kevin. "Fiat Money as a Store of Value in an
Experimental Market." Journal of Economic Behaviors and
Organization, October 1989, 215-31.
McCabe, Kevin, Stephen Rassenti and Vernon Smith. "Game Theory
and Reciprocity in Some Extensive Form Bargaining Games."
Proceedings of the National Academy of Sciences, November 1996a, 13421-28.
-----. "Reciprocity, Trust and Payoff Privacy in Extensive Form
Bargaining." Manuscripts, Economic Science Laboratory, University
of Arizona, November 1996b.
McCabe, Kevin, Vernon Smith and Michael Lepore. "Intentionality
Signalling: Why Game Form Matters." Manuscript, Economic Science
Laboratory, University of Arizona, November 1997.
Pinker, Steven. The Language Instinct. New York: William Morrow,
1994.
Rapoport, A. "Prisoner's Dilemma," in The New
Palgrave, vol. 3, edited by John Eatwell, Murray Milgate and Peter
Newman. London: Macmillan, 1987, 973-76.
Rice, William R. "Sexually Antagonistic Male Adaptation
Triggered by Experimental Arrest of Female Evolution." Nature, 16
May 1996, 232-34.
Roth, Alvin, Vesna Prasnikar, Masahiro Okuno-Fujiwara and Shmuel
Zamir. "Bargaining and Market Behavior in Jerusalem, Ljubljana,
Pittsburgh and Tokyo: An Experimental Study." American Economic
Review, December 1991, 1068-95.
Schotter, Andrew, Keith Weigelt and Charles Wilson. "A
Laboratory Investigation of Multiperson Rationality and Presentation
Effects." Games and Economic Behavior, May 1994, 445-68.
Selten, Reinhard. "Reexamination of the Perfectness Concept for
Equilibrium Points in Extensive Games." International Journal of
Game Theory, 4(1), 1975, 25-55.
Selten, Reinhard and Rolf Stoecker. "End Behavior in Sequences
of Finite Prisoner's Dilemma Supergames." Journal of Economic
Behavior and Organization, March 1986, 47-70.
Smith, Vernon L., Gerry L. Suchanek and Arlington W. Williams.
"Bubbles, Crashes and Endogenous Expectations in Experimental Spot
Asset Markets." Econometrica, September 1988, 1119-51.
Tooby, John and I. DeVore. "The Reconstruction of Hominid Behavioral
Evolution through Strategic Modelling," in Primate Models of Human
Behavior, edited by Warren G. Kinzey. New York: SUNY Press, 1987, 183-237.
Trivers, Robert L. "The Evolution of Reciprocal Altruism."
Quarterly Review of Biology, 46(4), 1971, 35-57.
Wason, Peter. "Reasoning," in New Horizons in Psychology,
edited by Brian M. Foss. Harmondsworth: Penguin, 1966, 135-51.
Hoffman, Elizabeth: Provost and Vice Chancellor for Academic Affairs,
and Professor of Economics, History, and Psychology, University of
Illinois at Chicago, Ill., Phone 1-312-413-3450, Fax 1-312-413-3455,
E-mail ehoffman@uic.edu
McCabe, Kevin A.: ESL Senior Research Scholar, Professor of
Economics, and IFREE Distinguished Research Scholar, Economic Science
Laboratory, University of Arizona, Tucson, Phone 1-520-621-3830, Fax
1-520-621-5642, E-mail kmccabe@econlab.arizona.edu
Smith, Vernon L.: Regents' Professor and McClelland Professor of
Economics, Economic Science Laboratory, University of Arizona, Tucson,
Phone 1-520-621-4747, Fax 1-520-621-5642, E-mail
smith@econlab.arizona.edu