
Article Information

  • Title: Are People Sometimes Too Honest? Increasing, Decreasing, and Negative Returns to Honesty.
  • Author: Conlon, John R.
  • Journal: Southern Economic Journal
  • Print ISSN: 0038-4038
  • Year: 2000
  • Issue: July
  • Language: English
  • Publisher: Southern Economic Association
  • Keywords: Economics; Honesty

Are People Sometimes Too Honest? Increasing, Decreasing, and Negative Returns to Honesty.




Atin Basuchoudhary [*]

John R. Conlon [+]

We show that sender honesty can hurt receivers in simple signaling games. The receiver faces a trade-off between its ability to work with senders and the quality of information it can get and use from them. Our example also contradicts recent work suggesting that returns to honesty should be increasing. Positive, increasing returns are restored in our model if the receiver can precommit.

1. Introduction

Ordinary intuition suggests that we prefer to deal with honest people (Sobel 1985). Honest people give reliable information, so we can act on that information with confidence. [1] This paper explores the limits of this intuition in the context of a simple signaling game. In this game, contrary to intuition, the receiver's expected payoff decreases in the probability that the sender is honest.

The example we consider involves an employee (sender) who must decide whether to go to a mentor/supervisor (receiver) for advice. The supervisor is more experienced than the employee and so will be able to help the employee solve the problem. However, if the supervisor thinks that the problem is big, she may investigate in order to assign blame for the problem.

Thus, an employee with a big problem may be reluctant to go to the supervisor for advice. If the employee is dishonest, he may be able to conceal the true magnitude of the problem while still benefitting from the supervisor's advice. [2] However, an honest employee with a big problem may not want to go to the supervisor since, if he goes, he will always reveal the true magnitude of the problem. [3] Since the supervisor wants all employees with problems to come to her for advice, she may therefore be hurt by the presence of honest employees, whose problems remain unsolved because they do not get the benefit of her experience and so are less able to resolve their problems. [4]

This reasoning can be extended to a number of public health applications. Organizations that combat AIDS are often faced with at-risk people who engage in illegal activities (e.g., intravenous drug users or prostitutes). These organizations want at-risk people to reveal their types and use services (like provision of clean syringes or condoms) that may help reduce the risk of AIDS. However, at-risk people may fear that the health agency will reveal compromising information to law enforcement authorities. This may deter at-risk people from truthful revelation. Thus, if at-risk people are honest, or for some other reason have difficulty concealing their illegal behavior from the organization, then they may choose not to use the services of the organization at all.

Similarly, victims of domestic violence may need to go to a hospital emergency room (ER) for the treatment of injuries from the abuse. The ER personnel would like to ask about the source of the injury and encourage the victim to go to a domestic violence center for long-term help. However, the victim may be unwilling to face the embarrassment of questioning. To avoid these questions, victims who are uncomfortable giving evasive answers may choose not to go to the hospital for treatment.

The provision of contraceptives in schools presents a similar problem. Sexually active teens with many sexual partners may be especially embarrassed to approach the school nurse for contraceptives. Honest but promiscuous teenagers may prefer unprotected sex rather than admit they have many partners.

There may be negative returns to honesty in other situations as well. For example, potential survey respondents may not be certain about the confidentiality of their responses. If certain questions are especially embarrassing, then honest respondents may choose not to participate in the survey at all rather than answer the embarrassing questions truthfully. Thus, embarrassing questions could lead to sample selection bias for all questions on the survey. An increase in the number of honest potential respondents could therefore lead to less accurate information being collected by the survey.

This phenomenon will arise whenever people have difficulty in hiding compromising behavior, regardless of their actual honesty; that is, any sender who cannot hide compromising issues from the receiver may avoid the receiver altogether. This avoidance may hurt the receiver.

Many organizations, however, understand the need for this kind of confidentiality. They therefore try to commit themselves to respect the sender's confidentiality so senders feel safe coming to them.

This suggests, paradoxically, that the receiver should commit herself to ignoring certain types of useful information. In the example we use for illustration, the supervisor can do better if she commits herself to not attach blame for problems, or, if blame is attached, to not discipline the employee too harshly. Section 4 suggests that returns to the supervisor from employee honesty will generally be positive if the receiver can precommit herself in this way.

Our example is also interesting because the receiver's expected payoff fails to be convex in probabilities of sender types. In single decision-maker environments, decision makers are better off ex ante if some of their uncertainty is resolved before they make a decision (Mossin 1969; Spence and Zeckhauser 1972). This, in turn, implies that expected payoffs are convex in the underlying probabilities. [5] Such convexities can then lead to certain kinds of increasing returns to information, as suggested by Radner and Stiglitz (1984).

This convexity also generalizes to certain multi-decision-maker environments (see, e.g., Malueg and Xu 1993; Conlon 1999). In some signalling games, expected payoffs to the receiver are convex in the probability that the sender is honest, yielding a type of increasing returns to honesty (Conlon 1999). [6] It will be interesting to explore how general this sort of increasing returns to honesty result is.

Section 2 presents a simple initial game without talk. Section 3 presents our central game and shows that sender honesty can yield negative and decreasing returns to the receiver. Section 4 shows that positive returns and convexity are restored if the receiver can precommit, and section 5 concludes.

2. A Simple Initial Signalling Game

This and the next section present a signalling game with negative and decreasing returns to the receiver from sender honesty. This section presents a very simple signalling game--Game 1--and its equilibrium solution. The next section then extends the game to allow for talk. This extended Game 2 then illustrates negative and nonconvex returns to honesty. The solution to Game 2 is obtained by appropriate substitutions into the solution for Game 1.

There are two players. Player I, the employee, has two possible types. He can be an employee with a small problem ([I.sub.SP]) or an employee with a big problem ([I.sub.BP]). The probability of his having a small problem is [alpha] and the probability of his having a big problem is 1 - [alpha]. Employees know their types.

Player II is a mentor/supervisor. In her mentoring role, she wants employees to come to her for advice if they have problems. However, as a supervisor, if she believes that the employee has a big problem, she will want to investigate the situation to see whether the problem is the employee's fault. The employee does not want to be investigated, especially if the problem is, in fact, big. Thus, if the problem is big, he may not want to go to the supervisor for advice in the first place. Thus, the supervisor's desire to eliminate incompetent employees conflicts with her desire to help employees with problems.

The supervisor cannot tell whether the employee has a small problem or a big problem. Thus, if an employee goes to her for advice, she must use Bayes' rule to determine the probability that the employee's problem is big or small.

Thus, Player I either goes (G) to the supervisor or doesn't go (DG). Once Player I goes to the supervisor, the supervisor can either check up on the employee (C) or not check up on the employee (NC). In either case, the supervisor wants the employee to come to her so she can give him advice.

We structure the payoffs as follows:

(a) [I.sub.SP] will always prefer to go to the supervisor (even if the supervisor checks up on him, an employee with a small problem does not worry much about this since his problem is minor).

(b) [I.sub.BP] will prefer to go to the supervisor if there is no risk of being checked up on and will prefer not to go if the investigation is certain.

(c) II likes it when [I.sub.SP] or [I.sub.BP] goes to her, whether or not she checks up on them, since she wants to give the employee advice to help the employee solve the problem, whether big or small.

(d) If II knows that she is facing a type of Player I who has a big problem, she will prefer to check up on him.

(e) If II knows that she is facing a type of Player I who has a small problem, then she will prefer not to check up on him (e.g., it is not worth the effort).

This game (Game 1) is represented in Figure 1, with Player I's payoffs given first. Thus, if I doesn't go (DG), he gets 3 and II gets 2, regardless of I's type. If [I.sub.SP] goes (G), he gets 4 [greater than] 3 or 5 [greater than] 3, depending on whether II checks up on him (C) or not (NC). Thus, [I.sub.SP] always prefers to go. Meanwhile, II gets 4 if she checks up on [I.sub.SP] and 5 if she does not. Thus, if she knows I is type [I.sub.SP], she will prefer not to check up on him. If [I.sub.BP] goes, he gets 0 [less than] 3 if checked up on and 5 [greater than] 3 if not. Thus, he will not go if he knows II will check up on him, but he will go if he knows II will not check up on him. Finally, II gets 4 if she checks up on [I.sub.BP] and 3 if she does not. Thus, if she knows that I is type [I.sub.BP], she will prefer to check up on him. [7]
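
For reference, these payoffs can be collected in a small lookup table. The sketch below is purely illustrative (the paper itself contains no code, and the dictionary name and key layout are our own); it records the entries above as (employee payoff, supervisor payoff) pairs:

```python
# Game 1 payoffs as described above (cf. Figure 1).
# Key: (employee type, employee move, supervisor move) -> (employee payoff, supervisor payoff).
# When the employee does not go (DG), the supervisor makes no move, recorded here as None.
PAYOFFS = {
    ("SP", "DG", None): (3, 2),
    ("BP", "DG", None): (3, 2),
    ("SP", "G", "C"):  (4, 4),
    ("SP", "G", "NC"): (5, 5),
    ("BP", "G", "C"):  (0, 4),
    ("BP", "G", "NC"): (5, 3),
}
```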

We use [[sigma].sub.I] and [[sigma].sub.II] to represent the behavioral strategies of Players I and II, respectively. Thus, the supervisor's strategy is given by [[sigma].sub.II](C\G), the probability she checks up on the employee given that the employee goes to her. The strategy of the employee with the big problem is described by [[sigma].sub.I](G\[I.sub.BP]), the probability he goes to the supervisor given he is a big-problem employee. Employees with small problems will clearly always go to the supervisor since this always dominates not going for this type of employee.

We now determine the sequential equilibrium in this game. Thus, let [mu] represent the supervisor's beliefs. To find the equilibrium, we first determine the reaction functions for the supervisor (Player II) and for the big-problem employee (Player [I.sub.BP]).

The supervisor's problem depends on her assessment of the probability that the employee has a small problem (is of type [I.sub.SP]) or a big problem (is of type [I.sub.BP]), given that the employee goes (G) to the supervisor. Using Bayes' rule, the probability, [mu]([I.sub.SP]\G), that the employee has a small problem, given that he goes to the supervisor, is

\mu(I_{SP} \mid G) = \frac{\alpha}{\alpha + (1 - \alpha)\,\sigma_I(G \mid I_{BP})}. \qquad (1)

Next, the supervisor is indifferent between checking (C) and not checking (NC) when

4\,\mu(I_{SP} \mid G) + 4\,\mu(I_{BP} \mid G) = 5\,\mu(I_{SP} \mid G) + 3\,\mu(I_{BP} \mid G). \qquad (2)

This is because the left-hand side of Equation 2 is the supervisor's expected payoff from C and the right-hand side is her expected payoff from NC.

Next, using [mu]([I.sub.BP]\G) = 1 - [mu]([I.sub.SP]\G) in Equation 2 gives [mu]([I.sub.SP]\G) = 1/2. Finally, plugging this into Equation 1 and solving for [[sigma].sub.I](G\[I.sub.BP]) gives

\sigma_I(G \mid I_{BP}) = \frac{\alpha}{1 - \alpha}. \qquad (3)
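
Written out, the algebra behind Equations 1-3 is simply the following (a worked restatement of the steps just described, not additional material):

```latex
% Substitute mu(I_BP|G) = 1 - mu(I_SP|G) into Equation 2, then use Equation 1.
\begin{aligned}
4 &= 5\,\mu(I_{SP} \mid G) + 3\bigl(1 - \mu(I_{SP} \mid G)\bigr)
  \;\Longrightarrow\; \mu(I_{SP} \mid G) = \tfrac{1}{2},\\
\tfrac{1}{2} &= \frac{\alpha}{\alpha + (1 - \alpha)\,\sigma_I(G \mid I_{BP})}
  \;\Longrightarrow\; \sigma_I(G \mid I_{BP}) = \frac{\alpha}{1 - \alpha}.
\end{aligned}
```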

Thus, if [[sigma].sub.I](G\[I.sub.BP]) [greater than] [alpha]/(1 - [alpha]), then the supervisor will always check, so [[sigma].sub.II](C\G) = 1. If [[sigma].sub.I](G\[I.sub.BP]) [less than] [alpha]/(1 - [alpha]), then the supervisor will never check, so [[sigma].sub.II](C\G) = 0. If [[sigma].sub.I](G\[I.sub.BP]) = [alpha]/(1 - [alpha]), then the supervisor will be indifferent between checking and not checking, so [[sigma].sub.II](C\G) can be anything in the interval [0, 1]. Thus, the supervisor's reaction correspondence is given by

R_{II}\bigl(\sigma_I(G \mid I_{BP})\bigr) =
\begin{cases}
\{1\} & \text{if } \sigma_I(G \mid I_{BP}) > \alpha/(1 - \alpha) \\
[0, 1] & \text{if } \sigma_I(G \mid I_{BP}) = \alpha/(1 - \alpha) \\
\{0\} & \text{if } \sigma_I(G \mid I_{BP}) < \alpha/(1 - \alpha).
\end{cases}
\qquad (4)

Next, an employee with a big problem (type [I.sub.BP]) is indifferent between going to the supervisor (G) and not going (DG) when

0 \cdot \sigma_{II}(C \mid G) + 5\,\bigl(1 - \sigma_{II}(C \mid G)\bigr) = 3. \qquad (5)

Here the left-hand side is the big-problem employee's expected payoff from going to the supervisor and the right-hand side is his payoff from not going. Solving Equation 5 gives

\sigma_{II}(C \mid G) = 0.4. \qquad (6)

Thus, if [[sigma].sub.II](C\G) [less than] 0.4, the big-problem employee will always go to the supervisor, so [[sigma].sub.I](G\[I.sub.BP]) = 1. If [[sigma].sub.II](C\G) [greater than] 0.4, the big-problem employee will never go to the supervisor, so [[sigma].sub.I](G\[I.sub.BP]) = 0, and if [[sigma].sub.II](C\G) = 0.4, then the big-problem employee is indifferent between going and not going, so [[sigma].sub.I](G\[I.sub.BP]) may be anything in [0, 1]. Thus, the big-problem employee's reaction correspondence is given by

R_I\bigl(\sigma_{II}(C \mid G)\bigr) =
\begin{cases}
\{1\} & \text{if } \sigma_{II}(C \mid G) < 0.4 \\
[0, 1] & \text{if } \sigma_{II}(C \mid G) = 0.4 \\
\{0\} & \text{if } \sigma_{II}(C \mid G) > 0.4.
\end{cases}
\qquad (7)

We can now solve for the equilibrium. This is given by pairs of probabilities ([p.sub.G], [p.sub.C]) such that [p.sub.G] ∈ [R.sub.I]([p.sub.C]) and [p.sub.C] ∈ [R.sub.II]([p.sub.G]), as shown in Figure 2. The equilibrium strategies of the supervisor and the big-problem employee are obtained separately for three different cases, depending on whether [alpha]/(1 - [alpha]) is less than, greater than, or equal to one.

Case 1, [alpha]/(1 - [alpha]) [less than] 1

In this case, the big-problem employee's strategy is to go to the supervisor with probability [[sigma].sub.I](G\[I.sub.BP]) = [p.sub.G] = [alpha]/(1 - [alpha]) and the supervisor's strategy, given that the employee goes to her, is to check up on him with probability [[sigma].sub.II](C\G) = [p.sub.c] = 0.4. The supervisor's belief at her one information set is that there is a probability of [mu]([I.sub.SP]\G) = 0.5 that the employee's problem is small rather than big, given that the employee has gone to her (plug [[sigma].sub.I](G\[I.sub.BP]) = [alpha]/(1 - [alpha]) into Equation 1). This is consistent with Bayes' rule and also makes the supervisor's mixing at her information set rational. This therefore yields a sequential equilibrium in this case.

Case 2, [alpha]/(1 - [alpha]) [greater than] 1

In this case, the employee with a big problem always goes to the supervisor and the supervisor never checks up on him. Here the supervisor's belief at her information set is that there is a probability of [mu]([I.sub.SP]\G) = [alpha] that the employee's problem is small rather than big, given that the employee has gone to her for advice (plug [[sigma].sub.I](G\[I.sub.BP]) = 1 into Equation 1). Given that [alpha]/(1 - [alpha]) [greater than] 1, so [alpha] [greater than] 0.5, the supervisor's decision at her information set, that is, not to check, is rational. Again, this yields a sequential equilibrium.

Case 3, [alpha]/(1 - [alpha]) = 1

In this case, the supervisor has an infinite array of possible equilibrium strategies, that is, 0 [leq] [[sigma].sub.II](C\G) [leq] 0.4, while the big-problem employee will always go to the supervisor. The supervisor's beliefs are again [mu]([I.sub.SP]\G) = 0.5, as in case 1, so the supervisor's mixing is again rational, given beliefs consistent with Bayes' rule, yielding a sequential equilibrium.

Intuitively, in case 1, there is a small prior probability, [alpha], that the problem is small, so the probability that the problem is big, 1 - [alpha], is large. Thus, the big-problem employee does not always go to the supervisor, and when he goes, the supervisor sometimes checks whether the problem is big or small. In case 2, by contrast, the problem is not likely to be big, so the supervisor will not check and the employee has no reason not to go to the supervisor. Case 3 is intermediate between these extremes.
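
For concreteness, the three cases can be summarized in a short computational sketch (illustrative only; the function name and the particular selection made in case 3 are our own choices):

```python
def game1_equilibrium(alpha: float):
    """Sequential-equilibrium strategies of Game 1 for 0 < alpha < 1.

    Returns (sigma_I(G|I_BP), sigma_II(C|G)).
    """
    odds = alpha / (1 - alpha)
    if odds < 1:      # Case 1: big-problem employee mixes, supervisor checks with probability 0.4
        return odds, 0.4
    if odds > 1:      # Case 2: big-problem employee always goes, supervisor never checks
        return 1.0, 0.0
    # Case 3 (alpha = 0.5): the employee always goes and any checking
    # probability in [0, 0.4] is an equilibrium; we return one selection.
    return 1.0, 0.0

# Example: game1_equilibrium(0.25) -> (0.333..., 0.4); game1_equilibrium(0.75) -> (1.0, 0.0).
```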

3. The Extended Game with Honest and Dishonest Players

We now consider an extension of this game, Game 2. If Player I goes to the supervisor, then he must declare whether he is a small-problem employee (SP) or big-problem employee (BP). [8] We assume that Player I is honest (H) with probability [beta] and dishonest (DH) with probability 1 - [beta] independent of whether his problem is big or small. Thus, the probability of being an honest small-problem employee (HSP) is [alpha][beta], the probability of being a dishonest small-problem employee (DHSP) is [alpha](1 - [beta]), the probability of being an honest big-problem employee (HBP) is (1 - [alpha])[beta], and the probability of being a dishonest big-problem employee (DHBP) is (1 - [alpha])(1 -[beta]).

We structure an honest Player I's payoffs so that he will always prefer not to lie. Specifically, honest Player I's payoffs are identical to his payoffs in Game 1 except that he loses six units of utility from lying. Thus, for an honest Player I, talk is not cheap since he suffers a large disutility from lying. [9] Note that he may be willing to conceal information, for example, by avoiding II altogether; that is, an honest player suffers extra disutility from overt lying but not from concealing information.

If Player I is a dishonest type, however, his payoffs are not affected directly by the statements he makes, so his payoffs are identical to those in Game 1 regardless of what he says; that is, for a dishonest Player I, talk is genuinely cheap, so he will lie or tell the truth depending on how he wants to manipulate Player II. This extended game is represented in Figure 3. The payoff structure shown in Figure 1 is maintained here except that the honest employees lose six units of utility from lying.

We now construct an equilibrium for this extended game. It will then turn out that, in this equilibrium, Player II (the supervisor) will be hurt by increases in the probability, [beta], that Player I (the employee) is honest and that her expected payoff will fall at an increasing rate as [beta] rises. This yields negative and decreasing returns to sender honesty.

Some parts of Player I's strategy are determined by simple dominance arguments. First, if [I.sub.HBP] (the honest big-problem employee) goes to the supervisor, he will admit that he has a big problem. Also, [I.sub.HSP] and [I.sub.DHSP] (the small-problem types of employee) will always go to the supervisor irrespective of their honesty since they are not ashamed of their small problem. Of course, [I.sub.HSP] (the honest small-problem type of employee) will also say he has a minor problem.

It seems reasonable to expect that [I.sub.DHSP] (the dishonest small-problem employee) will say he has a small problem since no one would want to convince the supervisor that their problems are bigger than they really are. However, for large [alpha] and small [beta], there is an equilibrium, albeit implausible, where [I.sub.DHSP] declares BP, but the supervisor assumes that he probably has a small problem anyway. This can happen because talk is cheap for the dishonest types of Player I. Thus, there are sometimes equilibria in which the dishonest types of Player I act as if BP has no real meaning at all. If the dishonest types with small problems are sufficiently important ([alpha] and 1 - [beta] big), then the supervisor will simply assume that everyone has a small problem regardless of what they say, and this implausible equilibrium becomes a possibility.

The more likely possibility, of course, is for a dishonest small-problem employee to declare himself to be a small-problem employee. This gives BP and SP their literal meanings so that honest types of Player I can use them correctly and be understood by Player II.

Thus, we consider the equilibrium for which, if the employee goes to the supervisor and declares BP, the supervisor assumes that he really is a big-problem employee and checks up on him. There is only one such equilibrium, which we now derive.

First, if [I.sub.HBP] went to the supervisor, he would admit BP, so the supervisor would check up on him. Thus, he simply does not go. In addition, [I.sub.DHSP] (the dishonest small-problem employee) will go and declare SP, as argued above. Similarly with [I.sub.HSP] by the simple dominance argument above. Finally, if [I.sub.DHBP] (the dishonest big-problem employee) goes to the supervisor, then he also declares SP. However, [I.sub.DHBP] may go to the supervisor and claim SP or may not go to the supervisor at all.

Since honest big-problem employees do not go to the supervisor and everyone who goes to the supervisor claims to have a small problem, the supervisor can ignore the honest big-problem employees at node [B.sub.2] and the claims of the employees who actually go to her. Therefore, the only decision nodes of Player II reached with positive probability in equilibrium are the nodes following SP leading from [A.sub.1], [A.sub.2], and [B.sub.1]; that is, Player II can act as if the only types in the game are [I.sub.DHSP], [I.sub.HSP], and [I.sub.DHBP] and there is no talking (since all players who come to her claim SP). Similarly, the only other player with a nontrivial decision problem, Player [I.sub.DHBP], can ignore the possibility of the type [I.sub.HBP] since Player II does. Thus, Player II and types [I.sub.DHSP], [I.sub.HSP], and [I.sub.DHBP] of Player I can all act as if the game began at nodes [A.sub.1], [A.sub.2], or [B.sub.1] and there was no talking.

The game, conditional on being at [A.sub.1], [A.sub.2], or [B.sub.1] (i.e., ignoring the honest big-problem employees), then reduces to one like Game 1. However, in this new situation, the probability of having a small problem, conditional on being [I.sub.DHSP], [I.sub.HSP], or [I.sub.DHBP] (i.e., being at [A.sub.1], [A.sub.2], or [B.sub.1]), is now

\gamma = \frac{\alpha}{\alpha + (1 - \alpha)(1 - \beta)}. \qquad (8)

In other words, being at node [A.sub.1] or [A.sub.2] in Game 2 corresponds to being at node A in Game 1 (since Player II only cares about the size of the problem; she has no direct concern with Player I's honesty). Similarly, being at node [B.sub.1] in Game 2 corresponds to being at node B in Game 1. Thus, by replacing [alpha] in Game 1 with [gamma], we get the equilibrium strategies for Game 2.
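
A corresponding sketch of this substitution (again illustrative; it reuses the game1_equilibrium sketch above and assumes 0 [less than] [alpha] [less than] 1 and 0 [leq] [beta] [less than] 1):

```python
def game2_equilibrium(alpha: float, beta: float):
    """Equilibrium of Game 2 studied in the text, for 0 < alpha < 1 and 0 <= beta < 1.

    Returns (sigma_I(G|I_DHBP), sigma_II(C|G, SP)): Game 1 solved at gamma instead of alpha.
    """
    gamma = alpha / (alpha + (1 - alpha) * (1 - beta))  # Equation 8
    return game1_equilibrium(gamma)
```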

As with Game 1, we get equilibria in Game 2 for three cases, depending on whether [gamma]/(1 - [gamma]) = [alpha]/[(1 - [alpha])(1 - [beta])] is less than one (case 1), greater than one (case 2), or equal to one (case 3).

Case 1, [alpha]/[(1 - [alpha])(1 - [beta])] [less than] 1

Here the strategy for the dishonest big-problem employee is [[sigma].sub.I](G\[I.sub.DHBP]) = [alpha]/[(1 - [alpha])(1 - [beta])] and the strategy for the supervisor, given that the employee goes and claims that the problem is small, is [[sigma].sub.II](C\G, SP) = 0.4. The supervisor has two information sets: one for a claim of SP and one for a claim of BP. At the SP information set, the probability [mu]([I.sub.HSP] or [I.sub.DHSP]\G, SP) = 1/2, as before (by Bayes' rule, similar to case 1 for the simpler game above). The BP information set is not reached, so [mu]([I.sub.HSP] or [I.sub.DHSP]\G, BP) is not pinned down by Bayes' rule. However, if this probability is less than 1/2, the supervisor will check up on the employee in response to BP. This then gives a range of assessments at the BP information set, which, with the above strategies and beliefs at SP, form a sequential equilibrium. Note, incidentally, that [mu]([I.sub.HSP] or [I.sub.DHSP]\G, BP) [less than] 1/2 makes sense since it says that, if an employee claims to have a big problem, the supervisor assumes that the problem is probably not small. [10]

As mentioned above, there is sometimes another equilibrium in which [I.sub.DHSP] claims to have a big problem. However, it seems reasonable to assume that no one would claim to have a big problem if their problem was actually small, so this other equilibrium is implausible. [11]

Case 2, [alpha]/[(1 - [alpha])(1 - [beta])] [greater than] 1

The lying big-problem employee always goes to the supervisor, and the supervisor never checks up on anybody. Again, beliefs at the SP information set satisfy Bayes' rule, and the supervisor's actions are rational there. Again, the supervisor's beliefs after BP should put sufficient weight on the problem being big that the supervisor checks up on employees who declare BP. Otherwise, the honest big-problem employee will go to the supervisor and declare BP.

Case 3, [alpha]/[(1 - [alpha])(1 - [beta])] = 1

The supervisor has an infinite number of strategies open to her in the range 0 [leq] [[sigma].sub.II](C\G, SP) [leq] 0.4, while the dishonest big-problem employee will always go to the supervisor. Beliefs are obvious from the above. [12]

We now calculate the expected payoffs for the supervisor. Clearly, from the above discussion, we need to calculate these expected payoffs for three cases. However, we can lump cases 2 and 3 together because the supervisor's expected payoff is independent of the choice of equilibria in case 3. Breaking the cases down according to [alpha], the expected payoff, [[pi].sub.II], to the supervisor then becomes

\pi_{II} =
\begin{cases}
2 + 4\alpha & \text{if } \alpha < (1 - \beta)/(2 - \beta) \\
3 - \beta + (2 + \beta)\alpha & \text{if } \alpha \geq (1 - \beta)/(2 - \beta).
\end{cases}
\qquad (9)

Now, 2 + [beta] [less than] 4 since [beta] [less than] 1. Thus, as [alpha] (the probability that the employee is a small-problem employee) increases, the supervisor's expected payoff increases but at a decreasing rate (see Figure 4); that is, in contrast to single decision-maker problems, the expected payoffs are not convex in the underlying probability [alpha].

Next consider the supervisor's expected payoff as a function of [beta], the probability of honesty. Breaking the cases down according to [beta], [[pi].sub.II] is given by

\pi_{II} =
\begin{cases}
2 + 4\alpha & \text{if } \beta < (1 - 2\alpha)/(1 - \alpha) \\
3 + 2\alpha - (1 - \alpha)\beta & \text{if } \beta \geq (1 - 2\alpha)/(1 - \alpha).
\end{cases}
\qquad (10)

As [beta] increases from zero to [beta] = (1 - 2[alpha])/(1 - [alpha]), the expected payoff function stays constant in [beta]. But as [beta] increases beyond this point, the expected payoffs are decreasing in [beta]. Thus, we see that we have both decreasing and negative returns to honesty, again in contrast to standard intuition (see Figure 5).
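
Equation 10 is easy to tabulate. The following sketch (illustrative; the parameter value [alpha] = 0.25 in the comment is our own choice) makes the flat region and the negative returns explicit:

```python
def supervisor_payoff(alpha: float, beta: float) -> float:
    """Supervisor's equilibrium expected payoff, Equation 10 (0 < alpha < 1)."""
    if beta < (1 - 2 * alpha) / (1 - alpha):   # dishonest big-problem types still mix
        return 2 + 4 * alpha
    return 3 + 2 * alpha - (1 - alpha) * beta  # all dishonest big-problem types go

# With alpha = 0.25 the threshold (1 - 2*alpha)/(1 - alpha) is 2/3: the payoff is 3.0
# for beta <= 2/3 and then falls linearly to 2.75 at beta = 1 (negative returns to honesty).
```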

To understand what's going on here, first note that the probability that the employee goes to the supervisor is

\alpha + (1 - \alpha)(1 - \beta)\,\sigma_I(G \mid I_{DHBP}). \qquad (11)

The first term here is the probability that the problem is small. The second term is the probability that the problem is big but the worker is dishonest times the probability that this dishonest worker goes (and so, incidentally, declares SP).

Next, consider the negative returns issue. When [beta] is big, the employee is probably honest. Therefore, when the employee goes to the supervisor claiming to have a small problem, the supervisor concludes that the problem probably is small and so does not check up on the employee. Thus, dishonest big-problem types always go to the supervisor, so [[sigma].sub.I](G\[I.sub.DHBP]) = 1. This implies that the only type of employee who does not go to the supervisor is the honest big-problem type. Thus, as the probability, [beta], that the employee is honest increases, the aggregate probability (Eqn. 11) that the employee goes to the supervisor falls. Since the supervisor prefers to have employees go to her if they have problems, this reduction in the probability that employees go to her reduces her expected utility.

In other words, the supervisor wants employees with problems to go to her regardless of whether or not they are honest about the size of the problem. However, honest big-problem employees do not go to her since they realize that they will reveal that their problem is big. Thus, if the probability that a player is honest increases past a certain point, the number of employees going to the supervisor with their problems falls, so the supervisor is worse off. Thus, the supervisor's expected payoff falls when employee honesty rises. This yields negative returns.

These negative returns are interesting, not only because they are somewhat unexpected but also because they suggest that the supervisor can do better (see section 4).

To understand the decreasing returns issue, consider what happens when we start with a small value of [beta]. For small values of [beta], the supervisor has little reason to believe in the statements of her employee since he is probably dishonest. She therefore checks up on him with probability 0.4. This makes the dishonest big-problem type of employee indifferent between going and not. Thus, for small [beta], the dishonest big-problem type of employee may not show up. As [beta] increases in this range, the probability, (1 - [alpha])(1 - [beta]), of a dishonest big-problem employee falls, but the probability, [[sigma].sub.I](G\[I.sub.DHBP]), that an employee goes, given that he is a dishonest big-problem type, rises. Thus, the total probability (Eqn. 11) that the employee goes stays constant, keeping the expected payoff to the supervisor constant.
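
A one-line check of this constancy claim: substituting the case 1 strategy into Equation 11 gives

```latex
% Case 1 strategy of the dishonest big-problem employee, inserted into Equation 11:
\sigma_I(G \mid I_{DHBP}) = \frac{\alpha}{(1 - \alpha)(1 - \beta)}
\quad\Longrightarrow\quad
\alpha + (1 - \alpha)(1 - \beta)\cdot\frac{\alpha}{(1 - \alpha)(1 - \beta)} = 2\alpha,
```

which is independent of [beta].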

However, as [beta] rises beyond a critical level, all dishonest big-problem employees are going, so further increases in honesty, by reducing the probability of [I.sub.DHBP], simply reduce the number of big-problem employees going to the supervisor. This reduces the expected payoffs to the supervisor, as explained above.

Combining these two effects yields decreasing returns to sender honesty since returns decrease from zero (as long as [[sigma].sub.I](G\[I.sub.DHBP]) [less than] 1) to negative (when [[sigma].sub.I](G\[I.sub.DHBP]) = 1).

This result is interesting, for example, because it indicates a situation in which the receiver may not want certain kinds of information revealed. For example, suppose there is a 50-50 chance that [beta] = [[beta].sub.0] [less than] (1 - 2[alpha])/(1 - [alpha]) or [beta] = [[beta].sub.1] [greater than] (1 - 2[alpha])/(1 - [alpha]). Then, in this example, the supervisor would rather not know the true value of [beta] and, in particular, would want the employees to know that she does not know the true value of [beta]. To see this, let [[pi].sub.II]([beta]) be the expected payoff in Equation 10. Then if the supervisor does not know the true value of [beta], her expected payoff will be [[pi].sub.II]([[[beta].sub.0] + [[beta].sub.1]]/2), while if she learns the true value of [beta] and her employees know this, her ex ante expected payoff (prior to learning [beta]) will be [[[pi].sub.II]([[beta].sub.0]) + [[pi].sub.II]([[beta].sub.1])]/2. The latter quantity is smaller by concavity. Thus, she prefers not learning [beta].
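
A numerical instance of this point, using the supervisor_payoff sketch above (the values [alpha] = 0.25, [[beta].sub.0] = 0.2, and [[beta].sub.1] = 1.0 are our own illustrative choices):

```python
alpha, beta0, beta1 = 0.25, 0.2, 1.0   # beta0 < 2/3 < beta1, where 2/3 = (1 - 2*alpha)/(1 - alpha)
payoff_not_learning = supervisor_payoff(alpha, (beta0 + beta1) / 2)                          # 3.0
payoff_learning = 0.5 * (supervisor_payoff(alpha, beta0) + supervisor_payoff(alpha, beta1))  # 2.875
assert payoff_not_learning > payoff_learning   # concavity: learning beta is ex ante worse
```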

4. The Effect of Commitment in the Example

In the game in section 3, the uninformed party (the supervisor) has an expected payoff that is falling in the probability that the informed party (the employee) is honest. This happens because an honest employee knows that, if he goes to the supervisor, he will reveal any negative information, and this negative information will lead the supervisor to check up on him.

The question naturally arises, therefore, whether the supervisor would benefit from the employee's honesty if the supervisor could commit herself either to not check up on an employee who brings bad news or at least to check up on him less frequently. In this section, we show that, if the supervisor can commit herself to an optimal rule, then, under this optimal rule, she will not be hurt by the employee's honesty. [13]

To derive the optimal rule for the supervisor to follow, first note that, if the supervisor checks up on an employee with probability greater than 0.4, then only small-problem types of employee will go to the supervisor. However, the supervisor prefers not to check up on an employee if she knows that employee is a small-problem type, so checking up with probability greater than 0.4 cannot be optimal. It follows that the supervisor will never check up on an employee with probability greater than 0.4; that is,

\sigma_{II}(C \mid G, SP) \leq 0.4 \quad \text{and} \quad \sigma_{II}(C \mid G, BP) \leq 0.4. \qquad (12)

Since the supervisor never checks on the employee with probability greater than 0.4, the employee will either prefer to go to the supervisor or, at worst, be indifferent between going and not. In the case of indifference, we may assume that the employee actually goes. [14]

As one last preliminary step, we note that, under the supervisor's optimal commitment strategy,

\sigma_{II}(C \mid G, SP) \leq \sigma_{II}(C \mid G, BP); \qquad (13)

that is, the supervisor checks up on an employee who claims a small problem no more often than she checks up on an employee who claims a big problem.

Suppose that [[sigma].sub.II](C\G, SP) [greater than] [[sigma].sub.II](C\G, BP). Then both of the dishonest types will claim to have big problems. Thus, the dishonest types will face the smaller [[sigma].sub.II] (C\G, BP). Meanwhile, the supervisor will check up more frequently on the honest small-problem type and less frequently on the honest big-problem type. Thus, the supervisor could do better by switching [[sigma].sub.II](C\G, SP) and [[sigma].sub.II](C\G, BP). This would not affect her payoff from the dishonest types since they would simply switch what they say. However, the supervisor would now be checking up more on the honest big-problem types and less on the honest small-problem types, so her expected payoff given employee honesty would rise. This contradicts the assumption that the original commitment strategy was optimal for the supervisor and so proves Equation 13.

Thus, we may assume that the supervisor's optimal strategy obeys Equations 12 and 13 and that the employee always goes. It remains to determine what the various types of employee say when they go to the supervisor and how the supervisor reacts. There are two major cases.

First, suppose that [alpha]/[(1 - [alpha])(1 - [beta])] [greater than] 1. Assume that [[sigma].sub.II](C\G, SP) = [[sigma].sub.II](C\G, BP) at the supervisor's optimum. We want to reach a contradiction.

If [[sigma].sub.II](C\G, SP) = [[sigma].sub.II](C\G, BP), then the supervisor is ignoring what the employee is saying, so her expected payoff is not affected by the employee's talking strategy. Thus, the supervisor's expected payoff is not affected if all of the dishonest employees claim to have a small problem. But then the probability that the problem actually is small, given that the employee says it's small, is

\frac{\alpha}{\alpha + (1 - \alpha)(1 - \beta)} > 0.5, \qquad (14)

and the probability that the problem is big, given that the employee says it's big, is one. Thus, the supervisor could increase her expected payoff by never checking up on employees who claim small problems and checking up with probability 0.4 on employees who claim to have big problems. In this case, dishonest employees would continue to declare SP, but the supervisor's expected payoff would rise.

This disproves the optimality of a policy with [[sigma].sub.II](C\G, SP) = [[sigma].sub.II](C\G, BP). Thus, by Equation 13, [[sigma].sub.II](C\G, SP) [less than] [[sigma].sub.II](C\G, BP) in this case. Then the argument in the previous paragraph shows that [[sigma].sub.II](C\G, SP) = 0 and [[sigma].sub.II](C\G, BP) = 0.4 is optimal.

Next, suppose that [alpha]/[(1 - [alpha])(1 - [beta])] [less than] 1 and assume [[sigma].sub.II](C\G, SP) [less than] [[sigma].sub.II](C\G, BP), so [[sigma].sub.II](C\G, SP) [less than] 0.4. We again seek a contradiction. Since the supervisor checks up less often on employees claiming to have small problems, both dishonest types claim to have small problems. Thus, the probability that an employee actually has a small problem, given that he claims to have a small problem, is again [alpha]/[[alpha] + (1 - [alpha])(1 - [beta])], which is now strictly less than 0.5. Thus, the problem is probably big, so the supervisor can increase her expected payoff by increasing the probability, [[sigma].sub.II](C\G, SP), of checking up on this employee slightly. This contradicts the assumption that [[sigma].sub.II](C\G, SP) [less than] [[sigma].sub.II](C\G, BP) was optimal.

Thus, [[sigma].sub.II](C\G, SP) = [[sigma].sub.II](C\G, BP) at the optimum. Further, since [alpha]/[(1 - [alpha])(1 - [beta])] [less than] 1, it follows that [alpha]/(1 - [alpha]) [less than] 1, so [alpha] [less than] 0.5. Thus, the probability that the problem is small, given that the employee goes to the supervisor, is less than 0.5, so the supervisor would like to check up on the employee as often as possible. However, by Equation 12, the supervisor can check up on the employee with probability at most 0.4. Thus, the optimal commitment in this case is [[sigma].sub.II](C\G SP) = [[sigma].sub.II](C\G, BP) = 0.4.

Finally, if [alpha]/[(1 - [alpha])(1 - [beta])] = 1, then either policy is optimal. Thus, we may lump this case in with [alpha]/[(1 - [alpha])(1 - [beta])] [greater than] 1. The supervisor's optimal commitment strategy is then as follows:

(a) Check up on Player I with probability 0.4 if Player I says he has a big problem.

(b) Check with probability 0.4 if [alpha]/[(1 - [alpha])(1 - [beta])] [less than] 1 and Player I says he has a small problem.

(c) Don't check up at all if [alpha]/[(1 - [alpha])(1 - [beta])] [geq] 1 and Player I says he has a small problem. [15]

Given (a), honest big-problem employees are indifferent between going and telling the truth versus not going. Thus, we can assume that honest big-problem employees will go to the supervisor. Given (b) and (c), we can also assume that dishonest big-problem employees will go to the supervisor. Thus, everyone goes to the supervisor, and the supervisor checks up on employees with the optimum frequency given that she wants everyone to go to her. The supervisor's expected payoff therefore becomes

\pi_{II} =
\begin{cases}
3.4 + 1.2\alpha & \text{if } \beta < (1 - 2\alpha)/(1 - \alpha) \\
3 + 2\alpha + 0.4(1 - \alpha)\beta & \text{if } \beta \geq (1 - 2\alpha)/(1 - \alpha).
\end{cases}
\qquad (15)

This means that, as the probability of honesty, [beta], rises, the expected payoff to Player II is first constant and then rises. Thus, in our example, Player II's expected payoff is nondecreasing in [beta] and is sometimes increasing; that is, Player I's honesty does benefit Player II when Player II can precommit. Note that Player II's expected payoffs have also become convex in the underlying probabilities.
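
For comparison with the no-commitment payoff in Equation 10, Equation 15 can be coded in the same illustrative style (supervisor_payoff refers to the earlier sketch; [alpha] = 0.25 in the comment is our own choice):

```python
def supervisor_payoff_commit(alpha: float, beta: float) -> float:
    """Supervisor's expected payoff under the optimal commitment rule, Equation 15."""
    if beta < (1 - 2 * alpha) / (1 - alpha):
        return 3.4 + 1.2 * alpha                     # rules (a) and (b): check everyone with probability 0.4
    return 3 + 2 * alpha + 0.4 * (1 - alpha) * beta  # rules (a) and (c): check with 0.4 only after BP

# With alpha = 0.25 the commitment payoff rises from 3.7 (beta <= 2/3) to 3.8 at beta = 1,
# while the no-commitment payoff in Equation 10 falls from 3.0 to 2.75 over the same range.
```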

5. Conclusion

We have shown that, in strategic situations, one player's expected payoffs can be decreasing in the probability that the other player is honest. This phenomenon can be extended to situations where senders find it difficult to hide compromising behavior. In all such cases, the receiver faces a trade-off between her ability to work with senders and the quality of information she can get from senders. Furthermore, the receiver's expected payoffs need not be convex in the underlying probabilities. This distinguishes signalling games from single decision-maker problems.

However, if the supervisor can precommit, her expected payoff will again be convex (as expected from the single decision-maker case) and nondecreasing in sender honesty. This suggests that there may sometimes be good reasons for a decision maker, such as an employment supervisor or a public health agency, to commit itself to ignore information that would seem to be ex post optimal to use.

(*.) Department of Economics and Business, Virginia Military Institute, Lexington, VA 24450, USA; E-mail basua@mail.vmi.edu.

(+.) Economics Area, School of Business Administration, University of Mississippi, University, MS 38677, USA; E-mail jconlon@bus.olemiss.edu; corresponding author.

The authors wish to thank the referee for unusually detailed comments that have enormously improved both the clarity and cohesiveness of the final paper. Anat Admati also made some very helpful suggestions. All remaining errors are ours.

(1.) Frank (1987) argues that honesty will tend to arise endogenously in an evolutionary setting since, if someone clearly has difficulty lying, others will trust that person more.

(2.) In our equilibrium, however, the supervisor sometimes checks up on the employee even if the employee says the problem is small since the supervisor knows that some employees are dishonest.

On the other hand, if the employee does not go to the supervisor at all, the supervisor does not check on the employee. This makes sense if there is a large number of employees with no problems at all, and employees who have problems but don't go to the supervisor are hard to distinguish from employees without problems.

Finally, the results here will go through even if the supervisor sometimes checks up on employees who do not go to her. All that is needed is for her to check up more often on employees who she suspects have big problems because they go to her for advice. That is, all that is needed is for the supervisor to regard going to her as a potentially negative signal about the presence of large problems.

(3.) Here Frank's (1987) interpretation of honesty as difficulty lying is especially appropriate since, if the honest employee does not go to the supervisor, he is concealing information, even though he is not lying. Thus, the following analysis does not apply to employees who are honest in the higher ethical sense of always being willing to reveal important information to others, regardless of the consequences to oneself.

(4.) This result is related to the familiar fact that a player can be hurt by knowing too much--or rather, hurt by having other players know that they know too much. For example, a Stackelberg follower is hurt by the fact that the leader knows that the follower knows the leader's quantity choice.

(5.) For example, suppose that the decision maker faces a 50-50 chance of facing probability vector p or facing probability vector q. Let v(p) be her expected payoff if she faces p. Then if she does not learn which of p or q occurs before making her decision, her expected payoff will be v(0.5p + 0.5q). If she does learn which of p or q applies before making her decision, her ex ante expected payoff will be 0.5v(p) + 0.5v(q). Convexity implies the latter is larger than the former, so the decision maker benefits from early revelation of information.

(6.) Increasing returns would occur because the more honest the sender is the more the receiver will trust the sender, and therefore the greater the gain the receiver should get from further increases in sender honesty.

Thus, for example, suppose there are constant marginal costs to the receiver of increasing the probability of sender honesty. Then the receiver will tend to either choose senders who are always honest or not try to influence sender honesty at all (see Radner and Stiglitz 1984 and Admati and Pfleiderer 1999 for related comments).

(7.) In this paper, we use specific numbers for our payoffs to simplify our discussion. Since we are constructing a counter-example, greater generality would not contribute anything to our discussion anyway. It would, of course, be trivial to solve a more general version of this game.

(8.) We assume that, in the absence of checking, the supervisor must depend on the employee's word to find out whether he has a big or a small problem. In reality, a good supervisor may discover that her employee has a problem once productivity levels start dropping. Often, however, an employee may be able to hide their role in the productivity drop off. For example, the supervisor may have difficulty reconstructing the early stages of the development of a problem after several months have passed.

(9.) Again, this builds, for example, on the intuition in Frank (1987).

(10.) Note that this equilibrium satisfies the intuitive criterion of Cho and Kreps (1987). This is because the intuitive criterion does not eliminate the possibility of any type of employee saying BP since all types of employees do better from BP than they do in equilibrium if the supervisor responds to BP by playing NC with probability one.

This equilibrium also satisfies the universal divinity criterion of Banks and Sobel (1987). The only way that any type could be tempted to say BP is if the supervisor responded to BP with a mixture putting sufficiently low weight on C. Thus, suppose that the supervisor responds to BP by playing C with probability x and NC with probability 1 - x. Then each employee type, except [I.sub.HSP], will defect to BP if and only if x [less than] 0.4. Thus, the universal divinity criterion cannot rule out a big-problem employee at the supervisor's information set after BP.

(11.) However, this equilibrium also satisfies the intuitive and universal divinity criteria. This is trivial to verify since there are no unreached information sets in this equilibrium.

(12.) Again, it is straightforward to check the intuitive criterion and universal divinity in cases 2 and 3.

(13.) This is essentially a principal-agent relationship. See, for example, Balder (1996), Fudenberg and Tirole (1991), Kahn (1993), or Page (1992).

Note that we are assuming that the supervisor can commit perfectly due, for example, to reputational considerations. It would be interesting to investigate the consequences of relaxing this assumption to, for example, renegotiation proofness.

(14.) If Player I did not go when he was indifferent, the supervisor could check up on the employee with probability 0.3999, say, instead of 0.4. It is usually assumed in principal-agent problems that, in cases of indifference, the agent decides in favor of the principal. For example, at the principal's optimum, some of the incentive compatibility constraints usually hold as equalities, so the agent is indifferent between actions. In this case, it is usually assumed that the agent breaks ties in the principal's favor.

(15.) Note that this just depends on I's action. It does not require II to know I's type.

References

Admati, Anat, and Paul Pfleiderer. 1999. Forcing firms to talk: Financial disclosure regulation and externalities. Review of Financial Studies. In press.

Balder, Erik J. 1996. On the existence of optimal contract mechanisms for incomplete information principal-agent models. Journal of Economic Theory 68:133-48.

Banks, Jeffrey S., and Joel Sobel. 1987. Equilibrium selection in signaling games. Econometrica 55:647-61.

Cho, In-Koo, and David M. Kreps. 1987. Signaling games and stable equilibria. Quarterly Journal of Economics 102:179-221.

Conlon, John R. 1999. Manipulation through truthful revelation. Unpublished paper, University of Mississippi.

Frank, Robert H. 1987. If Homo economicus could choose his own utility function, would he want one with a conscience? American Economic Review 77:593-604.

Fudenberg, Drew, and Jean Tirole. 1991. Game theory. Cambridge, MA: MIT Press.

Kahn, Charles M. 1993. Existence and characterization of optimal employment contracts on a continuous state space. Journal of Economic Theory 59:122-44.

Malueg, David, and Yongsheng Xu. 1993. Endogenous information quality: A job- assignment application. Unpublished paper, Tulane University.

Mossin, Jan. 1969. A note on uncertainty and preferences in a temporal context. American Economic Review 59:172-4.

Page, Frank H., Jr. 1992. Bayesian incentive compatible mechanisms. Economic Theory 2:509-24.

Radner, Roy, and Joseph E. Stiglitz. 1984. A nonconcavity in the value of information. In Bayesian models in economic theory, edited by Marcel Boyer and Richard E. Kihlstrom. Amsterdam and New York: North-Holland Publishers, pp. 33-52.

Sobel, Joel. 1985. A theory of credibility. Review of Economic Studies 52:557-74.

Spence, Michael, and Richard Zeckhauser. 1972. The effect of the timing of consumption decisions and the resolution of lotteries on the choice of lotteries. Econometrica 40:401-3.