
Article Information

  • Title: Mutual admiration clubs.
  • Author: Suen, Wing
  • Journal: Economic Inquiry
  • Print ISSN: 0095-2583
  • Year of publication: 2010
  • Issue: January
  • Language: English
  • Publisher: Western Economic Association International
  • Keywords: Information technology; Social contract; Social economics; Socioeconomics

Mutual admiration clubs.


Suen, Wing


Birds of a feather flock together; presumably, they admire their peers' plumage. There is a wealth of evidence that members of a particular social group evaluate in-group members more favorably than out-group members (e.g., Brewer and Kramer 1985; Brown 1986). At least two possible explanations can account for this observation. One is that repeated interactions within a group produce feelings of solidarity and identification, which lead to mutual admiration. Alternatively, people who appreciate one another may self-select into the same social group. Economics has little to add to the first explanation. Accordingly, this article pursues the second line of argument.

The social contacts that a person has are influenced by his social roles and by factors such as physical proximity, but they are also a matter of choice. People choose whom to make friends with. Casual observation as well as academic research indicates that attitudinal similarity is a major determinant of interpersonal attraction and positive social judgment (e.g., Byrne 1971; McElwee et al. 2001; Newcomb 1961; Wittenbaum, Hubbell, and Zuckerman 1999). In other words, even after controlling for socioeconomic status, people prefer to associate with those whose views are similar to their own. In his book Republic.com, Sunstein (2001) describes how the advent of information technology has enormously expanded the range of social contacts available to individuals. In Internet chat rooms, for example, the choice of conversation partners is not confined by physical distance or social status. But instead of increasing the exchange of opinion among people of diverse viewpoints, Sunstein argues that information technology has led to an increasing fragmentation of the social space. The reason is that people choose to expose themselves only to familiar viewpoints, and technology has facilitated a more precise matching of individuals who share similar attitudes.

In this article, I develop a model of group formation, which explains why people prefer to exchange information with like-minded individuals. In this model, people are differentiated by their prior beliefs about some unknown state. They receive private information about the state and exchange their information with others in their group. There are two types of private signals. Informed persons observe real signals that are partially revealing about the state, while uninformed persons observe bogus signals that are pure noise but are believed to be informative. I assume that an individual derives utility from learning real signals; hence, each tries to join the group that is believed to contain the largest proportion of informed persons.

Internet chat rooms provide a convenient metaphor for the nature of the problem. Of the many chat rooms that focus on a particular subject, the choice of which one to join depends on who else is present in each chat room. I may enter a chat room that I believe contains cogent and insightful arguments related to the subject. But other people in the chat room may leave because they think that the quality of the arguments deteriorates as I enter the discussion. It is therefore possible that this kind of situation admits no equilibrium, driven by the logic of Groucho Marx's dictum. (1) Nevertheless, for the setup described in this article, equilibrium can be proved to exist. In this equilibrium, society is segmented into two or more distinct groups. Members of each group prefer to canvass opinion from one another rather than to sample views from people in other groups because everyone believes that members of his own group are smarter (i.e., more likely to be informed) than those of other groups. Moreover, the equilibrium has the interesting property that each group consists of individuals who share similar beliefs; people with disparate views do not mingle together.

The intuition behind this result is not difficult to understand. Since each person--informed or not--believes that his own private signal is informative, each will revise his prior in the direction of the signal. As informative signals are correlated with the state while uninformative signals are not, this updating process implies that the distribution of posterior beliefs among informed persons is more clustered near the true state than is the distribution of posterior beliefs among uninformed persons. Therefore, for a person who believes that the state is $\theta_0$, say, he expects to find a high concentration of informed persons among a group of people with posterior beliefs near $\theta_0$. To put it differently, this person finds it unlikely that informed persons who have amended their priors based on real signals would have posteriors that are far away from $\theta_0$. He concludes that a group of people with beliefs far away from his own beliefs must consist predominantly of uninformed persons; learning from their bogus signals would add little to his utility. To be sure, such feelings are mutual. People with beliefs far away from $\theta_0$ do not want to associate with those with beliefs near $\theta_0$ as they think that the other group must have been influenced by pure noise. In this model, people evaluate others through the lens of their own beliefs. There is no commonly agreed yardstick of who are informed and who are not. This explains why the model can escape from Groucho Marx's logic to admit an equilibrium social structure.

The mechanism presented in this article is related to Prendergast (1993). Prendergast shows that using subjective information to evaluate the performance of subordinates produces an incentive for subordinates to conform to the opinion of their supervisors. Similarly, Gentzkow and Shapiro (2006) argue that when there is uncertainty regarding whether information providers are well informed, consumers of information infer that those who produce reports that conform to consumers' priors are more likely to be competent. This article does not discuss the strategic and organizational issues arising from conformity but focuses on the implications of subjective evaluation for equilibrium social structure.

This model of equilibrium social structure shaped by differences in beliefs is closest to the recent work by Murphy and Shleifer (2004). They assume that agents with beliefs that are too dissimilar cannot influence one another. Therefore, the size of any group cannot be too large, and the centers of any two groups must be sufficiently far apart. This article seeks to go one step further by explaining why agents prefer to be influenced by people with similar beliefs.

Currarini, Jackson, and Pin (forthcoming) study the effects of homophily ("love of own kind") on the formation of friendship networks. In their model, individuals prefer to make friends with others with similar demographic characteristics (such as race), producing network structure with a high degree of clustering. The present work does not adopt a network-theoretic approach. Individuals are differentiated along a continuous dimension (their subjective beliefs), but the desire to seek informed opinion leads to their partitioning into distinct social groups.

I. THE STRUCTURE OF INFORMATION AND MISINFORMATION

There are two possible states of the world, $s_L$ and $s_R$. The prior probability that an individual attaches to state $s_R$ is denoted $p$. Sometimes, it is convenient to work with log odds ratios instead of probabilities; so, I denote $\rho = \log(p/(1-p))$. Assume a continuum of agents with different priors. Let $F(\rho)$ represent the mass of population with a prior log odds ratio lower than $\rho$.

A fraction $\pi$ of the population are informed. Assume that whether a person is informed or not does not depend on his beliefs. An informed individual $i$ receives a private signal $Y_i$ that is partially revealing about the true state. In particular, assume

$\Pr[Y_i = L \mid s_L] = \Pr[Y_i = R \mid s_R] = q \in (0.5, 1).$

Let $k = \log(q/(1-q))$. By Bayes' rule, an individual who observes $Y_i = R$ will have a posterior log odds ratio of $\rho' = \rho + k$. Similarly, an individual who observes $Y_i = L$ will have a posterior log odds ratio of $\rho' = \rho - k$. Conditional on the state, the signal $Y_i$ is identically and independently distributed across informed individuals.

A fraction $1 - \pi$ of the population are uninformed. Each uninformed agent $i$ receives a bogus signal $X_i$ such that, regardless of the underlying state,

$\Pr[X_i = L] = \Pr[X_i = R] = 0.5.$

These signals are independently distributed across uninformed agents.

Since the distribution of the bogus signals does not depend on the state, they are totally uninformative. Uninformed individuals are aware that only a fraction $\pi$ of the population are informed. However, each uninformed person believes that he himself is among one of the blessed. That is, an uninformed individual mistakenly thinks that his private signal is $Y_i$. Hence, a person who observes $X_i = R$ revises his posterior to $\rho' = \rho + k$, and a person who observes $X_i = L$ revises his posterior to $\rho' = \rho - k$. (2)
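
The updating rule above is easy to check numerically. The following minimal Python sketch (the parameter values are arbitrary choices for illustration, not taken from the paper) verifies that adding or subtracting $k$ in log odds space coincides with Bayes' rule applied directly to probabilities.

```python
# Minimal sketch of the Section I updating rule; q and p are illustrative assumptions.
import math

q = 0.7                      # Pr[Y = R | s_R] = Pr[Y = L | s_L], with q in (0.5, 1)
k = math.log(q / (1 - q))    # log likelihood ratio of a single signal

def posterior_log_odds(rho, signal):
    """Update a prior log odds ratio rho = log(p / (1 - p)) after observing R or L."""
    return rho + k if signal == "R" else rho - k

p = 0.4                                            # prior probability of s_R
rho = math.log(p / (1 - p))
p_post = p * q / (p * q + (1 - p) * (1 - q))       # Bayes' rule: Pr[s_R | Y = R]
assert abs(posterior_log_odds(rho, "R") - math.log(p_post / (1 - p_post))) < 1e-12

# An uninformed agent applies the same rule to a bogus signal X (uniform on {L, R}),
# so his posterior also moves by +/- k even though X carries no information.
```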

II. SOCIAL NETWORKS AND INFORMATION EXCHANGE

Because of the assumption that signals are independently distributed across the population, individuals will further update their beliefs if they know the realization of the private signals of other informed agents. People therefore have an incentive to exchange information and learn from one another. Unlike random sampling, much of this kind of informational exchange occurs along established social ties. People do not talk to just anyone on the street; they mostly talk to their friends. Even when the informational flow is one way, the search for information is seldom random. People do not ask just any expert for advice; they ask their trusted experts. Such nonrandom selection would make sense if a person believes that people in his social group are systematically more well informed than other agents in the population. But how do different individuals arrive at the conclusion that their own social groups are superior?

Some notation is needed to clarify the nature of this problem. Let there be two social groups in the population, labeled L and R. I assume that social groups are formed after each individual has received his private signal and that the exchange of information takes place only among members of the same group. Suppose all individuals with posterior belief $\rho' \in A$ belong to group L, and all individuals with belief $\rho' \notin A$ belong to group R. Such a partition is an equilibrium if L is preferred to R by all $\rho' \in A$ and R is preferred to L by all $\rho' \notin A$. To focus on the interesting case, I require that the preference be strict for at least some individuals.

To give content to this definition, I need to specify preferences over social groups. Let $\phi^j_i(A)$ be the ratio of informed to uninformed persons in group $j$ ($j = L, R$) in state $s_i$ ($i = L, R$). Individuals do not directly observe the fraction of informed agents in a group, but they form expectations about this quantity using their subjective beliefs. Let $p'$ be the posterior probability corresponding to the posterior log odds ratio $\rho'$. I assume that the expected utility from joining group $j$ ($j = L, R$) for a person with posterior probability $p'$ is

$U^j(p'; A) = p'\,u(\phi^j_R(A)) + (1 - p')\,u(\phi^j_L(A)),$ (1)

where u is an increasing function.

This assumption about people's objective function departs from standard decision theory, which postulates that individuals seek information in order to improve the quality of their decisions. Suen (2004) describes a model in which people seek information to improve their decisions. In that model, experts coarsen continuous information into binary signals, and individuals prefer to consult experts whose preferences and beliefs are similar to their own. More generally, the cheap talk literature (e.g., Crawford and Sobel 1982) shows that preference similarity tends to facilitate communication and improve the quality of decision making. While I do not deny the instrumental value of information, I argue that this is not the only--perhaps not even the dominant--motive for seeking information in some situations. Consider, for example, the consumption of political news and opinion. The probability that a voter is pivotal in any large election is negligible. Yet people do talk about politics and social issues with friends, and some spend considerable time and effort to follow campaign information. They may be doing this for the sheer fun of it, or they may be trying to educate themselves or to impress others. If academics specialize in the disinterested pursuit of truth, it is not hard to imagine that other people may also treat information as a good in itself. But people do not want to consume just any piece of information; they want to consume informed opinion. Learning from a novel and valid argument is a delight, while listening to empty chatter, cliche, or falsehood can be a pain. This is reflected in the assumption that utility $u$ is an increasing function of the ratio of informed to uninformed agents in the group.

Alternatively, one may imagine that the primary motive behind the choice of social groups is to establish social contacts and networks with successful people for future career advancement or business opportunities; the exchange of information is merely incidental to this dominant motive. If this is the case, and to the extent that informed individuals are more intelligent and more likely to be (or become) successful people, then there are gains from joining a group with a greater fraction of informed agents.

With Equation (1) as the criterion for the choice of social group, the following lemma holds.

LEMMA 1. Any equilibrium in which at least some individuals strictly prefer one group to another is characterized by a critical value $\hat p$ such that a person with posterior belief $p' \le \hat p$ belongs to one group and a person with $p' > \hat p$ belongs to the other group.

Proof Let $a_i$ be the fraction of all informed agents who belong to group L under state $s_i$ ($i = L, R$). Let $b$ be the fraction of all uninformed agents who belong to group L. (Note that $b$ is state independent because bogus signals are state independent.) Then,

$\phi^L_L = \dfrac{\pi a_L}{(1 - \pi)b}; \qquad \phi^L_R = \dfrac{\pi a_R}{(1 - \pi)b}.$

Similarly,

$\phi^R_L = \dfrac{\pi(1 - a_L)}{(1 - \pi)(1 - b)}; \qquad \phi^R_R = \dfrac{\pi(1 - a_R)}{(1 - \pi)(1 - b)}.$

If $a_L > a_R$, then $\phi^L_L > \phi^L_R$ and $\phi^R_R > \phi^R_L$. In this case, $U^L(p'; A) - U^R(p'; A)$ is strictly decreasing in $p'$. So, if an individual with belief $p_0$ prefers L to R, all individuals with $p' < p_0$ prefer L to R. If $a_L < a_R$, then $U^L(p'; A) - U^R(p'; A)$ is strictly increasing in $p'$. So, if an individual with belief $p_0$ prefers L to R, all individuals with $p' > p_0$ have the same preference. Finally, if $a_L = a_R$, then either everybody strictly prefers one group to the other or everybody is indifferent between the two groups. Q.E.D.

Lemma 1 reduces the problem of finding an equilibrium partition to the problem of finding a critical value $\hat p$. (3) Assume without loss of generality that all agents with $p' \le \hat p$ are in group L. Let $\hat\rho = \log(\hat p/(1 - \hat p))$. Then, any agent whose prior is less than $\hat\rho + k$ and who observes a signal value of L will belong to this group. Any agent whose prior is less than $\hat\rho - k$ and who observes a signal value of R will also belong to this group. Therefore,

$\phi^L_L = \dfrac{\pi\left[qF(\hat\rho + k) + (1 - q)F(\hat\rho - k)\right]}{(1 - \pi)\cdot\tfrac{1}{2}\left[F(\hat\rho + k) + F(\hat\rho - k)\right]}, \qquad \phi^L_R = \dfrac{\pi\left[(1 - q)F(\hat\rho + k) + qF(\hat\rho - k)\right]}{(1 - \pi)\cdot\tfrac{1}{2}\left[F(\hat\rho + k) + F(\hat\rho - k)\right]}.$

Since $q > 1 - q$ and $F(\hat\rho + k) > F(\hat\rho - k)$, the relative proportion of informed to uninformed persons in group L is larger when the state is $s_L$ than when the state is $s_R$. This reflects the fact that informed individuals tend to revise their beliefs toward the truth. If the true state is $s_L$, informed individuals will have a smaller posterior for $s_R$ than average. One is then more likely to meet an informed person among a group of individuals whose posterior for $s_R$ is small.

Since $u(\phi^L_L) > u(\phi^L_R)$, Equation (1) implies that $U^L(p'; \hat p)$ is decreasing in $p'$. Thus, the incentive to join a group with low values of $p'$ (i.e., group L) is higher among individuals with low values of $p'$. Similarly, let $S = 1 - F$. Then, the ratio of informed to uninformed persons in group R in each state is given by

$\phi^R_L = \dfrac{\pi\left[qS(\hat\rho + k) + (1 - q)S(\hat\rho - k)\right]}{(1 - \pi)\cdot\tfrac{1}{2}\left[S(\hat\rho + k) + S(\hat\rho - k)\right]}, \qquad \phi^R_R = \dfrac{\pi\left[(1 - q)S(\hat\rho + k) + qS(\hat\rho - k)\right]}{(1 - \pi)\cdot\tfrac{1}{2}\left[S(\hat\rho + k) + S(\hat\rho - k)\right]}.$

Since $u(\phi^R_R) > u(\phi^R_L)$, the function $U^R(p'; \hat p)$ is increasing in $p'$. Thus, individuals with high values of $p'$ have a greater incentive to join group R. This means that individuals with different beliefs $p'$ have the tendency to segregate themselves into two distinct social groups. Indeed, the following result holds.

PROPOSITION 1. An equilibrium $\hat p \in (0, 1)$ exists such that any individual with posterior belief $p' \le \hat p$ prefers group L to R and any individual with $p' > \hat p$ prefers group R to L.

Proof The critical value $\hat p$ is defined by the indifference condition

(2) $\hat p\,u(\phi^L_R(\hat\rho)) + (1 - \hat p)\,u(\phi^L_L(\hat\rho)) = \hat p\,u(\phi^R_R(\hat\rho)) + (1 - \hat p)\,u(\phi^R_L(\hat\rho)).$

Since $\phi^L_R(\rho) < \pi/(1 - \pi) < \phi^L_L(\rho)$ and $\phi^R_R(\rho) > \pi/(1 - \pi) > \phi^R_L(\rho)$ for all $\rho$, the left-hand side of Equation (2) is strictly greater than the right-hand side when $\hat p$ is sufficiently close to 0, while the reverse is true when $\hat p$ is sufficiently close to 1. By the intermediate value theorem, a solution $\hat p \in (0, 1)$ to the indifference condition exists. Q.E.D.
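
As a concrete illustration, the short sketch below evaluates the group-composition ratios for an assumed normal distribution of prior log odds and assumed values of $\pi$, $q$, and $\hat\rho$ (none of these numbers come from the paper). It checks the comparisons used above, $\phi^L_L > \phi^L_R$ and $\phi^R_R > \phi^R_L$, as well as the adding-up property $\phi^j_L + \phi^j_R = 2\pi/(1 - \pi)$ that drives Corollary 1 below.

```python
# Sketch of the group-composition ratios in Section II; F, pi, q, rho_hat are assumptions.
from math import log
from statistics import NormalDist

F = NormalDist(0.0, 1.0).cdf            # assumed distribution of prior log odds
pi, q = 0.3, 0.7                        # fraction informed; signal precision
k = log(q / (1 - q))

def phi_L(rho_hat):
    """Informed/uninformed ratios in group L in states s_L and s_R."""
    A, B = F(rho_hat + k), F(rho_hat - k)
    uninformed = (1 - pi) * 0.5 * (A + B)
    return pi * (q * A + (1 - q) * B) / uninformed, pi * ((1 - q) * A + q * B) / uninformed

def phi_R(rho_hat):
    """Same ratios for group R, with the survival function S = 1 - F."""
    SA, SB = 1 - F(rho_hat + k), 1 - F(rho_hat - k)
    uninformed = (1 - pi) * 0.5 * (SA + SB)
    return pi * (q * SA + (1 - q) * SB) / uninformed, pi * ((1 - q) * SA + q * SB) / uninformed

phi_LL, phi_LR = phi_L(0.3)
phi_RL, phi_RR = phi_R(0.3)
assert phi_LL > phi_LR and phi_RR > phi_RL               # informed agents cluster near the truth
assert abs(phi_LL + phi_LR - 2 * pi / (1 - pi)) < 1e-12  # adding-up property, group L
assert abs(phi_RL + phi_RR - 2 * pi / (1 - pi)) < 1e-12  # adding-up property, group R
```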

Proposition 1 establishes existence but not necessarily uniqueness of equilibrium. In general, the equilibrium value of $\hat p$ depends on the form of the distribution $F$. The following result allows one to sidestep this dependence with an assumption about the utility function $u$.

COROLLARY 1. If the utility function $u$ is linear, then $\hat p = 0.5$ is the unique equilibrium for any distribution function $F$.

Proof The indifference condition Equation (2) can be written as

$\dfrac{\hat p}{1 - \hat p} = \dfrac{u(\phi^L_L) - u(\phi^R_L)}{u(\phi^R_R) - u(\phi^L_R)}.$

When $u$ is linear, the right-hand side of this equation is identically equal to 1: since $\phi^L_L + \phi^L_R = \phi^R_L + \phi^R_R = 2\pi/(1 - \pi)$, the differences $\phi^L_L - \phi^R_L$ and $\phi^R_R - \phi^L_R$ are equal. Hence, $\hat p = 0.5$ is the only equilibrium. Q.E.D.

The equilibrium concept introduced in this section assumes that individuals have to choose to join exclusively one group or another. In some settings, it may be appropriate to allow the possibility that some individuals can sample randomly from the population at large without joining any exclusive groups. Since the overall fraction of informed agents is $\pi$, an individual who chooses not to join any group has utility $u(\pi/(1 - \pi))$. Note that $u(\phi^L_L) > u(\pi/(1 - \pi)) > u(\phi^L_R)$. Hence, a person always prefers joining group L to sampling randomly from the population if his belief $p'$ is sufficiently small. We have the following result.

PROPOSITION 2. Suppose the utility function $u$ is weakly concave. Then, there exist $\hat p_L \le 0.5 \le \hat p_R$ such that any individual with posterior belief $p' \le \hat p_L$ prefers to join group L, any individual with $p' \ge \hat p_R$ prefers to join group R, and any individual with $p' \in (\hat p_L, \hat p_R)$ prefers not to join any group. When $u$ is linear, no one strictly prefers not to join an exclusive group.

Proof Let $\hat\rho_L = \log(\hat p_L/(1 - \hat p_L))$. The marginal type $\hat p_L$ is determined by the indifference condition

(3) $\hat p_L\,u(\phi^L_R(\hat\rho_L)) + (1 - \hat p_L)\,u(\phi^L_L(\hat\rho_L)) = u(\pi/(1 - \pi)).$

Since $\phi^L_L > \pi/(1 - \pi) > \phi^L_R$, the left-hand side is strictly greater than the right-hand side when $\hat p_L$ is sufficiently close to 0. By Jensen's inequality, at $\hat p_L = 0.5$, the left-hand side of Equation (3) is less than or equal to $u(0.5\,\phi^L_R + 0.5\,\phi^L_L) = u(\pi/(1 - \pi))$. Hence, there exists a $\hat p_L \in (0, 0.5)$ such that the indifference condition is satisfied. Moreover, since $U^L(p'; \hat p_L)$ is decreasing in $p'$, every individual with $p' < \hat p_L$ strictly prefers sampling exclusively from group L to sampling from the general population. The proof that $\hat p_R \in [0.5, 1)$ follows the same logic. Q.E.D.

Proposition 2 shows that even when individuals are not required to join any exclusive group, those with extreme beliefs still prefer to collect information from like-minded persons rather than to sample randomly from the population. Only individuals with moderate beliefs prefer not to join any exclusive groups. Moreover, if $u$ is linear, the equilibrium entails $\hat p_L = \hat p_R = 0.5$. In this case, every individual prefers to join one of the two exclusive groups.
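
A numerical sketch of Proposition 2, under assumptions not made in the paper (a standard normal distribution of prior log odds and the strictly concave choice $u = \log$), solves the indifference condition (3) for the lower cutoff $\hat p_L$ by bisection; with a symmetric $F$, the upper cutoff is the mirror image $\hat p_R = 1 - \hat p_L$.

```python
# Sketch of Proposition 2 with u = log; F, pi, q are illustrative assumptions.
from math import log
from statistics import NormalDist

F = NormalDist(0.0, 1.0).cdf
pi, q = 0.3, 0.7
k = log(q / (1 - q))
u = log

def phi_L(rho_hat):
    """Informed/uninformed ratios in group L in states s_L and s_R."""
    A, B = F(rho_hat + k), F(rho_hat - k)
    uninf = (1 - pi) * 0.5 * (A + B)
    return pi * (q * A + (1 - q) * B) / uninf, pi * ((1 - q) * A + q * B) / uninf

def gap(p):
    """Expected utility of joining group L minus the random-sampling utility u(pi/(1-pi))."""
    rho = log(p / (1 - p))
    phi_LL, phi_LR = phi_L(rho)
    return p * u(phi_LR) + (1 - p) * u(phi_LL) - u(pi / (1 - pi))

lo, hi = 0.01, 0.5                   # start away from 0 to avoid underflow in the far tail of F
assert gap(lo) > 0 > gap(hi)         # Jensen's inequality makes the gap negative at p = 0.5
for _ in range(60):                  # bisection on the sign change
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap(mid) > 0 else (lo, mid)
p_L = 0.5 * (lo + hi)
print("join group L if p' <=", round(p_L, 4), "; join group R if p' >=", round(1 - p_L, 4))
```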

III. MULTIPLE GROUPS

In an environment with binary states, it is natural to consider an equilibrium in which the population self-selects into two distinct groups. If the model is extended to a richer state space, the pattern of equilibrium group formation can be more complex. In particular, individuals with moderate beliefs may prefer to interact with one another rather than with people who hold more extreme views. It turns out that the existence of a multiple-group equilibrium is not merely a straightforward generalization of the two-group case. Proposition 3 below shows that the existence of moderate groups depends on, among other things, the variance of the informative signal relative to that of the bogus signal. If the variance of the bogus signal is either too large or too small, equilibrium can only support two polar groups. A general discussion of multiple-group equilibrium with unrestricted state space is beyond the scope of this article. In this section, I study the possibility of a three-group equilibrium when the state variable is a one-dimensional continuous variable using some specific assumptions regarding functional forms. In particular, I assume that the utility function $u(\cdot)$ from group membership is a linear function and that the updating of beliefs follows the linear Bayesian updating rule for normally distributed signals. A fuller treatment of multiple-group equilibrium has to await further work.

Suppose the underlying state variable is represented by a one-dimensional variable $\theta$ on the real line. Different individuals have different prior beliefs about $\theta$. Let person $i$'s beliefs about $\theta$ be described by the normal distribution $N(\mu_i, v)$ with mean $\mu_i$ and variance $v$. The distribution of the prior mean $\mu_i$ across the population is given by the distribution function $F$. That is, the mass of population with prior mean $\mu_i \le m$ is equal to $F(m)$.

A fraction $\pi$ of the population are informed. Each informed person observes an informative signal $Y_i = \theta + \epsilon^y_i$, where $\epsilon^y_i$ is independent of $\theta$ and is distributed $N(0, \tau_y)$. The remainder of the population are uninformed. An uninformed person $i$ observes a bogus signal $X_i = \epsilon^x_i$, which is distributed $N(0, \tau_x)$. As in Section I, all individuals believe that their own signals are informative. Upon observing signal $Z_i$ ($Z_i = X_i$ or $Y_i$), person $i$ updates his posterior mean about $\theta$ to

$\mu_i' = (1 - \beta)\mu_i + \beta Z_i,$

where $\beta = v/(v + \tau_y)$.
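
The linear rule above is the familiar normal-normal Bayes update. The following sketch checks it on a discretized grid, using arbitrary illustrative numbers rather than anything from the paper.

```python
# Grid check that the linear rule is the exact posterior mean in the normal model.
import math

mu, v = 0.8, 1.0           # prior N(mu, v) over theta
tau_y, Y = 2.0, 2.0        # signal Y = theta + eps, eps ~ N(0, tau_y)
beta = v / (v + tau_y)

thetas = [-10 + 0.001 * i for i in range(22001)]    # grid over theta
weights = [math.exp(-(t - mu) ** 2 / (2 * v)) * math.exp(-(Y - t) ** 2 / (2 * tau_y))
           for t in thetas]                          # prior density times likelihood (unnormalized)
posterior_mean = sum(w * t for w, t in zip(weights, thetas)) / sum(weights)
assert abs(posterior_mean - ((1 - beta) * mu + beta * Y)) < 1e-4
```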

Define the function $\tilde F$ to be the distribution of the variable $(1 - \beta)\mu_i + \beta\epsilon^y_i$. That is,

$\tilde F(m) = \displaystyle\int_{-\infty}^{\infty} F\!\left(\frac{m - \beta\epsilon^y}{1 - \beta}\right) n(\epsilon^y; \tau_y)\, d\epsilon^y,$

where $n(x; \tau_y)$ is the normal density function with mean 0 and variance $\tau_y$. Then, conditional on $\theta$, the mass of informed agents with posterior mean less than $m$ is $\pi\tilde F(m - \beta\theta)$. Similarly, let $G$ denote the distribution of the variable $(1 - \beta)\mu_i + \beta\epsilon^x_i$:

$G(m) = \displaystyle\int_{-\infty}^{\infty} F\!\left(\frac{m - \beta\epsilon^x}{1 - \beta}\right) n(\epsilon^x; \tau_x)\, d\epsilon^x.$

The mass of uninformed agents with posterior mean less than $m$ is given by $(1 - \pi)G(m)$. For future reference, let $g$ represent the density function corresponding to $G$.
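
When $F$ itself is normal (an assumption made here only for illustration; the model allows a general $F$), $\tilde F$ and $G$ have closed forms, since each is the distribution of a sum of independent normal terms:

```python
# Closed forms for F_tilde and G when F = N(0, sigma_F^2); all numbers are illustrative.
from statistics import NormalDist

v, tau_y, tau_x, sigma_F2 = 1.0, 1.0, 4.0, 1.0
beta = v / (v + tau_y)

# F_tilde: distribution of (1-beta)*mu + beta*eps_y (informed posteriors, before the beta*theta shift)
F_tilde = NormalDist(0.0, ((1 - beta) ** 2 * sigma_F2 + beta ** 2 * tau_y) ** 0.5)
# G: distribution of (1-beta)*mu + beta*eps_x (uninformed posteriors)
G = NormalDist(0.0, ((1 - beta) ** 2 * sigma_F2 + beta ** 2 * tau_x) ** 0.5)

# Conditional on theta, the mass of informed agents with posterior mean below m is
# pi * F_tilde.cdf(m - beta * theta); the mass of uninformed agents is (1 - pi) * G.cdf(m).
pi, theta, m = 0.3, 0.5, 0.2
print(pi * F_tilde.cdf(m - beta * theta), (1 - pi) * G.cdf(m))
```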

Let $\phi^j(\theta)$ be the ratio of informed to uninformed agents in group $j$ ($j = 1, 2, \ldots, J$) under state $\theta$. The utility from joining group $j$ for a person with posterior mean $\mu'$ is given by

(4) $U^j(\mu') = E[u(\phi^j(\theta))],$

where the expectation is taken using the subjective posterior distribution over [theta]. I assume that the utility function u is linear for the subsequent analysis.

Consider an equilibrium in which people with beliefs $\mu' \in [\hat m_{j-1}, \hat m_j]$ belong to group $j$. (Set $\hat m_0 = -\infty$ and $\hat m_J = \infty$.) Then, Equation (4) can be written as

(5) $U^j(\mu') = \dfrac{\pi\left[H(\hat m_j - \beta\mu') - H(\hat m_{j-1} - \beta\mu')\right]}{(1 - \pi)\left[G(\hat m_j) - G(\hat m_{j-1})\right]}$, where $H(m) = \displaystyle\int_{-\infty}^{\infty} \tilde F(m - \beta w)\, n(w; (1 - \beta)v)\, dw.$

The expression for $H$ follows from the fact that the posterior variance of $\theta$ is $(1 - \beta)v$.

It is straightforward to see from Equation (5) and from the definition of $\hat m_0$ and $\hat m_J$ that $U^1$ is monotonically decreasing in $\mu'$, while $U^J$ is monotonically increasing in $\mu'$. Thus, people with extreme beliefs tend to prefer the extreme groups. (4) The following lemma characterizes people's preference for the less extreme groups.

LEMMA 2. If the distribution $F$ has a log-concave density, then $U^j(\mu'; \hat m_{j-1}, \hat m_j)$ is single-peaked in $\mu'$ for $j = 2, \ldots, J - 1$.

Proof From Equation (5), $\partial U^j/\partial\mu'$ has the same sign as

$\dfrac{h(\hat m_{j-1} - \beta\mu')}{h(\hat m_j - \beta\mu')} - 1,$

where $h$ is the density function corresponding to the distribution function $H$. Note that $H$ is a convolution of $\tilde F$ and a normal distribution, and $\tilde F$ in turn is a convolution of $F$ and a normal distribution. Since the normal density is log-concave, and since the class of log-concave densities is closed under convolutions (e.g., Dharmadhikari and Joag-dev 1988), $h$ is log-concave. Log-concavity implies that $h(\hat m_{j-1} - \beta\mu')/h(\hat m_j - \beta\mu')$ is decreasing in $\mu'$ for all $\hat m_j > \hat m_{j-1}$. It follows that the derivative $\partial U^j/\partial\mu'$ can change sign (from positive to negative) at most once. Q.E.D.

Suppose that society is divided into three groups. Lemma 2 implies that preference for Group 2 (the moderate group) is most intense among people with moderate posterior beliefs. However, even though [U.sup.2] reaches a peak at some intermediate value of [mu]', this peak may not be higher than [U.sup.1] or [U.sup.3]. For Group 2 to be viable, the moderates must prefer the moderate group to the extreme groups. Similarly, in a three-group equilibrium, people with extreme beliefs must prefer their respective extreme groups to the moderate group. Existence of equilibrium can be established by making the following assumptions.

ASSUMPTION 1. The function $h(x - d)/g(x)$ is unimodal in $x$ for all $d$.

ASSUMPTION 2. The function $g(x)/h((1 - \beta)x)$ is unimodal in $x$ for fixed $\beta$.

ASSUMPTION 3. For any $d_2 > d_1$, $\lim_{x\to\infty} h(d_2 - x)/h(d_1 - x) = \infty$ and $\lim_{x\to-\infty} h(d_2 - x)/h(d_1 - x) = 0$.

PROPOSITION 3. Suppose that the density function $h$ is log-concave and satisfies Assumptions 1-3. There exists a three-group equilibrium in which individuals with $\mu' \in [-\infty, \hat m_1]$ belong to Group 1, those with $\mu' \in (\hat m_1, \hat m_2)$ belong to Group 2, and those with $\mu' \in [\hat m_2, \infty]$ belong to Group 3. The critical values satisfy $\hat m_1 < \hat m_2$ and

(6) $\dfrac{H(\hat m_2 - \beta\hat m_1)}{H(\hat m_1 - \beta\hat m_1)} = \dfrac{G(\hat m_2)}{G(\hat m_1)}, \qquad \dfrac{1 - H(\hat m_2 - \beta\hat m_2)}{1 - H(\hat m_1 - \beta\hat m_2)} = \dfrac{1 - G(\hat m_2)}{1 - G(\hat m_1)}.$

The proof of Proposition 3 involves several steps and is relegated to the Appendix. Briefly, Equation (6) is the indifference condition for the critical types:

(7) $U^1(\hat\mu_1'; \hat m_1, \hat m_2) = U^2(\hat\mu_1'; \hat m_1, \hat m_2), \qquad U^2(\hat\mu_2'; \hat m_1, \hat m_2) = U^3(\hat\mu_2'; \hat m_1, \hat m_2).$

One can think of these two conditions as defining two implicit functions, $\hat\mu_i' = \psi^i(\hat m_1, \hat m_2)$ for $i = 1, 2$. The equilibrium cutoff values are characterized by the fixed point of this mapping. Assumption 3 is a technical condition invoked to guarantee that these implicit functions exist. Assumption 1 is used to ensure $\hat\mu_1' < \hat\mu_2'$ for any $\hat m_1 < \hat m_2$. Assumption 2 is required to ensure that $\hat\mu_i'$ remains bounded for any bounded $\hat m_i$ ($i = 1, 2$). The fixed point theorem then implies that a solution to Equation (6) exists and satisfies $\hat m_1 < \hat m_2$. Finally, log-concavity of $h$ is used to show that indifference by the critical types implies strict preference by people with beliefs in the interior of the group boundaries. That is, the indifference condition Equation (7) suffices to characterize an equilibrium in which no member of any group has an incentive to move to another group.

Assumptions 1 and 2 are related to the concept of "conditional variability ordering" introduced by Whitt (1980). (5) Since they play an important role in the Proof of Proposition 3, it is useful to discuss their economic interpretation. Recall that $H$ is the (subjective) cross-sectional distribution of the posteriors of informed agents, given by $\mu_y' = (1 - \beta)\mu + \beta\theta + \beta\epsilon^y$, while $G$ is the distribution of the posteriors among uninformed agents, given by $\mu_x' = (1 - \beta)\mu + \beta\epsilon^x$.

If the distribution of priors is normal, then both H and G are normal as well. In this case, Assumption 1 is satisfied if and only if

(8) $\mathrm{var}(\mu_y')/\mathrm{var}(\mu_x') < 1.$

When the relative variance in Equation (8) is too large, informed agents are dispersed toward the two ends of the distribution, and the chances of meeting an informed person in the moderate group are small. In that case, a three-group equilibrium is not possible because no one wants to join the moderate group. When both H and G are normal, Assumption 2 is satisfied if and only if

(9) $\mathrm{var}(\mu_y')/\mathrm{var}(\mu_x') > (1 - \beta)^2.$

If the relative variance is too small, there are too many uninformed agents near the two ends of the distribution. Again, a three-group equilibrium cannot be supported because even people with extreme posteriors would leave the extreme groups to join the moderate group.

Numerical calculations based on the case in which $F$ is a standard normal distribution illustrate this intuition. For example, if $\tau_y = 1$ and $v = 1$, a solution to Equation (6) exists only for $\tau_x \in (1.5, 9)$, which corresponds to the bounds specified in Equations (8) and (9). (6) In this range, $\hat m_1$ decreases while $\hat m_2$ increases with $\tau_x$. In other words, as the bogus signal becomes more noisy, people expect the extreme groups to contain more uninformed agents. Thus, the moderate group becomes relatively more attractive, and its equilibrium size gets larger.
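
The calculation can be sketched in a few lines. The code below looks for a symmetric equilibrium $(\hat m_1, \hat m_2) = (-c, c)$ with $F$ standard normal, $u$ linear, $\tau_y = v = 1$, and an assumed $\tau_x = 4$ inside the range reported above; the normal closed forms for $H$ and $G$ follow from this specification, and the one remaining indifference condition is solved by bisection. This is an illustrative reconstruction of the calculation, not the author's original code.

```python
# Symmetric three-group equilibrium sketch; tau_x = 4 is an assumed value in (1.5, 9).
from statistics import NormalDist

v, tau_y, tau_x = 1.0, 1.0, 4.0
beta = v / (v + tau_y)

# With F = N(0, 1): informed posteriors H = N(0, (1-b)^2 + b^2*tau_y + b^2*(1-b)*v),
# uninformed posteriors G = N(0, (1-b)^2 + b^2*tau_x), where b = beta.
H = NormalDist(0.0, ((1 - beta) ** 2 + beta ** 2 * tau_y + beta ** 2 * (1 - beta) * v) ** 0.5)
G = NormalDist(0.0, ((1 - beta) ** 2 + beta ** 2 * tau_x) ** 0.5)

def excess(c):
    """U^2(c) - U^3(c), up to the common factor pi/(1 - pi), for cutoffs (-c, c)."""
    moderate = (H.cdf(c - beta * c) - H.cdf(-c - beta * c)) / (G.cdf(c) - G.cdf(-c))
    extreme = H.cdf(-(c - beta * c)) / G.cdf(-c)   # upper tails via symmetry of zero-mean normals
    return moderate - extreme

# The type at the upper boundary c must be indifferent between the moderate and extreme group;
# the condition at the lower boundary -c then holds by symmetry.
lo, hi = 0.05, 6.0
assert excess(lo) > 0 > excess(hi)
for _ in range(80):                                # plain bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
print("equilibrium cutoffs (m1, m2):", (-round(lo, 4), round(lo, 4)))
```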

IV. CONCLUDING REMARKS

The sorting of individuals into nonoverlapping social groups is an abstraction of the structure of social networks in the real world. Granovetter (1973) argues that while many social networks exhibit clusters of strong ties (which resemble the social groups described in this article), these clusters do have some overlap mediated by what he calls "weak ties." Watts (1999) shows that a few random rewirings of a clustered network are sufficient to connect every individual in society to within a short distance of any other individual. The equilibrium model in this article can be used to account for the stability of cliques in the network structure but does not adequately capture the evolution of these cliques or provide for the role of weak ties as emphasized by Granovetter and by Watts. In this article, an individual chooses to join a social group and interacts with a randomly picked fellow member of his group each period. A more satisfactory description of informal social networks would have social groups emerge endogenously as a result of repeated interactions among a cluster of individuals. A step toward that direction may be to embed the present model in a search framework. For example, a pair of individuals may meet each other at random, but they can choose to maintain or sever their tie depending on their assessment of the probability that the other partner is informed. Another possible extension of the model is to explore the relationship between the core beliefs of a group and its peripheral beliefs as in Murphy and Shleifer (2004). For example, the state variable may be taken to be two-dimensional, and the set of informed persons for one issue may not be the same as the set of informed persons for the other issue. Finally, this article assumes that people speak the truth when they exchange information with one another. The strategic manipulation of information (e.g., Morris 2001; Prendergast 1993) is another area for further investigation. Although none of these extensions is a straightforward exercise, I hope this article will provide a useful framework and starting point for thinking about more complicated problems in the analysis of social networks and social influence.

APPENDIX

Proof of Proposition 3

For $\hat m_1 < \hat m_2$, the indifference condition Equation (7) can be written as

(A1) $\dfrac{H(\hat m_2 - \beta\hat\mu_1')}{H(\hat m_1 - \beta\hat\mu_1')} - \dfrac{G(\hat m_2)}{G(\hat m_1)} = 0;$

(A2) $\dfrac{1 - H(\hat m_2 - \beta\hat\mu_2')}{1 - H(\hat m_1 - \beta\hat\mu_2')} - \dfrac{1 - G(\hat m_2)}{1 - G(\hat m_1)} = 0.$

One can think of Equations (A1) and (A2) as defining two implicit functions, $\hat\mu_i' = \psi^i(\hat m_1, \hat m_2)$ ($i = 1, 2$). For $\hat m_1 = \hat m_2$, Equations (A1) and (A2) do not uniquely determine $\hat\mu_1'$ and $\hat\mu_2'$. In this case, since

$\lim_{\hat m_2 \downarrow \hat m_1} \dfrac{H(\hat m_2 - \beta\mu') - H(\hat m_1 - \beta\mu')}{G(\hat m_2) - G(\hat m_1)} = \dfrac{h(\hat m_1 - \beta\mu')}{g(\hat m_1)},$

let $\psi^1$ and $\psi^2$ be implicitly defined by the solution to, respectively,

(A1') $\dfrac{h(\hat m_1 - \beta\hat\mu_1')}{H(\hat m_1 - \beta\hat\mu_1')} - \dfrac{g(\hat m_1)}{G(\hat m_1)} = 0;$

(A2') $-\dfrac{h(\hat m_2 - \beta\hat\mu_2')}{1 - H(\hat m_2 - \beta\hat\mu_2')} + \dfrac{g(\hat m_2)}{1 - G(\hat m_2)} = 0.$

The proof proceeds in several steps.

Step 1. For all $\hat m_1 \le \hat m_2$, $\psi^i(\hat m_1, \hat m_2)$ ($i = 1, 2$) exists and is unique.

When $\hat m_1 < \hat m_2$, the ratio $H(\hat m_2 - \beta\hat\mu_1')/H(\hat m_1 - \beta\hat\mu_1')$ approaches 1 as $\hat\mu_1'$ approaches $-\infty$ because of the nature of the distribution function. This ratio approaches $\infty$ as $\hat\mu_1'$ approaches $\infty$ by Assumption 3. Since $G(\hat m_2)/G(\hat m_1)$ is greater than 1, by the intermediate value theorem, there is a finite $\hat\mu_1'$ that solves Equation (A1). Furthermore, since $h$ is log-concave, the left-hand side of Equation (A1) is increasing in $\hat\mu_1'$. Hence, the solution is unique. Similar reasoning establishes that the solution to Equation (A2) exists and is unique.

When $\hat m_1 = \hat m_2$, Assumption 3 implies that the ratio $h(\hat m_1 - \beta\hat\mu_1')/H(\hat m_1 - \beta\hat\mu_1')$ approaches 0 and $\infty$ as $\hat\mu_1'$ approaches minus and plus infinity, respectively. Moreover, log-concavity of $h$ implies that the left-hand side of Equation (A1') is increasing in $\hat\mu_1'$. Hence, a unique solution to Equation (A1') exists. A similar argument establishes that $\hat\mu_2'$ exists and is unique when $\hat m_1 = \hat m_2$.

Step 2. For all $\hat m_1 \le \hat m_2$, $\hat\mu_1' \le \hat\mu_2'$.

Suppose the opposite is true, that is, $\hat\mu_1' > \hat\mu_2'$. Then,

(A3) $\dfrac{1 - H(\hat m_2 - \beta\hat\mu_2')}{1 - H(\hat m_1 - \beta\hat\mu_2')} - \dfrac{1 - G(\hat m_2)}{1 - G(\hat m_1)} \le \dfrac{1 - H(\hat m_2 - \beta\hat\mu_1')}{1 - H(\hat m_1 - \beta\hat\mu_1')} - \dfrac{1 - G(\hat m_2)}{1 - G(\hat m_1)} = \dfrac{1 - H(\hat m_2 - \beta\hat\mu_1')}{1 - H(\hat m_1 - \beta\hat\mu_1')} - \dfrac{1 - \dfrac{H(\hat m_2 - \beta\hat\mu_1')}{H(\hat m_1 - \beta\hat\mu_1')}G(\hat m_1)}{1 - G(\hat m_1)},$

where the inequality follows from the log-concavity of $h$ and the equality follows from Equation (A1).

Now, by Assumption 1, the function $h(x - \beta\hat\mu_1')/g(x)$ is unimodal in $x$. Unimodality of $h/g$ implies unimodality of $H/G$ (see, e.g., Metzger and Ruschendorf 1991). So, if Equation (A1) holds, it must be the case that

$\dfrac{H(\hat m_2 - \beta\hat\mu_1')}{G(\hat m_2)} > \lim_{x\to\infty} \dfrac{H(x - \beta\hat\mu_1')}{G(x)} = 1.$

In other words, the expression Equation (A3) is strictly negative, which contradicts Equation (A2).

Suppose $\hat m_1 = \hat m_2$ and assume that $\hat\mu_1' > \hat\mu_2'$. Then,

(A3') $-\dfrac{h(\hat m_1 - \beta\hat\mu_2')}{1 - H(\hat m_1 - \beta\hat\mu_2')} + \dfrac{g(\hat m_1)}{1 - G(\hat m_1)} \le -\dfrac{h(\hat m_1 - \beta\hat\mu_1')}{1 - H(\hat m_1 - \beta\hat\mu_1')} + \dfrac{g(\hat m_1)}{1 - G(\hat m_1)} = -\dfrac{h(\hat m_1 - \beta\hat\mu_1')}{1 - H(\hat m_1 - \beta\hat\mu_1')} + \dfrac{h(\hat m_1 - \beta\hat\mu_1')\,G(\hat m_1)/H(\hat m_1 - \beta\hat\mu_1')}{1 - G(\hat m_1)}.$

When Equation (A1') holds, it must be the case that

$\dfrac{H(\hat m_1 - \beta\hat\mu_1')}{G(\hat m_1)} > \lim_{x\to\infty} \dfrac{H(x - \beta\hat\mu_1')}{G(x)} = 1.$

In other words, the expression Equation (A3') is strictly negative, which contradicts Equation (A2').

Step 3. The function $\psi^i(\hat m_1, \hat m_2)$ ($i = 1, 2$) is increasing in both arguments.

Differentiate Equation (A1) with respect to $\hat m_1$ to get

$\dfrac{\partial(H_2/H_1)}{\partial\hat\mu_1'}\,\dfrac{\partial\psi^1}{\partial\hat m_1} = \dfrac{H_2\,g_1}{H_1^2}\left(\dfrac{h_1}{g_1} - \dfrac{H_1}{G_1}\right),$

where $H_1$ stands for the value of the function $H(x - \beta\hat\mu_1')$ at the point $x = \hat m_1$, and so forth. Log-concavity of $h$ implies that $\partial(H_2/H_1)/\partial\hat\mu_1' > 0$. The unimodal property of $h(x - d)/g(x)$ implies that $H/G < h/g$ when $H/G$ is increasing and $H/G > h/g$ when $H/G$ is decreasing. Since $H/G$ is increasing at $x = \hat m_1$, $h_1/g_1 > H_1/G_1$. Hence, $\partial\psi^1/\partial\hat m_1 > 0$.

Differentiate Equation (A1) with respect to $\hat m_2$ to get

$\dfrac{\partial(H_2/H_1)}{\partial\hat\mu_1'}\,\dfrac{\partial\psi^1}{\partial\hat m_2} = \dfrac{g_2}{H_1}\left(\dfrac{H_2}{G_2} - \dfrac{h_2}{g_2}\right).$

Since $H/G$ is decreasing at $x = \hat m_2$, unimodality of $h(x - d)/g(x)$ implies that $H_2/G_2 > h_2/g_2$. Hence, $\partial\psi^1/\partial\hat m_2 > 0$. The monotonicity of $\psi^2$ can be established similarly.

Step 4. The function $\psi^i(\hat m_1, \hat m_2)$ ($i = 1, 2$) is bounded for bounded $\hat m_1$ and $\hat m_2$.

By Assumption 2, $g(x)/h((1 - \beta)x)$ is unimodal in $x$. This in turn implies that $G(x)/H((1 - \beta)x)$ and $(1 - G(x))/(1 - H((1 - \beta)x))$ are unimodal in $x$. Let $m_u$ be the mode of $G(x)/H((1 - \beta)x)$ and $m_l$ be the mode of $(1 - G(x))/(1 - H((1 - \beta)x))$. Since $G(x)/H((1 - \beta)x)$ reaches a peak when $g(x)/h((1 - \beta)x)$ is falling while $(1 - G(x))/(1 - H((1 - \beta)x))$ reaches a peak when $g(x)/h((1 - \beta)x)$ is rising, $m_u > m_l$. The remainder of this step establishes $m_l \le \hat\mu_1' \le \hat\mu_2' \le m_u$ for all $m_l \le \hat m_1 \le \hat m_2 \le m_u$.

Let $\hat m_1 = \hat m_2 = m_l$ and suppose $\hat\mu_1' = m_l$. Then, the left-hand side of Equation (A1') is equal to

(A4) $\dfrac{h((1 - \beta)m_l)}{H((1 - \beta)m_l)} - \dfrac{g(m_l)}{G(m_l)}.$

Notice that Assumption 2 implies that $g(x)/h((1 - \beta)x) > G(x)/H((1 - \beta)x)$ when $x = m_l < m_u$; hence, Equation (A4) is negative. Since the left-hand side of Equation (A1') is increasing in $\hat\mu_1'$, in order for Equation (A1') to hold, it must be the case that $\hat\mu_1' > m_l$. Finally, by the monotonicity of $\psi^1$, for all $\hat m_2 \ge \hat m_1 \ge m_l$,

$\psi^1(\hat m_1, \hat m_2) \ge \psi^1(m_l, m_l) > m_l.$

To show that $\hat\mu_2' < m_u$, let $\hat m_1 = \hat m_2 = m_u$ and suppose $\hat\mu_2' = m_u$. Then, the left-hand side of Equation (A2') is equal to

(A5) $-\dfrac{h((1 - \beta)m_u)}{1 - H((1 - \beta)m_u)} + \dfrac{g(m_u)}{1 - G(m_u)}.$

Assumption 2 implies that $g(x)/h((1 - \beta)x) > (1 - G(x))/(1 - H((1 - \beta)x))$ at $x = m_u > m_l$; hence, Equation (A5) is positive. Since the left-hand side of Equation (A2') is increasing in $\hat\mu_2'$, in order for Equation (A2') to hold, it must be the case that $\hat\mu_2' < m_u$. Furthermore, for all $\hat m_1 \le \hat m_2 \le m_u$, we have

$\psi^2(\hat m_1, \hat m_2) \le \psi^2(m_u, m_u) < m_u.$

Step 5. A fixed point of the mapping $\psi$ exists such that $\hat m_1 < \hat m_2$.

Let $T = \{(x, y) : m_l \le x \le y \le m_u\}$. The previous steps establish that $\psi$ is a mapping from $T$ to $T$. Hence, a fixed point $(\hat m_1, \hat m_2) \in T$ of $\psi$ exists. Moreover, this fixed point must be such that $\hat m_1 < \hat m_2$. Suppose otherwise, that is, let $\hat m_1 = \hat m_2 = \hat m$. Then, Equations (A1') and (A2') require

$\dfrac{H(\hat m - \beta\hat m)}{G(\hat m)} = \dfrac{h(\hat m - \beta\hat m)}{g(\hat m)} = \dfrac{1 - H(\hat m - \beta\hat m)}{1 - G(\hat m)}.$

But the first equality holds only at $\hat m = m_u$, while the second equality holds only at $\hat m = m_l$. Since $m_u \ne m_l$, this is a contradiction.

Step 6. Indifference by the critical types implies strict preference by the interior types.

Since Equation (A1) holds at $\hat\mu_1' = \hat m_1$, and since log-concavity of $h$ implies that the left-hand side of Equation (A1) is increasing in $\hat\mu_1'$, any individual with a posterior less than $\hat m_1$ prefers Group 1 to Group 2. Similarly, log-concavity of $h$ implies that the left-hand side of Equation (A2) is increasing in $\hat\mu_2'$. So, any individual with a posterior less than $\hat m_2$ prefers Group 2 to Group 3. This means that Group 1 is the best group for individuals with posteriors in the range $[-\infty, \hat m_1]$. Similar reasoning establishes that for $\mu' \in [\hat m_{j-1}, \hat m_j]$ ($j = 2, 3$),

$\max\{U^1(\mu'), U^2(\mu'), U^3(\mu')\} = U^j(\mu').$

Q.E.D.

doi: 10.1111/j.1465-7295.2009.00254.x

REFERENCES

Brewer, M. B., and R. M. Kramer. "The Psychology of Intergroup Attitudes and Behaviors." Annual Review of Psychology, 36, 1985, 219-43.

Brown, J. D. "Evaluations of Self and Others: Self-Enhancement Biases in Social Judgments." Social Cognition, 4, 1986, 353-76.

Byrne, D. The Attraction Paradigm. New York: Academic Press, 1971.

Crawford, V., and J. Sobel. "Strategic Information Transmission." Econometrica, 50, 1982, 1431-51.

Currarini, S., M. O. Jackson, and P. Pin. "An Economic Model of Friendship: Homophily, Minorities, and Segregation." Econometrica, forthcoming.

Dharmadhikari, S., and K. Joag-dev. Unimodality, Convexity, and Applications. San Diego, CA: Academic Press, 1988.

Gentzkow, M., and J. Shapiro. "Media Bias and Reputation." Journal of Political Economy, 114, 2006, 280-316.

Granovetter, M. S. "The Strength of Weak Ties." American Journal of Sociology, 78, 1973, 1360-80.

McElwee, R. O., D. Dunning, P. L. Tan, and S. Hollmann. "Evaluating Others: The Role of Who We Are Versus What We Think Traits Mean." Basic and Applied Social Psychology, 23, 2001, 123-36.

Metzger, C., and L. Ruschendorf. "Conditional Variability Ordering of Distributions." Annals of Operations Research, 32, 1991, 127-40.

Morris, S. "Political Correctness." Journal of Political Economy, 109, 2001, 231-65.

Murphy, K. M., and A. Shleifer. "Persuasion in Politics." American Economic Review, 94, 2004, 435-39.

Newcomb, T. M. The Acquaintance Process. New York: Holt, Rinehart, and Winston, 1961.

Prendergast, C. "A Theory of 'Yes Men'." American Economic Review, 83, 1993, 757-70.

Suen, W. "The Self-Perpetuation of Biased Beliefs." Economic Journal, 114, 2004, 377-96.

Sunstein, C. Republic.com. Princeton, NJ: Princeton University Press, 2001.

Watts, D. J. Small Worlds. Princeton, NJ: Princeton University Press, 1999.

Whitt, W. "Uniform Conditional Stochastic Order." Journal of Applied Probability, 17, 1980, 112-23.

Wittenbaum, G. M., A. P. Hubbell, and C. Zuckerman. "Mutual Enhancement: Toward an Understanding of the Collective Preference for Shared Information." Journal of Personality and Social Psychology, 77, 1999, 967-78.

(1.) The comedian Groucho Marx was reported to have said, "I don't care to belong to a club that accepts people like me as members."

(2.) This model does not depend on the assumption that everyone believes that his own signal is informative. Suppose each person attaches probability $\pi$ that his own signal is informative and probability $1 - \pi$ that his own signal is bogus. Let

$k' = \log\dfrac{\pi q + (1 - \pi)(0.5)}{\pi(1 - q) + (1 - \pi)(0.5)} > 0.$

Then, each person updates his belief to $\rho + k'$ upon observing a private signal R or to $\rho - k'$ upon observing a private signal L. The argument in the next section goes through by replacing $k$ with $k'$.

(3.) There is also a trivial equilibrium in which the composition of agents in group L is identical to that of agents in group R. The trivial equilibrium is ruled out by the requirement that at least some individuals strictly prefer one group to another.

(4.) For this reason, an equilibrium with two groups always exists.

(5.) See also Metzger and Ruschendorf (1991) for how the unimodality of the ratio of the density functions implies the conditional variability ordering.

(6.) If $\tau_x = \tau_y$, uncertainty about $\theta$ implies that the informative signal $Y_i = \theta + \epsilon^y_i$ is more variable than the bogus signal $X_i = \epsilon^x_i$. In that case, equilibrium can only support two groups since people expect any moderate group to consist primarily of uninformed agents. In a more general setup, one can let $X_i = \xi_i + \epsilon^x_i$, where $\xi_i$ is the bias of the bogus signal. If $\mathrm{var}(\xi_i) = \mathrm{var}(\theta) > \mathrm{var}(\theta \mid Y_i)$, then a moderate group can be supported in equilibrium even when $\tau_x = \tau_y$.

WING SUEN, This work grew out of conversations with Jimmy Chan and Dan Usher. Ronald Chan, Priscilla Man, and Kwan To Wong provided able research assistance. Valuable comments from Parimal Bag and Li Hao are gratefully acknowledged. I thank Alan Siu in particular for suggesting the title of this article.

Suen: Henry G. Leong Professor in Economics, School of Economics and Finance, University of Hong Kong, Pokfulam, Hong Kong. Phone (852) 25481152, Fax (852) 25481152, E-mail wsuen@econ.hku.hk