
Article Information

  • Title: Errors in variables and lending discrimination.
  • Authors: DeVaro, Jed L.; Lacker, Jeffrey M.
  • Journal: Economic Quarterly
  • Print ISSN: 1069-7225
  • Year: 1995
  • Issue: June
  • Language: English
  • Publisher: Federal Reserve Bank of Richmond
  • Abstract: One weakness of this approach is that an estimate of the discrimination coefficient may be biased when measures of creditworthiness are fallible. In such situations, distinguishing racial discrimination from unmeasured racial disparities in creditworthiness can be difficult. If true creditworthiness is lower on average for minority applicants, the model may indicate that race adversely affects the probability of denial, even if race plays no direct causal role.
  • Keywords: Bank loans; Credit discrimination; Discrimination in consumer credit

Errors in variables and lending discrimination.


DeVaro, Jed L.; Lacker, Jeffrey M.


Do banks discriminate against minority loan applicants? One approach to answering this question is to estimate a model of bank lending decisions in which the probability of being denied a loan is a function of a set of creditworthiness variables and a dummy variable for the applicant's race (z = 1 for minorities, z = 0 for whites). A positive coefficient on the race dummy is taken as evidence that minority applicants are less likely to be granted loans than white applicants with similar qualifications. This approach is employed in many empirical studies of lending discrimination (Schill and Wachter 1994; Munnell et al. 1992), in U.S. Department of Justice lending discrimination suits (Seiberg 1994), and in regulatory examination procedures (Bauer and Cromwell 1994; Cummins 1994).
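
To fix ideas, the following minimal sketch (ours, not drawn from any of the studies cited) carries out this basic estimation in Python on simulated application data; the parameter values, variable names, and use of the statsmodels package are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
z = rng.integers(0, 2, n).astype(float)   # race dummy: 1 = minority, 0 = white
x = rng.normal(-2.0 * z, 1.0)             # one creditworthiness measure, correlated with race
p_deny = 1.0 / (1.0 + np.exp(x))          # true model: denial depends on x only (no race effect)
y = rng.binomial(1, p_deny)               # y = 1 if the loan application is denied

# Logit of denial on creditworthiness and the race dummy; a positive,
# significant coefficient on z would be read as evidence of discrimination.
X = sm.add_constant(np.column_stack([x, z]))
print(sm.Logit(y, X).fit(disp=0).summary())
```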

One weakness of this approach is that an estimate of the discrimination coefficient may be biased when measures of creditworthiness are fallible. In such situations, distinguishing racial discrimination from unmeasured racial disparities in creditworthiness can be difficult. If true creditworthiness is lower on average for minority applicants, the model may indicate that race adversely affects the probability of denial, even if race plays no direct causal role.

There are good reasons to believe that measures of creditworthiness are fallible. First, regulatory field examiners report difficulty finding matched pairs of loan files to corroborate discrimination identified by regression models. An applicant's file often yields a picture of creditworthiness different from the one given by model variables. Second, including more borrower financial characteristics generally reduces discrimination estimates, sometimes to zero (Schill and Wachter 1994). Third, studies of default data find that minority borrowers are more likely than white borrowers to default, even after controlling for income, wealth, and other borrower characteristics related to creditworthiness (Berkovec et al. 1994). This finding suggests that there are race-related discrepancies between the true determinants of creditworthiness and the measures available to econometricians.

Our objective is to develop a method for assessing the sensitivity of lending discrimination estimates to measurement error. In particular, we study the classical errors-in-variables model, in which the components of a vector $x$ of observed measures of creditworthiness are, one for one, fallible measures of those in a vector of true qualifications $x^*$.(1) The implications of errors in variables in the standard linear regression model are well known (Klepper and Leamer 1984; Goldberger 1984).(2) We briefly review these implications in Section 1. Models of lending discrimination generally specify a nonlinear regression model, such as the logit model, because the dependent variable is dichotomous ($y = 1$ if the loan application is denied; $y = 0$ if it is accepted). In this article we extend the results for the linear case to cover the nonlinear logit regression model widely used in lending discrimination studies.

Linear errors-in-variables models are underidentified because variation in true qualifications cannot be distinguished from error variance. Assuming that the errors are normally distributed with known parameters, however, the linear model is just-identified, allowing estimation of model parameters depending on the assumed error-variance parameters. Assuming zero error variance yields the standard linear regression model as a special case. By estimating under a range of error-variance assumptions, one can trace out the potential effect of measurement error on model parameter estimates. Note that since the error-variance assumptions make the model just-identified, no one assumption about the error-variance parameters is more likely than any other; that is, estimates of model parameters under alternative error-variance assumptions are all equally consistent with the data. Also note that in the case of normally distributed regressors in the linear model, parameter estimates for alternative error-variance assumptions can be obtained through an algebraic correction to the ordinary least squares estimates.

In Section 2 we examine the logit model under errors in variables and show how estimators depend on assumptions about error variance. Adjusting estimators for error variance is no longer an algebraic correction as it is in the linear setup; the model must be reestimated for each error-variance assumption. For the case in which the independent variables are continuous-valued, we show how to estimate the logit model under various assumptions about error variance. Because of the nonlinearity, the logit model is in some cases identified without error-variance assumptions. In practice, however, the logit model is quite close to underidentified, and little information can be obtained from the data about error-variance parameters. Therefore, we advocate estimating models under a range of error-variance assumptions to check the sensitivity of estimates to measurement error.

In Section 3 we demonstrate our method using artificial data. We show how estimates of a discrimination parameter can be biased when a relatively modest amount of measurement error is present. The magnitude of the bias depends on the model's fundamental parameters. By estimating the model under different assumptions about measurement error variance, we can gauge the sensitivity of the estimators to errors in variables. Section 4 concludes and offers directions for further research.

Bauer and Cromwell (1994) have also studied the properties of logit regression models of lending discrimination, focusing on the small-sample properties of a misspecified model using simulated data. They found that tests for lending discrimination were sensitive to sample size. Our work focuses on the effect of errors in variables on the large-sample properties of otherwise correctly specified logit models of lending discrimination.

1. ERRORS IN VARIABLES

The implications of errors in variables are easiest to see in a linear setup such as the following simple model of salary discrimination.(3) Suppose that an earnings variable (y) is determined according to the following equations:

$y = \beta x^* + \alpha z + v,$ (1a)

$x^* = x_0 + \mu z + u,$ (1b)

$x = x^* + e,$ (1c)

where the scalar $x^*$ is true qualification, $x$ is measured qualification, and $z$ is a race dummy ($z = 1$ for minorities, $z = 0$ for whites). We take $v$, $u$, and $e$ to be mutually independent random variables with zero means and variances $\sigma_v^2$, $\sigma_u^2$, and $\sigma_e^2$, all independent of $z$. The earnings variable in (1a) is a stochastic function of the true qualifications and race. The parameter $\alpha$ represents the independent effect of race on salary, and $\alpha < 0$ represents discrimination against minorities. If better-qualified applicants obtain higher salaries, then $\beta > 0$. In (1b) qualification is allowed to be correlated with race; the expectation of $x^*$ is $x_0$ for whites and $x_0 + \mu$ for minorities. The empirically relevant case has $\mu < 0$. Observed qualification in (1c) is contaminated by measurement error $e$. Consider a regression of $y$ on the observed variables $x$ and $z$. This estimates

$E[y \mid x, z] = bx + az.$

Since the variances and covariances are the same for both white and minority applicants, we can use conditional covariances to calculate the regression slopes. We focus on relationships in a population and thus ignore sampling variability. The least squares estimators are

$b = \mathrm{cov}(x, y \mid z)/V(x \mid z) = \mathrm{cov}(x^*, y \mid z)/V(x \mid z) = (1 - \delta)\beta$

and

$a = E[y \mid z = 1] - E[y \mid z = 0] - b\{E[x \mid z = 1] - E[x \mid z = 0]\}$

$\quad = \alpha + \beta\mu - b\mu$

$\quad = \alpha + \delta\beta\mu,$

where

$\delta = \sigma_e^2/(\sigma_u^2 + \sigma_e^2).$

When there is measurement error ($\sigma_e^2 > 0$), the regression estimator of $\beta$ is biased toward zero. To see why, substitute for $x^*$ in (1a) using (1c) to obtain $y = \beta x + \alpha z + (v - \beta e)$. The "error" $v - \beta e$ in the regression of $y$ on $x$ and $z$ is correlated with $x$ via (1c). Thus a key assumption of the classical linear regression model is violated, and the coefficients are no longer unbiased.

In our case ($\beta > 0$, $\mu < 0$), the estimator of $\alpha$ is biased downward as well. Bias creeps in because $z$ is informative about $x^*$, given $x$:

$E[x^* \mid x, z] = (1 - \delta)x + \delta(x_0 + \mu z).$

Given observed qualification $x$, race can help "predict" true qualification $x^*$. Race can then help "explain" earnings, even in the absence of discrimination ($\alpha = 0$), because race is correlated with true qualifications.

The model (1) is underidentified (Kapteyn and Wansbeek 1983). A regression of $x$ on $z$ recovers the nuisance parameters $x_0$ and $\mu$, along with the conditional variance $V(x \mid z) = \sigma_u^2 + \sigma_e^2$. Other population moments provide us with $a$ and $b$, but these are not sufficient to identify $\alpha$, $\beta$, and $\delta$. No sample can provide enough information to divide $V(x \mid z)$ between the variance in true qualifications $\sigma_u^2$ and the variance in measurement error $\sigma_e^2$. Under the assumptions $\beta > 0$ and $\mu < 0$, any value of $\alpha > a$, including the no-discrimination case $\alpha = 0$, is consistent with the data for some $\beta$ and $\sigma_e^2$.

If $\sigma_e^2$ were known independently, then we would know $\delta$ and could calculate the unbiased estimators $\hat{\beta}$ and $\hat{\alpha}$ by correcting the ordinary least squares estimators as follows:

$\hat{\beta} = b/(1 - \delta),$

$\hat{\alpha} = a - \delta\hat{\beta}\mu.$ (2)

One could use (2) to study the implications of alternative assumptions about the variance of measurement error; different values of $\sigma_e^2$ would trace out different estimates of $\alpha$.
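
As a numerical illustration of (2) (our sketch, with assumed parameter values), the following Python code simulates model (1), runs least squares, and traces out the corrected estimates under a grid of assumed error variances:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
alpha, beta, mu, x0 = 0.0, 1.0, -2.0, 0.0
sig_u2, sig_e2 = 0.9, 0.1                 # true variances of u and e

z = rng.integers(0, 2, n).astype(float)
x_star = x0 + mu * z + rng.normal(0.0, np.sqrt(sig_u2), n)   # (1b)
x = x_star + rng.normal(0.0, np.sqrt(sig_e2), n)             # (1c)
y = beta * x_star + alpha * z + rng.normal(0.0, 1.0, n)      # (1a)

# Least squares of y on (1, x, z): b is attenuated, a is biased.
coef = np.linalg.lstsq(np.column_stack([np.ones(n), x, z]), y, rcond=None)[0]
b, a = coef[1], coef[2]

# Nuisance parameters recoverable from the data: mu and V(x | z).
mu_hat = x[z == 1].mean() - x[z == 0].mean()
v_x_given_z = 0.5 * (x[z == 0].var() + x[z == 1].var())

for assumed_sig_e2 in (0.0, 0.05, 0.1, 0.2):
    delta = assumed_sig_e2 / v_x_given_z
    beta_hat = b / (1.0 - delta)                 # correction (2)
    alpha_hat = a - delta * beta_hat * mu_hat
    print(f"assumed sig_e^2 = {assumed_sig_e2:.2f}: "
          f"alpha_hat = {alpha_hat:+.3f}, beta_hat = {beta_hat:+.3f}")
```

With the true value 0.1 assumed, the corrected estimates should land near $\alpha = 0$ and $\beta = 1$; assuming zero error variance reproduces the biased least squares estimates.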

In (1) the direction of bias in $a$ is known when the sign of $\beta\mu$ is known. Matters are different when $x$ is a vector of characteristics affecting qualifications. Consider a multivariate model:

$y = \beta' x^* + \alpha z + v,$ (3a)

$x^* = x_0 + \mu z + u,$ (3b)

$x = x^* + e,$ (3c)

where $x^*$ and $x$ are now $k \times 1$ random vectors and $\beta$, $\mu$, and $x_0$ are $k \times 1$ parameter vectors. We take $u$ and $e$ to be normally distributed random vectors, independent of $v$, $z$, and each other, with zero means and covariance matrices $\Sigma^*$ and $D$. The classical assumption is that measurement errors are mutually independent, so $D$ is diagonal.

The least squares estimators are now

$b = (\Sigma^* + D)^{-1}\Sigma^*\beta$ (4a)

and

$a = \alpha + (\beta - b)'\mu.$ (4b)

The direction of bias is now uncertain, even under the usual assumption that measurement errors are independent ($D$ is diagonal). To see why, suppose that $k = 2$, $\Sigma^*$ has $\rho$ as the off-diagonal element, and $\Sigma^* + D$ has ones on the diagonal (a normalization of units). Then (4b) becomes

$a = \alpha + [(D_{11}\beta_1 - \rho D_{22}\beta_2)\mu_1 + (D_{22}\beta_2 - \rho D_{11}\beta_1)\mu_2]/(1 - \rho^2).$

The bias in $a$ could be positive or negative, depending on parameter values. For example, suppose only one component of $x$ is subject to measurement error, say, $x_1$ ($D_{11} > 0$ and $D_{22} = 0$). By itself this would bias $b_1$ downward, resulting in an upward bias in $a$. But $b_2 = \rho\beta_1 D_{11}/(1 - \rho^2) + \beta_2$ is now biased as well, and this would induce a downward bias in $a$ if $\rho\beta_1\mu_2 > 0$. The overall direction of bias is indeterminate (Rao 1973; Hashimoto and Kochin 1980). But again, if the measurement error parameters $D$ were known, then the least squares estimators $a$ and $b$ could be corrected by a simple transformation of (4) (using $\Sigma^* = \Sigma - D$, where $\Sigma = V(x \mid z)$). Each alternative measurement error assumption would imply a different estimator.(4)
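
The following sketch (ours, with illustrative parameter values) evaluates the population formulas (4a) and (4b) for the $k = 2$ case and then inverts them under a known $D$:

```python
import numpy as np

rho = 0.5
Sigma = np.array([[1.0, rho], [rho, 1.0]])   # V(x | z) = Sigma* + D, ones on the diagonal
D = np.diag([0.1, 0.0])                      # only x_1 is measured with error
beta = np.array([1.0, 1.0])
mu = np.array([-2.0, -2.0])
alpha = 0.0

# Population least squares coefficients under measurement error:
Sigma_star = Sigma - D
b = np.linalg.solve(Sigma, Sigma_star @ beta)   # (4a)
a = alpha + (beta - b) @ mu                     # (4b)
print("biased:    a =", round(a, 4), " b =", b.round(4))

# Correction when D is known: recover beta from (4a), then alpha from (4b).
beta_hat = np.linalg.solve(Sigma_star, Sigma @ b)
alpha_hat = a - (beta_hat - b) @ mu
print("corrected: alpha =", round(alpha_hat, 4), " beta =", beta_hat.round(4))
```

Because these are population moments rather than sample estimates, the correction recovers $\alpha$ and $\beta$ exactly when the assumed $D$ equals the true $D$.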

2. ERRORS IN VARIABLES IN A LOGIT MODEL OF DISCRIMINATION

In model (3) the dependent variable is a linear function of the explanatory variables. In models of lending decisions the dependent variable is dichotomous: y = 1 if the applicant is denied a loan, and y = 0 if the applicant is accepted. In this case the linear formulation in (3) is unattractive (Maddala 1983). A common alternative is the logit model, shown here without errors in variables:

$\Pr[y = 1 \mid x, z] = G(\beta' x + \alpha z),$ (5a)

$G(t) = 1/(1 + e^{-t}),$ (5b)

where $x$ is a vector of characteristics influencing creditworthiness. The empirically relevant case has $\beta < 0$, so applicants who are more creditworthy are less likely to be denied loans. A value of $\alpha > 0$ would indicate discrimination against minorities: a minority applicant is approximately $\alpha(1 - G)$ times more likely than an identical white applicant to be denied a loan.(5)

The parameters $\alpha$ and $\beta$ can be estimated by the method of maximum likelihood. The log likelihood function for a sample of $n$ observations $\{y_i, x_i, z_i,\ i = 1, \ldots, n\}$ is

$\log L = \sum_{i=1}^{n} \log \Pr(y_i \mid x_i, z_i) + \sum_{i=1}^{n} \log \Pr(x_i, z_i),$ (6)

where

$\Pr(y_i \mid x_i, z_i) = G(\beta' x_i + \alpha z_i)^{y_i}[1 - G(\beta' x_i + \alpha z_i)]^{1 - y_i}.$

Estimators are found by choosing parameter values that maximize $\log L$. The likelihood depends on the parameters of the conditional distribution in (5) as well as on the "nuisance parameters" governing the unconditional distribution of $(x, z)$. Since the nuisance parameters appear only in the second sum in (6), while $\alpha$ and $\beta$ appear only in the first sum, $\alpha$ and $\beta$ can be estimated in this case without estimating the nuisance parameters.
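
A compact sketch of this estimation (our code; the data-generating values are illustrative) maximizes the first sum in (6) directly:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, y, X):
    """Negative of the first sum in (6): -sum_i log Pr(y_i | x_i, z_i)."""
    t = X @ params                        # t_i = beta' x_i + alpha z_i
    log_G = -np.logaddexp(0.0, -t)        # log G(t), numerically stable
    log_1mG = -t + log_G                  # since 1 - G(t) = e^{-t} G(t)
    return -np.sum(y * log_G + (1.0 - y) * log_1mG)

rng = np.random.default_rng(0)
n = 10_000
z = rng.integers(0, 2, n).astype(float)
x = rng.normal(-2.0 * z, 1.0)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(x)))     # true beta = -1, alpha = 0

est = minimize(neg_loglik, np.zeros(2), args=(y, np.column_stack([x, z])))
print("(b, a) =", est.x.round(3))                # approximately (-1, 0)
```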

Under errors in variables, (5a) is replaced with

$\Pr[y = 1 \mid x^*, z] = G(\beta' x^* + \alpha z),$ (7)

where [x.sup.*] is the vector of true characteristics. The resulting log likelihood function is

$\log L = \sum_{i=1}^{n} \log \int \Pr(y_i \mid x^*, z_i)\Pr(x_i \mid x^*)\Pr(x^*, z_i)\,dx^*.$ (8)

The likelihood function now depends on $\Pr(x \mid x^*)$, the probability that $x$ is observed if the vector of true characteristics is $x^*$. Since $x - x^*$ is the vector of measurement errors, $\Pr(x \mid x^*)$ is the probability distribution governing the measurement error. In the linear model (3) the least squares estimators could be corrected algebraically for measurement error of known variance. In the logit model, however, there is no simple way to adjust maximum likelihood estimators for errors in variables, since the regression function is nonlinear. Instead, we must estimate $\alpha$ and $\beta$ for each distinct assumption about $\Pr(x \mid x^*)$.

Unlike the one in (6), the log likelihood function in (8) is not separable in the nuisance parameters of the distribution $\Pr(x^*, z)$. Even if we posit an error distribution $\Pr(x \mid x^*)$, estimating $\alpha$ and $\beta$ requires estimating the parameters of $\Pr(x^*, z)$ as well. The estimation of these nuisance parameters will be sidestepped here by maximizing the conditional likelihood function

$\log L_c = \sum_{i=1}^{n} \log \Pr(y_i \mid x_i, z_i), \qquad \Pr(y_i \mid x_i, z_i) = \int \Pr(y_i \mid x^*, z_i)\Pr(x^* \mid x_i, z_i)\,dx^*.$ (9)

We will assume that $\Pr(x^* \mid x, z)$, the distribution of true characteristics conditional on observed characteristics and race, is known.

Our model is completed by adding specific assumptions about the distributions $\Pr(x \mid x^*)$ and $\Pr(x^* \mid z)$, which will allow us to derive $\Pr(x^* \mid x, z)$. We will maintain the assumptions embodied in (3b) and (3c):

$x^* = x_0 + \mu z + u,$ (10a)

$x = x^* + e,$ (10b)

where $\beta$, $\mu$, and $x_0$ are $k \times 1$ parameter vectors and where $u$ and $e$ are normally distributed random vectors, independent of $v$, $z$, and each other, with zero means and covariance matrices $\Sigma^*$ and $D$. Given $x$ and $z$, $x^*$ is then normally distributed with mean vector $m^*$ and covariance matrix $S^*$, where

$m^* = D\Sigma^{-1}\mu z + (I - D\Sigma^{-1})x,$ (11a)

$S^* = (I - D\Sigma^{-1})D.$ (11b)

With this result in hand, we find that, conditional on $x$ and $z$, the argument of $G$ is normally distributed with mean $\beta' m^* + \alpha z$ and variance $\beta' S^* \beta$. Therefore, the likelihood in (9) can be written as

$\Pr(y = 1 \mid x, z) = \int G(m + \sigma s)(2\pi)^{-1/2}\exp(-s^2/2)\,ds,$ (12)

where

$m = \beta'(I - D\Sigma^{-1})x + (\alpha + \beta' D\Sigma^{-1}\mu)z,$

$\sigma = [\beta'(I - D\Sigma^{-1})D\beta]^{1/2}.$

When $D = 0$, $m$ collapses to $\beta' x + \alpha z$ and $\sigma = 0$, which is the error-free model.(6)
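
Because the integral in (12) is one-dimensional with a Gaussian weight, it can be evaluated by Gauss-Hermite quadrature. The sketch below (our code; the function name and node count are our choices) computes (12) for given parameters:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def pr_deny(x, z, alpha, beta, mu, Sigma, D, n_nodes=40):
    """Evaluate (12): Pr(y = 1 | x, z) = E[G(m + sigma*s)], s ~ N(0, 1).

    With Gauss-Hermite nodes t_j and weights w_j (weight function e^{-t^2}),
    E[G(m + sigma*s)] = (1/sqrt(pi)) * sum_j w_j G(m + sigma*sqrt(2)*t_j).
    """
    A = np.eye(len(beta)) - D @ np.linalg.inv(Sigma)          # I - D Sigma^{-1}
    m = beta @ (A @ x) + (alpha + beta @ D @ np.linalg.solve(Sigma, mu)) * z
    sigma = np.sqrt(beta @ (A @ D) @ beta)
    t, w = hermgauss(n_nodes)
    G = 1.0 / (1.0 + np.exp(-(m + sigma * np.sqrt(2.0) * t)))
    return float(w @ G) / np.sqrt(np.pi)

# Example with k = 1; setting D = 0 reduces this to the error-free logit probability.
print(pr_deny(np.array([0.5]), 1.0, 0.0, np.array([-1.0]),
              np.array([-2.0]), np.array([[1.0]]), np.array([[0.1]])))
```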

Because of the nonlinearity of G, the logit model can potentially be identified without error-variance assumptions, unlike the linear model in Section 1. Thus, in principle, the error-variance parameters could be estimated rather than imposed. In practice, however, the model is so close to linear that the error-variance parameters cannot be estimated; even large samples are uninformative about D. We therefore recommend estimating the model under a range of alternative error-variance assumptions.

To summarize the procedure: first, calculate least squares estimators of the parameters $x_0$, $\mu$, and $\Sigma$. These estimates are treated as fixed and combined with an assumed $D$ to obtain the distribution $\Pr(x^* \mid x, z)$, which is used in (12) and (9) to obtain maximum likelihood estimates of $\alpha$ and $\beta$. This procedure treats the error variance $D$ as known, just as the error-free model treats $D$ as identically zero. Estimates of $\alpha$ can then be traced out under alternative assumptions on $D$.(7)
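
The sketch below (our Python code, with illustrative helper names and the $k = 1$ design of Section 3) carries out the whole procedure: generate data from (7) and (10), estimate the nuisance parameters by least squares, and maximize the conditional likelihood (9), with $\Pr(y \mid x, z)$ evaluated via (12), under a range of assumed error variances:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, k = 10_000, 1
alpha, beta = 0.0, np.array([-1.0])                    # true parameters
mu_true, sig_u2, D_true = np.array([-2.0]), 0.9, 0.1   # so Sigma = V(x|z) = 1

# Generate data from (10) and (7), with x_0 normalized to zero.
z = np.repeat([0.0, 1.0], n // 2)
x_star = np.outer(z, mu_true) + rng.normal(0.0, np.sqrt(sig_u2), (n, k))
x = x_star + rng.normal(0.0, np.sqrt(D_true), (n, k))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(x_star @ beta + alpha * z))))

# Step 1: least squares estimates of the nuisance parameters mu and Sigma.
mu_hat = x[z == 1].mean(0) - x[z == 0].mean(0)
resid = x - np.outer(z, mu_hat) - x[z == 0].mean(0)
Sigma_hat = np.atleast_2d(np.cov(resid.T))

t_nodes, w_nodes = hermgauss(40)

def neg_loglik(params, D):
    """Negative conditional log likelihood (9), with Pr(y | x, z) from (12)."""
    a, b = params[0], params[1:]
    A = np.eye(k) - D @ np.linalg.inv(Sigma_hat)
    m = x @ (A.T @ b) + (a + b @ D @ np.linalg.solve(Sigma_hat, mu_hat)) * z
    s = np.sqrt(max(float(b @ (A @ D) @ b), 0.0))
    G = 1.0 / (1.0 + np.exp(-(m[:, None] + s * np.sqrt(2.0) * t_nodes)))
    p1 = np.clip(G @ w_nodes / np.sqrt(np.pi), 1e-12, 1.0 - 1e-12)
    return -np.sum(y * np.log(p1) + (1.0 - y) * np.log(1.0 - p1))

# Step 2: trace out the estimates under alternative assumed error variances.
for d in (0.0, 0.05, 0.1):
    res = minimize(neg_loglik, np.zeros(k + 1), args=(np.diag([d]),))
    print(f"assumed D = {d:.2f}:  a = {res.x[0]:+.4f},  b = {res.x[1]:+.4f}")
```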

Our procedure will misstate the uncertainty about parameter estimates, even conditioning on $D$. By implicitly assuming that the estimated parameters $x_0$, $\mu$, and $\Sigma$ are known, we are neglecting their sampling variability. These parameters appear in (12) and thus influence the estimates of $\alpha$ and $\beta$, so our procedure misstates the sampling variability of those estimates as well. When $D = 0$, the nuisance parameters disappear from (12), and this problem does not arise.(8)

3. EXAMPLES

In the examples in this section, we apply our procedure to artificially generated data to show how the technique can detect the sensitivity of parameter estimates to errors in variables. Artificial data allow us to isolate important features of the errors-in-variables model for a wide array of cases. Observations are randomly generated under a given, true error variance, and the model is then estimated under various hypothesized error variances.

In the simplest case there is only one explanatory variable besides race ($k = 1$). We assume $\alpha = 0$, $\beta = -1$, $\mu = -2$, and $\Sigma = 1$. (We focus on the no-discrimination case, $\alpha = 0$, solely for convenience.) In this case, if $a$ is significantly different from zero, then it is also significantly greater than $\alpha$, and the usual t-statistic on $a$ will also show whether $a$ is significantly biased. The sample was assumed to be half white ($z = 0$) and half minority ($z = 1$). Using these values and a true error variance $D$, we generated 10,000 random observations on $x^*$, $x$, and $y$ using equations (7) and (10). We then estimated the model using maximum likelihood, assuming that the true values of $\mu$ and $\Sigma$ were known and imposing an assumed error variance $\hat{D}$ (not necessarily the same as the true $D$). The results are displayed in Table 1. The sample size of 10,000 was chosen to reduce sampling variance.

For the estimates shown in Panel A of Table 1, the true variance of the measurement error is $D = 0.1$. This represents one-tenth of the total variance in observed $x$, a relatively modest amount. The first line reports estimation under the (incorrect) assumption that the error variance is zero. As expected, the estimate $b$ is biased toward zero. Consequently, $a$ is biased upward, toward showing discrimination, and is significant.
Table 1  Coefficient Estimates for Alternative Error-Variance Assumptions, k = 1

($\mu = -2$, $\Sigma = 1$, $n = 10,000$)

A. True parameters $\alpha = 0$, $\beta = -1$, and $D = 0.1$:

  Assumed $\hat{D}$          a             b
  0.0                   0.1446       -0.9208
                       (2.4380)    (-32.3477)
  0.05                  0.0482       -0.9775
                       (0.7780)    (-31.8322)
  0.1                  -0.0607       -1.0418
                      (-0.9308)    (-31.2626)

B. True parameters $\alpha = 0.1$, $\beta = -0.9$, and $D = 0.0$:

  Assumed $\hat{D}$          a             b
  0.0                   0.1609       -0.9260
                       (2.7101)    (-32.4378)
  0.05                  0.0640       -0.9832
                       (1.0315)    (-31.9159)
  0.1                  -0.0456       -1.0480
                      (-0.6986)    (-31.3393)

Notes: t-statistics are shown in parentheses beneath the coefficient estimates. For each panel, we drew a set of 10,000 random realizations for (y, x): 5,000 with z = 0 and 5,000 with z = 1. Within each panel, estimation was performed on the same data set with different assumptions about the error variance $\hat{D}$.


The last two lines in Panel A show estimates assuming positive error variance. For larger values of $\hat{D}$, $b$ is closer to its true value of $-1$ and $a$ is closer to its true value of zero. The discrimination parameter is not significantly different from zero when estimated assuming $\hat{D} = 0.05$ or $0.1$. In this case, then, our procedure successfully detects the sensitivity of parameter estimates to errors in variables.

In Panel B we examine the case in which no measurement error is present and the true discrimination parameter is positive. The (correct) assumption of no measurement error now yields estimates that are unbiased; they differ from the true parameters only because of sampling error. Imposing the (incorrect) assumption of positive measurement error variance "undoes" a nonexistent bias, pushing $a$ toward zero and making $b$ too large in magnitude.

Table 2 shows how the magnitude of the bias varies with the correlation between the components of $x$ when $k = 2$. $\Sigma$ has diagonal elements equal to one and off-diagonal elements equal to a scalar $\rho$, where $-1 < \rho < 1$. $D$ has diagonal elements all equal to 0.1; the independent variables other than race suffer from measurement error of the same variance. We maintain $\alpha = 0$, $\beta = (-1, -1)'$, and $\mu = (-2, -2)'$. [Tabular data for Table 2 omitted in the source.] Panel A shows that when the components of $x$ are uncorrelated, the bias is larger than in the comparable $k = 1$ model: 0.43 versus 0.14. When the components of $x$ are positively correlated ($\rho = 0.5$), the bias is smaller by almost a third but is still significant. When the components of $x$ are negatively correlated ($\rho = -0.5$), the bias is substantially larger. Thus the bias in $a$ varies negatively with $\rho$, just as the linear case suggested. A positive value of $\rho$ implies that measurement error in $x_1$ biases the coefficient on $x_2$ away from zero, counteracting the effect of measurement error in $x_2$. Although $b_i$ is biased toward zero by measurement error in $x_i$, the bias is somewhat offset by the effects of measurement error in other components of $x$.

When $k = 1$, the direction of bias is determined entirely by the sign of $\beta\mu$. When $k > 1$, the direction of bias depends on $\Sigma$ and $D$, even when $\beta'\mu$ can be signed. Table 3 illustrates this fact for $k = 2$, showing a set of parameters for which $a$ is biased against finding discrimination. Both $x_1$ and $x_2$ are plagued by measurement error, but with a strong positive correlation between the two, each has a dampening effect on the bias in the coefficient of the other variable. The net bias in $b_2$ is toward zero, but $b_1$ is biased away from zero. Since $x_1$ is more strongly correlated with $z$, the net effect is a negative bias in $a$. With the correct error-variance assumption, the model detects the lack of discrimination.
Table 3  Coefficient Estimates for Alternative Error-Variance Assumptions, k = 2

[The true parameter values for this table are omitted in the source.]

  Assumed $\hat{D}$          a          $b_1$          $b_2$
  0.0                  -0.2445       -0.2352        -0.7703
                      (-3.4442)     (-7.0602)     (-21.9616)
  0.1                   0.0312       -0.0887        -0.9962
                       (0.2888)     (-1.6430)     (-17.3444)

Notes: t-statistics are shown in parentheses beneath the coefficient estimates. We drew a set of 10,000 random realizations for (y, x): 5,000 with z = 0 and 5,000 with z = 1. Estimation was performed on the same data set under each error-variance assumption.


In Table 4 we display results for a model with $k = 10$, a size closer to that of the data sets encountered in actual practice. With $\rho = 0$, we see in Panel A that with more correlates plagued by measurement error, the bias in $a$ is larger. With $\rho = 0.5$, the various measurement errors partially offset each other, but $a$ remains significantly biased. Once again, our technique faithfully compensates for known measurement error.

4. SUMMARY

We have described a method for estimating logit models of discrimination under a range of assumptions about the magnitude of errors in variables. Using artificially generated data, we showed how the bias in the discrimination coefficient varies with measurement error and other basic model parameters. Our method successfully corrects for known measurement error and can gauge the sensitivity of parameter estimates to errors in variables. It can be applied to the studies of lending discrimination cited in the introduction, as well as to the empirical models employed in lending discrimination suits and regulatory examinations. Since the stakes are high in such applications, the models ought to be routinely tested for sensitivity to errors in variables.

Further extensions of our method would be worthwhile. Although we allow for errors only in continuous-valued independent variables, studies of lending discrimination often include discrete variables that are likely to be fallible as well. It would be worthwhile to allow for errors in the discrete variables, as Klepper (1988a) does for the linear regression model. In addition, it would be useful to allow for uncertainty about the nuisance distributional parameters that our method treats as known.
Table 4  Race Coefficient Estimates for Alternative Correlation and Error-Variance Assumptions, k = 10

$\alpha = 0$; $\beta$ is a $k \times 1$ vector of $-1$s; $\mu$ is a $k \times 1$ vector of $-1$s; $\Sigma$ is a $k \times k$ matrix with 1s on the diagonal and off-diagonal elements equal to $\rho$; $D$ is a $k \times k$ matrix with 0.1s on the diagonal and off-diagonal elements equal to 0; $\hat{D}$ is a $k \times k$ matrix with the assumed error variance on the diagonal and off-diagonal elements equal to 0; and $n = 10,000$.

A. True parameter $\rho = 0$:

  Assumed $\hat{D}$          a
  0.0                   1.0033
                       (3.3154)
  0.1                  -0.0339
                      (-0.1006)

B. True parameter $\rho = 0.5$:

  Assumed $\hat{D}$          a
  0.0                   0.2266
                       (3.4658)
  0.1                   0.0645
                       (0.5988)

Notes: t-statistics are shown in parentheses beneath the coefficient estimates. For each panel, we drew a set of 10,000 random realizations for (y, x): 5,000 with z = 0 and 5,000 with z = 1. Within each panel, estimation was performed on the same data set.


1 The classical errors-in-variables model is not the only one in which observed variables, taken together, are fallible measures of true creditworthiness. Alternatives include "multiple-indicator" models in which observed variables are fallible measures of a single index of creditworthiness, and "omitted-variable" models in which some determinants of creditworthiness are unobservable. All are alike in that a component of the true model is unobserved by the econometrician; thus, all are latent-variable models. Because errors in variables is one of the simplest and most widely studied models of fallible regressors, it is a useful starting point in examining fallibility in empirical models of lending discrimination.

2 Interest in the errors-in-variables problem has surged since 1970. As Hausman and colleagues (1995) stated, "During the formative period of econometrics in the 1930's, considerable attention was given to the errors-in-variable[s] problem. However, with the subsequent emphasis on aggregate time series research, the errors-in-variables problem decreased in importance in most econometric research. In the past decade as econometric research on micro data has increased dramatically, the errors-in-variables problem has once again moved to the forefront of econometric research" (p. 206).

3 The exposition in this section is based on Goldberger (1984). This model of salary discrimination has a close parallel in the permanent income theory. Friedman (1957) discusses how racial differences in unobserved permanent income (the counterpart of qualifications in the salary model and creditworthiness in the lending model) bias estimates of racial differences in the consumption function intercept.

4 Klepper and Leamer (1984) and Klepper (1988b) show how to find bounds and other diagnostics for the linear errors-in-variables model.

5 The elasticity of $G$ with respect to $z$ is $\alpha G'/G = \alpha(1 + e^{-t})e^{-t}/(1 + e^{-t})^2 = \alpha e^{-t}/(1 + e^{-t}) = \alpha(1 - G)$, where $G$ is evaluated at $t = \beta' x + \alpha z$.

6 The joint normality of $x$ and $x^*$ given $z$ implies that, given $x$ and $z$, $x^*$ is normal with parameters that can be derived algebraically from the parameters of $\Pr(x \mid x^*)$ and $\Pr(x^* \mid z)$. Other distributional assumptions on $x$ and $x^*$ are far less convenient. For example, when $x^*$ takes on discrete values, a more general approach is required to derive $\Pr(x^* \mid x, z)$. Given a distribution of the observables $\Pr(x, z)$, recover $\Pr(x^* \mid z)$ using $\Pr(x \mid z) = \int \Pr(x \mid x^*)\Pr(x^* \mid z)\,dx^*$, and then use Bayes's rule to obtain $\Pr(x^* \mid x, z) = \Pr(x \mid x^*)\Pr(x^* \mid z)/\Pr(x \mid z)$. The first of these steps involves inverting a very large matrix.

7 In related work, Klepper (1988a) extended the diagnostic results of Klepper and Leamer (1984) and Klepper (1988b) to a linear regression model with dichotomous independent variables. These earlier approaches attempted to characterize the set of parameters that maximize the likelihood function. Levine (1986) extended the results of Klepper and Leamer (1984) to the probit model.

8 Specifically, the Hessian of the log likelihood function is then block diagonal across $(\alpha, \beta)$ and $(x_0, \mu, \Sigma)$.

REFERENCES

Bauer, Paul W., and Brian A. Cromwell. "A Monte Carlo Examination of Bias Tests in Mortgage Lending," Federal Reserve Bank of Cleveland Economic Review, vol. 30 (July/August/September 1994), pp. 27-44.

Berkovec, James, Glenn Canner, Stuart Gabriel, and Timothy Hannan. "Race, Redlining, and Residential Mortgage Loan Performance," Journal of Real Estate Finance and Economics, vol. 9 (November 1994), pp. 263-94.

Cummins, Claudia. "Fed Using New Statistical Tool to Detect Bias," American Banker, June 8, 1994.

Friedman, Milton. A Theory of the Consumption Function. Princeton, N.J.: Princeton University Press, 1957.

Goldberger, Arthur S. "Reverse Regression and Salary Discrimination," Journal of Human Resources, vol. 19 (Summer 1984), pp. 293-318.

Hashimoto, Masanori, and Levis Kochin. "A Bias in the Statistical Estimation of the Effects of Discrimination," Economic Inquiry, vol. 18 (July 1980), pp. 478-86.

Hausman, J. A., W. K. Newey, and J. L. Powell. "Nonlinear Errors in Variables: Estimation of Some Engel Curves," Journal of Econometrics, vol. 65 (January 1995), pp. 205-33.

Kapteyn, Arie, and Tom Wansbeek. "Identification in the Linear Errors in Variables Model," Econometrica, vol. 51 (November 1983), pp. 1847-49.

Klepper, Steven. "Bounding the Effects of Measurement Error in Regressions Involving Dichotomous Variables," Journal of Econometrics, vol. 37 (March 1988a), pp. 343-59.

-----. "Regressor Diagnostics for the Classical Errors-in-Variables Model," Journal of Econometrics, vol. 37 (February 1988b), pp. 225-50.

-----, and Edward E. Leamer. "Consistent Sets of Estimates for Regressions with Errors in All Variables," Econometrica, vol. 52 (January 1984), pp. 163-83.

Levine, David K. "Reverse Regressions for Latent-Variable Models," Journal of Econometrics, vol. 32 (July 1986), pp. 291-92.

Maddala, G. S. Limited-Dependent and Qualitative Variables in Econometrics. Cambridge: Cambridge University Press, 1983.

Munnell, Alicia H., Lynn E. Browne, James McEneaney, and Geoffrey M. B. Tootell. "Mortgage Lending in Boston: Interpreting the HMDA Data," Working Paper Series No. 92. Boston: Federal Reserve Bank of Boston, 1992.

Rao, Potluri. "Some Notes on the Errors-in-Variables Model," American Statistician, vol. 27 (December 1973), pp. 217-28.

Schill, Michael H., and Susan M. Wachter. "Borrower and Neighborhood Racial and Income Characteristics and Financial Institution Mortgage Application Screening," Journal of Real Estate Finance and Economics, vol. 9 (November 1994), pp. 223-39.

Seiberg, Jaret. "When Justice Department Fights Bias by the Numbers, They're His Numbers," American Banker, September 14, 1994.