Payment certainty in discrete choice contingent valuation responses: results from a field validity test.
Welsh, Michael P.
1. Introduction
Markets do not exist to provide the information necessary for
conducting benefit-cost analyses in many public policy decision-making situations. When desired estimates of benefits or costs "are not
manifest in the market" (Arrow 1999, p. vi), economists have
increasingly turned to contingent valuation surveys to elicit the values
that individuals would place on public goods and externalities (Mitchell and Carson 1989; Cropper and Oates 1992; Deacon et al. 1998).
Although discrete choice, take-it-or-leave-it methods of eliciting
preferences have gained favor on theoretical grounds (Arrow et al. 1993;
Carson, Groves, and Machina 1999) and for their realism (Hanemann 1994),
accumulated evidence from a number of laboratory and field contingent
valuation validity studies suggests that these methods overstate actual
willingness to pay (WTP) for private and public goods (e.g., Cummings,
Harrison, and Rustrom 1995; Brown et al. 1996; Cummings et al. 1997;
Balistreri et al. 2001; Champ and Bishop 2001). That is, respondents are
more likely to say "yes" to hypothetical commitments than
actual commitments, reflecting "hypothetical bias" and the
need for "calibrating" contingent valuation responses
(Harrison 2002).
Two recent papers offer possible methods for calibrating
hypothetical discrete-choice responses by considering payment certainty
levels reported by respondents. In what we term the "follow-up certainty question" (FCQ) method, Champ et al. (1997) ask
"yes" dichotomous choice respondents to indicate how certain
they are, on a scale from 1 ("very uncertain") to 10
("very certain"), that they would pay the stated dollar amount
if the program were actually offered. Separate WTP functions are
estimated for each certainty level. Welsh and Poe (1998) instead adopt a
"multiple-bounded discrete choice" (MBDC) approach that
directly incorporates certainty levels through a two-dimensional
decision matrix: One dimension specifies dollar amounts that individuals
would be required to pay on implementation of the policy, and the second
dimension allows individuals to express their level of voting certainty
through "definitely no," "probably no," "not
sure," "probably yes," and "definitely yes"
response options. A multiple-bounded logit model is used to estimate
separate WTP functions for each certainty level.
In this paper, we use a field validity test of contributions to a
green electricity pricing program to further explore these methods and
address several validity issues. First, using actual sign-up data as a
criterion, we derive "optimal" correction strategies for the
two methods. Previous laboratory research on private goods suggests that
"yes" hypothetical dichotomous choice responses from those who
are "definitely sure" (Blumenschein et al. 1998) or at least
"probably sure" (Johannesson et al. 1999) closely predict
actual purchase decisions. Johannesson, Liljas, and Johansson (1998)
find that respondents who are "absolutely sure" of their
decision provide a conservative estimate of real purchases. These
laboratory results are replicated in public goods contingent valuation
field validity research using FCQ methods, suggesting that models that
only use "yes" responses with certainty values on a 1-to-10
scale of "7 and higher" (Ethier et al. 2000), "8 and
higher" (Champ and Bishop 2001), or "10" (Champ et al.
1997) best predict actual contributions. We are the first to provide
correction strategies for the MBDC approach.
Second, we examine if the experimental "classroom"
results reported in Welsh and Poe can be replicated in the field. In
that paper, the authors compare estimated logistic response
distributions from dichotomous choice questions and MBDC "not
sure" responses and find that they are not statistically different.
This suggests that respondents who are uncertain of their values will
tend to "yea-say" when asked a single dichotomous choice
question, a result that has been replicated elsewhere (e.g., Ready,
Navrud, and Dubourg 2001).
Finally, in an examination of convergent validity, we compare the
MBDC and FCQ methods. Specifically, we compare mean WTP, hypothetical
participation rates at $6 (the actual offer price for the program), and
the underlying WTP distributions estimated from various models based on
the two methods, using both parametric and nonparametric estimation techniques. Conceptually, the FCQ and MBDC methods offer alternative
approaches to account for respondent uncertainty in modeling contingent
valuation questions. The primary difference between approaches is that
the MBDC framework incorporates the certainty correction directly into
the discrete choice decision framework, whereas the FCQ method can be
regarded as an ex post adjustment to the dichotomous choice response.
Although these questions seek the same type of information--how certain
an individual is that he or she would actually pay a specified dollar
amount--tests of procedural invariance have not been conducted in either
the field or the laboratory.
2. Certainty Corrections within the Discrete Choice Framework
The questioning approaches examined in this paper build on previous
research indicating that contingent valuation respondents may have a
distribution or range of possible WTP values rather than a single point
estimate. Here we use the term "certainty" in the same sense
as that in Opaluch and Segerson (1989); Dubourg, Jones-Lee, and Loomes
(1994); and Ready, Whitehead, and Blomquist (1995). In this framework,
when the referendum dollar threshold falls at or below the lower end of
the individual's range of WTP values, then the respondent is likely
to be very certain that he or she would vote in favor of the referendum.
At very high amounts, the respondent might be very certain of voting
against the referendum. At intermediate amounts, the respondent is less
certain of how he or she actually would vote, with the level of payment
certainty being inversely related to the dollar amount.
Dichotomous Choice with FCQ
Response certainty in the FCQ framework is incorporated as follows.
Individuals first respond to a standard dichotomous choice (DC)
question. For "yes" respondents, a follow-up question is
asked:
So you think that you would sign up. We would like to know how sure
you are of that. On a scale from "1" to "10," where
"1" is "very uncertain" and "10" is
"very certain," how certain are you that you would sign up and
pay the extra $6 a month if the program were actually offered?
Respondents are asked to circle a response on the 1-to-10 scale. As
empirical evidence suggests that respondents who are uncertain about
their willingness to pay tend to respond "yes" (Champ et al.
1997; Welsh and Poe 1998; Champ and Bishop 2001), a follow-up question
is not asked of "no" respondents. Modeling of this approach
follows well-known DC procedures in which "yes" responses are
recoded for each level of certainty and separate WTP functions are
estimated. For instance, one can code all responses of, say, 7 and
higher as "yes" and all other responses as "no" and
then employ standard DC modeling techniques.
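The recoding rule just described is mechanical. As a minimal sketch (the function name and all data are hypothetical, not the authors' code), the cutoff-based recoding of FCQ responses might look like:

```python
import numpy as np

def recode_fcq(dc_yes, certainty, cutoff):
    """Recode DC responses: a "yes" counts only when the follow-up
    certainty rating meets the cutoff; everything else becomes "no".

    dc_yes:    0/1 dichotomous choice answers.
    certainty: 1-10 rating elicited from "yes" respondents (the value
               for "no" respondents is ignored).
    """
    dc_yes = np.asarray(dc_yes)
    certainty = np.asarray(certainty)
    return ((dc_yes == 1) & (certainty >= cutoff)).astype(int)

# Hypothetical responses: three "yes" answers rated 9, 6, and 10, and
# one "no" (certainty coded 0 as a placeholder).
y7 = recode_fcq([1, 1, 1, 0], [9, 6, 10, 0], cutoff=7)
print(y7)  # [1 0 1 0]
```

The recoded vector then feeds directly into a standard logit or probit WTP model, one model per cutoff.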
MBDC
The MBDC approach contains elements of and builds on both the
payment card (PC) and DC approaches widely used in contingent valuation
studies. In a PC question, respondents are presented with several dollar
values and asked to circle the maximum value they would be willing to
pay. However, rather than circling a single value or interval as an
indication of maximum WTP for the referendum, the MBDC approach provides
a "polychotomous choice" response option including, say,
"definitely no," "probably no," "not
sure," "probably yes," and "definitely yes."
The respondent then chooses a response option for each of the dollar
amounts. In this manner, the context of the good-to-cost trade-off is
expanded beyond traditional DC or PC questions by including additional
dollar amounts and the likelihood of voting yes, respectively. In some
sense, the MBDC model might be thought of as a general framework from
which the DC and the PC techniques can be derived as special cases.
Analysis of WTP data collected using the MBDC technique is
conducted using a multiple-bounded generalization of single- and
double-bounded DC models in which the sequence of proposed dollar values
divides the real number line into intervals (Harpman and Welsh 1999). An
individual's response pattern reveals the interval that contains
his or her WTP at a given level of certainty. Defining [X.sub.iL] as the
maximum amount that the ith individual would vote for and [X.sub.iU] to
be the lowest amount that the ith individual would not vote for,
[WTP.sub.i] lies somewhere in the switching interval [[X.sub.iL],
[X.sub.iU]]. Let F([X.sub.i]; [beta]) denote a statistical distribution
function for [WTP.sub.i] with parameter vector [beta]. The probability
that an individual would vote against a specific dollar amount,
[X.sub.i], is simply F([X.sub.i]; [beta]). Therefore, the probability that a
respondent would vote "yes" at a given dollar amount,
[X.sub.i], is 1 - F([X.sub.i]; [beta]). The probability that [WTP.sub.i]
falls between the two price thresholds, [X.sub.iL] and [X.sub.iU], is
F([X.sub.iU]; [beta]) - F([X.sub.iL]; [beta]), resulting in the
following log-likelihood function:
lnL = [summation over (i = 1 to n)] ln[F([X.sub.iU]; [beta]) -
F([X.sub.iL]; [beta])].
When the respondent says "yes" to every amount,
[X.sub.iU] = +[infinity]. Likewise, when the respondent says "no" to every
amount, [X.sub.iL] = -[infinity]. It should be apparent that the previous
equation represents the log-likelihood function for discrete choice
models in general, including the DC model (Welsh and Poe 1998). This
likelihood function also parallels that used for analysis of interval
data from payment cards (Cameron and Huppert 1989).
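As an illustrative sketch of this interval-data likelihood (the switching intervals below are hypothetical, a logistic WTP distribution is assumed, and this is not the authors' actual estimation code):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import logistic

# Hypothetical switching intervals (X_iL, X_iU) for a few respondents;
# -inf / +inf encode all-"no" and all-"yes" response patterns.
lower = np.array([1.0, 2.0, 6.0, -np.inf, 9.0, 0.5])
upper = np.array([2.0, 4.0, 9.0, 0.10, np.inf, 1.0])

def neg_log_lik(theta):
    """-lnL = -sum_i ln[F(X_iU; beta) - F(X_iL; beta)] for a logistic
    WTP distribution; the scale is exponentiated to keep it positive."""
    loc, log_scale = theta
    scale = np.exp(log_scale)
    p = logistic.cdf(upper, loc, scale) - logistic.cdf(lower, loc, scale)
    return -np.sum(np.log(np.clip(p, 1e-300, 1.0)))

res = minimize(neg_log_lik, x0=np.array([3.0, 1.0]), method="Nelder-Mead")
loc_hat, scale_hat = res.x[0], np.exp(res.x[1])
print(np.isfinite(res.fun))  # finite maximized lnL
```

The same likelihood covers the single-bounded DC model (one threshold per respondent) and payment-card interval data as special cases, as the text notes.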
Within this framework, WTP functions can be estimated based on any
of the voting certainty levels. For example, a "definitely
yes" model sets the lower end of the switching interval at the
highest amount for which the individual chose the "definitely
yes" response category and the upper end of the switching interval
at the next dollar threshold.
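The interval construction for a chosen certainty level can be sketched as follows (function name and response pattern are hypothetical; response codes 1-5 stand for "definitely no" through "definitely yes"):

```python
import math

def switching_interval(amounts, responses, level):
    """Switching interval [X_iL, X_iU] for one respondent at a given
    certainty level.

    amounts:   ascending dollar thresholds shown to the respondent.
    responses: chosen option at each amount, coded 1 (definitely no)
               through 5 (definitely yes).
    level:     minimum code counted as "yes" (5 for the "definitely
               yes" model, 4 for "probably yes", and so on).
    """
    yes_idx = [i for i, r in enumerate(responses) if r >= level]
    if not yes_idx:
        # "No" at every amount: X_iL = -infinity, X_iU = lowest amount.
        return (-math.inf, amounts[0])
    top = max(yes_idx)
    # X_iL is the highest amount answered at or above the level;
    # X_iU is the next threshold, or +infinity if every amount qualifies.
    upper = amounts[top + 1] if top + 1 < len(amounts) else math.inf
    return (amounts[top], upper)

# Hypothetical response pattern over the survey's dollar amounts.
amounts = [0.10, 0.50, 1, 1.50, 2, 3, 4, 6, 9, 12, 20, 45, 95]
resp = [5, 5, 5, 4, 4, 3, 3, 2, 1, 1, 1, 1, 1]
print(switching_interval(amounts, resp, level=5))  # (1, 1.5)
print(switching_interval(amounts, resp, level=4))  # (2, 3)
```

Raising the level shifts both bounds downward for a monotone response pattern, which is why the stricter models imply lower WTP.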
3. Description of Data
Data for this paper are taken from a field validity study that
collected actual and hypothetical participation commitments to a green
electricity program that would fund investments in renewable energy. In
1995-1996, the Niagara Mohawk Power Corporation (NMPC), a public utility
in New York State, launched Green Choice[TM], the largest program in the
country for the green pricing of electricity (Holt 1997). NMPC's
1.4 million households were offered the opportunity to fund a green
electricity program that would invest in renewable energy projects
(e.g., landfill gas reclamation, wind power) as substitutes for
traditional energy sources and a tree planting program. Such green
pricing programs have generated substantial interest as utilities come
under increasing pressure to provide alternative sources of electricity
for customers who prefer environmentally friendly energy sources (Wiser,
Bolinger, and Holt 2000).
Building on the mechanism design recommended by Schulze (1994),
NMPC's Green Choice provision mechanism incorporated three key
features: a provision point, a money-back guarantee, and extended
benefits if excess funds are collected. NMPC customers had the option of
signing up for the program at a fixed cost of $6 per month, paid through
a surcharge on their electricity bill. If at least $864,000 (the
provision point) is collected in the first year, the program is
implemented. NMPC would then plant 50,000 trees and fund a landfill gas
project that could replace fossil fuel-generated electricity for 1,200
homes. However, if participation were less than $864,000, NMPC would
cancel the program and refund all the money that was collected. Any
funds collected in excess of the provision point would be applied toward
increasing the scope of the program by planting additional trees and
hence would extend benefits. The characteristics of the program itself
were based on prior market research for NMPC (Wood et al. 1994). The
improved demand revelation characteristics of the program's funding
mechanism relative to the standard voluntary contributions mechanisms
used in prior field validity research (e.g., Champ et al. 1997) are
further discussed in an experimental context in Rondeau, Schulze, and
Poe (1999) and Rose et al. (2002). The Rose et al. paper provides
additional information on the actual NMPC program and participation
levels. In addition, Marks and Croson (1998) provide a detailed
discussion of alternative rebate rules for excess contributions and
demonstrate empirically that the extended benefits approach used in this
research leads to higher contribution rates than no rebate and
proportional rebate alternatives.
In the summer of 1996, a telephone survey was conducted using a
random sample of households with listed telephone numbers from the NMPC
service territory within Erie County. Participants in the phone survey
were offered the opportunity to actually sign up for the program at $6
per month, with the charge to appear on their monthly bill. This
sign-up-now/pay-later approach follows standard green pricing methods
(Holt 1997). Furthermore, the phone solicitation corresponded with the
"keep it simple" approach adopted by NMPC, which allowed
either phone or mail sign-ups. Because of restrictions by the New York
public utilities commission, only a single actual sign-up price of $6
per month was allowed.
In the fall of 1996, a split-sample mail survey was conducted using
the same sample population and involved separate DC and MBDC
questionnaires in which respondents were asked, hypothetically, whether
they would participate in the Green Choice program. Various dollar
values were employed, using established bid design methods. In the DC
questionnaire, individuals were asked whether they "would sign up
for the program if it cost you $___ per month," where the dollar
amount was randomly assigned across respondents to be 50 cents, $1, $2,
$4, $6, $9, or $12. If they answered "yes," they were asked
the follow-up certainty question described previously. MBDC respondents
were asked if they "would join the Green Choice program if it would
cost you these amounts each month": 10 cents, 50 cents, $1, $1.50,
$2, $3, $4, $6, $9, $12, $20, $45, or $95. At each amount, respondents
were asked to make a "definitely no," "probably no," "not
sure," "probably yes," or "definitely yes"
response choice. Copies of the questionnaires are available from the
authors. Appendix A provides copies of the survey questions. Appendix B
provides the distributions of responses to the actual choice, DC, and
MBDC questions.
Implementation of the survey instruments followed the Dillman Total
Design Method (Dillman 1978). The survey was pretested by administering
successive draft versions by phone until respondents clearly understood
the instrument. Established multiple contact survey techniques,
including a $2 incentive, were used in all versions with Cornell
University as the primary correspondent. A private survey research firm,
Hagler Bailly, Inc., administered all versions. After adjusting for
"list errors" (undeliverables, not NMPC customers, moved out
of area, and deceased), adjusted response rates for the hypothetical
mail surveys were 66% for the MBDC version and 67% for the DC with
follow-up certainty question. The adjusted response rate for the
telephone survey was just over 70%. These response rates approximate the
70% response rate guideline established by the NOAA panel report (Arrow
et al. 1993).
In each survey version, respondents were first screened to
establish that they were NMPC customers and to determine their previous
knowledge of the Green Choice program. A description of this program
followed, with questions to aid the respondents' understanding. The
program description followed the NMPC Green Choice brochure as closely
as possible and emphasized various components of the good (trees and
renewable energy) and the provision point mechanism. The description was
followed by either an actual choice or a CV question, and the survey
concluded with demographic questions.
As shown in Table 1, contingency table analysis indicates that the
observable demographic characteristics of survey respondents (age,
gender, income, completion of a college degree, and whether the
respondent has contributed to any environmental group in the last two
years) are not statistically different across the three sample groups at
the 5% significance level. (1) Hence, any procedural variance observed
can be attributed to how respondents answer different questions and not
to sample selection.
4. Empirical Results
Logistic response functions for the FCQ responses are reported in
the top portion of Table 2. Corresponding estimates for MBDC responses
are presented in Table 3. Estimates of participation at $6 (the cost of
actually signing up) and mean WTP estimates for nonnegative values,
following Hanemann (1984, 1989), are reported for each model.
Ninety-five percent confidence intervals for the participation and mean
WTP estimates from the parametric models are estimated using the Krinsky
and Robb (1986) procedure with 10,000 random draws. In the bottom
portion of Tables 2 and 3, nonparametric estimates of participation at
$6 and mean WTP are calculated using Kristrom's (1990) approach.
(2) Confidence intervals for the nonparametric estimates are obtained by
creating 10,000 normally distributed random draws using the mean and
variance of the estimates. In the following subsections, we examine
different hypotheses about criterion validity, replicability, and
convergent validity, using the participation rates at $6, mean WTP
estimates, and WTP distributions as the respective measures of interest.
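The Krinsky and Robb simulation described above can be sketched in a few lines (the coefficient vector and covariance matrix below are hypothetical placeholders for fitted logit estimates, not values from this study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logit estimates: constant and bid coefficient, with a
# plausible covariance matrix; in practice both come from the fitted
# DC or MBDC model.
beta_hat = np.array([1.2, -0.35])
cov_hat = np.array([[0.040, -0.005],
                    [-0.005, 0.002]])

# Krinsky-Robb: draw parameter vectors from the estimated asymptotic
# normal distribution and recompute the welfare measure for each draw.
draws = rng.multivariate_normal(beta_hat, cov_hat, size=10_000)

# Hanemann's nonnegative mean WTP for a logistic model: ln(1 + e^a) / b,
# where b is the (positive) magnitude of the bid coefficient.
mean_wtp = np.log1p(np.exp(draws[:, 0])) / -draws[:, 1]

# 95% confidence interval from the empirical percentiles of the draws.
lo, hi = np.percentile(mean_wtp, [2.5, 97.5])
print(lo < hi)
```

Participation at a given price (e.g., Pr(yes) at $6) gets the same treatment: recompute it for each parameter draw and take percentiles.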
Criterion Validity: A Comparison with Actual Participation
Decisions
In the telephone survey, actual sign-ups were collected, resulting
in a participation rate at $6 of 20.4%. This value serves as a criterion
for assessing the predictive power of each method. It should be noted
that the actual participation rates used here greatly exceed expected
sign-ups for green electricity programs in the field because our sample
is, by necessity, completely aware of the existence of the program. Such
100% awareness greatly differs from the limited consumer awareness
typically associated with green pricing programs. Also, a potential
concern is the possible differences between phone and mail elicitation methods. Phone contingent valuation responses were collected as part of
a larger research effort (see Ethier et al. 2000), and comparability
between hypothetical phone and mail responses suggests that the
differences in elicitation formats are not a problem.
Using the 20.4% actual sign-up rate as the reference criterion, we
see that the MBDC "probably yes" (parametric: 19.8%;
nonparametric: 17.8%) and DC Cert [greater than or equal to] 7
(parametric: 22.0%; nonparametric: 19.3%) models are the closest
predictors of actual sign-ups. To assess significance, a distribution of
actual participation was simulated using the binomial distribution, and
the convolutions method (Poe, Severance-Lossin, and Welsh 1994) was
employed to compare distributions. These methods indicate that the
Pr(yes) at $6 for the MBDC "probably yes" model are not
significantly different from the actual participation rate (parametric
[[p.sub.p]]: [p.sub.p] = 0.903; nonparametric [[p.sub.np]]: [p.sub.np] =
0.539). The DC Cert [greater than or equal to] 6 ([p.sub.p] = 0.310;
[p.sub.np] = 0.805), DC Cert [greater than or equal to] 7 ([p.sub.p] =
0.682; [p.sub.np] = 0.789), and DC Cert [greater than or equal to] 8
([p.sub.p] = 0.532; [p.sub.np] = 0.306) models were also not significantly
different from actual participation rates, although the DC Cert [greater
than or equal to] 7 provides the best predictor under both the
parametric and nonparametric specifications. All other comparisons of
calibrated hypothetical responses with actual responses are
significantly different at the 5% level.
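The simulation-and-comparison step just described can be approximated as follows. This sketch pairs binomial draws of the actual rate (29 of 142 phone sign-ups, per Appendix B) with hypothetical model draws; the model distribution and the exact tail rule are illustrative assumptions in the spirit of the convolutions method, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws = 10_000

# Simulated distribution of the actual participation rate: 29 of 142
# phone respondents signed up (20.4%), resampled as binomial shares.
n, p_hat = 142, 29 / 142
actual = rng.binomial(n, p_hat, size=n_draws) / n

# Hypothetical simulated Pr(yes) at $6 for a calibrated model, e.g.
# Krinsky-Robb output; a normal distribution is used here purely for
# illustration.
model = rng.normal(loc=0.198, scale=0.025, size=n_draws)

# Convolutions-style comparison: form the distribution of paired
# differences and take twice the smaller tail as a two-sided p-value.
diff = model - actual
p_val = 2 * min((diff > 0).mean(), (diff < 0).mean())
print(p_val > 0.05)  # True: the two rates are not significantly different
```

A large p-value here corresponds to the paper's finding that the "probably yes" prediction is statistically indistinguishable from the actual sign-up rate.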
Replication of Welsh and Poe
In their recent empirical investigation, Welsh and Poe found that
DC response patterns corresponded closely with the "not sure"
MBDC model, suggesting that individuals who are unsure about their
response to a dollar amount would tend to vote "yes" to a DC
question. A potential concern about the Welsh and Poe study is that it
was conducted in a classroom setting. Here we examine if these results
are replicated in the field.
In contrast to the Welsh and Poe study, DC values do not correspond
with the MBDC "not sure" model but instead lie between the
point estimates of the "probably yes" and the "not
sure" models. Using the convolutions approach, the null hypothesis of identical mean WTP between the "not sure" model and the DC
model is rejected for both the parametric and nonparametric
specifications ([p.sub.p] < 0.001; [p.sub.np] = 0.000). Equality of
mean WTP between the DC and the "probably yes" ([p.sub.p] <
0.001; [p.sub.np] < 0.001) and "definitely yes" ([p.sub.p]
= 0.000; [p.sub.np] = 0.000) models is also rejected. The Pr(yes) at $6
from the DC models are also significantly different from the
"definitely yes" ([p.sub.p] = 0.000; [p.sub.np] = 0.000),
"probably yes" ([p.sub.p] < 0.001; [p.sub.np] 0.001), and
"not sure" ([p.sub.p] < 0.001; [p.sub.np] = 0.403) model
estimates except when the nonparametric DC and "not sure"
model values are compared. The correspondence between the nonparametric
DC and "not sure" model is coincidental, however, as the
empirical cumulative distribution functions are in fact quite different. Using
the Smirnov test (Conover 1980), we reject the null hypothesis of
identical distributions (D = 0.151, p < 0.01). Thus, although our
specific results do not concur with those of Welsh and Poe, the critical
message from their article remains: DC response patterns correspond with
values that have a relatively low level of voting certainty.
Convergent Validity: Comparing Certainty Corrections across Methods
We now compare certainty corrections across the FCQ and MBDC
methods. For example, does the "definitely yes" response to
the MBDC question format correspond with high levels of certainty in the
FCQ, and so on? Consistent with expectations, mean WTP and the Pr(yes)
at $6 are inversely related to the certainty level. A comparison of these models
indicates that the MBDC "definitely yes" model corresponds
closely with the DC Cert [greater than or equal to] 9 model (mean WTP:
[p.sub.p] = 0.820, [p.sub.np] = 0.532; Pr(yes) at $6: [p.sub.p] = 0.100;
[p.sub.np] = 0.595). The mean WTP and Pr(yes) at $6 of the MBDC
"probably yes" parametric and nonparametric models most
closely corresponds with the DC Cert [greater than or equal to] 7 models
(mean WTP: [p.sub.p] = 0.852, [p.sub.np] = 0.624; Pr(yes) at $6:
[p.sub.p] = 0.468; [p.sub.np] = 0.699) and are also not statistically
different at the 5% level from the DC Cert [greater than or equal to] 6
(mean WTP: [p.sub.p] = 0.283, [p.sub.np] = 0.562; Pr(yes) at $6:
[p.sub.p] = 0.145; [p.sub.np] = 0.330) and DC Cert [greater than or
equal to] 8 models (mean WTP: [p.sub.p] = 0.176, [p.sub.np] = 0.010;
Pr(yes) at $6: [p.sub.p] = 0.527; [p.sub.np] = 0.650) except when mean
WTP is compared between the "probably yes" and Cert [greater
than or equal to] 8 nonparametric models. As indicated earlier, the
"not sure" model already exceeds the standard DC estimates and
thus is not comparable to any of the corrected measures. In general, the
models that are good predictors of the actual participation rate--the
"probably yes" model and the DC Cert [greater than or equal
to] 6, DC Cert [greater than or equal to] 7, and DC Cert [greater than
or equal to] 8 models--seem to correspond closely with each other.
Even though it appears that there is a close correspondence between
MBDC and DC models in terms of their certainty-corrected responses, this
similarity is merely coincidental and dependent on the values (i.e., the
nonnegative mean WTP and Pr(Yes) at $6) examined. Using the Smirnov
test, the equality of the "definitely yes" and DC Cert
[greater than or equal to] 9 nonparametric distributions is strongly
rejected ([D.sub.np] = 0.189, p < 0.01) even though we found equality
between the nonnegative mean WTP and Pr(yes) at $6. Using a
Kolmogorov-Smirnov test (Conover 1980), the equality of the parametric
distributions for these same models is also rejected ([D.sub.p] = 0.214,
p < 0.01). Equality of distributions is likewise strongly rejected
when comparing the "probably yes" model with the DC Cert
[greater than or equal to] 6 ([D.sub.p] = 0.167, p < 0.01;
[D.sub.np] = 0.151, p < 0.01), the DC Cert [greater than or equal to]
7 ([D.sub.p] = 0.151, p < 0.01; [D.sub.np] = 0.193, p < 0.01), and
DC Cert [greater than or equal to] 8 ([D.sub.p] = 0.271, p < 0.01;
[D.sub.np] = 0.281, p < 0.01) models.
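A two-sample Smirnov comparison like those reported can be reproduced on synthetic data. The two samples below are hypothetical logistic draws that share a location but differ in scale, mimicking distributions whose means agree while their shapes do not:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# Hypothetical WTP samples sharing a location but differing in scale.
sample_a = rng.logistic(loc=4.0, scale=1.0, size=2000)
sample_b = rng.logistic(loc=4.0, scale=3.0, size=2000)

# Two-sample Smirnov test: D is the maximum vertical distance between
# the two empirical CDFs.
stat, p_value = ks_2samp(sample_a, sample_b)
print(stat > 0.10, p_value < 0.01)  # large D; equality rejected
```

Matching means with a rejected distributional test is exactly the pattern the paper reports for the "definitely yes" versus DC Cert >= 9 comparison.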
To further demonstrate the difference in underlying WTP
distributions, the top portion of Figure 1 shows the positive domain of
the estimated parametric distributions for the different DC certainty
levels. Figure 2 shows the estimated distributions for the different
multiple-bounded models. As the certainty level increases, the DC
response functions shift downward dramatically, including the Pr(yes)
at $0. In general, as the DC certainty
level increases, the "constant" of the model decreases, while
the "slope" is largely unchanged. In contrast, it appears that
as the certainty level increases within the multiple-bounded format, the
response function shifts inward and becomes much steeper. The downward
effect on the Pr(yes) at $0 is not as notable, with even the
"definitely yes" model crossing the axis above the 50th
percentile. In general, as the MBDC certainty level increases, the
change in the "constant" is ambiguous, while the
"slope" consistently increases. Thus, although both methods
seek to measure a certainty-corrected value, it is clear that the
response functions they elicit are fundamentally different, as the DC
correction affects primarily the "constant," and the MBDC
correction impacts the "slope."
The equality of certainty corrections with each other and with
actual participation at $6 appears to be merely coincidental. This point
is demonstrated in Figure 3, which overlays the multiple-bounded
"probably yes" model with the DC Cert [greater than or equal
to]7 model. As depicted, the percentage of "yes" responses is
much lower for the DC Cert [greater than or equal to] 7 model at low bid
amounts than the multiple-bounded "probably yes" model. The
reverse is true for high dollar amounts. The two functions cross at
around $5.23, and the difference between the two distributions is small
only for a very limited range of bids, which includes $6.
5. Concluding Remarks
Two methods for calibrating discrete choice contingent valuation
responses--the dichotomous choice with follow-up certainty question
method of Champ et al. (1997) and the multiple-bounded method of Welsh
and Poe (1998)--are evaluated using data from a field validity
comparison of hypothetical and actual participation decisions in a green
electricity pricing program. Treating MBDC "probably yes"
responses and DC responses with an associated certainty level of 6 and
higher, 7 and higher, or 8 and higher as "yes" responses
leads to hypothetical program participation rates that are not
statistically different from actual participation rates. As such, our
findings coincide with those of other researchers who find that
hypothetical responses tend to overstate WTP and that appropriate
certainty corrections correspond with a moderate to high rate of
certainty.
Contrary to Welsh and Poe, our MBDC "not sure" model does
not coincide with the DC model. However, we do find that DC responses
reflect low levels of certainty if we take the uncertainty expressed in
MBDC responses as truth. Hence, while the specific statistical
correspondence observed in Welsh and Poe does not apply here, the basic
result that DC responses correspond with relatively low levels of
payment certainty is replicated.
Further exploration of the various discrete choice models reveals
that even though some MBDC models and DC models with certainty
corrections are not statistically different in terms of their program
participation rate predictions and mean WTP estimates, the underlying
WTP distributions are significantly different. This suggests that the
underlying behavioral models are fundamentally distinct and that the two
correction methods do not coincide. Because regulatory restrictions
prevented the collection of actual program sign-ups at multiple prices,
we are unable to examine how actual contributions vary across prices in
this research. Based on our results, however, it appears that such
comparisons offer a critical area of future research.
Of obvious interest is which of these methods should be used in
future studies. On the basis of this single study, it would be premature
for us to provide a definitive answer. We instead conclude by
highlighting some theoretical and empirical trade-offs that may be
important when considering alternative correction methods.
Given that the DC with follow-up certainty question and the MBDC
methods suggest different implied WTP distributions, one way of
comparing these methods is to look at how well the implied distributions
fit with a priori theoretical expectations. In this case, it seems
reasonable to expect that no one would have a negative WTP for the Green
Choice program and that the predicted probability of a "yes"
response at $0 would be quite high. The DC model with Certainty [greater
than or equal to] 7 produces an estimated probability(yes) of about 0.5
at a price of $0, while the MBDC "probably yes" model produces
a probability(yes) of about 0.7. On this basis, we might judge the
revealed demand of the MBDC "probably yes" model as having
greater consistency with theoretical expectations. At the same time, the
DC format, which offers a single "yes/no" decision
opportunity, is most consistent with the theoretical concept of
incentive compatibility (Carson, Groves, and Machina 1999). As such,
there is no clear preference between methods on theoretical grounds.
In a similar vein, practical and empirical considerations involve
trade-offs across methods. The DC format is less demanding on
respondents, who are familiar with take-it-or-leave-it decision making
in standard market transactions. However, the MBDC format allows the
researcher to observe information on several points rather than a single
point of the respondent's WTP distribution. This increases the
statistical efficiency of WTP estimates. The opportunity to elicit each
respondent's WTP for several dollar values also decreases the
importance of optimal bid design, which is a widely discussed issue in
DC methodology (for a definitive review of these issues, see Hanemann
and Kanninen 1999). Whether the dollar values presented to respondents
in a MBDC format influence their answers to the individual amounts
remains an open empirical question. Whereas Roach, Boyle, and Welsh
(2002) and Vossler et al. (2002) provide evidence against such bid
design effects by comparing groups of survey respondents receiving
different dollar values, Alberini, Boyle, and Welsh (in press) find that the
response distributions and WTP estimates are quite sensitive to whether
dollar values are presented in ascending versus descending order.
Appendix A
Actual Sample (Phone):
You may need a moment to consider the next couple of questions.
Given your household income and expenses, I'd like you to think
about whether or not you would be interested in the Green Choice
program. If you decide to sign up, we will send your name to Niagara
Mohawk and get you enrolled in the program. All your other answers to
this survey will remain confidential.
Does your household want to sign up at a cost of $6 per month?
1. Yes
2. No.
Hypothetical Dichotomous Choice with Follow-Up Certainty Question
(Mail Sample, $6)
Given your household's income and other expenses, we would
like you to think about whether or not you would be interested in
joining the Green Choice program.
10. Would your household sign up for the program if it cost you $6
per month? (Please circle ONE response)
1 Yes
2 No ---------> Skip to Question 12 on the next page.
11. So you think that you would sign up. We would like to know how
sure you are of that. On a scale from '1' to '10',
where '1' is 'Very Uncertain' and '10'
'Very Certain', how certain are you that you would sign up and
pay the extra $6 a month if the program were actually offered? (Please
circle ONE response)
Very Very
Uncertain Certain
1 2 3 4 5 6 7 8 9 10
Hypothetical Multiple Bounded Discrete Choice Question (Mail Sample)
Given your household's income and other expenses, we would
like you to think about whether or not you would be interested in
joining the Green Choice program.
10. Would you join the Green Choice program if it would cost you
these amounts each month?
(Please circle ONE letter for EACH dollar amount to show if you would
join)
Cost to You Definitely Probably Not Probably Definitely
per Month No No Sure Yes Yes
10 cents A B C D E
50 cents A B C D E
$1 A B C D E
$1.50 A B C D E
$2 A B C D E
$3 A B C D E
$4 A B C D E
$6 A B C D E
$9 A B C D E
$12 A B C D E
$20 A B C D E
$45 A B C D E
$95 A B C D E
Appendix B
Distribution of Survey Responses
Actual Phone Responses
Price % Yes
$6 20.42 (29/142)
Discrete Choice Responses, with Certainty Corrections (% Yes)

Price  Uncorrected     Cert ≥ 5        Cert ≥ 6        Cert ≥ 7        Cert ≥ 8        Cert ≥ 9        Cert = 10
$0.50  65.04 (80/123)  62.60 (77/123)  60.16 (74/123)  54.47 (67/123)  48.78 (60/123)  38.21 (47/123)  28.46 (35/123)
$1     62.04 (67/108)  59.26 (64/108)  50.00 (54/108)  47.22 (51/108)  37.04 (40/108)  27.78 (30/108)  21.30 (23/108)
$2     47.90 (57/119)  45.38 (54/119)  41.18 (49/119)  37.82 (45/119)  33.61 (40/119)  23.53 (28/119)  16.81 (20/119)
$4     21.93 (25/114)  19.30 (22/114)  17.54 (20/114)  16.67 (19/114)  13.16 (15/114)  10.53 (12/114)  6.14 (7/114)
$6     38.53 (42/109)  33.94 (37/109)  25.69 (28/109)  22.02 (24/109)  19.27 (21/109)  10.09 (11/109)  7.34 (8/109)
$9     20.39 (21/103)  18.45 (19/103)  16.50 (17/103)  13.59 (14/103)  9.71 (10/103)   5.83 (6/103)    2.91 (3/103)
$12    13.68 (16/117)  11.97 (14/117)  11.11 (13/117)  11.11 (13/117)  9.40 (11/117)   4.27 (5/117)    2.56 (3/117)
Multiple-Bounded Discrete Choice Responses
Price % Not Sure % Probably Yes % Definitely Yes
$0.10 80.66 (196/243) 76.54 (186/243) 60.91 (148/243)
$0.50 80.18 (182/227) 74.45 (169/227) 54.19 (123/227)
$1 74.34 (168/226) 65.93 (149/226) 47.35 (107/226)
$1.50 66.22 (147/222) 52.70 (117/222) 35.14 (78/222)
$2 59.91 (133/222) 45.95 (102/222) 28.83 (64/222)
$3 48.64 (107/220) 33.64 (74/220) 18.18 (40/220)
$4 44.14 (98/222) 27.03 (60/222) 15.77 (35/222)
$6 33.79 (74/219) 17.81 (39/219) 8.22 (18/219)
$9 21.56 (47/218) 11.93 (26/218) 4.59 (10/218)
$12 14.22 (31/218) 5.96 (13/218) 2.29 (5/218)
$20 7.37 (16/217) 1.84 (4/217) 0.92 (2/217)
$45 3.67 (8/218) 0.00 (0/218) 0.00 (0/218)
$95 2.29 (5/218) 0.00 (0/218) 0.00 (0/218)
[FIGURE 1 OMITTED]
[FIGURE 2 OMITTED]
[FIGURE 3 OMITTED]
Table 1
Comparisons of Respondent Characteristics across Samples

Variable                       Chi-Squared (d.f.)  N     Actual Mean (n)    MBDC Mean (n)      DC Mean (n)
Age                            31.326 (a) (22)     1177  55.66 (135)        51.58 (252)        52.52 (790)
Gender                         4.067 (2)           1209  44.37% male (142)  52.31% male (260)  53.53% male (807)
Income                         14.980 (b) (10)     1107  $41,849 (119)      $44,071 (240)      $41,188 (748)
College degree                 3.439 (2)           1190  45.00% (140)       35.55% (256)       38.41% (256)
Give to environmental groups   1.213 (2)           1203  19.15% (141)       19.62% (260)       22.19% (802)
(a) Age is a continuous variable but is converted to the following
categories: ≤30, 31-35, 36-40, 41-45, ..., 76-80, and above 80. The upper
and lower age categories are wider so that there are enough phone survey
responses (≥5) in them.
(b) In the survey, income categories are as follows: under $15,000,
$15,000 to $30,000, $30,000 to $50,000, $50,000 to $75,000, $75,000 to
$100,000, $100,000 to $150,000, $150,000 to $250,000, and $250,000 or
over. The highest three categories are pooled for the chi-squared test,
as there are very few phone survey responses in them.
* and ** correspond to 5% and 1% levels of significance, respectively.
In this case, none of the chi-squared values are significant at these
levels.
Table 2
Dichotomous Choice with Certainty Corrections

Model                  Uncorrected        Cert ≥ 5           Cert ≥ 6           Cert ≥ 7           Cert ≥ 8           Cert ≥ 9           Cert = 10
Parametric estimation (logit)
Constant (α)           0.455 (0.117)**    0.356 (0.117)**    0.147 (0.118)      -0.014 (0.120)     -0.272 (0.124)*    -0.621 (0.137)**   -0.989 (0.154)**
Slope (β)              -0.207 (0.022)**   -0.215 (0.023)**   -0.213 (0.024)**   -0.209 (0.025)**   -0.208 (0.027)**   -0.253 (0.035)**   -0.277 (0.044)**
Wald statistic         85**               85**               76**               68**               58**               51**               39**
N                      793                793                793                793                793                793                793
Pr(yes) at $6          0.313              0.282              0.244              0.220              0.180              0.106              0.066
[95% CI]               [0.276, 0.352]     [0.246, 0.321]     [0.209, 0.281]     [0.187, 0.256]     [0.149, 0.215]     [0.081, 0.137]     [0.046, 0.093]
Mean WTP               4.57               4.13               3.61               3.28               2.73               1.70               1.14
[95% CI]               [4.01, 5.35]       [3.61, 4.84]       [3.13, 4.28]       [2.82, 3.94]       [2.31, 3.34]       [1.42, 2.13]       [0.92, 1.478]
Nonparametric estimation (Kristrom)
Pr(yes) at $6          0.300              0.265              0.215              0.193              0.161              0.101              0.067
[95% CI]               [0.240, 0.359]     [0.207, 0.321]     [0.161, 0.268]     [0.141, 0.243]     [0.113, 0.208]     [0.045, 0.156]     [0.035, 0.099]
Mean WTP               4.47               4.09               3.62               3.33               2.81               1.87               1.34
[95% CI]               [3.99, 4.95]       [3.63, 4.54]       [3.17, 4.05]       [2.90, 3.75]       [2.41, 3.20]       [1.54, 2.20]       [1.09, 1.57]

Standard errors are in parentheses.
* and ** correspond to 5% and 1% significance levels, respectively.
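The Pr(yes) and mean WTP rows of the parametric panels follow from the logit parameters in the standard way: Pr(yes) at price t is 1/[1 + exp(-(α + βt))], and mean WTP truncated at zero is ln(1 + e^α)/|β| (Hanemann 1984). A short check against the uncorrected column of Table 2:

```python
import math

# Logit parameters from the uncorrected column of Table 2
alpha, beta = 0.455, -0.207

def pr_yes(t):
    """Pr(yes) at price t under the logit model."""
    return 1.0 / (1.0 + math.exp(-(alpha + beta * t)))

# Mean WTP truncated at zero (Hanemann 1984): ln(1 + e^alpha) / |beta|
mean_wtp = math.log(1.0 + math.exp(alpha)) / abs(beta)

print(round(pr_yes(6.0), 3))  # 0.313, the Table 2 entry
print(round(mean_wtp, 2))     # 4.57, the Table 2 entry
```

The same two formulas reproduce the other certainty-corrected columns from their respective α and β estimates.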
Table 3
Multiple-Bounded Discrete Choice Models

Model                  Definitely Yes     Probably Yes       Not Sure
Parametric estimation (logit)
Constant (α)           0.258 (0.117)*     0.866 (0.120)**    0.745 (0.113)**
Slope (β)              -0.471 (0.036)**   -0.377 (0.026)**   -0.159 (0.011)**
Wald statistic         166**              213**              202**
N                      260                260                260
Pr(yes) at $6          0.071              0.198              0.448
[95% CI]               [0.049, 0.104]     [0.156, 0.248]     [0.398, 0.500]
Mean WTP               1.76               3.23               7.14
[95% CI]               [1.49, 2.11]       [2.80, 3.73]       [6.16, 8.31]
Nonparametric estimation (Kristrom)
Pr(yes) at $6          0.082              0.178              0.338
[95% CI]               [0.046, 0.118]     [0.128, 0.227]     [0.275, 0.399]
Mean WTP               2.00               3.46               8.35
[95% CI]               [1.79, 2.20]       [3.17, 3.74]       [7.10, 9.58]

Standard errors are in parentheses.
* and ** correspond to 5% and 1% significance levels, respectively.
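The nonparametric (Kristrom) rows can be traced in the same spirit. The raw dichotomous choice shares in Appendix B are not monotone in price (the $6 cell exceeds the $4 cell), so adjacent cells must be pooled before the survivor curve is interpolated. The pooling sketch below is ours, not the authors' code; it shows only that pooling the $4 and $6 cells yields the 0.300 reported as the uncorrected nonparametric Pr(yes) at $6 in Table 2:

```python
# Uncorrected DC responses from Appendix B: prices, "yes" counts, sample sizes
prices = [0.5, 1, 2, 4, 6, 9, 12]
yes    = [80, 67, 57, 25, 42, 21, 16]
n      = [123, 108, 119, 114, 109, 103, 117]

def pava_decreasing(yes, n):
    """Pool adjacent violators so the 'yes' share is non-increasing in price."""
    blocks = []                       # each block: [sum yes, sum n, # prices]
    for y, m in zip(yes, n):
        blocks.append([y, m, 1])
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]):
            y2, m2, c2 = blocks.pop()
            blocks[-1][0] += y2
            blocks[-1][1] += m2
            blocks[-1][2] += c2
    out = []
    for y, m, c in blocks:
        out.extend([y / m] * c)       # pooled share applies throughout the block
    return out

shares = pava_decreasing(yes, n)
# The pooled $4/$6 cell is (25 + 42)/(114 + 109) = 67/223, i.e. 0.300
print(round(shares[prices.index(6)], 3))
```

Mean WTP then follows from integrating the linearly interpolated survivor curve (Kristrom 1990); the anchoring at $0 and the upper truncation point drive that integral and are not reproduced here.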
Received November 2001; accepted August 2002.
(1.) Age is statistically different across samples at the 10%
level. However, when we compare just the MBDC and DC samples, no
characteristic is different at the 10% level. Most of the focus in this
paper is on analyzing survey responses from these two groups.
(2.) We estimate the variance of WTP (and, subsequently, confidence
intervals) based on Haab and McConnell's (1997, p. 259) derivation of
the variance for the Turnbull estimator. That is, we treat the
proportion of "yes" responses to each dollar amount as random
variables. This is contrary to the approach of Boman, Bostedt, and
Kristrom (1999), which treats the dollar amounts themselves as random
variables. Formally, using Haab and McConnell's notation, the variance
is given by
\[
\sum_{j=1}^{M+1} [0.5c_{j-1} + 0.5c_j]^2 \left[ V(F_j) + V(F_{j-1}) \right]
- 2 \sum_{j=1}^{M} [0.5c_{j-1} + 0.5c_j][0.5c_j + 0.5c_{j+1}]\, V(F_j),
\]
where c_j denotes the dollar amount, F_j denotes the cumulative density
function at dollar amount j, and V(·) denotes a variance.
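The variance expression in footnote 2 transcribes directly into code. In the sketch below, the CDF grid, the endpoint choices ($0 and an upper limit of $20 where F = 1), and the binomial treatment V(F_j) = F_j(1 - F_j)/N_j are our assumptions for illustration, not details confirmed by the paper:

```python
import numpy as np

# Illustrative CDF grid: F = 1 - (monotonized "yes" share) at each bid, with
# assumed fixed endpoints at $0 (F = 0) and $20 (F = 1).
c = np.array([0.0, 0.5, 1, 2, 4, 6, 9, 12, 20])         # c_0 .. c_{M+1}
F = np.array([0.0, 0.350, 0.380, 0.521, 0.700, 0.700, 0.796, 0.863, 1.0])
N = np.array([1, 123, 108, 119, 114, 109, 103, 117, 1])  # respondents per bid

# Binomial variance of each estimated CDF point; zero at the fixed endpoints
# because F(1 - F) vanishes there.
VF = F * (1 - F) / N

m = 0.5 * c[:-1] + 0.5 * c[1:]        # interval midpoints m_j, j = 1 .. M+1
# sum_j m_j^2 [V(F_j) + V(F_{j-1})]  -  2 sum_j m_j m_{j+1} V(F_j)
var = np.sum(m**2 * (VF[1:] + VF[:-1])) - 2.0 * np.sum(m[:-1] * m[1:] * VF[1:-1])
print(np.sqrt(var))  # standard error of the nonparametric mean WTP estimate
```

With these assumed inputs the implied standard error is on the order of a quarter of a dollar, which is at least consistent in magnitude with the nonparametric confidence intervals in Table 2.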
References
Alberini, Anna, Kevin J. Boyle, and Michael P. Welsh. 2003.
Analysis of contingent valuation data with bids and response options
allowing respondents to express uncertainty. Journal of Environmental
Economics and Management. In press.
Arrow, Kenneth J. 1999. Foreword. In Valuing environmental
preferences: Theory and practice of the contingent valuation method in
the US, EU and developing countries, edited by Ian J. Bateman and
Kenneth G. Willis. Oxford: Oxford University Press, pp. v-vii.
Arrow, Kenneth, Robert Solow, Edward Leamer, Paul Portney, Roy
Radner, and Howard Schuman. 1993. Report of the NOAA Panel on Contingent
Valuation. Federal Register 58(10):4602-14.
Balistreri, Edward, Gary McClelland, Gregory Poe, and William
Schulze. 2001. Can hypothetical questions reveal true values? A
laboratory comparison of dichotomous choice and open-ended contingent
values with auction values. Environmental and Resource Economics
18:275-92.
Blumenschein, Karen, Magnus Johannesson, Glenn C. Blomquist, Bengt
Liljas, and Richard M. O'Conor. 1998. Experimental results on
expressed certainty and hypothetical bias in contingent valuation.
Southern Economic Journal 65:169-77.
Boman, Mattias, Goran Bostedt, and Bengt Kristrom. 1999. Obtaining
welfare bounds in discrete-response valuation studies: A non-parametric
approach. Land Economics 75(2):284-94.
Brown, Thomas C., Patricia A. Champ, Richard C. Bishop, and Daniel
W. McCollum. 1996. Which response format reveals the truth about
donations to a public good? Land Economics 72(2):152-66.
Cameron, Trudy A., and Daniel D. Huppert. 1989. OLS versus ML
estimation of non-market resource values with payment card interval
data. Journal of Environmental Economics and Management 17:230-46.
Carson, Richard T., Theodore Groves, and Mark J. Machina. 1999.
Incentive and informational properties of preference questions. Plenary Address, European Association of Environmental and Resource Economists,
Oslo, Norway, June.
Champ, Patricia A., and Richard C. Bishop. 2001. Donation payment
mechanisms and contingent valuation: An empirical study of hypothetical
bias. Environmental and Resource Economics 19(4):383-402.
Champ, Patricia A., Richard C. Bishop, Thomas C. Brown, and Daniel
W. McCollum. 1997. Using donation mechanisms to value non-use benefits
from public goods. Journal of Environmental Economics and Management
33(2):151-63.
Conover, W. J. 1980. Practical nonparametric statistics. 2nd
edition. New York: John Wiley & Sons.
Cropper, Maureen L., and Wallace E. Oates. 1992. Environmental
economics: A survey. Journal of Economic Literature 30: 675-740.
Cummings, Ronald G., Steven Elliott, Glenn W. Harrison, and James
Murphy. 1997. Are hypothetical referenda incentive compatible? Journal
of Political Economy 105:609-21.
Cummings, Ronald G., Glenn W. Harrison, and E. Elisabet Rutstrom.
1995. Homegrown values and hypothetical surveys: Do dichotomous choice
questions elicit real economic commitments? American Economic Review
85:260-6.
Deacon, Robert T., David S. Brookshire, Anthony F. Fisher, Allen V.
Kneese, Charles D. Kolstad, David Scroggin, V. Kerry Smith, Michael
Ward, and James Wilen. 1998. Research trends and opportunities in
environmental and natural resource economics. Environmental and Resource
Economics 11(3-4):383-97.
Dillman, Donald A. 1978. Mail and telephone surveys: The total
design method. New York: John Wiley & Sons.
Dubourg, W. Richard, Michael W. Jones-Lee, and Graham Loomes. 1994.
Imprecise preferences and the WTP-WTA disparity. Journal of Risk and
Uncertainty 9:115-33.
Ethier, Robert G., Gregory L. Poe, William D. Schulze, and Jeremy
E. Clark. 2000. A comparison of hypothetical phone and mail contingent
valuation responses for green pricing electricity programs. Land
Economics 76(1):54-67.
Haab, Timothy C., and Kenneth E. McConnell. 1997. Referendum models
and negative willingness to pay: Alternative solutions. Journal of
Environmental Economics and Management 32(2):251-70.
Hanemann, W. Michael. 1984. Welfare evaluation in contingent
valuation experiments with discrete responses. American Journal of
Agricultural Economics 66:332-41.
Hanemann, W. Michael. 1989. Welfare evaluations in contingent
valuation experiments with discrete responses: Reply. American Journal
of Agricultural Economics 71:1057-61.
Hanemann, W. Michael. 1994. Valuing the environment through
contingent valuation. Journal of Economic Perspectives 8(4):19-43.
Hanemann, Michael, and Barbara Kanninen. 1999. Statistical analysis
of discrete-response CV data. In Valuing environmental preferences:
Theory and practice of the contingent valuation method in the US, EU
and developing countries, edited by Ian J. Bateman and Kenneth G.
Willis. Oxford: Oxford University Press, pp. 302-441.
Harpman, David A., and Michael P. Welsh. 1999. Measuring goodness
of fit for the double-bounded logit model: Comment. American Journal of
Agricultural Economics 81:235-7.
Harrison, Glenn. 2002. Experimental economics and contingent
valuation. Paper presented at the 2nd World Congress of Environmental
and Resource Economists. Monterey, CA, June.
Holt, Ed A. 1997. Green pricing resource guide. Gardiner, ME: The
Regulatory Assistance Project.
Johannesson, Magnus, Glenn C. Blomquist, Karen Blumenschein,
Per-Olav Johansson, Bengt Liljas, and Richard M. O'Conor. 1999.
Calibrating hypothetical willingness to pay responses. Journal of Risk
and Uncertainty 8:21-32.
Johannesson, Magnus, Bengt Liljas, and Per-Olav Johansson. 1998. An
experimental comparison of dichotomous choice contingent valuation
questions and real purchase decisions. Applied Economics 30:643-7.
Krinsky, Itzhak, and A. Leslie Robb. 1986. On approximating the
statistical properties of elasticities. Review of Economics and
Statistics 68:715-9.
Kristrom, Bengt. 1990. A non-parametric approach to the estimation
of welfare measures in discrete response valuation studies. Land
Economics 66(2):135-9.
Marks, Melanie, and Rachel Croson. 1998. Alternative rebate rules
in the provision of a threshold public good: An experimental
investigation. Journal of Public Economics 67:195-220.
Mitchell, Robert C., and Richard T. Carson. 1989. Using surveys to
value public goods: The contingent valuation method. Washington, DC:
Resources for the Future and The Johns Hopkins University Press.
Opaluch, James J., and Kathleen Segerson. 1989. Rational roots of
"irrational" behavior: New theories of economic
decision-making. Northeastern Journal of Agricultural and Resource
Economics 18(2):81-95.
Poe, Gregory L., Eric K. Severance-Lossin, and Michael P. Welsh.
1994. Measuring the difference (X-Y) of simulated distributions: A
convolutions approach. American Journal of Agricultural Economics
76:904-15.
Ready, Richard C., Stale Navrud, and W. Richard Dubourg. 2001. How
do respondents with uncertain willingness to pay answer contingent
valuation questions? Land Economics 77(3):315-26.
Ready, Richard C., John C. Whitehead, and Glenn C. Blomquist. 1995.
Contingent valuation when respondents are ambivalent. Journal of
Environmental Economics and Management 29(2):181-96.
Roach, Brian, Kevin J. Boyle, and Michael Welsh. 2002. Testing bid
design effects in multiple-bounded contingent-valuation questions. Land
Economics 78(1):121-31.
Rondeau, Daniel, William D. Schulze, and Gregory L. Poe. 1999.
Voluntary revelation of the demand for public goods using a provision
point mechanism. Journal of Public Economics 72(3):455-70.
Rose, Steven K., Jeremy Clark, Gregory L. Poe, Daniel Rondeau, and
William D. Schulze. 2002. The private provision of public goods: Tests
of a provision point mechanism for funding green power programs.
Resource and Energy Economics 24:131-55.
Schulze, William D. 1994. Green pricing: Solutions for the
potential free rider problem. Unpublished paper, prepared for Niagara
Mohawk Power Corporation, Cornell University.
Vossler, Christian A., Gregory L. Poe, Robert G. Ethier, and
Michael P. Welsh. 2002. Assessing position bias in multiple bounded
discrete choice valuation questions. Unpublished paper, Cornell
University.
Welsh, Michael P., and Gregory L. Poe. 1998. Elicitation effects in
contingent valuation: Comparisons to a multiple bounded discrete choice
approach. Journal of Environmental Economics and Management 36:170-85.
Wiser, R., M. Bolinger, and E. Holt. 2000. Customer choice and
green power marketing: A critical review and analysis of experience to
date. Proceedings ACEEE 2000 Summer Study on Energy Efficiency in
Buildings, Pacific Grove, CA.
Wood, Lisa L., William H. Desvousges, Anne E. Kenyon, Mohan V.
Bala, F. Reed Johnson, R. Iachan, and Em E. Fries. 1994. Evaluating the
market for "green products": WTP results and market
penetration forecasts. Working Paper No. 4, Center for Economics
Research, Research Triangle Institute, NC.
Christian A. Vossler *, Robert G. Ethier +, Gregory L. Poe ++ and
Michael P. Welsh (ss)
* Department of Applied Economics and Management, Cornell
University, 157 Warren Hall, Ithaca, NY 14853, USA; E-mail
cav22@cornell.edu.
+ ISO-New England, Holyoke, MA 01040, USA; E-mail rge4@cornell.edu.
++ Department of Applied Economics and Management, Cornell
University, 454 Warren Hall, Ithaca, NY 14853, USA; E-mail
GLP2@cornell.edu; corresponding author.
(ss) Christensen Associates, 4610 University Avenue, Madison, WI
53705, USA; E-mail MichaelW@Irca.com.
We are grateful to William Schulze, Jeremy Clark, Daniel Rondeau,
Steve Rose, and Eleanor Smith for their input into various components of
this research. We also wish to thank Theresa Flaim, Janet Dougherty,
Mike Kelleher, Pam Ingersol, and Mana Ucchino at Niagara Mohawk Power
Corporation for facilitating this research and Pam Rathbun and
colleagues at Hagler Bailly, Inc., Madison, Wisconsin, for their survey
expertise. Any errors, however, remain our responsibility. Funding for
this project was provided by NSF/EPA Grant R 824688 and USDA Regional
Project W-133, Cornell University.