Abstract: Objectives. We examined potential nonresponse bias in a large-scale, population-based, random-digit-dialed telephone survey in California and its association with the response rate. Methods. We used California Health Interview Survey (CHIS) data and US Census data and linked the two data sets at the census tract level. We compared a broad range of neighborhood characteristics of respondents and nonrespondents to CHIS, and we projected individual-level nonresponse bias using the neighborhood characteristics. Results. We found little to no substantial difference in neighborhood characteristics between respondents and nonrespondents. The response propensity of the CHIS sample was similarly distributed across these characteristics, and the projected nonresponse bias appeared very small. Conclusions. The response rate in CHIS did not result in significant nonresponse bias and did not substantially affect data representativeness; it is therefore not valid to focus on response rates alone in judging the quality of survey data.

Declining survey response rates over the last decade have raised concerns regarding public health research that uses population-based survey data. Response rates are commonly considered the most important indicator of the representativeness of a survey sample and of overall data quality, and low response rates are viewed as evidence that a sample suffers from nonresponse bias.1,2 Recent survey research literature, however, suggests that response rates are a poor measure not only of nonresponse bias but also of data quality.3–7 The decline in survey response rates over the past several decades has led to a number of rigorous studies and innovative methods that explore the relationship between response rates and bias. A meta-analysis that examined response rates and nonresponse bias in 59 surveys found no clear association between nonresponse rates and nonresponse bias.
8 Some surveys with response rates under 20% had a level of nonresponse bias similar to that of surveys with response rates over 70%. This is because nonresponse bias is, in the deterministic view, a function of both the response rate and the difference between respondents and nonrespondents on a variable of interest,9 and, in the stochastic view, a function of the covariance between response propensity and a variable of interest.10 Response rates alone therefore do not determine the nonresponse bias of survey estimates. Although it may be convenient to use the response rate as a single indicator of a survey's representativeness and data quality, nonresponse bias is a property of a particular variable, not of a survey as a whole.

Nonetheless, declining survey response rates increase the potential for nonresponse bias and have raised questions about the representativeness of inferences made from probability sample surveys. Inferences from surveys are based on randomization theory and assume a 100% response from the sample. Although the gap between theory-based assumptions and the reality of survey administration has always been a concern, the growing deviation from the full-response assumption heightens it.

Nonresponse is multidimensional, not a unitary outcome, and is roughly divided into 3 components: noncontact, refusal, and other nonresponse.9 Most nonresponse falls into the first 2 components. A study by Curtin et al. found that refusal rates in a telephone survey remained constant between 1979 and 2003, although contact rates decreased dramatically.11 Another study, by Tuckel and O'Neill, found the same pattern.12 Arguably, different dynamics lead to noncontact and refusal.13,14 Noncontact (e.g., unanswered phone calls in random-digit-dialed surveys) is related to accessibility. Call screening devices, phone usage, and at-home patterns affect accessibility, and calling strategy (e.g., the number and timing of call attempts) directly influences contact rates.
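The two formulations of nonresponse bias cited above (deterministic, ref. 9; stochastic, ref. 10) can be made concrete in a short sketch. All numbers below are invented for illustration; none of this is CHIS data.

```python
# Illustrative sketch of the two standard formulations of nonresponse bias:
# deterministic (ref. 9) and stochastic (ref. 10). Values are made up.
from statistics import mean

def deterministic_bias(resp_values, nonresp_values):
    """Bias of the respondent mean: (n_nr / n) * (ybar_r - ybar_nr)."""
    n = len(resp_values) + len(nonresp_values)
    return (len(nonresp_values) / n) * (mean(resp_values) - mean(nonresp_values))

def stochastic_bias(propensities, values):
    """Bias of the respondent mean: cov(p, y) / pbar, with p the response propensity."""
    p_bar, y_bar = mean(propensities), mean(values)
    cov = mean((p - p_bar) * (y - y_bar) for p, y in zip(propensities, values))
    return cov / p_bar

# A 20% response rate where respondents resemble nonrespondents yields
# essentially no bias ...
low_rr_bias = deterministic_bias([0.50, 0.52], [0.51] * 8)
# ... while a 70% response rate with sharply different groups yields more.
high_rr_bias = deterministic_bias([0.70] * 7, [0.30] * 3)
print(low_rr_bias, high_rr_bias)
```

In both formulations the response rate only scales the bias: when respondents and nonrespondents do not differ on a variable (or propensity and outcome do not covary), bias is near zero regardless of the response rate, which is the pattern the meta-analysis above reports.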
7,12 Refusal occurs only after contact is made. The decision to participate or not indicates the respondent's amenability to the survey and is also influenced by other factors. Noncontact and refusal may produce different types of bias, and these biases may offset one another.7,15 For example, measures of volunteerism may be biased through noncontact because those who spend much time volunteering may be hard to reach in random-digit-dialed surveys. On the other hand, those who refuse to participate in the same survey may hold opinions and exhibit behaviors related to volunteerism that differ dramatically from those of persons who are never contacted. Because aggregating noncontact and refusal may obscure our understanding of nonresponse bias, it is important to understand detailed response behaviors along with overall nonresponse bias.

The decline in response rates has been more rapid for random-digit-dialed telephone surveys than for other survey types. The difficulty inherent in examining nonresponse bias arises from the absence of data on nonrespondents. Unlike face-to-face surveys, in which interviewers directly observe the sampled individual and can gather contextual information regardless of response status, telephone surveys yield little such information because interviewers never visit the individual and any interviewer–respondent interaction remains oral. Nonrespondents to a telephone survey can be followed up to study its nonresponse bias, but such efforts are resource intensive, and unless 100% participation is achieved, some level of nonresponse still remains. Alternatively, nonresponse can be studied through the geographic identifiers associated with sampled telephone numbers. Phone numbers from random-digit-dialed sampling frames can be readily associated with a limited set of geographic identifiers, such as zip codes.
In addition, most phone numbers can be matched to a postal address and consequently to a census tract and county, which provides a unique opportunity to evaluate patterns of nonresponse as a function of neighborhood characteristics. A few recent nonresponse bias studies have used such contextual data.16–19

We examined potential nonresponse bias in the 2005 CHIS, a large random-digit-dialed telephone survey, by comparing a wide range of census tract–level neighborhood characteristics by response behavior and by examining response rates across neighborhood characteristics. Although these characteristics are not specific to individual cases (households), neighborhood characteristics at the census tract level serve as useful proxy indicators of differences in the population, because census tracts are relatively permanent small geographic divisions of 1500 to 8000 people designed to be homogeneous with respect to sociodemographic characteristics.20 Unlike previous studies that focused on statistical significance, we discuss substantive significance.
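The tract-level linkage described above can be sketched in a few lines: join sampled phone records (with their response status) to tract-level census characteristics and compare respondents with nonrespondents on a neighborhood proxy. All tract IDs, field names, and values below are hypothetical, not actual CHIS or Census records.

```python
# Hedged sketch of linking sampled phone numbers to census-tract data and
# comparing respondents vs. nonrespondents on a tract-level proxy measure.
# Every identifier and number here is invented for illustration.
from statistics import mean

# Each sampled number carries a response flag and its matched tract ID.
sample = [
    {"tract": "06037_1011", "responded": True},
    {"tract": "06037_1011", "responded": False},
    {"tract": "06037_2022", "responded": True},
    {"tract": "06059_3033", "responded": False},
    {"tract": "06059_3033", "responded": True},
]

# Tract-level census characteristics (e.g., percentage below poverty line).
tracts = {
    "06037_1011": {"pct_poverty": 18.2},
    "06037_2022": {"pct_poverty": 9.5},
    "06059_3033": {"pct_poverty": 12.7},
}

def mean_characteristic(records, key):
    """Average a tract characteristic over a set of sampled records."""
    return mean(tracts[r["tract"]][key] for r in records)

respondents = [r for r in sample if r["responded"]]
nonrespondents = [r for r in sample if not r["responded"]]

# A large gap on such proxies would signal potential nonresponse bias;
# a gap near zero is the pattern the study reports for CHIS.
gap = (mean_characteristic(respondents, "pct_poverty")
       - mean_characteristic(nonrespondents, "pct_poverty"))
print(round(gap, 2))
```

Because the census characteristic attaches to the tract rather than the household, the same comparison is available for every sampled number regardless of whether anyone ever answered the phone, which is what makes this design feasible for studying nonrespondents.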