
Article Information

  • Title: Analysis: improving the accuracy of election forecasts
  • Author: Donald P. Green
  • Journal: Campaigns & Elections
  • Year: 2004
  • Issue: May 2004
  • Publisher: Campaigns and Elections

Analysis: improving the accuracy of election forecasts

Donald P. Green

Suppose you want to forecast the outcome of an election. If the election is a week or two away, a survey is typically your best bet.

What sorts of people should you try to survey? If you ask most media or academic pollsters, the proper way to draw a representative sample of the adult population is to use random digit dialing (RDD). By placing calls to randomly generated telephone numbers, so the argument goes, the pollster gives every adult an equal chance of being interviewed.
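
For the curious, here is a minimal sketch (in Python) of one common way RDD numbers are generated: random digits are appended to area-code and exchange prefixes known to serve the target region, so unlisted households are reached as readily as listed ones. The prefixes below are hypothetical, and real RDD designs vary.

```python
import random

# A sketch of one common RDD variant, not any particular pollster's method:
# start from area-code/exchange prefixes that serve the target region, then
# append four random digits so unlisted numbers have the same chance of
# selection as listed ones. These prefixes are hypothetical.
KNOWN_PREFIXES = ["203-555", "203-562", "860-486"]

def random_digit_number(prefixes):
    """Generate one candidate telephone number for an RDD sample."""
    prefix = random.choice(prefixes)
    suffix = f"{random.randrange(10_000):04d}"  # uniform over 0000-9999
    return f"{prefix}-{suffix}"

print([random_digit_number(KNOWN_PREFIXES) for _ in range(5)])
```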

If you ask most political pollsters, they'll say this approach is needlessly expensive. Rather than trolling randomly for registered voters, political pollsters draw their target sample by selecting names from registration lists, an approach known as registration-based sampling (RBS).

Both camps harrumph at each other when they meet at conferences of public opinion researchers, each dismissing the other's polls as unreliable. But what if the two approaches were pitted against one another in a head-to-head comparison? Which type of poll would provide the most accurate forecast?

That's the question we sought to answer with financial support from the Smith Richardson Foundation, a Connecticut-based organization that funds policy-related research. In collaboration with pollsters at The Washington Post, Quinnipiac University in Hamden, Conn., and CBS News, we orchestrated four parallel RDD and RBS polls during the weeks leading up to the November 2002 elections. The Post conducted simultaneous RDD and RBS polls in anticipation of the Maryland governor's race. Quinnipiac did likewise for the gubernatorial races in Pennsylvania and New York. CBS used both types of surveys to forecast gubernatorial and congressional races in South Dakota.

Give the edge to RBS. Not only did RBS surveys perform better in terms of predictive accuracy, they also were substantially less expensive.

A Thumbnail Sketch of RDD and RBS

Before turning to the results, let's review the arguments for and against both polling techniques.

Sampling from a registration list has two main advantages. First, it streamlines the interview process. An RDD survey begins awkwardly. In order to select someone at random from the household, callers ask to speak with the adult who will be having the next birthday. If that person can be coaxed to the phone, the next challenge is to determine whether they are registered and likely to vote. With RBS, one need not spend valuable interview time sifting through random phone numbers in search of registered voters, nor does one need to ask questions about age or other background characteristics that are already recorded in the registration records.

Second, the registration list often contains information, such as previous election turnout, that can be very helpful in predicting who will vote. The RBS samples in our study were weighted to favor people whose past voting behavior suggested a higher likelihood of voting. Again, because interviewing time is expensive, using information from voter registration records is a lot cheaper than quizzing people about their likelihood of voting.
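
As a concrete illustration, here is a minimal sketch of weighting registrants by their vote history. The turnout fields and the weight values are assumptions chosen for illustration, not the weights actually used in these polls.

```python
from dataclasses import dataclass

@dataclass
class RegistrationRecord:
    name: str
    voted_2000: bool   # turnout fields assumed for illustration
    voted_1998: bool

def turnout_weight(rec):
    """Weight registrants by how often they voted in recent elections."""
    past_votes = int(rec.voted_2000) + int(rec.voted_1998)
    return {0: 0.25, 1: 0.60, 2: 1.00}[past_votes]  # illustrative weights

roster = [RegistrationRecord("A", True, True),
          RegistrationRecord("B", True, False),
          RegistrationRecord("C", False, False)]
for rec in roster:
    print(rec.name, turnout_weight(rec))
```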

The disadvantage of RBS is that registered voters sometimes have unknown phone numbers. Those with missing phone numbers typically share some of the same characteristics as nonvoters: they tend to be young, move often or live in cities. If young, mobile or urban voters have distinctive voting patterns, omitting them from an RBS sample may introduce bias.

In our samples, about two-thirds of the registered voters were listed in telephone directories. The Maryland RBS poll indicated that blacks might have been under-represented, inasmuch as it had fewer black respondents than the parallel RDD poll. The Pennsylvania and New York RBS polls also showed some under-representation of city dwellers in comparison with their RDD counterparts.

But demographics don't tell the whole story. It could be that the RDD polls are the ones that are off. The question is which type of poll better predicts the election results.

Comparing results

The table below reports the poll results for each race and compares them to the actual division of the vote. RBS did a better job of forecasting the vote margin in Maryland and Pennsylvania, but RDD had the edge in forecasting the lopsided, three-way New York governor's race and the South Dakota governor's race. RBS correctly predicted the dead heat in the South Dakota Senate race and the decisive Republican victory in the House. All told, with a 4-to-2 record, RBS did somewhat better in terms of predicting election outcomes.

Given the expense of RDD surveys, even a forecasting tie would be as good as a win for RBS. The polling firms that conducted the parallel RDD and RBS surveys estimated that RBS required 40 percent less time because interviewers didn't have to plow through interviews in search of likely voters. Although some of these savings were offset by the cost of purchasing voter files, the total cost of an RBS poll was roughly 25 percent less than an RDD poll of similar size.
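
To see how those numbers fit together, here is an illustrative cost model. Only the 40 percent time saving and the roughly 25 percent net saving come from the estimates above; the voter-file figure is an assumption chosen to make the arithmetic concrete.

```python
# Index RDD's cost to 100 and apply the article's estimates: RBS needs
# about 40% less interviewer time, partly offset by buying the voter file.
# The file cost below is an assumed figure that nets out near 25%.
rdd_total = 100.0
rbs_interviewing = rdd_total * (1 - 0.40)   # 40% less interviewer time
voter_file = 15.0                           # assumed list-purchase cost
rbs_total = rbs_interviewing + voter_file

print(f"net RBS saving: {1 - rbs_total / rdd_total:.0%}")  # -> 25%
```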

The cost advantages of RBS are magnified when pollsters attempt to study specialized populations. In most states, absentee voters constitute a small yet important fraction of the electorate. It is prohibitively expensive to conduct an RDD survey of those voters. For example, if 50 percent of the registered voters in a state take part in federal midterm elections and only 20 percent of those who vote cast absentee ballots, RDD requires about 10 screening interviews with registered voters in order to reach a single absentee voter. In states where it is possible to obtain lists of absentee voters, RBS is the obvious choice.
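
The screening arithmetic in that example is easy to verify:

```python
# Using the figures from the text: half of registered voters turn out,
# and a fifth of those voters cast absentee ballots, so only 10% of
# registered voters reached by RDD are absentee voters.
turnout_rate = 0.50     # share of registered voters who vote
absentee_share = 0.20   # share of voters who vote absentee

absentee_rate = turnout_rate * absentee_share   # 0.10
screens_per_absentee = 1 / absentee_rate
print(f"about {screens_per_absentee:.0f} screening interviews "
      "per absentee voter reached")              # -> about 10
```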

Similar arguments apply to surveys of voters who are minorities or who live in political jurisdictions that do not fall neatly within the boundaries covered by telephone exchanges. RBS seems particularly valuable for journalists or political marketers intent upon studying narrow segments of the electorate. To conduct a survey of 18- to 24-year-old voters using RDD would break the bank, given the number of screening calls necessary to generate a meaningful number of interviews.

Conclusion

Having proven the usefulness of RBS in state elections, we now face the next challenge: the national electorate. Faced with registration lists of varying quality across the country, pollsters may develop a hybrid of RBS and RDD, using registration lists where they are available and RDD where they are not. By and large, the most accurate lists of voters should be found in battleground states, because database vendors scramble to supply these lucrative phone lists to political campaigns. As media and academic pollsters look to make their pre-election polling budgets go further, expect to see them turn to RBS.

For a fuller description and statistical analysis of these polls, see http://www.yale.edu/isps/publications/voter.html

Election Forecasts Versus Actual Results

ELECTION                 ACTUAL VOTE   RBS     RDD    SMALLER ERROR

Maryland (Governor)
D                            48.0%    48.3%   49.8%       RBS
R                            52.0%    51.8%   50.2%
Pennsylvania (Governor)
D                            54.6%    54.7%   60.7%       RBS
R                            45.4%    45.3%   39.3%
New York (Governor)
D                            34.5%    31.2%   33.0%       RDD
R                            50.8%    47.9%   51.1%
I                            14.7%    20.9%   15.9%
South Dakota
D (Governor)                 42.5%    39.0%   42.9%       RDD
R (Governor)                 57.5%    61.0%   57.1%
D (Senate)                   50.1%    50.0%   52.2%       RBS
R (Senate)                   49.9%    50.0%   47.8%
D (House)                    46.0%    46.6%   49.2%       RBS
R (House)                    54.0%    53.4%   50.8%

Source: Green, Donald P., and Alan S. Gerber. 2003. Enough Already with
Random Digit Dialing: Can Registration-Based Sampling Improve the
Accuracy of Election Forecasts? Report submitted to the Smith Richardson
Foundation.
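
For readers who want to check the "smaller error" column, here is a minimal sketch that scores one race by total absolute error across candidates. This is one reasonable metric; the full report may score races differently, for instance by error on the vote margin.

```python
def total_error(actual, poll):
    """Sum of absolute errors across candidates, in percentage points."""
    return sum(abs(actual[c] - poll[c]) for c in actual)

# Pennsylvania governor's race, from the table above.
actual = {"D": 54.6, "R": 45.4}
rbs = {"D": 54.7, "R": 45.3}
rdd = {"D": 60.7, "R": 39.3}

scores = {"RBS": round(total_error(actual, rbs), 1),
          "RDD": round(total_error(actual, rdd), 1)}
print(min(scores, key=scores.get), scores)  # -> RBS {'RBS': 0.2, 'RDD': 12.2}
```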

Donald P. Green is a professor of political science at Yale University, where he has taught since 1989. He directs Yale's Institution for Social and Policy Studies. His research interests include public opinion, campaign finance and voter turnout.

Alan Gerber is a professor of political science at Yale University, where he has taught since receiving his Ph.D. in economics from the Massachusetts Institute of Technology in 1994. He directs Yale's Center for the Study of American Politics. His research interests include elections, campaign finance, voter turnout and representation.

COPYRIGHT 2004 Campaigns & Elections, Inc.
COPYRIGHT 2004 Gale Group
