BUFFALO, N.Y. -- Anticipating what is likely to be one of the
most interesting elections in modern history, University at Buffalo
professor of political science James E. Campbell and Michael S.
Lewis-Beck, professor of political science at the University of
Iowa, have assembled the insights of prominent election forecasters
in a special issue of the International Journal of Forecasting
published this month.
Each of the articles demonstrates the challenges of election
forecasting, according to Campbell, chair of UB's Department of
Political Science, who since 1992 has produced a
trial-heat-and-economy forecast of the U.S. presidential election.
His forecast uses the second-quarter growth rate in the gross
domestic product and results of the trial-heat (preference) poll
released by Gallup near Labor Day to predict what percentage of the
popular vote will be received by the major party candidates.
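A forecast of this kind can be sketched as a simple linear equation in the two inputs. The coefficients below are hypothetical placeholders for illustration only, not Campbell's published estimates; a real model would estimate them by regression on past elections.

```python
# Illustrative sketch of a trial-heat-and-economy style forecast.
# The intercept and slopes are hypothetical, NOT Campbell's actual
# published parameters.
def forecast_two_party_vote(trial_heat_pct, q2_gdp_growth,
                            intercept=25.0, b_poll=0.5, b_gdp=1.0):
    """Predict the in-party share of the two-party popular vote from a
    Labor Day trial-heat poll (%) and second-quarter GDP growth (%)."""
    return intercept + b_poll * trial_heat_pct + b_gdp * q2_gdp_growth

# Example: a candidate polling at 52% with 2.5% annualized Q2 growth
share = forecast_two_party_vote(52.0, 2.5)
```

The structure, not the numbers, is the point: the poll captures where the race stands, and the growth rate captures the economic conditions expected to move late deciders.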
The articles range from descriptions of diverse election
forecasting models, such as those that use political futures
markets and historical analysis, to articles that evaluate the
success of election forecasting in past elections.
Two of the articles address a topic particularly pertinent to
the 2008 presidential election: whether open seat and incumbent
elections should be treated differently by election forecasters.
"One of the biggest misunderstandings about election forecasting
is the idea that accurate forecasts must assume that the campaign
does not matter," Campbell explains. "This is not true.
"First, one of the reasons that forecasts can be accurate is
that they are based on measures of the conditions that influence
campaigns. So campaign effects are, to a significant degree, predictable.
"Second, forecasters know that their forecasts are not perfect.
Forecasts are based on imperfect measures and may not capture all
of the factors affecting a campaign. Some portion of campaign
effects is always unpredictable."
Though some campaign effects are unpredictable, "the extent of
these effects is usually limited," Campbell points out.
In the historic contest between presumptive presidential
nominees Barack Obama and John McCain one thing is certain:
"Forecasting this election will be more difficult than usual," Campbell says.
"First, there isn't an incumbent. Approval ratings and the
economy are likely to provide weaker clues to an election's outcome
when the incumbent is not running. Second, Democrats had a very
divided nomination contest and it is unclear how lasting the
divisions will be.
"Third, many Republicans are not very enthusiastic about McCain
and it is unclear how strong Republican turnout will be."
Of the six different forecast models described in the journal
articles, only two have a forecast at this point. The other four
will have forecasts between late July and Labor Day. The journal
articles are available for download online. Below are brief descriptions:
• In "U.S. Presidential Election Forecasting: An
Introduction" journal co-editors Campbell and Lewis-Beck provide a
brief history of the development of the election forecasting field
and an overview of the articles in this special issue.
• In "Forecasting the Presidential Primary Vote: Viability,
Ideology and Momentum," Wayne P. Steger of DePaul University takes
on the difficult task of improving on forecasting models of
presidential nominations. He focuses on the forecast of the primary
vote in contests where the incumbent president is not a candidate,
comparing models using information from before the Iowa Caucus and
New Hampshire primary to those taking these momentum-inducing
events into account.
• In "It's About Time: Forecasting the 2008 Presidential
Election with the Time-for-Change Model," Alan I. Abramowitz of
Emory University updates his referendum-theory-based "time for a
change" election forecasting model first published in 1988.
Specifically, his model forecasts the two-party division of the
national popular vote for the in-party candidate based on
presidential approval in June, economic growth in the first half of
the election year, and whether the president's party is seeking
more than a second consecutive term in office.
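The three ingredients Abramowitz names map naturally onto a small linear model with a penalty term. The coefficients here are illustrative assumptions, not Abramowitz's published values.

```python
# Hypothetical sketch of a "time for a change" style model; all
# coefficients are placeholders, not Abramowitz's estimates.
def time_for_change_forecast(june_net_approval, h1_gdp_growth,
                             seeking_third_term,
                             intercept=50.0, b_app=0.1, b_gdp=0.8,
                             penalty=4.0):
    """In-party share of the two-party vote from net June approval (%),
    first-half election-year GDP growth (%), and a 'time for a change'
    penalty when the president's party seeks a third consecutive term."""
    vote = intercept + b_app * june_net_approval + b_gdp * h1_gdp_growth
    if seeking_third_term:
        vote -= penalty
    return vote
```

The penalty term is what gives the model its name: holding approval and the economy fixed, a party seeking more than two consecutive terms starts from a handicap.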
• In "The Economy and the Presidential Vote: What the
Leading Indicators Reveal Well in Advance," Robert S. Erikson of
Columbia University and Christopher Wlezien of Temple University
ask which economic measure is preferred for election forecasting
and what the optimal lead time is for issuing a forecast.
• In "Forecasting Presidential Elections: When to Change
the Model?" Michael S. Lewis-Beck of the University of Iowa and
Charles Tien of Hunter College, CUNY ask whether the addition of
variables can genuinely reduce forecasting error, as opposed to
merely boosting statistical fit by chance. They explore the
evolution of their core model, which treats the presidential vote as
a function of GNP growth and presidential popularity, and compare it
to a more complex "jobs" model they have developed over the years.
Lessons Learned from the 2000 Election," Andrew H. Sidman, Maxwell
Mak, and Matthew J. Lebo of Stony Brook University use a Bayesian
Model Averaging approach to the question of whether economic
influences have a muted impact on elections without an incumbent as
a candidate. The Sidman team concludes that discounting economic
influences actually weakens overall forecasting performance.
• In "Evaluating U.S. Presidential Election Forecasts and
Forecasting Equations," UB's Campbell responds to critics of
election forecasting by identifying the theoretical foundations of
forecasting models and offering a reasonable set of benchmarks for
assessing forecast accuracy. Campbell's analyses of his trial-heat-
and-economy forecasting model and of Abramowitz's "time for a
change" model indicate that it remains at least an open question
whether models should be revised to reflect more muted referendum
effects in open-seat or non-incumbent elections.
• In "Campaign Trial Heats as Election Forecasts:
Measurement Error and Bias in 2004 Presidential Campaign Polls,"
Mark Pickup of Oxford University and Richard Johnston of the
University of Pennsylvania provide an assessment of polls as
forecasts. Comparing various sophisticated methods for assessing
overall systematic bias in polling on the 2004 U.S. presidential
election, Johnston and Pickup show that three polling houses had
large and significant biases in their preference polls.
• In "Prediction Market Accuracy in the Long Run," Joyce E.
Berg, Forrest D. Nelson, and Thomas A. Rietz from the University of
Iowa's Tippie College of Business, compare the presidential
election forecasts produced from the Iowa Electronic Market (IEM)
to forecasts from an exhaustive body of opinion polls. Their
finding is that the IEM is usually more accurate than the polls.
• In "The Keys to the White House: An Index Forecast for
2008," Allan J. Lichtman of American University provides an
historian's checklist of 13 conditions that together forecast the
presidential contest. These "keys" are a set of "yes or no"
questions about how the president's party has been doing and the
circumstances surrounding the election. If fewer than six keys are
turned against the in-party, it is predicted to win the election.
If six or more keys are turned, the in-party is predicted to lose.
Lichtman notes that this rule correctly predicted the winner in
every race since 1984.
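The counting rule behind the keys is simple enough to state directly in code. The function name and the input (a count of keys already judged to be turned against the in-party) are illustrative; the judgment calls behind each key are where the real work lies.

```python
# The 13-keys decision rule: six or more keys turned against the
# in-party predicts a loss; fewer than six predicts a win.
def keys_prediction(keys_against_in_party):
    """Predicted outcome for the in-party under Lichtman's rule."""
    if not 0 <= keys_against_in_party <= 13:
        raise ValueError("there are exactly 13 keys")
    return "lose" if keys_against_in_party >= 6 else "win"
```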
• In "The State of Presidential Election Forecasting: The
2004 Experience," Randall J. Jones, Jr. reviews the accuracy of all
of the major approaches used in forecasting the 2004 presidential
election. In addition to examining campaign polls, trading markets,
and regression models, he examines the records of Delphi expert
surveys, bellwether states, and probability models.