The Difficulties of Predicting an Election

Michael Kowal

One of the lessons learned from the 2016 presidential race is how difficult it is to predict an election. Many forecasters and pollsters ended up with egg on their faces, perhaps most notably Sam Wang of Princeton University, who ate a bug on live television after losing a bet that Hillary Clinton would win the election. Wang had given her a ninety-five percent chance of victory.

It’s a subject of great interest to Visiting Fellow of Digital Computational Studies Michael Kowal. Kowal is a political scientist by training with a focus on voting patterns. He’s currently teaching Forecasting and Predictions (DCS 2020 / GOV 2901), which examines how statistics, computation, and the Internet have led to increased attention on prediction in elections. Kowal recently spoke with Bowdoin News about the difficulties of predicting an election.

What made the 2016 election so hard to predict?
One of the things that struck me about this election was how different it was from many previous ones, and that was due to the “Trump factor.” For example, this is the first time we’ve had a candidate who has never held a government position or been a general. Also, many of the comments made by Donald Trump during his campaign, and the scandals that followed, would have destroyed earlier “normal” candidates. Remember Gary Hart in 1988? John Edwards in 2008? Both of their candidacies were undone by sex scandals.

Also, back in 2004, Democratic candidate Howard Dean waved goodbye to his political career when he was filmed screaming at a rally. People thought he had gone crazy. Contrast this with 2016, when Trump was able to survive scandal after scandal, from his use of racially charged language, to the release of a tape of him making sexually demeaning comments about women.

So this election really ripped up the rule book when it comes to making predictions?
To an extent, but also remember there are other longer-term trends to consider. One of the difficulties with polling nowadays is the huge decline in the number of people who respond to polls: from a high of around 80 percent in the 1970s, we’re now down to single digits. So that makes us ask questions such as: Who responds to polls nowadays? Are they like regular voters? If not, are they systematically biased?
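The danger of systematically biased nonresponse can be illustrated with a small simulation (all numbers below are made up for illustration): if one candidate’s supporters are even slightly less likely to answer the phone, the raw poll estimate drifts away from true support.

```python
import random

random.seed(0)

# Hypothetical electorate: true support is an even 50/50 split, but
# candidate A's supporters are assumed to answer polls less often.
N = 100_000
voters = ["A"] * (N // 2) + ["B"] * (N // 2)
response_rate = {"A": 0.05, "B": 0.08}  # single-digit rates, as in the text

respondents = [v for v in voters if random.random() < response_rate[v]]
poll_share_A = sum(v == "A" for v in respondents) / len(respondents)

# True support for A is 0.50; the polled share lands well below it,
# purely because of who picks up the phone.
print(f"polled support for A: {poll_share_A:.3f}")
```

Weighting respondents back up to known population totals is how pollsters try to correct for exactly this kind of skew, but that only works for characteristics they can measure.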

In the lead-up to this election, Trump often talked about the unreliability of polls and the importance of “the silent majority,” and in some ways he may have been right. Voters might have actually been unwilling to admit they were planning to vote for Trump, and this goes back to something in political science known as the social desirability bias. So if you’re called by a pollster, there’s a certain socially acceptable answer that you’re expected to give. For example, it may not be acceptable to say “I’m not going to vote for Hillary Clinton because she’s a woman,” but that could be what many people were thinking. The evidence supporting this theory has been mixed, but there was an interesting survey done in 2008 that indicated one in four people would be made angry by a female president. Some people might also be concerned about being labeled with some of the words attached to Trump by his critics, such as “racist” or “bigoted,” if they admit to supporting him.

How are pollsters adapting to the fact that most people do not respond to polling?
It’s a challenge, and it’s been a struggle since the rise of cellphones and the decline of landlines. It’s harder to poll cellphones because you cannot legally autodial them, and you also have to reimburse people for cellphone minutes. Cellphones are also a lot more difficult to track: twenty years ago, you could see where people lived because of their landline number. Nowadays it’s harder because a cellphone number does not necessarily indicate where you reside. For example, I have a Massachusetts cellphone but I live in Maine. Online polling is one possible solution to this problem, but there’s a risk that this approach could under-represent some people in rural areas or those with lower incomes and less access to a computer.

Do we still need polls?
The short answer is yes. Polls are still useful, and although most pollsters were off in their final predictions by about two percentage points, it should be noted that the polls did predict a Clinton win in the popular vote. Part of the problem is the electoral college system, which is complicated and harder to predict. Also remember that polls are not perfectly accurate, and a lot of results were within their margin of error. One big question that remains for pollsters is: “Who is likely to turn out to vote?” And this might be one of the big lessons from this election: we need to do a better job of predicting who will turn out.
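The margin of error mentioned above comes from a standard formula for a proportion estimated from a simple random sample; here is a minimal sketch, using a hypothetical national poll of 1,000 respondents.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample.

    p: observed proportion (e.g. 0.48 for a candidate polling at 48%)
    n: sample size
    z: critical value (1.96 corresponds to 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1,000 respondents, candidate at 48 percent.
moe = margin_of_error(0.48, 1000)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3 points
```

With a roughly three-point margin on each candidate, a race polling 48–46 is statistically indistinguishable from a tie, which is why so many 2016 state results fell “within the margin of error.”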

How forecasters FiveThirtyEight (bottom) and the New York Times’ Upshot (top) predicted the 2016 election

What ways are there, other than polling, to predict elections?
One method involves the use of tools like Google Trends, which was originally used to predict instances of the flu. If you start to feel sick, you tend to start googling your symptoms, so there was this idea, around 2012, that you could predict a flu outbreak before the CDC [Centers for Disease Control], based on Google searches. In economics, researchers have used this method to study unemployment and look for indications of potential unemployment spikes.

So what were people googling during the election campaign? Interestingly, people were searching for Donald Trump a lot more than Hillary Clinton, which to me shows a lack of enthusiasm for Clinton and suggests that the election was more about Trump. A look at Twitter sentiment likewise shows that people were talking about Trump much more than Clinton. Finding ways to measure enthusiasm might be key to understanding which people turn out to vote.

Other prediction methods can reveal some pretty strange factors being cited. For example, in contests between male candidates, some suggest the one with the deeper voice will prevail. There’s also a link between height and success, with one study finding that taller presidential candidates tend to get more popular votes, although that didn’t happen in the latest election. And then there is the perhaps even more questionable correlation with various sporting results. For example, if the Lakers make it to the NBA finals in an election year, the GOP will probably win; this has been the case in every election since 1960, except for 2008. The problem with these types of prediction techniques, though, is that it could all just be coincidence.

Why do you believe that failure of prediction is a good thing?
Because it makes us re-examine our models and improve. Failure forces us to go back, retool, and try to become more accurate. In the long run it’s beneficial.
