Well - there goes one of my topics in my Statistics classes. When introducing random sampling, I often refer to opinion polls to illustrate how an inference about a population can be made from a sample, and I have pointed to the remarkably accurate opinion polls for the 2012 US Presidential Election. But yesterday's election, together with the recent Brexit vote, is causing opinion-polling companies to reconsider their strategies and techniques.
Image source: Well This is What I Think.
Nathan Bomey, writing in USA Today, asks "How did pollsters get Trump, Clinton election so wrong?". Various arguments are put forward: "some voters were apparently sheepish about admitting to a human pollster that they were backing Trump", that "many pollsters may have incorrectly ruled out the prospect that people who didn't vote in 2012 would nonetheless cast ballots in 2016", and that pollsters "overestimated Clinton's support among minorities and underestimated Trump's support among white voters".
All polls seemed to predict a comfortable win for Clinton, with one exception: the LA Times/USC poll. The LA Times poll saw "what other surveys missed: A wave of Trump support". Their polling showed increasing support for Trump after the third debate with Clinton.
Here's what the LA Times says about why their methodology is different from other polls:
The poll asks a different question than other surveys. Most polls ask people which candidate they support and, if they are undecided, whether there is a candidate they lean to. The Daybreak poll asks people to estimate, on a scale of 0 to 100, how likely they are to vote for each of the two major candidates or for some other candidate. Those estimates are then put together to produce a daily forecast.
Basically, in the LA Times poll, someone who is 100% sure of their vote counts more heavily than someone who is only 60% sure, and someone who says she is 100% certain to vote weighs more heavily than someone only 70% certain. This is an interesting way to gauge public opinion, and theirs is an online poll rather than a face-to-face or telephone poll. They also got the 2012 election almost spot on: they predicted a margin of 3.32% for Obama over Romney, against an actual margin of 3.85%. This is bound to influence the way polls are taken in the future, and in turn raise questions about inferential statistics.
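To make the weighting idea concrete, here is a minimal sketch in Python of how such probability-weighted responses could be combined into a forecast. The data layout, field names, and normalization are my own illustrative assumptions, not the Daybreak poll's actual implementation:

```python
# Sketch of a probability-weighted poll estimate, in the spirit of the
# LA Times/USC Daybreak method described above. Field names and the
# weighting scheme are illustrative assumptions, not the poll's code.

def daybreak_estimate(respondents):
    """Each respondent reports, on a 0-100 scale, how likely they are
    to vote at all and how likely they are to vote for each candidate.
    Vote-choice probabilities are weighted by turnout probability."""
    weighted_totals = {}   # candidate -> sum of turnout * choice weights
    total_weight = 0.0     # sum of turnout weights across respondents

    for r in respondents:
        turnout = r["likely_to_vote"] / 100.0  # 0-100 scale -> probability
        total_weight += turnout
        for candidate, chance in r["vote_chances"].items():
            weighted_totals[candidate] = (
                weighted_totals.get(candidate, 0.0)
                + turnout * chance / 100.0
            )

    # Normalize by total turnout weight to get forecast vote shares.
    return {c: w / total_weight for c, w in weighted_totals.items()}


if __name__ == "__main__":
    sample = [
        # 100% certain to vote, 100% for Trump: counts fully.
        {"likely_to_vote": 100, "vote_chances": {"Trump": 100, "Clinton": 0}},
        # Only 70% certain to vote, 60/40 for Clinton: counts less.
        {"likely_to_vote": 70, "vote_chances": {"Trump": 40, "Clinton": 60}},
    ]
    print(daybreak_estimate(sample))
```

On this toy sample the fully certain Trump voter dominates the estimate (roughly 75% to 25%), which is exactly the effect described above: certainty, both about turning out and about the choice of candidate, shifts the forecast more than a simple head count would.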