Historically, polls have shown that, although they give a guide to the voting intentions of the British people, they are not 100 percent accurate. Perhaps the most notorious instance of inaccuracy was the 1992 election, during which the polls consistently showed Kinnock’s Labour ahead, yet the election was won by John Major’s Conservatives. How can they get it so wrong? I’ll attempt to explain.

As I guess regular readers know by now, I am an amateur psephologist, amongst other skills, and I actually like looking at the numbers associated with elections and opinion polls. Not that the numbers actually “do” anything for me, but one can analyse both those numbers and how they are arrived at to help identify real-world patterns.

Election data is easy, provided it is a straight election, with no stuffing of postal ballot boxes and no rigging of the count: fortunately, most of us don’t live in places like Tower Hamlets. You just look at sets of numbers, combine them in different ways, and analyse what they mean for the different parties.

Opinion polls, and their publication in newspapers, are another matter entirely. Behind the simple result you see in the paper, such as this ComRes poll for the Sunday Mirror and Independent on Sunday of 16 November …

[Image: headline voting-intention figures from the ComRes poll of 16 November]

…are pages and pages of data tables for that poll, just one of which is illustrated here:

[Image: one of the data tables from the same ComRes poll of 16 November]

Complicated and confusing? Not half! I try to extract what goes on behind those polls and then try to help others understand it. The pollsters present the raw data (that is, the initial answers given by the voters they asked) and then their manipulations of it, and they do indeed manipulate that data in different ways. What I have discovered is that newspapers can and do choose which analysis they use to provide the answer they most desire. Indeed, they can choose a different pollster, each of whom has subtle variations in the way they handle the data.

Before they manipulate the raw data in any way, shape or form, they have to set the question. As you will recall from the arguments over the Scottish Referendum, getting the question right is absolutely fundamental, as it has been shown time and again that the wording affects the outcome of the poll. So, for the main question of voting intentions at the next General Election, most pollsters ask this:

“If there were a General Election tomorrow, would you vote Conservative, Labour, Liberal Democrat (the order of the three randomly rotated when the question is asked) or for some other party?”

UKIPpers call this the ‘unprompted’ question. We cry out, with some justification, that it is unfair on us, because it does nothing to prompt waverers or ‘Don’t Knows’ to name UKIP. Survation, and now ComRes (sometimes), include UKIP in the party prompts.

This was demonstrated recently when ComRes conducted two polls on the same day (with different sets of voters), one unprompted and one prompted. The prompted poll gave UKIP an extra 5%, knocked a little under 2% off Labour, and 3% off the Tories.

It doesn’t end there. They also ask questions on how likely the voter is to vote, what their recollection of their 2010 vote was, which party they regard as their natural home, and what their long-term loyalty is, rather than how they feel on the day they’re asked. Many also ask a question based on the Australian voting system: if you were required by law to vote, who would you vote for? In the poll illustrated in this article, 63% still said “Don’t Know” or refused to answer that question.

Without going into a lot of statistical mathematics, each pollster then combines the answers to these extra questions with the original data to come up with different analyses, often more than one. They also adjust the result for any ‘error’ their sample has in terms of the voter mix across the social classes. It is here that a newspaper can pick and choose which analysis it wants to use, although the pollster may make it easy for them by picking one out and putting it on the front page of their detailed analysis. A rough sketch of how such an adjustment works is shown below.
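To make the idea concrete, here is a minimal sketch of a weighting step in Python, using entirely invented numbers and a single weighting variable (social grade). Real pollsters weight on several variables at once and each has its own scheme, so treat this only as an illustration of the principle.

```python
# Minimal sketch of demographic weighting with invented numbers.
# Real pollsters weight on several variables at once (class, age,
# region, recalled past vote); this shows the principle with one.

# Raw, unweighted sample: each respondent has a social grade and a
# voting intention.
respondents = [
    {"grade": "ABC1", "intention": "Con"},
    {"grade": "ABC1", "intention": "Lab"},
    {"grade": "ABC1", "intention": "Con"},
    {"grade": "C2DE", "intention": "Lab"},
    {"grade": "C2DE", "intention": "UKIP"},
    {"grade": "C2DE", "intention": "Lab"},
]

# Suppose the sample is 50% ABC1 / 50% C2DE, but the target profile
# (e.g. from census data) is 55% ABC1 / 45% C2DE.
target_share = {"ABC1": 0.55, "C2DE": 0.45}

# Share of each grade actually present in the sample.
sample_share = {}
for r in respondents:
    sample_share[r["grade"]] = sample_share.get(r["grade"], 0) + 1 / len(respondents)

# Each respondent's weight = target share / sample share for their grade.
for r in respondents:
    r["weight"] = target_share[r["grade"]] / sample_share[r["grade"]]

# Weighted voting intention.
totals = {}
for r in respondents:
    totals[r["intention"]] = totals.get(r["intention"], 0) + r["weight"]

total_weight = sum(totals.values())
for party, w in sorted(totals.items()):
    print(f"{party}: {100 * w / total_weight:.1f}%")
```

The same mechanism applies when the weighting variable is recalled 2010 vote or likelihood to vote; only the target figures change, which is exactly where the pollsters’ schemes diverge from one another.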

It is here that UKIP ‘suffers’ at the hands of the polls. In terms of recollection of the 2010 vote, only 3.6% of voters voted for UKIP in 2010, so that figure will suppress any manipulation of voting intention for us. Also, the ‘natural home’ question will tend to work against us, as a shorter-lived party which has not yet built as big a base of permanent support as the other parties.
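As a toy illustration of that suppression (all numbers invented apart from the 3.6% figure above), here is why weighting to the recalled 2010 vote drags down the current share of a party that has grown a great deal since then.

```python
# Toy example of why weighting to recalled 2010 vote can suppress a
# fast-growing party. Sample sizes and intentions are invented.

sample_size = 1000
ukip_2010_recallers = 80        # 8% of the sample recall voting UKIP in 2010
target_2010_share = 0.036       # the ~3.6% weighting target mentioned above

# Cell weighting: scale each group so its weighted share matches 2010.
w_ukip_2010 = target_2010_share / (ukip_2010_recallers / sample_size)          # 0.45
w_others = (1 - target_2010_share) / (1 - ukip_2010_recallers / sample_size)   # ~1.05

# Suppose 140 respondents now intend to vote UKIP: 80 are the
# down-weighted 2010 UKIP voters and 60 are new converts.
unweighted = 140 / sample_size
weighted = (80 * w_ukip_2010 + 60 * w_others) / sample_size
print(f"unweighted {unweighted:.1%}  vs  weighted {weighted:.1%}")  # 14.0% vs ~9.9%
```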

What is really useful, though, is that the polls show us the regional spread of votes, the best being where they break it down to the EU regions. So, in the example poll, on the table used by the papers, UKIP ranged from 5% in Scotland and 9% in Wales (on very small samples, though) to 30% in East Anglia and 31% in the South West, averaging out at 19%.
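For completeness, here is a sketch (with invented sub-sample sizes, and only four of the regions shown) of how regional figures combine into the national headline: a weighted average by sub-sample size, not a simple average of the regional percentages. It is also a reminder of why the tiny Scottish and Welsh sub-samples carry a large margin of error.

```python
# Sketch of combining regional sub-samples into a national figure.
# Percentages are those quoted above; sub-sample sizes are invented,
# and the other EU regions are omitted, so this will not reproduce
# the poll's actual 19% average.

regions = {
    # region: (UKIP %, sub-sample size)
    "Scotland":    (5, 90),
    "Wales":       (9, 50),
    "East Anglia": (30, 100),
    "South West":  (31, 95),
}

total_n = sum(n for _, n in regions.values())
national = sum(pct * n for pct, n in regions.values()) / total_n
print(f"weighted average across these regions: {national:.1f}%")
```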

Unscrambling these manipulations (they call them weightings) and trying to come up with a fairer interpretation for UKIP is a nightmare, and one could so easily mislead by trying.

Finally, we need to consider this: there are two ways of looking at this manipulation. One is that if our published poll figures are lower than they really should be, it worries the other parties less and makes every member of UKIP hungrier for higher poll figures.

On the other hand, if our published poll figures were ‘bang on the money’, it might make us complacent while ringing louder warning bells with LibLabCon, spurring them on to make bigger and bigger promises to the electorate. We know they won’t keep such promises, but does the electorate?
