We all know there's a problem with political polls. Every time a new poll is published, we have 24 hours of headlines, questions shouted at politicians at every doorstop, denials, pundits attributing causes to every "dip" or "surge", and the polls themselves dismissed as meaningless or "dodgy". But the fault isn't usually with the poll. Here are just a few reasons why:
- Political polls are conducted transparently and with robust, documented methodology. Poll methodology is typically provided along with the findings, if the media organisations or critics care to ask for it. Indeed, it's in the interests of market research companies to provide details of their methodology in order to support the credibility of their findings. It's just that the media and others aren't usually that interested. Of course there are technical flaws and merits in every approach - pollsters and psephologists may argue at length among themselves about methods and significance, but you'll rarely see those arguments played out in the mainstream media. Margins of error are also provided, but they too attract scant attention while tiny changes in voting percentage are discussed ad nauseam (see the margin-of-error sketch after this list).
- Stated intention today is not a reliable predictor of future behaviour. This is well documented in the academic marketing literature: all sorts of factors can stop a consumer following through on a stated intention. A poll that asks "If a Federal election were held today..." sets up an artificial situation, making it even less likely to be a good predictor of the future. Even with the best design - and the best will - the findings of such a poll can only be used as a very rough guide to what might happen in the (impossible) event of an election being held today.
- Local factors may significantly alter intentions. Voting involves a complex process - it's very different from answering a phone call or completing a survey online. For example, a significant proportion of those who provide opinion poll answers probably have no clear understanding of who their local candidates are, or will be, at a future Federal election. What if I tell the pollsters that I intend to vote Liberal, but when I eventually see the picture of the Liberal candidate on the How to Vote card I don't like the look of him or her? Or I'm intending to vote Labor, but I recognise the Greens candidate as a former local Councillor who did some good things for the community? These may be small effects, but when countless column inches are devoted to differences of 1 or 2 per cent, they may well be very relevant.
- Correlation does not equal causation. A movement in poll numbers - even a change that falls within the margin of error and thus can't even be thought of as a "real" change - invariably provokes a flood of analysis seeking to explain its cause. No change in voting intention should ever be attributed to any specific piece of political business - an announcement, a scandal, a photo op or a fuck-up - based on a piece of descriptive research like an opinion poll. Only a properly controlled piece of causal research, where the impact of one particular factor can be investigated while others are controlled for, can provide strong evidence of causation; for example, was voting intention statistically significantly different among those who had seen a particular speech or interview, compared with those who hadn't (see the comparison sketch after this list)? Even then, there might well be confounders - people who saw the speech might be better informed about politics than those who didn't, and this might be related to their long-term voting intentions.
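To make the margin-of-error point concrete, here is a minimal sketch - not taken from any pollster's documentation, and assuming a simple random sample, which real polls only approximate - of the standard formula behind the "plus or minus 3 per cent" that gets buried in the fine print:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a proportion p measured on a simple
    random sample of size n, at roughly 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 voters reporting a 50% primary vote:
# the margin of error is about +/- 3.1 percentage points, so a reported
# "surge" from 50% to 52% sits comfortably inside sampling noise.
print(round(100 * margin_of_error(0.5, 1000), 1))  # ~3.1
```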
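And as a toy illustration of the kind of controlled comparison the last point calls for, here is a sketch with entirely made-up numbers - the groups, counts and cut-off are assumptions for illustration, not results from any real study:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: did group A (say, people who saw the speech)
    differ from group B (people who didn't) in stated voting intention?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 220 of 500 viewers intend to vote for the party,
# versus 200 of 500 non-viewers. z is about 1.28, short of the ~1.96 needed
# for significance at the 5% level - and even a significant result would not
# rule out confounders such as political engagement.
print(round(two_proportion_z(220, 500, 200, 500), 2))  # ~1.28
```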
My point is, when polls are discussed, dissected and ultimately despised, don't dismiss the research as "dodgy" unless you've examined and understood the methodology. To put it bluntly, don't blame the polls or the pollsters - blame the pollies and the press.