What role for polling in modern campaigns?
The British general election result came at the worst possible time for the polling industry.
Over the previous decade - fuelled by American campaigns’ obsession with microtargeting and pundits’ obsession with “big data” - it seemed as if the number crunchers had finally taken their place at the centre of politics. Had the pollsters called the election right, there would have been a clamour for campaigns to minimise the role of human judgement in the political process.
In such a scenario, modern campaign staff would be more likely to be Oxbridge PhDs than hard-bitten political operatives. Polls and consumer and financial databases would work out who thought what – and social media would help deliver perfectly personalised messages to individual voters.
The 2015 election brought everyone back down to earth with a bump.
Pollsters are beginning their Westminster rehabilitation process, explaining what went wrong (and how it will not happen again). They are expected to focus particularly heavily on the need to correct their samples. It is thought that pollsters spent too much time talking to political obsessives – those who can think of nothing more fun than speaking about politics to anyone who will listen. Their samples may have come dangerously close to being self-selecting.
This may have meant that key groups were under-represented – for example, those who were quietly and boringly Tory.
Correcting their samples is obviously vital – and we should expect headline poll results to become accurate again. But post-2015 we are unlikely to see the reputation of pollsters and other number crunchers reach the heights expected of them pre-election.
The reality is that polling has never been an entirely satisfactory experience for political strategists. As the best pollsters themselves admit, vast amounts of polling on some of the most fundamental questions are flawed – often flatly contradicted by polls where the question wording is slightly different.
Let us put aside the election results for the moment and consider some key issues.
Take the polling around the recession and Government cuts. Here, the public essentially said the following: (a) the cuts were necessary; (b) the cuts were good for the economy; (c) the cuts were being done unfairly (with many agreeing they were being done too quickly); (d) Labour’s high-spending was largely to blame; and (e) they would sooner have David Cameron and George Osborne in charge than their Labour equivalents.
Or take the polling around the Government’s general record during its last term. Again, the public essentially said: (a) the coalition was doing a bad job; (b) the coalition was not good for “people like me”; (c) David Cameron personally was not doing a good job; and (d) David Cameron did not understand the lives of ordinary people. And then the public voted Tory in huge numbers.
On the recession and public finances, my personal view is that the public have not been won around to a small state and low levels of spending.
Yes, they have blamed Labour for excessive levels of debt and the poor state of the economy, but they have blamed Labour for “waste” and unnecessary spending rather than for high levels of spending on public services (which they support). The public cannot therefore be expected to back major cuts that affect public services, and a competent Labour campaign against such cuts could secure widespread public sympathy.
Furthermore, the public’s apparent suspicion of debt may exist because they have not heard credible voices making the sort of case that US economist Paul Krugman makes – that austerity in a downturn makes no sense.
Is this analysis of public opinion right? Maybe, maybe not. But the point is this: human judgement really matters because polls cannot give definitive answers on public opinion on complex issues.
Polls provide us with large amounts of often confusing and apparently contradictory data that needs analysing by the human brain. While focus groups help us connect the dots, ultimately it is down to politicians and consultants to decide what the numbers mean and what messages should best be deployed to secure public support.
The best political strategists are able to distinguish between what the public say they support and what they actually consider to be important (an entirely different proposition); they can work out which figures are untrustworthy because respondents lack real understanding of an issue; and they can spot which messages only appear to command public support because they have never been stress-tested.
So, polling has serious limitations. But there is no contradiction between saying that and also saying that campaigns should spend as much as they possibly can on polling, data analysis and microtargeting tools. The more numbers a campaign can generate, the better.
The point is not that data is unimportant – but rather that, in politics, the data needs to be owned by experienced political professionals who can use it intelligently and spot anomalies where they exist.
To quote a highly-regarded American professional – politics is not a science, but an art that uses scientific tools.
James Frayne is Director of Policy & Strategy at Policy Exchange and author of Meet the People, a guide to public opinion.
PHOTO: PA - Labour party campaign organisers Alastair Campbell, Dave Hill, and Peter Mandelson prior to Labour's 1997 manifesto launch.