Since leaving GfK and setting up on my own I have worked on a number of polling projects for an international research company. They have been on the scene for a long time, and polling is only a small part of what they do. In the last election they polled weekly for a media client, and I helped them out with weighting and analysis. When this election was called the MD rang me to ask if I’d do the same again, though he warned he wasn’t sure exactly what polls he would be doing. He went to see his client, one of the UK’s biggest national daily papers, who wanted a survey done straight away but weren’t prepared to pay more than £1,200 for it. Despite this being pretty much what his out-of-pocket data collection costs would be, he agreed to it: election polling is fun, after all.
But when the next week came round, the newspaper announced that they expected to get the poll entirely for free, and that if he didn’t want to do it, they had been in contact with another pollster who was prepared to do it for nothing.
This is far from a unique situation: there are a number of media clients who think that the marketing value pollsters get from having their name on the front page or on television is such that they should be prepared to make a financial loss on the deal. When I was still doing predictive polling at NOP and GfK this was already happening, with clients refusing to pay enough to cover our costs, and saying that if we didn’t want to do the work for such a low price there were others who would. I guess expecting polls to be done for nothing is a logical extension of this.
And as long as there are pollsters who are prepared to subsidise their clients in this way it will carry on. If I were setting up an entirely new polling business I might well decide that paying for data collection and analysis out of my own pocket was a worthwhile investment. But my pal runs a long-established business with good recognition in the areas that provide most of its work. He turned down the client’s suggestion of a free poll, and who can blame him?
If polling were as easy as making ball-bearings then cost-cutting and loss leaders might make long-term economic sense. But as we have seen from recent elections, polling isn’t easy; it requires constant self-examination and revision of methods to keep reacting to changing problems.
The 2015 election was a shock to the whole industry, which reacted quickly by engaging Professor Patrick Sturgis and a team of outside experts to look at what went wrong, and make recommendations on how errors could be minimised in future. At the same time as the enquiry team was doing its work, many of the pollsters were spending a lot of money doing their own internal analysis and thinking up new sampling, questioning, and weighting approaches.
The pollsters duly made the changes recommended by the enquiry, plus others of their own. It turned out in 2017 that had they not done this, and instead carried on doing exactly as they had been in 2015, then the 2017 polls would have been more accurate. This is not in any way intended as a criticism of the enquiry, which did an excellent job, but just illustrates how hard it is for pollsters to cope with changes in attitudes and behaviours that might make people more or less likely to vote, more or less likely to take part in a survey, or more or less likely actually to vote in the way they said they intended to.
For this reason pollsters are never in a position to assume that they have finally cracked it, because the chances are that as soon as they do finally crack it, social or population or other changes mean that there is now a new problem that needs cracking. They thus always have to spend time looking at the data in great detail, trying to work out if minor tweaks to their method might make things better.
What makes things worse is that a lot of the problems pollsters face apply particularly to elections (there were a lot more “shy Tories” in 1992 than there were “shy Persil buyers”) and the impact of any changes in methodology can only be tested at a general election. If elections only come around every 4 or 5 years there’s a real chance that the changes will have been overtaken by events. In a way the pollsters have been lucky, in that the Fixed Term Parliament Act was so badly drafted as to be meaningless. But if we stop having elections every 2 years this “gap between tests” problem, which I think is a significant one, will reappear.
All of this costs money, and if clients aren’t prepared to pay anything like the real cost of conducting an election poll, then the longer-established pollsters will start thinking they can’t justify working at a loss. We risk a situation where there is constant churn of new pollsters appearing on the scene, prepared to work for nothing just to get their names on the map. Once they have built up enough expertise and accuracy to build a wider business on the back of it, they too decide they don’t want to do polls at a loss; their expertise is then lost, and the whole process starts again from scratch. This hasn’t happened yet – not least because there is a core of clients prepared to pay a realistic price – but to my mind it is a worrying possibility.
AUTHOR BIO: Nick Moon joined NOP Research as a graduate trainee in 1977 and spent 39 years there, surviving two changes in ownership and several company name changes, the last being to GfK. He spent most of his career there as MD of the Social Research Division, and worked extensively on media polls, exit polls and private polling for the Labour Party. In 2017 he left to set up his own consultancy, Moonlight Research, and continued his involvement in political research, working on the 2017 exit poll and a number of other election polls, and spending another two years doing polling for the Labour Party.