Sloppy surveys and the public’s understanding of social research

 

The SRA blog team welcomes proposals for anonymous posts from researchers who would like to write freely about poor research practice, ethical dilemmas, data protection disasters and messy methodology. In this post, our first secret researcher, ‘anon.’, describes the impact of non-random-probability multi-country surveys on the public’s understanding of social research statistics.

We have recently learned from the press that UK citizens are more likely than other Europeans to take drugs before sex. Apparently, 13% of people surveyed from the UK had had sex after taking cocaine, and 20% after taking MDMA, compared with 8% and 15% respectively in the rest of Europe. Around the same time, we were told that about one in four people in France and Poland hold ‘populist’ attitudes, compared with one in ten or fewer in Denmark and Sweden. And we were told that the average Belgian believes that over one in four of Belgium’s population is Muslim, whereas the British believe the figure to be ‘only’ one in six.

It’s fascinating to see how behaviours, views and beliefs vary across countries, and such findings often generate eye-catching headlines. Depending on the finding, our stereotypes will be reinforced or contradicted, or we will learn about something we had never previously thought about. We can celebrate the fact that social research is receiving well-deserved public visibility.


The trouble is that it isn’t. Yes, it’s getting publicity, but it certainly isn’t well deserved.


Why this downbeat assessment? Because the methods used to generate findings such as these are incapable of delivering the accuracy that is claimed for them. All the findings described above were generated by multi-country online surveys fielded to non-random-probability, opt-in samples. While there is nothing inherently wrong with multi-country online surveys, there is a good deal wrong with fielding them to non-random-probability, opt-in samples.


In 2008 the American professional body for survey research, the American Association for Public Opinion Research (AAPOR), charged a task force with “reviewing the current empirical findings related to opt-in online panels utilised for data collection and developing recommendations for AAPOR members.” The first conclusion drawn by the task force in its 2010 report was:


‘Researchers should avoid nonprobability online panels when one of the research objectives is to accurately estimate population values. There currently is no generally accepted theoretical basis from which to claim that survey results using samples from non-probability online panels are projectable to the general population. Thus, claims of “representativeness” should be avoided when using these sample sources.’

If we take this conclusion seriously, between-country survey comparisons such as those referenced above become impossible to take at face value. If the survey result for each country cannot safely be interpreted as an estimate for that country’s population, it cannot be used to compare countries.


Of course, it can be argued that although findings may not be correct in an absolute sense, data-collection biases will be constant across countries, and therefore country differences more or less hold up. Maybe this is true, but it is surely the responsibility of the researcher to demonstrate that it is – to show us why we should believe biases to be constant across countries. Given the methods used to recruit people to opt-in panels, there are many ways in which the kinds of people who join them could differ systematically from country to country. For a researcher to assume this away is at best naive, at worst deliberately misleading.
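To make the point concrete, here is a minimal simulation sketch (in Python, with entirely hypothetical numbers, not drawn from any of the surveys discussed above). Two countries have an identical true prevalence of some behaviour, but the opt-in panel over-recruits people with that behaviour by a different factor in each country. The resulting country ‘gap’ in the panel estimates is pure recruitment artefact.

```python
import random

random.seed(42)

TRUE_PREVALENCE = 0.10  # hypothetical: the same true rate in both countries

# Hypothetical recruitment propensities: people with the behaviour are
# over-represented on opt-in panels, but by a different factor in each
# country (e.g. because panels are recruited through different websites).
OVER_RECRUITMENT = {"Country A": 1.5, "Country B": 3.0}
MAX_FACTOR = max(OVER_RECRUITMENT.values())

def panel_estimate(over_factor, n=100_000):
    """Simulate an opt-in panel: draw people at random from the population,
    but accept those with the behaviour onto the panel more readily."""
    accepted = []
    while len(accepted) < n:
        has_behaviour = random.random() < TRUE_PREVALENCE
        accept_prob = (over_factor if has_behaviour else 1.0) / MAX_FACTOR
        if random.random() < accept_prob:
            accepted.append(has_behaviour)
    return sum(accepted) / n

for country, factor in OVER_RECRUITMENT.items():
    print(f"{country}: panel estimate = {panel_estimate(factor):.3f}")

# Both true rates are 0.10, yet the panel reports roughly 0.14 for
# Country A and 0.25 for Country B: a large, entirely spurious gap.
```

Nothing here is specific to the numbers chosen: any between-country difference in who joins the panel translates directly into a spurious between-country difference in the estimates.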


Another argument might be that the AAPOR conclusion cited above is out of date, and that with modern adjustment methods non-random-probability, opt-in survey samples are now properly representative of their survey populations. Unfortunately, there is no evidence that this is the case. Two recent studies – one in the USA and one in Australia – have compared data from non-probability online surveys and probability-sample surveys against gold-standard benchmarks [1]. After weighting, the probability-survey estimates were more accurate and less variable than the non-probability-survey estimates, closely replicating a much-cited 2011 study by Yeager et al. [2]. The US study showed that the non-probability survey estimates had not improved in accuracy at all since the Yeager et al. study.
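The intuition behind these results can be illustrated with another minimal sketch, again in Python and again with made-up numbers. Post-stratification weighting can only correct for variables the researcher observes and has benchmarks for, such as age. If joining the panel also depends on an unmeasured trait that is correlated with the survey outcome – here, hypothetically, heavy internet use – the weighted estimate stays biased.

```python
import random

random.seed(1)

# Hypothetical population: two age groups; the outcome is correlated with
# an unmeasured trait ("heavy internet use") that also drives panel joining.
POP_SHARE = {"young": 0.4, "old": 0.6}     # census benchmark for weighting
P_HEAVY_USE = {"young": 0.5, "old": 0.2}   # trait prevalence by age group
P_OUTCOME = {True: 0.30, False: 0.10}      # outcome rate by heavy use

def true_rate():
    """Population value the survey is trying to estimate (~0.164 here)."""
    return sum(
        POP_SHARE[age] * (P_HEAVY_USE[age] * P_OUTCOME[True]
                          + (1 - P_HEAVY_USE[age]) * P_OUTCOME[False])
        for age in POP_SHARE
    )

def opt_in_sample(n=200_000):
    """Heavy internet users are five times as likely to join the panel."""
    sample = []
    while len(sample) < n:
        age = "young" if random.random() < POP_SHARE["young"] else "old"
        heavy = random.random() < P_HEAVY_USE[age]
        if random.random() < (1.0 if heavy else 0.2):
            sample.append((age, random.random() < P_OUTCOME[heavy]))
    return sample

sample = opt_in_sample()
n = len(sample)

# Post-stratification weight per age group: census share / sample share.
sample_share = {age: sum(1 for a, _ in sample if a == age) / n
                for age in POP_SHARE}
weight = {age: POP_SHARE[age] / sample_share[age] for age in POP_SHARE}
weighted_estimate = sum(weight[age] for age, y in sample if y) / n

print(f"true rate:               {true_rate():.3f}")        # ~0.164
print(f"weighted panel estimate: {weighted_estimate:.3f}")  # ~0.233
```

Weighting by age exactly reproduces the census age distribution, yet the estimate remains well above the true value, because within every age group the panel is still dominated by the kind of person who joins panels.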


Does it matter that poor-quality survey findings are being put in the public domain, often with considerable publicity? What does a little inaccuracy matter if making these ‘results’ visible highlights important social and policy issues in a manner that encourages debate? Yes, it does matter. If social research is to do more than provide unreliable PR material, it is essential that its findings reflect reality accurately, and that researchers are seen by the public to be trustworthy. The reporting of poor-quality, inaccurate data degrades the whole enterprise of social research and undermines the efforts of its more scrupulous practitioners.

 

[1] Pennay, D. W., Neiger, D., Lavrakas, P. J. and Borg, K. (2018). The Online Panels Benchmarking Study: a Total Survey Error comparison of findings from probability-based surveys and non-probability online panel surveys in Australia. http://csrm.cass.anu.edu.au/sites/default/files/docs/2018/6/CSRM_MP2_2018_ONLINE_PANELS.pdf

MacInnis, B., Krosnick, J. A., Ho, A. S. and Cho, M. (2018). The accuracy of measurements with probability and nonprobability survey samples: replication and extension. Public Opinion Quarterly, 82, 707-744.

[2] Yeager, D. S., Krosnick, J. A., Chang, L., Javitz, H. S., Levendusky, M. S., Simpser, A. and Wang, R. (2011). Comparing the accuracy of RDD telephone surveys and Internet surveys conducted with probability and non-probability samples. Public Opinion Quarterly, 75, 709-747.



If you would like to submit a proposal for an anonymous article for the SRA Blog, please get in touch with [email protected] (Digital Communications Manager).