Ipsos Facto – a new AI assistant for social researchers.

Generative AI has assumed many roles recently, but what about social research assistant? Imagine having someone who could summarise your transcripts, write first drafts of commentaries, and talk to you about your research design. That's exactly what Ipsos has created. Ipsos Facto is Ipsos' proprietary generative AI platform, built for rigorous social and market research. Ed Allen, Associate Director and Head of Innovation at Ipsos Public Affairs, introduces Ipsos Facto, beginning a series of blogs that critically examines the application and potential ramifications of generative AI in social research.
What is Ipsos Facto?
 
Ipsos Facto is much more than a standard AI assistant. It is built for social research, is built to the highest security standards and hosts a variety of large language models (LLMs), including those from Google, OpenAI, Anthropic and Mistral. These models can understand and generate human-like text and images and, soon, video. Ipsos Facto's purpose is to assist researchers in their day-to-day tasks and provide a new lens through which to collect, manage, view and interpret data. 
 
What does Ipsos use it for?
 
We are in an exploratory phase, constantly evolving and innovating, safely testing and experimenting with how generative AI, and consequently Ipsos Facto, can be effectively integrated into social research.
 
Ipsos Facto has become an important part of Ipsos' work: summarising data, creating stimuli, detecting errors in datasets, synthesising published literature, creating easy-to-read research materials, coding and cleaning verbatims, and helping with multi-source analysis. It also helps with a range of administrative and project management duties, as well as supporting copy-writing. We see Ipsos Facto not as a methodology, but as vital research infrastructure that can be incorporated into various stages of our research process. Most recently, we used Ipsos Facto during the UK general election to help us understand how election promises engaged the public.
 
Origins
 
We have been using analytical AI over the past decade, conducting topic modelling on large datasets and sentiment analysis on social media, among other uses. In November 2022, the launch of ChatGPT marked our shift towards generative AI and building Ipsos Facto. 
 
Through our established Innovation Networks, researchers consulted with their teams to identify use cases and risks. Our Innovation Leads for each research service facilitated numerous discussions about the potential for generative AI in social and market research. The testing of generative AI was then formalised, leading to the creation of a comprehensive repository encompassing use cases, prompts, and quality checks.
 
To build Ipsos Facto, we partnered with top LLM providers, including Google, OpenAI, Anthropic, and Mistral. Our technologists extensively tested different models. The Beta version underwent a rigorous review for compliance with industry standards and ethical guidelines. 
 
We launched Ipsos Facto in the summer of 2023. Although the tool was a novel concept, the response was encouraging: 80% of our 20,000 colleagues used it within the first year.
 
Challenges
 
However, there was still one significant barrier to social researchers at Ipsos using it. Most of the LLMs Ipsos Facto uses were hosted on servers based in the US. This meant we weren't able to use it for projects with the many clients who specify that data needs to stay within the UK or EU. 
 
Still, during this time, we focused on building our AI infrastructure and knowledge base. We tested the tool with our own data and with some clients in the private sector. We developed a prompt engineering framework and prompt library, and initiated a comprehensive training programme. We also formed a network of AI enthusiasts for additional support and established feedback mechanisms so social researchers could tell us what additional features would help them. For example, after feedback, we introduced metrics showing the strengths and weaknesses of different LLMs. 
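
To give a flavour of what an entry in a prompt library can look like in practice, here is a minimal sketch in Python. The names, template and quality checks below are illustrative assumptions for this blog, not Ipsos Facto's actual framework.

```python
# Illustrative sketch only: a minimal prompt-library entry of the kind a
# prompt engineering framework might standardise. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class PromptTemplate:
    """A reusable, documented prompt with named placeholders."""
    name: str
    task: str                      # what the researcher wants the model to do
    template: str                  # the prompt text with {placeholders}
    quality_checks: list[str] = field(default_factory=list)

    def render(self, **kwargs: str) -> str:
        """Fill the placeholders with project-specific content."""
        return self.template.format(**kwargs)


# Example entry: summarising an interview transcript against a topic guide,
# with instructions that discourage the model from guessing.
summarise_transcript = PromptTemplate(
    name="summarise_transcript_v1",
    task="Summarise a qualitative interview transcript against a topic guide.",
    template=(
        "You are assisting a social researcher.\n"
        "Summarise the transcript below under these headings: {headings}.\n"
        "Quote participants verbatim where possible, and say 'not covered' "
        "rather than guessing when a topic is absent.\n\n"
        "Transcript:\n{transcript}"
    ),
    quality_checks=[
        "Spot-check quotes against the source transcript.",
        "Confirm no heading has been invented by the model.",
    ],
)

prompt = summarise_transcript.render(
    headings="Awareness; Barriers; Suggested improvements",
    transcript="[interview text goes here]",
)
```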
 
Then came the moment we had been waiting for: some Google and Mistral servers became available to us in the UK and EU. This meant we could start talking seriously to clients about trialling Ipsos Facto in a way that guaranteed adherence to their security standards.
 
These conversations, with government departments and public bodies, are inspiring and challenging. Social researchers are now grappling with AI technology more than ever before – its jargon, its sense of inevitability and its endless possibilities, at once exciting and daunting. 
 
What we’re doing now
 
Our strategy is for humans to work together with AI. AI will not make decisions for us.
 
We have started working with our clients to assess how generative AI could be applied to social research. For example, we are currently conducting a review of generative AI qualitative data management trials, with the aim of establishing best practice and assessing the performance of different models. One of these trials is an experiment in which we are creating two generative AI-managed datasets alongside a human-managed dataset, comparing differences in quality and speed and examining the bias inherent in both human and AI interpretations. 
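
As an illustration of how such a side-by-side trial can be scored, the sketch below computes raw agreement and Cohen's kappa between a human coder and a model coding the same open-text responses. The code frame and data are invented for the example; this is not our trial code.

```python
# Illustrative sketch only: comparing human-coded and model-coded verbatims.
from collections import Counter


def cohens_kappa(human: list[str], model: list[str]) -> float:
    """Chance-corrected agreement between two coders over the same items."""
    assert human and len(human) == len(model), "need paired, non-empty codes"
    n = len(human)
    observed = sum(h == m for h, m in zip(human, model)) / n
    h_counts, m_counts = Counter(human), Counter(model)
    expected = sum(
        (h_counts[label] / n) * (m_counts[label] / n)
        for label in set(human) | set(model)
    )
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0


# Hypothetical code frame applied to the same ten open-text responses.
human_codes = ["cost", "access", "cost", "trust", "access",
               "trust", "cost", "other", "access", "trust"]
model_codes = ["cost", "access", "cost", "trust", "trust",
               "trust", "cost", "other", "access", "access"]

agreement = sum(h == m for h, m in zip(human_codes, model_codes)) / len(human_codes)
print(f"Raw agreement: {agreement:.0%}")
print(f"Cohen's kappa: {cohens_kappa(human_codes, model_codes):.2f}")
```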
 
What we’ve learned
 
We have five key learnings from starting to apply Ipsos Facto to social research:
  1. Start with the basics – data security and compliance. Always ensure the platform is safe and not accessed by third parties.
  2. Collaborate with as many colleagues as possible. Generative AI is for people to use, so it can’t be built in isolation.
  3. Build a platform that can be updated flexibly. Generative AI technology is changing week by week, so the platform will never be finished.
  4. Think carefully about your prompts to avoid the LLMs ‘hallucinating’ (i.e. creating nonsensical or inaccurate outputs). The outputs are only as strong as the inputs.
  5. Quality assure your results, ideally conducting side-by-side comparisons and spot checks (a simple example follows this list). It takes expertise and experience to be able to spot instinctively where something is not quite right.
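
To make learnings 4 and 5 concrete, here is a minimal sketch of one cheap spot check: verifying that quotes in an AI-generated summary actually appear verbatim in the source transcript, a simple guard against hallucinated quotations. The data and function names are hypothetical.

```python
# Illustrative sketch only: flag quoted passages in a summary that cannot be
# found verbatim in the source transcript, so a researcher can check them.
import re


def extract_quotes(summary: str) -> list[str]:
    """Pull out double-quoted passages from a summary."""
    return re.findall(r'"([^"]+)"', summary)


def spot_check_quotes(summary: str, transcript: str) -> dict[str, bool]:
    """Return each quoted passage and whether it appears verbatim in the source."""
    normalised = " ".join(transcript.split()).lower()
    return {
        quote: " ".join(quote.split()).lower() in normalised
        for quote in extract_quotes(summary)
    }


transcript = "I just couldn't get an appointment. The phone lines were always busy."
summary = 'One participant "couldn\'t get an appointment" and said they "waited three weeks".'

for quote, found in spot_check_quotes(summary, transcript).items():
    flag = "OK" if found else "CHECK MANUALLY"
    print(f'{flag}: "{quote}"')
```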

Sharing our learning

Alongside the creation of our Ipsos Facto product, we've been tackling the really hard questions around ethics, bias and quality - such as those explored in our paper on Responsible AI. For example, our early discussions on bias are complex. On the one hand, the data LLMs are trained on contains inherent biases, different LLMs may hold different sets of biases, and sometimes these biases are overcorrected. On the other hand, social researchers are aware that we hold our own biases, and LLMs have the potential to challenge them – becoming a tool for reflexivity. This is just one example of the complexity facing our industry. 
I hope the story of Ipsos Facto has shown you how generative AI can be responsibly harnessed for the future of social research. We want to continue this conversation and bring focus to important issues around ethics, bias and quality. While the area is too new to provide definitive answers, we hope that sharing our experiences and learnings from using Ipsos Facto will help foster a nuanced understanding of these complex issues within the social research community. We look forward to sharing more insights and lessons learned, and to creating a space for meaningful discussion. Stay tuned for our upcoming posts in this series, and don't hesitate to reach out to us with your thoughts or questions.

Author Bios: 

Ed Allen is an Associate Director at Ipsos UK. He is Head of the Innovation Network in Public Affairs, which sources, tests and introduces new research and evaluation methods, ideas, tools and technologies to clients in the public sector and to colleagues across Ipsos.
Email: [email protected]