Handling requests for interim findings during an evaluation

25/04/24

In this blog, Rebecca Gossage-Worrall explores the tension between commissioners' need to receive interim reports and findings and the methodological challenges of producing these outputs during an evaluation, and suggests some strategies for both quantitative and qualitative research practitioners.

Research and evaluation help us to understand, measure and explore the impact of new or revised practice and interventions. Evaluators at NHS Arden & GEM CSU deliver a range of evaluations for NHS organisations, from trusts and Integrated Care Boards to networks of GP practices. We also conduct projects for research funders and other public sector bodies, and work in partnership with private sector organisations.

The pressures on capacity and demand in health services are well documented. Commissioners often need to make decisions quickly. In this blog, we explore strategies for handling requests for interim research and evaluation findings from commissioners and the methodological challenges of different types of requests.

Types of interim reporting

It is important to consider the needs of commissioners of research and evaluation when they require interim reports. These generally fall into two main types:
- Type A: Monitoring progress and performance against a specification – often managed through standard project management reporting, e.g. a highlight report, which includes a summary of upcoming milestones, activity completed and activity planned for the next reporting period, and issues and risks. These pose fewer methodological challenges than Type B.
- Type B: Interim results to provide an ‘early look’ at the findings of an evaluation. This is understandable: commissioners are often working to tight timelines, where work is commissioned in-year and outputs will inform guidance and/or policy for the next financial year. Anyone who works in or closely with the NHS understands the pressure of delivering in-year whilst also being in a position to plan for the next.
 
There are challenges in producing interim reports from both quantitative and qualitative data perspectives (although these are not always mutually exclusive within an evaluation design). Below, I suggest some strategies to support practitioners.

Quantitative data

Some types of quantitative data may be readily available for producing an interim analysis report. This is because:
1) established activity data sets are being used, e.g. Secondary Uses Service (SUS) data; or
2) a bespoke data specification has been structured at the outset of the collection period, meaning analysis processes/pipelines can be developed early on (see the sketch below).
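To illustrate the second point, here is a minimal sketch (in Python, with hypothetical field names and file paths) of a reusable analysis pipeline. Because the transformation is written once against the agreed data specification, the same code can be run on an interim extract now and the final extract later, keeping the extra analytical cost of interim reporting low.

```python
import pandas as pd

def summarise_activity(extract_path: str) -> pd.DataFrame:
    """Reusable pipeline built against an agreed data specification.

    The same function can be run on interim and final extracts, so
    interim reporting adds little additional analytical work.
    """
    df = pd.read_csv(extract_path, parse_dates=["activity_date"])
    # Apply the data-quality rules agreed in the specification
    df = df.dropna(subset=["patient_id"]).drop_duplicates()
    # Summarise monthly activity by provider
    return (
        df.assign(month=df["activity_date"].dt.to_period("M"))
          .groupby(["provider", "month"], as_index=False)
          .agg(contacts=("patient_id", "count"))
    )

# The same pipeline serves both reporting points:
# interim = summarise_activity("extract_interim.csv")
# final   = summarise_activity("extract_full.csv")
```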

Existing data sets in health are widely used by, or familiar to, clinicians, NHS managers and commissioners. Similarly, analysts know the data's limitations and can often code known issues into routine analysis and more easily provide caveats as part of dashboards/reports.

However, it is more difficult when a study involves sampling and power calculations. More broadly, there is an issue of timespan when reporting outcomes: the interim sample may be too small to detect an effect, and there may also be a data lag in the desired outcomes, e.g. readmissions recorded through SUS might not be observed until beyond the interim period. This is a problem with outcomes evaluation generally - not just in interim reporting - as the really important outcomes we want to address are often long term.
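A simple power calculation shows why an early look can mislead. The figures below are hypothetical; this is a minimal sketch using statsmodels:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical design: detect a standardised effect size of 0.3
# with 80% power at a 5% significance level.
n_per_arm = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05)
print(f"Required sample per arm: {n_per_arm:.0f}")  # approx. 176

# Power if the interim report draws on only 40% of that sample.
interim_power = analysis.power(effect_size=0.3, nobs1=0.4 * n_per_arm,
                               alpha=0.05)
print(f"Power at 40% recruitment: {interim_power:.2f}")  # approx. 0.43
```

At that point, a null interim result says more about the sample size than about the intervention.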

Randomised controlled trials (RCTs) can include interim analysis, but this is usually to measure progress against recruitment and retention targets, which may affect the sample size and the feasibility of continuing the study. Any planned analysis and expected recruitment/retention targets should be specified in the protocol with appropriate clinical and statistical reasoning, and with controls in place to maintain blinding. With the exception of independent data monitoring and oversight, this activity generally falls into Type A.

For trend analysis, there are best practice guidelines for identifying statistically significant changes in trend, for example Statistical Process Control (SPC), which is used regularly as part of improvement and transformation projects across health and social care.
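As an illustration of the SPC approach, the sketch below (Python, with made-up monthly counts) computes the centre line and control limits for an individuals (XmR) chart, a chart type commonly used in NHS improvement work. Points outside the limits signal special-cause variation rather than routine noise.

```python
import numpy as np

def xmr_limits(values):
    """Centre line and control limits for an individuals (XmR) chart.

    Limits are the mean +/- 2.66 times the mean moving range,
    the standard constant for individuals charts.
    """
    values = np.asarray(values, dtype=float)
    centre = values.mean()
    mean_moving_range = np.abs(np.diff(values)).mean()
    return (centre,
            centre - 2.66 * mean_moving_range,
            centre + 2.66 * mean_moving_range)

# Hypothetical monthly referral counts
referrals = [112, 108, 120, 115, 109, 118, 131, 160, 138, 145]
centre, lower, upper = xmr_limits(referrals)
signals = [x for x in referrals if x < lower or x > upper]
print(f"Centre {centre:.1f}, limits ({lower:.1f}, {upper:.1f})")
print("Special-cause signals:", signals)  # flags the 160
```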

Managing seasonality is particularly important within the NHS, and is a key consideration if interim analysis and reporting cover a short timespan.
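For instance, here is a minimal sketch (Python, with synthetic monthly data) of removing a seasonal component before judging a trend. Note that seasonal decomposition needs at least two full annual cycles of data, which a short interim window often cannot supply:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly activity: a gentle upward trend plus a winter peak.
rng = np.random.default_rng(42)
months = pd.date_range("2021-04-01", periods=36, freq="MS")
seasonal_effect = 20 * np.cos(2 * np.pi * months.month / 12)  # peaks in December
activity = pd.Series(
    500 + 2 * np.arange(36) + seasonal_effect + rng.normal(0, 5, 36),
    index=months,
)

# Decomposition requires at least two full cycles (24 monthly points);
# a six-month interim window cannot separate season from trend.
result = seasonal_decompose(activity, model="additive", period=12)
deseasonalised = activity - result.seasonal
print(deseasonalised.tail(3).round(1))
```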

Additionally, control groups, or other ways of understanding the contextual factors that might be driving observed trends, matter for evaluation generally but are especially important when producing interim reports.

Qualitative data

Collecting and analysing qualitative data simultaneously while an evaluation is ongoing (outside of an approach like grounded theory) can attract criticism from researchers with strong positivist views of the way research should be undertaken.

I have conducted many qualitative studies, often as part of mixed methods evaluations, from RCTs to service evaluations. I cannot deny that I do start to analyse data, if only in my head (!), whilst still conducting interviews or facilitating focus groups.

However, completing a proportion of interviews, say 30-40% of the target sample, and then analysing and theming that data to a stage where it can be presented to a customer is challenging practically and, in some cases, methodologically:

  • If more than one researcher is conducting data collection, each interviewer may have a bias towards (or at least greater familiarity with) the data they have collected.
  • There may be limited time to summarise or transcribe interviews and read and re-read transcripts, let alone code and start to consider emerging themes from the data collected to date.
  • Interviews at that point are highly unlikely to have reached data saturation – arguably the litmus test of whether data collection is complete from a qualitative perspective – and risk underdeveloped themes due to a lack of depth in the analysis.

These are key steps within a robust qualitative evaluation so it can make evaluators nervous to canter through, or possibly skip altogether, this often well-trodden path.

However, pragmatic evaluations are vital to translate effective interventions into practice. A three-year RCT, whilst important and often considered the ‘gold standard’ of evidence for interventions, can take several more years to publish and translate into guidance and practice. Hence, efforts to find new methods for conducting robust studies more quickly are ongoing, including NIHR funding for novel research designs and consideration of rapid qualitative evaluation.

Specific approaches such as (participatory) action research, often used in healthcare (and a range of other settings) to improve practice and enhance quality of care, are designed to consider data earlier in the research and evaluation process. These approaches use a systematic, collaborative, continuous improvement approach to consider problems and generate solutions in specific contexts. Change management methods such as Plan-Do-Study-Act cycles take a similar approach.

Early findings can bring advantages. They can inform whether the evaluation or research is examining the right question or point to additional information required. To an extent, research and evaluation can accommodate exploration of closely related questions within the overall scope. For example, semi-structured interview topic guides are just that, with questions and prompts/probes often reviewed after a small number of interviews. 

Some methods are more conducive to changes in focus than others; however, significant shifts in the focus of an evaluation would require review of the entire project including methodology, methods, governance and project management and ultimately require a formal change to the contract between parties. 

Strategies for practitioners

In specifying a requirement to provide interim/early findings, commissioners of research and evaluation take some responsibility for the limitations presented by simultaneous collection and analysis of data and the partial presentation of results.

However, it is also the responsibility of evaluators to consider these in their design decisions and when presenting any interim findings. This is especially important if decisions are being made based on partial data and partial analyses.

There is a balance to be found between meeting customer requirements and maintaining quality and credibility of findings. Releasing findings too early can have unintended consequences. 

Researchers/evaluators should consider:

  • The type of interim report that is being requested. This should be explicitly stated in specifications. It is best not to assume what type of report is required, given the time, budget and methodological implications of interim analysis and reporting.
  • The additional resources required to produce one or more interim reports of findings during an evaluation. The time needed to partially analyse data and present it in a format suitable for the intended audience should not be underestimated - not just in terms of readability, but also the appropriate caveats and the level of quality assurance required before a customer receives it.
  • Selecting methods that are more appropriate for producing interim/early findings, especially if this type of output is particularly important to the customer.
  • The timing of reports. Are they aligned to business needs and to the timespan of data collection/availability, and how do they relate to the planned stage(s) of the work? Review timings if the project is delayed and communicate the implications to the customer.
  • Practical strategies that can mitigate the risk of interim/early findings being considered out of context, e.g. greater weight being applied to partial analysis/findings. These strategies can include:

 - circulating early findings with a ‘draft’ (or similar) watermark.
 - protecting documents from being edited and/or printed.
 - being clear about the number of participants/data points included and the proportion this makes up of the final sample/total to be included in the full evaluation. 
 - stating any caveats about the robustness of data included in the analysis.

There will be different views on the appropriateness of interim reports of findings, and additional strategies that you have found through your practice. These are likely to vary across the sectors represented by the SRA community, so get in touch if you have any thoughts or additional strategies.

Author Bio: Rebecca Gossage-Worrall is a sociologist with experience in mixed methods health services research, currently working at NHS Arden & GEM CSU (AGCSU) leading national evaluations. AGCSU provides system support across health and care services and integrated commissioning. Whilst working as an academic researcher, Rebecca managed NIHR-funded randomised controlled trials (RCTs) of complex interventions and conducted mixed methods process evaluations alongside RCTs. Rebecca has also worked in local government as a housing research officer, commissioning and conducting housing and accommodation-related research to support place-based and local strategy development. Rebecca is a member of the British Sociological Association and the UK Evaluation Society.


Acknowledgements: Sheena Dungey (Senior Health Economist) and Adrian Pratt (Analytics Lead) at NHS AGCSU contributed to the points raised in the quantitative aspects of this post.