The measuring change conundrum


How can pragmatic evaluators working in pressurised, outcomes-focused public services come to terms with complexity theory? In this post Leanne Teichner (Social Researcher, Data Cymru) writes about the challenge Welsh public service practitioners face in fully understanding all facets of well-being, and the implications for evidencing change and evaluating interventions. Leanne details the tensions between traditional, prediction-based, quantitative evaluation approaches and approaches that acknowledge the complex and unpredictable systems and landscapes in which change occurs. Part 2, coming soon, offers a reconciling position and some practical tips for evaluation delivery.

Evaluating well-being 

Like other parts of the UK, Welsh public services are well aware of the monumental task they face: to make people’s and communities’ lives better by scrutinising and tackling all the factors that contribute to well-being and happiness. These include: physical and mental health; employment and income; social connections and networks; the local environment; and Adverse Childhood Experiences (ACEs). In Wales this has been given extra impetus in recent years following the introduction of the Well-being of Future Generations (Wales) Act 2015 (“the Act”). The Act places specific duties on public bodies to address well-being and the challenges our communities face.

Change measurement or performance management?

In a recent Data Cymru blog, Duncan Mackenzie describes how, until very recently, we have attempted to navigate all the things that contribute to well-being through performance management and monitoring. We report how many courses we run, the number of peer support groups set up, footfall at events, and so on. While this information has its place, we have been unable to fully convey the change and impact resulting from these efforts. We have used evaluation to show our hard work and competency, but not to comprehensively demonstrate what it means for people’s long-term quality of life. How well and happy are people as a result of our well-planned interventions?

Conflicting standpoints

Suggestions on how to go about measuring change do exist, but there is a lack of consensus, which can be attributed to distinct evaluation approaches and theoretical positions. Very broadly, they can be divided into two standpoints:

1. Orthodox approach: plans for and attempts to predict change (Cairney and Geyer, 2017; El-Ghalayini, 2017; Lowe and Wilson, 2015)
2. Complexity approach: observes and embraces the complex and uncontrollable world and systems we are working in (El-Ghalayini, 2017; Knight et al., 2017; Lowe and Wilson, 2015)

The first of these approaches, largely advocated in local government and public services, identifies the “wicked” issues and clearly articulates the precise change that needs to happen, quantifying both the need and how it can be improved. This provides clarity and direction. Existing data (usually quantitative) is used to set the scene and establish baseline measures, and we then rely on this information to set “achievable” and credible outcome targets. In Wales we are seeing an increasing use of logic models and Theories of Change to demonstrate that specific contributions, key actions and efforts are planned to make progress toward well-being. Using these tools allows us to address the risky assumption that “our” interventions alone are responsible for change.

The second of these standpoints is the complexity approach, which finds many problems with the first. Complexity theory was born out of multiple philosophies and social theories, and its overarching position is that we live and work in a complex world, with systems that are not:

a) Simple, with one-dimensional issues requiring one-dimensional interventions and solutions. For example, obesity is caused by a poor diet and can therefore be improved by improving people’s eating. Nor are systems:
b) Complicated, where numerous causes need addressing to tackle one issue. For example, obesity is caused by poor diet, poor attitude and mental health, and poor fitness levels, and can therefore be solved by improving each one of these things (Clark, 2017).

Rather, complexity theorists recognise that an issue like obesity is influenced by well-recognised but also subtle, unseen, unplanned and ever-changing circumstances. A person who is progressing well on a weight management programme may experience an unexpected relationship breakdown, which can scupper the intervention’s attempts to manage their comfort eating. The same person may also develop a new health condition that makes it impossible for them to exercise.

Complexity theory is also sensitive to context: an obesity intervention in a small Welsh village may be a roaring success, but placed in a large Welsh city it may fail miserably for poorly understood reasons, even with identical delivery methods, resources and staff expertise.

Complexity theorists have been known to make the bold statement that “outcomes are not delivered by organisations” (Lowe, 2019), adding that it is not a reasonable expectation that they should be.

Where does this leave us?

Complexity theory rejects change prediction on the grounds that prediction is impossible. It also questions the relevance of quantifying change when unrecognised and uncountable factors affect it. Some complexity theorists raise eyebrows at the use of logic models, arguing that multiple, unplanned and unseen factors can support or hinder progress and change.

Many complexity theorists suggest an alternative evaluation approach in which emergent and unplanned narratives can be included in the analysis of well-being progress. This is where research methods such as narrative interviewing, creative methods and Most Significant Change find their place in evaluation: they allow intricate narratives and contexts to be revealed and explored, deepening insight into why an intervention has failed or succeeded.

But there is an elephant in the room for public practice that this theoretical stance leaves unaddressed, which takes us back to the initial challenge I highlighted: we need to credibly measure the difference public interventions make. If we accept that we cannot predict, control or measure well-being improvements, how can we reliably measure the impact of our interventions? If our role is to “see what happens” and try to understand why, within complex and unpredictable social systems, do public sector efforts to influence and measure well-being become obsolete? How can those delivering public service interventions come to terms with complexity theorist arguments which, taken literally, question the value and effectiveness of their efforts? In my next post I will put forward an argument that could help the pragmatists among us come to terms with complexity theory in a pressured, outcome-focused delivery environment.

AUTHOR: Leanne is a social researcher at Data Cymru, where her role is to spotlight effective public engagement and social research within public service delivery. She is currently specialising in evaluation and journey planning and is managing a number of client projects. Leanne has a background in community development and co-production, and has focused her qualitative research skills in voluntary and public service environments. Leanne has an MSc in Research Methods and Education from Cardiff University, and has an academic and work history in equality and diversity. She is also a qualified career coach and adult educator. CONTACT: Leanne.Teichner@data.cymru

References
Cairney, P. and Geyer, R. (2017). “A Critical Discussion of Complexity Theory: How does ‘Complexity Thinking’ improve our Understanding of Politics and Policymaking?” Complexity, Governance & Networks, Vol. 3, No. 2, pp. 1-11.
El-Ghalayini, Y. (2017). “Complexity Theory: A New Way to Look at New Public Management”. Network and Complex Systems, Vol. 7, No. 1.
Knight, A. D., Lowe, T., Brossard, M. and Wilson, J. (2017). “A Whole New World: Funding and Commissioning in Complexity”. Collaborate for Social Change.
Lowe, T. and Wilson, R. (2015). “Playing the game of Outcomes-Based Performance Management. Is gamesmanship inevitable? Evidence from theory and practice”. Social Policy and Administration, Vol. 51, No. 7.