We’ve all been here…
In a review meeting, staring at charts and traffic-light dashboards. On paper, it looks like success. But someone is silently wondering: “Did this actually make a difference?”
It’s a familiar moment. For me, it sets questions spinning: have we measured the right thing? Have we measured everything? We tend to frantically explore the numbers to reassure ourselves, while quietly wondering whether they can ever really tell the whole story of the impact we know we’re having.
Measuring What Matters
In the social good space, we’re rightly encouraged to measure impact. But impact is complex and often subjective; what counts as ‘impactful’ really depends on who is asking. It’s a challenge I have worked on regularly within the Data for Children Collaborative, as the programme reports to a wide range of partners, each with their own view of what ‘impact’ truly is. It’s always good practice to ask ourselves:
“Are we measuring the right things?”
But, as my own learning evolves, I find myself asking:
“Are we giving enough space to what can’t be neatly measured?”
We often default to what is countable: the number of participants, the level of income, the volume of opportunities. These are useful. Quantified change is even better: an increase in attendance, a shift in income streams, a growing diversity of opportunities. But even these don’t always capture the deeper, messier outcomes that matter to people. Things like:
· Does this feel different from before?
· Do people feel safer, more hopeful, more in control?
· Has the tone of the conversation shifted?
These are real impacts. But they rarely appear on a dashboard.
For a few years, my team at the Data for Children Collaborative has had the pleasure of working closely with The Promise Scotland, helping them establish the “What Matters” questions. These human-centred questions and composite stories marked a shift towards measuring experiential change. Our role was to use the questions as a framework for mapping complex data about those affected by the care system. The Promise Scotland continues to do amazing work in this space, including The Promise Story of Progress, and I encourage you to explore it.
One example that particularly stands out from their work is Isla’s story, told twice: once reflecting a composite reality, and again as the story she should be able to tell. The contrast becomes a powerful measure of impact: are we now telling the second story? If so, we’ve had the intended impact. No dashboards. Just a new story, and real impact.
The Anxiety of the Unmeasurable
As professionals, we often feel a certain anxiety when facing something we can’t quantify. There’s a pressure — internal and external — to show our work. To back it up with data. To produce ‘evidence’ that can be benchmarked, reported, or audited.
I recently listened to a great podcast on procrastination. It explored the tension between measurement and output, especially in knowledge-based work: how do you measure an output when the basis of the work is intangible knowledge? The hosts argued that, by its very nature, this kind of work creates anxiety. How do you measure progress when there isn’t a clear yardstick? In that situation, we often default to the easiest metric: time. Unfortunately, in our rush to ease this professional anxiety, we can lose the essence of what we’re really trying to measure.
And yet, some of the most meaningful changes I’ve witnessed resist measurement. They live in tone, in trust, in a subtle shift in how people talk about a problem. Sometimes, the real evidence is someone saying, “This is the first time I’ve thought about it that way”.
Trying to turn that into a metric can feel like squeezing something human into a spreadsheet cell.
When Measurement Becomes a Distraction
There’s also a risk of making things more complex than they need to be. I’ve seen projects and programmes build elaborate evaluation frameworks only to lose sight of what they were trying to achieve in the first place. And to be honest, I’ve been guilty of building some of them myself! Measurement should support impact, not overshadow it. It’s meant to back the story, not replace it, providing the numbers behind the story where feasible.
I’m a big fan of using a Theory of Change as a tool for aligning strategy with impact evaluation, emphasising the components that steer towards an agreed impact. Once we have agreed a series of pathways, we can determine how to measure progress along each one. But I often look at a Theory of Change and wonder: does this framework give equal value to unmeasurable change? Ultimately, a Theory of Change should provide the building blocks of a great story: the beginning (inputs), the middle (outputs) and the end (outcomes).
A Different Kind of Evidence
What if we recognised that evidence can be relational, emotional, even intuitive? What if “it feels different now” was not only valid, but essential?
That doesn’t mean abandoning rigour. I love data and the insights it can bring. It does mean, however, broadening what we count as impact, and acknowledging when traditional metrics fall short.
At the Edinburgh Futures Institute, we are exploring the concept of ‘recognition events’, which essentially flips evaluation on its head: it starts with the asserted future (the outcome) and works back to the present, to get a measure of the drivers enabling change. I’ll delve deeper into my experience with this concept in a future article. But…it’s fascinating.
So here’s a small challenge…
Next time you’re in that project review meeting, ask: “What has changed in how this feels?” Not instead of your traditional metrics, but alongside them.
Because sometimes the clearest impact isn’t in numbers — it’s in a shift that’s hard to describe, but impossible to ignore.