The Challenge of Judging Evidence Over Outcomes

On Monday 10 June a fairly unusual awards ceremony took place. Ten youth projects, all working across the capital, were awarded up to £5,000, not in recognition of the work they do with young Londoners but specifically in recognition of the way they tried to assemble and use evidence about the impact they have on their service users' lives.

So whether a project had been a roaring success or otherwise, it was the evidence we were interested in. I was one of the judges for the Project Oracle Evidence Competition, a role that was a little tricky and time-absorbing, but also very uplifting. As chief executive of NPC I am used to working in an organisation where evidence of impact - both quantitative and qualitative - is held to be paramount, so it was great to see so many other organisations dipping their toes in the water.

The point of Project Oracle is to get people to move up the evaluation "ladder". As a first step, for example, an organisation might start thinking about what it is trying to do and what a good outcome would look like. Next, it considers how to collect evidence and analyse it, which then leads to questions about how this evidence can be used, both to inform further research and to improve services. Towards the top end of the ladder sit more in-depth approaches to evaluation, such as control groups and randomised controlled trials (RCTs), which may be worth considering over time, depending on an organisation's size and resources.

So how does one judge such an evidence competition? We are so used to looking at the results of something that it is quite a challenge to take a step back and review what actually substantiates the outcomes. Perhaps counter-intuitively, we were not specifically looking for projects that had been hugely successful, as it can be just as interesting to know why a project has not succeeded. So I was looking for good evaluations of work, or in some cases good plans for evaluations yet to take place.

What we wanted to see was transparent reporting: honesty about the methods and processes and the thinking behind them, as well as evidence of what worked, what didn't, and how we know this. There were over 50 entries competing across three award categories, all from organisations working to improve the life chances of children and young people, and one thing that stood out to me was how differently people interpreted the idea of evidence. Many of the entries 'talked the talk', but I was always keen to see how they backed this up: what data they could present to support their claims, or what plans they had to evaluate effectively. In the end the judges chose to reward 10 organisations across the three categories: winners included larger charities such as St Giles Trust as well as much smaller projects, such as Street Doctors, which teaches high-risk young people the skills to deliver life-saving first aid at the scene of a stabbing or collapse.

But a competition such as this is only the start. The winning organisations have all shown what they can do when it comes to measurement, but they must now build on this and improve. Moving forward, I really hope that other groups will be inspired by how thoroughly these organisations are approaching their measurement and evaluation work, and start adopting similar approaches themselves. Here at NPC, where we help charities of all types understand and improve their impact, we'll be keeping a keen eye on this; it is vital work that can really help transform the sector.
