On 25th October 2018 I had the pleasure of being asked to join a panel to discuss evaluation in the context of arts work with children and young people in challenging circumstances at the New Horizon’s Conference, hosted by The Garage in Norwich. The whole day was planned to support discussion, debate and, importantly, opportunities to learn. I was joined on the panel by the inspirational Deborah Bullivant, Founder of Grimm & Co; Dr. Matthew Hill, Head of Research and Learning and Deputy Director for the Centre for Youth Impact; and our excellent panel Chair, Lucy Marder, Strategic Manager at ArtsWork.
Here is an outline of what I talked about in the session.
How do we best evaluate the impact of work by and with children in challenging circumstances?
For me, good evaluation is underpinned by three core principles.
Firstly, good evaluation is collaborative.
Good evaluation has stakeholders at the heart of the process, not just as people to fill in forms but as people who help plan the methodology, set the questions that matter most to them and reach out to their networks where appropriate. Evaluation is concerned with making judgements, but we need to think carefully about who makes those judgements about what works. Our most vulnerable groups are most likely to be the least powerful, so why would we, as evaluators, add to that powerlessness by assuming that, as the experts, we have all the answers about how an evaluation should work for them? I’m not suggesting this is always easy. In one project I evaluated, we started off with a participatory action research project at the centre of the evaluation, but needed to move away from this approach to an engaged but researcher-led evaluation after the first year. This was a result of funders requiring larger sample sizes than a truly participatory evaluation could achieve, and of restrictions on how much I could direct the research, let alone the community researchers. This became quite an ethical dilemma for me. Why were we asking volunteers to give up their time to work on a well-funded project that they had very little real power to influence? Don’t get me wrong, the funders and project were very keen to help people shape the programme, and we were pragmatic in our solutions, but ultimately the systems of ‘evaluation’ got in the way of stakeholders being able to direct and genuinely influence the evaluation.
Secondly, good evaluation is embedded deep in the activity of the project.
So often evaluation is thought of as something tacked onto the end of a project. An afterthought. Or something that a project pulls out annually in the form of interim reports. For me, good evaluation is stealthy. It isn’t an add-on, but is embedded throughout the whole cycle of planning, listening and learning, analysing and sharing. It is about evaluation activities being immersed in the day-to-day activity. In my experience, artists are some of the best people to run effective evaluations, because artists don’t do boring. And once given permission to throw away their evaluation forms, they go to town using creative activities to help stakeholders reflect on impact in exciting and engaging ways. Which brings me to my last principle.
Good evaluation is creative and engaging.
Who says an evaluation has to be a long list of questions asked at the end of a session? Who says an evaluation has to be written up by an evaluator into a report that gathers dust on a shelf for five years until the next office move, when it is thrown in the bin? In my experience, creative evaluation tools can encourage people to share their thoughts. In creative projects we have limitless ways of both gathering and disseminating our evaluation findings. And quite often the people best placed to decide what those should be are the stakeholders I talked about in principle number one.
I think it is important to note that we shouldn’t beat ourselves up if we can’t always achieve this balance. In some cases, as the examples I have just mentioned illustrate, I took the decision to focus more on creative engagement than on the full participation of community researchers. At other times I have been asked to evaluate a project three months before it is due to end. And that is OK. I’m also not denying the importance of data and of clear and consistent evaluation frameworks. But these three principles help us reflect on our work: Whose voice are we hearing? How well does the evaluation fit inside the project itself? And will our evaluation be a passive observation of practice, or will it play a part in deliberately addressing some of the power inequalities experienced by the young people we work with?