Qualitative evaluations: methods, data and analysis
Evaluating programmes and projects is an essential part of the feedback loop that should lead to better services. In fact, programmes should be designed with evaluations in mind, to make sure that there are defined and measurable outcomes.
While most evaluations include some numerical analysis, qualitative data is often used alongside the quantitative to show the richness of a project's impact and to put a human voice into the process. Especially when a project doesn't meet its targets, or doesn't have the desired level of impact, comments from project managers and service users usually give the best insight into what went wrong (or right) and why.
For smaller pilot and feasibility projects, qualitative data is often the mainstay of the evaluation, either because the numbers are too small to support statistical analysis or because it is too early in a programme to measure the intended impact. For example, a programme aiming to reduce obesity might not be able to demonstrate a fall in diabetes referrals at first, but qualitative insight in the first few months or year of the project might show how well its messages are being received, or whether the targeted groups are talking about changing their behaviour. When goals like this are long term (and in public health and community interventions they often are), it's important to continuously assess the precursors to impact, namely engagement, and this is usually best done qualitatively.
So, what is best practice for qualitative evaluations? Fortunately, there are some really good guides and overviews that can help teams choose the right qualitative approach. Vaterlaus and Higginbotham give a great overview of qualitative evaluation methods, while Professor Frank Vanclay talks at a wider level about qualitative evaluations and innovative ways to capture stories. There is also a nice ‘tick-box’ style guide produced by the old Public Health Resource Unit, which can still be found at this link. Essentially, the tool suggests 10 questions that can be used to assess the quality of a qualitative evaluation, which makes it really useful when looking at evaluations that come from other fields or departments.
But my contention is that the appraisal tool above is best used as a guide for producing qualitative evaluations. If you start by considering the best approach, deciding how you are going to demonstrate rigour, and choosing appropriate methods and recruitment, you'll get a better report at the end of it. I'd like to discuss and expand on some of the questions used to assess the rigour of the qualitative work, because rigour is something that often worries people about qualitative research, and these steps help demystify good practice.
- The process: Start by planning the whole evaluation from the outset: What do you plan to do? All the rest will then fall into place.
- The research questions: what are they, and why were they chosen? Are the questions going to give the evaluation the data it needs, and will the methods capture that data properly?
- Recruitment: who did you choose, and why? Who didn’t take part, and how did you find people? What gaps are there likely to be in representing the target group, and how can you compensate for this? Were there any ethical considerations, how was consent gained, and what was the relationship between the participants and the researcher(s)? Did they have any reason to be biased or not truthful?
- The data: how did you know that enough had been collected? (Usually when you start hearing the same things over and over, which is known as saturation; a rough sketch of tracking this is given after this list.) How was it recorded and transcribed, and was it of good quality? Were people willing to give detailed answers?
- Analysis: make sure you describe how it was done and what techniques were used (such as discourse or thematic analysis). How did the report choose which quotes to reproduce, and are any contradictions in the data reported? What was the role of the researcher: should they declare a bias, and were multiple views sought in interpreting the data?
- Findings: do they meet the aims and research questions? If not, what needs to be done next time? Are there clear findings and action points, appropriate to improving the project?
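To make the idea of saturation in the data step above a little more concrete, here is a minimal, hypothetical sketch in Python of one rough way to track it: count how many new analysis codes each successive interview adds, and watch for the point where that number drops to zero and stays there. The transcripts and codes are made-up illustrations, not output from any standard tool, and in real qualitative work the judgement is never purely a count.

```python
# Hypothetical example: tracking code saturation across interviews.
# 'coded_transcripts' maps each interview to the set of analysis codes
# assigned to it; all names and codes below are illustrative only.

coded_transcripts = {
    "interview_01": {"access_barriers", "staff_attitudes", "cost"},
    "interview_02": {"access_barriers", "family_support", "cost"},
    "interview_03": {"staff_attitudes", "family_support"},
    "interview_04": {"access_barriers", "cost"},
    "interview_05": {"family_support", "staff_attitudes"},
}

seen_codes = set()
for name, codes in coded_transcripts.items():
    new_codes = codes - seen_codes          # codes not heard in earlier interviews
    seen_codes |= codes
    print(f"{name}: {len(new_codes)} new code(s) {sorted(new_codes)}")

# A run of interviews adding no new codes is one (rough) signal of saturation;
# the final call should still rest on the researcher's judgement.
```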
Then the final step for me is the most important of all: SHARE! Don't let it end up on a dusty shelf! Evaluations are usually seen as a tedious but necessary internal process, but they can be so useful as case studies and learning tools in organisations and groups you might never have thought of. This is especially true if things went wrong: help someone in another local authority avoid making the same mistakes!
At the moment the best UK repositories of evaluations are based around health and economic benefits, but that doesn't stop you putting the report on your organisation's website; if someone is looking for a similar project, search engines will do the legwork for you. That evaluation might save someone a lot of time and money. And it goes without saying: look for any similar work before you start a project. You might get some good ideas, and save yourself from falling into the same pitfalls!