By Dr Ann O’Sullivan
I attended this AHRC workshop on Wednesday 11th June to discuss some of the problems involved in evaluating the value of arts and cultural activities. There were discussions around improving how we conceptualise cultural value and related methodologies. For the project we are currently undertaking, we have used a number of tools from the anthropologist’s kit, including focus groups, qualitative interviewing and the use of film, alongside the working group meetings. The main issues to emerge from the workshop were:
- The type of research process we have been undertaking is difficult, if not impossible, for others to replicate at a future date. Questions of reliability and validity therefore emerge, as our research can be dismissed as purely anecdotal by those who do not understand our methods.
- This creates difficulties within the current funding and appraisal frameworks and feeds into our work on how best to equip artists and cultural practitioners with suitable materials and techniques to represent the cultural value of their work. Basically, the arts practitioner faces the same dilemma as the ethnographer: the work will always represent the perspective of the researcher or the artist, and we can never attain the longed-for objective God’s-eye view of the world.
- Using a range of methods and an interdisciplinary focus was seen as essential, and this is a key strength of our study.
- Quantification is important in the public/funding domain, but it needs to be underpinned by nuanced, quantifiable measures gathered through a qualitative engagement with the phenomena.
- We also need to challenge the resistance of academic and public audiences to personal and emotional responses. Basically, qualitative researchers need to stop apologising for the richness of our data.
- The notion of art as a catalyst for transformation in both artist and audience. This is something that we have been exploring in our own study and may become a key theme. It is matched by what some members of the discussion described as a move towards more socially engaged methods of data gathering.
- The traditional output of a research project might be a paper research report; the group suggested experimenting with video and other visual modes that might suit the creative sensibilities of practitioners.
- The group reported that, for artists and arts organisations, the methods employed to examine the ‘value’ of their work, such as exit questionnaires and online surveys, are very limited in the quality and quantity of information they offer. They fail to offer any significant way of understanding the sensory, experiential and affective dimensions of the audience experience.
- There is a mismatch between the rich, singular and nuanced stories that we work with as researchers and arts practitioners and the culture of evidence required by funders.
There was a long and fruitful debate regarding how we replicate and validate our work to ensure its rigour. It was clear that all members of the discussion felt that ethnographic/qualitative research methods need their own conception of validity and reliability. Another research team could not replicate what we have undertaken over the past months in conducting individual interviews and focus groups. We did not conduct a randomised controlled trial in which all variables were under our control, but there is evidence from ‘guinea pig narratives’ that randomised controlled trials may not meet these criteria either.

There was also a discussion of such pseudo-scientific concepts as inter-rater reliability rates. This is a technique whereby a group of coders independently code a transcript using a standardised coding frame, and a measure is taken of how often they agree (a sketch of the arithmetic involved is given at the end of this post). Given that the coders all belong to the same community of language users, it comes as no surprise that quite high levels of agreement are reached on the content or themes in qualitative data. The reason we can all communicate with each other in nuanced ways on a day-to-day basis is that we have all reached a consensus on what we actually mean when we say something. This suggests that the validity or reliability of qualitative data is merely a consensus among a community of language users as to what a ‘thing’ actually means, and not some ‘statistic’ that is true for all time. Who knows, the same may be true of quantitative data, although the positivists amongst you may not like this idea. What is validity or rigour but something that has been agreed upon through the use of language? It was once agreed that research carried out on the use of sunscreen and skin cancer was valid and reliable, but this is currently being questioned. This suggests that validity and reliability are not fixed for all time but are merely concepts that are agreed upon until further notice.
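For readers unfamiliar with inter-rater reliability, here is a minimal sketch, not something presented at the workshop, of the kind of arithmetic such a figure involves: two coders each assign one theme code per transcript segment, and we calculate raw agreement and Cohen's kappa (agreement corrected for chance). The theme codes and data below are entirely hypothetical.

```python
# Illustrative inter-rater agreement calculation for two coders.
# The codes and segment labels are hypothetical examples only.
from collections import Counter

coder_a = ["identity", "transformation", "identity", "place", "transformation"]
coder_b = ["identity", "transformation", "place", "place", "transformation"]

n = len(coder_a)

# Observed agreement: proportion of segments coded identically.
p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected agreement by chance, from each coder's marginal code frequencies.
freq_a = Counter(coder_a)
freq_b = Counter(coder_b)
p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))

# Cohen's kappa: how much better than chance the observed agreement is.
kappa = (p_o - p_e) / (1 - p_e)

print(f"raw agreement: {p_o:.2f}, Cohen's kappa: {kappa:.2f}")
```

The point made in the discussion stands regardless of the formula chosen: whatever number comes out, the ‘agreement’ it summarises rests on a shared language and coding frame, which is itself a negotiated consensus.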