Selling impact: peer-reviewer projections of what is needed and what counts in REF impact case studies. A retrospective analysis

Richard Watermeyer, Adam Hedgecoe

Research output: Contribution to journal › Article

  • 5 Citations

Abstract

The intensification of an audit culture in higher education is made no more apparent than with the growing prevalence of performance based research funding systems (PRFS) like the UK’s Research Excellence Framework (REF) and the introduction of new measures of assessment like ‘impact’ or more specifically, the economic and societal impact of research. Detractors of this regulatory intervention, however, question the legitimacy and credibility of such a system for, and focus within, the evaluation of research performance. Within this study, we specifically sought to understand the process of evaluating the impact of research by gaining unique access as observers of a simulated impact evaluation exercise populated by senior academic peer-reviewers and user-assessors, undertaken within one UK research intensive university prior to and in preparation of its submission to REF2014. Over an intensive two-day period we observed how peer-reviewers and user-assessors grouped into four over-arching disciplinary panels went about deliberating and scoring impact, presented in the form of narrative-based case studies. Among other findings, our observations revealed that in their efforts to evaluate impact, peer-reviewers were indirectly promoting a kind of impact mercantilism, where case studies that best sold impact were those rewarded with the highest evaluative scores.
Language: English
Pages: 651-665
Journal: Journal of Education Policy
Volume: 31
Issue number: 5
Early online date: 18 Apr 2016
DOI: 10.1080/02680939.2016.1170885
Status: Published - 2016

Cite this

@article{3ab836f0ca4142929e36aa0c8111668a,
title = "Selling impact: peer-reviewer projections of what is needed and what counts in REF impact case studies. A retrospective analysis",
abstract = "The intensification of an audit culture in higher education is made no more apparent than with the growing prevalence of performance based research funding systems (PRFS) like the UK’s Research Excellence Framework (REF) and the introduction of new measures of assessment like ‘impact’ or more specifically, the economic and societal impact of research. Detractors of this regulatory intervention, however, question the legitimacy and credibility of such a system for, and focus within, the evaluation of research performance. Within this study, we specifically sought to understand the process of evaluating the impact of research by gaining unique access as observers of a simulated impact evaluation exercise populated by senior academic peer-reviewers and user-assessors, undertaken within one UK research intensive university prior to and in preparation of its submission to REF2014. Over an intensive two-day period we observed how peer-reviewers and user-assessors grouped into four over-arching disciplinary panels went about deliberating and scoring impact, presented in the form of narrative-based case studies. Among other findings, our observations revealed that in their efforts to evaluate impact, peer-reviewers were indirectly promoting a kind of impact mercantilism, where case studies that best sold impact were those rewarded with the highest evaluative scores.",
author = "Richard Watermeyer and Adam Hedgecoe",
year = "2016",
doi = "10.1080/02680939.2016.1170885",
language = "English",
volume = "31",
pages = "651--665",
journal = "Journal of Education Policy",
issn = "0268-0939",
publisher = "Routledge",
number = "5",
}

TY - JOUR

T1 - Selling impact: peer-reviewer projections of what is needed and what counts in REF impact case studies. A retrospective analysis

T2 - Journal of Education Policy

AU - Watermeyer, Richard

AU - Hedgecoe, Adam

PY - 2016

Y1 - 2016

N2 - The intensification of an audit culture in higher education is made no more apparent than with the growing prevalence of performance based research funding systems (PRFS) like the UK’s Research Excellence Framework (REF) and the introduction of new measures of assessment like ‘impact’ or more specifically, the economic and societal impact of research. Detractors of this regulatory intervention, however, question the legitimacy and credibility of such a system for, and focus within, the evaluation of research performance. Within this study, we specifically sought to understand the process of evaluating the impact of research by gaining unique access as observers of a simulated impact evaluation exercise populated by senior academic peer-reviewers and user-assessors, undertaken within one UK research intensive university prior to and in preparation of its submission to REF2014. Over an intensive two-day period we observed how peer-reviewers and user-assessors grouped into four over-arching disciplinary panels went about deliberating and scoring impact, presented in the form of narrative-based case studies. Among other findings, our observations revealed that in their efforts to evaluate impact, peer-reviewers were indirectly promoting a kind of impact mercantilism, where case studies that best sold impact were those rewarded with the highest evaluative scores.

AB - The intensification of an audit culture in higher education is made no more apparent than with the growing prevalence of performance based research funding systems (PRFS) like the UK’s Research Excellence Framework (REF) and the introduction of new measures of assessment like ‘impact’ or more specifically, the economic and societal impact of research. Detractors of this regulatory intervention, however, question the legitimacy and credibility of such a system for, and focus within, the evaluation of research performance. Within this study, we specifically sought to understand the process of evaluating the impact of research by gaining unique access as observers of a simulated impact evaluation exercise populated by senior academic peer-reviewers and user-assessors, undertaken within one UK research intensive university prior to and in preparation of its submission to REF2014. Over an intensive two-day period we observed how peer-reviewers and user-assessors grouped into four over-arching disciplinary panels went about deliberating and scoring impact, presented in the form of narrative-based case studies. Among other findings, our observations revealed that in their efforts to evaluate impact, peer-reviewers were indirectly promoting a kind of impact mercantilism, where case studies that best sold impact were those rewarded with the highest evaluative scores.

UR - http://dx.doi.org/10.1080/02680939.2016.1170885

U2 - 10.1080/02680939.2016.1170885

DO - 10.1080/02680939.2016.1170885

M3 - Article

VL - 31

SP - 651

EP - 665

JO - Journal of Education Policy

JF - Journal of Education Policy

SN - 0268-0939

IS - 5

ER -