The intensification of an audit culture in higher education is nowhere more apparent than in the growing prevalence of performance-based research funding systems (PRFS), such as the UK’s Research Excellence Framework (REF), and the introduction of new measures of assessment such as ‘impact’, or more specifically, the economic and societal impact of research. Detractors of this regulatory intervention, however, question the legitimacy and credibility of such a system for, and of such a focus within, the evaluation of research performance. In this study, we sought to understand the process of evaluating the impact of research by gaining unique access as observers of a simulated impact evaluation exercise, populated by senior academic peer-reviewers and user-assessors, undertaken within one UK research-intensive university prior to, and in preparation for, its submission to REF2014. Over an intensive two-day period, we observed how peer-reviewers and user-assessors, grouped into four over-arching disciplinary panels, deliberated on and scored impact, presented in the form of narrative-based case studies. Among other findings, our observations revealed that, in their efforts to evaluate impact, peer-reviewers were indirectly promoting a kind of impact mercantilism, whereby the case studies that best sold their impact were rewarded with the highest evaluative scores.