Abstract
Most audio-visual (AV) emotion databases consist of clips that do not reflect real-life emotion processing (e.g., professional actors recorded in brightly lit, studio-like environments) and contain only spoken clips; none include sung clips that express complex emotions. Here, we introduce a new AV database, the Reading Everyday Emotion Database (REED), which directly addresses these gaps. We recorded the faces of everyday adults with a diverse range of acting experience expressing 13 emotions, namely neutral, the six basic emotions (angry, disgusted, fearful, happy, sad, surprised), and six complex emotions (embarrassed, hopeful, jealous, proud, sarcastic, stressed), in two auditory domains (spoken and sung) using everyday recording devices (e.g., laptops, mobile phones). The recordings were validated by an independent group of raters. We found that intensity ratings of the recordings were positively associated with recognition accuracy, and that the basic emotions, as well as the neutral and sarcastic emotions, were recognised more accurately than the other complex emotions. Emotion recognition accuracy also differed by utterance. Exploratory analysis revealed that recordings by participants with drama experience were better recognised than those by participants without. Overall, this database will benefit those who need AV clips with natural variations in both emotion expression and recording environment.
| Original language | English |
|---|---|
| Number of pages | 23 |
| Journal | Language Resources and Evaluation |
| DOIs | |
| Publication status | Published - 20 Nov 2023 |
Bibliographical note
Funding: This work was supported by a European Research Council (ERC) Starting Grant (CAASD, 678733) awarded to FL. JHO was supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 887283.
Data, material and/or code availability: The datasets generated in the validation study are available in the University of Reading Data Archive, https://doi.org/10.17864/1947.000407. The complete REED database is available to authorised users subject to a Data Access Agreement, which can be accessed at the following link: https://doi.org/10.17864/1947.000407.