This dataset includes data on behavioural outcomes for the audiovisual emotion recognition tasks used in the publication, "High Trait Anxiety Enhances Optimal Integration of Auditory and Visual Threat Cues". In this study, the authors investigated perception of happy, sad and angry emotions within unimodal (audio- and visual-only) and audiovisual displays in adults with low vs. high levels of trait anxiety. The data are organised to facilitate replication of the analyses carried out in the aforementioned study, which included two model-based analyses to elucidate how multisensory integration of emotional information operates in high trait anxiety. This was done by comparing performance in the audiovisual condition for both high and low trait anxiety groups against performance predicted by the Maximum Likelihood Estimation (MLE) model (Ernst & Banks, 2002; Rohde et al., 2016) and Miller's Race Model (Miller, 1982; Ulrich et al., 2007). Data included in this dataset have already been pre-processed (i.e., univariate outliers have already been identified and dealt with).
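As a rough illustration of the two model-based benchmarks named above (not the authors' analysis code), the sketch below computes the standard MLE prediction for bimodal noise from hypothetical unimodal sensitivity estimates (Ernst & Banks, 2002), and the upper bound from Miller's race model inequality (Miller, 1982). The numeric values are invented for illustration and do not come from this dataset.

```python
def mle_prediction(sigma_a, sigma_v):
    """MLE-optimal cue weights and predicted audiovisual noise.

    Under the MLE model, cues are weighted by their inverse variances,
    and the predicted bimodal sigma is lower than either unimodal sigma.
    """
    var_a, var_v = sigma_a ** 2, sigma_v ** 2
    w_a = var_v / (var_a + var_v)   # weight on the auditory cue
    w_v = var_a / (var_a + var_v)   # weight on the visual cue
    sigma_av = (var_a * var_v / (var_a + var_v)) ** 0.5
    return w_a, w_v, sigma_av


def race_model_bound(p_a, p_v):
    """Miller's race model inequality: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).

    Returns the upper bound (capped at 1) for one time point t, given the
    unimodal cumulative response probabilities at that t.
    """
    return min(1.0, p_a + p_v)


# Hypothetical example: auditory cue twice as noisy as the visual cue.
w_a, w_v, sigma_av = mle_prediction(sigma_a=2.0, sigma_v=1.0)
print(w_a, w_v, sigma_av)        # visual cue dominates; sigma_av < 1.0

# Hypothetical example: bound on audiovisual RT distribution at one time point.
print(race_model_bound(p_a=0.3, p_v=0.5))
```

Observed audiovisual performance exceeding the MLE prediction, or violating the race model bound, is what such analyses treat as evidence of genuine multisensory integration rather than independent processing of the two cues.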