Incongruence between observers’ and observed facial muscle activation reduces recognition of emotional facial expressions from video stimuli

Tanja Wingenbach, Mark Brosnan, Monique Pfaltz, Michael Plichta, Christopher Ashwin

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers, which entails simulations of sensory, motor, and contextual experiences. In line with this, published research has found that viewing others’ facial emotion elicits automatic, matched facial muscle activation, which in turn facilitates emotion recognition. Making congruent facial muscle activity explicit might therefore produce an even greater recognition advantage, whereas conflicting sensory information, i.e., incongruent facial muscle activity, might impede recognition.
The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental conditions (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), and (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper, face area compared to (c).
Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced.
Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.
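
As a rough illustration of how the accuracy hypotheses (2) and (3) could be evaluated, the sketch below computes per-condition recognition accuracy split by the face region of the salient feature from hypothetical trial-level data. This is a minimal sketch, not the authors' analysis code; the column names, condition labels, and example trials are assumptions made for illustration.

# Illustrative sketch (not the authors' pipeline): per-condition recognition
# accuracy split by the face region of the stimulus' salient feature.
# All column names and example values below are assumptions.
import pandas as pd

# Hypothetical trial log: one row per video trial.
trials = pd.DataFrame({
    "condition":    ["imitation", "pen_holding", "passive"] * 4,
    "face_region":  ["lower", "upper"] * 6,        # region of the salient feature
    "true_emotion": ["happiness", "surprise"] * 6,
    "response":     ["happiness", "fear"] * 6,
})

# A trial counts as correct when the chosen label matches the displayed emotion.
trials["correct"] = trials["response"] == trials["true_emotion"]

# Mean accuracy per condition and face region; hypotheses 2 and 3 would compare
# the imitation and pen-holding conditions against passive viewing within each region.
accuracy = (
    trials.groupby(["condition", "face_region"])["correct"]
    .mean()
    .unstack("face_region")
)
print(accuracy)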
Original language: English
Article number: 864
Pages (from-to): 1-12
Number of pages: 12
Journal: Frontiers in Psychology
Volume: 9
DOIs: 10.3389/fpsyg.2018.00864
Publication status: Published - 6 Jun 2018

Fingerprint

Facial Muscles
Facial Expression
Emotions
Recognition (Psychology)
Electromyography
Lip
Cognition

Keywords

  • Dynamic stimuli
  • Embodiment
  • Facial EMG
  • Facial emotion recognition
  • Facial muscle activity
  • Imitation
  • Videos
  • facial expressions of emotion

ASJC Scopus subject areas

  • Psychology (all)

Cite this

Incongruence between observers’ and observed facial muscle activation reduces recognition of emotional facial expressions from video stimuli. / Wingenbach, Tanja; Brosnan, Mark; Pfaltz, Monique; Plichta, Michael; Ashwin, Christopher.

In: Frontiers in Psychology, Vol. 9, 864, 06.06.2018, p. 1-12.

Research output: Contribution to journal › Article

@article{145c3c680b784dce9dc1d12ac6890149,
title = "Incongruence between observers’ and observed facial muscle activation reduces recognition of emotional facial expressions from video stimuli",
abstract = "According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e. incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced.Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.",
keywords = "Dynamic stimuli, Embodiment, Facial EMG, Facial emotion recognition, Facial muscle activity, Imitation, Videos, facial expressions of emotion",
author = "Tanja Wingenbach and Mark Brosnan and Monique Pfaltz and Michael Plichta and Christopher Ashwin",
year = "2018",
month = "6",
day = "6",
doi = "10.3389/fpsyg.2018.00864",
language = "English",
volume = "9",
pages = "1--12",
journal = "Frontiers in Psychology: Personality and Social Psychology",
issn = "1664-1078",
publisher = "Frontiers Media S.A.",

}

TY - JOUR

T1 - Incongruence between observers’ and observed facial muscle activation reduces recognition of emotional facial expressions from video stimuli

AU - Wingenbach, Tanja

AU - Brosnan, Mark

AU - Pfaltz, Monique

AU - Plichta, Michael

AU - Ashwin, Christopher

PY - 2018/6/6

Y1 - 2018/6/6


KW - Dynamic stimuli

KW - Embodiment

KW - Facial EMG

KW - Facial emotion recognition

KW - Facial muscle activity

KW - Imitation

KW - Videos

KW - facial expressions of emotion

UR - http://www.scopus.com/inward/record.url?scp=85048115942&partnerID=8YFLogxK

U2 - 10.3389/fpsyg.2018.00864

DO - 10.3389/fpsyg.2018.00864

M3 - Article

VL - 9

SP - 1

EP - 12

JO - Frontiers in Psychology

JF - Frontiers in Psychology

SN - 1664-1078

M1 - 864

ER -