Multimodal Emotion Coupling via Speech-to-Facial and Bodily Gestures in Dyadic Interaction

  • Human emotional expression involves coordinated vocal, facial, and gestural signals.
  • Understanding the dynamics between speech and facial/hand gestures is crucial for making sense of real interactions.
  • Sequential turn-taking in conversations creates stable temporal windows for synchrony.
  • Simultaneous speech disrupts alignment and reduces emotional clarity.
  • The study analyzes multimodal emotion coupling in dyadic interactions using the IEMOCAP corpus.
  • Speech features such as prosody, MFCCs, arousal, valence, and emotion labels were frame-aligned with facial and hand movements (see the feature-extraction sketch after this list).
  • Expressive activeness was measured through displacement magnitudes, with greater activity in the lower face during non-overlapping speech.
  • Sadness exhibited increased expressivity during non-overlapping speech, while anger suppressed gestures during overlaps.
  • Predictive mapping from speech features to gestures was more accurate for prosody and MFCCs than for arousal and valence (sketched in the regression example below).
  • Hand-speech synchrony was enhanced under low arousal and during overlapping speech (see the cross-correlation sketch below).
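
For concreteness, the feature-extraction and alignment step might look like the following Python sketch. It assumes 16 kHz mono audio and a motion-capture array of 3D marker positions sampled at the same frame rate; the hop size, helper names, and marker layout are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch: extract MFCC + prosody features and align them with
# per-frame gesture displacement magnitudes. Assumes 16 kHz audio and
# motion frames at the same 10 ms rate (an assumption, not IEMOCAP's spec).
import numpy as np
import librosa

HOP = 160  # 10 ms hop at 16 kHz

def speech_features(wav_path):
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=HOP)
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr, hop_length=HOP)
    rms = librosa.feature.rms(y=y, hop_length=HOP)[0]
    n = min(mfcc.shape[1], len(f0), len(rms))
    # Stack MFCCs with prosody proxies (F0, energy); unvoiced frames -> 0 Hz.
    return np.vstack([mfcc[:, :n],
                      np.nan_to_num(f0[:n])[None, :],
                      rms[:n][None, :]]).T

def displacement_magnitude(markers):
    """Mean per-frame displacement magnitude for a (T, n_markers, 3) array."""
    step = np.diff(markers, axis=0)  # frame-to-frame motion vectors
    return np.linalg.norm(step, axis=2).mean(axis=1)

def align(speech, motion):
    """Truncate both streams to a shared length (assumes equal frame rates)."""
    n = min(len(speech), len(motion))
    return speech[:n], motion[:n]
```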
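The prosody/MFCC-versus-affect comparison in the predictive-mapping bullet can be illustrated with a simple cross-validated regression. Ridge regression and the synthetic arrays below are stand-ins; the paper's actual estimator and data splits may differ.

```python
# Hedged sketch: compare how well two speech feature sets predict gesture
# displacement, using held-out R^2. Data here is synthetic for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def mapping_score(X, y):
    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

rng = np.random.default_rng(0)
T = 2000
acoustic = rng.normal(size=(T, 15))  # e.g., 13 MFCCs + F0 + energy per frame
affect = rng.normal(size=(T, 2))     # arousal, valence ratings per frame
# Toy target correlated with the acoustic features only.
gesture = acoustic @ rng.normal(size=15) + rng.normal(scale=2.0, size=T)

print("prosody/MFCC R^2:   ", mapping_score(acoustic, gesture))
print("arousal/valence R^2:", mapping_score(affect, gesture))
```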
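Finally, one common way to quantify the hand-speech synchrony mentioned above is the peak of a lagged, normalized cross-correlation between speech energy and hand displacement. The lag window and normalization here are assumptions; the paper may use a different synchrony metric.

```python
# Hedged sketch: peak normalized cross-correlation within +/- max_lag frames.
import numpy as np

def peak_synchrony(speech_env, hand_disp, max_lag=50):
    a = (speech_env - speech_env.mean()) / (speech_env.std() + 1e-8)
    b = (hand_disp - hand_disp.mean()) / (hand_disp.std() + 1e-8)
    n = len(a)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            c = np.dot(a[-lag:], b[:n + lag]) / (n + lag)
        else:
            c = np.dot(a[:n - lag], b[lag:]) / (n - lag)
        best = max(best, c)
    return best
```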
