On Multimodal Emotion Recognition for Human-Chatbot Interaction in the Wild

Nov 4, 2024
N. Kovačević, C. Holz, M. Gross, Dr. Rafael Wampfler
Abstract
Emotion recognition during human-chatbot interaction presents distinct challenges compared to controlled laboratory settings: conversations are spontaneous, multimodal signals are noisy, and emotional expressions are subtle and often ambiguous. We conduct a systematic study of multimodal emotion recognition in the wild during chatbot interactions, evaluating fusion strategies that combine text, acoustic, and behavioral signals. We analyze the specific challenges that arise from the conversational AI context — including the influence of chatbot response quality on user affect — and propose approaches for more robust recognition under naturalistic conditions.
Type
Publication
In Proceedings of the 26th International Conference on Multimodal Interaction (ICMI '24), San Jose, Costa Rica
Senior Researcher & Lecturer

I am a Senior Researcher & Lecturer at the Computer Graphics Laboratory of ETH Zurich and a Research Consultant at Disney Research. I lead the Digital Character AI projects at CGL. My research interests include conversational digital characters, affective computing, human-computer interaction, and applied machine learning.

My vision is to create intelligent digital humans that can naturally communicate with, understand, and support people across domains such as education and mental health. My research focuses on multimodal artificial intelligence for interactive digital humans, developing models that combine large language models, affective computing, and data-driven animation to create embodied conversational agents endowed with autonomous agency and consistent values and beliefs.

My work bridges machine learning, human–computer interaction, and computer graphics to enable AI systems such as Digital Einstein and interactive patient avatars for psychotherapy training and health education.