M. Gross


PhonemeNet: A Transformer Pipeline for Text-Driven Facial Animation

A transformer pipeline for text-driven facial animation exploiting phoneme-level speech structure, achieving real-time performance and best-in-class lip synchronization accuracy. …

P. Witzig

Steering Narrative Agents through a Dynamic Cognitive Framework for Guided Emergent Storytelling

A dynamic cognitive framework for narrative agents in interactive storytelling, combining BDI representations with LLM generation to balance story coherence with player agency. …

C. Yang

BEE: Belief-Value-Aligned, Explainable, and Extensible Cognitive Framework for Conversational Agents

BEE is a modular cognitive framework for conversational agents featuring belief management, value alignment, transparent reasoning, and extensibility. Best Paper Honorable Mention …

C. Yang

A Joint Personality-Emotion Framework for Personality-Consistent Conversational Agents

A joint framework modeling personality and emotion for personality-consistent conversational agents, using contrastive learning to decouple emotion from semantic content. IVA 2025. …

N. Kovacevic

A Platform for Interactive AI Character Experiences

A full-pipeline platform for interactive AI character experiences, demonstrated through Digital Einstein and deployed at scientific conferences, technology events, and public …

Dr. Rafael Wampfler

Immersive Conversations with Digital Einstein: Linking a Physical System and AI

SIGGRAPH Asia 2024 Emerging Technologies demonstration describing the physical installation and AI integration of Digital Einstein at the Tokyo venue.

Dr. Rafael Wampfler

EmoSpaceTime: Decoupling Emotion and Content through Contrastive Learning for Expressive 3D Speech Animation

EmoSpaceTime decouples emotion and content in 3D speech animation through contrastive learning, enabling fine-grained control over emotional expressivity independent of spoken …

P. Witzig

On Multimodal Emotion Recognition for Human-Chatbot Interaction in the Wild

Systematic study of multimodal emotion recognition in natural human-chatbot interactions, evaluating text, acoustic, and behavioral signal fusion strategies. ICMI 2024.

N. Kovacevic

Chatbots With Attitude: Enhancing Chatbot Interactions Through Dynamic Personality Infusions

Dynamic personality infusion for chatbots, modulating expressed Big Five personality traits at inference time to improve user engagement and interaction quality. CUI 2024.

N. Kovacevic

The Personality Dimensions GPT-3 Expresses During Human-Chatbot Interactions

A large-scale empirical characterization of the personality dimensions GPT-3 expresses during human-chatbot interaction, using Big Five psychometrics. Published in ACM IMWUT 2024.

N. Kovacevic