Multimodal Dialog Act Classification for Conversations With Digital Characters
A multimodal dialog act classifier integrating text and acoustic features for real-time classification in conversations with digital characters. CUI 2024.
Dynamic personality infusion for chatbots — modulating expressed Big Five personality traits at inference time to improve user engagement and interaction quality. CUI 2024.
Multimodal affective state prediction from smartphone touch and sensor data in naturalistic conditions, using deep-learning-based fusion. CHI 2022.
Image reconstruction from tablet front-camera recordings for engagement analysis in educational settings. EDM 2020.
Glyph-based visualization technique for representing multimodal affective state data, designed for intuitive perception and scalable display. EuroVis 2020.
Semi-supervised learning for affective state prediction from smartphone touch data, leveraging abundant unlabeled naturalistic data. CHI 2020.
Affective state prediction during mobile learning using wearable biometric sensors and stylus interaction data. EDM 2019.
Variational autoencoder-based feature embeddings for student classification in educational data mining settings. EDM 2017.