<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Emotion Recognition | Dr. Rafael Wampfler</title><link>https://rafael-wampfler.github.io/tags/emotion-recognition/</link><atom:link href="https://rafael-wampfler.github.io/tags/emotion-recognition/index.xml" rel="self" type="application/rss+xml"/><description>Emotion Recognition</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 01 Dec 2025 00:00:00 +0000</lastBuildDate><image><url>https://rafael-wampfler.github.io/media/icon_hu_d100f07c298b9e73.png</url><title>Emotion Recognition</title><link>https://rafael-wampfler.github.io/tags/emotion-recognition/</link></image><item><title>Affective Computing &amp; Emotion Recognition</title><link>https://rafael-wampfler.github.io/projects/affective-computing/</link><pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/affective-computing/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;This research thread develops deep learning architectures for predicting human emotional and cognitive states from rich, naturalistic data streams. Unlike laboratory-controlled setups, our systems operate &amp;ldquo;in-the-wild&amp;rdquo; — on real devices, in real environments, with real users — addressing the full complexity of affective computing at scale.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Affective computing — the capacity of machines to detect, interpret, and respond to human emotions — is a foundational capability for human-centric AI. Yet most academic benchmarks rely on controlled, acted datasets that poorly predict real-world performance. Building systems that genuinely work in naturalistic settings requires confronting three fundamental challenges:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Domain adaptation&lt;/strong&gt;: Affective signals vary enormously across individuals and contexts; models must transfer gracefully.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Uncertainty estimation&lt;/strong&gt;: Emotion recognition inherently involves ambiguity and subjectivity; systems must quantify and communicate their confidence.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Continuous affective sensing must operate on resource-constrained mobile and edge devices.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;h3 id="multimodal-fusion"&gt;Multimodal Fusion&lt;/h3&gt;
&lt;p&gt;Our work leverages a broad set of input modalities, combining them through transformer-based and convolutional architectures:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Smartphone touch and sensor data&lt;/strong&gt;: Stylus pressure, touch dynamics, accelerometer, and gyroscope signals during naturalistic task completion&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Biometric data&lt;/strong&gt;: Heart rate, skin conductance, and other physiological signals from wearables&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Egocentric vision&lt;/strong&gt;: First-person video from wearable cameras, capturing the user&amp;rsquo;s visual environment&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Typing behavior&lt;/strong&gt;: Smartphone keyboard dynamics as a passive indicator of affective and personality state&lt;/li&gt;
&lt;/ul&gt;
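&lt;p&gt;A minimal sketch of feature-level fusion across such modalities, using purely illustrative summary statistics and toy signal names rather than the actual pipeline: each modality is reduced to a fixed-length feature vector, z-normalized so modalities share a common scale, and concatenated for a downstream classifier.&lt;/p&gt;

```python
# Hedged sketch of feature-level (early) multimodal fusion.
# Feature choices and modality names are illustrative assumptions only.
import statistics

def summarize(signal):
    """Reduce a raw 1-D signal to simple summary statistics."""
    return [statistics.mean(signal), statistics.pstdev(signal),
            min(signal), max(signal)]

def znorm(vec):
    """Z-normalize a feature vector so modalities share a common scale."""
    mu = statistics.mean(vec)
    sigma = statistics.pstdev(vec) or 1.0
    return [(v - mu) / sigma for v in vec]

def fuse(modalities):
    """Concatenate normalized per-modality features into one vector."""
    fused = []
    for signal in modalities.values():
        fused.extend(znorm(summarize(signal)))
    return fused

# Toy streams standing in for touch pressure, heart rate, accelerometer.
features = fuse({
    "touch_pressure": [0.2, 0.4, 0.35, 0.5],
    "heart_rate": [72, 75, 71, 80],
    "accelerometer": [0.01, 0.02, 0.03, 0.0],
})
print(len(features))  # 3 modalities x 4 statistics = 12 features
```

&lt;p&gt;In practice the transformer- or convolution-based architectures mentioned above learn these representations end to end rather than hand-crafting statistics, but the normalize-then-combine structure is the same.&lt;/p&gt;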
&lt;h3 id="semi-supervised-and-self-supervised-learning"&gt;Semi-Supervised and Self-Supervised Learning&lt;/h3&gt;
&lt;p&gt;Given the difficulty and cost of obtaining large labeled affective datasets in natural settings, we exploit semi-supervised learning strategies that leverage abundant unlabeled data. This improves generalization without requiring exhaustive annotation.&lt;/p&gt;
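&lt;p&gt;One common instantiation of this idea is pseudo-labeling. The toy sketch below, which stands in for the actual method with a simple nearest-centroid classifier and made-up data, fits on the small labeled pool, promotes the most confident predictions on unlabeled points to pseudo-labels, and grows the training set for a refit.&lt;/p&gt;

```python
# Hedged sketch of one round of pseudo-labeling (semi-supervised learning).
# The classifier, confidence measure, and data are toy placeholders.
import math

def centroids(points, labels):
    """Mean feature vector per class."""
    by_class = {}
    for p, y in zip(points, labels):
        by_class.setdefault(y, []).append(p)
    return {y: [sum(col) / len(ps) for col in zip(*ps)]
            for y, ps in by_class.items()}

def predict(cents, p):
    """Return (label, confidence) via distance to class centroids."""
    dists = {y: math.dist(p, c) for y, c in cents.items()}
    ranked = sorted(dists, key=dists.get)
    best = ranked[0]
    # Confidence: margin between nearest and second-nearest centroid.
    conf = dists[ranked[1]] - dists[best] if len(ranked) - 1 else 1.0
    return best, conf

labeled = [([0.0, 0.0], "calm"), ([1.0, 1.0], "stressed")]
unlabeled = [[0.1, 0.1], [0.9, 0.8], [0.5, 0.5]]

cents = centroids([p for p, _ in labeled], [y for _, y in labeled])
# Promote the 2 most confident unlabeled points to pseudo-labels.
scored = sorted(unlabeled, key=lambda p: predict(cents, p)[1], reverse=True)
pseudo = [(p, predict(cents, p)[0]) for p in scored[:2]]
grown = labeled + pseudo  # labeled pool grown from 2 to 4 examples
```

&lt;p&gt;Ambiguous points (here, the one equidistant from both centroids) receive low confidence and stay unlabeled, which is what keeps label noise from compounding across rounds.&lt;/p&gt;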
&lt;h3 id="egoemotion-neurips-2025"&gt;egoEMOTION (NeurIPS 2025)&lt;/h3&gt;
&lt;p&gt;The most recent and ambitious contribution is &lt;em&gt;egoEMOTION&lt;/em&gt;, presented at NeurIPS 2025 (Datasets and Benchmarks track). This work combines &lt;strong&gt;egocentric vision&lt;/strong&gt; and &lt;strong&gt;physiological signals&lt;/strong&gt; in a unified multimodal architecture, advancing fusion strategies and providing a new, reproducible benchmark dataset. egoEMOTION addresses the challenge of predicting emotion and personality from the wearer&amp;rsquo;s own perspective, a naturalistic setting of growing relevance as wearable cameras become ubiquitous.&lt;/p&gt;
&lt;h3 id="personality-recognition-from-typing"&gt;Personality Recognition from Typing&lt;/h3&gt;
&lt;p&gt;Beyond momentary emotions, we have also developed systems for personality trait recognition from passive smartphone typing dynamics. This work (IEEE Transactions on Affective Computing, 2023) demonstrates that stable personality traits leave measurable signatures in everyday smartphone interactions, enabling passive, continuous personality inference.&lt;/p&gt;
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Demonstrated state-of-the-art in-the-wild affective state prediction from smartphone sensors across multiple CHI publications&lt;/li&gt;
&lt;li&gt;Published a new egocentric multimodal emotion and personality benchmark (NeurIPS 2025)&lt;/li&gt;
&lt;li&gt;Showed that semi-supervised learning substantially closes the performance gap between models trained on limited labeled data and fully supervised models&lt;/li&gt;
&lt;li&gt;Developed personality trait recognition from typing dynamics achieving strong classification performance on real-world data&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="publications"&gt;Publications&lt;/h2&gt;
&lt;p&gt;M. Jammot, B. Braun, P. Streli, &lt;strong&gt;R. Wampfler&lt;/strong&gt; and C. Holz (2025). &lt;em&gt;egoEMOTION: Egocentric Vision and Physiological Signals for Emotion and Personality Recognition in Real-World Tasks&lt;/em&gt;. In Conference on Neural Information Processing Systems 2025 (Datasets and Benchmarks, NeurIPS), pp. 1–12.&lt;/p&gt;
&lt;p&gt;N. Kovačević, C. Holz, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2024). &lt;em&gt;On Multimodal Emotion Recognition for Human-Chatbot Interaction in the Wild&lt;/em&gt;. In Proceedings of the 26th International Conference on Multimodal Interaction (ICMI &amp;lsquo;24), San Jose, Costa Rica, November 4–8, 2024.&lt;/p&gt;
&lt;p&gt;N. Kovačević, C. Holz, T. Günther, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2023). &lt;em&gt;Personality Trait Recognition Based on Smartphone Typing Characteristics in the Wild&lt;/em&gt;. IEEE Transactions on Affective Computing, pp. 1–11.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;R. Wampfler&lt;/strong&gt;, S. Klingler, B. Solenthaler, V. R. Schinazi, M. Gross and C. Holz (2022). &lt;em&gt;Affective State Prediction from Smartphone Touch and Sensor Data in the Wild&lt;/em&gt;. Proceedings of the Conference on Human Factors in Computing Systems (CHI), New Orleans, USA, April 30–May 5, 2022, pp. 1–14.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;R. Wampfler&lt;/strong&gt;, S. Klingler, B. Solenthaler, V. R. Schinazi and M. Gross (2020). &lt;em&gt;Affective State Prediction Based on Semi-Supervised Learning from Smartphone Touch Data&lt;/em&gt;. Proceedings of the Conference on Human Factors in Computing Systems (CHI), Virtual, April 25–30, 2020, pp. 1–13.&lt;/p&gt;
&lt;p&gt;N. Kovačević, &lt;strong&gt;R. Wampfler&lt;/strong&gt;, B. Solenthaler, M. Gross and T. Günther (2020). &lt;em&gt;Glyph-Based Visualization of Affective States&lt;/em&gt;. Eurographics/IEEE VGTC Symposium on Visualization (EuroVis), Virtual, May 25–29, 2020, pp. 121–125.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;R. Wampfler&lt;/strong&gt;, S. Klingler, B. Solenthaler, V. R. Schinazi and M. Gross (2019). &lt;em&gt;Affective State Prediction in a Mobile Setting using Wearable Biometric Sensors and Stylus&lt;/em&gt;. Proceedings of the International Conference on Educational Data Mining (EDM), Montréal, Canada, July 2–5, 2019, pp. 224–233.&lt;/p&gt;</description></item><item><title>egoEMOTION: Egocentric Vision and Physiological Signals for Emotion and Personality Recognition in Real-World Tasks</title><link>https://rafael-wampfler.github.io/publications/egoemotion-2025/</link><pubDate>Mon, 01 Dec 2025 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/egoemotion-2025/</guid><description/></item><item><title>On Multimodal Emotion Recognition for Human-Chatbot Interaction in the Wild</title><link>https://rafael-wampfler.github.io/publications/multimodal-emotion-recognition-2024/</link><pubDate>Mon, 04 Nov 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/multimodal-emotion-recognition-2024/</guid><description/></item><item><title>Affective State Prediction from Smartphone Touch and Sensor Data in the Wild</title><link>https://rafael-wampfler.github.io/publications/affective-state-smartphone-2022/</link><pubDate>Sat, 30 Apr 2022 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/affective-state-smartphone-2022/</guid><description/></item><item><title>Affective State Prediction Based on Semi-Supervised Learning from Smartphone Touch Data</title><link>https://rafael-wampfler.github.io/publications/affective-state-semi-supervised-2020/</link><pubDate>Sat, 25 Apr 2020 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/affective-state-semi-supervised-2020/</guid><description/></item></channel></rss>