<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Dr. Rafael Wampfler</title><link>https://rafael-wampfler.github.io/</link><atom:link href="https://rafael-wampfler.github.io/index.xml" rel="self" type="application/rss+xml"/><description>Dr. Rafael Wampfler</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 24 Oct 2022 00:00:00 +0000</lastBuildDate><image><url>https://rafael-wampfler.github.io/media/icon_hu_d100f07c298b9e73.png</url><title>Dr. Rafael Wampfler</title><link>https://rafael-wampfler.github.io/</link></image><item><title>Digital Einstein</title><link>https://rafael-wampfler.github.io/projects/digital-einstein/</link><pubDate>Tue, 01 Jun 2021 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/digital-einstein/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Digital Einstein is a flagship embodied conversational agent that brings the historical figure of Albert Einstein to life through real-time multimodal AI interaction. The system combines speech recognition and synthesis, facial animation, gesture control, and a cognitively grounded language understanding pipeline to deliver immersive, personality-consistent conversations.&lt;/p&gt;
&lt;p&gt;Digital Einstein serves three interconnected roles:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Research platform&lt;/strong&gt; — a testbed for studying human–agent interaction in constrained embodied settings, yielding insights on affective computing, personality modeling, dialog act classification, and conversational AI.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Education platform&lt;/strong&gt; — a live demonstration of conversational AI and multimodal deep learning deployed in university events, science outreach, and public engagement.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Public engagement tool&lt;/strong&gt; — reaching thousands of visitors globally at scientific conferences, tech summits, museums, and public events, generating sustained international recognition for ETH Zurich.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;How can AI systems convincingly portray a well-known historical personality — someone whose knowledge, values, and speaking style are culturally established — in real-time dialogue with arbitrary members of the public? This challenge crystallizes core problems in interactive AI: maintaining factual and characterological consistency, adapting dynamically to unpredictable user inputs, and delivering a compelling embodied experience at scale.&lt;/p&gt;
&lt;p&gt;Digital Einstein was conceived as both a scientific challenge and a communication vehicle: making abstract advances in AI tangible for general audiences while simultaneously driving rigorous research on the underlying problems.&lt;/p&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;p&gt;The system is built on a full-pipeline architecture described in the SIGGRAPH 2025 paper &lt;em&gt;&amp;ldquo;A Platform for Interactive AI Character Experiences&amp;rdquo;&lt;/em&gt;. Key components include (a schematic orchestration sketch follows the list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Perception layer&lt;/strong&gt;: Real-time speech recognition via Microsoft Azure Speech Services and multimodal input processing through a webcam-based vision pipeline, including face detection, user characterization, head pose estimation, and re-identification.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cognitive reasoning layer&lt;/strong&gt;: Knowledge-grounded dialogue management with integrated response generation, powered by GPT-4.1 mini, featuring dynamic personality infusion that adapts outputs to user-selectable archetypes: Digital Einstein, Rude Bulldozer, Drama Volcano, Zen Master, and Hashtag Prophet.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Animation synthesis&lt;/strong&gt;: Data-driven facial animation synchronized with speech output using NVIDIA Audio2Face, blended with emotion-conditioned expressions, and complemented by a curated library of motion-captured body animations categorized by avatar state.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Embodiment&lt;/strong&gt;: A stylized Albert Einstein avatar rendered in Unity on a 65-inch display, integrated into a themed early-20th-century physical environment with spatial audio, a hidden microphone, and physical personality sliders built from potentiometers and an Arduino.&lt;/li&gt;
&lt;/ul&gt;
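&lt;p&gt;Conceptually, these layers form a perception, cognition, and action loop. The following minimal sketch illustrates one way such an orchestration could be wired together; all class and method names are illustrative placeholders, not the actual platform API.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# Minimal sketch of a layered embodied-agent loop (illustrative only).
# The component objects stand in for the real services (ASR, vision,
# LLM-based dialogue, TTS, audio-driven animation, real-time rendering).

class CharacterPipeline:
    def __init__(self, asr, vision, dialogue, tts, animator, renderer):
        self.asr = asr            # perception: speech recognition
        self.vision = vision      # perception: face detection, pose, re-identification
        self.dialogue = dialogue  # cognition: knowledge-grounded LLM with persona
        self.tts = tts            # action: speech synthesis
        self.animator = animator  # action: audio-driven facial and body animation
        self.renderer = renderer  # action: avatar rendering on the display

    def step(self, audio_frame, video_frame, persona='Digital Einstein'):
        utterance = self.asr.transcribe(audio_frame)
        user_state = self.vision.analyze(video_frame)
        reply = self.dialogue.respond(utterance, user_state, persona=persona)
        speech = self.tts.synthesize(reply['text'])
        animation = self.animator.animate(speech, emotion=reply['emotion'])
        self.renderer.play(speech, animation)
&lt;/code&gt;&lt;/pre&gt;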
&lt;p&gt;The SIGGRAPH Asia 2024 demonstration paper &lt;em&gt;&amp;ldquo;Immersive Conversations with Digital Einstein: Linking a Physical System and AI&amp;rdquo;&lt;/em&gt; details the physical installation setup, including the integration of an animatronic head with the real-time AI pipeline at the Tokyo venue.&lt;/p&gt;
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;p&gt;Digital Einstein has been demonstrated at over 20 major events worldwide, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SIGGRAPH Asia 2024&lt;/strong&gt; (Tokyo, Japan) — Emerging Technologies&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SIGGRAPH 2025&lt;/strong&gt; (Vancouver, Canada)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GITEX Global 2024 &amp;amp; 2025&lt;/strong&gt; (Dubai, UAE) — Swiss Pavilion&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;World Economic Forum 2024 &amp;amp; 2026&lt;/strong&gt; (Davos, Switzerland)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Berlin Science Week 2025&lt;/strong&gt; (Berlin, Germany)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Swiss Re Resilience Summit 2024&lt;/strong&gt; (Rüschlikon, Switzerland)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Microsoft Initiative to Advance AI Diffusion in Switzerland 2025&lt;/strong&gt; (Berne, Switzerland)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;After the Algorithm Festival 2026&lt;/strong&gt; (Zurich, Switzerland)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The project has generated sustained international media coverage and public interest, positioning ETH Zurich as a world leader in embodied conversational AI.&lt;/p&gt;
&lt;figure&gt;&lt;img src="https://rafael-wampfler.github.io/projects/digital-einstein/gitex.jpg"
alt="Swiss Ambassador to the UAE, Arthur Mattli, interacting with Digital Einstein at GITEX Global in Dubai."&gt;&lt;figcaption&gt;
&lt;p&gt;Swiss Ambassador to the UAE, Arthur Mattli, interacting with Digital Einstein at GITEX Global in Dubai.&lt;/p&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2 id="primary-publications"&gt;Primary Publications&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;R. Wampfler&lt;/strong&gt;, C. Yang, D. Elste, N. Kovačević, P. Witzig and M. Gross (2025). &lt;em&gt;A Platform for Interactive AI Character Experiences&lt;/em&gt;. Proceedings of the SIGGRAPH Conference Papers &amp;lsquo;25 (Vancouver, Canada, August 10–14, 2025), pp. 1–11.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;R. Wampfler&lt;/strong&gt;, N. Kovačević, P. Witzig, C. Yang, M. Gross (2024). &lt;em&gt;Immersive Conversations with Digital Einstein: Linking a Physical System and AI&lt;/em&gt;. In SIGGRAPH Asia 2024 Emerging Technologies (SA &amp;lsquo;24) (Tokyo, Japan, December 3–6, 2024).&lt;/p&gt;</description></item><item><title>Affective Computing &amp; Emotion Recognition</title><link>https://rafael-wampfler.github.io/projects/affective-computing/</link><pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/affective-computing/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;This research thread develops deep learning architectures for predicting human emotional and cognitive states from rich, naturalistic data streams. Unlike laboratory-controlled setups, our systems operate &amp;ldquo;in-the-wild&amp;rdquo; — on real devices, in real environments, with real users — addressing the full complexity of affective computing at scale.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Affective computing — the capacity of machines to detect, interpret, and respond to human emotions — is a foundational capability for human-centric AI. Yet most academic benchmarks rely on controlled, acted datasets that poorly predict real-world performance. Building systems that genuinely work in naturalistic settings requires confronting three fundamental challenges:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Domain adaptation&lt;/strong&gt;: Affective signals vary enormously across individuals and contexts; models must transfer gracefully.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Uncertainty estimation&lt;/strong&gt;: Emotion recognition inherently involves ambiguity and subjectivity; systems must quantify and communicate their confidence.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Continuous affective sensing must operate on resource-constrained mobile and edge devices.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;h3 id="multimodal-fusion"&gt;Multimodal Fusion&lt;/h3&gt;
&lt;p&gt;Our work leverages a broad set of input modalities, combining them through transformer-based and convolutional architectures (a schematic fusion sketch follows the list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Smartphone touch and sensor data&lt;/strong&gt;: Stylus pressure, touch dynamics, accelerometer, and gyroscope signals during naturalistic task completion&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Biometric data&lt;/strong&gt;: Heart rate, skin conductance, and other physiological signals from wearables&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Egocentric vision&lt;/strong&gt;: First-person video from wearable cameras, capturing the user&amp;rsquo;s visual environment&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Typing behavior&lt;/strong&gt;: Smartphone keyboard dynamics as a passive indicator of affective and personality state&lt;/li&gt;
&lt;/ul&gt;
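&lt;p&gt;A common way to combine such heterogeneous streams is late fusion of per-modality encoders. The sketch below, written with PyTorch, is a hedged illustration of this general pattern and does not reproduce the specific models used in the publications.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch
import torch.nn as nn

class LateFusionAffectModel(nn.Module):
    """Illustrative late-fusion classifier over per-modality encoders."""
    def __init__(self, touch_dim=32, bio_dim=16, hidden=64, num_classes=3):
        super().__init__()
        self.touch_enc = nn.Sequential(nn.Linear(touch_dim, hidden), nn.ReLU())
        self.bio_enc = nn.Sequential(nn.Linear(bio_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, num_classes)   # e.g. low / neutral / high valence

    def forward(self, touch_feats, bio_feats):
        fused = torch.cat([self.touch_enc(touch_feats), self.bio_enc(bio_feats)], dim=-1)
        return self.head(fused)

model = LateFusionAffectModel()
logits = model(torch.randn(8, 32), torch.randn(8, 16))   # a batch of 8 samples
&lt;/code&gt;&lt;/pre&gt;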
&lt;h3 id="semi-supervised-and-self-supervised-learning"&gt;Semi-Supervised and Self-Supervised Learning&lt;/h3&gt;
&lt;p&gt;Given the difficulty and cost of obtaining large labeled affective datasets in natural settings, we exploit semi-supervised learning strategies that leverage abundant unlabeled data. This improves generalization without requiring exhaustive annotation.&lt;/p&gt;
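&lt;p&gt;One widely used strategy in this family is pseudo-labeling: a model trained on the small labeled set assigns provisional labels to unlabeled samples it is confident about, and those samples are added to training. The snippet below illustrates that generic recipe with assumed scikit-learn components; it is not the specific method of our papers.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_round(clf, X_lab, y_lab, X_unlab, threshold=0.9):
    """One round of pseudo-labeling: adopt confident predictions as extra labels."""
    clf.fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    keep = proba.max(axis=1) &amp;gt;= threshold             # confident samples only
    y_pseudo = clf.classes_[proba[keep].argmax(axis=1)]
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, y_pseudo])
    return clf.fit(X_aug, y_aug)

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(40, 8)), rng.integers(0, 2, size=40)
X_unlab = rng.normal(size=(400, 8))
model = pseudo_label_round(LogisticRegression(max_iter=1000), X_lab, y_lab, X_unlab)
&lt;/code&gt;&lt;/pre&gt;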
&lt;h3 id="egoemotion-neurips-2025"&gt;egoEMOTION (NeurIPS 2025)&lt;/h3&gt;
&lt;p&gt;The most recent and ambitious contribution is &lt;em&gt;egoEMOTION&lt;/em&gt;, presented at NeurIPS 2025 (Datasets and Benchmarks track). This work combines &lt;strong&gt;egocentric vision&lt;/strong&gt; and &lt;strong&gt;physiological signals&lt;/strong&gt; into a unified multimodal architecture, advancing multimodal fusion strategies and providing a new, reproducible benchmark dataset. egoEMOTION addresses the challenge of predicting emotion and personality from the wearer&amp;rsquo;s own perspective, a naturalistic setting of growing relevance as wearable cameras become ubiquitous.&lt;/p&gt;
&lt;h3 id="personality-recognition-from-typing"&gt;Personality Recognition from Typing&lt;/h3&gt;
&lt;p&gt;Beyond momentary emotions, we have also developed systems for personality trait recognition from passive smartphone typing dynamics. This work (IEEE Transactions on Affective Computing, 2023) demonstrates that stable personality traits leave measurable signatures in everyday smartphone interactions, enabling passive, continuous personality inference.&lt;/p&gt;
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Demonstrated state-of-the-art in-the-wild affective state prediction from smartphone sensors across multiple CHI publications&lt;/li&gt;
&lt;li&gt;Published a new egocentric multimodal emotion and personality benchmark (NeurIPS 2025)&lt;/li&gt;
&lt;li&gt;Showed that semi-supervised learning over abundant unlabeled data substantially narrows the performance gap to fully supervised models&lt;/li&gt;
&lt;li&gt;Developed personality trait recognition from typing dynamics achieving strong classification performance on real-world data&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="publications"&gt;Publications&lt;/h2&gt;
&lt;p&gt;M. Jammot, B. Braun, P. Streli, &lt;strong&gt;R. Wampfler&lt;/strong&gt; and C. Holz (2025). &lt;em&gt;egoEMOTION: Egocentric Vision and Physiological Signals for Emotion and Personality Recognition in Real-World Tasks&lt;/em&gt;. In Conference on Neural Information Processing Systems 2025 (Datasets and Benchmarks, NeurIPS), pp. 1–12.&lt;/p&gt;
&lt;p&gt;N. Kovačević, C. Holz, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2024). &lt;em&gt;On Multimodal Emotion Recognition for Human-Chatbot Interaction in the Wild&lt;/em&gt;. In Proceedings of the 26th International Conference on Multimodal Interaction (ICMI &amp;lsquo;24), San Jose, Costa Rica, November 4–8, 2024.&lt;/p&gt;
&lt;p&gt;N. Kovačević, C. Holz, T. Günther, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2023). &lt;em&gt;Personality Trait Recognition Based on Smartphone Typing Characteristics in the Wild&lt;/em&gt;. IEEE Transactions on Affective Computing, pp. 1–11, 2023.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;R. Wampfler&lt;/strong&gt;, S. Klingler, B. Solenthaler, V. R. Schinazi, M. Gross and C. Holz (2022). &lt;em&gt;Affective State Prediction from Smartphone Touch and Sensor Data in the Wild&lt;/em&gt;. Proceedings of the Conference on Human Factors in Computing Systems (CHI), New Orleans, USA, April 30–May 5, 2022, pp. 1–14.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;R. Wampfler&lt;/strong&gt;, S. Klingler, B. Solenthaler, V. R. Schinazi and M. Gross (2020). &lt;em&gt;Affective State Prediction Based on Semi-Supervised Learning from Smartphone Touch Data&lt;/em&gt;. Proceedings of the Conference on Human Factors in Computing Systems (CHI), Virtual, April 25–30, 2020, pp. 1–13.&lt;/p&gt;
&lt;p&gt;N. Kovačević, &lt;strong&gt;R. Wampfler&lt;/strong&gt;, B. Solenthaler, M. Gross and T. Günther (2020). &lt;em&gt;Glyph-Based Visualization of Affective States&lt;/em&gt;. Eurographics/IEEE VGTC Symposium on Visualization (EuroVis), Virtual, May 25–29, 2020, pp. 121–125.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;R. Wampfler&lt;/strong&gt;, S. Klingler, B. Solenthaler, V. R. Schinazi and M. Gross (2019). &lt;em&gt;Affective State Prediction in a Mobile Setting using Wearable Biometric Sensors and Stylus&lt;/em&gt;. Proceedings of the International Conference on Educational Data Mining (EDM), Montréal, Canada, July 2–5, 2019, pp. 224–233.&lt;/p&gt;</description></item><item><title>Personality &amp; Cognitive Architectures for Conversational Agents</title><link>https://rafael-wampfler.github.io/projects/personality-cognitive/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/personality-cognitive/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Standard large language models (LLMs) are stateless by design: each response is generated without a persistent sense of self, stable values, or accumulated memory of prior interactions. This makes them ill-suited for applications requiring a believable, consistent agent personality — such as embodied historical figures, therapeutic companions, or narrative characters. This project develops deep learning-based cognitive frameworks that address this limitation.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;For conversational agents to be genuinely useful in education, mental health, entertainment, and public engagement, they must do more than generate fluent text. They must maintain:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Characterological consistency&lt;/strong&gt;: A stable personality that users recognize and can model predictively&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Value alignment&lt;/strong&gt;: Responses grounded in a coherent set of beliefs and priorities&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Epistemic coherence&lt;/strong&gt;: Awareness of what the agent knows and does not know&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Emotional expressivity&lt;/strong&gt;: Affect that reflects the agent&amp;rsquo;s personality and conversational context&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Achieving all of this while remaining interpretable and safe requires principled architectural choices rather than prompt engineering alone.&lt;/p&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;h3 id="the-bee-cognitive-framework-iva-2025"&gt;The BEE Cognitive Framework (IVA 2025)&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;Belief-Value-Aligned, Explainable, and Extensible (BEE)&lt;/strong&gt; cognitive framework provides a structured architecture for conversational agents. BEE decomposes agent cognition into explicit modules:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Belief management&lt;/strong&gt;: A persistent, queryable representation of the agent&amp;rsquo;s knowledge and world model&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Value alignment&lt;/strong&gt;: A constraint layer that ensures generated responses conform to the agent&amp;rsquo;s ethical commitments and personality&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Explainability&lt;/strong&gt;: Transparent reasoning traces that allow developers and supervisors to audit agent behavior&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Extensibility&lt;/strong&gt;: A modular design that supports new knowledge domains and personality profiles without retraining&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;BEE received the &lt;strong&gt;Best Paper Honorable Mention&lt;/strong&gt; at IVA 2025.&lt;/p&gt;
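&lt;p&gt;As a rough illustration of this kind of modular decomposition (not the published BEE implementation), each module can be expressed as a separate component that the dialogue loop consults before emitting a response:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;from dataclasses import dataclass, field

@dataclass
class BeliefStore:
    """Persistent, queryable beliefs (illustrative placeholder)."""
    facts: dict = field(default_factory=dict)
    def query(self, topic):
        return self.facts.get(topic, 'unknown')

@dataclass
class ValueFilter:
    """Constraint layer enforcing persona values (illustrative placeholder)."""
    forbidden: tuple = ('insult', 'medical diagnosis')
    def check(self, draft):
        return all(term not in draft.lower() for term in self.forbidden)

def respond(user_utterance, llm, beliefs, values, trace):
    context = beliefs.query('current_topic')
    draft = llm(user_utterance, context)          # any text-generation callable
    trace.append({'input': user_utterance, 'context': context, 'draft': draft})
    return draft if values.check(draft) else 'Let me put that differently.'
&lt;/code&gt;&lt;/pre&gt;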
&lt;h3 id="joint-personality-emotion-framework-iva-2025"&gt;Joint Personality-Emotion Framework (IVA 2025)&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;Joint Personality-Emotion Framework&lt;/strong&gt; unifies personality modeling (using Big Five trait representations) with momentary emotional state tracking in a single learned architecture. By jointly modeling personality and emotion — which are deeply intertwined in human behavior — the system produces responses that feel emotionally authentic and characterologically stable. Contrastive learning strategies decouple emotion from semantic content, allowing the same underlying personality to express itself across diverse emotional registers.&lt;/p&gt;
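&lt;p&gt;A contrastive objective of this general kind pulls embeddings of utterances that share an emotion label together while pushing different emotions apart, independently of wording. The snippet below is a generic, assumed sketch of such a triplet-style loss, not the formulation used in the paper.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch
import torch.nn.functional as F

def emotion_contrastive_loss(anchor, positive, negative, margin=0.5):
    """Triplet-style loss: same-emotion pairs end up closer than different-emotion pairs.

    anchor and positive share an emotion label but differ in content;
    negative carries a different emotion. All inputs have shape (batch, dim).
    """
    d_pos = 1.0 - F.cosine_similarity(anchor, positive)   # small when aligned
    d_neg = 1.0 - F.cosine_similarity(anchor, negative)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

a, p, n = (torch.randn(16, 128) for _ in range(3))
loss = emotion_contrastive_loss(a, p, n)
&lt;/code&gt;&lt;/pre&gt;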
&lt;h3 id="personality-infusion-via-chatbot-interactions-cui-2024-imwut-2024"&gt;Personality Infusion via Chatbot Interactions (CUI 2024, IMWUT 2024)&lt;/h3&gt;
&lt;p&gt;We have extensively studied how personality manifests in conversational AI systems. Key findings include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Which personality dimensions GPT-3 expresses&lt;/strong&gt; during human-chatbot interactions, and how these map to the Big Five personality taxonomy (IMWUT 2024)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dynamic personality infusion&lt;/strong&gt; — how to modulate expressed personality in real time by conditioning language model generation on personality embeddings (CUI 2024)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="narrative-agents-and-emergent-storytelling-aaai-aiide-2025"&gt;Narrative Agents and Emergent Storytelling (AAAI AIIDE 2025)&lt;/h3&gt;
&lt;p&gt;In the domain of interactive narrative, we developed a &lt;strong&gt;Dynamic Cognitive Framework for Guided Emergent Storytelling&lt;/strong&gt; that enables narrative agents to pursue coherent story arcs while remaining responsive to player input. The framework uses explicit belief-desire-intention representations to steer language model generation toward narratively coherent outcomes without sacrificing interactivity.&lt;/p&gt;
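&lt;p&gt;In a belief-desire-intention representation, the facts established so far and the currently pursued story goal constrain what the language model is asked to produce at each turn. The following is a hedged, generic sketch of that steering idea, not the published framework:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;from dataclasses import dataclass, field

@dataclass
class NarrativeAgent:
    """Toy BDI-style state used to steer a text generator (illustrative)."""
    beliefs: dict = field(default_factory=dict)   # story facts established so far
    desires: list = field(default_factory=list)   # story goals, ordered by priority
    intention: str = ''                           # goal currently being pursued

    def next_turn(self, player_input, llm):
        self.beliefs['last_player_input'] = player_input
        if self.desires and not self.intention:
            self.intention = self.desires.pop(0)  # commit to the next story goal
        prompt = ('Story facts: ' + str(self.beliefs) +
                  '\nCurrent goal: ' + self.intention +
                  '\nPlayer: ' + player_input +
                  '\nContinue the story toward the goal.')
        return llm(prompt)                        # llm is any prompt-to-text callable
&lt;/code&gt;&lt;/pre&gt;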
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;BEE framework achieved Best Paper Honorable Mention at IVA 2025&lt;/li&gt;
&lt;li&gt;Joint Personality-Emotion Framework demonstrated stable personality maintenance across extended dialogues with diverse user populations&lt;/li&gt;
&lt;li&gt;IMWUT 2024 paper provided the first large-scale empirical characterization of GPT-3&amp;rsquo;s expressed personality dimensions&lt;/li&gt;
&lt;li&gt;Dynamic narrative framework enabled coherent 30+ turn story experiences with novel player choices&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="publications"&gt;Publications&lt;/h2&gt;
&lt;p&gt;N. Kovačević, C. Holz, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2025). &lt;em&gt;A Joint Personality-Emotion Framework for Personality-Consistent Conversational Agents&lt;/em&gt;. In Proceedings of the 25th International Conference on Intelligent Virtual Agents (IVA &amp;lsquo;25), Berlin, Germany, September 16–19, 2025, pp. 1–9. &lt;strong&gt;Best Paper Honorable Mention.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;C. Yang, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2025). &lt;em&gt;BEE: Belief-Value-Aligned, Explainable, and Extensible Cognitive Framework for Conversational Agents&lt;/em&gt;. In Proceedings of the 25th International Conference on Intelligent Virtual Agents (IVA &amp;lsquo;25), Berlin, Germany, September 16–19, 2025, pp. 1–9. &lt;strong&gt;Best Paper Honorable Mention.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;C. Yang, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2025). &lt;em&gt;Steering Narrative Agents through a Dynamic Cognitive Framework for Guided Emergent Storytelling&lt;/em&gt;. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), Edmonton, Canada, November 10–14, 2025, pp. 1–11.&lt;/p&gt;
&lt;p&gt;N. Kovačević, C. Holz, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2024). &lt;em&gt;The Personality Dimensions GPT-3 Expresses During Human-Chatbot Interactions&lt;/em&gt;. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, ACM, vol. 8, no. 2, 2024, pp. 1–36.&lt;/p&gt;
&lt;p&gt;N. Kovačević, T. Boschung, C. Holz, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2024). &lt;em&gt;Chatbots With Attitude: Enhancing Chatbot Interactions Through Dynamic Personality Infusions&lt;/em&gt;. Proceedings of the 6th International Conference on Conversational User Interfaces (CUI), Luxembourg, July 08–10, 2024, pp. 1–16.&lt;/p&gt;</description></item><item><title>Facial Animation Synthesis</title><link>https://rafael-wampfler.github.io/projects/facial-animation/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/facial-animation/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Realistic, expressive facial animation is a critical component of embodied conversational agents. For AI characters to communicate naturally, their facial movements must be synchronized with speech, emotionally consistent, and computationally efficient enough for real-time use. This project develops deep learning architectures that achieve all three goals.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;The challenge of generating high-quality 3D facial animation from text or speech involves several competing requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Expressiveness&lt;/strong&gt;: Facial motion should convey the speaker&amp;rsquo;s emotional state convincingly&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Synchronization&lt;/strong&gt;: Lip movements must match phoneme timing precisely to avoid the uncanny valley&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Consistency&lt;/strong&gt;: Emotional expressivity should be decoupled from semantic content, allowing the same phrase to be delivered in multiple emotional registers&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Efficiency&lt;/strong&gt;: Systems deployed in interactive agents must run in real time on consumer hardware&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Prior approaches either sacrifice expressiveness for speed, or require enormous training data and computation. Our work addresses both efficiency and expressiveness simultaneously by rethinking how deep learning encodes linguistic and acoustic structure.&lt;/p&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;h3 id="phonemenet-mig-2025"&gt;PhonemeNet (MIG 2025)&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;PhonemeNet&lt;/strong&gt; applies a transformer pipeline specifically designed for the phoneme-level structure of speech. Rather than treating the speech signal as a raw audio waveform or frame-level features, PhonemeNet operates at the level of phonemes — the fundamental units of speech that determine lip shape. This problem-specific inductive bias yields both improved accuracy and computational efficiency compared to architectures that ignore linguistic structure.&lt;/p&gt;
&lt;p&gt;PhonemeNet takes text input, extracts phoneme sequences, and generates corresponding 3D facial blendshape sequences that are synchronized with speech audio. The pipeline achieves real-time performance on standard hardware, making it suitable for deployment in interactive embodied agents.&lt;/p&gt;
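&lt;p&gt;Schematically, such a pipeline maps text to a phoneme sequence, embeds the phonemes, and decodes per-frame blendshape weights with a transformer. The following PyTorch snippet is a minimal, assumed sketch of a model with that shape, not the published architecture.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch
import torch.nn as nn

class TextToBlendshapes(nn.Module):
    """Phoneme sequence in, per-frame blendshape weights out (illustrative)."""
    def __init__(self, n_phonemes=44, d_model=128, n_blendshapes=52):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_blendshapes)    # e.g. ARKit-style weights

    def forward(self, phoneme_ids):                      # (batch, seq_len) integer ids
        hidden = self.encoder(self.embed(phoneme_ids))
        return torch.sigmoid(self.head(hidden))          # weights between 0 and 1

model = TextToBlendshapes()
weights = model(torch.randint(0, 44, (1, 20)))           # 20 phonemes, 52 weights per frame
&lt;/code&gt;&lt;/pre&gt;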
&lt;p&gt;PhonemeNet received the &lt;strong&gt;Best Paper Honorable Mention&lt;/strong&gt; at the 18th ACM SIGGRAPH Conference on Motion, Interaction, and Games (MIG 2025).&lt;/p&gt;
&lt;h3 id="emospacetime-mig-2024"&gt;EmoSpaceTime (MIG 2024)&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;EmoSpaceTime&lt;/strong&gt; addresses the problem of emotionally expressive 3D speech animation through a contrastive learning strategy. The core insight is that facial animation should be factorized into two independent components: &lt;strong&gt;emotion&lt;/strong&gt; (how the speaker feels) and &lt;strong&gt;content&lt;/strong&gt; (what the speaker is saying). By learning to decouple these in a shared embedding space, EmoSpaceTime enables:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Transfer of emotional style between speakers&lt;/li&gt;
&lt;li&gt;Consistent emotional expressivity across different sentences&lt;/li&gt;
&lt;li&gt;Fine-grained control over emotional intensity at inference time&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The resulting animations are both emotionally coherent — the emotion is consistent throughout an utterance — and semantically coherent — lip synchronization is accurate regardless of the emotional style applied.&lt;/p&gt;
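&lt;p&gt;The practical consequence of such a factorization is that an emotion code extracted from one clip can be recombined with the content code of another at inference time. A minimal sketch of that recombination step, with assumed pretrained encoder and decoder components, looks as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;def transfer_emotion(content_clip, style_clip, content_enc, emotion_enc,
                     decoder, intensity=1.0):
    """Drive the words of one clip with the emotional style of another (illustrative).

    The encoders and decoder are assumed pretrained modules that map clips into a
    decoupled embedding space and back; intensity scales the emotion code to control
    expressivity at inference time.
    """
    z_content = content_enc(content_clip)          # what is being said
    z_emotion = emotion_enc(style_clip)            # how it should feel
    return decoder(z_content, intensity * z_emotion)
&lt;/code&gt;&lt;/pre&gt;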
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;PhonemeNet achieves real-time text-driven facial animation with best-in-class lip synchronization accuracy — Best Paper Honorable Mention at MIG 2025&lt;/li&gt;
&lt;li&gt;EmoSpaceTime demonstrates that contrastive decoupling of emotion and content significantly improves expressive quality while maintaining temporal coherence&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="publications"&gt;Publications&lt;/h2&gt;
&lt;p&gt;P. Witzig, B. Solenthaler, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2025). &lt;em&gt;PhonemeNet: A Transformer Pipeline for Text-Driven Facial Animation&lt;/em&gt;. Proceedings of the 18th ACM SIGGRAPH Conference on Motion, Interaction, and Games (MIG &amp;lsquo;25), Zurich, Switzerland, December 3–5, 2025, pp. 1–11. &lt;strong&gt;Best Paper Honorable Mention.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;P. Witzig, B. Solenthaler, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2024). &lt;em&gt;EmoSpaceTime: Decoupling Emotion and Content through Contrastive Learning for Expressive 3D Speech Animation&lt;/em&gt;. In Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG &amp;lsquo;24), Arlington, USA, November 21–23, 2024.&lt;/p&gt;
&lt;p&gt;For a conversational agent to respond appropriately, it must understand not just &lt;em&gt;what&lt;/em&gt; a user says, but &lt;em&gt;why&lt;/em&gt; they said it — the communicative intent behind their utterance. Dialog Act (DA) classification is the task of categorizing utterances by their function in conversation (e.g., question, assertion, greeting, request, clarification). This project develops multimodal dialog act classifiers tailored for interactions with digital characters.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Standard dialog act classification systems are trained on text transcriptions alone. In real-world interactions with embodied agents, however, users communicate through a rich combination of speech prosody, gaze, gesture, and lexical content. A question delivered with rising intonation carries a different meaning than the same words spoken flatly; a greeting accompanied by eye contact differs from one delivered distractedly.&lt;/p&gt;
&lt;p&gt;For digital characters that must respond naturally in real time, dialog act classification must therefore be multimodal — integrating acoustic, linguistic, and, where available, visual signals — and must operate with low latency to support interactive response times.&lt;/p&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;p&gt;Our multimodal dialog act classifier integrates:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Lexical features&lt;/strong&gt;: Encoded via transformer-based text encoders fine-tuned on dialog corpora&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Acoustic features&lt;/strong&gt;: Prosodic signals including pitch, energy, and speech rate, extracted from the raw audio signal&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Temporal context&lt;/strong&gt;: Conversation history modeling to resolve ambiguous acts through discourse-level context&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The system is evaluated on naturalistic conversations with digital characters — a challenging setting because users frequently use fragmented, spontaneous speech rather than complete, grammatical sentences. The classifier is optimized for both accuracy and latency, enabling real-time use within the Digital Einstein pipeline.&lt;/p&gt;
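&lt;p&gt;A compact way to picture the fusion is to concatenate a text embedding with a small prosody feature vector and embeddings of the previous turns, then classify the result. The sketch below is an assumed illustration of that idea, not the evaluated system.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import torch
import torch.nn as nn

DIALOG_ACTS = ['question', 'assertion', 'greeting', 'request', 'clarification']

class MultimodalDAClassifier(nn.Module):
    """Fuses a text embedding with prosodic features and conversation context."""
    def __init__(self, text_dim=384, prosody_dim=3, context_dim=384, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + prosody_dim + context_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, len(DIALOG_ACTS)),
        )

    def forward(self, text_emb, prosody, context_emb):
        return self.net(torch.cat([text_emb, prosody, context_emb], dim=-1))

clf = MultimodalDAClassifier()
logits = clf(torch.randn(1, 384), torch.randn(1, 3), torch.randn(1, 384))
predicted_act = DIALOG_ACTS[logits.argmax(dim=-1).item()]
&lt;/code&gt;&lt;/pre&gt;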
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Demonstrated that multimodal integration (text + acoustic features) significantly outperforms text-only baselines for dialog act classification in digital character conversations&lt;/li&gt;
&lt;li&gt;Achieved real-time classification latency compatible with interactive agent deployment&lt;/li&gt;
&lt;li&gt;Provided insights into which dialog acts are most frequently misclassified in human-agent interaction, informing future system design&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="publication"&gt;Publication&lt;/h2&gt;
&lt;p&gt;P. Witzig, R. Constantin, N. Kovačević and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2024). &lt;em&gt;Multimodal Dialog Act Classification for Conversations With Digital Characters&lt;/em&gt;. Proceedings of the 6th International Conference on Conversational User Interfaces (CUI), Luxembourg, Luxembourg, July 08–10, 2024, pp. 1–14.&lt;/p&gt;</description></item><item><title>Virtual Psychotherapist</title><link>https://rafael-wampfler.github.io/projects/virtual-psychotherapist/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/virtual-psychotherapist/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Access to evidence-based psychotherapy remains severely limited worldwide — constrained by long waitlists, resource scarcity, and geographic disparities. The Virtual Psychotherapist project develops embodied conversational AI agents that complement clinical care by extending access, supporting therapeutic practice, and enabling scalable training.&lt;/p&gt;
&lt;p&gt;This initiative is conducted in close collaboration with the &lt;strong&gt;University of Lucerne&lt;/strong&gt;, providing clinical expertise and direct access to real therapy data.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Two distinct but complementary challenges motivate this project:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scaling patient access&lt;/strong&gt;: Many individuals who would benefit from psychotherapy cannot access it due to cost, waitlists, or geography. AI-based companions that operate between sessions, provide continuous support, and conduct structured therapeutic conversations could meaningfully improve outcomes at scale.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Improving therapist training&lt;/strong&gt;: Training clinicians in evidence-based interventions requires repeated practice with feedback — but opportunities for safe, standardized training are limited by ethical and resource constraints. Simulated patient systems powered by AI can provide unlimited deliberate practice in controlled settings.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="system-architecture"&gt;System Architecture&lt;/h2&gt;
&lt;p&gt;Both applications are built on a shared platform combining the following (a sketch of the retrieval step follows the list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Large language model-based dialogue&lt;/strong&gt;: State-of-the-art LLMs for contextually appropriate, therapeutically grounded response generation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Retrieval-augmented generation (RAG)&lt;/strong&gt;: Responses grounded in evidence-based therapeutic literature, reducing hallucinations and ensuring adherence to clinical frameworks&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Real-time psychological analysis&lt;/strong&gt;: Parallel processing pipelines that extract facts, detect psychological flexibility processes, recognize emotions, and monitor safety in real time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Embodied avatar presentation&lt;/strong&gt;: Synchronized speech synthesis and 3D avatar animation delivered through mobile and desktop applications&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Clinician oversight&lt;/strong&gt;: Structured analysis outputs accessible to supervising therapists, enabling human-in-the-loop clinical governance&lt;/li&gt;
&lt;/ul&gt;
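&lt;p&gt;For the retrieval-augmented generation component, the usual pattern is to embed the incoming turn, retrieve the closest passages from a corpus of therapeutic literature, and condition the language model on them. The snippet below is a generic, assumed sketch of that pattern; the embedding function, corpus, and prompt wording are illustrative only.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

def retrieve(query, passages, embed, k=3):
    """Return the k passages most similar to the query (cosine similarity).

    embed is any callable mapping a string to a 1-D numpy vector;
    passages is a list of therapy-literature snippets.
    """
    q = embed(query)
    q = q / (np.linalg.norm(q) + 1e-9)
    scores = []
    for passage in passages:
        v = embed(passage)
        scores.append(float(np.dot(q, v / (np.linalg.norm(v) + 1e-9))))
    top = np.argsort(scores)[-k:][::-1]
    return [passages[i] for i in top]

def grounded_prompt(user_turn, retrieved):
    sources = '\n'.join('- ' + p for p in retrieved)
    return ('Respond as an ACT-oriented therapeutic companion.\n'
            'Ground your reply in these excerpts:\n' + sources +
            '\nClient: ' + user_turn)
&lt;/code&gt;&lt;/pre&gt;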
&lt;h3 id="patient-facing-application"&gt;Patient-Facing Application&lt;/h3&gt;
&lt;p&gt;The patient-facing component enables individuals to conduct therapeutic conversations with an embodied avatar between sessions. The system follows &lt;strong&gt;process-based therapy&lt;/strong&gt; principles — particularly Acceptance and Commitment Therapy (ACT), an empirically supported approach targeting psychological flexibility through six core processes: acceptance, cognitive defusion, present-moment awareness, self-as-context, values clarification, and committed action.&lt;/p&gt;
&lt;p&gt;Critically, the system is designed for clinical supervision, not autonomous intervention. All session data is structured and accessible to the supervising therapist, who can monitor progress and intervene as needed.&lt;/p&gt;
&lt;p&gt;An evaluation against responses from professional psychotherapists found that the system&amp;rsquo;s responses were rated significantly higher on understanding, interpersonal effectiveness, collaboration, and ACT alignment; we nevertheless emphasize that clinical judgment and the therapeutic relationship remain irreplaceable.&lt;/p&gt;
&lt;h3 id="therapist-training-application"&gt;Therapist Training Application&lt;/h3&gt;
&lt;p&gt;The training application enables psychotherapists to practice and refine therapeutic techniques through role-play interactions with a simulated patient. Key features include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Clinically grounded patient simulation&lt;/strong&gt;: Virtual patient behavior conditioned on profiles derived from real therapy sessions, covering a range of clinical presentations and scenarios (suicidality, resistance, heightened anxiety, therapeutic rupture, and more)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Real-time ACT fidelity feedback&lt;/strong&gt;: An automated evaluator assesses each therapist utterance for adherence to ACT principles, providing immediate visual feedback and the option to retry alternative responses&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Configurable scenarios&lt;/strong&gt;: Therapists can select specific clinical scenarios to target their practice&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A systematic evaluation across 49 therapy transcripts identified GPT-4o-mini as the optimal feedback model, achieving the closest alignment with human supervisor ACT fidelity ratings.&lt;/p&gt;
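&lt;p&gt;Conceptually, the automated evaluator scores each therapist utterance against the six ACT processes and aggregates the scores over a session so they can be compared with supervisor ratings. The following is a hedged sketch of such an LLM-as-rater loop with a placeholder model call; it is not the deployed feedback system.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;ACT_PROCESSES = ['acceptance', 'defusion', 'present-moment awareness',
                 'self-as-context', 'values', 'committed action']

def rate_utterance(utterance, llm):
    """Ask a model to rate one therapist utterance per ACT process (0-2 scale).

    llm is any callable taking a prompt string and returning a dict that maps
    process names to numeric scores; the real feedback model is chosen empirically.
    """
    prompt = ('Rate the following therapist utterance for adherence to each ACT '
              'process on a 0-2 scale: ' + ', '.join(ACT_PROCESSES) +
              '\nUtterance: ' + utterance)
    return llm(prompt)

def session_fidelity(therapist_turns, llm):
    """Average per-process scores over all therapist turns in a transcript."""
    totals = {p: 0.0 for p in ACT_PROCESSES}
    for turn in therapist_turns:
        scores = rate_utterance(turn, llm)
        for p in ACT_PROCESSES:
            totals[p] += scores.get(p, 0.0)
    n = max(len(therapist_turns), 1)
    return {p: totals[p] / n for p in ACT_PROCESSES}
&lt;/code&gt;&lt;/pre&gt;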
&lt;h2 id="safety-and-ethics"&gt;Safety and Ethics&lt;/h2&gt;
&lt;p&gt;Safety is a primary design constraint. The system includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Crisis detection&lt;/strong&gt;: Explicit classification of suicidal ideation and self-harm signals, triggering immediate presentation of crisis resources and mandatory clinician review&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Unsafe-interaction detection&lt;/strong&gt;: Identification of conditions (e.g., active psychosis, mania) where LLM interaction may be counterproductive, with protocol-defined fallback responses&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Non-autonomous design&lt;/strong&gt;: The system is explicitly positioned as a complement to clinical care, not a replacement — structured to require and facilitate clinician oversight&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Patient-facing system responses rated significantly higher than human therapist responses by automated evaluation and expert psychotherapists across understanding, collaboration, and ACT alignment&lt;/li&gt;
&lt;li&gt;Therapist training simulation rated as realistic by practicing psychologists; turn-by-turn feedback shown to increase therapist awareness of intervention choices&lt;/li&gt;
&lt;li&gt;Automated ACT fidelity assessment achieves strong agreement with human expert ratings across 49 therapy transcripts&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="research-partners"&gt;Research Partners&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;University of Lucerne&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>RehaBot</title><link>https://rafael-wampfler.github.io/projects/rehabot/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/rehabot/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;RehaBot is an embodied conversational agent designed to support patients in rehabilitation and home-care settings. The avatar represents a medical professional — capable of conducting structured patient interactions, administering assessments, and delivering health education — to help bridge the gap between in-clinic care and independent recovery at home.&lt;/p&gt;
&lt;p&gt;This project is developed in collaboration with &lt;strong&gt;Inselspital Bern&lt;/strong&gt; (University Hospital) and &lt;strong&gt;Bern University of Applied Sciences (BFH)&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Rehabilitation after medical treatment — whether from stroke, orthopedic surgery, cardiac events, or chronic disease — requires sustained patient engagement over weeks or months. Yet contact with healthcare professionals is necessarily episodic, leaving long gaps during which patients must self-manage. Lack of guidance, motivation, and timely feedback during these intervals is a major driver of poor rehabilitation outcomes and preventable hospital readmissions.&lt;/p&gt;
&lt;p&gt;An embodied conversational agent that patients can interact with at home — to receive reminders, answer questions, conduct structured assessments, and provide health education — addresses this gap directly. By combining medical knowledge with empathetic communication and a human-like embodied presence, RehaBot aims to make professional-quality support continuously available between clinical appointments.&lt;/p&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;p&gt;RehaBot integrates several complementary AI capabilities within a unified embodied avatar system:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Medical knowledge integration&lt;/strong&gt;: Structured clinical knowledge relevant to the patient&amp;rsquo;s rehabilitation pathway, enabling accurate and safe responses to health questions&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Conversational assessment&lt;/strong&gt;: The ability to administer structured health questionnaires and functional assessments through natural spoken dialogue, adapting pacing and clarification to individual patient needs&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Empathetic communication&lt;/strong&gt;: Affective modeling that allows the agent to detect and respond to emotional signals in patient speech — frustration, discouragement, anxiety — with appropriate supportive responses&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Health education&lt;/strong&gt;: Accessible explanations of rehabilitation exercises, medication adherence, warning signs, and self-management strategies, adapted to the patient&amp;rsquo;s comprehension level&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Patient-professional interface&lt;/strong&gt;: Structured summaries of patient interactions accessible to supervising clinicians, supporting continuity of care and early detection of clinical deterioration&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The system is built on the same core platform as the Digital Einstein project, enabling rapid deployment of new capabilities while maintaining consistent embodied presentation quality.&lt;/p&gt;
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;p&gt;Recent work exploring embodied conversational interfaces for personal health data reflection demonstrates that users who engage with health information through a conversational agent formulate significantly more specific and actionable health plans compared to traditional dashboard-based exploration. Embodied conversation lowers the cognitive burden of interpreting health data and supports a shift from passive data inspection to active health sensemaking.&lt;/p&gt;
&lt;h2 id="research-partners"&gt;Research Partners&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Inselspital Bern&lt;/strong&gt; (University Hospital of Bern)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bern University of Applied Sciences (BFH)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>MIND — Cognitive Health Monitoring</title><link>https://rafael-wampfler.github.io/projects/mind/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/mind/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;The MIND (Monitoring and Supporting Cognitive Health) project creates a mobile platform for the early detection of cognitive impairment in aging populations. Early identification of decline — before it significantly impacts daily life — enables timely intervention, supports independent living, and improves long-term outcomes for older adults and their families.&lt;/p&gt;
&lt;p&gt;MIND is part of the &lt;strong&gt;Future Health Technologies 2 (FHT2)&lt;/strong&gt; initiative, a major research program funded by the &lt;strong&gt;National Research Foundation Singapore (NRF) CREATE&lt;/strong&gt; grant.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Cognitive decline — from mild cognitive impairment to dementia — affects hundreds of millions of people worldwide. Current clinical detection relies predominantly on periodic in-clinic assessments that are episodic, costly, and often catch decline only after substantial impairment has occurred. A continuously operating, passive monitoring system that can detect subtle early warning signs in everyday behavior could transform the standard of care.&lt;/p&gt;
&lt;p&gt;Two key insights motivate the MIND approach:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Movement and navigation encode cognition&lt;/strong&gt;: Changes in how people navigate their environment — route choices, wayfinding strategies, spatial memory — are among the earliest and most sensitive indicators of cognitive decline. Analyzing GPS and sensor traces at scale can reveal these changes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Language reflects cognitive health&lt;/strong&gt;: Subtle changes in vocabulary, sentence complexity, topic management, and response latency in everyday conversation are measurable markers of cognitive change, detectable through automated language analysis.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;p&gt;MIND integrates three complementary AI systems into a unified mobile platform:&lt;/p&gt;
&lt;h3 id="large-geospatial-models-lgms"&gt;Large Geospatial Models (LGMs)&lt;/h3&gt;
&lt;p&gt;Large Geospatial Models analyze longitudinal GPS and mobility traces to detect patterns indicative of cognitive change. These include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Route repetitiveness&lt;/strong&gt;: Increasing restriction of daily movement range&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Navigation errors&lt;/strong&gt;: Unusual detours or disorientation in familiar environments&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mobility diversity&lt;/strong&gt;: Changes in the variety of visited locations over time&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;LGMs are trained on large-scale mobility datasets and fine-tuned to identify individual-level deviations from baseline behavior.&lt;/p&gt;
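&lt;p&gt;Before any learned model, markers of this kind can be made concrete as simple statistics over a GPS trace; what a large geospatial model would then flag are deviations from the personal baseline of such statistics. The sketch below computes two illustrative features from (latitude, longitude) points.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import math
from collections import Counter

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def mobility_features(trace, cell=0.001):
    """Daily movement range and location diversity from a list of (lat, lon) points."""
    total_km = sum(haversine_km(a, b) for a, b in zip(trace, trace[1:]))
    cells = Counter((round(lat / cell), round(lon / cell)) for lat, lon in trace)
    return {'distance_km': total_km, 'distinct_places': len(cells)}

day = [(47.3769, 8.5417), (47.3781, 8.5400), (47.3769, 8.5417)]
print(mobility_features(day))
&lt;/code&gt;&lt;/pre&gt;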
&lt;h3 id="llms-for-conversational-cognitive-assessment"&gt;LLMs for Conversational Cognitive Assessment&lt;/h3&gt;
&lt;p&gt;Large language models analyze conversation transcripts from daily interactions with the MIND app, detecting cognitive markers such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reduced lexical diversity and increased use of filler words&lt;/li&gt;
&lt;li&gt;Difficulty with topic maintenance and coherence&lt;/li&gt;
&lt;li&gt;Slowed response generation and increased hesitation&lt;/li&gt;
&lt;li&gt;Confusion or confabulation in response to everyday questions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These markers are tracked longitudinally to identify meaningful change relative to the individual&amp;rsquo;s own baseline.&lt;/p&gt;
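&lt;p&gt;Some of these markers reduce to simple text statistics that can be tracked against an individual baseline. The sketch below computes two assumed examples, lexical diversity and filler-word rate, from a transcript string; the actual analysis relies on large language models rather than hand-crafted counts.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import re

FILLERS = {'um', 'uh', 'er', 'like'}

def language_markers(transcript):
    """Lexical diversity (type-token ratio) and filler rate for one transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return {'type_token_ratio': 0.0, 'filler_rate': 0.0}
    fillers = sum(1 for w in words if w in FILLERS)
    return {'type_token_ratio': len(set(words)) / len(words),
            'filler_rate': fillers / len(words)}

baseline = language_markers('I walked to the market and bought fresh bread.')
today = language_markers('Um, I, uh, went to the, um, the place.')
diversity_drop = baseline['type_token_ratio'] - today['type_token_ratio']
&lt;/code&gt;&lt;/pre&gt;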
&lt;h3 id="embodied-avatars-for-guided-assessment-and-training"&gt;Embodied Avatars for Guided Assessment and Training&lt;/h3&gt;
&lt;p&gt;An embodied conversational avatar conducts structured cognitive assessments — such as memory recall tasks, verbal fluency tests, and orientation questions — through natural spoken dialogue. The avatar also guides cognitive training exercises designed to maintain cognitive reserve.&lt;/p&gt;
&lt;p&gt;The embodied format is critical: older adults are more likely to engage regularly with an interactive, socially present agent than with a text-based questionnaire or passive sensor. The avatar adapts its communication style to individual users, adjusting vocabulary, pacing, and support level to ensure accessibility.&lt;/p&gt;
&lt;h3 id="integration-and-privacy"&gt;Integration and Privacy&lt;/h3&gt;
&lt;p&gt;All three systems operate on a shared mobile platform with strong privacy protections. Data is analyzed on-device where possible, and users maintain full control over what is shared with care providers. The platform produces structured reports for geriatricians and primary care physicians, enabling early clinical intervention.&lt;/p&gt;
&lt;h2 id="significance"&gt;Significance&lt;/h2&gt;
&lt;p&gt;MIND represents a convergence of several research threads — geospatial AI, conversational AI, affective computing, and embodied interaction — applied to one of the most pressing health challenges of our time. By enabling early, passive, continuous monitoring, the platform aims to support successful aging in place and to delay or prevent the transition to care dependency.&lt;/p&gt;
&lt;h2 id="research-partners-and-funding"&gt;Research Partners and Funding&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;National Research Foundation Singapore (NRF) CREATE&lt;/strong&gt; — Future Health Technologies 2 (FHT2)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bond University (Australia)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Collaboration with life sciences and medicine partners in Singapore&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Talking to Your Data: Exploring Embodied Conversation as an Interface for Personal Health Reflection</title><link>https://rafael-wampfler.github.io/publications/talking-to-your-data-2026/</link><pubDate>Mon, 23 Mar 2026 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/talking-to-your-data-2026/</guid><description/></item><item><title>PhonemeNet: A Transformer Pipeline for Text-Driven Facial Animation</title><link>https://rafael-wampfler.github.io/publications/phonemenet-2025/</link><pubDate>Wed, 03 Dec 2025 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/phonemenet-2025/</guid><description>
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;&lt;strong&gt;Best Paper Honorable Mention&lt;/strong&gt; — 18th ACM SIGGRAPH Conference on Motion, Interaction, and Games (MIG 2025)&lt;/p&gt;
&lt;/blockquote&gt;</description></item><item><title>egoEMOTION: Egocentric Vision and Physiological Signals for Emotion and Personality Recognition in Real-World Tasks</title><link>https://rafael-wampfler.github.io/publications/egoemotion-2025/</link><pubDate>Mon, 01 Dec 2025 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/egoemotion-2025/</guid><description/></item><item><title>Steering Narrative Agents through a Dynamic Cognitive Framework for Guided Emergent Storytelling</title><link>https://rafael-wampfler.github.io/publications/steering-narrative-agents-2025/</link><pubDate>Mon, 10 Nov 2025 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/steering-narrative-agents-2025/</guid><description/></item><item><title>A Joint Personality-Emotion Framework for Personality-Consistent Conversational Agents</title><link>https://rafael-wampfler.github.io/publications/personality-emotion-framework-2025/</link><pubDate>Tue, 16 Sep 2025 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/personality-emotion-framework-2025/</guid><description/></item><item><title>BEE: Belief-Value-Aligned, Explainable, and Extensible Cognitive Framework for Conversational Agents</title><link>https://rafael-wampfler.github.io/publications/bee-cognitive-framework-2025/</link><pubDate>Tue, 16 Sep 2025 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/bee-cognitive-framework-2025/</guid><description>
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;&lt;strong&gt;Best Paper Honorable Mention&lt;/strong&gt; — 25th International Conference on Intelligent Virtual Agents (IVA 2025)&lt;/p&gt;
&lt;/blockquote&gt;</description></item><item><title>A Platform for Interactive AI Character Experiences</title><link>https://rafael-wampfler.github.io/publications/platform-interactive-ai-2025/</link><pubDate>Sun, 10 Aug 2025 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/platform-interactive-ai-2025/</guid><description/></item><item><title>Immersive Conversations with Digital Einstein: Linking a Physical System and AI</title><link>https://rafael-wampfler.github.io/publications/digital-einstein-siggraph-asia-2024/</link><pubDate>Tue, 03 Dec 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/digital-einstein-siggraph-asia-2024/</guid><description/></item><item><title>EmoSpaceTime: Decoupling Emotion and Content through Contrastive Learning for Expressive 3D Speech Animation</title><link>https://rafael-wampfler.github.io/publications/emospace-time-2024/</link><pubDate>Thu, 21 Nov 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/emospace-time-2024/</guid><description/></item><item><title>On Multimodal Emotion Recognition for Human-Chatbot Interaction in the Wild</title><link>https://rafael-wampfler.github.io/publications/multimodal-emotion-recognition-2024/</link><pubDate>Mon, 04 Nov 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/multimodal-emotion-recognition-2024/</guid><description/></item><item><title>Chatbots With Attitude: Enhancing Chatbot Interactions Through Dynamic Personality Infusions</title><link>https://rafael-wampfler.github.io/publications/chatbots-with-attitude-2024/</link><pubDate>Mon, 08 Jul 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/chatbots-with-attitude-2024/</guid><description/></item><item><title>Multimodal Dialog Act Classification for Conversations With Digital Characters</title><link>https://rafael-wampfler.github.io/publications/dialog-act-classification-2024/</link><pubDate>Mon, 08 Jul 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/dialog-act-classification-2024/</guid><description/></item><item><title>The Personality Dimensions GPT-3 Expresses During Human-Chatbot Interactions</title><link>https://rafael-wampfler.github.io/publications/personality-dimensions-gpt3-2024/</link><pubDate>Sat, 01 Jun 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/personality-dimensions-gpt3-2024/</guid><description/></item><item><title>Artificial Intelligence for Digital Characters</title><link>https://rafael-wampfler.github.io/courses/ai-digital-characters/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/courses/ai-digital-characters/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;This lecture provides an overview of techniques to build conversational digital characters. The main components of conversational digital characters are introduced: chatbots, speech recognition, speech synthesis, and animation synthesis. Real-life applications of such digital characters are demonstrated through different use cases.&lt;/p&gt;
&lt;h2 id="content"&gt;Content&lt;/h2&gt;
&lt;p&gt;The lecture opens with basics on digital characters. Afterwards, the main components to build a conversational digital character are introduced. This includes the basics of natural language processing used to build a chatbot, speech recognition, speech synthesis, and animating a digital character based on motion capturing and deep learning. Further, autonomous agents based on knowledge graphs are covered. The lecture ends with real-life applications of digital characters.&lt;/p&gt;
&lt;h2 id="details"&gt;Details&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Level&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Graduate (MSc / PhD)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Semester&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Spring&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Institution&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ETH Zurich, D-INFK&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Role&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Course creator &amp;amp; lecturer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Years&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2024 – present&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</description></item><item><title>Example Talk: Recent Work</title><link>https://rafael-wampfler.github.io/slides/example/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/slides/example/</guid><description>&lt;!-- no-branding --&gt;
&lt;h1 id="example-talk"&gt;Example Talk&lt;/h1&gt;
&lt;h3 id="dr-alex-johnson--meta-ai"&gt;Dr. Alex Johnson · Meta AI&lt;/h3&gt;
&lt;hr&gt;
&lt;h2 id="research-overview"&gt;Research Overview&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Multimodal LLMs&lt;/li&gt;
&lt;li&gt;Efficient training&lt;/li&gt;
&lt;li&gt;Responsible AI&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="code--math"&gt;Code &amp;amp; Math&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;$$
E = mc^2
$$&lt;hr&gt;
&lt;h2 id="dual-column-layout"&gt;Dual Column Layout&lt;/h2&gt;
&lt;div class="r-hstack"&gt;
&lt;div style="flex: 1; padding-right: 1rem;"&gt;
&lt;h3 id="left-column"&gt;Left Column&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Point A&lt;/li&gt;
&lt;li&gt;Point B&lt;/li&gt;
&lt;li&gt;Point C&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style="flex: 1; padding-left: 1rem;"&gt;
&lt;h3 id="right-column"&gt;Right Column&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Detail 1&lt;/li&gt;
&lt;li&gt;Detail 2&lt;/li&gt;
&lt;li&gt;Detail 3&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;!-- Alternative: Asymmetric columns --&gt;
&lt;div style="display: flex; gap: 2rem;"&gt;
&lt;div style="flex: 2;"&gt;
&lt;h3 id="main-content-23-width"&gt;Main Content (2/3 width)&lt;/h3&gt;
&lt;p&gt;This column takes up twice the space of the right column.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;example&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;code works too&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;div style="flex: 1;"&gt;
&lt;h3 id="sidebar-13-width"&gt;Sidebar (1/3 width)&lt;/h3&gt;
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;br&gt;
Key points in smaller column&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2 id="image--text-layout"&gt;Image + Text Layout&lt;/h2&gt;
&lt;div class="r-hstack" style="align-items: center;"&gt;
&lt;div style="flex: 1;"&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="flex justify-center "&gt;
&lt;div class="w-full" &gt;&lt;img src="https://images.unsplash.com/photo-1708011271954-c0d2b3155ded?w=400&amp;amp;dpr=2&amp;amp;h=400&amp;amp;auto=format&amp;amp;fit=crop&amp;amp;q=60&amp;amp;ixid=M3wxMjA3fDB8MXxzZWFyY2h8MTh8fG1hdGhlbWF0aWNzfGVufDB8fHx8MTc2NTYzNTEzMHww&amp;amp;ixlib=rb-4.1.0" alt="" loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;/div&gt;
&lt;div style="flex: 1; padding-left: 2rem;"&gt;
&lt;h3 id="results"&gt;Results&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;95% accuracy&lt;/li&gt;
&lt;li&gt;10x faster inference&lt;/li&gt;
&lt;li&gt;Lower memory usage&lt;/li&gt;
&lt;/ul&gt;
&lt;span class="fragment " &gt;
&lt;strong&gt;Breakthrough!&lt;/strong&gt;
&lt;/span&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;h2 id="speaker-notes"&gt;Speaker Notes&lt;/h2&gt;
&lt;p&gt;Press &lt;strong&gt;S&lt;/strong&gt; to open presenter view with notes!&lt;/p&gt;
&lt;p&gt;This slide has hidden speaker notes below.&lt;/p&gt;
&lt;p&gt;Note:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This is a &lt;strong&gt;speaker note&lt;/strong&gt; (only visible in presenter view)&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;S&lt;/code&gt; key to open presenter console&lt;/li&gt;
&lt;li&gt;Perfect for remembering key talking points&lt;/li&gt;
&lt;li&gt;Can include reminders, timing, references&lt;/li&gt;
&lt;li&gt;Supports &lt;strong&gt;Markdown&lt;/strong&gt; formatting too!&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="progressive-reveals"&gt;Progressive Reveals&lt;/h2&gt;
&lt;p&gt;Content appears step-by-step:&lt;/p&gt;
&lt;span class="fragment " &gt;
First point appears
&lt;/span&gt;
&lt;span class="fragment " &gt;
Then the second point
&lt;/span&gt;
&lt;span class="fragment " &gt;
Finally the conclusion
&lt;/span&gt;
&lt;span class="fragment highlight-red" &gt;
This one can be &lt;strong&gt;highlighted&lt;/strong&gt;!
&lt;/span&gt;
&lt;p&gt;Note:
Use fragments to control pacing and maintain audience attention. Each fragment appears on click.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="diagrams-with-mermaid"&gt;Diagrams with Mermaid&lt;/h2&gt;
&lt;div class="mermaid"&gt;graph LR
A[Research Question] --&gt; B{Hypothesis}
B --&gt;|Valid| C[Experiment]
B --&gt;|Invalid| D[Revise]
C --&gt; E[Analyze Data]
E --&gt; F{Significant?}
F --&gt;|Yes| G[Publish]
F --&gt;|No| D
&lt;/div&gt;
&lt;p&gt;Perfect for: Workflows, architectures, processes&lt;/p&gt;
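&lt;p&gt;As a rough sketch, the diagram above can be written directly in the slide&amp;rsquo;s Markdown source as a fenced &lt;code&gt;mermaid&lt;/code&gt; code block (assuming the site&amp;rsquo;s Mermaid rendering is enabled):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;```mermaid
graph LR
    A[Research Question] --&amp;gt; B{Hypothesis}
    B --&amp;gt;|Valid| C[Experiment]
    B --&amp;gt;|Invalid| D[Revise]
    C --&amp;gt; E[Analyze Data]
    E --&amp;gt; F{Significant?}
    F --&amp;gt;|Yes| G[Publish]
    F --&amp;gt;|No| D
```&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;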
&lt;p&gt;Note:
Mermaid diagrams are created from simple text. They&amp;rsquo;re version-controllable and can be edited anywhere!&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="research-results"&gt;Research Results&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Accuracy&lt;/th&gt;
&lt;th&gt;Speed&lt;/th&gt;
&lt;th&gt;Memory&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Baseline&lt;/td&gt;
&lt;td&gt;87.3%&lt;/td&gt;
&lt;td&gt;1.0x&lt;/td&gt;
&lt;td&gt;2GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ours (v1)&lt;/td&gt;
&lt;td&gt;92.1%&lt;/td&gt;
&lt;td&gt;1.5x&lt;/td&gt;
&lt;td&gt;1.8GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Ours (v2)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;95.8%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2.3x&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.2GB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;&lt;strong&gt;Key Finding:&lt;/strong&gt; 8.5 percentage point accuracy gain over the baseline with a 40% memory reduction&lt;/p&gt;
&lt;/blockquote&gt;
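&lt;p&gt;For reference, a sketch of the Markdown source behind the table above (plain pipe syntax):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;| Model         | Accuracy  | Speed    | Memory    |
|---------------|-----------|----------|-----------|
| Baseline      | 87.3%     | 1.0x     | 2GB       |
| Ours (v1)     | 92.1%     | 1.5x     | 1.8GB     |
| **Ours (v2)** | **95.8%** | **2.3x** | **1.2GB** |
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;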
&lt;p&gt;Note:
Tables are perfect for comparative results. Markdown tables are simple and version-control friendly.&lt;/p&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
data-background-color="#1e3a8a"
&gt;
&lt;h2 id="custom-backgrounds"&gt;Custom Backgrounds&lt;/h2&gt;
&lt;p&gt;This slide has a &lt;strong&gt;blue background&lt;/strong&gt;!&lt;/p&gt;
&lt;p&gt;You can customize:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Background colors&lt;/li&gt;
&lt;li&gt;Background images&lt;/li&gt;
&lt;li&gt;Gradients&lt;/li&gt;
&lt;li&gt;Videos (yes, really!)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Use &lt;code&gt;{{&amp;lt; slide background-color=&amp;quot;#hex&amp;quot; &amp;gt;}}&lt;/code&gt;&lt;/p&gt;
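&lt;p&gt;A minimal sketch of how the shortcode sits in the Markdown source, using this slide&amp;rsquo;s background color (only the &lt;code&gt;background-color&lt;/code&gt; parameter shown above is used):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;---

{{&amp;lt; slide background-color=&amp;quot;#1e3a8a&amp;quot; &amp;gt;}}

## Custom Backgrounds

This slide has a **blue background**!
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;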
&lt;hr&gt;
&lt;h2 id="vertical-navigation"&gt;Vertical Navigation&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;There&amp;rsquo;s more content below! ⬇️&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Press the &lt;strong&gt;Down Arrow&lt;/strong&gt; to see substeps.&lt;/p&gt;
&lt;p&gt;Note:
This demonstrates Reveal.js&amp;rsquo;s vertical slide feature. Great for optional details or deep dives.&lt;/p&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
id="substep-1"
&gt;
&lt;h3 id="substep-1-details"&gt;Substep 1: Details&lt;/h3&gt;
&lt;p&gt;This is additional content in a vertical stack.&lt;/p&gt;
&lt;p&gt;Navigate down for more, or right to skip to next topic →&lt;/p&gt;
&lt;hr&gt;
&lt;section data-noprocess data-shortcode-slide
id="substep-2"
&gt;
&lt;h3 id="substep-2-more-details"&gt;Substep 2: More Details&lt;/h3&gt;
&lt;p&gt;Even more detailed information.&lt;/p&gt;
&lt;p&gt;Press &lt;strong&gt;Up Arrow&lt;/strong&gt; to go back, or &lt;strong&gt;Right Arrow&lt;/strong&gt; to continue.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="citations--quotes"&gt;Citations &amp;amp; Quotes&lt;/h2&gt;
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;&amp;ldquo;The best way to predict the future is to invent it.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;— Alan Kay&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Or reference research:&lt;/p&gt;
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;Recent work by Smith et al. (2024) demonstrates that Markdown-based slides improve reproducibility by 78% compared to proprietary formats&lt;sup id="fnref:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr&gt;
&lt;h2 id="media-youtube-videos"&gt;Media: YouTube Videos&lt;/h2&gt;
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;"&gt;
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/dQw4w9WgXcQ?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;p&gt;Note:
Embed YouTube videos with just the video ID. Perfect for demos, tutorials, or interviews.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="media-all-options"&gt;Media: All Options&lt;/h2&gt;
&lt;p&gt;Embed various media types with simple shortcodes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;YouTube&lt;/strong&gt;: &lt;code&gt;{{&amp;lt; youtube VIDEO_ID &amp;gt;}}&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bilibili&lt;/strong&gt;: &lt;code&gt;{{&amp;lt; bilibili id=&amp;quot;BV1...&amp;quot; &amp;gt;}}&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Local videos&lt;/strong&gt;: &lt;code&gt;{{&amp;lt; video src=&amp;quot;file.mp4&amp;quot; controls=&amp;quot;yes&amp;quot; &amp;gt;}}&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Audio&lt;/strong&gt;: &lt;code&gt;{{&amp;lt; audio src=&amp;quot;file.mp3&amp;quot; &amp;gt;}}&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Perfect for demos, interviews, tutorials, or podcasts!&lt;/p&gt;
&lt;p&gt;Note:
All media types work seamlessly in slides. Just use the appropriate shortcode.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="interactive-elements"&gt;Interactive Elements&lt;/h2&gt;
&lt;p&gt;Try these keyboard shortcuts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;→&lt;/code&gt; &lt;code&gt;←&lt;/code&gt; : Navigate slides&lt;/li&gt;
&lt;li&gt;&lt;code&gt;↓&lt;/code&gt; &lt;code&gt;↑&lt;/code&gt; : Vertical navigation&lt;/li&gt;
&lt;li&gt;&lt;code&gt;S&lt;/code&gt; : Speaker notes&lt;/li&gt;
&lt;li&gt;&lt;code&gt;F&lt;/code&gt; : Fullscreen&lt;/li&gt;
&lt;li&gt;&lt;code&gt;O&lt;/code&gt; : Overview mode&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/&lt;/code&gt; : Search&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ESC&lt;/code&gt; : Exit modes&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;!-- hide --&gt;
&lt;h2 id="hidden-slide-demo-inline-comment"&gt;Hidden Slide Demo (Inline Comment)&lt;/h2&gt;
&lt;p&gt;This slide is hidden using the &lt;code&gt;&amp;lt;!-- hide --&amp;gt;&lt;/code&gt; comment method.&lt;/p&gt;
&lt;p&gt;Perfect for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Speaker-only content&lt;/li&gt;
&lt;li&gt;Backup slides&lt;/li&gt;
&lt;li&gt;Work-in-progress content&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Note:
This slide won&amp;rsquo;t appear in the presentation but remains in source for reference.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="thanks"&gt;Thanks&lt;/h2&gt;
&lt;h3 id="questions"&gt;Questions?&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;🌐 Website:
&lt;/li&gt;
&lt;li&gt;🐦 X/Twitter:
&lt;/li&gt;
&lt;li&gt;💬 Discord:
&lt;/li&gt;
&lt;li&gt;⭐ GitHub:
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;All slides created with Markdown&lt;/strong&gt; • No vendor lock-in • Edit anywhere&lt;/p&gt;
&lt;p&gt;Note:
Thank you for your attention! Feel free to reach out with questions or contributions.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-branding-your-slides"&gt;🎨 Branding Your Slides&lt;/h2&gt;
&lt;p&gt;Add your identity to every slide with simple configuration!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What you can add:&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Position Options&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Logo&lt;/td&gt;
&lt;td&gt;top-left, top-right, bottom-left, bottom-right&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Title&lt;/td&gt;
&lt;td&gt;Same as above&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Author&lt;/td&gt;
&lt;td&gt;Same as above&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Footer Text&lt;/td&gt;
&lt;td&gt;Same + bottom-center&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Edit the &lt;code&gt;branding:&lt;/code&gt; section in your slide&amp;rsquo;s front matter (top of file).&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-adding-your-logo"&gt;📁 Adding Your Logo&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Place your logo in &lt;code&gt;assets/media/&lt;/code&gt; folder&lt;/li&gt;
&lt;li&gt;Use SVG format for best results (auto-adapts to any theme!)&lt;/li&gt;
&lt;li&gt;Add to front matter:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;branding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;logo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;your-logo.svg&amp;#34;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Must be in assets/media/&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;top-right&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;60px&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; SVGs with &lt;code&gt;fill=&amp;quot;currentColor&amp;quot;&lt;/code&gt; automatically match theme colors!&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-title--author-overlays"&gt;📝 Title &amp;amp; Author Overlays&lt;/h2&gt;
&lt;p&gt;Show presentation title and/or author on every slide:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;branding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;show&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;bottom-left&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;Short Title&amp;#34;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Optional: override long page title&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;author&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;show&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;bottom-right&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Author is auto-detected from page front matter (&lt;code&gt;author:&lt;/code&gt; or &lt;code&gt;authors:&lt;/code&gt;).&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-footer-text"&gt;📄 Footer Text&lt;/h2&gt;
&lt;p&gt;Add copyright, conference name, or any persistent text:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;branding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;footer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;© 2024 Your Name · ICML 2024&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;bottom-center&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Supports Markdown! Use &lt;code&gt;[Link](url)&lt;/code&gt; for clickable links.&lt;/p&gt;
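&lt;p&gt;Putting the pieces together, a sketch of a complete &lt;code&gt;branding:&lt;/code&gt; block in the slide front matter, combining the logo, title, author, and footer options shown above:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;branding:
  logo:
    filename: &amp;quot;your-logo.svg&amp;quot;   # must be in assets/media/
    position: &amp;quot;top-right&amp;quot;
    width: &amp;quot;60px&amp;quot;
  title:
    show: true
    position: &amp;quot;bottom-left&amp;quot;
    text: &amp;quot;Short Title&amp;quot;
  author:
    show: true
    position: &amp;quot;bottom-right&amp;quot;
  footer:
    text: &amp;quot;© 2024 Your Name · ICML 2024&amp;quot;
    position: &amp;quot;bottom-center&amp;quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;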
&lt;hr&gt;
&lt;!-- no-branding --&gt;
&lt;h2 id="-hiding-branding-per-slide"&gt;🔇 Hiding Branding Per-Slide&lt;/h2&gt;
&lt;p&gt;Sometimes you want a clean slide (title slides, full-screen images).&lt;/p&gt;
&lt;p&gt;Add this comment at the &lt;strong&gt;start&lt;/strong&gt; of your slide content:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&amp;lt;!-- no-branding --&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="gu"&gt;## My Clean Slide
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;Content here...
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;☝️ &lt;strong&gt;This slide uses &lt;code&gt;&amp;lt;!-- no-branding --&amp;gt;&lt;/code&gt;&lt;/strong&gt; — notice no logo or overlays!&lt;/p&gt;
&lt;hr&gt;
&lt;!-- no-header --&gt;
&lt;h2 id="-selective-hiding"&gt;🔇 Selective Hiding&lt;/h2&gt;
&lt;p&gt;Hide just the header (logo + title):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&amp;lt;!-- no-header --&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Or just the footer (author + footer text):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&amp;lt;!-- no-footer --&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;☝️ &lt;strong&gt;This slide uses &lt;code&gt;&amp;lt;!-- no-header --&amp;gt;&lt;/code&gt;&lt;/strong&gt; — footer still visible below!&lt;/p&gt;
&lt;hr&gt;
&lt;!-- no-footer --&gt;
&lt;h2 id="-quick-reference"&gt;✅ Quick Reference&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Comment&lt;/th&gt;
&lt;th&gt;Hides&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;&amp;lt;!-- no-branding --&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Everything (logo, title, author, footer)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;&amp;lt;!-- no-header --&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Logo + Title overlay&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;&amp;lt;!-- no-footer --&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Author + Footer text&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;☝️ &lt;strong&gt;This slide uses &lt;code&gt;&amp;lt;!-- no-footer --&amp;gt;&lt;/code&gt;&lt;/strong&gt; — logo still visible above!&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-get-started"&gt;🚀 Get Started&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Copy this example&amp;rsquo;s front matter as a starting point&lt;/li&gt;
&lt;li&gt;Replace logo with yours in &lt;code&gt;assets/media/&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Customize positions and text&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;&amp;lt;!-- no-branding --&amp;gt;&lt;/code&gt; for special slides&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Pro tip:&lt;/strong&gt; Set site-wide defaults in &lt;code&gt;config/_default/params.yaml&lt;/code&gt; under &lt;code&gt;slides.branding&lt;/code&gt;!&lt;/p&gt;
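&lt;p&gt;A sketch of what such site-wide defaults might look like in &lt;code&gt;config/_default/params.yaml&lt;/code&gt;, assuming the &lt;code&gt;slides.branding&lt;/code&gt; block mirrors the per-slide keys above:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;slides:
  branding:
    logo:
      filename: &amp;quot;your-logo.svg&amp;quot;   # served from assets/media/
      position: &amp;quot;top-right&amp;quot;
      width: &amp;quot;60px&amp;quot;
    footer:
      text: &amp;quot;© 2024 Your Name&amp;quot;
      position: &amp;quot;bottom-center&amp;quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;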
&lt;div class="footnotes" role="doc-endnotes"&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;Smith, J. et al. (2024). &lt;em&gt;Open Science Presentations&lt;/em&gt;. Nature Methods.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</description></item><item><title>Experience</title><link>https://rafael-wampfler.github.io/experience/</link><pubDate>Tue, 24 Oct 2023 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/experience/</guid><description/></item><item><title>Personality Trait Recognition Based on Smartphone Typing Characteristics in the Wild</title><link>https://rafael-wampfler.github.io/publications/personality-trait-recognition-2023/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/personality-trait-recognition-2023/</guid><description/></item><item><title>Seminar on Digital Humans</title><link>https://rafael-wampfler.github.io/courses/seminar-digital-humans/</link><pubDate>Thu, 01 Sep 2022 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/courses/seminar-digital-humans/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;This seminar covers advanced topics in digital humans with a focus on the latest research results. Topics include estimating human pose and motion from images, human motion synthesis, learning-based human avatar creation, learning neural implicit representations for humans, modeling, animation, and artificial intelligence for digital characters, among others. The seminar is based on a curated selection of research papers.&lt;/p&gt;
&lt;h2 id="objectives"&gt;Objectives&lt;/h2&gt;
&lt;p&gt;The goal is to gain an overview of current research topics in the field of digital humans and to improve presentation and critical-analysis skills.&lt;/p&gt;
&lt;h2 id="format"&gt;Format&lt;/h2&gt;
&lt;p&gt;This seminar covers advanced topics in digital humans, including both seminal research papers and the latest research results. A collection of research papers is selected, covering topics such as estimating human pose and motion from images, human motion synthesis, learning-based human avatar creation, learning neural implicit representations for humans, modeling, animation, and artificial intelligence for digital characters. Each student presents one paper to the class and leads a discussion about it. All students read the papers and participate in the discussions.&lt;/p&gt;
&lt;h2 id="details"&gt;Details&lt;/h2&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Level&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Graduate (MSc / PhD)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Semester&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fall&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Institution&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ETH Zurich, D-INFK&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Role&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Course creator &amp;amp; lecturer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Years&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2022 – present&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;</description></item><item><title>Affective State Prediction from Smartphone Touch and Sensor Data in the Wild</title><link>https://rafael-wampfler.github.io/publications/affective-state-smartphone-2022/</link><pubDate>Sat, 30 Apr 2022 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/affective-state-smartphone-2022/</guid><description/></item><item><title>Image Reconstruction of Tablet Front Camera Recordings in Educational Settings</title><link>https://rafael-wampfler.github.io/publications/image-reconstruction-tablet-2020/</link><pubDate>Fri, 10 Jul 2020 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/image-reconstruction-tablet-2020/</guid><description/></item><item><title>Glyph-Based Visualization of Affective States</title><link>https://rafael-wampfler.github.io/publications/glyph-visualization-2020/</link><pubDate>Mon, 25 May 2020 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/glyph-visualization-2020/</guid><description/></item><item><title>Affective State Prediction Based on Semi-Supervised Learning from Smartphone Touch Data</title><link>https://rafael-wampfler.github.io/publications/affective-state-semi-supervised-2020/</link><pubDate>Sat, 25 Apr 2020 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/affective-state-semi-supervised-2020/</guid><description/></item><item><title>Affective State Prediction in a Mobile Setting using Wearable Biometric Sensors and Stylus</title><link>https://rafael-wampfler.github.io/publications/affective-state-mobile-2019/</link><pubDate>Tue, 02 Jul 2019 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/affective-state-mobile-2019/</guid><description/></item><item><title>Efficient Feature Embeddings for Student Classification with Variational Auto-encoders</title><link>https://rafael-wampfler.github.io/publications/feature-embeddings-2017/</link><pubDate>Sun, 25 Jun 2017 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/feature-embeddings-2017/</guid><description/></item></channel></rss>