<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Large Language Models | Dr. Rafael Wampfler</title><link>https://rafael-wampfler.github.io/tags/large-language-models/</link><atom:link href="https://rafael-wampfler.github.io/tags/large-language-models/index.xml" rel="self" type="application/rss+xml"/><description>Large Language Models</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 23 Mar 2026 00:00:00 +0000</lastBuildDate><image><url>https://rafael-wampfler.github.io/media/icon_hu_d100f07c298b9e73.png</url><title>Large Language Models</title><link>https://rafael-wampfler.github.io/tags/large-language-models/</link></image><item><title>Personality &amp; Cognitive Architectures for Conversational Agents</title><link>https://rafael-wampfler.github.io/projects/personality-cognitive/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/personality-cognitive/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Standard large language models (LLMs) are stateless by design: each response is generated without a persistent sense of self, stable values, or accumulated memory of prior interactions. This makes them ill-suited for applications requiring a believable, consistent agent personality — such as embodied historical figures, therapeutic companions, or narrative characters. This project develops deep learning-based cognitive frameworks that address this limitation.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;For conversational agents to be genuinely useful in education, mental health, entertainment, and public engagement, they must do more than generate fluent text. They must maintain:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Characterological consistency&lt;/strong&gt;: A stable personality that users recognize and can model predictively&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Value alignment&lt;/strong&gt;: Responses grounded in a coherent set of beliefs and priorities&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Epistemic coherence&lt;/strong&gt;: Awareness of what the agent knows and does not know&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Emotional expressivity&lt;/strong&gt;: Affect that reflects the agent&amp;rsquo;s personality and conversational context&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Achieving all of this while remaining interpretable and safe requires principled architectural choices rather than prompt engineering alone.&lt;/p&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;h3 id="the-bee-cognitive-framework-iva-2025"&gt;The BEE Cognitive Framework (IVA 2025)&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;Belief-Value-Aligned, Explainable, and Extensible (BEE)&lt;/strong&gt; cognitive framework provides a structured architecture for conversational agents. BEE decomposes agent cognition into explicit modules:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Belief management&lt;/strong&gt;: A persistent, queryable representation of the agent&amp;rsquo;s knowledge and world model&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Value alignment&lt;/strong&gt;: A constraint layer that ensures generated responses conform to the agent&amp;rsquo;s ethical commitments and personality&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Explainability&lt;/strong&gt;: Transparent reasoning traces that allow developers and supervisors to audit agent behavior&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Extensibility&lt;/strong&gt;: A modular design that supports new knowledge domains and personality profiles without retraining&lt;/li&gt;
&lt;/ul&gt;
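&lt;p&gt;As a rough illustration of this decomposition, the sketch below wires hypothetical belief, value, and trace modules around a pluggable generation function. All names (&lt;code&gt;BeliefStore&lt;/code&gt;, &lt;code&gt;ValueFilter&lt;/code&gt;, &lt;code&gt;Agent&lt;/code&gt;) are illustrative assumptions, not the actual BEE implementation.&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Hypothetical sketch: module names (BeliefStore, ValueFilter, Agent) are
# illustrative and do not come from the actual BEE implementation.

@dataclass
class BeliefStore:
    """Persistent, queryable representation of the agent's world model."""
    facts: dict = field(default_factory=dict)

    def query(self, key):
        return self.facts.get(key, "unknown")

@dataclass
class ValueFilter:
    """Constraint layer: rejects drafts that violate the agent's commitments."""
    forbidden: set = field(default_factory=set)

    def permits(self, draft):
        return not any(term in draft.lower() for term in self.forbidden)

@dataclass
class Agent:
    beliefs: BeliefStore
    values: ValueFilter
    trace: list = field(default_factory=list)  # explainability: audit log

    def respond(self, user_input, draft_fn):
        draft = draft_fn(user_input, self.beliefs)  # e.g. an LLM call
        ok = self.values.permits(draft)
        self.trace.append({"input": user_input, "draft": draft, "passed": ok})
        return draft if ok else "I would rather not speak of that."

agent = Agent(BeliefStore({"era": "19th century"}), ValueFilter({"violence"}))
reply = agent.respond("Which era are you from?",
                      lambda q, b: "I live in the " + b.query("era") + ".")
```

&lt;p&gt;The &lt;code&gt;trace&lt;/code&gt; list stands in for the explainability module: every draft and its value-check outcome remains auditable after the fact.&lt;/p&gt;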
&lt;p&gt;BEE received the &lt;strong&gt;Best Paper Honorable Mention&lt;/strong&gt; at IVA 2025.&lt;/p&gt;
&lt;h3 id="joint-personality-emotion-framework-iva-2025"&gt;Joint Personality-Emotion Framework (IVA 2025)&lt;/h3&gt;
&lt;p&gt;The &lt;strong&gt;Joint Personality-Emotion Framework&lt;/strong&gt; unifies personality modeling (using Big Five trait representations) with momentary emotional state tracking in a single learned architecture. By jointly modeling personality and emotion — which are deeply intertwined in human behavior — the system produces responses that feel emotionally authentic and characterologically stable. Contrastive learning strategies decouple emotion from semantic content, allowing the same underlying personality to express itself across diverse emotional registers.&lt;/p&gt;
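&lt;p&gt;To make the decoupling idea concrete, the sketch below shows an InfoNCE-style contrastive loss in which two utterances sharing an emotion label form an anchor-positive pair while other emotions serve as negatives. This is an illustrative toy with hand-made embeddings, not the paper&amp;rsquo;s actual training objective.&lt;/p&gt;

```python
import math

# Minimal InfoNCE-style contrastive loss sketch (illustrative only):
# embeddings of utterances sharing an emotion label are positives,
# embeddings of other emotions are negatives.

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log softmax of the anchor-positive similarity among all candidates."""
    logits = [cos(anchor, positive) / temperature] + [
        cos(anchor, n) / temperature for n in negatives
    ]
    m = max(logits)  # subtract max for numerical stability
    log_denom = m + math.log(sum(math.exp(x - m) for x in logits))
    return -(logits[0] - log_denom)

# The same emotion ("joy") expressed over different semantic content:
joy_a, joy_b = [0.9, 0.1, 0.0], [0.8, 0.2, 0.1]
sad = [0.0, 0.1, 0.9]

loss_aligned = info_nce(joy_a, joy_b, [sad])   # low: positives agree
loss_confused = info_nce(joy_a, sad, [joy_b])  # high: wrong pairing
```

&lt;p&gt;Minimizing such a loss pulls same-emotion utterances together in embedding space regardless of their topic, which is the sense in which emotion is decoupled from semantic content.&lt;/p&gt;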
&lt;h3 id="personality-infusion-via-chatbot-interactions-cui-2024-imwut-2024"&gt;Personality Infusion via Chatbot Interactions (CUI 2024, IMWUT 2024)&lt;/h3&gt;
&lt;p&gt;We have extensively studied how personality manifests in conversational AI systems. Key findings include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What personality dimensions does GPT-3 express&lt;/strong&gt; during human-chatbot interactions, and how do these map to the Big Five personality taxonomy (IMWUT 2024)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dynamic personality infusion&lt;/strong&gt; — how to modulate expressed personality in real time by conditioning language model generation on personality embeddings (CUI 2024)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="narrative-agents-and-emergent-storytelling-aaai-aiide-2025"&gt;Narrative Agents and Emergent Storytelling (AAAI AIIDE 2025)&lt;/h3&gt;
&lt;p&gt;In the domain of interactive narrative, we developed a &lt;strong&gt;Dynamic Cognitive Framework for Guided Emergent Storytelling&lt;/strong&gt; that enables narrative agents to pursue coherent story arcs while remaining responsive to player input. The framework uses explicit belief-desire-intention representations to steer language model generation toward narratively coherent outcomes without sacrificing interactivity.&lt;/p&gt;
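&lt;p&gt;A minimal sketch of such a belief-desire-intention loop, with invented story state and goals (none of the names are taken from the actual framework): beliefs are updated from player input, then the highest-priority desire whose preconditions hold becomes the next narrative intention.&lt;/p&gt;

```python
# Hypothetical BDI sketch: the agent re-selects an intention each turn,
# balancing its story-arc desires against the player's latest input.

beliefs = {"player_location": "village", "artifact_found": False}
desires = [  # (goal, priority) pairs driving the story arc
    ("reveal_artifact", 2),
    ("introduce_rival", 1),
]

def update_beliefs(beliefs, player_input):
    if "cave" in player_input:
        beliefs["player_location"] = "cave"
    return beliefs

def select_intention(beliefs, desires):
    # Pursue the highest-priority desire whose preconditions hold.
    for goal, _priority in sorted(desires, key=lambda d: -d[1]):
        if goal == "reveal_artifact" and beliefs["player_location"] == "cave":
            return "describe the glowing artifact in the cave"
        if goal == "introduce_rival":
            return "have the rival adventurer appear"
    return "continue free exploration"

beliefs = update_beliefs(beliefs, "I walk into the cave")
intention = select_intention(beliefs, desires)
```

&lt;p&gt;The selected intention then conditions language model generation for the next story beat, so the arc advances without overriding the player&amp;rsquo;s choices.&lt;/p&gt;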
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;BEE framework achieved Best Paper Honorable Mention at IVA 2025&lt;/li&gt;
&lt;li&gt;Joint Personality-Emotion Framework demonstrated stable personality maintenance across extended dialogues with diverse user populations&lt;/li&gt;
&lt;li&gt;IMWUT 2024 paper provided the first large-scale empirical characterization of GPT-3&amp;rsquo;s expressed personality dimensions&lt;/li&gt;
&lt;li&gt;Dynamic narrative framework enabled coherent 30+ turn story experiences with novel player choices&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="publications"&gt;Publications&lt;/h2&gt;
&lt;p&gt;N. Kovačević, C. Holz, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2025). &lt;em&gt;A Joint Personality-Emotion Framework for Personality-Consistent Conversational Agents&lt;/em&gt;. In Proceedings of the 25th International Conference on Intelligent Virtual Agents (IVA &amp;lsquo;25), Berlin, Germany, September 16–19, 2025, pp. 1–9. &lt;strong&gt;Best Paper Honorable Mention.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;C. Yang, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2025). &lt;em&gt;BEE: Belief-Value-Aligned, Explainable, and Extensible Cognitive Framework for Conversational Agents&lt;/em&gt;. In Proceedings of the 25th International Conference on Intelligent Virtual Agents (IVA &amp;lsquo;25), Berlin, Germany, September 16–19, 2025, pp. 1–9. &lt;strong&gt;Best Paper Honorable Mention.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;C. Yang, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2025). &lt;em&gt;Steering Narrative Agents through a Dynamic Cognitive Framework for Guided Emergent Storytelling&lt;/em&gt;. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), Edmonton, Canada, November 10–14, 2025, pp. 1–11.&lt;/p&gt;
&lt;p&gt;N. Kovačević, C. Holz, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2024). &lt;em&gt;The Personality Dimensions GPT-3 Expresses During Human-Chatbot Interactions&lt;/em&gt;. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 8, no. 2, pp. 1–36.&lt;/p&gt;
&lt;p&gt;N. Kovačević, T. Boschung, C. Holz, M. Gross and &lt;strong&gt;R. Wampfler&lt;/strong&gt; (2024). &lt;em&gt;Chatbots With Attitude: Enhancing Chatbot Interactions Through Dynamic Personality Infusions&lt;/em&gt;. Proceedings of the 6th International Conference on Conversational User Interfaces (CUI), Luxembourg, July 08–10, 2024, pp. 1–16.&lt;/p&gt;</description></item><item><title>Virtual Psychotherapist</title><link>https://rafael-wampfler.github.io/projects/virtual-psychotherapist/</link><pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/virtual-psychotherapist/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;Access to evidence-based psychotherapy remains severely limited worldwide — constrained by long waitlists, resource scarcity, and geographic disparities. The Virtual Psychotherapist project develops embodied conversational AI agents that complement clinical care by extending access, supporting therapeutic practice, and enabling scalable training.&lt;/p&gt;
&lt;p&gt;This initiative is conducted in close collaboration with the &lt;strong&gt;University of Lucerne&lt;/strong&gt;, providing clinical expertise and direct access to real therapy data.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Two distinct but complementary challenges motivate this project:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Scaling patient access&lt;/strong&gt;: Many individuals who would benefit from psychotherapy cannot access it due to cost, waitlists, or geography. AI-based companions that operate between sessions, provide continuous support, and conduct structured therapeutic conversations could meaningfully improve outcomes at scale.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Improving therapist training&lt;/strong&gt;: Training clinicians in evidence-based interventions requires repeated practice with feedback — but opportunities for safe, standardized training are limited by ethical and resource constraints. Simulated patient systems powered by AI can provide unlimited deliberate practice in controlled settings.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="system-architecture"&gt;System Architecture&lt;/h2&gt;
&lt;p&gt;Both applications are built on a shared platform combining:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Large language model-based dialogue&lt;/strong&gt;: State-of-the-art LLMs for contextually appropriate, therapeutically grounded response generation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Retrieval-augmented generation (RAG)&lt;/strong&gt;: Responses grounded in evidence-based therapeutic literature, reducing hallucinations and ensuring adherence to clinical frameworks&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Real-time psychological analysis&lt;/strong&gt;: Parallel processing pipelines that extract facts, detect psychological flexibility processes, recognize emotions, and monitor safety in real time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Embodied avatar presentation&lt;/strong&gt;: Synchronized speech synthesis and 3D avatar animation delivered through mobile and desktop applications&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Clinician oversight&lt;/strong&gt;: Structured analysis outputs accessible to supervising therapists, enabling human-in-the-loop clinical governance&lt;/li&gt;
&lt;/ul&gt;
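&lt;p&gt;The retrieval step of such a RAG pipeline can be sketched with a tiny TF-IDF ranking over an invented corpus of ACT snippets; production systems typically use dense embedding retrieval over a much larger curated literature base.&lt;/p&gt;

```python
import math
from collections import Counter

# Minimal sketch of the retrieval step in a RAG pipeline. The corpus and
# scoring are illustrative stand-ins, not the deployed system.

corpus = [
    "Cognitive defusion helps clients observe thoughts without fusion.",
    "Values clarification identifies what matters most to the client.",
    "Committed action means taking steps guided by chosen values.",
]

def tokenize(text):
    return [w.strip(".,").lower() for w in text.split()]

def tfidf_score(query, doc, corpus):
    q, d = tokenize(query), Counter(tokenize(doc))
    n = len(corpus)
    score = 0.0
    for term in q:
        df = sum(1 for c in corpus if term in tokenize(c))
        if df:
            score += d[term] * math.log((n + 1) / (df + 1))
    return score

def retrieve(query, corpus, k=1):
    ranked = sorted(corpus, key=lambda d: -tfidf_score(query, d, corpus))
    return ranked[:k]

passages = retrieve("values clarification exercise", corpus)
prompt = "Ground your reply in: " + " ".join(passages)
```

&lt;p&gt;Prepending the retrieved passages to the generation prompt is what grounds the response in the therapeutic literature and curbs hallucination.&lt;/p&gt;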
&lt;h3 id="patient-facing-application"&gt;Patient-Facing Application&lt;/h3&gt;
&lt;p&gt;The patient-facing component enables individuals to conduct therapeutic conversations with an embodied avatar between sessions. The system follows &lt;strong&gt;process-based therapy&lt;/strong&gt; principles — particularly Acceptance and Commitment Therapy (ACT), an empirically supported approach targeting psychological flexibility through six core processes: acceptance, cognitive defusion, present-moment awareness, self-as-context, values clarification, and committed action.&lt;/p&gt;
&lt;p&gt;Critically, the system is designed for clinical supervision, not autonomous intervention. All session data is structured and accessible to the supervising therapist, who can monitor progress and intervene as needed.&lt;/p&gt;
&lt;p&gt;In an evaluation against responses from professional psychotherapists, the system&amp;rsquo;s responses were rated significantly higher on understanding, interpersonal effectiveness, collaboration, and ACT alignment. Even so, clinical judgment and the therapeutic relationship remain irreplaceable.&lt;/p&gt;
&lt;h3 id="therapist-training-application"&gt;Therapist Training Application&lt;/h3&gt;
&lt;p&gt;The training application enables psychotherapists to practice and refine therapeutic techniques through role-play interactions with a simulated patient. Key features include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Clinically grounded patient simulation&lt;/strong&gt;: Virtual patient behavior conditioned on profiles derived from real therapy sessions, covering a range of clinical presentations and scenarios (suicidality, resistance, heightened anxiety, therapeutic rupture, and more)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Real-time ACT fidelity feedback&lt;/strong&gt;: An automated evaluator assesses each therapist utterance for adherence to ACT principles, providing immediate visual feedback and the option to retry alternative responses&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Configurable scenarios&lt;/strong&gt;: Therapists can select specific clinical scenarios to target their practice&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A systematic evaluation across 49 therapy transcripts identified GPT-4o-mini as the best-performing feedback model, achieving the closest alignment with human supervisor ACT fidelity ratings.&lt;/p&gt;
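&lt;p&gt;Agreement between an automated rater and human supervisors can be quantified with chance-corrected statistics such as Cohen&amp;rsquo;s kappa. The sketch below uses made-up labels and may differ from the metric used in the actual evaluation.&lt;/p&gt;

```python
from collections import Counter

# Illustrative agreement check between automated and human fidelity labels.
# The label set and data are invented for demonstration.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label at random.
    expected = sum((ca[lab] / n) * (cb[lab] / n) for lab in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

model = ["adherent", "adherent", "non-adherent", "adherent", "non-adherent"]
human = ["adherent", "adherent", "non-adherent", "non-adherent", "non-adherent"]
kappa = cohens_kappa(model, human)
```

&lt;p&gt;Kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which makes it a stricter yardstick than raw accuracy for class-imbalanced fidelity labels.&lt;/p&gt;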
&lt;h2 id="safety-and-ethics"&gt;Safety and Ethics&lt;/h2&gt;
&lt;p&gt;Safety is a primary design constraint. The system includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Crisis detection&lt;/strong&gt;: Explicit classification of suicidal ideation and self-harm signals, triggering immediate presentation of crisis resources and mandatory clinician review&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Unsafe-interaction detection&lt;/strong&gt;: Identification of conditions (e.g., active psychosis, mania) where LLM interaction may be counterproductive, with protocol-defined fallback responses&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Non-autonomous design&lt;/strong&gt;: The system is explicitly positioned as a complement to clinical care, not a replacement — structured to require and facilitate clinician oversight&lt;/li&gt;
&lt;/ul&gt;
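&lt;p&gt;The layered gating logic can be sketched as below. This is illustrative only: the term lists and routing are placeholders, and deployed systems rely on validated classifiers and clinical escalation protocols rather than keyword matching.&lt;/p&gt;

```python
# Illustrative-only sketch of a layered safety gate that escalates to
# clinician review. Term lists are placeholders, not a clinical screen.

CRISIS_TERMS = {"suicide", "self-harm", "kill myself"}
UNSAFE_CONDITION_TERMS = {"hearing voices"}

def safety_gate(utterance):
    text = utterance.lower()
    if any(term in text for term in CRISIS_TERMS):
        # Crisis signals: show resources immediately, flag for review.
        return {"action": "show_crisis_resources", "clinician_review": True}
    if any(term in text for term in UNSAFE_CONDITION_TERMS):
        # Conditions where LLM interaction may be counterproductive.
        return {"action": "protocol_fallback", "clinician_review": True}
    return {"action": "continue_session", "clinician_review": False}

decision = safety_gate("Lately I have thoughts about suicide")
```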
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Patient-facing system responses rated significantly higher than human therapist responses by automated evaluation and expert psychotherapists across understanding, collaboration, and ACT alignment&lt;/li&gt;
&lt;li&gt;Therapist training simulation rated as realistic by practicing psychologists; turn-by-turn feedback shown to increase therapist awareness of intervention choices&lt;/li&gt;
&lt;li&gt;Automated ACT fidelity assessment achieves strong agreement with human expert ratings across 49 therapy transcripts&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="research-partners"&gt;Research Partners&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;University of Lucerne&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>MIND — Cognitive Health Monitoring</title><link>https://rafael-wampfler.github.io/projects/mind/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/projects/mind/</guid><description>&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;The MIND (Monitoring and Supporting Cognitive Health) project creates a mobile platform for the early detection of cognitive impairment in aging populations. Early identification of decline — before it significantly impacts daily life — enables timely intervention, supports independent living, and improves long-term outcomes for older adults and their families.&lt;/p&gt;
&lt;p&gt;MIND is part of the &lt;strong&gt;Future Health Technologies 2 (FHT2)&lt;/strong&gt; initiative, a major research program funded by the &lt;strong&gt;National Research Foundation Singapore (NRF) CREATE&lt;/strong&gt; grant.&lt;/p&gt;
&lt;h2 id="motivation"&gt;Motivation&lt;/h2&gt;
&lt;p&gt;Cognitive decline — from mild cognitive impairment to dementia — affects hundreds of millions of people worldwide. Current clinical detection relies predominantly on in-clinic assessments that are episodic, costly, and often catch decline only after substantial impairment has occurred. A continuously operating, passive monitoring system that can detect subtle early warning signs in everyday behavior could transform the standard of care.&lt;/p&gt;
&lt;p&gt;Two key insights motivate the MIND approach:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Movement and navigation encode cognition&lt;/strong&gt;: Changes in how people navigate their environment — route choices, wayfinding strategies, spatial memory — are among the earliest and most sensitive indicators of cognitive decline. Analyzing GPS and sensor traces at scale can reveal these changes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Language reflects cognitive health&lt;/strong&gt;: Subtle changes in vocabulary, sentence complexity, topic management, and response latency in everyday conversation are measurable markers of cognitive change, detectable through automated language analysis.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id="approach"&gt;Approach&lt;/h2&gt;
&lt;p&gt;MIND integrates three complementary AI systems into a unified mobile platform:&lt;/p&gt;
&lt;h3 id="large-geospatial-models-lgms"&gt;Large Geospatial Models (LGMs)&lt;/h3&gt;
&lt;p&gt;Large Geospatial Models analyze longitudinal GPS and mobility traces to detect patterns indicative of cognitive change. These include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Route repetitiveness&lt;/strong&gt;: Increasing restriction of daily movement range&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Navigation errors&lt;/strong&gt;: Unusual detours or disorientation in familiar environments&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mobility diversity&lt;/strong&gt;: Changes in the variety of visited locations over time&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;LGMs are trained on large-scale mobility datasets and fine-tuned to identify individual-level deviations from baseline behavior.&lt;/p&gt;
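&lt;p&gt;One such marker, mobility diversity, can be sketched as the Shannon entropy of a person&amp;rsquo;s discretized location visits: a sustained drop relative to their own baseline would flag a shrinking movement range. The location labels below are illustrative stand-ins for clustered GPS fixes.&lt;/p&gt;

```python
import math
from collections import Counter

# Sketch of one mobility-diversity marker computed from discretized GPS
# traces. Feature definition and data are illustrative assumptions.

def visit_entropy(location_ids):
    """Shannon entropy (bits) of the distribution of visited places.

    Lower entropy over time can indicate a shrinking movement range."""
    counts = Counter(location_ids)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

baseline_week = ["home", "market", "park", "home", "cafe", "home", "library"]
recent_week = ["home", "home", "home", "market", "home", "home", "home"]

diversity_drop = visit_entropy(baseline_week) - visit_entropy(recent_week)
```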
&lt;h3 id="llms-for-conversational-cognitive-assessment"&gt;LLMs for Conversational Cognitive Assessment&lt;/h3&gt;
&lt;p&gt;Large language models analyze conversation transcripts from daily interactions with the MIND app, detecting cognitive markers such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reduced lexical diversity and increased use of filler words&lt;/li&gt;
&lt;li&gt;Difficulty with topic maintenance and coherence&lt;/li&gt;
&lt;li&gt;Slowed response generation and increased hesitation&lt;/li&gt;
&lt;li&gt;Confusion or confabulation in response to everyday questions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These markers are tracked longitudinally to identify meaningful change relative to the individual&amp;rsquo;s own baseline.&lt;/p&gt;
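&lt;p&gt;Two of these markers, lexical diversity (type-token ratio) and filler-word rate, together with a per-person baseline z-score, can be sketched as follows. The filler list, history values, and threshold semantics are illustrative assumptions.&lt;/p&gt;

```python
import statistics

# Sketch of two conversational markers and a per-individual baseline
# z-score. Word list and example values are illustrative.

FILLERS = {"um", "uh", "er"}

def markers(transcript):
    words = transcript.lower().split()
    ttr = len(set(words)) / len(words)  # lexical diversity (type-token ratio)
    filler_rate = sum(w in FILLERS for w in words) / len(words)
    return ttr, filler_rate

def baseline_z(value, history):
    """Deviation of today's value from the individual's own baseline."""
    mu = statistics.mean(history)
    sd = statistics.stdev(history)
    return (value - mu) / sd

ttr_history = [0.82, 0.80, 0.84, 0.81, 0.83]  # prior sessions (illustrative)
ttr_today, _filler = markers("um I went to the the place with the um thing")
z = baseline_z(ttr_today, ttr_history)  # strongly negative: marked change
```

&lt;p&gt;Scoring against the person&amp;rsquo;s own history, rather than a population norm, is what lets longitudinal tracking surface meaningful individual change.&lt;/p&gt;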
&lt;h3 id="embodied-avatars-for-guided-assessment-and-training"&gt;Embodied Avatars for Guided Assessment and Training&lt;/h3&gt;
&lt;p&gt;An embodied conversational avatar conducts structured cognitive assessments — such as memory recall tasks, verbal fluency tests, and orientation questions — through natural spoken dialogue. The avatar also guides cognitive training exercises designed to maintain cognitive reserve.&lt;/p&gt;
&lt;p&gt;The embodied format is critical: older adults are more likely to engage regularly with an interactive, socially present agent than with a text-based questionnaire or passive sensor. The avatar adapts its communication style to individual users, adjusting vocabulary, pacing, and support level to ensure accessibility.&lt;/p&gt;
&lt;h3 id="integration-and-privacy"&gt;Integration and Privacy&lt;/h3&gt;
&lt;p&gt;All three systems operate on a shared mobile platform with strong privacy protections. Data is analyzed on-device where possible, and users maintain full control over what is shared with care providers. The platform produces structured reports for geriatricians and primary care physicians, enabling early clinical intervention.&lt;/p&gt;
&lt;h2 id="significance"&gt;Significance&lt;/h2&gt;
&lt;p&gt;MIND represents a convergence of several research threads — geospatial AI, conversational AI, affective computing, and embodied interaction — applied to one of the most pressing health challenges of our time. By enabling early, passive, continuous monitoring, the platform aims to support successful aging in place and to delay or prevent the transition to care dependency.&lt;/p&gt;
&lt;h2 id="research-partners-and-funding"&gt;Research Partners and Funding&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;National Research Foundation Singapore (NRF) CREATE&lt;/strong&gt; — Future Health Technologies 2 (FHT2)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bond University (Australia)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Collaboration with life sciences and medicine partners in Singapore&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Talking to Your Data: Exploring Embodied Conversation as an Interface for Personal Health Reflection</title><link>https://rafael-wampfler.github.io/publications/talking-to-your-data-2026/</link><pubDate>Mon, 23 Mar 2026 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/talking-to-your-data-2026/</guid><description/></item><item><title>Steering Narrative Agents through a Dynamic Cognitive Framework for Guided Emergent Storytelling</title><link>https://rafael-wampfler.github.io/publications/steering-narrative-agents-2025/</link><pubDate>Mon, 10 Nov 2025 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/steering-narrative-agents-2025/</guid><description/></item><item><title>Chatbots With Attitude: Enhancing Chatbot Interactions Through Dynamic Personality Infusions</title><link>https://rafael-wampfler.github.io/publications/chatbots-with-attitude-2024/</link><pubDate>Mon, 08 Jul 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/chatbots-with-attitude-2024/</guid><description/></item><item><title>The Personality Dimensions GPT-3 Expresses During Human-Chatbot Interactions</title><link>https://rafael-wampfler.github.io/publications/personality-dimensions-gpt3-2024/</link><pubDate>Sat, 01 Jun 2024 00:00:00 +0000</pubDate><guid>https://rafael-wampfler.github.io/publications/personality-dimensions-gpt3-2024/</guid><description/></item></channel></rss>