
Empathic Computing is a peer-reviewed, open-access journal published quarterly by Science Exploration Press. Empathic computing is a rapidly emerging field concerned with creating computer systems that enable people to better understand one another and develop empathy. This includes using technologies such as Augmented Reality (AR) and Virtual Reality (VR) to let people see what another person is seeing in real time and to overlay communication cues on their field of view, as well as using physiological sensors, machine learning, and artificial intelligence to build systems that recognize what people are feeling and convey their emotional or cognitive state to a collaborator. The goal is to combine natural collaboration, implicit understanding, and experience capture/sharing in a way that transforms collaboration.
Articles
Multimodal emotion recognition with disentangled representations: private-shared multimodal variational autoencoder and long short-term memory framework
Aims: This study proposes a multimodal emotion recognition framework that combines a private-shared disentangled multimodal variational autoencoder (DMMVAE) with a long short-term memory (LSTM) network, herein referred to as DMMVAE-LSTM. The primary objective is to improve the robustness and generalizability of emotion recognition by effectively leveraging the complementary features of electroencephalogram (EEG) signals and facial expression data.
Methods: We first trained a variational autoencoder using a ResNet-101 architecture on a large-scale facial dataset to develop a robust and generalizable facial feature extractor. This pre-trained model was then integrated into the DMMVAE framework, together with a convolutional neural network-based encoder and decoder for EEG data. The DMMVAE model was trained to disentangle shared and modality-specific latent representations across both EEG and facial data. Following this, the outputs of the encoders were concatenated and fed into an LSTM classifier for emotion recognition.
Results: Two sets of experiments were conducted. First, we trained and evaluated our model on the full dataset, comparing its performance with state-of-the-art methods and a baseline LSTM model employing a late fusion strategy to combine EEG and facial features. Second, to assess robustness, we tested the DMMVAE-LSTM framework under data-limited and modality dropout conditions by training with partial data and simulating missing modalities. The results demonstrate that the DMMVAE-LSTM framework consistently outperforms the baseline, especially in scenarios with limited data, indicating its capacity to learn structured and resilient latent representations.
Conclusion: Our findings underscore the benefits of multimodal generative modeling for emotion recognition, particularly in enhancing classification performance when training data are scarce or partially missing. By effectively learning both shared and private representations, the DMMVAE-LSTM framework facilitates more reliable emotion classification and presents a promising solution for real-world applications where acquiring large labeled datasets is challenging.
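The private-shared disentanglement and latent fusion described in this abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the dimensions, the linear Gaussian encoders, and the softmax layer standing in for the LSTM classifier are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_mu, w_logvar):
    # Linear encoder producing the mean and log-variance of a Gaussian latent.
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, logvar):
    # Sample z = mu + sigma * eps (the standard VAE reparameterization trick).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Hypothetical dimensions: 32-d EEG features, 64-d facial features,
# 8-d private latent per modality, 8-d shared latent, 4 emotion classes.
d_eeg, d_face, d_priv, d_shared, n_classes = 32, 64, 8, 8, 4
eeg = rng.standard_normal((5, d_eeg))    # batch of 5 EEG feature vectors
face = rng.standard_normal((5, d_face))  # batch of 5 facial feature vectors

# Private encoders capture modality-specific variation.
z_eeg = reparameterize(*encode(eeg, rng.standard_normal((d_eeg, d_priv)) * 0.1,
                               rng.standard_normal((d_eeg, d_priv)) * 0.1))
z_face = reparameterize(*encode(face, rng.standard_normal((d_face, d_priv)) * 0.1,
                                rng.standard_normal((d_face, d_priv)) * 0.1))

# A shared encoder over the concatenated modalities captures common variation;
# a missing modality could be zero-filled here, as in the modality-dropout tests.
both = np.concatenate([eeg, face], axis=1)
z_shared = reparameterize(
    *encode(both, rng.standard_normal((d_eeg + d_face, d_shared)) * 0.1,
            rng.standard_normal((d_eeg + d_face, d_shared)) * 0.1))

# Concatenated private and shared latents form the input to the downstream
# classifier (an LSTM in the paper; a linear softmax layer stands in here).
fused = np.concatenate([z_eeg, z_face, z_shared], axis=1)   # shape (5, 24)
logits = fused @ rng.standard_normal((fused.shape[1], n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
```

The key design point the sketch illustrates is that shared and private latents are learned separately and only concatenated at classification time, which is what lets the model degrade gracefully when one modality is partially or wholly missing.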
Behzad Mahaseni, Naimul Mefraz Khan
DOI: https://doi.org/10.70401/ec.2025.0010 - June 29, 2025
Empathic extended reality in the era of generative AI
Aims: Extended reality (XR) has been widely recognized for its ability to evoke empathetic responses by immersing users in virtual scenarios and promoting perspective-taking. However, to fully realize the empathic potential of XR, it is necessary to move beyond the concept of XR as a unidirectional “empathy machine.” This study proposes a bidirectional “empathy-enabled XR” framework, wherein XR systems not only elicit empathy but also demonstrate empathetic behaviors by sensing, interpreting, and adapting to users’ affective and cognitive states.
Methods: Two complementary frameworks are introduced. The first, the Empathic Large Language Model (EmLLM) framework, integrates multimodal user sensing (e.g., voice, facial expressions, physiological signals, and behavior) with large language models (LLMs) to enable bidirectional empathic communication. The second, the Matrix framework, leverages multimodal user and environmental inputs alongside multimodal LLMs to generate context-aware 3D objects within XR environments. This study presents the design and evaluation of two prototypes based on these frameworks: a physiology-driven EmLLM chatbot for stress management, and a Matrix-based mixed reality (MR) application that dynamically generates everyday 3D objects.
Results: The EmLLM-based chatbot achieved 85% accuracy in stress detection, with participants reporting strong therapeutic alliance scores. In the Matrix framework, the use of a pre-generated 3D model repository significantly reduced graphics processing unit utilization and improved system responsiveness, enabling real-time scene augmentation on resource-constrained XR devices.
Conclusion: By integrating EmLLM and Matrix, this research establishes a foundation for empathy-enabled XR systems that dynamically adapt to users’ needs, affective and cognitive states, and situational contexts through real-time 3D content generation. The findings demonstrate the potential of such systems in diverse applications, including mental health support and collaborative training, thereby opening new avenues for immersive, human-centered XR experiences.
Poorvesh Dongre, ... Denis Gračanin
DOI: https://doi.org/10.70401/ec.2025.0009 - June 29, 2025
Integrating colored lights into multimodal robotic storytelling
Aims: Storytelling has evolved alongside human culture, giving rise to new media such as social robots. While these robots employ modalities similar to those used by humans, they can also utilize non-biomimetic modalities, such as color, which are commonly associated with emotions. As research on the use of colored light in robotic storytelling remains limited, this study investigates its integration through three empirical studies.
Methods: We conducted three studies to explore the impact of colored light in robotic storytelling. The first study examined the effect of emotion-inducing colored lighting in romantic storytelling. The second study employed an online survey to determine appropriate light colors for specific emotions, based on images of the robot’s emotional expressions. The third study compared four lighting conditions in storytelling: emotion-driven colored lights, context-based colored lights, constant white light, and no additional lighting.
Results: The first study found that while colored lighting did not significantly influence storytelling experience or perception of the robot, it made recipients feel more serene. The second study showed improved recognition of amazement, rage, and neutral emotional states when colored light accompanied body language. The third study revealed no significant differences across lighting conditions in terms of storytelling experience, emotions, or robot perception; however, participants generally appreciated the use of colored lights. Emotion-driven lighting received slightly more favorable subjective evaluations.
Conclusion: Colored lighting can enhance the emotional expressiveness of robots. Both emotion-driven and context-based lighting strategies are appropriate for robotic storytelling. Through this series of studies, we contribute to the understanding of how colored lights are perceived in robotic communication, particularly within storytelling contexts.
Sophia C. Steinhaeusser, ... Birgit Lugrin
DOI: https://doi.org/10.70401/ec.2025.0008 - May 10, 2025
Investigating the 'I' in team: development and evaluation of an individual-level IMO model for augmented reality-mediated remote collaboration
Aims: This study aims to enhance the design of augmented reality (AR) technologies for remote collaboration by examining the complex relationships among individual factors (user characteristics), psychological and physiological states during AR-mediated remote collaboration, and outcomes within an Input-Mediator-Output (IMO) model. The goal is to evaluate how individual characteristics influence psychological and physiological experiences, as well as task performance, in AR-mediated collaboration.
Methods: We hypothesize and evaluate an IMO model and use correlation analyses to examine the relationships among person-related input variables (e.g., predispositions, traits, attitudes, states, and contextual factors), psychological and physiological emergent states, and performance-related output variables.
Results: Our results demonstrate that individual characteristics significantly influence subjective experiences, physiological responses, and task performance, emphasizing the critical role of individual differences, alongside task- and technology-related factors, in shaping collaboration experiences and performance. These findings highlight the importance of considering individual characteristics in the design of AR tools to optimize user well-being and performance outcomes.
Conclusion: Our study provides a foundational framework for understanding the interplay between individuals, tasks, and technology, underscoring the need for AR tools that align with user characteristics. It also lays the groundwork for future IMO research in AR-mediated remote collaboration, contributing to the development of more effective and health-promoting AR technologies.
Lisa Thomaschewski, ... Annette Kluge
DOI: https://doi.org/10.70401/ec.2025.0007 - April 16, 2025
Empathic Computing: a new journal for a new field of computing
Mark Billinghurst
DOI: https://doi.org/10.70401/ec.2025.0006 - March 31, 2025
A systematic review of using immersive technologies for empathic computing from 2000-2024
Aims: To give a comprehensive understanding of current research on immersive empathic computing, this paper aims to present a systematic review of the use of Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR) technologies in empathic computing, to identify key research trends, gaps, and future directions.
Methods: The PRISMA methodology was applied using keyword-based searches, publishing venue selection, and citation thresholds to identify 77 papers for detailed review. We analyze these papers to categorize the key areas of empathic computing research, including emotion elicitation, emotion recognition, fostering empathy, and cross-disciplinary applications such as healthcare, learning, entertainment and collaboration.
Results: Our findings reveal that VR has been the dominant platform for empathic computing research over the past two decades, while AR and MR remain underexplored. Dimensional emotional models have influenced this domain more than discrete emotional models for eliciting and recognizing emotions and fostering empathy. Additionally, we identify perception and cognition as pivotal factors influencing user engagement and emotional regulation.
Conclusion: Future research should expand the exploration of AR and MR for empathic computing, refine emotion models by integrating hybrid frameworks, and examine the relationship between lower body postures and emotions in immersive environments as an emerging research opportunity.
Umme Afifa Jinan, ... Sungchul Jung
DOI: https://doi.org/10.70401/ec.2025.0004 - February 25, 2025
Building a taxonomy of evidence-based medical eXtended Reality (MXR) applications: towards identifying best practices for design innovation and global collaboration
Aims: This article aims to create a taxonomy of evidence-based medical eXtended Reality (MXR) applications, including Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and 360-degree photo/video technologies, to identify best practices for designing and evaluating user experiences and interfaces (UX/UI). The goal is to assist researchers, developers, and practitioners in comparing and extrapolating the best solutions for high-precision MXR tools in medical and wellness contexts.
Methods: To develop the taxonomy, a review of medical and MXR publications was conducted, followed by three systematic mapping studies. Applications were categorized by end-users and purposes. The first mapping cross-referenced digital health technology classifications. The second validated the structure by incorporating over 350 evidence-based MXR apps, with input from twenty XR-HCI researchers. The third, ongoing mapping adds emerging apps, refining the taxonomy further.
Results: The taxonomy is presented in a dynamic database and 3D interactive graph, allowing international researchers to visualize and discuss developed evidence-based medical and wellness XR applications. This formalizes prior efforts to distinguish validated MXR solutions from speculative ones.
Conclusion: The taxonomy focuses solely on evidence-based applications, highlighting areas where VR, AR, and MR have been successfully implemented. It serves as a tool for stakeholders to analyze and understand best practices in MXR design, promoting the development of safe, effective, and user-friendly medical and wellness applications.
Jolanda G. Tromp, ... Chung V. Le
DOI: https://doi.org/10.70401/ec.2025.0002 - November 30, 2024
Friction situations in real-world remote design reviews when using CAD and videoconferencing tools
Aims: Recent world events have resulted in companies using remote meeting tools more often in design processes. The shift to remote meeting tools has had a notable impact on collaborative design activities, such as design reviews (DRs). When DRs depend on computer-aided design (CAD) software, the lack of direct support for CAD functionalities in videoconferencing applications introduces novel communication challenges, i.e., friction. This study investigates friction encountered in real-world remote DRs, with the objective of understanding its main sources when DRs are carried out using a combination of standard CAD and videoconferencing applications.
Methods: At a single Swedish automobile manufacturer, 15 DRs of a fixture component were passively observed. These observations were subjected to a qualitative thematic analysis to identify categories and sources of friction during these DRs. The DRs were carried out using a combination of CATIA CAD software and Microsoft Teams for videoconferencing.
Results: The analysis of the 15 remote DRs identified four recurring friction categories: requesting specific viewpoints, indicating specific elements, expressing design change ideas, and evaluating ergonomics. Each category highlights specific challenges that were observed during the DRs and emerged due to constraints imposed by existing methods and technologies for remote meetings.
Conclusion: This study provides a framework for understanding the current sources of friction in remote DRs using videoconferencing tools. These insights can support the future development of DR software tools, guiding the integration of features that address these friction points. Additionally, the results serve as a guideline for organizations to implement methods that reduce friction in remote DRs and improve DR quality and efficacy.
Francisco Garcia Rivera, ... Beatrice Alenljung
DOI: https://doi.org/10.70401/ec.2025.0001 - December 25, 2024
Creating safe environments for children: prevention of trauma in the Extended Verse
In the evolving landscape of digital childhood, ensuring safe environments within the Extended Verse (XV) is essential for preventing trauma and fostering positive experiences. This paper proposes a conceptual framework for the integration of advanced emotion recognition systems and physiological sensors with virtual and augmented reality technologies to create secure spaces for children. The author presents a theoretical architecture and data flow design that could enable future systems to perform real-time monitoring and interpretation of emotional and physiological responses. This design architecture lays the groundwork for future research and development of adaptive, empathetic interfaces capable of responding to distress signals and mitigating trauma. The paper addresses current challenges, proposes innovative solutions, and outlines an evaluation framework to support an empathic, secure and nurturing virtual environment for young users.
Nina Jane Patel
DOI: https://doi.org/10.70401/ec.2025.0003 - February 17, 2025
From theory to practice: virtual children as platforms for research and training
This paper explores two virtual child simulators, BabyX and the VR Baby Training Tool, which provide immersive, interactive platforms for child-focused research and training. These technologies address key ethical and practical constraints, enabling the systematic study of caregiver-infant interactions and the development of professional relational skills with young children. We examine their design, features, and applications, as well as their future trajectories, highlighting their potential to advance research and improve training methodologies.
Samara Morrison, ... Mark Sagar
DOI: https://doi.org/10.70401/ec.2025.0005 - March 30, 2025
Frontier Forums
Special Issues
Adaptive Empathic Interactive Media for Therapy
Submission Deadline: 05 Dec 2025
Published articles: 0
Co-creation for accessible computing through advances in emerging technologies
Submission Deadline: 25 Sep 2025
Published articles: 0