Table of Contents
Making robots understandable: Augmented reality for enhancing situational awareness in human–robot co-located environments
Aims: Sharing a robot’s intentions is crucial for building human confidence and ensuring safety in robot co-located environments. Communicating planned motion or internal state in a clear and timely manner is challenging, especially when users are occupied with other tasks. Augmented reality (AR) offers an effective medium for delivering visual cues that convey such information. This study evaluates a smartphone-based AR interface designed to communicate a robot’s navigation intentions and enhance users’ situational awareness (SA) in shared human–robot settings.
Methods: We developed a mobile AR application using Unity3D and Google ARCore to display goal locations, planned trajectories, and the real-time motion of a Robot Operating System (ROS) enabled mobile robot. The system provides three visualization modes.
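The paper does not publish its implementation, but a minimal sketch of one plumbing detail may help: how a mobile AR client could receive the robot’s planned path from ROS over the network. The rosbridge transport, the roslibpy library, and the host and topic names below are assumptions for illustration, not details from the study.

```python
# Minimal sketch: stream a ROS robot's planned path to an AR client via rosbridge.
# Assumptions (not from the paper): rosbridge_server runs on the robot at port 9090,
# and the global planner publishes its plan on /move_base/GlobalPlanner/plan.
import roslibpy

def on_plan(message):
    # nav_msgs/Path: a header plus a list of stamped poses along the planned route.
    waypoints = [
        (p['pose']['position']['x'], p['pose']['position']['y'])
        for p in message['poses']
    ]
    # An AR front end (e.g., Unity/ARCore) would anchor these points in world
    # space and draw the trajectory overlay; here we just print them.
    print(f'Planned path with {len(waypoints)} waypoints: {waypoints[:3]}...')

ros = roslibpy.Ros(host='robot.local', port=9090)
plan_topic = roslibpy.Topic(ros, '/move_base/GlobalPlanner/plan', 'nav_msgs/Path')
plan_topic.subscribe(on_plan)
ros.run_forever()
```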
Results: Participants achieved an average Situation Awareness Global Assessment Technique (SAGAT) score of 86.5%, indicating improved awareness of the robot’s mission, spatial positioning, and safe zones. AR visualization was particularly effective for identifying obstacles and predicting unobstructed areas.
Conclusion: A mobile AR interface can significantly enhance SA in shared human–robot environments by making robot intentions more transparent and comprehensible. Future work will include in-situ evaluations with physical robots, integration of richer robot-state information such as velocity and sensor data, and the exploration of additional visualization strategies that further strengthen safety, predictability, and trust in human–robot collaborative environments.
Sonia Chacko, Vikram Kapila
DOI: https://doi.org/10.70401/ec.2026.0016 - January 15, 2026
Identifying eye movement behavior indicators of social competence during conversation listening: A study using HoloLens 2
Aims: This study aims to investigate the relationship between eye movement behaviors and social competence, particularly focusing on listening patterns in multi-person conversations. The goal is to identify objective, quantifiable behavioral indicators of social competence to inform the development of assessment tools and training interventions.
Methods: A three-person setting was designed with two conversational partners and one primary listener, and immersive eye-tracking data were collected using Microsoft HoloLens 2 during conversations on naturalistic topics. Social competence was rated by a clinical psychiatrist using standardized behavioral criteria, while analysis targeted three pre-selected indicators: nodding frequency, selective attention allocation to speaker’s regions of interest (head versus shoulders), and vertical gaze stability during listening.
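As a concrete illustration of how the three pre-selected indicators could be computed from logged head-pose and gaze signals, consider the sketch below. It is not the authors’ pipeline; the sampling rate, signal names, and peak-detection thresholds are assumptions chosen for readability.

```python
# Illustrative sketch (not the authors' code): computing the three listening
# indicators from logged HoloLens 2 signals.
import numpy as np
from scipy.signal import find_peaks

FS = 60  # assumed sampling rate (Hz)

def nodding_frequency(head_pitch_deg):
    """Nods per minute, detected as downward excursions in head pitch."""
    peaks, _ = find_peaks(-head_pitch_deg, prominence=5.0, distance=FS // 3)
    return len(peaks) / (len(head_pitch_deg) / FS) * 60

def roi_attention_ratio(gaze_roi_labels):
    """Share of listening time on the speaker's head vs. shoulders."""
    head = np.sum(gaze_roi_labels == 'head')
    shoulders = np.sum(gaze_roi_labels == 'shoulders')
    return head / max(head + shoulders, 1)

def vertical_gaze_dispersion(gaze_pitch_deg):
    """Standard deviation of vertical gaze angle; lower = more stable gaze."""
    return float(np.std(gaze_pitch_deg))
```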
Results: The results revealed that nodding frequency showed a strong positive correlation with social competence scores, indicating its potential as a robust nonverbal biomarker. Participants with higher social competence demonstrated greater attention to the speaker’s head region while minimizing focus on less informative areas such as the shoulders. Furthermore, individuals with higher scores exhibited significantly lower vertical gaze dispersion, reflecting more focused and stable attentional control during social listening.
Conclusion: This study establishes reliable eye movement-based indicators of social competence, highlighting their potential for assessing and enhancing social skills in real-world interactions. By integrating multimodal behavioral analysis, the findings provide a theoretical and technical foundation for developing personalized, real-time feedback systems for social competence training.
Yu Fang, ... Hirokazu Kato
DOI: https://doi.org/10.70401/ec.2026.0015 - January 13, 2026
Virtual reality-based compassion meditation for clinical contexts: A co-design study of a loving-kindness meditation prototype
Aims: This study introduces and evaluates a virtual reality (VR) prototype for Loving-Kindness Meditation (LKM) to support mental health rehabilitation and relaxation in clinical contexts. The aims include the co-creation of a VR-based mindfulness experience with clinical experts and the evaluation of its usability, user experience, and short-term effects on relaxation, affect, and self-compassion.
Methods: Following a design thinking and co-creation approach, the VR-based LKM experience was developed iteratively with input from clinicians and computer scientists. The final prototype was implemented for the Meta Quest 3 and included five immersive scenes representing phases of the LKM, along with transition scenes, guided by a professionally narrated audio track. Eleven participants (M = 36.5 years, SD = 14.6) experienced the 12-minute session. Pre- and post-session measures included relaxation, the Positive and Negative Affect Schedule, and self-compassion, complemented at the end by the Igroup Presence Questionnaire, usability measures, and a semi-structured qualitative interview.
Results: Participants reported significant decreases in negative affect (t(10) = -2.512, p = .0307, d = -1.037) and stress (t(10) = -3.318, p = .007, d = -1.328), as well as increases in relaxation (t(10) = 5.487, p < .0001, d = 2.471) and self-compassion (t(10) = 2.231, p = .0497, d = 0.283). Usability was rated as excellent (M = 92.5), and presence as good (M = 4.0, SD = 0.43). Qualitative feedback described the experience as calming, aesthetically pleasing, and easy to engage with, highlighting the falling leaves and pulsating orb as effective design elements.
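For readers unfamiliar with these statistics, the worked example below shows how a paired t-test and a repeated-measures Cohen’s d are typically computed for pre/post designs. The data are synthetic, and paired designs admit several d variants (the d_z form is shown), so values need not match the study’s reported effect sizes.

```python
# Worked illustration (synthetic data, not the study's): paired t-test plus a
# standardized effect size for a pre/post repeated-measures design.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
pre = rng.normal(3.0, 1.0, size=11)          # e.g., relaxation before the session
post = pre + rng.normal(0.8, 0.6, size=11)   # e.g., after the 12-minute session

t, p = ttest_rel(post, pre)
diff = post - pre
d_z = diff.mean() / diff.std(ddof=1)  # standardized mean difference of the pairs

print(f't({len(pre) - 1}) = {t:.3f}, p = {p:.4f}, d_z = {d_z:.3f}')
```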
Conclusion: The co-designed VR-LKM prototype was perceived as highly usable and beneficial for inducing relaxation and self-compassion, suggesting its potential as a supportive tool for clinical mindfulness interventions. The results indicate that immersive VR can effectively facilitate engagement and emotional regulation, providing a foundation for future clinical trials and broader implementation in therapeutic and wellness settings.
María Alejandra Quiros-Ramírez, ... Stephan Streuber
DOI: https://doi.org/10.70401/ec.2025.0014 - December 31, 2025
Conversing with AI agents in VR: An early investigation of alignment and modality
Aims: We present an early investigation of how people interact with human-like Artificial Intelligence (AI) agents in virtual reality when discussing ideologically sensitive topics. Specifically, we examine how users respond to AI agents that express either congruent or incongruent opinions on a controversial issue, and how alignment, modality, and agent behaviors shape perceived conversation quality, psychological comfort, and agent credibility.
Methods: We conducted a 2 (agent opinion: congruent vs. incongruent) × 2 (input modality: text vs. voice) between-subjects experiment with 36 participants who engaged in five-minute virtual reality (VR)-based conversations with a GPT-4-powered AI agent about U.S. gun laws. Participants completed pre- and post-study measures of opinion and emotional states, evaluated the agent, and reflected on the interaction. In addition, dialogue transcripts were analyzed using the Issue-based Information System (IBIS) framework to characterize argument structure and engagement patterns.
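The study’s implementation is not published, but a hypothetical sketch of a stance-conditioned agent in this spirit might look as follows. The OpenAI chat-completions client usage is standard; the prompt wording and the congruence logic are illustrative assumptions, not the authors’ design.

```python
# Hypothetical sketch of a stance-conditioned conversational agent.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def make_system_prompt(user_stance: str, congruent: bool) -> str:
    # user_stance is 'support' or 'oppose'; the agent mirrors or inverts it.
    if congruent:
        stance = user_stance
    else:
        stance = 'oppose' if user_stance == 'support' else 'support'
    return (
        'You are a conversational partner discussing U.S. gun laws. '
        f'You {stance} stricter gun laws. State your view calmly, give '
        'reasons, and remain respectful and emotionally neutral.'
    )

def agent_reply(history, user_stance='support', congruent=True):
    # history: alternating {'role': 'user'/'assistant', 'content': ...} dicts.
    messages = [{'role': 'system',
                 'content': make_system_prompt(user_stance, congruent)}]
    messages += history
    resp = client.chat.completions.create(model='gpt-4', messages=messages)
    return resp.choices[0].message.content
```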
Results: Participants engaged willingly with the AI agent regardless of its stance, and qualitative responses suggest that the interactions were generally respectful and characterized by low emotional intensity. Quantitative results show that opinion alignment influenced perceived bias and conversational impact, but did not affect the agent’s competence or likability. While voice input yielded richer dialogue, it also heightened perceived bias. Qualitative findings further highlight participants’ sensitivity to the agent’s ideological stance and their preference for AI agents whose views aligned with their own.
Conclusion: Our study suggests that AI agents embodied in VR can support ideologically challenging conversations without inducing defensiveness or discomfort when designed for neutrality and emotional safety. These findings point to early design directions for conversational agents that scaffold reflection and perspective-taking in politically or ethically sensitive domains.
Frederik Rueb, Misha Sra
DOI: https://doi.org/10.70401/ec.2025.0013 - December 10, 2025
Social resources facilitate pulling actions toward novel social agents more than pushing actions in virtual reality
Aims: This study examined the speed of approach-avoidance actions in virtual reality (VR) as an indicator of psychological “readiness” to interact with social avatars.
Methods: Given that response speed is a key psychological factor reflecting a user’s interest, motivation, and willingness to engage, we analyzed the response times of pulling and pushing inputs, actions that typify approach-avoidance tendencies, performed via bare-hand interaction in VR. We specifically investigated how response times varied with participants’ social resources, particularly the richness of their social lives as characterized by broader networks of friends, social groups, and frequent interactions.
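One plausible way to operationalize such an analysis (not the authors’ code) is to take per-participant median response times for pull and push trials, form an approach-bias index, and correlate it with a social-resources score. The sketch below assumes a hypothetical trial log and column names.

```python
# Illustrative analysis sketch: approach bias vs. social resources.
import pandas as pd
from scipy.stats import pearsonr

# trials: one row per trial with columns
#   participant, action ('pull' or 'push'), rt_ms, social_resources
trials = pd.read_csv('vr_trials.csv')  # hypothetical log file

rt = (trials.groupby(['participant', 'action'])['rt_ms']
            .median()
            .unstack('action'))
rt['approach_bias'] = rt['push'] - rt['pull']  # positive = faster pulling

resources = trials.groupby('participant')['social_resources'].first()
r, p = pearsonr(resources.loc[rt.index], rt['approach_bias'])
print(f'Social resources vs. approach bias: r = {r:.2f}, p = {p:.4f}')
```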
Results: Participants with richer social lives exhibited faster pulling (vs. pushing) actions toward both same- and opposite-sex avatars. These effects remained significant regardless of participants’ gender, age, and prior VR experience. Notably, the effects were specific to social stimuli (i.e., avatars) and did not emerge with a non-social stimulus (i.e., a flag). Additionally, they did not occur with indirect interaction methods (i.e., a mouse wheel or a virtual joystick).
Conclusion: The findings suggest that social resources may facilitate approach-oriented bodily affordances in VR environments.
Jaejoon Jeong, ... Seungwon Kim
DOI: https://doi.org/10.70401/ec.2025.0012 - October 24, 2025