Conversing with AI agents in VR: An early investigation of alignment and modality
Aims: We present an early investigation of how people interact with human-like Artificial Intelligence (AI) agents in virtual reality (VR) when discussing ideologically sensitive topics. Specifically, we examine how users respond to AI agents that express either congruent or incongruent opinions on a controversial issue, and how alignment, modality, and agent behaviors shape perceived conversation quality, psychological comfort, and agent credibility.
Methods: We conducted a 2 (agent opinion: congruent vs. incongruent) × 2 (input modality: text vs. voice) between-subjects experiment with 36 participants who engaged in five-minute VR-based conversations with a GPT-4-powered AI agent about U.S. gun laws. Participants completed pre- and post-study measures of opinion and emotional states, evaluated the agent, and reflected on the interaction. In addition, dialogue transcripts were analyzed using the Issue-based Information System (IBIS) framework to characterize argument structure and engagement patterns.
Results: Participants engaged willingly with the AI agent regardless of its stance, and qualitative responses suggest that the interactions were generally respectful and characterized by low emotional intensity. Quantitative results show that opinion alignment influenced perceived bias and conversational impact, but did not affect ratings of the agent’s competence or likability. While voice input yielded richer dialogue, it also heightened perceived bias. Qualitative findings further highlight participants’ sensitivity to the agent’s ideological stance and their preference for AI agents whose views aligned with their own.
Conclusion: Our study suggests that AI agents embodied in VR can support ideologically challenging conversations without inducing defensiveness or discomfort when designed for neutrality and emotional safety. These findings point to early design directions for conversational agents that scaffold reflection and perspective-taking in politically or ethically sensitive domains.
Frederik Rueb, Misha Sra
DOI: https://doi.org/10.70401/ec.2025.0013 - December 10, 2025