Emotion Recognition in Virtual Reality: Toward Adaptive and Responsible Technologies
Time
9:00 AM, November 13, 2025 (Central European Time, CET)
4:00 PM, November 13, 2025 (Beijing, China, CST)
Zoom Link: https://us06web.zoom.us/j/89398033486?pwd=2jd7s4KSKpLFqhIpx5EtcIO1SFJg0Q.1
Meeting ID: 893 9803 3486
Password: 990304
Contact Us
Email: ecjournal@sciexplor.com
Speakers
Dr. Davide Andreoletti
Department of Innovative Technologies, Institute of Information Systems and Networking, University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Lugano, Ticino, Switzerland.
Dr. Davide Andreoletti holds a degree in Telecommunication Engineering and a Ph.D. in Information Technology from Politecnico di Milano, where he specialized in privacy-preserving techniques and machine learning. He is currently a researcher at the Department of Innovative Technologies of the University of Applied Sciences and Arts of Southern Switzerland (SUPSI).
His research lies at the intersection of artificial intelligence, affective computing, privacy, and human-machine interaction. He has worked on emotion recognition in immersive and industrial virtual environments, developing intelligent systems capable of detecting user states such as engagement, comfort, and frustration through multimodal signals. In parallel, his work also focuses on privacy-preserving machine learning, with a particular emphasis on secure and responsible computation in large language models (LLMs).
Prof. Silvia Giordano
Department of Innovative Technologies, Institute of Information Systems and Networking, University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Lugano, Ticino, Switzerland.
Silvia Giordano, Ph.D. from EPFL, is a full professor at SUPSI, where she heads the Trustworthy and Security group and the Complex Systems research area and is a member of the SUPSI Strategic Research Group. She is an associate researcher at CNR and a Distinguished Professor at Tianjin University. Her interests include social computing, privacy and security, Industry 4.0, pervasive networking, and MANETs. She is the chair of IFIP WG 6.3, an ACM Distinguished Scientist, and was named among the ACM Stars in Computer Networking.
Host
Dr. Leif Oppermann
Mixed and Augmented Reality Solutions, Department of Cooperation Systems, Fraunhofer Institute for Applied Information Technology FIT, Sankt Augustin, Germany.
Dr. Leif Oppermann heads the Mixed and Augmented Reality Solutions group and is co-head of the Cooperation Systems department at Fraunhofer FIT in Sankt Augustin. After graduating with honors in media informatics in 2003, he worked as a research assistant on AR projects at Harz University of Applied Sciences. From 2004 to 2009, he was a research assistant and fellow at the Mixed Reality Lab at the University of Nottingham, where he earned his doctorate on the cooperative creation of location-based multi-user applications. He has been working at FIT since 2009, where he is responsible for planning and managing research and industry projects, as well as strategic planning. Recent projects include the MR exhibit in the TouchTomorrow-Truck of the Dr. Hans Riegel Foundation (XR Science Award 2024) and the BMDV project “5G Troisdorf IndustrieStadtpark,” which he led from 2020 to 2024; its “Industrial Metaverse” demonstrator became a showcase result of the ministry and the Fraunhofer Society. He is a co-author of the German VR/AR textbook and has been active in academic teaching for over 20 years, including at Harz University of Applied Sciences, the University of Nottingham, H-BRS, and currently at b-it, where he plans and gives labs and lectures in the field of empathic computing and XR. He is vice president of the Virtual Worlds Association.
Introduction
This webinar explores the development of emotion recognition systems in immersive and adaptive Virtual Reality (VR), highlighting their role in advancing human-centered technologies.
The talk presents a series of studies that investigate how intelligent systems can detect and interpret emotions in VR, addressing challenges unique to virtual environments such as immersion dynamics, motion variability, and contextual ambiguity. The talk will illustrate the application of these systems in diverse settings, including industrial manufacturing, digital twins, product design, and training.
Several studies based on multimodal data—combining movement patterns, physiological signals, and behavioral indicators—will be discussed and compared. The presentation will outline both the challenges and opportunities of emotion recognition in VR, emphasizing that recognition is only one side of the coin: the other is emotion elicitation.
VR enables both elicitation and recognition of emotions more naturally than traditional media, yet also introduces new methodological and ethical hurdles. A novel approach for emotion elicitation in VR, inspired by the theory of flow, will be presented as a reproducible framework for studying human affect in immersive contexts.
Finally, the webinar addresses the ethical and regulatory landscape surrounding affective technologies. As the EU AI Act restricts emotion recognition in public settings, the discussion will extend to the role of privacy-preserving computation in ensuring responsible innovation. Both cryptographic and non-cryptographic approaches, such as secure computation and differential privacy, will be briefly discussed as emerging tools for developing trustworthy and privacy-aware emotion recognition systems that balance technological progress with users’ privacy protection.
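As a flavor of the non-cryptographic approaches mentioned above, the sketch below shows the classic Laplace mechanism of differential privacy applied to a simple counting query (e.g., how many study participants reported frustration in a VR session). This is an illustrative example, not material from the talk; the function name and parameters are hypothetical.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Noise is drawn from a Laplace distribution with scale = sensitivity / epsilon,
    so a smaller epsilon yields more noise and stronger privacy.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse transform from a uniform draw.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

# A counting query changes by at most 1 if one user is added or removed,
# so its sensitivity is 1.
frustrated_count = 42
noisy_count = laplace_mechanism(frustrated_count, sensitivity=1.0, epsilon=0.5)
```

The key design point is that the privacy guarantee comes from the noise distribution alone, independent of what the underlying signals (movement, physiological data) were; this is what makes such mechanisms attractive for affective data, where raw measurements can be highly identifying.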


