Future trends in Virtual Reality (VR)
Synopsis
Virtual Reality (VR) is a disruptive technology that is advancing rapidly as computing hardware and software become more capable and affordable. This chapter apprises the scholarly community of five futuristic trends that will shape VR. Immersive social experiences will enrich social interaction by enabling dynamic, multi-sensory shared environments in which users can engage authentically across geographical barriers. Advances in rendering and GPU technology will bring near-photorealistic visuals to VR, deepening immersion, with applications in medicine, architecture, and education. Haptic feedback advancements will provide users with realistic tactile sensations, enhancing interaction and immersion in virtual environments. Full-body tracking lets users control and interact with virtual spaces using their entire bodies, with applications in rehabilitation, sports, and professional training. AI-powered VR will deliver personalized experiences, create intelligent virtual agents, and generate content dynamically in response to the environment, bringing a level of interactivity and realism not experienced before.
Keywords: Full body tracking, Haptic feedback, Immersive social experience, Realism, User experience, Virtual environment
Citation: Manda, V. K., Bikkina, A., & Tarnanidis, T. (2025). Future trends in Virtual Reality (VR). In Virtual Reality Technologies and Real Life Applications (pp. 83-96). Deep Science Publishing. https://doi.org/10.70593/978-81-982935-1-0_5
1 Introduction
Virtual Reality (VR) is a key disruptive technology. It earns this label because of the following characteristics:
- It can fundamentally alter existing markets
- It fits well into existing value networks
- It changes the way people interact with products and services
Recent scholarship has even evaluated whether VR is a socially disruptive technology (Hopster, 2021). One criterion for characterizing a technology is its pace of change. Numerous changes underway in hardware and software have influenced and will continue to shape the future of VR. VR is quickly being integrated with other (disruptive) technologies; hence, it influences, and will be influenced by, those technologies.
2 Literature review
Considering that the topic is futuristic, only literature published from 2020 onwards (with one exception from 2017) is considered for this study.
3 Methods and Materials
Content analysis is performed on the shortlisted publications, and five themes are identified. A narrative review summarizes the content. The five themes are:
- Immersive Social Experience
- Ultra-Realistic Graphics
- Haptic Feedback Advancements
- Full-Body Tracking (FBT)
- AI-powered VR
4 Results and discussion
Immersive Social Experiences
This theme concerns improving virtual social platforms by providing real-time interaction in shared environments. Immersive Social Experiences (ISE) are Virtual Reality (VR) interactions that engage users in dynamic, spatially aware environments (Guimarães et al., 2020). Some key elements of ISE are:
- Multi-Sensory Engagement
- Real-Time Interaction
- Shared Environments
- Customization & Personalization
- Emotional Involvement
Examples of immersive social experiences include shared virtual spaces, avatar representations, spatial audio, haptic feedback, and interactivity. Participants in these environments can interact in shared social settings that transcend geographical barriers, and virtual spaces allow for exceptional levels of engagement and collaboration. Avatar realism relies on photorealistic rendering and facial motion capture to replicate user emotions and gestures accurately. These features enhance the authenticity and trust of social interactions and allow participants to perceive non-verbal communication. Spatial audio technologies deepen immersion by simulating the natural directionality of sound. Head-related transfer functions (HRTFs) and acoustic modeling create an environment in which voices and noises align with the virtual positions of their sources. Through personalized binaural playback, users perceive sound as directional and natural within virtual spaces, enhancing the authenticity of social gatherings. HRTFs provide spatialized headphone playback of 3D sounds, and the computational power is now available to calculate HRTFs numerically (Pollack et al., 2022).
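To make the HRTF mechanism concrete, the following is a minimal sketch of the core operation, assuming `hrir_left` and `hrir_right` hold head-related impulse responses already measured or computed for the source's direction; the names and the NumPy-only implementation are illustrative, not taken from any particular audio engine:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono source binaurally by convolving it with the
    head-related impulse responses (HRIRs) for the source's direction
    relative to the listener's head."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Stack into a stereo buffer; a real-time engine re-selects (or
    # interpolates) the HRIR pair whenever the head or source moves.
    out = np.zeros((max(len(left), len(right)), 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out
```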
Virtual reality also introduces innovative frameworks for community building and cultural exchange. Platforms like VRChat and Horizon Worlds enable users to create and personalize virtual environments to suit their tastes and needs. Both professional meetings and casual gatherings can be organized on these platforms, fostering a sense of belonging among users. For professional applications, they support collaborative project work. Studies have shown that immersive VR and high avatar similarity significantly enhance collaboration performance (Suh, 2024) and boost psychological and behavioral engagement. On a personal level, these platforms facilitate interaction and communication, support social needs, and improve interpersonal communication (Zamanifard & Freeman, 2023). In short, they build bonds and enable socialization.
Haptic feedback is increasingly used in VR. It creates a sense of touch or physical sensation through forces, vibrations, or motions delivered to the user, so the user can “feel” and interact with virtual objects. On VR social platforms, haptics provide sensations during virtual interactions, such as a handshake or the feel of holding an object (Gibbs et al., 2022). These capabilities have therapeutic applications, such as mental health support groups or immersive role-playing for conflict resolution. Studies have shown that social VR experiences can increase empathy and social connection among users. Engaging in virtual social interactions has been shown to alleviate symptoms of social anxiety and loneliness and reduce negative thoughts. These tools provided mental support and cared for user well-being during the COVID-19 pandemic (Deighan et al., 2023). Individuals with autism spectrum disorder (ASD) can improve their communication skills with VR, making it an effective therapeutic tool (Halabi et al., 2017). With live streaming, online shoppers can have immersive social experiences through improved interactivity, atmospheric cues, and dynamic characteristics; such experiences can make shoppers more likely to purchase (Shiu et al., 2023).
Ultra-Realistic Graphics
Many advancements have recently taken place in computer graphics and rendering techniques, notably in the areas of:
- Graphics Processing Unit (GPU) technology (Dally et al., 2021)
- Ray and path tracing
- Foveated rendering, which renders different regions of the image at different quality levels by exploiting the acuity falloff of the human eye (L. Wang et al., 2023)
- Photorealistic textures (Xiang et al., 2022)
- Voxel and 3D spatially-varying lighting (Z. Wang et al., 2021)
- Real-time global illumination (Hu et al., 2021)
- Higher resolution displays and refresh rates (Seetzen et al., 2023)
These advancements benefit VR directly. Virtual environments can now be created quickly, with detail that comes very close to reality. Textures, lighting models, and illumination algorithms bring virtual worlds to life, giving users breathtaking and believable immersive experiences. High-resolution displays and foveated rendering optimize the visual experience, reducing latency while maximizing image quality, and they enhance immersion by mitigating the perceptible “uncanny valley” effect.
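As an illustration of the foveated rendering mentioned above, the sketch below assigns each screen tile a shading rate from its angular distance to the gaze point. The three-level scheme and the eccentricity thresholds are simplified assumptions; production pipelines use smoother falloffs tuned to the display and eye tracker:

```python
import numpy as np

def shading_rates(tile_centers_deg, gaze_deg, fovea=5.0, mid=15.0):
    """Map each tile's angular eccentricity (degrees from the gaze
    point) to a render quality: full resolution near the fovea, half
    in the mid-periphery, quarter in the far periphery."""
    ecc = np.linalg.norm(tile_centers_deg - gaze_deg, axis=1)
    rates = np.full(len(ecc), 0.25)  # far periphery: quarter resolution
    rates[ecc < mid] = 0.5           # mid-periphery: half resolution
    rates[ecc < fovea] = 1.0         # fovea: full resolution
    return rates
```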
The convergence of these technologies enables applications across various domains. In medicine, ultra-realistic VR graphics enable more accurate virtual representations of complex anatomical structures, aiding both medical education and practical diagnostics and imaging. Similarly, ultra-realistic VR will serve architectural design and urban planning, letting clients and stakeholders experience fully interactive, lifelike models before physical construction begins.
Haptic Feedback Advancements
Haptic feedback, also known as tactile feedback, conveys information to the user through touch sensations. The tactile experience is delivered by applying forces, vibrations, or motions that simulate the sense of touch. Haptic feedback is common in smartphones, gaming controllers, and virtual reality systems, where it improves user interaction and immersion. There are two significant areas of advancement:
- Multi-modal sensory integration
- Precision force feedback mechanisms
Haptic feedback advancements help Virtual Reality (VR) by increasing user immersion and interaction within virtual environments. Recent developments go beyond simple vibrations and pressure sensations to integrate complex and nuanced tactile feedback. Modern haptic systems offer multi-modal sensations, simulating textures, temperatures, and intricate physical interactions. Microfluidic actuators now allow the simulation of varying surface textures and material properties, and piezoelectric sensors enable pressure detection with sub-millimeter accuracy. Electrostatic and ultrasonic haptics are also in use: these technologies employ electrostatic fields and ultrasonic waves to create tactile sensations without direct contact, allowing mid-air haptic feedback so that users can feel virtual objects and textures with high fidelity. In VR, they enable virtual object manipulation, significantly elevating the realism of applications. Electrotactile stimulation arrays are also increasingly used; they provide localized feedback with minimal latency and can help maintain presence in virtual environments (Zhou et al., 2022). Neuroadaptive haptics calibrate feedback intensity based on individual user sensitivity thresholds. Thermal feedback elements use Peltier modules to enable temperature-based interactions, while variable-stiffness materials provide dynamic resistance simulation. Advanced haptic controllers utilizing distributed force sensors and linear resonant actuators achieve sophisticated gesture recognition with precise force discrimination.
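A classic scheme underlying force feedback of the kind described above is penalty-based haptic rendering. The sketch below is a minimal scalar version under simplified assumptions (a single contact normal, illustrative spring and damping gains; real devices are tuned per hardware, and the names are hypothetical):

```python
def contact_force(penetration_m, velocity_mps=0.0,
                  stiffness=300.0, damping=5.0):
    """Penalty-based haptic rendering: when the user's proxy point
    penetrates a virtual surface, command a restoring force that
    grows with penetration depth (spring term) and is reduced by a
    damping term to keep the device stable (units: N, m, m/s)."""
    if penetration_m <= 0.0:
        return 0.0  # no contact, no force
    force = stiffness * penetration_m - damping * velocity_mps
    return max(force, 0.0)  # never pull the hand into the surface
```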
Advancements are also seen in wearable haptic devices and smart fabrics. Smart textiles and clothing help users become conscious of their movement errors, making them less prone to such errors in the first place (Ramachandran et al., 2021). Haptic gloves and suits give full-body immersion by delivering precise feedback to various body parts without compromising the user's mobility or comfort. These wearables use arrays of actuators to replicate the sense of touch and force, enhancing the perception of weight, resistance, and impact. Frameworks such as Immersion Cube integrate visual, auditory, and haptic immersion for enhanced VR experiences; they help investigate how much haptic immersion is required to simulate realistic body movements during load handling. Such frameworks support ergonomic assessment in VR (Pfeffer et al., 2024).
AI combined with haptic feedback systems is a promising pairing. AI algorithms can analyze user interactions and dynamically adjust haptic responses, offering an adaptive and personalized haptic experience. As a result, new applications can be built for scenarios that require avoiding physical contact: medical training (Motaharifar et al., 2021), robotic surgery, and similar applications (Minopoulos et al., 2023) are a few examples.
Full-Body Tracking (FBT)
Full-body tracking (FBT) allows users to experience greater immersion and presence within virtual environments (Caserman et al., 2020). FBT lets users control and interact with the virtual environment using their entire bodies. The technology uses hardware such as sensors, cameras, and other tracking devices (Radoeva et al., 2022) to capture user movements and translate them into the virtual world. For example, FBT can simulate sports activities, dance, and other physical movements. Frameworks are now available that perform joint-level modeling for FBT, allowing the creation of a 3D full-body avatar (Zheng et al., 2023). Several forms of FBT solutions are becoming available:
- External trackers (like Vive Trackers)
- Camera-based systems (like Kinect)
- Inertial Measurement Unit (IMU) suits
- Magnetic tracking systems
Motion capture uses FBT to record and reproduce the movements of actors and athletes (Cannavò et al., 2024); in film and gaming, this creates more realistic and engaging experiences. FBT also helps in rehabilitation and therapy, assisting patients with mobility issues. It can monitor and analyze individual movements in real time, providing valuable insights for healthcare professionals, and it can deliver immersive training experiences for professionals such as surgeons or pilots. Since the technology is still emerging, developers need to address several challenges:
- Occlusion issues can occur when body parts block sensors. The technology must accurately capture and translate complex movements, including those of the user's fingers and facial expressions. To address this challenge, researchers are exploring machine learning (ML) algorithms and computer vision (CV) techniques, which improve the accuracy and responsiveness of full-body tracking systems.
- There can be latency between real and virtual movement. Edge computing, high-performance hardware, and network optimization can address this.
- Sensor accuracy can drift over time. Regular maintenance and sensor fusion, which combines data from multiple sensors, can help (a minimal fusion sketch follows this list).
- Regular recalibration is needed. Automated calibration and user-friendly interfaces can help.
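As a minimal illustration of the sensor fusion just mentioned, the sketch below fuses a drifting gyroscope estimate of a single joint angle with a noisy but drift-free accelerometer estimate using a complementary filter; the blend factor and function name are illustrative assumptions, not taken from any specific tracking product:

```python
def fuse_joint_angle(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Complementary filter, a minimal form of sensor fusion: the
    gyroscope integrates smoothly but accumulates drift over time,
    while the accelerometer's gravity-derived angle is noisy but
    drift-free, so a small (1 - alpha) share of the latter
    continuously corrects the former."""
    gyro_angle = prev_angle + gyro_rate * dt  # dead-reckoned estimate
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```

Full IMU suits generalize this idea to Kalman-style filters over many joints, but the drift-correction principle is the same.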
Overall, full-body tracking is an important aspect of VR technology that can enhance the user's presence and immersion in a virtual environment. As the technology advances, it will likely play an increasingly significant role in applications ranging from entertainment and gaming to healthcare and professional training.
AI-Powered VR
The integration of artificial intelligence with virtual reality represents a transformative convergence. It extends beyond conventional VR capabilities and gives a new cyber-virtual experience (Li et al., 2023). Two sets of primary benefits arise from the integration:
- Enhancing the immersive experience, taking it to unprecedented levels of interactivity, personalization, and realism
- Creating more dynamic, responsive, and personalized virtual environments
AI algorithms can process vast amounts of data in real time, and connecting them with Big Data can give instant insights and support complex decision-making (Panyaram, 2024). This allows VR experiences to adapt to user preferences and behaviors, providing a more interactive and engaging experience. The technology also enables intelligent virtual agents that interact with users in a natural, lifelike manner, responding to verbal and non-verbal cues with appropriate actions and dialogue. Examples of verbal cues are spoken words and phrases; examples of non-verbal cues include facial expressions, gestures, and body language. Agents use these cues to generate appropriate responses based on context and user preferences.
Neural Networks (NN) and Deep Learning (DL) algorithms enable real-time environmental generation: virtual worlds are dynamically created based on user behavior and preferences. Natural Language Processing (NLP) allows virtual characters and environments to engage in realistic conversations with users. It enables immersive storytelling, enhanced communication within virtual workspaces, and personalized guidance. NLP lets VR systems understand and respond to user commands and queries in conversational and local languages, providing a more native, localized, and user-friendly interface. For example, NLP can enable voice-activated controls in VR environments, much as voice commands control an automated car: users navigate and interact with virtual objects using simple voice commands (Shukla et al., 2024). Further, NLP can convert spoken words into immersive and interactive virtual scenes (Venkatachalam et al., 2024).
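As a toy illustration of voice-driven control, the sketch below maps a recognized utterance to a command intent with simple keyword rules; the vocabulary and function names are hypothetical, and a production system would use a trained NLP intent classifier rather than hand-written rules:

```python
# Hypothetical command vocabulary for a VR scene.
INTENTS = {
    "teleport": ("go to", "take me to", "teleport to"),
    "grab":     ("pick up", "grab", "hold"),
    "scale":    ("enlarge", "shrink", "resize"),
}

def parse_command(utterance: str):
    """Return (intent, argument) for the first matching keyword
    phrase, or None when the utterance matches no known intent."""
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        for phrase in phrases:
            if phrase in text:
                return intent, text.split(phrase, 1)[1].strip()
    return None

# parse_command("Take me to the rooftop") -> ("teleport", "the rooftop")
```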
Additionally, NLP can be used to develop more sophisticated conversational agents in VR, enabling more natural and engaging interactions between users and virtual characters. Neural Radiance Fields (NeRF) can synthesize novel views and realistic 3D scenes. NeRF uses deep neural networks to model volumetric scene appearance and geometry (Khalid, 2024). DL has been used in Augmented Reality/Virtual Reality (AR/VR) applications for over a decade, although server-side computation is more common when using AR devices (Ghasemi et al., 2022).
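The core of NeRF's image formation is a standard volume rendering quadrature along each camera ray. The sketch below shows only that compositing step, assuming the per-sample densities and colors have already been predicted by the trained network (which is omitted):

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering along one ray: convert per-sample
    densities (sigma) into alphas, accumulate transmittance, and
    alpha-composite the per-sample colors into a final pixel value."""
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each ray segment
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas                 # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)  # final RGB
```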
Computer Vision (CV) is the branch of AI concerned with image classification, image segmentation, novel view synthesis, and visual scene understanding (Khalid, 2024). It is traditionally used for object detection (Ghasemi et al., 2022) and facial expression recognition, although these applications require extensive training and data tagging (Ge et al., 2022). CV can enhance the immersive qualities of VR systems: computer vision algorithms can analyze and replicate real-world environments in virtual space, enabling more realistic and accurate virtual representations. For instance, computer vision can enable real-time object tracking, gesture recognition (such as sophisticated hand and body tracking), and spatial mapping in VR, so that virtual objects can be accurately placed and manipulated within real-world environments. Such innovations ensure that VR applications mimic real-world complexities with heightened fidelity.
Machine Learning (ML), an important subset of AI, plays a crucial role in refining interactions. The system learns from user preferences and behavior (such as social attitude detection) and continually improves the accuracy of responses and the realism of virtual scenarios (Dobre et al., 2022). For instance, AI can analyze user data such as interaction patterns and emotional responses, and ML can use that feedback to tailor the VR experience to the user's needs and preferences. This adaptability enhances user satisfaction. ML can also optimize rendering, reduce latency, and improve the overall visual quality of VR environments. ML and NLP, combined with Big Data, can help study electronic medical records (EMRs) and social media posts; this analysis allows patterns to be identified and cross-checked against symptoms of various mental disorders, supporting early detection and intervention (Mittal et al., 2023).
Advanced machine learning models have revolutionized avatar customization through generative adversarial networks (GANs), which produce photorealistic digital representations that reflect users' subtle expressions and emotional states. These AI-driven avatars exhibit contextually appropriate behavioral patterns and engage in meaningful interactions, taking advantage of transformer architectures for enhanced conversational capability and emotional intelligence.
Implementing reinforcement learning algorithms has significantly improved motion prediction and user intent recognition. With this, VR systems can anticipate user actions and preload relevant content, reducing latency and enhancing immersion. AI-powered Procedural Content Generation (PCG) brings tremendous change to virtual environment design, using sophisticated algorithms to create vast, detailed landscapes and architectures (entire virtual worlds). These virtual worlds evolve based on user interaction patterns and collective behavioral data; such worlds are almost impossible to design manually because of the time, effort, and cost involved. AI also improves the realism of virtual characters through advanced animations and behaviors, making interactions in the virtual environment more authentic and engaging.
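To ground the procedural generation idea, the sketch below builds a terrain heightmap from summed octaves of smoothed random noise, a classic non-AI baseline; AI-driven PCG would condition such layers on player interaction data rather than a fixed random seed (names and parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import zoom

def heightmap(size=128, octaves=4, seed=0):
    """Fractal terrain: each octave adds finer random detail at half
    the amplitude of the previous one, then the result is normalized."""
    rng = np.random.default_rng(seed)
    out = np.zeros((size, size))
    amplitude = 1.0
    for octave in range(octaves):
        n = 2 ** (octave + 2)  # coarse noise grid: 4, 8, 16, 32 cells
        coarse = rng.random((n, n))
        out += amplitude * zoom(coarse, size / n, order=1)  # bilinear upsample
        amplitude *= 0.5
    return (out - out.min()) / (out.max() - out.min())  # scale to [0, 1]
```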
Recent developments in Neural Rendering (NR) have introduced exceptional levels of visual fidelity (Tewari et al., 2022). Traditionally, rendering algorithms such as rasterization or ray tracing create synthetic images; neural rendering draws inspiration from both traditional computer graphics and ML. NR systems optimize real-time rendering based on gaze tracking and foveated rendering techniques, dynamically allocating computational resources to areas of user focus while maintaining peripheral awareness, thereby significantly improving performance without compromising visual quality.
Integrating computer vision algorithms with depth-sensing technologies has enhanced object recognition and spatial mapping capabilities, providing more precise interaction between virtual and physical elements. This advancement has particularly benefited mixed reality applications. Building on this, AI-based large language models (LLMs) can continuously analyze scenes, generate narratives dynamically, and adapt to changing environmental conditions, ensuring seamless integration of virtual elements within physical spaces while maintaining the story arc (Kumaran et al., 2023).
AI-enabled VR has far-reaching implications for various industries. In educational settings, AI-powered VR can create personalized learning experiences: intelligent tutoring systems adapt to each student's pace and learning style, giving students customized learning paths. This approach has been shown to improve learning outcomes and increase student motivation (Alashwal, 2024). In therapeutic and rehabilitation applications, AI can make treatments more effective by dynamically adjusting therapeutic exercises based on patient progress (İbişağaoğlu, 2024). The technology also has potential in treating mental health disorders: conditions such as anxiety and PTSD can be addressed by providing exposure therapy in a controlled, immersive environment. In professional training, AI can simulate complex, high-risk scenarios that respond to trainee actions in real time, providing a safe and effective training environment; realistic simulations let healthcare professionals practice complex procedures safely. In the entertainment sector, the technology can create multiplayer VR environments in which AI-driven intelligent non-player characters (NPCs) exhibit lifelike behaviors and adapt to player strategies, enhancing engagement in gaming, training, and simulation scenarios.
The development of AI-powered VR raises important questions about data privacy and security. As AI algorithms collect and analyze user data, there is a risk of compromising user anonymity and confidentiality. It is therefore essential to develop stringent data protection protocols and to inform users about how their data is used.
Future Research Considerations
Virtual reality has been evolving constantly since the 1950s-1960s and will continue to develop, albeit much faster than before. The emergence of Augmented Reality (AR) and the Metaverse will further fuel the importance of this technology. Beyond the five futuristic themes identified here, future researchers can consider the following five additional themes.
- The role of wireless and portable devices, such as lightweight, untethered VR headsets with high-performance capabilities, explored from both technological and economic perspectives.
- AR/VR integration, which requires the seamless blending of AR and VR to offer a mixed-reality experience.
- Virtual workspaces, which several global corporations are seriously exploring. This theme covers enhanced tools for remote collaboration, virtual offices, and professional training.
- Cloud-based VR streaming. Virtual environments generate enormous amounts of data that must be streamed to end users; cloud-based solutions will reduce hardware requirements and can render superior VR experiences.
- Quantum computing and VR, an emerging pairing given the pace at which quantum computing is developing. This theme focuses on quantum advances that enable faster simulations and more realistic physics.
Conclusion
Virtual Reality (VR) will continue to transform industries and human experiences. Rapid advancements in immersive technologies and computing capabilities (hardware and software) will shape the future of VR. Besides improving interaction and communication, they narrow the gap between the physical and virtual worlds. This chapter highlights key future trends, including immersive social experiences, ultra-realistic graphics, and haptic feedback, which promise greater interaction and authenticity. Full-body tracking and AI-powered VR extend VR's potential by enabling personalization, realism, and dynamic adaptability. These advancements have applications in medicine, education, training, and entertainment, and they open new opportunities for innovation and collaboration. Challenges remain, such as latency and data privacy; nevertheless, integration with emerging technologies positions VR as a key technology for the future of digital engagement.
References
Alashwal, M. (2024). Empowering Education Through AI: Potential Benefits and Future Implications for Instructional Pedagogy. PUPIL: International Journal of Teaching, Education and Learning, 201–212. https://doi.org/10.20319/ictel.2024.201212
Cannavò, A., Bottino, F., & Lamberti, F. (2024). Supporting motion-capture acting with collaborative Mixed Reality. Computers & Graphics, 124, 104090. https://doi.org/10.1016/j.cag.2024.104090
Caserman, P., Garcia-Agundez, A., & Göbel, S. (2020). A Survey of Full-Body Motion Reconstruction in Immersive Virtual Reality Applications. IEEE Transactions on Visualization and Computer Graphics, 26(10), 3089–3108. https://doi.org/10.1109/TVCG.2019.2912607
Dally, W. J., Keckler, S. W., & Kirk, D. B. (2021). Evolution of the Graphics Processing Unit (GPU). IEEE Micro, 41(6), 42–51. https://doi.org/10.1109/MM.2021.3113475
Deighan, M. T., Ayobi, A., & O’Kane, A. A. (2023). Social Virtual Reality as a Mental Health Tool: How People Use VRChat to Support Social Connectedness and Wellbeing. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3544548.3581103
Dobre, G. C., Gillies, M., & Pan, X. (2022). Immersive machine learning for social attitude detection in virtual reality narrative games. Virtual Reality, 26(4), 1519–1538. https://doi.org/10.1007/s10055-022-00644-4
Ge, H., Zhu, Z., Dai, Y., Wang, B., & Wu, X. (2022). Facial expression recognition based on deep learning. Computer Methods and Programs in Biomedicine, 215, 106621. https://doi.org/10.1016/j.cmpb.2022.106621
Ghasemi, Y., Jeong, H., Choi, S. H., Park, K.-B., & Lee, J. Y. (2022). Deep learning-based object detection in augmented reality: A systematic review. Computers in Industry, 139, 103661. https://doi.org/10.1016/j.compind.2022.103661
Gibbs, J. K., Gillies, M., & Pan, X. (2022). A comparison of the effects of haptic and visual feedback on presence in virtual reality. International Journal of Human-Computer Studies, 157, 102717. https://doi.org/10.1016/j.ijhcs.2021.102717
Guimarães, M., Prada, R., Santos, P. A., Dias, J., Jhala, A., & Mascarenhas, S. (2020). The Impact of Virtual Reality in the Social Presence of a Virtual Agent. Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, 1–8. https://doi.org/10.1145/3383652.3423879
Halabi, O., Abou El-Seoud, S., Alja’am, J., Alpona, H., Al-Hemadi, M., & Al-Hassan, D. (2017). Design of Immersive Virtual Reality System to Improve Communication Skills in Individuals with Autism. International Journal of Emerging Technologies in Learning (iJET), 12(05), 50. https://doi.org/10.3991/ijet.v12i05.6766
Hopster, J. (2021). What are socially disruptive technologies? Technology in Society, 67, 101750. https://doi.org/10.1016/j.techsoc.2021.101750
Hu, J., Yip, M. K., Alonso, G. E., Gu, S., Tang, X., & Jin, X. (2021). Efficient real-time dynamic diffuse global illumination using signed distance fields. The Visual Computer, 37(9–11), 2539–2551. https://doi.org/10.1007/s00371-021-02197-0
İbişağaoğlu, D. (2024). Integrating Virtual Reality and AI for Enhanced Patient Rehabilitation. Next Frontier For Life Sciences and AI, 8(1), 119. https://doi.org/10.62802/3eas9534
Khalid, S. (2024). Self-Supervised Visual Scene Understanding Using Structured Representations For Surgical Applications [University of Toronto]. https://utoronto.scholaris.ca/server/api/core/bitstreams/299268f0-79a8-430d-a62b-b69f0a1d88ee/content
Kumaran, V., Rowe, J., Mott, B., & Lester, J. (2023). SceneCraft: Automating Interactive Narrative Scene Generation in Digital Games with Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 19(1), 86–96. https://doi.org/10.1609/aiide.v19i1.27504
Li, K., Lau, B. P. L., Yuan, X., Ni, W., Guizani, M., & Yuen, C. (2023). Toward Ubiquitous Semantic Metaverse: Challenges, Approaches, and Opportunities. IEEE Internet of Things Journal, 10(24), 21855–21872. https://doi.org/10.1109/JIOT.2023.3302159
Minopoulos, G. M., Memos, V. A., Stergiou, K. D., Stergiou, C. L., & Psannis, K. E. (2023). A Medical Image Visualization Technique Assisted with AI-Based Haptic Feedback for Robotic Surgery and Healthcare. Applied Sciences, 13(6), 3592. https://doi.org/10.3390/app13063592
Mittal, A., Dumka, L., & Mohan, L. (2023). A Comprehensive Review on the Use of Artificial Intelligence in Mental Health Care. 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), 1–5. https://doi.org/10.1109/ICCCNT56998.2023.10308255
Motaharifar, M., Norouzzadeh, A., Abdi, P., Iranfar, A., Lotfi, F., Moshiri, B., Lashay, A., Mohammadi, S. F., & Taghirad, H. D. (2021). Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic. Frontiers in Robotics and AI, 8. https://doi.org/10.3389/frobt.2021.612949
Panyaram, S. (2024). Integrating Artificial Intelligence with Big Data for Real-Time Insights and Decision-Making in Complex Systems. FMDB Transactions on Sustainable Intelligent Networks, 1(2), 85–95. https://doi.org/10.69888/FTSIN.2024.000211
Pfeffer, S., Roessler, M., Maag, S., Langwaldt, L., Schunggart, L., Strigel, M., Senger, A., & Ochtrop, M. (2024). Virtual Ergonomics—Ergotyping in virtual environments. AHFE International, 157. https://doi.org/10.54941/ahfe1005490
Pollack, K., Kreuzer, W., & Majdak, P. (2022). Perspective Chapter: Modern Acquisition of Personalised Head-Related Transfer Functions – An Overview. In B. F.G. Katz & P. Majdak (Eds.), Advances in Fundamental and Applied Research on Spatial Audio. IntechOpen. https://doi.org/10.5772/intechopen.102908
Radoeva, R., Petkov, E., Kalushkov, T., Valcheva, D., & Shipkovenski, G. (2022). Overview on Hardware characteristics of Virtual Reality Systems. 2022 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), 01–05. https://doi.org/10.1109/hora55278.2022.9799932
Ramachandran, V., Schilling, F., Wu, A. R., & Floreano, D. (2021). Smart Textiles that Teach: Fabric‐Based Haptic Device Improves the Rate of Motor Learning. Advanced Intelligent Systems, 3(11), 2100043. https://doi.org/10.1002/aisy.202100043
Seetzen, H., Heidrich, W., Stuerzlinger, W., Ward, G., Whitehead, L., Trentacoste, M., Ghosh, A., & Vorozcovs, A. (2023). High Dynamic Range Display Systems. In M. C. Whitton (Ed.), Seminal Graphics Papers: Pushing the Boundaries, Volume 2 (1st ed., pp. 39–47). ACM. https://doi.org/10.1145/3596711.3596717
Shiu, J. Y., Liao, S. T., & Tzeng, S.-Y. (2023). How does online streaming reform e-commerce? An empirical assessment of immersive experience and social interaction in China. Humanities and Social Sciences Communications, 10(1), 224. https://doi.org/10.1057/s41599-023-01731-w
Shukla, A. K., Kansal, P., & Parkash, J. (2024). Enhancing Efficiency and Functionality of Voice-Controlled Cars Through NLP Techniques and Additional Features. In J. C. Bansal, S. Borah, S. Hussain, & S. Salhi (Eds.), Computing and Machine Learning (Vol. 1108, pp. 119–133). Springer Nature Singapore. https://doi.org/10.1007/978-981-97-6588-1_10
Suh, A. (2024). How virtual reality influences collaboration performance: A team-level analysis. Information Technology & People. https://doi.org/10.1108/itp-10-2023-1040
Tewari, A., Thies, J., Mildenhall, B., Srinivasan, P., Tretschk, E., Yifan, W., Lassner, C., Sitzmann, V., Martin‐Brualla, R., Lombardi, S., Simon, T., Theobalt, C., Nießner, M., Barron, J. T., Wetzstein, G., Zollhöfer, M., & Golyanik, V. (2022). Advances in Neural Rendering. Computer Graphics Forum, 41(2), 703–735. https://doi.org/10.1111/cgf.14507
Venkatachalam, N., Rayana, M., S, B. V., & S, P. (2024). Voice - Driven Panoramic Imagery: Real-Time Generative AI for Immersive Experiences. 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), 1133–1138. https://doi.org/10.1109/IDCIoT59759.2024.10467441
Wang, L., Shi, X., & Liu, Y. (2023). Foveated rendering: A state-of-the-art survey. Computational Visual Media, 9(2), 195–228. https://doi.org/10.1007/s41095-022-0306-4
Wang, Z., Philion, J., Fidler, S., & Kautz, J. (2021). Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 12518–12527. https://doi.org/10.1109/ICCV48922.2021.01231
Xiang, D., Bagautdinov, T., Stuyck, T., Prada, F., Romero, J., Xu, W., Saito, S., Guo, J., Smith, B., Shiratori, T., Sheikh, Y., Hodgins, J., & Wu, C. (2022). Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing. ACM Transactions on Graphics, 41(6), 1–15. https://doi.org/10.1145/3550454.3555456
Zamanifard, S., & Freeman, G. (2023). A Surprise Birthday Party in VR: Leveraging Social Virtual Reality to Maintain Existing Close Ties over Distance. Lecture Notes in Computer Science, 268–285. https://doi.org/10.1007/978-3-031-28032-0_23
Zheng, X., Su, Z., Wen, C., Xue, Z., & Jin, X. (2023). Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2308.08855
Zhou, Z., Yang, Y., Liu, J., Zeng, J., Wang, X., & Liu, H. (2022). Electrotactile Perception Properties and Its Applications: A Review. IEEE Transactions on Haptics, 15(3), 464–478. https://doi.org/10.1109/TOH.2022.3170723