MMVE '23: Proceedings of the 15th International Workshop on Immersive Mixed and Virtual Environment Systems

Experiencing Rotation and Curvature Gain for Redirected Walking in Virtual Reality

  • Emil Magnar Kjenstad
  • Halvor Kristian Ringsby
  • Rahel Jamal Gaeb
  • Çağrı Erdem
  • Konstantinos Kousias
  • Ozgu Alay
  • Carsten Griwodz

The proliferation of virtual reality (VR) interaction in the wake of the Metaverse trend will place an increasing number of applications and services into virtual environments (VEs). In recent years, interactions with VEs have been studied intensely, but such studies very frequently focus on stationary users or on users who rely on specialized contraptions to act in the VE (e.g., omnidirectional treadmills). Free movement in the VE is typically achieved through controller input, which is a major hurdle to entering and acting in it naturally. The goal of this study is to translate natural walking motion from the real environment (RE) into the VE. In particular, we explore to what extent redirected walking (RW) is achievable without being noticed by users. Towards this goal, we test two RW methods: rotation gain and curvature gain. Based on the responses of the participants in our study, we find a statistically significant difference, at 90% confidence, between the levels of rotation gain. In contrast, the levels of curvature gain are not noticeable (i.e., no statistically significant difference is observed).
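
For readers unfamiliar with the two techniques, the following minimal sketch shows how rotation and curvature gains are conventionally applied in redirected walking. The function names and the gain values are illustrative assumptions, not the levels evaluated in the study.

    import math

    def apply_rotation_gain(real_yaw_delta, gain):
        """Scale the user's real head rotation (radians) before applying
        it to the virtual camera, so the VE turns slightly more or less
        than the user's body does."""
        return real_yaw_delta * gain

    def curvature_yaw_injection(step_distance_m, radius_m):
        """Inject a small virtual yaw (radians) per step walked, which
        bends a physically straight virtual path onto a circular arc of
        the given radius in the real environment."""
        return step_distance_m / radius_m

    # Example: a 90-degree real turn rendered with a 1.1x rotation gain,
    # and the yaw injected during a 0.7 m step on a 7.5 m radius arc.
    print(math.degrees(apply_rotation_gain(math.radians(90.0), gain=1.1)))
    print(math.degrees(curvature_yaw_injection(0.7, 7.5)))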

Real-Time Layered View Synthesis for Free-Viewpoint Video from Unreliable Depth Information

  • Teresa Hernando
  • Daniel Berjón
  • Francisco Morán
  • Javier Usón
  • Cesar Díaz
  • Julián Cabrera
  • Narciso García

In this work, we present a novel approach for the real-time generation of synthetic views for free-viewpoint video. Our system is based on purely passive stereo cameras which, under the constraints of real-time operation, yield unreliable depth maps, especially in background areas. To solve this issue, we propose a layered synthesis algorithm that combines offline and online sources of colour and depth information to increase the quality of the synthesized image. Foreground objects are synthesized using online depth and colour information. The background, by contrast, is assumed to be stationary, so its geometry can be reconstructed offline. However, foreground objects affect the colour of the background, so to render it we carefully mix online and offline colour information. Finally, foreground and background are combined to form the final image.
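
To make the layered idea concrete, here is a minimal compositing sketch: foreground pixels come from online data, while background pixels mix offline colour with online colour wherever the latter is reliable. The array names and the reliability mask are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def composite(fg_color, fg_mask, online_bg, offline_bg, online_reliable):
        """All colour inputs are HxWx3 float arrays; masks are HxW booleans.
        fg_mask marks pixels covered by foreground objects (online depth);
        online_reliable marks background pixels whose online colour can be
        trusted (e.g., not affected by a foreground object)."""
        bg = np.where(online_reliable[..., None], online_bg, offline_bg)
        return np.where(fg_mask[..., None], fg_color, bg)

    # Toy 2x2 example: one foreground pixel, one unreliable background pixel.
    h, w = 2, 2
    fg = np.full((h, w, 3), 0.9)
    on_bg = np.full((h, w, 3), 0.5)
    off_bg = np.full((h, w, 3), 0.2)
    fg_mask = np.array([[True, False], [False, False]])
    reliable = np.array([[True, True], [False, True]])
    print(composite(fg, fg_mask, on_bg, off_bg, reliable))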

Modeling Gamer Quality-of-Experience Using a Real Cloud VR Gaming Testbed

  • Kuan-Yu Lee
  • Jia-Wei Fang
  • Yuan-Chun Sun
  • Cheng-Hsin Hsu

Cloud Virtual Reality (VR) gaming offloads computationally intensive VR games to resourceful data centers. Ensuring good Quality of Experience (QoE) in cloud VR gaming, however, is inherently challenging, as VR gamers demand high visual quality, short response time, and low cybersickness. In this paper, we investigate the QoE of cloud VR gaming in multiple steps. First, we build a cloud VR gaming testbed, which allows us to measure various Quality of Service (QoS) metrics. Second, we carry out a user study to understand the effects of diverse factors, including encoding settings, network conditions, and game genres, on gamer QoE, quantified by Mean Opinion Score (MOS). Using our user study results, we construct QoE models for cloud VR gaming, which, to the best of our knowledge, has not been done in the literature. Last, we apply our QoE models to develop a bitrate allocation algorithm for multiple cloud VR gamers that achieves better overall QoE than bandwidth-fair bitrate allocation.
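
As a rough illustration of why QoE-aware allocation can beat bandwidth-fair allocation, the sketch below greedily hands out bitrate increments to whichever gamer's predicted MOS gains the most. The logarithmic MOS curve and the per-gamer sensitivity values are illustrative placeholders, not the paper's fitted QoE models.

    import math

    def qoe(bitrate_mbps, sensitivity):
        # Placeholder diminishing-returns MOS curve on a 1-5 scale.
        return min(5.0, 1.0 + sensitivity * math.log1p(bitrate_mbps))

    def allocate(total_mbps, sensitivities, step=1.0):
        """Greedy allocation: each step of bitrate goes to the gamer
        with the largest marginal MOS improvement."""
        rates = [0.0] * len(sensitivities)
        budget = total_mbps
        while budget >= step:
            gains = [qoe(r + step, s) - qoe(r, s)
                     for r, s in zip(rates, sensitivities)]
            best = max(range(len(rates)), key=lambda i: gains[i])
            rates[best] += step
            budget -= step
        return rates

    # Three gamers with different bitrate sensitivities (e.g., genres).
    sens = [0.8, 1.0, 1.4]
    fair = [60.0 / 3] * 3
    greedy = allocate(60.0, sens)
    print(sum(qoe(r, s) for r, s in zip(fair, sens)),
          sum(qoe(r, s) for r, s in zip(greedy, sens)))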

360EAVP: A 360-degree Edition-Aware Video Player

  • Gabriel De Castro Araújo
  • Henrique Domingues Garcia
  • Mylene Farias
  • Ravi Prakash
  • Marcelo Carvalho

In this work, we introduce 360EAVP, an open-source Web browser-based application for streaming and visualizing 360-degree edited videos on head-mounted displays (HMDs). The proposed application builds upon the Virtual Reality Tile-Based 360-degree Player (VDTP) by adding new functionalities. Specifically, this paper explains the main features introduced by 360EAVP, which are: 1) operation on HMDs based on the user's real-time viewport; 2) dynamic editing via "snap-change" or "fade-rotation" combined with "blinking"; 3) visibility evaluation of the user's field of view with respect to the player's cubic projection (for purposes of tile requests); 4) incorporation of editing timing information into the operation of the ABR algorithm; 5) a viewport prediction module based on either linear regression or ridge regression; and 6) a data collection and logging module for video playback. The application can be freely used to support research on topics such as optimization of tile-based 360-degree edited video streaming, psycho-physical experiments, dataset generation, and ABR algorithm development, to name a few.
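
To illustrate feature 5, the sketch below shows the regression idea: fit recent head-yaw samples against time and extrapolate a short horizon ahead; setting the ridge penalty to zero recovers ordinary linear regression. The window, horizon, and penalty values are illustrative assumptions, not 360EAVP's actual configuration.

    import numpy as np

    def predict_yaw(timestamps, yaws, horizon_s=0.5, ridge_lambda=1e-2):
        """timestamps, yaws: 1-D arrays of recent samples (seconds, degrees).
        Returns the yaw expected horizon_s after the last sample."""
        t = np.asarray(timestamps, dtype=float)
        y = np.asarray(yaws, dtype=float)
        X = np.stack([np.ones_like(t), t], axis=1)  # bias + time features
        # Closed-form ridge regression: w = (X^T X + lambda*I)^-1 X^T y;
        # ridge_lambda = 0 reduces this to plain linear regression.
        A = X.T @ X + ridge_lambda * np.eye(2)
        w = np.linalg.solve(A, X.T @ y)
        return w[0] + w[1] * (t[-1] + horizon_s)

    # Extrapolate 0.5 s ahead from four recent head-orientation samples.
    print(predict_yaw([0.0, 0.1, 0.2, 0.3], [10.0, 12.0, 14.5, 16.0]))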

Evaluation of Interaction Methods in an Extended Reality Training Environment

  • Carlos Cortés
  • María Rubio
  • Jesus Gutierrez
  • Beatriz Sánchez
  • Pablo Pérez
  • Narciso García

This paper details the assessment of four distinct interaction techniques for immersive training in the construction industry. The primary focus of this study is to evaluate the extent to which these methods can replicate physical-environment interactions within virtual environments. A common challenge in immersive training for construction is the need for external tools to facilitate interactions, which can negatively impact the user experience. To mitigate this issue, we introduce four immersive training use cases that rely on natural interfaces for interaction. Additionally, we conducted a Quality of Experience (QoE) experiment to validate the efficacy of these use cases and interaction techniques, assessing acceptability, presence, and visual quality. Moreover, this paper presents a classification system for the four natural interaction methods, which we evaluated for differences in visual quality.

Adaptive Streaming of Visual Volumetric Video-based Coding Media

  • Srinivas Gudumasu
  • Gireg Maury
  • Ariel Glasroth
  • Ahmed Hamza

High-quality 3D point clouds have recently emerged as an advanced representation of immersive media, enabling new forms of interaction in virtual environments and augmented reality applications. A point cloud consists of a set of points represented in 3D space using coordinates indicating the location of each point, along with one or more attributes, such as the color associated with each point, transparency, time of acquisition, reflectance, or material property. This demo paper presents an adaptive streaming system for visual volumetric video-based coding (V3C) content running on NReal Light AR glasses. The streaming system includes real-time HEVC decoding, DASH streaming, composition onto a scene, and view-dependent rendering. The presented system is implemented based on the latest V3C coding and carriage technologies developed by the Moving Picture Experts Group (MPEG) and on DASH-based adaptive streaming techniques to provide an immersive experience while reducing the overall transmission bandwidth.
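
The sketch below illustrates the throughput-driven representation selection at the core of DASH-style adaptation as described; the bitrate tiers and safety margin are illustrative assumptions, not the demo system's actual configuration.

    # Hypothetical V3C quality tiers, in kbps.
    REPRESENTATIONS_KBPS = [2_000, 8_000, 20_000, 45_000]

    def pick_representation(measured_throughput_kbps, margin=0.8):
        """Choose the highest-bitrate representation that fits within a
        safety margin of the recently measured download throughput."""
        budget = measured_throughput_kbps * margin
        best = REPRESENTATIONS_KBPS[0]
        for rate in REPRESENTATIONS_KBPS:
            if rate <= budget:
                best = rate
        return best

    print(pick_representation(25_000))  # -> the 20_000 kbps tier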

Design and Implementation of a Web-based Multi-user Virtual Reality System for Virtual Tour of Remote Places

  • Aizawa Shuta
  • Sakurai Shouta
  • Duc Nguyen

In this paper, we present a novel Web-based multi-user Virtual Reality system for a virtual tour of remote places. The proposed system supports not only 360-degree images but also live 360-degree videos. In addition, a synchronization mechanism is proposed to allow multiple users to participate in a virtual tour at the same time. A subjective experiment demonstrates that the proposed system can provide a high level of spatial presence and involvement to users.
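
The abstract does not detail the synchronization mechanism, so the sketch below shows one plausible drift-correction scheme under assumed names (TourCoordinator, SYNC_THRESHOLD_S): each client seeks to a shared reference playback position whenever its local position drifts beyond a threshold.

    import time

    SYNC_THRESHOLD_S = 0.5  # assumed maximum tolerated drift

    class TourCoordinator:
        """Tracks the shared playback position of the live 360 video."""
        def __init__(self):
            self.start_wall_clock = time.time()

        def reference_position(self):
            """Playback position all participants should be at."""
            return time.time() - self.start_wall_clock

    def resync(coordinator, local_position):
        """Return the position a client should seek to, or its own
        position if it is already within the drift threshold."""
        ref = coordinator.reference_position()
        drifted = abs(ref - local_position) > SYNC_THRESHOLD_S
        return ref if drifted else local_position

    coord = TourCoordinator()
    time.sleep(1.0)
    print(resync(coord, local_position=0.2))  # drifted: seek to ~1.0 s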