NOSSDAV '22: Proceedings of the 32nd Workshop on Network and Operating Systems Support for Digital Audio and Video

Preserving privacy in mobile spatial computing

  • Nan Wu
  • Ruizhi Cheng
  • Songqing Chen
  • Bo Han

Mapping and localization are the key components of mobile spatial computing, facilitating interactions between users and the digital model of the physical world. To enable localization, mobile devices continuously capture images of their real-world surroundings and upload them to a server that holds spatial maps. This raises privacy concerns about the potential leakage of sensitive information in both spatial maps and localization images (e.g., when used in confidential industrial settings or our homes). Motivated by these issues, we present in this paper a holistic research agenda for designing principled approaches to preserving privacy in spatial mapping and localization. We introduce our ongoing research, including learning-assisted noise generation to shield spatial maps, a distributed architecture with intelligent aggregation to protect localization images, and end-to-end privacy preservation with fully homomorphic encryption. We also discuss the technical challenges, our preliminary results, and open research problems in these areas.

Revisiting super-resolution for internet video streaming

  • Zelong Wang
  • Zhenxiao Luo
  • Miao Hu
  • Di Wu
  • Youlong Cao
  • Yi Qin

Recent advances in neural-enhanced techniques, especially super-resolution (SR), show great potential to revolutionize the landscape of Internet video delivery. However, quite a few key questions (e.g., how to choose a proper resolution configuration for training samples, how to set the training patch size, how best to select patches, and how often to update the SR model) have not been well investigated or understood. In this paper, we conduct a dedicated measurement study to revisit super-resolution techniques for Internet video streaming. Our measurements are based on real-world video datasets, and the results yield a number of important insights: (1) an SR model trained with low-resolution patches (e.g., (540p, 1080p) pairs) can achieve almost the same performance as one trained with high-resolution patches (e.g., (1080p, 2160p) pairs); (2) compared with the saliency of training patches, their size has little impact on the performance of the trained SR model; (3) the video-quality improvement brought by more frequent SR model updates is not significant. We also discuss the implications of our findings for system design, and we believe our work helps pave the way for the success of future neural-enhanced video streaming systems.
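
The resolution-configuration question above comes down to how matched (LR, HR) training patches are cut from aligned frame pairs, e.g. (540p, 1080p) at scale 2. Below is a minimal sketch of that sampling step; the function name and uniform-random selection are illustrative, and real pipelines (including the study) also consider saliency-aware selection:

```python
import numpy as np

def make_patch_pairs(lr, hr, patch, scale, n, seed=0):
    """Cut n matched (LR, HR) training patches from an aligned frame pair.
    Each HR patch covers the same region as its LR patch, scaled up."""
    rng = np.random.default_rng(seed)
    h, w = lr.shape[:2]
    pairs = []
    for _ in range(n):
        y = rng.integers(0, h - patch + 1)   # random top-left corner in LR
        x = rng.integers(0, w - patch + 1)
        lr_p = lr[y:y + patch, x:x + patch]
        hr_p = hr[y * scale:(y + patch) * scale, x * scale:(x + patch) * scale]
        pairs.append((lr_p, hr_p))
    return pairs

# A (540p, 1080p) pair at scale 2 with 64x64 LR patches.
lr = np.zeros((540, 960), dtype=np.uint8)
hr = np.zeros((1080, 1920), dtype=np.uint8)
pairs = make_patch_pairs(lr, hr, patch=64, scale=2, n=4)
print(pairs[0][0].shape, pairs[0][1].shape)
```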

Geometry-guided compact compression for light field image using graph convolutional networks

  • Yu Liu
  • Linhui Wei
  • Heming Zhao
  • Jingming Shan
  • Yumei Wang

A light field records information about the light in a space and helps regenerate content effectively, making immersive media more promising. In this paper, we propose a geometry-guided compact compression scheme (GCC) for light field images. In GCC, geometry comprises both the structure within a single sub-aperture image (SAI) and the relationships among SAIs, which can be exploited to fully explore a compact representation of the light field image. The light field image is grouped into key SAIs and non-key SAIs. The key SAIs are obtained by down-sampling in the angular domain and arranged into a pseudo-sequence that is then compressed. We use a superpixel-based segmentation algorithm to detect contours and obtain a sketch map for the non-key SAIs. Meanwhile, a graph model establishes the relationships among the SAIs through its vertices and edges. On the decoder side, the light field image is reconstructed by graph convolutional networks, and the sketch map refines the details of the recovered images. Experimental results show the benefit of GCC in terms of rate-distortion performance compared with several state-of-the-art methods on real-world and synthetic light field datasets. Moreover, the proposed GCC generalizes to datasets not seen during training.

Measurement of the responses of cloud-based game streaming to network congestion

  • Xiaokun Xu
  • Mark Claypool

Cloud-based game streaming has emerged as a viable way to play games anywhere with a good network connection. While previous research has studied the network turbulence of game streaming traffic, there is as yet no work exploring how cloud-based game streaming responds to rival connections on a congested network. This paper presents experiments measuring and comparing the network responses of three popular commercial streaming services - Google Stadia, NVidia GeForce Now, and Amazon Luna - competing with TCP flows on a congested network. Analysis of the bitrates, loss, and latency shows that the three systems adapt differently to network congestion and vary in their fairness to competing TCP flows sharing a bottleneck link.
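
Fairness between a game stream and competing TCP flows on a shared bottleneck is commonly quantified with Jain's fairness index. A minimal sketch of that computation over per-flow throughputs (the flows and rates below are hypothetical, not the paper's measurements):

```python
def jains_fairness(throughputs):
    """Jain's fairness index over per-flow throughputs.
    Ranges from 1/n (one flow takes everything) to 1.0 (perfectly fair)."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# A game stream at 30 Mbps sharing a bottleneck with three 10 Mbps TCP flows.
print(round(jains_fairness([30.0, 10.0, 10.0, 10.0]), 3))
```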

Does TCP new congestion window validation improve HTTP adaptive streaming performance?

  • Mihail Yanev
  • Stephen McQuistin
  • Colin Perkins

When using HTTP adaptive streaming, video traffic exhibits on-off behaviour with frequent idle periods that can interact poorly with TCP congestion control algorithms. New congestion window validation (New CWV) modifies TCP to allow senders to recover their sending rate more quickly after certain idle periods. While previous work has shown that New CWV can improve transport performance for streaming video, it has not been shown whether this translates into improved application performance in terms of playback stability. In this paper, we show that New CWV can reduce video re-buffering events by up to 4% and limit representation switches by 12%, without any changes to existing rate adaptation algorithms.
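
The transport-level difference can be sketched in deliberately simplified form: classic TCP performs slow-start restart after an idle period, collapsing the congestion window back to the initial window, whereas New CWV (RFC 7661) lets the sender retain part of its previously validated window. The constants and the half-window rule below are illustrative simplifications, not the RFC's exact state machine:

```python
IW = 10  # initial window, in segments (illustrative constant)

def cwnd_after_idle(cwnd, idle_rtts, new_cwv=False):
    """Simplified sketch of the post-idle congestion window.
    Classic TCP: slow-start restart from the initial window.
    New CWV-style: keep a validated share, so the rate recovers faster."""
    if idle_rtts < 1:
        return cwnd                  # idle too short to trigger either rule
    if new_cwv:
        return max(cwnd // 2, IW)    # retain half of the validated window
    return IW                        # slow-start restart

# After a long idle, a 100-segment window collapses to 10 without New CWV
# but resumes at 50 with it.
print(cwnd_after_idle(100, idle_rtts=5), cwnd_after_idle(100, idle_rtts=5, new_cwv=True))
```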

PRIOR: deep reinforced adaptive video streaming with attention-based throughput prediction

  • Danfu Yuan
  • Yuanhong Zhang
  • Weizhan Zhang
  • Xuncheng Liu
  • Haipeng Du
  • Qinghua Zheng

Video service providers have deployed dynamic bitrate adaptation services to fulfill user demands for higher video quality. However, fluctuating and unstable network conditions hinder further improvement of adaptive bitrate (ABR) algorithms. Existing rule-based approaches fail to guarantee accurate throughput estimates, and learning-based algorithms are highly sensitive to network variability. Therefore, obtaining effective and stable throughput estimates has become one of the critical challenges in further enhancing ABR methods. To address this concern, we propose PRIOR, an ABR algorithm that fuses an effective throughput prediction module with a state-of-the-art multi-agent reinforcement learning method to provide a high quality of experience (QoE). PRIOR aims to maximize the QoE metric by directly utilizing accurate throughput estimates rather than past throughput measurements. Specifically, PRIOR employs a lightweight prediction module with an attention mechanism to obtain effective future throughput estimates. Considering the attractive features introduced by the HTTP/3 protocol, we apply PRIOR to trace-driven simulations and real-world scenarios over both HTTP/1.1 and HTTP/3. Trace-driven emulation shows that PRIOR outperforms existing ABR schemes over HTTP/1.1 and HTTP/3, and that our prediction module can also reinforce the performance of other ABR algorithms. Extensive real-world evaluation demonstrates the superiority of PRIOR over existing state-of-the-art ABR schemes.
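
To illustrate the general idea of attention-weighted throughput prediction (this is a toy sketch, not PRIOR's actual learned module), one can pool past throughput samples with softmax attention weights derived from their similarity to recent behaviour:

```python
import numpy as np

def attention_predict(history, query_len=3):
    """Toy dot-product-style attention over past throughput samples (Mbps).
    The mean of the most recent samples acts as the query; samples closer
    to it receive higher softmax weight in the pooled estimate."""
    h = np.asarray(history, dtype=float)
    query = h[-query_len:].mean()                   # summarize the recent past
    scores = -np.abs(h - query)                     # similarity to the query
    weights = np.exp(scores) / np.exp(scores).sum() # softmax over history
    return float(weights @ h)                       # attention-weighted estimate

print(attention_predict([8.0, 9.0, 7.5, 8.5, 8.0]))
```

Since the weights sum to one, the estimate always lies within the range of the observed samples, unlike a naive linear extrapolation that can overshoot on a throughput spike.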

FLAD: a human-centered video content flaw detection system for meeting recordings

  • Haihan Duan
  • Junhua Liao
  • Lehao Lin
  • Wei Cai

Widely adopted digital cameras and smartphones have generated a large number of videos, bringing a tremendous workload to video editors. Recently, a variety of automatic and semi-automatic video editing methods have been proposed to tackle this issue in specific areas. However, for the production of meeting recordings, existing studies depend heavily on additional equipment at conference venues, such as infrared cameras or special microphones, which is not practical. Moreover, current video quality assessment work focuses mainly on the quality loss after compression or encoding rather than on human-centered video content flaws. In this paper, we design and implement FLAD, a human-centered video content flaw detection system for meeting recordings, which builds a bridge between subjective sense and objective measures from a human-centered perspective. The experimental results show that the proposed algorithms achieve state-of-the-art video content flaw detection performance for meeting recordings.

CAVE: caching 360° videos at the edge

  • Ahmed Ali-Eldin
  • Chirag Goel
  • Mayank Jha
  • Bo Chen
  • Klara Nahrstedt
  • Prashant Shenoy

While 360° videos are gaining popularity due to the emergence of VR technologies, storing and streaming such videos can incur up to 20X higher overheads than traditional HD content. Edge caching, which involves caching and serving 360° videos from edge servers, is one possible approach for addressing these overheads. Prior work on 360° video caching has used past history to cache tiles that are likely to be in a viewer's field of view, and has not considered methods to intelligently share a limited edge cache across a set of videos that exhibit large variations in popularity, size, content, and user abandonment patterns. To this end, we present CAVE, an adaptive edge caching framework that intelligently optimizes cache allocation across a set of videos, taking into account video content, size, and popularity. Our experiments using realistic video workloads show that CAVE improves cache hit rates, and thus network savings, by up to 50% over state-of-the-art approaches, while scaling to up to two thousand videos per edge cache. In addition, our algorithm is embarrassingly parallel, allowing CAVE to scale beyond state-of-the-art solutions that typically do not support parallelization.
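
Popularity- and size-aware cache allocation can be illustrated with a simple greedy baseline that fills the cache in order of expected hits per byte. CAVE's optimizer is more sophisticated than this, and the video records below are hypothetical:

```python
def allocate_cache(videos, capacity):
    """Greedy sketch of popularity- and size-aware edge-cache allocation:
    rank videos by expected hits per unit of cache space, then admit
    them in that order until the capacity budget is exhausted."""
    ranked = sorted(videos, key=lambda v: v["popularity"] / v["size"], reverse=True)
    cached, used = [], 0
    for v in ranked:
        if used + v["size"] <= capacity:
            cached.append(v["name"])
            used += v["size"]
    return cached

videos = [
    {"name": "a", "popularity": 100, "size": 50},
    {"name": "b", "popularity": 80,  "size": 20},  # best hits-per-byte ratio
    {"name": "c", "popularity": 10,  "size": 40},
]
print(allocate_cache(videos, capacity=70))
```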

Power-efficient live virtual reality streaming using edge offloading

  • Ziehen Zhu
  • Xianglong Feng
  • Zhongze Tang
  • Nan Jiang
  • Tian Guo
  • Lisong Xu
  • Sheng Wei

This paper addresses the significant power challenges in live virtual reality (VR) streaming (a.k.a. 360-degree video streaming), where VR view rendering and advanced deep learning operations (e.g., super-resolution) consume considerable power, draining the battery-constrained VR headset. We develop EdgeVR, a power optimization technique for live VR streaming that offloads on-device VR rendering and deep learning operations to an edge server for power savings. To address the significantly increased motion-to-photon (MtoP) latency caused by edge offloading, we develop a live VR viewport prediction method to pre-render the VR views on the edge server and compensate for the round-trip delays. We evaluate the effectiveness of EdgeVR using an end-to-end live VR streaming system with an empirical VR head movement dataset involving 48 users watching 9 VR videos. The results reveal that EdgeVR achieves power-efficient live VR streaming with low MtoP latency.
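
The round-trip-compensation idea can be sketched as constant-velocity extrapolation of head orientation: predict where the viewport will be one round trip ahead so the edge can pre-render that view. This is a deliberately minimal stand-in for the paper's predictor, and the sample values are hypothetical:

```python
def predict_yaw(samples, horizon_ms):
    """Constant-velocity extrapolation of head yaw (degrees).
    `samples` is a list of (timestamp_ms, yaw_deg) pairs; the prediction
    masks the edge round trip by looking `horizon_ms` into the future."""
    (t0, y0), (t1, y1) = samples[-2], samples[-1]
    velocity = (y1 - y0) / (t1 - t0)            # degrees per millisecond
    return (y1 + velocity * horizon_ms) % 360.0 # wrap to [0, 360)

# Head turning right at 0.05 deg/ms; compensate a 60 ms round trip.
print(predict_yaw([(0, 10.0), (20, 11.0)], horizon_ms=60))
```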

Dynamic DNN model selection and inference offloading for video analytics with edge-cloud collaboration

  • Xuezhi Wang
  • Guanyu Gao
  • Xiaohu Wu
  • Yan Lyu
  • Weiwei Wu

The edge-cloud collaboration architecture can support Deep Neural Network (DNN) based video analytics with low inference delays and high accuracy. However, video analytics pipelines with edge-cloud collaboration are complex, involving decision-making over many coupled control knobs. We propose a deep reinforcement learning-based approach, named ModelIO, for dynamic DNN Model selection and Inference Offloading for video analytics with edge-cloud collaboration. We jointly consider the decision-making for video pre-processing, DNN model selection, local inference, and offloading in a video analytics system to maximize performance. Our method can learn the optimal control policy for video analytics with edge-cloud collaboration without complex system modeling. We implement a real-world testbed and conduct experiments to evaluate the performance of our method. The results show that our method can significantly improve the system's processing capacity, reduce average inference delays, and maximize overall rewards.

Applying VertexShuffle toward 360-degree video super-resolution

  • Na Li
  • Yao Liu

With the recent successes of deep learning models, the performance of 2D image super-resolution has improved significantly. Inspired by recent state-of-the-art 2D super-resolution models and spherical CNNs, in this paper we design a novel spherical super-resolution (SSR) approach for 360-degree videos. To address the bandwidth waste associated with 360-degree video transmission/streaming and to save computation, we propose the Focused Icosahedral Mesh to represent a small area on the sphere and construct matrices to rotate spherical content to the focused mesh area. We also propose a novel VertexShuffle operation on the mesh, motivated by the 2D PixelShuffle operation. We compare our SSR approach with state-of-the-art 2D super-resolution models and show that SSR has the potential to achieve significant benefits when applied to spherical signals.
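
The 2D PixelShuffle operation that motivates VertexShuffle rearranges channel depth into spatial resolution. A minimal NumPy version of the planar form (VertexShuffle itself generalizes this to mesh vertices):

```python
import numpy as np

def pixel_shuffle(x, r):
    """2D PixelShuffle: rearrange a (C*r*r, H, W) array into (C, H*r, W*r),
    trading channels for an r-times larger spatial grid, as in sub-pixel
    convolution upscaling layers."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Four 2x2 channels (C=1, r=2) interleave into a single 4x4 channel.
x = np.arange(16).reshape(4, 2, 2)
print(pixel_shuffle(x, 2).shape)
```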

Atlas level rate distortion optimization for 6DoF immersive video compression

  • Soonbin Lee
  • Jong-Beom Jeong
  • Eun-Seok Ryu

The Moving Picture Experts Group (MPEG) has started an immersive media standardization project to enable multi-view video and depth representation of three-dimensional (3D) scenes. The MPEG immersive video (MIV) standard explores six degrees of freedom (6DoF) technologies for immersive content to support motion parallax. Although the standard is designed to compress multi-view immersive media, MIV coding has not been investigated from the perspective of bit allocation. This paper presents an efficient bit allocation scheme for atlas-level compression. The proposed scheme establishes a model of view-synthesis distortion and analyzes the impact of distortion on complete views and patches. This paper also introduces packing alignment to separate the two types of patches and characterize the distortion of each MIV atlas. By considering these characteristics, the proposed scheme derives a bitrate ratio between texture and geometry for model-based view-rendering optimization. Experimental results show that the proposed method achieves a more accurate reconstruction of sequences under common test conditions (CTCs).