Abstracts, Papers, and Slides

June 7, 2012, Thursday

Debates Room, Hart House

Keynote Address 1

Dr. Ali C. Begen, Cisco Systems

9:00 a.m. — 10:00 a.m.

TV Everywhere [Presentation]

Chair: Baochun Li, University of Toronto

Abstract: As more and more PCs and handheld devices get connected, consumers are migrating to the Web to watch their favorite shows and movies. Increasingly, the Web is also coming to digital TV, which now incorporates movie downloads and streaming. Similarly, consumers want their TV content on alternative devices. What does this mean for service and content providers? What do they have to do to avoid losing their subscribers and revenue streams? This talk gives an overview of the TV Everywhere technologies available for integrating emerging over-the-top content into a managed network and for making premium content accessible on unmanaged devices. The talk also presents a few real-world use cases.

Session 1: Streaming I

10:30 a.m. — 12:30 p.m.

Session Chair: Gwendal Simon (Telecom Bretagne)

Why Are State-of-the-art Flash-based Multi-tiered Storage Systems Performing Poorly for HTTP Video Streaming? [Presentation]

Moonkyung Ryu, Hyojun Kim, Umakishore Ramachandran (Georgia Institute of Technology)

Abstract: MLC flash memory is a promising technology for building a high-performance and cost-effective video streaming system when used as an intermediate-level cache in a multi-tiered storage hierarchy. Therefore, we were quite surprised when, through extensive measurements, we found that two state-of-the-art flash-based multi-tiered storage systems (namely, flashcache and ZFS) have quite disappointing performance for HTTP video streaming using the DASH protocol. We have conducted a thorough analysis to understand the reasons for the poor performance of these two systems. In a nutshell, unless attention is paid to the unique performance characteristics of flash memory-based SSDs, we could end up with suboptimal or even poor performance, as we discovered through experimentation with these two systems. Based on the analysis, we present design guidelines for building a cost-effective, high-performance HTTP video streaming server.


What Happens when HTTP Adaptive Streaming Players Compete for Bandwidth? [Presentation]

Saamer Akhshabi, Lakshmi Anantakrishnan, Constantine Dovrolis (Georgia Institute of Technology), Ali C. Begen (Cisco Systems)

Abstract: With increasing demand for high-quality video content over the Internet, it is becoming more likely that two or more adaptive streaming players share the same network bottleneck and compete for available bandwidth. This competition can lead to three performance problems: player instability, unfairness between players, and bandwidth underutilization. However, the dynamics of such competition and the root cause of these three problems are not yet well understood. In this paper, we focus on the problem of competing video players and describe how the typical behavior of an adaptive streaming player in its steady state, which consists of periods of activity followed by periods of inactivity (ON-OFF periods), is the main root cause behind the problems listed above. We use two adaptive players to experimentally showcase these issues. Then, focusing on player instability, we test how several factors (the ON-OFF durations, the available bandwidth and its relation to the available bitrates, and the number of competing players) affect stability.
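The ON-OFF bandwidth-overestimation effect the abstract describes can be illustrated with a toy model (not the paper's experimental setup): two players share a bottleneck, and a purely rate-based adapter oscillates because it measures the full link rate whenever its download happens to fall in the other player's OFF period. The link capacity, bitrate ladder, and alternating-overlap pattern below are illustrative assumptions.

```python
# Toy model of two adaptive players sharing a bottleneck link.
# Assumed setup: 4 Mb/s link, three encoded bitrates, and ON periods
# that overlap on even steps and miss each other on odd steps.
BOTTLENECK = 4.0             # link capacity, Mb/s (assumed)
BITRATES = [1.0, 2.0, 4.0]   # available encodings, Mb/s (assumed)

def pick_bitrate(estimate):
    """Rate-based adaptation: highest bitrate not above the estimate."""
    feasible = [b for b in BITRATES if b <= estimate]
    return max(feasible) if feasible else BITRATES[0]

def simulate(steps=8):
    """When the two ON periods overlap, each player measures half the
    link rate; when they do not, each measures the full rate. A purely
    rate-based player therefore oscillates between bitrates."""
    choices = []
    for step in range(steps):
        overlap = (step % 2 == 0)
        measured = BOTTLENECK / 2 if overlap else BOTTLENECK
        choices.append(pick_bitrate(measured))
    return choices

print(simulate())  # oscillates: [2.0, 4.0, 2.0, 4.0, ...]
```

A buffer-aware adapter would damp this oscillation, which is one reason the ON-OFF pattern, rather than the adaptation logic alone, matters here.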


Interactions Between HTTP Adaptive Streaming and TCP [Presentation]

Jairo Esteban, Steven Benno, Andre Beck, Yang Guo, Volker Hilt, Ivica Rimac (Alcatel-Lucent Bell Labs)

Abstract: HTTP adaptive streaming (HAS) is quickly becoming a popular mechanism for delivering on-demand video content over the Internet. Its chunked transmission and application-layer adaptation create a very different traffic pattern from traditional progressive video downloads, where the entire video is downloaded with a single request. In this paper, we experimentally investigate the interplay between HAS and the Transmission Control Protocol (TCP). We investigate the impact of network delay on achievable throughput and discover that HAS streams cannot fully utilize the available bandwidth due to the start-and-stop nature of HAS traffic patterns and their interaction with TCP. We investigate TCP pacing as a potential solution, particularly for packet losses that occur as a result of bursting packets into the network at the start of a transmission. We find that pacing can significantly increase a TCP flow's congestion window, but this does not necessarily translate into higher throughput. Instead, we find that packet losses at the end of a chunk transmission have a greater impact on throughput.
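The pacing idea the abstract examines can be sketched as a send schedule: an un-paced sender bursts a whole congestion window back to back, while a paced sender spaces packets roughly RTT/cwnd apart. This is a minimal illustration of pacing in general, not the paper's implementation; the packet counts and timings are arbitrary.

```python
def burst_schedule(num_pkts):
    """Un-paced sender: the whole window leaves back to back at t=0."""
    return [0.0] * num_pkts

def paced_schedule(num_pkts, rtt_ms, cwnd):
    """Paced sender: packets spaced rtt/cwnd apart, smoothing the
    start-of-chunk burst that the paper identifies as a loss source."""
    gap = rtt_ms / cwnd
    return [i * gap for i in range(num_pkts)]

# With a 100 ms RTT and a window of 10 packets, a 4-packet chunk start:
print(burst_schedule(4))           # [0.0, 0.0, 0.0, 0.0]
print(paced_schedule(4, 100, 10))  # [0.0, 10.0, 20.0, 30.0]
```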


SmartTransfer: Transferring Your Mobile Multimedia Contents at the "Right" Time [Presentation]

Yichuan Wang, Xin Liu (University of California, Davis), Angela Nicoara (Deutsche Telekom R&D Laboratories USA), Ting-An Lin (National Tsing Hua University), Cheng-Hsin Hsu (National Tsing Hua University)

Abstract: Today's mobile Internet is heavily overloaded by the increasing demand and capability of mobile devices, in particular by multimedia traffic. However, not all traffic is created equal, and a large portion of multimedia content on the mobile Internet is delay tolerant. We study the problem of capitalizing on content transfer opportunities under better network conditions by postponing transfers without violating user-specified deadlines. We propose a new framework called SmartTransfer, which offers a unified content transfer interface to mobile applications. We also develop two scheduling algorithms to opportunistically schedule content transfers. Via extensive trace-driven simulations, we show that our algorithms outperform a baseline scheduling algorithm by a wide margin: up to a 17-fold improvement in upload throughput and/or up to a 20 dBm gain in signal strength. The simulation results also reveal various tradeoffs between the two proposed scheduling algorithms. We have implemented our framework and one of the scheduling algorithms on Android to demonstrate their practicality and efficiency.
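A minimal sketch of deadline-constrained opportunistic scheduling in the spirit of SmartTransfer (the paper's actual two algorithms are not reproduced here): given a predicted signal-strength trace, pick the transfer window with the strongest average signal that still finishes before the deadline. The slot-based model and the function name are assumptions for illustration.

```python
def best_slot(signal_dbm, duration, deadline):
    """Pick the start slot whose transfer window of `duration` slots
    has the strongest average signal, finishing by the deadline.
    signal_dbm: predicted signal strength per slot (this sketch
    assumes such a predictor exists)."""
    best_start, best_avg = None, None
    for start in range(0, deadline - duration + 1):
        avg = sum(signal_dbm[start:start + duration]) / duration
        if best_avg is None or avg > best_avg:
            best_start, best_avg = start, avg
    return best_start

# Signal improves in the middle of the trace: transfer then, not now.
print(best_slot([-90, -70, -60, -80, -95], duration=2, deadline=4))  # 1
```

Transferring in a stronger-signal window shortens the radio's active time, which is where the energy saving comes from.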

Session 2: Cloud and Middleware Support

2:00 p.m. — 3:30 p.m.

Session Chair: Bernard Wong (University of Waterloo)

Quiver: A Middleware for Distributed Gaming [Presentation]

Giuseppe Reina (Eurecom, France), Ernst Biersack (Eurecom, France), Christophe Diot (Technicolor, France)

Abstract: Massively multiplayer online games have become popular in recent years. Scaling with the number of users is challenging due to the low-latency requirements of these games. Peer-to-peer techniques naturally address the scalability issues, at the expense of additional complexity to maintain consistency among players. We design and implement Quiver, a middleware that allows an existing game to be played in peer-to-peer mode with minimal changes to the engine. Quiver achieves scalability by distributing the game state, and consistency by keeping that state synchronized among all players. We have built a working prototype of Quake II using Quiver. We analyze the changes necessary to Quake II and discuss how generic software like Quiver can be.


Cloud Transcoder: Bridging the Format and Resolution Gap between Internet Videos and Mobile Devices [Presentation] [Demo]

Zhenhua Li (Peking University), Yan Huang, Gang Liu, Fuchen Wang (Tencent Research), Zhi-Li Zhang (University of Minnesota), Yafei Dai (Peking University)

Abstract: Despite its increasing popularity, Internet video streaming to mobile devices remains challenging. In particular, there is a format and resolution "gap" between Internet videos and mobile devices, so mobile users have a strong demand for transcoding videos to suit their specific devices. However, video transcoding is a computation-intensive task and is greatly challenged by the limited battery capacity of mobile devices. In this paper we propose and implement "Cloud Transcoder", which utilizes an intermediate cloud platform to bridge the "gap" via its special and practical designs. Specifically, Cloud Transcoder only requires the user to upload a video request rather than the video content. After receiving the request, Cloud Transcoder downloads the original video from the Internet, transcodes it on the user's demand, and transfers the transcoded video back to the user at a high data rate via intra-cloud data transfer acceleration. Therefore, the mobile device only consumes energy in the last step: quickly retrieving the transcoded video from the cloud. Running logs of our deployed system confirm the efficacy of Cloud Transcoder.


A Content Replication Scheme for Wireless Mesh Networks [Presentation]

Zakwan Al-Arnaout, Qiang Fu, Marcus Frean (Victoria University of Wellington, New Zealand)

Abstract: Wireless Mesh Networks (WMNs) extend Internet access to areas where wired infrastructure is not available. Problems that arise are congestion around gateways, high access latency, and low throughput. Object replication and placement is therefore essential for multi-hop wireless networks. Many replication schemes have been proposed for the Internet, but they are designed for CDNs that have both high bandwidth and high server capacity, which makes them unsuitable for the wireless environment. Object replication has received comparatively little attention from the research community when it comes to WMNs. In this paper, we propose an object replication and placement scheme for WMNs in which each mesh router acts as a replica server in a peer-to-peer fashion. Our scheme exploits graph partitioning to build a hierarchy from fine-grained to coarse-grained partitions. The challenge is to replicate content as close as possible to the requesting clients, and thus reduce the access latency per object, while minimizing the number of replicas. Using simulation tests, we demonstrate that our scheme is scalable, performing well with respect to the number of replica servers and the number of objects. The proposed scheme gives improved performance in terms of convergence time, throughput, hop count, and hit ratio.

Session 3: Multiview and Panoramic Video

4:00 p.m. — 5:00 p.m.

Session Chair: Zimu Liu (University of Toronto)

Evaluation of Distribution of Panoramic Video Sequences in the eXplorative Television Project [Presentation]

Peter Quax, Panagiotis Issaris, Wouter Vanmontfort, Wim Lamotte (Expertise Center for Digital Media / Hasselt University)

Abstract: In this paper, a scalable solution is presented for distributing panoramic video sequences to multiple viewers at high resolution and quality levels. In contrast to traditional broadcast scenarios, panoramic video enables the content consumer to manipulate the camera view direction and viewport size. By segmenting the panoramic input video into a set of separate sequences, transporting them over standard delivery channels, and recombining them at the end-user side, bandwidth utilization is optimized and the quality of the visualized video is increased. The proposed solution, called the segmentation approach, is thoroughly explained and evaluated against a single-stream solution with regard to several metrics, including bandwidth utilization, encoding speed, objective quality levels, and seeking performance.


Collaborative View Synthesis for Interactive Multi-view Video Streaming [Presentation]

Fei Chen, Jiangchuan Liu (Simon Fraser University), Edith Cheuk-Han Ngai (Uppsala University)

Abstract: Interactive multi-view video enables users to enjoy video from different viewpoints. Yet multi-view dramatically increases the video data volume and the associated computation, making real-time transmission and interaction challenging. It therefore calls for efficient view synthesis strategies that flexibly generate visual views. In this paper, we present a collaborative view synthesis strategy for online interactive multi-view video streaming based on Depth-Image-Based Rendering (DIBR), which generates a visual view from the texture and depth information on both sides. Different from the traditional DIBR algorithm for single-view synthesis, we explore the collaboration between the syntheses of different viewpoints for generating a range of visual views, and propose Shift DIBR (S-DIBR). In S-DIBR, only the projected pixels, rather than all the pixels of the reference view, are used to generate the next visual view. Therefore, the computational complexity of the projection transform, the most computation-intensive step in the traditional DIBR algorithm, is reduced to meet the requirements of online interactive streaming. Experimental results validate the efficiency of our collaborative view synthesis strategy, as well as the bandwidth scalability of the streaming system.

June 8, 2012, Friday

Debates Room, Hart House

Keynote Address 2

Professor Wu-chi Feng, Portland State University

9:00 a.m. — 10:00 a.m.

Streaming Media Evolution: Where to now? [Presentation]

Chair: Baochun Li, University of Toronto

Abstract: The notion of streaming media has been around for nearly two decades. Having grown from a research novelty to consuming over half of all Internet bandwidth, streaming media is now commonplace. This keynote address will give my perspective on the history and evolution of streaming media and the lessons learnt. Since the introduction of digital video in the early 1990s, our community has spent significant effort on the development of scalable and adaptive mechanisms for streaming media. We have witnessed techniques ranging from interactive multimedia streaming and stored video streaming to peer-to-peer streaming and remote tele-presence. Several common themes have emerged in these technologies, and as we move forward as a community, this keynote will offer some thoughts and observations on applying lessons learnt to future technologies, along with a handful of areas where streaming media can make a significant impact beyond where we are today.

Session 4: Streaming II

10:30 a.m. — 12:00 p.m.

Session Chair: Pal Halvorsen (University of Oslo, Norway)

On Tile Assignment for Region-of-Interest Video Streaming in a Wireless LAN [Presentation]

Ravindra Guntur, Wei Tsang Ooi (National University of Singapore)

Abstract: We consider the following problem in this paper: a video is encoded as a set of tiles T and is streamed to multiple users via a one-hop wireless LAN. Each user selects a region-of-interest (RoI) in the video to watch, represented as a subset of T. The RoIs selected by the users may overlap. Each tile may be multicast or unicast. We define the tile assignment problem as: which subset of tiles should be multicast such that every user receives, within a transmission deadline, the subset of tiles pertaining to the RoI the user selected, while minimizing the number of unwanted tiles received by users. We present and evaluate five tile assignment methods. We show that: (i) minimizing transmission delay can lead to significant wasteful reception in the multicast group, (ii) using tile access probability to assign tiles frequently leads to assignments that violate the deadline, and (iii) a fast, greedy heuristic works well: it performs close to the optimal method and can always find an assignment within the deadline (as long as such an assignment exists).
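A greedy heuristic for this kind of tile assignment might look as follows (a sketch of the general idea, not necessarily the heuristic evaluated in the paper): start with all tiles unicast, which wastes nothing, and while the transmission-slot budget (the deadline) is exceeded, convert the tile with the most requesters to multicast, since that saves the most slots per unit of unwanted reception. The cost model, in which every user receives every multicast tile, is a simplifying assumption.

```python
def assign_tiles(requests, num_users, deadline):
    """Greedy multicast/unicast split. requests: tile -> set of user
    ids wanting it; deadline: transmission-slot budget. Returns
    (multicast tiles, slots used, unwanted receptions), or None if no
    assignment meets the deadline. Assumes every user overhears every
    multicast tile (a simplification)."""
    multicast = set()
    slots = sum(len(users) for users in requests.values())  # all-unicast
    waste = 0
    by_popularity = sorted(requests, key=lambda t: len(requests[t]),
                           reverse=True)
    i = 0
    while slots > deadline:
        if i >= len(by_popularity):
            return None          # even all-multicast misses the deadline
        tile = by_popularity[i]
        i += 1
        slots -= len(requests[tile]) - 1  # one multicast replaces unicasts
        waste += num_users - len(requests[tile])
        multicast.add(tile)
    return multicast, slots, waste

# Three users; tile 'a' is wanted by all, so multicasting it wastes nothing.
print(assign_tiles({'a': {1, 2, 3}, 'b': {1}, 'c': {2, 3}}, 3, deadline=4))
```

This captures the trade-off in the abstract: a tighter deadline forces more multicast, and multicasting unpopular tiles is what produces wasteful reception.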


Minimizing Server Throughput for Low-Delay Live Streaming in Content Delivery Networks [Presentation]

Fen Zhou (Telecom Bretagne), Shakeel Ahamad (De Montfort University), Eliya Buyukkaya (Telecom Bretagne), Raouf Hamzaoui (De Montfort University), Gwendal Simon (Telecom Bretagne)

Abstract: Large-scale live streaming systems can experience bottlenecks within the infrastructure of the underlying Content Delivery Network. In particular, the "equipment bottleneck" occurs when the fan-out of a machine does not allow the concurrent transmission of a stream to multiple other devices. In this paper, we aim to deliver a live stream to a set of destination nodes with minimum throughput at the source and a limited increase in streaming delay. We leverage rateless codes and cooperation among destination nodes. With rateless codes, a node is able to decode a video block of k information symbols after receiving slightly more than k encoded symbols. To deliver the encoded symbols, we use multiple trees in which inner nodes forward all received symbols. Our goal is to build a diffusion forest that minimizes the transmission rate at the source while guaranteeing on-time delivery and reliability at the nodes. When the network is assumed to be lossless and the constraint on delivery delay is relaxed, we give an algorithm that computes a diffusion forest resulting in the minimum source transmission rate. We also propose an effective heuristic algorithm for the general case where packet loss occurs and the delivery delay is bounded. Simulation results for realistic settings show that with our solution the source requires only slightly more than the video bit rate to reliably feed all nodes.
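The rateless-code property the abstract relies on, decoding k information symbols from slightly more than k encoded symbols, can be demonstrated with a random linear fountain over GF(2). This is a stand-in for whatever rateless code the paper actually uses: each encoded packet is the XOR of a random subset of the source symbols, and the receiver recovers the source by Gaussian elimination once the received combinations reach full rank.

```python
import random

def encode(symbols, n, seed=1):
    """Random linear fountain over GF(2): each encoded packet is the
    XOR of a random nonzero subset of the k source symbols; the subset
    is carried as a bitmask."""
    rng = random.Random(seed)
    k = len(symbols)
    packets = []
    for _ in range(n):
        mask = rng.randrange(1, 1 << k)
        val = 0
        for i in range(k):
            if mask >> i & 1:
                val ^= symbols[i]
        packets.append((mask, val))
    return packets

def decode(packets, k):
    """Gaussian elimination over GF(2). Returns the k source symbols,
    or None if the received packets do not yet have full rank."""
    basis = {}                       # pivot bit -> (mask, value)
    for mask, val in packets:
        while mask:
            pivot = mask.bit_length() - 1
            if pivot not in basis:
                basis[pivot] = (mask, val)
                break
            bmask, bval = basis[pivot]
            mask ^= bmask
            val ^= bval
    if len(basis) < k:
        return None
    for p in sorted(basis):          # back-substitute, lowest pivot first
        mask, val = basis[p]
        for b in range(p):
            if mask >> b & 1:
                bmask, bval = basis[b]
                mask ^= bmask
                val ^= bval
        basis[p] = (mask, val)       # now mask == 1 << p
    return [basis[p][1] for p in range(k)]
```

With k = 4 source symbols, decoding typically succeeds once a handful more than k packets have arrived; fewer than k packets can never suffice, which is why inner nodes forwarding all received symbols is enough for reliability.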


To Chunk or Not to Chunk: Implications for HTTP Streaming Video Server Performance [Presentation]

Jim Summers, Tim Brecht (University of Waterloo), Derek Eager (University of Saskatchewan), Bernard Wong (University of Waterloo)

Abstract: Large amounts of Internet streaming video traffic are being delivered using HTTP to leverage the existing web infrastructure. A fundamental issue in HTTP streaming concerns the granularity of video objects used throughout the HTTP ecosystem (including clients, proxy caches, CDN nodes, and servers). At one extreme, a video may be divided into many files (called chunks), each containing only a few seconds of video; at the other, it may be stored in a single unchunked file. In this paper, we describe the pros and cons of using chunked and unchunked videos. We then describe a methodology for fairly comparing the performance implications of video object granularity at web servers. We find that with conventional servers (userver, nginx, and Apache) there is little performance difference between the two approaches. However, by aggressively prefetching and sequentializing disk accesses in the userver, we are able to obtain up to double the throughput when serving requests for unchunked videos compared with chunked videos (even while performing the same aggressive prefetching with chunked videos). These results indicate that more research is required to ensure that the HTTP ecosystem can handle this important and rapidly growing workload.

Session 5: Content Sharing

1:30 p.m. — 2:30 p.m.

Session Chair: Wu-chi Feng (Portland State University)

Content and Geographical Locality in User-Generated Content Sharing Systems [Presentation]

Kévin Huguenin (EPFL), Anne-Marie Kermarrec (INRIA), Konstantinos Kloudas (INRIA), Francois Taïani (Lancaster University)

Abstract: User Generated Content (UGC), such as YouTube videos, accounts for a substantial fraction of Internet traffic. To optimize their performance, UGC services usually rely on both proactive and reactive approaches that exploit spatial and temporal locality in access patterns. Alternative types of locality are also relevant but are hardly ever considered together. In this paper, we show on a large YouTube dataset (more than 650,000 videos) that content locality (induced by the related-videos feature) and geographic locality are in fact correlated. More specifically, we show how the geographic view distribution of a video can be inferred to a large extent from that of its related videos. We leverage these findings to propose a UGC storage system that proactively places videos close to the expected requests. Compared to a caching-based solution, our system decreases by 16% the number of requests served from a different country than that of the requesting user, and even in those cases, the distance between the user and the server is 29% shorter on average.
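The inference step described above, estimating a video's geographic view distribution from its related videos, can be sketched as a simple average of the related videos' per-country distributions. The paper's actual estimator may weight or filter the related videos; the uniform mean below is an assumption for illustration.

```python
def infer_geo(related_dists):
    """Estimate a video's per-country view distribution as the plain
    mean of its related videos' distributions, renormalized.
    related_dists: list of dicts mapping country -> view fraction."""
    countries = set().union(*(d.keys() for d in related_dists))
    est = {c: sum(d.get(c, 0.0) for d in related_dists) / len(related_dists)
           for c in countries}
    total = sum(est.values())
    return {c: v / total for c, v in est.items()}

# Two related videos, both watched mostly from the US:
print(infer_geo([{'US': 0.8, 'FR': 0.2}, {'US': 0.6, 'FR': 0.4}]))
```

A placement system can then put the video in the country with the largest estimated share before the first request arrives.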


Video Sharing in Online Social Network: Measurement and Analysis [Presentation]

Haitao Li, Haiyang Wang, Jiangchuan Liu (Simon Fraser University)

Abstract: Online social networks (OSNs) have become popular destinations for connecting with friends and sharing information. Recent statistics suggest that OSN users regularly share content from video sites, and a significant share of the video sites' requests now indeed comes from OSNs. These behaviors have substantially changed the workload of online video services. To better understand this paradigm shift, we conduct a long-term and extensive measurement of video sharing in RenRen, the largest Facebook-like OSN in China. In this paper, we focus on the video popularity distribution and its evolution. In particular, we find that the video popularity distribution exhibits a clean power-law shape (whereas videos in YouTube exhibit a power-law waist with a long truncated tail). Moreover, we observe that requests for newly published videos generally take two or three days to reach their peak, and then fluctuate with a series of unpredictable bursts (whereas in YouTube, videos reach their global peak immediately after introduction to the system, and accesses then generally decrease over time, except possibly on some special days). These differences raise new challenges for content providers. For example, video popularity is now hard to predict from historical requests. We further develop a simple yet effective model to simulate the user request process across videos in OSNs. Trace-driven simulation shows that it captures the observed features well.
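The contrast the abstract draws, a pure power law versus a truncated one, can be explored with a small synthetic workload: draw requests where the video at popularity rank r is chosen with probability proportional to r^-alpha, then fit the log-log slope. This is a generic Zipf-style generator for illustration, not the model proposed in the paper.

```python
import math
import random

def zipf_requests(n_videos, alpha, n_requests, seed=1):
    """Draw requests where the video at rank r (1-based) is chosen
    with probability proportional to r**-alpha; returns per-video
    request counts."""
    rng = random.Random(seed)
    weights = [r ** -alpha for r in range(1, n_videos + 1)]
    counts = [0] * n_videos
    for v in rng.choices(range(n_videos), weights=weights, k=n_requests):
        counts[v] += 1
    return counts

def fit_slope(counts):
    """Least-squares slope of log(count) versus log(rank): close to
    -alpha for a power law; a truncated tail would bend the curve."""
    pts = [(math.log(r + 1), math.log(c))
           for r, c in enumerate(counts) if c > 0]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

For alpha = 1 and enough requests, the fitted slope lands near -1 across all ranks, the pure power-law signature the paper reports for RenRen; a YouTube-like truncated tail would pull the high-rank end of the curve down.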

Panel Discussion

2:30 p.m. — 3:30 p.m.

Pal Halvorsen (University of Oslo, Norway) [Presentation]
Wu-chi Feng (Portland State University) [Presentation]
Yan Huang (Tencent Inc.) [Presentation]
Tim Brecht (University of Waterloo) [Presentation]

Chair: Baochun Li (University of Toronto)

Session 6: Video Compression

4:00 p.m. — 5:30 p.m.

Session Chair: Quang Minh Khiem Ngo (National University of Singapore)

Sensor-Assisted Camera Motion Analysis and Motion Estimation Improvement for H.264/AVC Video Encoding [Presentation]

Guanfeng Wang, Haiyang Ma, Beomjoo Seo, Roger Zimmermann (National University of Singapore)

Abstract: Camera motion information helps to infer higher-level semantic descriptions in many video applications, e.g., video retrieval. However, an efficient methodology for annotating camera motion information has remained elusive. Here we propose a novel and efficient approach for partitioning a video document into sub-shots and characterizing their camera motion. By leveraging location (GPS) and digital compass data, which are available on most current smartphone handsets, we exploit the geographical sensor information to detect transitions between two sub-shots based on variations in both the camera location and the shooting direction. The advantage of our method lies in its considerable accuracy. Additionally, the computational efficiency of our scheme enables it to be deployed on mobile devices and to process videos while recording. We use this capability to show how the HEX motion estimation algorithm in the H.264/AVC encoder can be simplified with the aid of our camera motion information. Our experimental results show that we can reduce the computation of the HEX algorithm by up to 50% while achieving comparable video quality.
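The sensor-based transition detection described above can be sketched as a simple rule: start a new sub-shot when the compass heading has turned, or the GPS position has moved, beyond a threshold since the current sub-shot began. The thresholds and the flat-coordinate distance model below are illustrative assumptions, not the paper's parameters.

```python
import math

def split_subshots(samples, heading_thresh=30.0, dist_thresh=10.0):
    """Start a new sub-shot when the compass heading has turned more
    than heading_thresh degrees, or the camera has moved more than
    dist_thresh metres, since the current sub-shot began. samples:
    (time, heading_deg, x_m, y_m), with GPS already projected to a
    local metric plane. Returns the indices of sub-shot boundaries."""
    boundaries = [0]
    _, h0, x0, y0 = samples[0]
    for i, (_, h, x, y) in enumerate(samples[1:], start=1):
        turn = abs((h - h0 + 180.0) % 360.0 - 180.0)  # smallest angle
        moved = math.hypot(x - x0, y - y0)
        if turn > heading_thresh or moved > dist_thresh:
            boundaries.append(i)
            h0, x0, y0 = h, x, y
    return boundaries

# A pan of ~90 degrees at sample 2 splits the clip into two sub-shots.
print(split_subshots([(0, 0, 0, 0), (1, 5, 1, 0), (2, 90, 2, 0), (3, 92, 3, 0)]))  # [0, 2]
```

Because this works on a handful of sensor readings per second rather than on pixels, it is cheap enough to run on the phone while recording, which is the property the encoder integration exploits.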


CAME: Cloud-Assisted Motion Estimation for Mobile Video Compression and Transmission [Presentation]

Yuan Zhao, Lei Zhang, Xiaoqiang Ma, Jiangchuan Liu (Simon Fraser University), Hongbo Jiang (Huazhong University of Science and Technology)

Abstract: Video streaming has become one of the most popular networked applications and, with the increased bandwidth and computation power of mobile devices, anywhere-and-anytime streaming has become a reality. Unfortunately, compressing high-quality video in real time on such devices remains a challenging task given the excessive computation and energy demands of compression. On the other hand, transmitting raw video is simply unaffordable from both energy and bandwidth perspectives. In this paper, we propose CAME, a novel cloud-assisted video compression method for mobile devices. CAME leverages abundant cloud server resources for motion estimation, which is known to be the most computation-intensive step in video compression, accounting for over 90% of the computation time. With CAME, a mobile device selects and uploads only the key information of each picture frame to cloud servers for mesh-based motion estimation, eliminating most of the local computation. We develop smart algorithms to identify the key mesh nodes, resulting in minimum distortion and data volume for uploading. Our simulation results demonstrate that CAME saves almost 30% of the energy for video compression and transmission.


Understanding the Impact of Inter-Lens and Temporal Stereoscopic Video Compression [Presentation]

Wu-chi Feng, Feng Liu (Portland State University)

Abstract: As we move toward more ubiquitous stereoscopic video, particularly with multiple (> 2) lenses, the need to understand the efficiency of compression will become increasingly important. In this paper, we explore the impact of spatial (between lenses) and temporal (over time) compression for stereoscopic video images. In particular, because stereoscopic images are taken at the same time, there is expected to be a high correlation between pixels in the horizontal direction due to the fixed nature of the multiple lenses. We propose a vertically reduced search window in order to take advantage of this correlation. Starting with multiple stereoscopic video sequences shot using a production studio 3D camera, we explore the effectiveness of temporal and inter-lens motion compensation for stereoscopic video. Furthermore, the experiments use exhaustive search to remove the effects of heuristic-based motion-compensation techniques.
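The vertically reduced search window described above can be made concrete with exhaustive (full-search) block matching using the sum of absolute differences. The block and window sizes below are assumed values for illustration, not the paper's settings: shrinking the vertical half-window wy exploits the expectation that inter-lens displacement between horizontally offset lenses is mostly horizontal.

```python
def sad(block, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between an n x n block at (bx, by)
    and the candidate displaced by (dx, dy) in the reference frame."""
    return sum(abs(block[j][i] - ref[by + dy + j][bx + dx + i])
               for j in range(n) for i in range(n))

def best_vector(block, ref, bx, by, wx=8, wy=8, n=8):
    """Exhaustive block matching over a +/-wx by +/-wy window; returns
    (cost, dx, dy). For inter-lens prediction, shrinking wy (even to 0)
    restricts the search to near-horizontal displacements."""
    best = None
    for dy in range(-wy, wy + 1):
        for dx in range(-wx, wx + 1):
            if not (0 <= by + dy and by + dy + n <= len(ref)
                    and 0 <= bx + dx and bx + dx + n <= len(ref[0])):
                continue              # candidate falls outside the frame
            cost = sad(block, ref, bx, by, dx, dy, n)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best
```

On a synthetic gradient frame whose block is shifted two pixels horizontally, both the full window and a zero-height window recover (dx, dy) = (2, 0) at zero cost, while the reduced window evaluates only a fraction of the candidates.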

Closing Remarks

Baochun Li (University of Toronto), Co-chair

Welcome to NOSSDAV 2013

Pal Halvorsen (University of Oslo, Norway)

Maintained by the NOSSDAV 2012 organizing committee.