PhD thesis abstracts

March 2010


Aleksandra Kovacevic

Peer-to-Peer Location-based Search: Engineering a Novel Peer-to-Peer Overlay Network

Personalization is a significant feature of Internet services, and exploiting the user's location adds the most value to it. Location-based services have a wide range of applications, from emergency, tracking, and navigation services to informational and entertainment services. In existing centrally managed solutions, the results of location-based search are often incomplete or outdated. Additional information about the searched object (e.g. the menu, facilities, prices) is usually not available, as such a huge amount of data and such frequent updates (e.g. the number of free places in a restaurant) would overload the server. In a peer-to-peer solution, each peer is responsible for the information about the object it represents, so updating and publishing information is done directly, without a single point of failure. Because peer-to-peer systems operate at low cost, a wide community can join and publish its services. The main goal of this thesis is to prove the feasibility of engineering a peer-to-peer solution for fully retrievable location-based search. Following an engineering approach, we first examine the most widely used and cited overlays of different types (unstructured, structured, and hybrid). A comparative evaluation identifies the influence of their design decisions on quality aspects such as efficiency, scalability, robustness, and stability. The design of our solution is founded on the findings of this study. The resulting overlay, Globase.KOM, is a structured superpeer-based overlay in the form of a tree enhanced with interconnections. Superpeers are chosen from publicly reachable, static peers with more capacity, spare bandwidth, and good network connectivity. The world projection is divided into non-overlapping rectangular zones. Each zone is assigned to a superpeer located inside it, which is responsible for all peers in the zone. The superpeers form a tree based on the subset relation of their zones.
A further contribution is a clear methodology for evaluating peer-to-peer search overlays, defining metrics and workloads that address all crucial quality properties. Additionally, in order to model realistic workloads, we discuss the differences between user behavior in popular file-sharing applications and in VoIP applications such as Skype. As an evaluation tool, we select the simulation framework PeerfactSim.KOM and extend it to support various geographical distributions of peers on a world map and a location-aware churn model. The evaluation results prove the efficiency, good load balancing, scalability, robustness, and stability of the system. Query resolution is significantly faster than in related solutions. Additionally, the location awareness of the overlay results in an efficient mapping of the logical overlay onto the physical underlay, which reduces total transmission delay and unnecessary underlay traffic. Although uneven load distribution might be expected from the tree structure of the overlay, we show very good load balance thanks to interconnections and careful zone formation, which together diminish the expected overload of superpeers at higher tree levels. Our solution scales logarithmically with growing network size. It proves robust and stable under simultaneous failures of superpeers at higher tree levels or in the same branch of the tree, and under frequent querying. The biggest influence on the quality of our solution comes from the choice of identifier space, its management, and the interconnections. A peer identifier encodes both the peer's responsibility and its location, which allows smart selection of interconnections and efficient greedy routing. Interconnections enable queries to bypass superpeers at higher levels of the tree and therefore allow an equal load distribution among the superpeers.
A fast recovery time and small performance variation under extreme churn and critical failures are credited to the various maintenance strategies used in combination. The simulation results are confirmed by a prototype, and its applicability is shown in examples of distributed multimedia communication and future peer-to-peer-based collaborative applications.
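The abstract's core routing idea can be illustrated with a small sketch. This is not the Globase.KOM implementation; the class names, the hop limit, and the "closest zone center" metric are assumptions made for illustration. It shows how a query for a coordinate is forwarded greedily over tree links and interconnections until the superpeer whose zone contains the target is reached.

```python
class Zone:
    """A non-overlapping rectangular zone of the world projection."""
    def __init__(self, lon_min, lon_max, lat_min, lat_max):
        self.lon_min, self.lon_max = lon_min, lon_max
        self.lat_min, self.lat_max = lat_min, lat_max

    def contains(self, lon, lat):
        return (self.lon_min <= lon < self.lon_max and
                self.lat_min <= lat < self.lat_max)

    def center(self):
        return ((self.lon_min + self.lon_max) / 2,
                (self.lat_min + self.lat_max) / 2)


class Superpeer:
    """Responsible for all peers located inside its zone."""
    def __init__(self, name, zone):
        self.name = name
        self.zone = zone
        self.children = []          # superpeers of sub-zones (tree links)
        self.interconnections = []  # extra links that bypass higher tree levels


def greedy_route(start, lon, lat, max_hops=32):
    """Forward a query toward the superpeer responsible for (lon, lat)."""
    node = start
    for _ in range(max_hops):
        # Responsible superpeer: its zone contains the target and no
        # child sub-zone does.
        if node.zone.contains(lon, lat) and not any(
                c.zone.contains(lon, lat) for c in node.children):
            return node

        candidates = node.children + node.interconnections
        if not candidates:
            return None

        # Greedy step: pick the link whose zone center is nearest the target.
        def dist_sq(peer):
            cx, cy = peer.zone.center()
            return (cx - lon) ** 2 + (cy - lat) ** 2

        node = min(candidates, key=dist_sq)
    return None
```

In this sketch an interconnection lets a leaf superpeer forward a query sideways to a distant zone without going through the root, which is the mechanism the abstract credits for the even load distribution.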

Advisor(s): First referee: Prof. Dr.-Ing. Ralf Steinmetz

SIG MM member(s): Prof. Dr.-Ing. Ralf Steinmetz

ISBN number: urn:nbn:de:tuda-tuprints-18931



Andreas Petlund

Improving latency for interactive, thin-stream applications over reliable transport

A large number of network services use IP and reliable transport protocols. For applications with a constant flow of data, loss is handled satisfactorily, even when the application is latency-sensitive. For applications whose data streams consist of intermittently sent small packets, however, users experience extreme latencies more frequently. Because such thin-stream applications are commonly interactive and time-dependent, increased delay may severely reduce the experienced quality of the service. When TCP is used for thin-stream applications, episodes of highly increased latency are common, caused by the way retransmissions are handled. Other transport protocols deployed in the Internet, like SCTP, model their congestion control and reliability on TCP, as do many frameworks that provide reliability on top of unreliable transport. We have tested several application- and transport-layer solutions, and based on our findings, we propose sender-side enhancements that reduce application-layer latency in a manner that is compatible with unmodified receivers running commodity operating systems like Linux, *BSD, Mac OS X, and Windows. We have implemented the mechanisms as modifications to the Linux kernel, for both TCP and SCTP. The mechanisms are triggered dynamically, so they are only active when the kernel identifies the stream as thin. To evaluate the performance of our modifications, we have conducted a wide range of experiments using replayed thin-stream traces captured from real applications, as well as artificially generated thin-stream data patterns. From the experiments, effects on latency, redundancy, and fairness were evaluated. The analysis of the experiments shows great improvements in latency for thin streams when the modifications are applied. Surveys in which users evaluated their experience of several applications using the modified transport mechanisms confirmed the improvements seen in the statistical analysis.
The positive effects of our modifications were shown to be achievable without notable effects on fairness for competing streams. We therefore conclude that it is advisable to handle thin streams separately, using our modifications, when transmitting over reliable protocols, in order to reduce retransmission latency.
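Why thin streams suffer such extreme latencies can be seen with a back-of-the-envelope sketch. This is not the kernel code from the thesis; the 200 ms minimum retransmission timeout and the loss counts are illustrative assumptions. A thin stream rarely has enough packets in flight to trigger fast retransmit, so every loss tends to cost a full retransmission timeout, and standard TCP doubles that timeout after each consecutive loss.

```python
def backoff_delay(rto_ms, losses, linear=False):
    """Cumulative waiting time across `losses` consecutive retransmission
    timeouts of the same segment.

    Standard TCP uses exponential backoff (the timer doubles after every
    timeout); keeping the timer constant (linear timeouts) bounds the
    worst-case latency for a thin stream.
    """
    if linear:
        return rto_ms * losses             # RTO + RTO + RTO + ...
    return rto_ms * (2 ** losses - 1)      # RTO + 2*RTO + 4*RTO + ...


# With an assumed 200 ms minimum RTO and 4 consecutive losses of the same
# segment: exponential backoff accumulates 200 + 400 + 800 + 1600 = 3000 ms
# before the fifth attempt, while linear timeouts accumulate 4 * 200 = 800 ms.
```

For a bulk stream the exponential growth is acceptable because fast retransmit usually recovers the loss first; for an interactive thin stream it directly translates into multi-second application-layer stalls, which is what the sender-side enhancements above target.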

Advisor(s): Carsten Griwodz (supervisor), Pål Halvorsen (supervisor), Kang Li (opponent), Anna Brunstrom (opponent), Michael Welzl (opponent)

SIG MM member(s): Carsten Griwodz, Pål Halvorsen, Kang Li, Anna Brunstrom


Multimedia Performance Group (MPG), Simula Research Laboratory / University of Oslo

The Media Performance Group investigates the means for improving the performance of distributed, time-dependent multimedia systems. In the context of an application, the group seeks solutions by exploring, understanding and improving a particular bottleneck using an experimental approach. Improvements are found in better operating systems mechanisms, programming tools, processing environments, protocols, distributed architectures, digital media formats or a better understanding of people's perception of media in a context.

This thesis addresses a core research area in MPG, the important requirement of timely data delivery for thin-stream applications like games and VoIP. Transport protocol modifications have been implemented in the Linux kernel, and their effect on latency is studied at the transport, application, and user levels.

Cheng-Hsin Hsu

Efficient Mobile Multimedia Streaming

Modern mobile devices have evolved into small computers that can render streamed multimedia content anywhere and anytime. These devices extend users' viewing time and provide more business opportunities for service providers. Mobile devices, however, are a challenging platform for providing high-quality multimedia services. The goal of this thesis is to identify these challenges from various aspects, and to propose efficient and systematic solutions to them. In particular, we study mobile video broadcast networks in which a base station concurrently transmits multiple video streams over a shared air medium to many mobile devices. We propose algorithms to optimize various quality-of-service metrics, including streaming quality, bandwidth efficiency, energy saving, and channel switching delay. We analyze the proposed algorithms analytically, and we evaluate them using numerical methods and simulations. In addition, we implement the algorithms in a real testbed to show their practicality and efficiency. Our analytical, simulation, and experimental results indicate that the proposed algorithms can: (i) maximize the energy saving of mobile devices, (ii) maximize the bandwidth efficiency of the wireless network, (iii) minimize channel switching delays on mobile devices, and (iv) efficiently support heterogeneous mobile devices. Last, we give network operators guidelines on choosing solutions suitable for their mobile broadcast networks, which allows them to provide millions of mobile users with much better viewing experiences, attract more subscribers, and thus increase revenues.
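The energy-saving metric in broadcast networks typically comes from time slicing: the base station sends each stream in short, high-rate bursts, so a receiver can switch its radio off between bursts. The following sketch is an illustrative model only, not an algorithm from the thesis; the function name, the per-burst overhead, and the example numbers are assumptions.

```python
def energy_saving(stream_kbps, burst_kbps, period_s, overhead_s=0.1):
    """Fraction of time a receiver's radio can sleep under time-sliced bursts.

    Each period, the stream's data is sent in one burst at the (much higher)
    air rate, plus a fixed wake-up/synchronization overhead per burst.
    """
    if stream_kbps > burst_kbps:
        raise ValueError("stream rate must not exceed the burst (air) rate")
    on_time_s = period_s * stream_kbps / burst_kbps + overhead_s
    return max(0.0, 1.0 - on_time_s / period_s)


# Example with assumed numbers: a 512 kbps stream broadcast in 10,000 kbps
# bursts once per second, with 100 ms overhead per burst, lets the radio
# sleep about 85% of the time.
```

The model also makes the trade-offs in the abstract concrete: a longer period raises energy saving but increases channel switching delay, since a device joining a channel must wait for that channel's next burst.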

Advisor(s): Dr. Joseph Peters, Chair; Dr. Mohamed Hefeeda, Senior Supervisor; Dr. Ramesh Krishnamurti, Supervisor; Dr. Funda Ergun, SFU Examiner; Dr. Charles Krasic, External Examiner

SIG MM member(s): Mohamed Hefeeda, Charles Krasic


Network Systems Lab at SFU

The Network Systems Lab at Simon Fraser University is led by Dr. Mohamed Hefeeda and is affiliated with the Network Modeling Group at Simon Fraser University. We are interested in the broad areas of computer networking and multimedia systems. We develop algorithms and protocols to enhance the performance of networks, especially the Internet, and to efficiently distribute multimedia content (e.g., video and audio objects) to large-scale user communities.

Our current research interests include multimedia networking, peer-to-peer (P2P) systems, wireless sensor networks, network security, and high performance computing. Brief descriptions are given below.

In multimedia networking, we are focusing on distributed streaming in dynamic environments and for heterogeneous clients. Our goal is to analyze and understand scalable coding techniques, and to design several optimization and streaming algorithms to make the best possible use of them in real multimedia systems. This will yield better quality for users, and more efficient utilization of network and server resources. We are also designing algorithms to optimize streaming quality for wireless and mobile clients.

In P2P systems, we are exploring the applicability of the P2P paradigm to build cost-effective content distribution systems. Problems such as sender selection, adaptive object replication, and content caching are being studied. We are also developing models to analyze the new characteristics of the P2P traffic and the impact of these characteristics on the cache replacement policies and object replication strategies. Furthermore, we are devising analytic models to study the dynamics of the P2P system capacity and the impact of various parameters on it.

In network security, we are exploring network monitoring techniques to detect and thwart intrusion and denial-of-service attacks in their early stages by observing unusual traffic patterns injected by such attacks. We are studying the security of multimedia streaming systems that employ multi-layer and fine-grain scalable video streams.

In high performance computing, we are exploring the opportunities of utilizing new architectures such as GPUs, multi-core processors, and distributed clusters (cloud computing) to efficiently solve research problems related to multimedia content analysis, large-scale data analysis, and machine learning techniques.

Zeljko Vrba

Implementation and performance aspects of Kahn process networks - An investigation of problem modeling, implementation techniques, and scheduling strategies

The appearance of commodity multi-core processors has spawned wide interest in parallel programming, which is widely regarded as more challenging than sequential programming. Existing distributed processing frameworks like MapReduce and Dryad are designed for large batch workloads and fail to efficiently support cyclic workloads with deadlines. In this respect, Kahn Process Networks (KPNs) are a model of concurrency that relies exclusively on message passing, and that has some advantages over parallel programming tools in wide use today: simplicity, graphical representation, and determinism.

In this thesis, we investigate the applicability of KPNs to implementing general-purpose parallel computations on multi-core machines. In particular, we investigate 1) how KPNs can be used for modeling general-purpose problems; 2) how an efficient KPN run-time can be implemented; and 3) which KPN scheduling strategies give good run-time performance. For these purposes, we have developed Nornir, an efficient run-time system for executing KPNs. With Nornir, we show that it is possible to develop a high-performance KPN run-time for multi-core machines. We experimentally demonstrate that problems expressed in the Kahn model closely resemble their sequential implementations, yet perform much better than when expressed in the MapReduce model, which has become widely recognized as a simple parallel programming model. Lastly, we use Nornir to evaluate several load-balancing methods: static assignment, work stealing, our improvement of work stealing, and a method based on graph partitioning. The understanding gained from this evaluation is significant not only in the context of the Kahn model, but also in the more general context of load balancing (potentially distributed) applications written in a message-passing style.
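The determinism that the abstract highlights follows from the Kahn model's two rules: processes communicate only through FIFO channels, and reads block until data is available. The following is a minimal Python sketch of that model, not Nornir itself; the process names and the end-of-stream token convention are assumptions, and a real run-time like Nornir would use far cheaper scheduling than one OS thread per process.

```python
import queue
import threading


def producer(out_ch):
    for i in range(5):
        out_ch.put(i)
    out_ch.put(None)  # end-of-stream token


def doubler(in_ch, out_ch):
    # Blocking reads from a single channel: the defining Kahn restriction.
    while (v := in_ch.get()) is not None:
        out_ch.put(2 * v)
    out_ch.put(None)


def run_network():
    a, b = queue.Queue(), queue.Queue()  # unbounded FIFO channels
    results = []

    def consumer(in_ch):
        while (v := in_ch.get()) is not None:
            results.append(v)

    procs = [threading.Thread(target=producer, args=(a,)),
             threading.Thread(target=doubler, args=(a, b)),
             threading.Thread(target=consumer, args=(b,))]
    for t in procs:
        t.start()
    for t in procs:
        t.join()
    return results  # same output no matter how the threads are scheduled
```

Because no process can poll or select between channels, the output is a function of the inputs alone; the scheduler influences only performance, never results. This is exactly what makes scheduling strategies (static assignment, work stealing, graph partitioning) a pure performance question in the thesis.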

Advisor(s): Pål Halvorsen (supervisor), Carsten Griwodz (supervisor), Prashant Shenoy (opponent), Lasse Natvig (opponent), Tor Skeie (opponent)

SIG MM member(s): Pål Halvorsen, Carsten Griwodz, Prashant Shenoy


Multimedia Performance Group (MPG), Simula Research Laboratory / University of Oslo

The Media Performance Group investigates the means for improving the performance of distributed, time-dependent multimedia systems. In the context of an application, the group seeks solutions by exploring, understanding and improving a particular bottleneck using an experimental approach. Improvements are found in better operating systems mechanisms, programming tools, processing environments, protocols, distributed architectures, digital media formats or a better understanding of people's perception of media in a context.

In the context of MPG, efficient processing of workloads, especially multimedia workloads, is an important area for supporting the timeliness of multimedia applications. This thesis describes a new distributed processing framework that improves on the state of the art by supporting cyclic and iterative workloads.
