
Loopback: Exploiting Collaborative Caches for Large-Scale Streaming

Ewa Kusmierek, Yingfei Dong, Member, IEEE, and David H. C. Du, Fellow, IEEE

Outline

- Abstract
- Related work
- Client collaboration with loopback
- Loopback analytical model
- Local repair mechanism enhancing reliability
- Conclusion and future work

Abstract (1/2)

- Two-level streaming architecture:
  - A content delivery network (CDN) delivers video from the central server to proxy servers.
  - A proxy server delivers video with the help of client caches.
- Design features:
  - Loopback approach
  - Local repair scheme

Abstract (2/2)

- Objectives:
  - Reduce the required network bandwidth
  - Reduce the load on the central server
  - Reduce the cache space required at a proxy
  - Address the client failure problem

Outline

Abstract Related work Client collaboration with loopback Loopback analytical model Local repair mechanism enhancing

reliability Conclusion and future work

Related work: P2Cast

- A session is formed by clients arriving close in time.
- Data is delivered over an application-level forwarding tree of peers rooted at the server.

Related work: CDN-P2P hybrid architecture

- Data is divided into fractions.
- A client may receive a video stream from multiple peers.
- A client needs to cache an entire video.


Outline

- Abstract
- Related work
- Client collaboration with loopback
- Loopback analytical model
- Local repair mechanism enhancing reliability
- Conclusion and future work

Basic assumptions for a client

- Each client dynamically caches a portion of a video; its storage space is limited.
- A client delivers only one stream at a time, only during its own video playback and for a short period after playback ends.
- A client may fail or choose to leave while delivering video data to its peers.

Basic assumptions for the proxy

- Storage space is limited.
- Bandwidth is limited.
- The prefix of a video is cached by the proxy server.

Forwarding Ring (1/3)

- Clients arriving close to each other in time form a forwarding ring:
  - The first client receives data from the proxy.
  - The last client returns data to the proxy.
- The first client receives the video prefix from the proxy and the remaining portion of the video from the central server.

Forwarding Ring (2/3)

- If the next client joins in time:
  - The buffered data is streamed to the newcomer.
  - Frames that have already been transmitted are removed from the buffer.
- If the next request does not arrive in time:
  - The oldest frames are passed back to the proxy and evicted from the buffer.
  - The late newcomer starts a new loop.

Forwarding Ring (3/3)

- The proxy does not keep a copy of a frame after transmitting it to a client.
- If the demand is high:
  - There are a few long loops containing many clients.
  - The entire video may be cached by the clients.
  - The proxy only needs to forward one stream to each loop and receive one stream from each loop.
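
The join rule on these slides can be made concrete with a small sketch (Python, for illustration only; the paper gives no code). It only shows the decision made when a client arrives: the newcomer extends the current loop if it arrives while the tail client's buffer still covers the needed data, otherwise the oldest frames loop back to the proxy and the newcomer starts a new loop. The names `Loop`, `handle_arrival`, and `buffer_seconds` are illustrative assumptions, not the paper's notation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Loop:
    """One forwarding ring, as a list of client arrival times (illustrative)."""
    arrival_times: List[float] = field(default_factory=list)


def handle_arrival(loops: List[Loop], t_new: float, buffer_seconds: float) -> Loop:
    """Extend the current loop or start a new one for a client arriving at t_new.

    Assumption of this sketch: each client buffers the most recent
    `buffer_seconds` of video it has played, so a newcomer can join only if it
    arrives within that window of the current tail client's arrival.
    """
    if loops and t_new - loops[-1].arrival_times[-1] <= buffer_seconds:
        # In time: the tail client streams its buffered frames to the newcomer;
        # frames already forwarded are dropped from the tail client's buffer.
        loops[-1].arrival_times.append(t_new)
        return loops[-1]
    # Too late: the oldest frames of the previous loop are passed back to the
    # proxy (the "loopback"), and the late newcomer starts a new loop.
    new_loop = Loop(arrival_times=[t_new])
    loops.append(new_loop)
    return new_loop
```

For example, with `buffer_seconds = 60` and arrivals at t = 0, 40, and 130, the first two clients form one loop and the third client starts a second loop.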

Outline

- Abstract
- Related work
- Client collaboration with loopback
- Loopback analytical model
- Local repair mechanism enhancing reliability
- Conclusion and future work

Loopback analytical model

- Analyze the resource usage at the proxy and the central server load due to a single video under a given client arrival process.
- Notation:
  - buffer size at each client
  - arrival time of the i-th client
  - storage space of the proxy, as a fraction of the video (between 0 and 1)

Aggregate Loop Buffer Space

Data available locally

Proxy Buffer Space Utilization

Proxy I/O bandwidth usage

Central server load
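
The preceding slides name the quantities derived in the model: aggregate loop buffer space, data available locally, proxy buffer space utilization, proxy I/O bandwidth usage, and central server load. As a rough illustration only, the sketch below estimates some of them for a concrete arrival trace under simplifying assumptions (unit playback rate, each client buffering the most recent stretch it has played, one proxy stream into and out of each loop); it is not the paper's analytical model, and all names are illustrative.

```python
from typing import Dict, List


def loopback_estimate(arrivals: List[float], client_buffer: float,
                      video_len: float, prefix_len: float) -> Dict[str, float]:
    """Back-of-the-envelope estimates for one video (illustrative assumptions).

    arrivals      -- client arrival times for this video
    client_buffer -- seconds of video each client can cache
    video_len     -- video length in seconds
    prefix_len    -- length of the prefix cached by the proxy
    """
    # Group arrivals into loops: a client joins the current loop if the gap to
    # the previous client is at most the client buffer size.
    loops: List[List[float]] = []
    for t in sorted(arrivals):
        if loops and t - loops[-1][-1] <= client_buffer:
            loops[-1].append(t)
        else:
            loops.append([t])

    # Aggregate loop buffer space: clients in a loop jointly hold a sliding
    # window spanning their arrival gaps plus the tail client's own buffer,
    # capped at the full video (a long loop may cache the entire video).
    aggregate_buffer = sum(min(loop[-1] - loop[0] + client_buffer, video_len)
                           for loop in loops)

    return {
        "num_loops": float(len(loops)),
        "aggregate_loop_buffer": aggregate_buffer,
        # The proxy forwards one stream into each loop and receives one back.
        "proxy_io_streams": 2.0 * len(loops),
        # Each loop head fetches the portion beyond the cached prefix from the
        # central server.
        "server_streams": float(len(loops)) if prefix_len < video_len else 0.0,
    }
```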

Outline

- Abstract
- Related work
- Client collaboration with loopback
- Loopback analytical model
- Local repair mechanism enhancing reliability
- Conclusion and future work

Client failure effect

- Lost data has to be obtained from the central server, incurring delays.
- A failure may affect succeeding clients in a loop.
- The higher the demand, the larger the influence of a failure on performance.

Address this issue with redundant caching schemes, which:
- significantly reduce the server load;
- shorten the repair delay caused by transmitting missing data.

Complete-local and partial-local repair

Additional load saved by local repairs
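
The repair algorithms themselves are not detailed on these slides. As a hedged sketch of the general idea only: after a client fails, the lost frames are first sought in redundant copies held by surviving clients in the same loop (complete-local repair recovers everything locally, partial-local repair only part of it), and the remainder comes from the central server. The function and parameter names below are illustrative, not the paper's.

```python
from typing import Dict, List, Set


def plan_repair(missing: Set[int],
                redundant_caches: List[Set[int]]) -> Dict[str, Set[int]]:
    """Split frames lost to a client failure into locally repairable frames and
    frames that must come from the central server (illustrative sketch).

    missing          -- frame numbers the failed client was supposed to deliver
    redundant_caches -- frame sets redundantly cached by surviving loop members
    """
    from_peers: Set[int] = set()
    for cache in redundant_caches:
        from_peers |= cache & missing

    from_server = missing - from_peers
    # Complete-local repair: from_server is empty, so no extra server load.
    # Partial-local repair: only part of the loss is covered locally, which
    # still reduces server load and shortens the repair delay.
    return {"from_peers": from_peers, "from_server": from_server}
```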

Outline

- Abstract
- Related work
- Client collaboration with loopback
- Loopback analytical model
- Loopback performance for multiple videos
- Local repair mechanism enhancing reliability
- Conclusion and future work

Conclusion

- Loopback: a mechanism for exploiting client collaboration in a two-level video streaming architecture.
- Improves resource usage:
  - Server network bandwidth and I/O bandwidth
  - Proxy network bandwidth and I/O bandwidth
  - Proxy storage space
- Analyzed the effect of client failures and developed local repair approaches.

Future work

- Allow a varying amount of resources to be committed by each client:
  - Each client can specify how much disk space may be used.
  - Based on its network bandwidth, each client can decide how many clients it wants to serve, and for what period of time.

