Transcript
Page 1: Periodic Broadcast and Patching Services - Implementation, Measurement, and Analysis in an Internet Streaming Video Testbed

Michael K. Bradshaw, Bing Wang, Subhabrata Sen, Lixin Gao, Jim Kurose, Prashant Shenoy, and Don Towsley

ACM Multimedia 2001

Page 2: Introduction

Multimedia streaming places significant load on both server and network resources. Multicast-based approaches: Batching, Periodic Broadcast, and Patching.

Issues: control/signaling overhead, the interaction between disk and CPU scheduling, and multicast join/leave times.

Page 3: Batching

The server batches requests that arrive close together in time and multicasts the stream to the set of batched clients. A drawback is that client playback latency increases as more client requests are aggregated.
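
The trade-off can be made concrete with a minimal scheduler sketch (illustrative only, not the testbed's implementation; the `Batcher` class and the 10-second `BATCH_WINDOW` are invented for the example): a longer window lets more clients share one stream, but the first client in a batch waits longer.

```python
import time

BATCH_WINDOW = 10.0  # hypothetical batching window in seconds (not from the paper)

class Batcher:
    """Collect requests for the same video that arrive close together in time."""

    def __init__(self):
        self.pending = {}  # video -> (batch open time, list of waiting clients)

    def request(self, video, client, now=None):
        now = time.time() if now is None else now
        if video not in self.pending:
            # First request opens a batch; its client waits until the window closes.
            self.pending[video] = (now, [client])
        else:
            self.pending[video][1].append(client)

    def flush(self, now=None):
        """Close expired batches: one multicast stream serves all batched clients."""
        now = time.time() if now is None else now
        for video, (opened, clients) in list(self.pending.items()):
            if now - opened >= BATCH_WINDOW:
                print(f"start multicast of {video} to {len(clients)} clients; "
                      f"first client waited {now - opened:.1f}s")
                del self.pending[video]
```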

Page 4: Periodic Broadcast

The server divides a video object into multiple segments and continuously broadcasts the segments over a set of multicast addresses. Earlier portions are broadcast more frequently than later ones to limit playback startup latency. Clients simultaneously listen to multiple addresses, storing future segments for later playback.

Page 5: Patching (stream tapping)

The server streams the entire video sequentially to the very first client. Client-side workahead buffering allows a later-arriving client to receive its future playback data by listening to an existing, ongoing transmission of the same video. The server need only additionally transmit those earlier frames that the later-arriving client missed.
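
A minimal sketch of what this means for a single late arrival (illustrative; `patch_plan` is an invented helper, not the paper's code):

```python
def patch_plan(client_arrival, full_stream_start):
    """What a later-arriving client receives under patching.

    The client immediately joins the ongoing multicast of the same video,
    buffering those (future) frames, while the server additionally sends only
    the prefix the client missed, whose length equals the client's lateness.
    """
    offset = client_arrival - full_stream_start  # seconds of video already multicast
    return {
        "join_ongoing_multicast": True,   # future frames, workahead-buffered
        "patch_length_seconds": offset,   # missed prefix, transmitted separately
    }

# A client arriving 40 s after the full stream began needs only a 40 s patch.
print(patch_plan(client_arrival=100.0, full_stream_start=60.0))
```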

Page 6: Server and Client Architecture

Page 7: Server Architecture

Server Control Engine (SCE): one listener thread, a pool of free scheduler threads, and one transmission schedule per video.

Server Data Engine (SDE): a global buffer cache manager, a disk thread (DT) with round length δ, and a network thread (NT) with round length τ.
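
A rough producer/consumer sketch of how a disk-thread round and a network-thread round interleave (purely illustrative; the queue, the round lengths, and the per-frame granularity are assumptions, not the SDE's actual code):

```python
import collections
import threading
import time

DELTA = 1.0    # assumed disk-thread (DT) round length, seconds
TAU = 0.033    # assumed network-thread (NT) round length, ~one 30 fps frame time

cache = collections.deque()        # stands in for the global buffer cache
cache_lock = threading.Lock()
stop = threading.Event()

def disk_thread():
    """Each DELTA-round, read from disk the frames needed in the next round."""
    frame_no = 0
    while not stop.is_set():
        with cache_lock:
            for _ in range(int(DELTA / TAU)):      # one round's worth of frames
                cache.append(f"frame-{frame_no}")
                frame_no += 1
        time.sleep(DELTA)

def network_thread():
    """Each TAU-round, take a frame from the cache and send it (here, a no-op)."""
    while not stop.is_set():
        with cache_lock:
            frame = cache.popleft() if cache else None
        if frame is not None:
            pass   # an RTP send to the video's multicast address would go here
        time.sleep(TAU)

threading.Thread(target=disk_thread, daemon=True).start()
threading.Thread(target=network_thread, daemon=True).start()
time.sleep(2.0)
stop.set()
```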

Page 8: Schedule Data Structure

Page 9: Signaling between Server and Client

Page 10: Testbed (1)

100 Mbps switched Ethernet LAN. Three machines (server, workload generator, and client), each with a Pentium-II 400 MHz CPU and 400 MB RAM, running Linux. The workload generator produces a background load of client requests according to a Poisson process and logs the timing information for the requests it issues.
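
Poisson request arrivals are straightforward to generate from exponential interarrival gaps; a small sketch (illustrative, not the workload generator's code; the rate and seed are example values):

```python
import random

def poisson_request_times(rate_per_minute, duration_minutes, seed=None):
    """Request arrival times (seconds) drawn from a Poisson process,
    i.e. independent exponentially distributed interarrival gaps."""
    rng = random.Random(seed)
    rate_per_second = rate_per_minute / 60.0
    horizon = duration_minutes * 60.0
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_second)
        if t >= horizon:
            break
        times.append(t)
    return times

# Example: one minute of the 600 requests/minute load used in the PB experiments.
arrivals = poisson_request_times(rate_per_minute=600, duration_minutes=1, seed=1)
print(f"{len(arrivals)} requests generated in the first minute")
```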

Page 11: Testbed (2)

Periodic broadcast scheme: L. Gao, J. Kurose, and D. Towsley, "Efficient schemes for broadcasting popular videos" (the Greedy Disk-conserving Broadcasting (GDB) segmentation scheme).

l-GDB: the initial segment is l seconds long; segment i has length 2^(i-1) * l seconds, for 1 <= i <= ceil(log2(L/l)) where L is the video length, with the final segment truncated so that the segment lengths sum to L.
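
A short sketch that reproduces this doubling-then-truncate pattern (the function is written from the description above, not taken from the paper's code):

```python
def gdb_segments(l, L):
    """Segment lengths (seconds) for l-GDB over a video of length L seconds:
    segment i has length 2**(i-1) * l, and the final segment is truncated so
    that the lengths sum to L."""
    segments, remaining, i = [], L, 1
    while remaining > 0:
        size = min((2 ** (i - 1)) * l, remaining)
        segments.append(size)
        remaining -= size
        i += 1
    return segments

# Matches the 3-GDB row of the table on the next slide for the ~15 min Blade2
# video (the table reports 134.5 s for the truncated last segment).
print(gdb_segments(3, 900))   # [3, 6, 12, 24, 48, 96, 192, 384, 135]
```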

Page 12: Testbed (3) - Sample Videos for the Experiments

Video    Format   Length (min)   Frame rate (fps)   Bandwidth (Mbps)   File size (MB)   # of RTP pkts
Blade1   MPEG-1   12             30                 1.99               180.1            155146
Blade2   MPEG-1   15             30                 3                  337              296706
Demo     MPEG-2   2.7            30                 2                  40.6             35138

GDB segmentation of the 3 Mbps, 15 min MPEG-1 Blade2 video:

Scheme   Segs.   Segment lengths (sec)
3-GDB    9       3, 6, 12, 24, 48, 96, 192, 384, 134.5 (768)
10-GDB   7       10, 20, 40, 80, 160, 320, 270.9 (640)
30-GDB   5       30, 60, 120, 240, 450.9 (480)

Page 13: Testbed (4)

Patching algorithm: L. Gao and D. Towsley, "Supplying instantaneous video-on-demand services using controlled multicast" (the threshold-based controlled multicast scheme).

When the client arrival rate for a video is Poisson with parameter λ and the length of the video is L seconds, the threshold is chosen to be (sqrt(2Lλ + 1) - 1)/λ seconds.
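
Plugging in the testbed's numbers gives a feel for the threshold; a small sketch (the helper name and the example request rate are ours, not the paper's):

```python
from math import sqrt

def patching_threshold(L, lam):
    """Patching threshold in seconds from the slide's formula
    (sqrt(2*L*lam + 1) - 1) / lam, with L the video length in seconds and
    lam the Poisson request rate in requests per second."""
    return (sqrt(2.0 * L * lam + 1.0) - 1.0) / lam

# Example (assumed workload): the 15 min Blade2 video at 5 requests per minute.
L = 15 * 60          # 900 s
lam = 5 / 60.0       # requests per second
print(f"threshold = {patching_threshold(L, lam):.1f} s")   # ~135.5 s
# Clients arriving within the threshold of the latest full stream get a patch;
# a later arrival triggers a new full multicast of the video.
```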

Page 14: Performance Metrics

Server side: System Read Load (SRL), Server Network Throughput (SNT), Deadline Conformance Percentage (DCP)

Client side: Client Frame Interarrival Time (CFIT), Reception Schedule Latency (RSL)
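
As an illustration of the client-side metric, CFIT can be computed from a trace of per-frame arrival timestamps; a small sketch (the input format and bin width are assumptions, not the measurement code):

```python
def cfit_histogram(frame_arrival_times, bin_ms=5):
    """Histogram of Client Frame Interarrival Times: gaps (in ms) between
    consecutive frame arrivals at the client, grouped into bin_ms-wide bins."""
    hist = {}
    for a, b in zip(frame_arrival_times, frame_arrival_times[1:]):
        gap_ms = (b - a) * 1000.0
        bucket = int(gap_ms // bin_ms) * bin_ms
        hist[bucket] = hist.get(bucket, 0) + 1
    return hist

# A smooth 30 fps stream concentrates its mass around the 33 ms bin.
print(cfit_histogram([0.000, 0.033, 0.066, 0.101, 0.134]))   # {30: 3, 35: 1}
```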

Page 15: Caching Implications (1) - PB

Page 16: Caching Implications (2) - Patching

Page 17: Caching Implications (3)

SRL for patching and 10-GDB with LFU caching

Page 18: Component Benchmarks

Config   # Videos   # Addresses per Video   Bandwidth per Video   NT Completion Time   DT Completion Time
I        3          8                       16 Mbits              1.60 ms / 33 ms      6.16 ms / 1 sec
II       1          24                      48 Mbits              5.08 ms / 33 ms      8.39 ms / 1 sec

Page 19: End-End Performance (1) - PB

Client Frame Interarrival Time (CFIT) histogram under 3-GDB, 10-GDB, and 30-GDB at 600 requests per minute.

Page 20: End-End Performance (2) - Patching

Request Rate     Network Load        CFIT                    DCP
1 per minute     20.85 Mbps          Similar to the 30-GDB   99.9%
5 per minute     55.27 Mbps          Similar to the 30-GDB   99.9%
Higher rates     Bottleneck occurs   -                       -

Page 21: Scheduling Among Videos

Page 22: Conclusions

Network bandwidth, rather than server resources, is likely to be the bottleneck (PB: 600 requests per minute; patching: fully loading a 100 Mbps network).

An initial client startup delay of less than 1.5 sec is sufficient to handle startup signaling and absorb data jitter. Dramatic reductions in server disk load can be gained via application-level data caching using an LFU replacement policy.
