CSE 535 – Mobile Computing Lecture 8: Data Dissemination Sandeep K. S. Gupta School of Computing and Informatics Arizona State University
Transcript
Page 1

CSE 535 – Mobile Computing

Lecture 8: Data Dissemination

Sandeep K. S. Gupta

School of Computing and Informatics

Arizona State University

Page 2

Data Dissemination

Page 3

Communications Asymmetry

Network asymmetry: in many cases, downlink bandwidth far exceeds uplink bandwidth

Client-to-server ratio: large client population, few servers

Data volume: small requests for info, large responses; again, downlink bandwidth matters more

Update-oriented communication: updates likely affect a number of clients

Page 4

Disseminating Data to Wireless Hosts

Broadcast-oriented dissemination makes sense for many applications

Can be one-way or with feedback

Examples: sports, stock prices, new software releases (e.g., Netscape), chess matches, music, election coverage, weather…

Page 5

Dissemination: Pull

Pull-oriented dissemination can run into trouble when demand is extremely high: web servers crash and bandwidth is exhausted

[Figure: many clients pulling data from a single overloaded server ("help!")]

Page 6

Dissemination: Push

Server pushes data to clients: no need to ask for data

Ideal for broadcast-based media (wireless)

[Figure: the server pushes data to many clients in one broadcast ("Whew!")]

Page 7

Broadcast Disks

[Figure: server with data blocks 1–6 queued for broadcast]

Schedule of data blocks to be transmitted

Page 8

Broadcast Disks: Scheduling

[Figure: a round-robin schedule broadcasts blocks 1 2 3 4 5 6 in turn; a priority schedule broadcasts hot blocks more often, e.g., 1 1 2 1 3 1]

Page 9

Priority Scheduling (2)

Random: randomize the broadcast schedule, broadcasting "hotter" items more frequently

Periodic: create a schedule that broadcasts hotter items more frequently… …but the schedule is fixed

The "Broadcast Disks: Data Management…" paper uses the periodic approach

Simplifying assumptions:

Data is read-only

Schedule is computed once and doesn't change… means access patterns are assumed to stay the same

Allows mobile hosts to sleep…

Page 10

"Broadcast Disks: Data Management…"

Order pages from "hottest" to coldest

Partition into ranges ("disks"); pages in a range have similar access probabilities

Choose a broadcast frequency for each "disk"

Split each disk into "chunks":

maxchunks = LCM(relative frequencies)
numchunks(J) = maxchunks / relativefreq(J)

Broadcast program is then:

for I = 0 to maxchunks - 1
  for J = 1 to numdisks
    Broadcast(C(J, I mod numchunks(J)))
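A minimal executable sketch of the chunk-interleaving loop above, in Python (the disk layout, page numbering, and function names are illustrative assumptions, not the paper's code):

    from math import lcm  # Python 3.9+

    def broadcast_program(disks, rel_freq):
        """One major cycle of a broadcast-disk program.
        disks    -- disks[j] is the list of pages on disk j (hottest disk first)
        rel_freq -- relative broadcast frequency of each disk"""
        max_chunks = lcm(*rel_freq)
        chunks = []
        for pages, freq in zip(disks, rel_freq):
            n = max_chunks // freq                     # numchunks(J)
            size = -(-len(pages) // n)                 # ceil: pages per chunk
            chunks.append([pages[k * size:(k + 1) * size] for k in range(n)])
        schedule = []
        for i in range(max_chunks):                    # for I = 0 .. maxchunks-1
            for j in range(len(disks)):                # for J = 1 .. numdisks
                schedule.extend(chunks[j][i % len(chunks[j])])
        return schedule

    # Three disks with relative frequencies 4, 2, 1 (as on the next slide):
    print(broadcast_program([[1], [2, 3], [4, 5, 6, 7]], [4, 2, 1]))
    # -> [1, 2, 4, 1, 3, 5, 1, 2, 6, 1, 3, 7]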

Page 11

Sample Schedule, From Paper

Relative frequencies: 4, 2, 1

Page 12

Broadcast Disks: Research Questions

From Vaidya:

How to determine the demand for various information items?

Given demand information, how to schedule the broadcast?

What happens if there are transmission errors?

How should clients cache information? A user might want a data item immediately after its transmission…

Page 13

Hot For You Ain't Hot for Me

Hottest data items are not necessarily the ones most frequently accessed by a particular client

Access patterns may have changed

Higher priority may be given to other clients

Might be the only client that considers this data important…

Thus: need to consider not only probability of access (standard caching), but also broadcast frequency

A bug in the soup: hot items are more likely to be cached! (Reduce their frequency?)

Page 14

Broadcast Disks Paper: Caching

Under traditional caching schemes, we usually want to cache the "hottest" data

What to cache with broadcast disks?

Hottest? Probably not; that data will come around again soon!

Coldest? Ummmm… not necessarily…

Cache data with access probability significantly higher than its broadcast frequency

Page 15

Caching, Cont.

PIX algorithm (Acharya): eject the page from the local cache with the smallest value of:

(probability of access) / (broadcast frequency)

Means that pages that are more frequently accessed may still be ejected if they are expected to be broadcast frequently…
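A minimal sketch of the PIX ejection rule in Python (the data shapes and names are illustrative assumptions):

    def pix_victim(cache, access_prob, bcast_freq):
        """Return the cached page with the smallest
        access-probability / broadcast-frequency ratio."""
        return min(cache, key=lambda page: access_prob[page] / bcast_freq[page])

    # Page 'b' is accessed more often than 'a', but it is broadcast so
    # frequently that it is still the better eviction victim:
    cache = ['a', 'b']
    access_prob = {'a': 0.10, 'b': 0.30}
    bcast_freq = {'a': 1, 'b': 4}
    print(pix_victim(cache, access_prob, bcast_freq))  # -> 'b' (0.075 < 0.10)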

Page 16

Broadcast Disks: Issues

User profiles

Provide information about the data needs of particular clients

"Back channel" for clients to inform the server of needs: either advise the server of data needs… …or provide "relevance feedback"

Dynamic broadcast

Changing data values introduces interesting consistency issues

If processes read values at different times, are the values the same?

Simply guarantee that data items within a particular broadcast period are identical?

Page 17

Hybrid Push/Pull

"Balancing Push and Pull for Data Broadcast" (Acharya, et al SIGMOD '97)

"Pull Bandwidth" (PullBW) – portion of bandwidth dedicated to pull-oriented requests from clients

PullBW = 0%: "pure" push

Clients needing a page simply wait

PullBW = 100%: schedule is totally request-based

Page 18

Interleaved Push and Pull (IPP)

Mixes push and pull

Allows a client to send requests to the server for missed (or absent) data items

Broadcast disk transmits the program plus requested data items (interleaved)

Fixed threshold ThresPerc limits use of the back channel by a particular client

A client sends a pull request for page p only if the number of slots before p will be broadcast is greater than ThresPerc (see the sketch below)

ThresPerc is a percentage of the cycle length

Also controls server load: as ThresPerc approaches 100%, the server is protected
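A minimal sketch of the client-side IPP decision, assuming ThresPerc is expressed as a percentage of the broadcast cycle length (names are illustrative, not the paper's code):

    def should_pull(slots_until_broadcast, cycle_length, thres_perc):
        """Use the back channel only when the expected wait for the page,
        as a fraction of the cycle, exceeds the ThresPerc threshold."""
        return slots_until_broadcast > (thres_perc / 100.0) * cycle_length

    # With ThresPerc = 50% of a 1000-slot cycle, a page due in 700 slots is
    # pulled, while a page due in 300 slots is simply awaited.
    print(should_pull(700, 1000, 50), should_pull(300, 1000, 50))  # True False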

Page 19

CSIM-based Simulation

Measured Client (MC): the client whose performance is being measured

Virtual Client (VC): models the "rest" of the clients as a single entity… …chewing up bandwidth, making requests…

Assumptions:

Front channel and back channel are independent

Broadcast program is static (no dynamic profiles)

Data is read-only

Page 20

Simulation (1)

No feedback to clients!

Page 21

Simulation (2)

Can control the ratio of VC to MC requests

Noise controls the similarity of the access patterns of VC and MC; Noise == 0 means identical access patterns

PIX algorithm is used to manage the client cache

VC's access pattern is used to generate the broadcast (since VC represents a large population of clients)

Goal of the simulation is to measure tradeoffs between push and pull under broadcast

Page 22

Simulation (3)

CacheSize pages are maintained in a local cache

SteadyStatePerc models the number of clients in the VC population that have "filled" caches, i.e., their most important pages are already in cache

ThinkTimeRatio models the intensity of VC request generation relative to MC; a high ThinkTimeRatio means more activity on the part of the virtual clients

Page 23

Simulation (4)

Page 24

Experiment 1: Push vs. Pull

Important! PullBW set at 50% in 3a – if server's pull queue fills, requests are dropped!

At PullBW = 10%, the reduction in push bandwidth hurts push, yet is insufficient for pull requests!

server death!

Light loads: pull better

Page 25

Experiment 2: Cache Warmup Time for MC

Low server load: pull better. High server load: push better.

Page 26

Experiment 3: Noise: Are you (VC) like me (MC)?

On the left, pure push vs. pure pull. On the right, pure push vs. IPP


Page 27

Experiment 4: Limiting Greed

If there’s plenty of bandwidth, limiting greed isn’t a good idea

On the other hand…

Page 28

Experiment 5: Incomplete Broadcasts

Not all pages are broadcast; non-broadcast pages must be explicitly pulled. Lesson: must provide adequate bandwidth or response time will suffer!

In 7b, making clients wait longer before requesting helps…

Server overwhelmed—requests are being dropped!

Page 29

Incomplete Broadcast: More

Lesson: Careful! At high server loads with lots of pages not broadcast, IPP can be worse than push or pull!

Page 30

Experimental Conclusions

Light server load: pull better

Push provides a safety cushion in case a pull request is dropped, but only if all pages are broadcast

Limits on pull provide a safety cushion that prevents the server from being crushed

Broadcasting all pages can be wasteful, but must provide adequate bandwidth to pull omitted pages… otherwise, at high load, IPP can be worse than pull!

Overall: push and pull each tend to beat IPP in certain circumstances, but IPP tends to have reasonable performance over a wide variety of system loads…

Punchline: IPP is a good compromise in a wide range of circumstances

Page 31

Mobile Caching: General Issues

Mobile user/application issues:

Data access pattern (reads? writes?)

Data update rate

Communication/access cost

Mobility pattern of the client

Connectivity characteristics: disconnection frequency, available bandwidth

Data freshness requirements of the user

Context dependence of the information

Page 32

Mobile Caching (2)

Research questions:

How can client-side latency be reduced?

How can consistency be maintained among all caches and the server(s)?

How can we ensure high data availability in the presence of frequent disconnections?

How can we achieve high energy/bandwidth efficiency?

How to determine the cost of a cache miss and how to incorporate this cost into the cache management scheme?

How to manage location-dependent data in the cache?

How to enable cooperation between multiple peer caches?

Page 33

Mobile Caching (3)

Cache organization issues:

Where do we cache? (client? proxy? service?)

How many levels of caching do we use (in the case of hierarchical caching architectures)?

What do we cache (i.e., when do we cache a data item and for how long)?

How do we invalidate cached items? Who is responsible for invalidations?

What is the granularity at which invalidation is done?

What data currency guarantees can the system provide to users?

What are the (real $$$) costs involved? How do we charge users?

What is the effect on query delay (response time) and system throughput (query completion rate)?

Page 34

Weak vs. Strong Consistency

Strong consistency

The value read is the most current value in the system

Invalidation on each write can expire outdated values, but disconnections may cause loss of invalidation messages

Can also poll on every access, but polling is impossible if disconnected!

Weak consistency

The value read may be "somewhat" out of date

A TTL (time to live) is associated with each value

Can combine TTL with polling, e.g., background polling to update the TTL or retrieve a new copy of the data item if it is out of date (see the sketch below)
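A minimal sketch of a TTL-based weakly consistent cache in Python (the fetch callback, default TTL, and refresh-on-read policy are illustrative assumptions):

    import time

    class TTLCache:
        def __init__(self, fetch, ttl_seconds=60):
            self.fetch = fetch          # called on a miss or on an expired entry
            self.ttl = ttl_seconds
            self.entries = {}           # key -> (value, expiry time)

        def get(self, key):
            value, expires = self.entries.get(key, (None, 0.0))
            if time.time() >= expires:  # missing or stale: refetch while connected
                value = self.fetch(key)
                self.entries[key] = (value, time.time() + self.ttl)
            return value                # may be up to ttl_seconds out of date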

Page 35

Disconnected Operation

Disconnected operation is very desirable for mobile units

Idea: attempt to cache/hoard data so that when disconnections occur, work (or play) can continue

Major issues:

What data items (files) do we hoard?

When and how often do we perform hoarding?

How do we deal with cache misses?

How do we reconcile the cached version of the data item with the version at the server?

Page 36

One Slide Case Study: Coda

Coda: file system developed at CMU that supports disconnected operation

Cache/hoard files and resolve needed updates upon reconnection

Replicate servers to improve availability

What data items (files) do we hoard?

User selects and prioritizes; hoard walking ensures that the cache contains the "most important" stuff (see the sketch below)

When and how often do we perform hoarding?

Often, when connected
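A minimal sketch of a priority-driven hoard walk in Python (the hoard-database format, priorities, and sizes are illustrative assumptions, not Coda's actual implementation):

    def hoard_walk(hoard_db, cache_capacity):
        """Refill the cache with the highest-priority files that fit."""
        cache, used = [], 0
        for path, priority, size in sorted(hoard_db, key=lambda e: -e[1]):
            if used + size <= cache_capacity:
                cache.append(path)
                used += size
        return cache

    hoard_db = [("/src/thesis.tex", 900, 2),   # (path, user priority, size)
                ("/mail/inbox", 600, 5),
                ("/video/cat.mp4", 100, 40)]
    print(hoard_walk(hoard_db, cache_capacity=10))  # -> ['/src/thesis.tex', '/mail/inbox']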

Page 37

Coda (2)

(OK, two slides)

How do we deal with cache misses?

If disconnected, we cannot

How do we reconcile the cached version of the data item with the version at the server?

When connection is possible, can check before updating

When disconnected, use local copies

Upon reconnection, resolve updates

If there are hard conflicts, the user must intervene (i.e., it's manual and requires a human brain)

Coda reduces the cost of checking items for consistency by grouping them into volumes

If a file within one of these groups is modified, then the volume is marked modified and the individual files within it can be checked

Page 38

WebExpress

Housel, B. C., Samaras, G., and Lindquist, D. B., “WebExpress: A Client/Intercept Based System for Optimizing Web Browsing in a Wireless Environment,” Mobile Networks and Applications 3:419–431, 1998.

A system that intercepts web browsing, providing sophisticated caching and bandwidth-saving optimizations for web activity in mobile environments

Major issues (and the corresponding optimizations):

Disconnected operation

Verbosity of the HTTP protocol: perform protocol reduction

TCP connection setup time: try to re-use TCP connections

Low bandwidth in wireless networks: caching

Many responses from web servers are very similar to those seen previously: use differencing rather than returning complete responses, particularly for CGI-based interactions (see the sketch below)
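A small sketch of the differencing idea in Python (illustrative only; not the WebExpress wire format):

    import difflib

    # Both the client-side and network-side intercepts keep a cached "base"
    # response; only a delta against it crosses the wireless link.
    base = ["<html>", "<body>", "<h1>Account</h1>", "Balance: $100", "</body>", "</html>"]
    new  = ["<html>", "<body>", "<h1>Account</h1>", "Balance: $105", "</body>", "</html>"]

    delta = list(difflib.unified_diff(base, new, lineterm="", n=0))
    print(delta)   # only the changed lines plus a small header are transmitted;
                   # for realistic CGI responses the delta is far smaller than the page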

Page 39

WebExpress (2)

Two intercepts: both on client side and in the wired network

Caching on both client and on wired network + differencing

One TCP connection

Reduce redundant HTTP header info

Reinsert removed HTTP header info on server side

