
Quasirandom Rumor Spreading

Tobias Friedrich

Max-Planck-Institut für Informatik Saarbrücken


Rumor Spreading


Outline

Tobias Friedrich

Randomized Rumor Spreading

Deterministic Rumor Spreading

Quasirandom Rumor Spreading


Randomized Rumor Spreading

Model (on a graph G):
– Start: One node is informed.
– Each round, each informed node informs a neighbor chosen uniformly at random.
– Broadcast time T(G): number of rounds necessary to inform all nodes (maximum taken over all starting nodes).

Round 0: Starting node is informed.
Round 1: Starting node informs a random node.
Round 2: Each informed node informs a random node.
Round 3: Each informed node informs a random node.
Round 4: Each informed node informs a random node.
Round 5: Let's hope the remaining two get informed...
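This push protocol is easy to simulate. The following minimal Python sketch (mine, not from the talk; the graph representation and function name are illustrative assumptions) runs the fully randomized protocol on a graph given as adjacency lists and counts the rounds until everyone is informed.

    import random

    def randomized_rumor_spreading(adj, start):
        """Fully randomized push protocol.
        adj:   dict mapping each node to the list of its neighbors
        start: the initially informed node
        Returns the number of rounds until all nodes are informed."""
        informed = {start}
        rounds = 0
        while len(informed) < len(adj):
            rounds += 1
            # Every informed node contacts one uniformly random neighbor.
            informed |= {random.choice(adj[v]) for v in informed}
        return rounds

    # Example: broadcast time on the complete graph K_16, started at node 0.
    n = 16
    complete = {v: [u for u in range(n) if u != v] for v in range(n)}
    print(randomized_rumor_spreading(complete, start=0))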


Application:
– Broadcasting updates in distributed databases

The protocol is simple, robust, and self-organized.


Results [n: number of nodes]:
– T(G) ≥ log(n) for all graphs G
– T(K_n) = O(log(n)) w.h.p. [Frieze, Grimmett '85]
– T({0,1}^d) = O(log(n)) w.h.p. [Feige, Peleg, Raghavan, Upfal '90]
– T(G_{n,p}) = O(log(n)) w.h.p. for p > (1+ε) log(n)/n [Feige et al. '90]


Deterministic Rumor Spreading?

As above, but now with a Propp machine:
– Each node has a list of its neighbors.
– Informed nodes inform their neighbors in the order of this list.

Problem: Might take long...

Here: n−1 rounds.

Node 1: list 2 3 4 5 6
Node 2: list 3 4 5 6 1
Node 3: list 4 5 6 1 2
Node 4: list 5 6 1 2 3
Node 5: list 6 1 2 3 4
Node 6: list 1 2 3 4 5

On the complete graph K_6 with these lists, starting from node 1, all informed nodes contact the same not-yet-informed node in every round, so only one new node is informed per round.
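A small sketch of this deterministic variant (my own illustration, not the talk's code; names and bookkeeping are assumptions): each informed node works through its fixed list, one neighbor per round, and on the K_6 lists above it needs exactly n−1 = 5 rounds.

    def deterministic_rumor_spreading(lists, start):
        """Propp-machine style broadcast: node v contacts lists[v][0], lists[v][1], ...
        one neighbor per round, always beginning at the front of its list."""
        informed = {start}
        position = {start: 0}              # next list index of each informed node
        rounds = 0
        while len(informed) < len(lists):
            rounds += 1
            contacted = set()
            for v in informed:
                contacted.add(lists[v][position[v] % len(lists[v])])
                position[v] += 1
            for u in contacted - informed:
                position[u] = 0            # newcomers start at the front next round
            informed |= contacted
        return rounds

    # The K_6 example from the slide: every node's list starts right after itself.
    lists = {1: [2, 3, 4, 5, 6], 2: [3, 4, 5, 6, 1], 3: [4, 5, 6, 1, 2],
             4: [5, 6, 1, 2, 3], 5: [6, 1, 2, 3, 4], 6: [1, 2, 3, 4, 5]}
    print(deterministic_rumor_spreading(lists, start=1))   # prints 5 = n - 1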


Quasirandom Rumor Spreading

As above, except:
– Each node has a list of its neighbors.
– Informed nodes inform their neighbors in the order of this list, but start at a random position in the list.
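Compared with the deterministic sketch above, the only change is the random starting position. Again a hedged Python illustration (graph representation and function name are my own assumptions):

    import random

    def quasirandom_rumor_spreading(lists, start):
        """Quasirandom push protocol: each informed node walks cyclically through
        its neighbor list, but begins at a uniformly random position."""
        informed = {start}
        position = {start: random.randrange(len(lists[start]))}
        rounds = 0
        while len(informed) < len(lists):
            rounds += 1
            contacted = set()
            for v in informed:
                contacted.add(lists[v][position[v] % len(lists[v])])
                position[v] += 1
            for u in contacted - informed:
                # The single random decision of node u: where to start in its list.
                position[u] = random.randrange(len(lists[u]))
            informed |= contacted
        return rounds

    # Complete graph K_16 with the adversarial "start right after yourself" lists:
    # the random offsets break the lock-step behaviour of the deterministic variant.
    n = 16
    lists = {v: [(v + i) % n for i in range(1, n)] for v in range(n)}
    print(quasirandom_rumor_spreading(lists, start=0))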


Results: The log(n) bounds for
– complete graphs,
– random graphs G_{n,p}, p ≥ (1+ε) log(n)/n,
– hypercubes
still hold, independent of the structure of the lists. [Doerr, F., Sauerwald '08]


Results (cont.):
– Random graphs G_{n,p}, p = (log(n)+log(log(n)))/n:
  fully randomized: T(G_{n,p}) = Θ(log(n)^2)
  quasirandom: T(G_{n,p}) = Θ(log(n))
– Complete k-regular trees:
  fully randomized: T(G) = Θ(k log(n))
  quasirandom: T(G) = Θ(k log(n)/log(k))

Algorithm Engineering Perspective:
– need fewer random bits
– easy to implement: any implicitly existing permutation of the neighbors can be used for the lists


Proof ingredients:

– Forward approximation: O(log n) nodes are quickly informed; then O(log n) phases with a constant number of rounds each:
  – the set of newly informed nodes is independent
  – the number of informed nodes doubles per phase
  afterwards a constant fraction of the nodes is informed

– Backward approximation: if there is one uninformed vertex at time t, then there are at least Ω(log n) vertices uninformed O(log n) time steps before

– Coupling: w.h.p. one of the Ω(n) informed vertices informs one of the O(log n) uninformed vertices within a single step


Summary

Quasirandomness:
– Simulate a particular aspect of a random object

Surprising results:
– Quasirandom walks (see Talk 61, Sat 13:45)
– Quasirandom rumor spreading

For future research:
– Good news: quasirandomness can be analyzed
– Many open problems
– "What is the right dose of randomness?"

Thank you!!