WS 2006-07
Prof. Dr. Th. Ottmann
Algorithmentheorie
16 – Persistence and Obliviousness
Overview
Motivation: Oblivious and persistent structures
Examples: Arrays, search trees
Making structures (partially) persistent:
Structure copying, path copying, and the DSST method
Application: Point location
Oblivious structures:
Randomized and uniquely represented structures, c-level jump lists
Motivation
A structure storing a set of keys is called oblivious if its generation history cannot be inferred from its current shape.
A structure is called persistent, if it supports access to multiple versions.
Partially persistent: All versions can be accessed but only the newest version can be modified.
Fully persistent: All versions can be accessed and modified.
Confluently persistent: Two or more old versions can be combined into one new version.
Example: Arrays
Array:
2 4 8 15 17 43 47 …
Uniquely represented structure, hence oblivious!
Access: In time O(log n) by binary search.
Update (insertion, deletion): Θ(n)
Caution: The storage structure may still depend on the generation history!
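The array dictionary above can be sketched in a few lines of Python; the keys are the ones from the slide, and `bisect` performs the O(log n) binary search while insertion still shifts Θ(n) elements:

```python
import bisect

keys = [2, 4, 8, 15, 17, 43, 47]   # sorted array from the slide

# Access in O(log n) by binary search
def contains(a, x):
    i = bisect.bisect_left(a, x)
    return i < len(a) and a[i] == x

# Update in Theta(n): the search is O(log n), but inserting
# shifts all later elements by one position
def insert(a, x):
    bisect.insort(a, x)

insert(keys, 10)
```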
Example: Natural search trees
Only partially oblivious! Insertion history can sometimes be reconstructed. Deleted keys are not visible.
Access, insertion, and deletion of keys may take Θ(n) time.
[Figure: inserting 1, 3, 5, 7 produces a degenerate right path, while inserting 5, 1, 3, 7 produces a different shape over the same set; the shape betrays the insertion order.]
Simple methods for making structures persistent
Structure-copying method: Make a copy of the data structure each time it is changed. Yields full persistence at the price of Θ(n) time and space per update to a structure of size n.
Store a log file of all updates! In order to access version i, replay the first i updates, starting with the initial structure, and generate version i. Θ(i) time per access, O(1) space and time per update.
Hybrid method: Store the complete sequence of updates and additionally every k-th version, for a suitably chosen k. Result: any choice of k causes a blowup in either storage space or access time.
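The log-file method is easy to sketch (a minimal Python sketch; the set-of-keys ADT and the operation encoding are illustrative assumptions, not from the slides):

```python
# Log-file method: store only the update log; rebuild version i by replaying.
log = []  # list of ("insert" | "delete", key)

def update(op, key):
    log.append((op, key))          # O(1) time and space per update

def version(i):
    # Theta(i) time per access: replay the first i updates
    s = set()
    for op, key in log[:i]:
        if op == "insert":
            s.add(key)
        else:
            s.discard(key)
    return s

update("insert", 5); update("insert", 3); update("delete", 5)
```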
Are there any better methods?
Making data structures persistent
Several constructions to make various data structures persistent had been devised, but no general approach existed until the seminal paper by Driscoll, Sarnak, Sleator, and Tarjan (1986).
They propose methods to make linked data structures partially as well as fully persistent.
Let’s first have a look at how to make structures consisting of linked nodes (trees, directed graphs,..) partially persistent.
Fat node method - partial persistence
Record all changes made to node fields in the nodes.
Each fat node contains the same fields as an ephemeral node, plus a version stamp.
Add the modification history to every node: each field of a node holds a list of version-value pairs.
Fat node method - partial persistence
Modifications
Ephemeral update step i creates new node: create a new fat node with version stamp i and original field values
Ephemeral update step i changes a field: store the field value plus a timestamp
Each node knows what its value was at any previous point in time
Access field f of version i
Choose the value with maximum version stamp no greater than i
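The access rule can be sketched as follows; `FatField` is a hypothetical name, and the (version, value) history list follows the slide's description:

```python
import bisect

class FatField:
    """One field of a fat node: a modification history of (version, value) pairs."""
    def __init__(self, version, value):
        self.versions = [version]   # version stamps, kept sorted
        self.values = [value]

    def set(self, version, value):
        # partial persistence: updates only ever append at the newest version
        self.versions.append(version)
        self.values.append(value)

    def get(self, i):
        # value with maximum version stamp <= i (O(log m) binary search)
        j = bisect.bisect_right(self.versions, i) - 1
        return self.values[j] if j >= 0 else None

f = FatField(1, "a")
f.set(4, "b")
f.set(7, "c")
```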
Fat node method - analysis
Time cost per access: an O(log m) slowdown per node, where m is the number of updates
(using binary search on the modification history)
Time and Space cost per update step is O(1)
(to store the modification along with the timestamp at the end of the modification history)
Fat node method - Example
A partially persistent search tree. Insertions: 5, 3, 13, 15, 1, 9, 7, 11, 10, followed by deletion of item 13.
[Figure: the resulting fat-node tree; each node and pointer field carries the version stamp (1-10) of the update that created it, and item 13 additionally records its deletion at version 10.]
Path-copying method - partial persistence
Make a copy of the node before changing it to point to the new child. Cascade the change back until the root is reached.
Restructuring costs O(height of tree) per update operation.
Every modification creates a new root.
Maintain an array of roots indexed by timestamps.
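A minimal path-copying sketch for a persistent (unbalanced) binary search tree; the node layout and names are illustrative:

```python
# Path-copying: each insert copies the nodes on the search path and
# returns a new root; old roots keep working and share unchanged subtrees.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:            # copy this node, share the untouched subtree
        return Node(root.key, insert(root.left, key), root.right)
    return Node(root.key, root.left, insert(root.right, key))

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

roots = [None]                    # array of roots indexed by version
for k in [5, 1, 7, 3, 2, 4]:
    roots.append(insert(roots[-1], k))
```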
Path-copying method
Restructuring costs: O(log n) per update operation, if the tree is kept balanced.
[Figure: version 0 is a tree with root 5, children 1 and 7, and 3 below 1. Version 1 (Insert 2) copies the search path 5, 1, 3 and attaches 2; version 2 (Insert 4) copies the path again and attaches 4. The three roots, indexed 0, 1, 2, share all unchanged subtrees.]
Node-copying method: DSST
DSST method: extend each node by a time-stamped modification box.
Modification boxes are initially empty and are filled bottom-up. A full box, e.g. "t: rp", records the new value of one field (here the right pointer): all versions after time t use the boxed value, all versions before time t use the original field.
[Figure: a node with key k, left pointer lp, right pointer rp, and modification box "t: rp".]
DSST method
The amortized costs (time and space) per update operation are O(1).
[Figure: version 0 is a tree with root 5, children 1 and 7, and 3 below 1. Insert(2) fills a modification box with stamp 1 ("1: lp"); Insert(4) fills a second box with stamp 2 ("2: rp"); a node whose box is already full must be copied instead.]
Node-copying method - partial persistence
Modification
If modification box empty, fill it.
Otherwise, make a copy of the node, using only the latest values: the value from the modification box plus the value we want to insert are written into the copy's plain fields, and the copy's own modification box starts out empty.
Cascade this change to the node’s parent
If the node is a root, add the new root to a sorted array of roots
Access time gets O(1) slowdown per node, plus additive O(log m) cost for finding the correct root
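These rules can be sketched for a partially persistent binary search tree; the single modification box per node and all names are illustrative simplifications of the DSST scheme:

```python
# Node-copying (DSST) sketch: one modification box per node;
# when the box is already full, the node is copied.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.mod = None                      # (version, field, value) or None

    def get(self, field, version):
        # field value as of `version`: use the box if it applies
        if self.mod and self.mod[1] == field and self.mod[0] <= version:
            return self.mod[2]
        return getattr(self, field)

roots = []                                   # roots[v] = root of version v

def insert(key):
    version = len(roots)
    if not roots:
        roots.append(Node(key))
        return
    path, node = [], roots[-1]
    while node is not None:                  # walk down the newest version
        field = "left" if key < node.key else "right"
        path.append((node, field))
        node = node.get(field, version)
    new = Node(key)
    while path:                              # cascade the change upward
        node, field = path.pop()
        if node.mod is None:
            node.mod = (version, field, new) # empty box: fill it, done
            roots.append(roots[-1])
            return
        copy = Node(node.key, node.get("left", version),
                    node.get("right", version))
        setattr(copy, field, new)            # copy uses only the latest values
        new = copy                           # the parent must point at the copy
    roots.append(new)                        # the root itself was copied

def contains(key, version):
    node = roots[version]
    while node is not None:
        if key == node.key:
            return True
        node = node.get("left" if key < node.key else "right", version)
    return False

for k in [5, 1, 3, 7, 2]:
    insert(k)
```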
Node-copying method - Example
A partially persistent search tree. Insertions: 5, 3, 13, 15, 1, 9, 7, 11, 10, followed by deletion of item 13.
[Figure: the resulting node-copied tree; modification boxes carry the version stamps 1-10, and nodes whose boxes were already full have been copied during the later updates.]
Node-copying method - partial persistence
The amortized costs (time and space) per modification are O(1). Proof: using the potential technique.
Potential technique
The potential is a function of the entire data structure.
Definition (potential function): a measure of a data structure whose change after an operation corresponds to the time cost of the operation.
• The initial potential has to be equal to zero, and the potential must be non-negative for all versions.
• The amortized cost of an operation is the actual cost plus the change in potential.
• Different potential functions lead to different amortized bounds.
Node-copying method - partial persistence
Definitions
• Live nodes: the nodes forming the latest version (reachable from the root of the most recent version); all other nodes are dead.
• Full live nodes: live nodes whose modification boxes are full.
Node-copying method - potential paradigm
The potential function Φ(T): the number of full live nodes in T (initially zero).
The amortized cost of an operation is the actual cost plus the change in potential ΔΦ.
Each modification involves some number k of node copies, each with O(1) space and time cost, plus one change to a modification box with O(1) time cost.
Change in potential after an update operation: ΔΦ ≤ 1 − k, since each of the k copied nodes was a full live node and becomes dead (−k), each copy starts with an empty box, and at most one previously empty box is filled (+1).
Space: O(k + ΔΦ), time: O(k + 1 + ΔΦ).
Hence, a modification takes O(1) amortized space and O(1) amortized time.
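The O(1) bound follows by a short calculation (a sketch; here Φ counts the full live nodes and k is the number of node copies in one update):

```latex
\[
\Delta\Phi \;\le\; 1 - k
\qquad\text{(one empty box filled; $k$ full live nodes become dead)}
\]
\[
\text{amortized time} \;=\; \underbrace{(k+1)}_{\text{actual cost}} + \Delta\Phi
\;\le\; (k+1) + (1-k) \;=\; 2 \;=\; O(1).
\]
```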
Application: Planar Point Location
Suppose that the Euclidean plane is subdivided into polygons by n line segments that intersect only at their endpoints.
Given such a polygonal subdivision and an online sequence of query points in the plane, the planar point location problem is to determine for each query point the polygon containing it.
Measure an algorithm by three parameters:
1) The preprocessing time.
2) The space required for the data structure.
3) The time per query.
Planar point location -- example
[Figure: an example polygonal subdivision.]
Solving planar point location (Cont.)
Partition the plane into vertical slabs by drawing a vertical line through each endpoint.
Within each slab the lines are totally ordered.
Allocate a search tree per slab containing the lines at the leaves; with each line, associate the polygon above it.
Allocate another search tree on the x-coordinates of the vertical lines.
Solving planar point location (Cont.)
To answer a query:
first find the appropriate slab,
then search within the slab to find the polygon.
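The two-level query can be sketched as follows; for simplicity the sketch uses horizontal lines (so "above" is a plain y-comparison; real segments need an orientation test), and sorted Python lists with `bisect` stand in for the two search trees. All coordinates and polygon labels are made up:

```python
import bisect

xs = [0.0, 2.0, 5.0]                 # slab boundaries (x-coordinates of endpoints)
slab_lines = [                       # per slab: (y of line, polygon above it), sorted by y
    [(1.0, "A"), (3.0, "B")],        # slab 0 <= x < 2
    [(2.0, "C")],                    # slab 2 <= x < 5
]

def locate(x, y):
    i = bisect.bisect_right(xs, x) - 1      # search tree over the slab boundaries
    if i < 0 or i >= len(slab_lines):
        return "outer"
    ys = [ly for ly, _ in slab_lines[i]]
    j = bisect.bisect_right(ys, y)          # search tree within the slab
    # the containing face is the polygon above the highest line below the query
    return slab_lines[i][j - 1][1] if j > 0 else "outer"
```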
Planar point location -- example
[Figure: the subdivision partitioned into vertical slabs through the segment endpoints.]
Planar point location -- analysis
Query time is O(log n)
How about the space?
Θ(n²) in the worst case
And so could be the preprocessing time
Planar point location -- bad example
The total number of lines is O(n), but the number of lines in each slab can also be Θ(n); with Θ(n) slabs, this yields Θ(n²) space.
Planar point location & persistence
So how do we improve the space bound ?
Key observation: The lists of the lines in adjacent slabs are very similar.
Create the search tree for the first slab.
Then obtain the next one by deleting the lines that end at the corresponding vertex and adding the lines that start at that vertex.
How many insertions/deletions are there altogether?
2n: each of the n segments is inserted once and deleted once.
Planar point location & persistence (cont)
Updates should be persistent since we need all search trees at the end.
Partial persistence is enough.
Well, we already have the path-copying method, so let's use it. What do we get?
O(n log n) space and O(n log n) preprocessing time.
We can improve the space bound to O(n) by using the DSST method.
Methods for making structures oblivious
Unique representation of the structure:
Set/size uniqueness: For each set of n keys there is exactly one structure which can store such a set.
The storage is order-unique, i.e. the nodes of the structure are ordered and the keys are stored in ascending order in nodes with ascending numbers.
Randomise the structure:
Ensure that the probability of each structure storing a set M of keys is independent of the way M was generated.
Observation: The address assignment of pointers must also be subject to a randomised regime!
Example of a randomised structure
Z-stratified search tree
On each stratum, randomly choose the distribution of trees from Z.
Insertion? Deletion?
Uniquely represented structures
(a) Generation history determines structure
(b) Set-uniqueness: Set determines structure
[Figure: (a) inserting 1, 3, 5, 7 versus 5, 1, 3, 7 into a natural search tree yields two different shapes; (b) a set-unique structure stores the set {1, 3, 5, 7} in one and the same shape, regardless of insertion order.]
Uniquely represented structures
(c) Size-uniqueness: Size determines structure
[Figure: the sets {1, 3, 5, 7} and {2, 4, 5, 8} are stored in a common structure.]
Order-uniqueness: Fixed ordering of nodes determines where the keys are to be stored.
Set- and order-unique structures
Lower bounds?
Assumptions: A dictionary of size n is represented by a graph of n nodes with
- finite (fixed) node degree,
- a fixed order of the nodes,
- the i-th node storing the i-th largest key.
Operations allowed to change the graph:
• creation/removal of a node
• pointer change
• exchange of keys
Theorem: For each set- and order-unique representation of a dictionary with n keys, at least one of the operations access, insertion, or deletion must require time Ω(n^(1/3)).
Uniquely represented dictionaries
Problem: Find set-unique or size-unique representations of the ADT "dictionary".
Known solutions:
(1) set-unique, order-unique
• Aragon/Seidel, FOCS 1989: Randomized Search Trees
• Each key s ∈ X is stored with priority h(s), for a universal hash function h.
• Update as for priority search trees!
• Search, insert, and delete can be carried out in O(log n) expected time.
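The idea can be sketched as a treap whose priorities are derived deterministically from the keys (here via `hashlib`, standing in for a universal hash function); the shape then depends only on the key set, not on the insertion order:

```python
import hashlib

def priority(key):
    # deterministic "random" priority from the key (stand-in for a universal hash)
    return int.from_bytes(hashlib.sha256(str(key).encode()).digest()[:8], "big")

class TNode:
    def __init__(self, key):
        self.key, self.prio = key, priority(key)
        self.left = self.right = None

def insert(root, key):
    if root is None:
        return TNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.prio > root.prio:   # rotate right to restore heap order
            l = root.left
            root.left, l.right = l.right, root
            root = l
    elif key > root.key:
        root.right = insert(root.right, key)
        if root.right.prio > root.prio:  # rotate left to restore heap order
            r = root.right
            root.right, r.left = r.left, root
            root = r
    return root

def keys_inorder(t):
    return [] if t is None else keys_inorder(t.left) + [t.key] + keys_inorder(t.right)

def shape(t):
    # canonical encoding of the tree, to compare structures
    return None if t is None else (t.key, shape(t.left), shape(t.right))

t1 = t2 = None
for k in [1, 3, 5, 7]:
    t1 = insert(t1, k)
for k in [5, 1, 3, 7]:
    t2 = insert(t2, k)
```

Since the (key, priority) pairs fully determine a treap, both insertion orders produce the identical structure.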
The Jelly Fish
(2) L. Snyder, 1976: set-unique, order-unique
Upper bound: Jelly Fish; search, insert, and delete in time O(√n).
body: n nodes
√n tentacles of length √n each
[Figure: a Jelly Fish with keys distributed over body and tentacles in a fixed order.]
Lower bound for tree-based structures
set-unique, order-unique
Lower bound: For “tree-based” structures the following holds:
Update-time · Search-time = Ω (n)
Number of nodes: n ≤ h·L + 1, where h is the height and L the number of leaves;
hence L ≥ (n − 1)/h.
After Delete(x1) followed by Insert(xn+1) with xn+1 > xn, at least L − 1 keys must have moved from leaves to internal nodes. Therefore, the update requires time Ω(L) = Ω(n/h).
[Figure: a tree of height h with L leaves storing the keys x1 < … < xn.]
Cons-structures
(3) Sundar/Tarjan, STOC 1990. Upper bound: (nearly) full binary search trees.
The only operation allowed for updates: Cons(x, L, R), which builds a new tree with root x, left subtree L, and right subtree R.
Search time: O(log n)
Insertion and deletion possible in O(√n) time
[Figure: Cons combines two subtrees L and R under a new root x.]
Jump-lists
(Half-dynamic) 2-level jump-list
2-level jump-list of size n: choose i with (i − 1)² < n ≤ i², i.e. i = ⌈√n⌉.
Search: O(i) = O(√n) time
Insertion, deletion: O(√n) time
[Figure: the keys 2, 3, 5, 7, 8, 10, 11, 12, 14, 17, 19 with level-1 jump pointers at positions 0, i, 2i, …, ⌊(n − 1)/i⌋·i and a tail pointer.]
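The 2-level search can be sketched with an array standing in for the linked list; the keys are the ones from the figure, and `i` is the block width with (i − 1)² < n ≤ i²:

```python
import math

keys = [2, 3, 5, 7, 8, 10, 11, 12, 14, 17, 19]  # sorted keys from the slide
i = math.isqrt(len(keys) - 1) + 1               # block width: (i-1)^2 < n <= i^2

def search(x):
    # level 1: follow jump pointers of width i to the block that may contain x
    start = 0
    while start + i < len(keys) and keys[start + i] <= x:
        start += i
    # level 0: scan within the block of at most i keys
    for j in range(start, min(start + i, len(keys))):
        if keys[j] == x:
            return j
    return -1                                   # not present
```

Both phases inspect O(i) = O(√n) positions.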
Jump-lists: Dynamization
2-level jump-list of size n with (i − 1)² < n ≤ i².
Search: O(i) = O(√n) time
Insert, delete: O(√n) time
Can be made fully dynamic: the parameter i remains valid while n stays in the interval ((i − 1)², i²].
[Figure: the position of n relative to (i − 1)², i², (i + 1)², (i + 2)²; below it the key list with its jump pointers, as on the previous slide.]
3-level jump-lists
3-level jump-list of size n: choose i with (i − 1)³ < n ≤ i³, i.e. i = ⌈n^(1/3)⌉ (e.g. n = 30 gives i = 4, since 3³ < 30 ≤ 4³).
Search(x): locate x by following
level-2 pointers, identifying i² keys among which x may occur,
level-1 pointers, identifying i keys among which x may occur,
level-0 pointers, identifying x.
Time: O(i) = O(n^(1/3))
[Figure: level-2 pointers jump in steps of i² (positions 0, i², 2i², …), level-1 pointers in steps of i.]
3-level jump-lists
An update requires changing
2 pointers on level 0,
up to i pointers on level 1,
all (at most i) pointers on level 2.
Update time: O(i) = O(n^(1/3))
c-level jump-lists
Let (i − 1)^c < n ≤ i^c.
Lower levels:
level 0: all pointers of length 1:
...
level j: all pointers of length i^(j−1):
...
level ⌊c/2⌋: ...
Upper levels:
level j: connect in a list all nodes
1, 1·i^(j−1) + 1, 2·i^(j−1) + 1, 3·i^(j−1) + 1, ...
...
level c:
c-level jump-lists
Theorem:
For each c ≥ 3, the c-level jump-list is a size- and order-unique representation of dictionaries with the following characteristics:
Space requirement: O(c·n)
Access time: O(c·n^(1/c))
Update time: O(√n), if c is even,
O(n^((c+1)/(2c))), if c is odd