Transcript
Page 1: Data Mining:

04/08/23 Data Mining: Concepts and Techniques

1

Data Mining: Concepts and

Techniques

— Chapter 4 —

Jiawei Han and Micheline Kamber

Department of Computer Science

University of Illinois at Urbana-Champaign

www.cs.uiuc.edu/~hanj

©2008 Jiawei Han. All rights reserved.

Page 2: Data Mining:


Page 3: Data Mining:


Chapter 4: Data Cube Technology

Efficient Computation of Data Cubes

Exploration and Discovery in

Multidimensional Databases

Summary

Page 4: Data Mining:


Efficient Computation of Data Cubes

General heuristics

Multi-way array aggregation

BUC

H-cubing

Star-Cubing

High-Dimensional OLAP

Page 5: Data Mining:


Roadmap for Efficient Computation

Preliminary cube computation tricks (Agarwal et al. ’96)

Computing full/iceberg cubes: three methodologies

Top-down: multi-way array aggregation (Zhao, Deshpande & Naughton, SIGMOD’97)

Bottom-up: BUC (Beyer & Ramakrishnan, SIGMOD’99); H-cubing technique (Han, Pei, Dong & Wang, SIGMOD’01)

Integrating top-down and bottom-up: Star-Cubing algorithm (Xin, Han, Li & Wah, VLDB’03)

High-dimensional OLAP: a minimal cubing approach (Li et al., VLDB’04)

Computing alternative kinds of cubes: partial cube, closed cube, approximate cube, etc.

Page 6: Data Mining:


Cube: A Lattice of Cuboids

0-D (apex) cuboid: all

1-D cuboids: time; item; location; supplier

2-D cuboids: time,item; time,location; time,supplier; item,location; item,supplier; location,supplier

3-D cuboids: time,item,location; time,item,supplier; time,location,supplier; item,location,supplier

4-D (base) cuboid: time,item,location,supplier

Page 7: Data Mining:


Preliminary Tricks (Agarwal et al. VLDB’96)

Sorting, hashing, and grouping operations are applied to the dimension attributes in order to reorder and cluster related tuples

Aggregates may be computed from previously computed aggregates, rather than from the base fact table

Smallest-child: computing a cuboid from the smallest, previously computed cuboid

Cache-results: caching results of a cuboid from which other cuboids are computed to reduce disk I/Os

Amortize-scans: computing as many cuboids as possible at the same time to amortize disk reads

Share-sorts: sharing sorting costs across multiple cuboids when a sort-based method is used

Share-partitions: sharing the partitioning cost across multiple cuboids when hash-based algorithms are used
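As a toy illustration of the smallest-child and aggregate-from-aggregate heuristics above (not code from the slides), a higher-level cuboid can be rolled up from a smaller, previously computed cuboid instead of the base fact table; the fact table, dimension values, and the `roll_up` helper are all hypothetical:

```python
from collections import defaultdict

# Hypothetical base fact table: (time, item, location) -> sales.
base = {
    ("Jan", "TV", "Toronto"): 100,
    ("Jan", "TV", "Vancouver"): 80,
    ("Jan", "PC", "Toronto"): 50,
    ("Feb", "TV", "Toronto"): 120,
}

def roll_up(cuboid, keep):
    """Aggregate a cuboid by keeping only the dimension positions in `keep`."""
    out = defaultdict(int)
    for dims, measure in cuboid.items():
        out[tuple(dims[i] for i in keep)] += measure
    return dict(out)

# (time, item) computed from the base cuboid ...
time_item = roll_up(base, keep=(0, 1))
# ... and (item) computed from the smaller (time, item) cuboid,
# not from the base table -- the "smallest-child" heuristic.
item = roll_up(time_item, keep=(1,))
print(item)  # {('TV',): 300, ('PC',): 50}
```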

Page 8: Data Mining:


Efficient Computation of Data Cubes

General heuristics

Multi-way array aggregation

BUC

H-cubing

Star-Cubing

High-Dimensional OLAP

Page 9: Data Mining:


Multi-Way Array Aggregation

Array-based “bottom-up” algorithm

Uses multi-dimensional chunks

No direct tuple comparisons

Simultaneous aggregation on multiple dimensions

Intermediate aggregate values are re-used for computing ancestor cuboids

Cannot do Apriori pruning: no iceberg optimization

[Figure: cuboid lattice over A, B, C — all; A, B, C; AB, AC, BC; ABC]

Page 10: Data Mining:


Multi-way Array Aggregation for Cube Computation (MOLAP)

Partition arrays into chunks (a small subcube which fits in memory)

Compressed sparse array addressing: (chunk_id, offset)

Compute aggregates in “multiway” by visiting cube cells in the order which minimizes the number of times each cell is visited, reducing memory access and storage cost

What is the best traversal order to do multi-way aggregation?

[Figure: a 3-D array over dimensions A (a0–a3), B (b0–b3), C (c0–c3), partitioned into 64 chunks numbered 1–64, with chunk 1 at (a0, b0, c0) and chunk 64 at (a3, b3, c3)]

Page 11: Data Mining:


Multi-way Array Aggregation for Cube Computation

[Figure: the same 64-chunk array, shown at an intermediate step of the multi-way aggregation]

Page 12: Data Mining:


Multi-way Array Aggregation for Cube Computation

[Figure: the same 64-chunk array, shown at a later step of the multi-way aggregation]

Page 13: Data Mining:


Multi-Way Array Aggregation for Cube Computation (Cont.)

Method: the planes should be sorted and computed according to their size in ascending order

Idea: keep the smallest plane in main memory; fetch and compute only one chunk at a time for the largest plane

Limitation of the method: it performs well only for a small number of dimensions

If there are a large number of dimensions, “top-down” computation and iceberg cube computation methods can be explored
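The “simultaneous aggregation on multiple dimensions” idea can be sketched as a single pass over the base cells that feeds every 2-D parent cuboid at once, instead of scanning the base cuboid once per parent; the toy 2×2×2 cuboid and variable names below are illustrative only:

```python
from collections import defaultdict

# Toy 3-D base cuboid ABC as {(a, b, c): count} -- every cell holds 1.
abc = {(a, b, c): 1 for a in range(2) for b in range(2) for c in range(2)}

# Multi-way idea: visit each base cell once and update all parent
# 2-D cuboids simultaneously, rather than scanning ABC three times.
ab, ac, bc = defaultdict(int), defaultdict(int), defaultdict(int)
for (a, b, c), v in abc.items():
    ab[(a, b)] += v
    ac[(a, c)] += v
    bc[(b, c)] += v
```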

Page 14: Data Mining:


Efficient Computation of Data Cubes

General heuristics

Multi-way array aggregation

BUC

H-cubing

Star-Cubing

High-Dimensional OLAP

Page 15: Data Mining:


Bottom-Up Computation (BUC)

BUC (Beyer & Ramakrishnan, SIGMOD’99)

Bottom-up cube computation (Note: top-down in our view!)

Divides dimensions into partitions and facilitates iceberg pruning

If a partition does not satisfy min_sup, its descendants can be pruned

If minsup = 1, compute the full CUBE!

No simultaneous aggregation

[Figure: cuboid lattice over A, B, C, D, from all down to ABCD; BUC processing order: 1 all, 2 A, 3 AB, 4 ABC, 5 ABCD, 6 ABD, 7 AC, 8 ACD, 9 AD, 10 B, 11 BC, 12 BCD, 13 BD, 14 C, 15 CD, 16 D]

Page 16: Data Mining:


BUC: Partitioning

Usually, the entire data set can’t fit in main memory

Sort distinct values, partition into blocks that fit, continue processing

Optimizations

Partitioning: external sorting, hashing, counting sort

Ordering dimensions to encourage pruning: cardinality, skew, correlation

Collapsing duplicates: can’t do holistic aggregates anymore!
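A minimal sketch of the BUC recursion with min_sup pruning, assuming a count measure and toy tuples; it omits BUC’s practical optimizations (counting sort, duplicate collapsing, dimension ordering) and the function and variable names are hypothetical:

```python
from collections import defaultdict

def buc(tuples, min_sup, dims=None, prefix=()):
    """Minimal BUC sketch: recursively partition on each remaining
    dimension; a partition below min_sup is pruned together with all
    of its descendants (iceberg pruning)."""
    if dims is None:
        dims = list(range(len(tuples[0])))
    results = {prefix: len(tuples)}            # output this group-by cell
    for i, d in enumerate(dims):
        parts = defaultdict(list)
        for t in tuples:
            parts[t[d]].append(t)
        for val, part in parts.items():
            if len(part) >= min_sup:           # Apriori-style pruning
                results.update(buc(part, min_sup, dims[i + 1:],
                                   prefix + ((d, val),)))
    return results

rows = [("a1", "b1"), ("a1", "b2"), ("a1", "b2"), ("a2", "b1")]
cube = buc(rows, min_sup=2)
```

With min_sup = 2 the partition for a2 on dimension 0 is dropped immediately, so no cell extending a2 is ever computed.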

Page 17: Data Mining:


Efficient Computation of Data Cubes

General heuristics

Multi-way array aggregation

BUC

H-cubing

Star-Cubing

High-Dimensional OLAP

Page 18: Data Mining:


H-Cubing: Using H-Tree Structure

Bottom-up computation

Exploring an H-tree structure

If the current computation of an H-tree cannot pass min_sup, do not proceed further (pruning)

No simultaneous aggregation

[Figure: cuboid lattice over A, B, C, D, from all down to ABCD]

Page 19: Data Mining:


H-tree: A Prefix Hyper-tree

Month | City | Cust_grp | Prod | Cost | Price

Jan | Tor | Edu | Printer | 500 | 485

Jan | Tor | Hhd | TV | 800 | 1200

Jan | Tor | Edu | Camera | 1160 | 1280

Feb | Mon | Bus | Laptop | 1500 | 2500

Mar | Van | Edu | HD | 540 | 520

… | … | … | … | … | …

[Figure: H-tree with root branching on cust_grp (edu, hhd, bus), then month (Jan, Mar, Feb), then city (Tor, Van, Mon); each node stores quant-info (e.g., Sum: 1765, Cnt: 2 at the edu–Jan–Tor node, with bins); a header table lists each attribute value with its quant-info (e.g., Edu Sum: 2285) and side-links]
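The H-tree above can be sketched as a prefix tree plus a header table of side-links; the `Node` class, `build_htree` helper, and chosen dimension ordering are illustrative assumptions, while the toy rows come from the slide’s table:

```python
class Node:
    def __init__(self):
        self.children = {}
        self.count = 0
        self.sum = 0

def build_htree(rows, dim_order, measure_idx):
    """Sketch of an H-tree: a prefix tree over dimension values, with a
    header table linking all nodes that share an attribute value."""
    root, header = Node(), {}
    for row in rows:
        node = root
        for d in dim_order:
            val = row[d]
            if val not in node.children:
                child = Node()
                node.children[val] = child
                header.setdefault(val, []).append(child)  # side-links
            node = node.children[val]
            node.count += 1
            node.sum += row[measure_idx]
    return root, header

# Rows: (month, city, cust_grp, price) from the slide's toy table.
rows = [("Jan", "Tor", "Edu", 485), ("Jan", "Tor", "Hhd", 1200),
        ("Jan", "Tor", "Edu", 1280), ("Feb", "Mon", "Bus", 2500),
        ("Mar", "Van", "Edu", 520)]
root, header = build_htree(rows, dim_order=(2, 0, 1), measure_idx=3)
```

With this ordering the Edu–Jan–Tor node accumulates Sum: 1765, Cnt: 2, matching the quant-info shown in the figure.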

Page 20: Data Mining:


Computing Cells Involving “City”

[Figure: the H-tree with an additional header table H_Tor built for city Tor, listing attribute values (Edu, Hhd, Bus, Jan, Feb, …) with their Q.I. and side-links; the traversal proceeds from (*, *, Tor) to (*, Jan, Tor)]

Page 21: Data Mining:


Computing Cells Involving Month But No City

[Figure: the H-tree with quant-info rolled up from the city level to the month level]

1. Roll up quant-info

2. Compute cells involving month but no city

Top-k OK mark: if the Q.I. in a child passes the top-k avg threshold, so do its parents. No binning is needed!

Page 22: Data Mining:


Computing Cells Involving Only Cust_grp

[Figure: the H-tree and its header table (e.g., Edu Sum: 2285); cells involving only cust_grp are computed by checking the header table directly]

Page 23: Data Mining:


Efficient Computation of Data Cubes

General heuristics

Multi-way array aggregation

BUC

H-cubing

Star-Cubing

High-Dimensional OLAP

Page 24: Data Mining:


Star-Cubing: An Integrating Method

D. Xin, J. Han, X. Li, B. W. Wah, Star-Cubing: Computing Iceberg Cubes by Top-Down and Bottom-Up Integration, VLDB'03

Explore shared dimensions

E.g., dimension A is the shared dimension of ACD and AD

ABD/AB means cuboid ABD has shared dimensions AB

Allows for shared computations

E.g., cuboid AB is computed simultaneously as ABD

Aggregate in a top-down manner, but with a bottom-up sub-layer underneath which allows Apriori pruning

Shared dimensions grow in bottom-up fashion

[Figure: cuboid lattice annotated with shared dimensions — C/C; AC/AC, BC/BC; ABC/ABC, ABD/AB, ACD/A, BCD; AD/A, BD/B, CD; D; ABCD/all]

Page 25: Data Mining:


Iceberg Pruning in Shared Dimensions

Anti-monotonic property of shared dimensions

If the measure is anti-monotonic, and if the aggregate value on a shared dimension does not satisfy the iceberg condition, then none of the cells extended from this shared dimension can satisfy the condition either

Intuition: if we can compute the shared dimensions before the actual cuboid, we can use them to do Apriori pruning

Problem: how to prune while still aggregating simultaneously on multiple dimensions?

Page 26: Data Mining:


Cell Trees

Use a tree structure similar to the H-tree to represent cuboids

Collapses common prefixes to save memory

Keeps a count at each node

Traverse the tree to retrieve a particular tuple

Page 27: Data Mining:


Star Attributes and Star Nodes

Intuition: if a single-dimensional aggregate on an attribute value p does not satisfy the iceberg condition, it is useless to distinguish such values during the iceberg computation

E.g., b2, b3, b4, c1, c2, c4, d1, d2, d3

Solution: replace such attribute values by a *. Such attributes are star attributes, and the corresponding nodes in the cell tree are star nodes

A B C D Count

a1 b1 c1 d1 1

a1 b1 c4 d3 1

a1 b2 c2 d2 1

a2 b3 c3 d4 1

a2 b4 c3 d4 1

Page 28: Data Mining:


Example: Star Reduction

Suppose minsup = 2

Perform one-dimensional aggregation. Replace attribute values whose count < 2 with *, and collapse all *’s together

The resulting table has all such attribute values replaced with the star attribute

With regard to the iceberg computation, this new table is a lossless compression of the original table

Table after star replacement:

A B C D Count

a1 b1 * * 1

a1 b1 * * 1

a1 * * * 1

a2 * c3 d4 1

a2 * c3 d4 1

Compressed table:

A B C D Count

a1 b1 * * 2

a1 * * * 1

a2 * c3 d4 2
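The star-reduction step can be sketched directly; `star_reduce` is a hypothetical helper name, but the rows and minsup come from the slide, so the output should match the compressed table above:

```python
from collections import Counter, defaultdict

def star_reduce(rows, min_sup):
    """One-dimensional aggregation: any attribute value whose count is
    below min_sup becomes '*'; duplicate rows then collapse."""
    n_dims = len(rows[0])
    keep = []
    for d in range(n_dims):
        counts = Counter(r[d] for r in rows)
        keep.append({v for v, c in counts.items() if c >= min_sup})
    reduced = defaultdict(int)
    for r in rows:
        starred = tuple(v if v in keep[d] else "*" for d, v in enumerate(r))
        reduced[starred] += 1
    return dict(reduced)

rows = [("a1", "b1", "c1", "d1"), ("a1", "b1", "c4", "d3"),
        ("a1", "b2", "c2", "d2"), ("a2", "b3", "c3", "d4"),
        ("a2", "b4", "c3", "d4")]
table = star_reduce(rows, min_sup=2)
# Matches the slide: {(a1,b1,*,*): 2, (a1,*,*,*): 1, (a2,*,c3,d4): 2}
```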

Page 29: Data Mining:


Star Tree

Given the new compressed table, it is possible to construct the corresponding cell tree, called a star tree

Keep a star table at the side for easy lookup of star attributes

The star tree is a lossless compression of the original cell tree

A B C D Count

a1 b1 * * 2

a1 * * * 1

a2 * c3 d4 2

Page 30: Data Mining:


Star-Cubing Algorithm—DFS on Lattice Tree

[Figure: the cuboid lattice annotated with shared dimensions (all; A/A, B/B, C/C, D/D; AB/AB, AC/AC, BC/BC, AD/A, BD/B, CD; ABC/ABC, ABD/AB, ACD/A, BCD; ABCD), alongside the base star tree (root: 5; a1: 3, a2: 2; b*: 2, b1: 2, b*: 1; c*: 1, c*: 2, c3: 2; d*: 1, d*: 2, d4: 2) and the BCD tree (BCD: 5; b*: 3, b1: 2; c*: 2, c3: 2, c*: 1; d*: 1, d4: 2, d*: 2) created during the DFS]

Page 31: Data Mining:


Multi-Way Aggregation

[Figure: the trees ABC/ABC, ABD/AB, ACD/A, and BCD aggregated simultaneously from the base cuboid ABCD]

Page 32: Data Mining:


Star-Cubing Algorithm—DFS on Star-Tree

Page 33: Data Mining:


Multi-Way Star-Tree Aggregation

Start the depth-first search at the root of the base star tree

At each new node in the DFS, create the corresponding star trees that are descendants of the current tree according to the integrated traversal ordering

E.g., in the base tree, when the DFS reaches a1, the ACD/A tree is created

When the DFS reaches b*, the ABD/AB tree is created

The counts in the base tree are carried over to the new trees

Page 34: Data Mining:


Multi-Way Aggregation (2)

When the DFS reaches a leaf node (e.g., d*), start backtracking

On every backtracking branch, the count in the corresponding tree is output, the tree is destroyed, and the node in the base tree is destroyed

Example:

When traversing from d* back to c*, the a1b*c*/a1b*c* tree is output and destroyed

When traversing from c* back to b*, the a1b*D/a1b* tree is output and destroyed

When at b*, jump to b1 and repeat a similar process

Page 35: Data Mining:


Efficient Computation of Data Cubes

General heuristics

Multi-way array aggregation

BUC

H-cubing

Star-Cubing

High-Dimensional OLAP

Page 36: Data Mining:


The Curse of Dimensionality

None of the previous cubing methods can handle high dimensionality!

A database of 600k tuples; each dimension has a cardinality of 100 and a Zipf skew of 2.

Page 37: Data Mining:


Motivation of High-D OLAP

X. Li, J. Han, and H. Gonzalez, High-Dimensional OLAP: A Minimal Cubing Approach, VLDB'04

Challenge to current cubing methods: the “curse of dimensionality” problem

Iceberg cubes and compressed cubes only delay the inevitable explosion

Full materialization: still significant overhead in accessing results on disk

High-D OLAP is needed in applications

Science and engineering analysis

Bio-data analysis: thousands of genes

Statistical surveys: hundreds of variables

Page 38: Data Mining:


Fast High-D OLAP with Minimal Cubing

Observation: OLAP occurs only on a small subset of dimensions at a time

Semi-online computational model:

1. Partition the set of dimensions into shell fragments

2. Compute data cubes for each shell fragment while retaining inverted indices or value-list indices

3. Given the pre-computed fragment cubes, dynamically compute cube cells of the high-dimensional data cube online

Page 39: Data Mining:


Properties of Proposed Method

Partitions the data vertically

Reduces a high-dimensional cube into a set of lower-dimensional cubes

Online re-construction of the original high-dimensional space

Lossless reduction

Offers tradeoffs between the amount of pre-processing and the speed of online computation

Page 40: Data Mining:


Example Computation

Let the cube aggregation function be count

Divide the 5 dimensions into 2 shell fragments: (A, B, C) and (D, E)

tid A B C D E

1 a1 b1 c1 d1 e1

2 a1 b2 c1 d2 e1

3 a1 b2 c1 d1 e2

4 a2 b1 c1 d1 e2

5 a2 b1 c1 d1 e3

Page 41: Data Mining:


1-D Inverted Indices

Build a traditional inverted index or RID list

Attribute Value | TID List | List Size

a1 | 1 2 3 | 3

a2 | 4 5 | 2

b1 | 1 4 5 | 3

b2 | 2 3 | 2

c1 | 1 2 3 4 5 | 5

d1 | 1 3 4 5 | 4

d2 | 2 | 1

e1 | 1 2 | 2

e2 | 3 4 | 2

e3 | 5 | 1
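Building the 1-D inverted indices takes a single scan of the base table; this sketch uses the slide’s five-tuple running example, so the lists should match the table above:

```python
from collections import defaultdict

# Base table from the slide: tid -> (A, B, C, D, E).
rows = {1: ("a1", "b1", "c1", "d1", "e1"),
        2: ("a1", "b2", "c1", "d2", "e1"),
        3: ("a1", "b2", "c1", "d1", "e2"),
        4: ("a2", "b1", "c1", "d1", "e2"),
        5: ("a2", "b1", "c1", "d1", "e3")}

# One scan builds the 1-D inverted (RID) lists.
index = defaultdict(list)
for tid, values in sorted(rows.items()):
    for v in values:
        index[v].append(tid)
```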

Page 42: Data Mining:


Shell Fragment Cubes: Ideas

Generalize the 1-D inverted indices to multi-dimensional ones in the data cube sense

Compute all cuboids for data cubes ABC and DE while retaining the inverted indices

For example, shell fragment cube ABC contains 7 cuboids: A, B, C; AB, AC, BC; ABC

This completes the offline computation stage

Cell | Intersection | TID List | List Size

a1 b1 | 1 2 3 ∩ 1 4 5 | 1 | 1

a1 b2 | 1 2 3 ∩ 2 3 | 2 3 | 2

a2 b1 | 4 5 ∩ 1 4 5 | 4 5 | 2

a2 b2 | 4 5 ∩ 2 3 | | 0
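A fragment cuboid such as AB is obtained purely by intersecting the tid-lists of its 1-D parents, with no scan of the base table; this sketch reuses the slide’s tid-lists:

```python
# Tid-lists from the 1-D inverted index (slide's running example).
tids = {"a1": {1, 2, 3}, "a2": {4, 5}, "b1": {1, 4, 5}, "b2": {2, 3}}

# The AB cuboid of shell fragment (A, B, C): every cell is the
# intersection of the two 1-D tid-lists.
ab = {(a, b): tids[a] & tids[b]
      for a in ("a1", "a2") for b in ("b1", "b2")}
```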

Page 43: Data Mining:


Shell Fragment Cubes: Size and Design

Given a database of T tuples, D dimensions, and shell fragment size F, the fragment cubes’ space requirement is O(⌈D/F⌉ · (2^F − 1) · T)

For F < 5, the growth is sub-linear

Shell fragments do not have to be disjoint

Fragment groupings can be arbitrary to allow for maximum online performance

Known common combinations (e.g., <city, state>) should be grouped together

Shell fragment sizes can be adjusted for an optimal balance between offline and online computation

Page 44: Data Mining:


ID_Measure Table

If measures other than count are present, store them in an ID_measure table kept separate from the shell fragments

tid count sum

1 5 70

2 3 10

3 8 20

4 5 40

5 2 30

Page 45: Data Mining:


The Frag-Shells Algorithm

1. Partition the set of dimensions (A1, …, An) into a set of k fragments (P1, …, Pk)

2. Scan the base table once and do the following:

3. insert ⟨tid, measure⟩ into the ID_measure table

4. for each attribute value ai of each dimension Ai

5. build an inverted index entry ⟨ai, tidlist⟩

6. For each fragment partition Pi

7. build a local fragment cube Si by intersecting tid-lists in bottom-up fashion

Page 46: Data Mining:


Frag-Shells (2)

[Figure: dimensions A B C D E F … partitioned into fragments, yielding an ABC cube and a DEF cube; the D, E, and DE cuboids retain tuple-ID lists, e.g. the DE cuboid:]

Cell | Tuple-ID List

d1 e1 | {1, 3, 8, 9}

d1 e2 | {2, 4, 6, 7}

d2 e1 | {5, 10}

Page 47: Data Mining:


Online Query Computation: Query

A query has the general form ⟨a1, a2, …, an⟩ : M

Each ai has 3 possible values:

1. Instantiated value

2. Aggregate * function

3. Inquire ? function

For example, ⟨3, ?, ?, *, 1⟩ : count returns a 2-D data cube.

Page 48: Data Mining:


Online Query Computation: Method

Given the fragment cubes, process a query as follows:

1. Divide the query into fragments, the same as the shell

2. Fetch the corresponding TID list for each fragment from the fragment cube

3. Intersect the TID lists from the fragments to construct the instantiated base table

4. Compute the data cube using the base table with any cubing algorithm
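The four steps above can be sketched end-to-end for a simple count query; the fragment-cube dictionaries, cell keys, and the `answer` helper are simplified assumptions (only the cells this query touches are filled in):

```python
# Pre-computed fragment cubes as cell -> tid-set, for shell fragments
# (A, B, C) and (D, E) of the running example; '*' marks aggregated dims.
frag_abc = {("a1", "*", "*"): {1, 2, 3}}
frag_de = {("d1", "*"): {1, 3, 4, 5}}

def answer(cells, fragments):
    """Fetch the tid-list of each instantiated fragment cell and
    intersect them; the resulting tid-set indexes the instantiated
    base table, which any cubing algorithm can then aggregate."""
    lists = [fragments[i][cell] for i, cell in enumerate(cells)]
    return set.intersection(*lists)

# Query <a1, *, *, d1, *> : count
tid_set = answer([("a1", "*", "*"), ("d1", "*")], [frag_abc, frag_de])
count = len(tid_set)  # tids 1 and 3 -> count 2
```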

Page 49: Data Mining:


Online Query Computation: Sketch

[Figure: dimensions A through N …; TID lists fetched from the shell fragments are intersected into an instantiated base table, from which the online cube is computed]

Page 50: Data Mining:


Experiment: Size vs. Dimensionality (50 and 100 cardinality)

(50-C): 10^6 tuples, 0 skew, 50 cardinality, fragment size 3. (100-C): 10^6 tuples, 2 skew, 100 cardinality, fragment size 2.

Page 51: Data Mining:


Experiment: Size vs. Shell-Fragment Size

(50-D): 10^6 tuples, 50 dimensions, 0 skew, 50 cardinality. (100-D): 10^6 tuples, 100 dimensions, 2 skew, 25 cardinality.

Page 52: Data Mining:


Experiment: Run-time vs. Shell-Fragment Size

10^6 tuples, 20 dimensions, 10 cardinality, skew 1, fragment size 3, 3 instantiated dimensions.

Page 53: Data Mining:


Experiments on Real World Data

UCI Forest CoverType data set: 54 dimensions, 581K tuples

Shell fragments of size 2 took 33 seconds and 325 MB to compute

3-D subquery with 1 instantiated dimension: 85 ms to 1.4 sec

Longitudinal Study of Vocational Rehab. data: 24 dimensions, 8,818 tuples

Shell fragments of size 3 took 0.9 seconds and 60 MB to compute

5-D query with 0 instantiated dimensions: 227 ms to 2.6 sec

Page 54: Data Mining:


High-D OLAP: Further Implementation Considerations

Incremental update: append new TIDs to the inverted lists; add ⟨tid: measure⟩ entries to the ID_measure table

Incrementally adding new dimensions: form new inverted lists and add new fragments

Bitmap indexing: may further improve space usage and speed

Inverted index compression: store as d-gaps; explore further IR compression methods
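The d-gap idea above in a nutshell: a sorted tid-list is stored as its first element plus successive deltas, which are small and compress well under variable-byte and similar IR coders. The helper names `to_dgaps`/`from_dgaps` are hypothetical:

```python
def to_dgaps(tids):
    """Encode a sorted tid-list as first element + deltas (d-gaps)."""
    return [tids[0]] + [b - a for a, b in zip(tids, tids[1:])]

def from_dgaps(gaps):
    """Decode a d-gap list back into the original tid-list."""
    out, cur = [], 0
    for g in gaps:
        cur += g
        out.append(cur)
    return out

tids = [3, 7, 8, 15, 31]
gaps = to_dgaps(tids)  # [3, 4, 1, 7, 16]
assert from_dgaps(gaps) == tids
```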

Page 55: Data Mining:


Chapter 4: Data Cube Technology

Efficient Computation of Data Cubes

Exploration and Discovery in Multidimensional Databases

Discovery-Driven Exploration of Data Cubes

Sampling Cube

Prediction Cube

Regression Cube

Summary

Page 56: Data Mining:


Discovery-Driven Exploration of Data Cubes

Hypothesis-driven exploration by the user: huge search space

Discovery-driven (Sarawagi et al. ’98): effective navigation of large OLAP data cubes

Pre-compute measures indicating exceptions; guide the user in data analysis at all levels of aggregation

Exception: a value significantly different from the value anticipated, based on a statistical model

Visual cues, such as background color, are used to reflect the degree of exception of each cell

Page 57: Data Mining:


Kinds of Exceptions and their Computation

Parameters:

SelfExp: surprise of a cell relative to other cells at the same level of aggregation

InExp: surprise beneath the cell

PathExp: surprise beneath the cell for each drill-down path

Computation of the exception indicators (model fitting and computing the SelfExp, InExp, and PathExp values) can be overlapped with cube construction

Exceptions themselves can be stored, indexed, and retrieved like precomputed aggregates

Page 58: Data Mining:


Examples: Discovery-Driven Data Cubes

Page 59: Data Mining:


Complex Aggregation at Multiple Granularities: Multi-Feature Cubes

Multi-feature cubes (Ross et al., 1998): compute complex queries involving multiple dependent aggregates at multiple granularities

Ex.: Grouping by all subsets of {item, region, month}, find the maximum price in 1997 for each group, and the total sales among all maximum-price tuples

select item, region, month, max(price), sum(R.sales)

from purchases

where year = 1997

cube by item, region, month: R

such that R.price = max(price)

Continuing the last example, among the max-price tuples, find the min and max shelf life, and find the fraction of the total sales due to tuples that have min shelf life within the set of all max-price tuples
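The dependent-aggregate part of this query — summing sales over only the tuples that attain each group’s max price — can be sketched without the `cube by … such that` SQL extension; the purchases data below is made up, and only a single group-by is shown rather than all subsets:

```python
from collections import defaultdict

# Hypothetical purchases: (item, region, month, price, sales).
purchases = [("TV", "East", "Jan", 900, 3), ("TV", "East", "Jan", 1000, 2),
             ("TV", "East", "Jan", 1000, 4), ("PC", "West", "Feb", 700, 5)]

# Group by (item, region, month), find max(price) per group, then sum
# sales over only the tuples that attain that max price.
groups = defaultdict(list)
for item, region, month, price, sales in purchases:
    groups[(item, region, month)].append((price, sales))

result = {}
for key, rows in groups.items():
    max_price = max(p for p, _ in rows)
    result[key] = (max_price, sum(s for p, s in rows if p == max_price))
```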

Page 60: Data Mining:


Chapter 4: Data Cube Technology

Efficient Computation of Data Cubes

Exploration and Discovery in Multidimensional Databases

Discovery-Driven Exploration of Data Cubes

Sampling Cube

X. Li, J. Han, Z. Yin, J.-G. Lee, Y. Sun, “Sampling Cube: A Framework for Statistical OLAP over Sampling Data”, SIGMOD’08

Prediction Cube

Regression Cube

Page 61: Data Mining:


Statistical Surveys and OLAP

Statistical survey: a popular tool to collect information about a population based on a sample

Ex.: TV ratings, US Census, election polls

A common tool in politics, health, market research, science, and many more

An efficient way of collecting information (data collection is expensive)

Many statistical tools are available to determine validity: confidence intervals, hypothesis tests

OLAP (multidimensional analysis) on survey data is highly desirable, but can it be done well?

Page 62: Data Mining:


Surveys: Sample vs. Whole Population

[Table: Age (18, 19, 20) × Education (High-school, College, Graduate)]

Data is only a sample of the population

Page 63: Data Mining:


Problems for Drilling in Multidim. Space

[Table: Age (18, 19, 20) × Education (High-school, College, Graduate)]

Data is only a sample of the population, but samples could be small when drilling down to certain multidimensional spaces

Page 64: Data Mining:


Traditional OLAP Data Analysis Model

Assumption: the data is the entire population

Query semantics is population-based, e.g., what is the average income of 19-year-old college graduates?

[Figure: a full data warehouse with dimensions Age, Education, Income feeding a data cube]

Page 65: Data Mining:


OLAP on Survey (i.e., Sampling) Data

[Table: Age (18, 19, 20) × Education (High-school, College, Graduate)]

Semantics of the query is unchanged; the input data has changed

Page 66: Data Mining:


OLAP with Sampled Data: Where is the missing link?

OLAP runs over sampled data, but our analysis target is still the population

Idea: integrate sampling and statistical knowledge with traditional OLAP tools

Input Data | Analysis Target | Analysis Tool

Population | Population | Traditional OLAP

Sample | Population | Not Available

Page 67: Data Mining:


Challenges for OLAP on Sampling Data

Computing confidence intervals in the OLAP context

No data? Not exactly: no data in some subspaces of the cube

Sparse data: causes include sampling bias and query selection bias

Curse of dimensionality: survey data can be high-dimensional (over 600 dimensions in a real-world example); impossible to fully materialize

Page 68: Data Mining:


Example 1: Confidence Interval

[Table: Age (18, 19, 20) × Education (High-school, College, Graduate)]

What is the average income of 19-year-old high-school students?

Return not only the query result but also a confidence interval

Page 69: Data Mining:


Confidence Interval

Confidence interval: x̄ ± t_c · σ̂_x̄, where x is a sample of the data set and x̄ is the mean of the sample

t_c is the critical t-value, obtained by table lookup

σ̂_x̄ = s / √l is the estimated standard error of the mean (s: sample standard deviation, l: sample size)

Example: $50,000 ± $3,000 with 95% confidence

Treat the points in a cube cell as a sample; compute the confidence interval as for a traditional sample set

Return the answer in the form of a confidence interval: it indicates the quality of the query answer, and the user selects the desired confidence level
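A minimal confidence-interval sketch; the income samples are made up, and the critical t-value (3.182 for 95% confidence with 3 degrees of freedom) would normally come from a t-table lookup:

```python
import math
import statistics

def confidence_interval(sample, t_c):
    """Mean +/- t_c times the estimated standard error of the mean.
    t_c is the critical t-value for the chosen confidence level and
    len(sample) - 1 degrees of freedom (from a lookup table)."""
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return mean - t_c * se, mean + t_c * se

incomes = [46000, 48000, 52000, 54000]            # hypothetical cell samples
lo, hi = confidence_interval(incomes, t_c=3.182)  # 95%, df = 3
```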

Page 70: Data Mining:


Efficiently Computing Confidence Interval Measures

Efficient computation of all cells in the data cube: both the mean and the confidence interval are algebraic measures

Why is the confidence interval measure algebraic? σ̂_x̄ = s / √l is algebraic, where both s and l (the count) are algebraic

Thus one can calculate cells efficiently at more general cuboids without having to start at the base cuboid each time
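Because the measure is algebraic, (count, sum, sum-of-squares) partials from child cells can be merged to obtain the mean and the standard-error term at a more general cuboid, with no rescan of the base data; this sketch and its toy partials are illustrative only:

```python
import math

def merge(p1, p2):
    """Combine (count, sum, sum of squares) partials from two child
    cells; mean and standard error are recoverable from these three
    algebraic components."""
    return tuple(a + b for a, b in zip(p1, p2))

def summarize(partial):
    n, s, ss = partial
    mean = s / n
    var = (ss - s * s / n) / (n - 1)   # sample variance from partials
    return mean, math.sqrt(var / n)    # mean, std error of the mean

cell1 = (2, 30.0, 500.0)    # e.g. samples {10, 20}
cell2 = (2, 70.0, 2500.0)   # e.g. samples {30, 40}
mean, se = summarize(merge(cell1, cell2))
```

The merged result agrees with computing directly on the pooled samples {10, 20, 30, 40}.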

Page 71: Data Mining:


Example 2: Query Expansion

[Table: Age (18, 19, 20) × Education (High-school, College, Graduate)]

What is the average income of 19-year-old college students?

Page 72: Data Mining:


Boosting Confidence by Query Expansion

From the example: the queried cell “19-year-old college students” contains only 2 samples

The confidence interval is large (i.e., low confidence). Why? Small sample size; high standard deviation within the samples

Small sample sizes can occur even at relatively low-dimensional selections

Collect more data? Expensive!

Use data in other cells? Maybe, but one has to be careful

Page 73: Data Mining:


Intra-Cuboid Expansion: Choice 1

[Table: Age (18, 19, 20) × Education (High-school, College, Graduate)]

Expand the query to include 18- and 20-year-olds?

Page 74: Data Mining:


Intra-Cuboid Expansion: Choice 2

[Table: Age (18, 19, 20) × Education (High-school, College, Graduate)]

Expand the query to include high-school and graduate students?

Page 75: Data Mining:


Intra-Cuboid Expansion

If other cells in the same cuboid satisfy both of the following:

1. Similar semantic meaning

2. Similar cube value

then their data can be combined into the queried cell’s own to “boost” confidence

Only use expansion if necessary; a bigger sample size will narrow the confidence interval

Page 76: Data Mining:


Intra-Cuboid Expansion (2)

Cell segment similarity: some dimensions are clear (e.g., Age); some are fuzzy (e.g., Occupation); may need domain knowledge

Cell value similarity: how to determine whether two cells’ samples come from the same population? Two-sample t-test (confidence-based)
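A two-sample (Welch) t statistic can be computed directly from the two cells’ samples; a small |t| suggests the samples could come from the same population, making the cells candidates for expansion. The income samples here are made up:

```python
import math
import statistics

def welch_t(x, y):
    """Two-sample (Welch) t statistic for unequal variances:
    t = (mean1 - mean2) / sqrt(var1/n1 + var2/n2)."""
    m1, m2 = statistics.mean(x), statistics.mean(y)
    v1, v2 = statistics.variance(x), statistics.variance(y)
    return (m1 - m2) / math.sqrt(v1 / len(x) + v2 / len(y))

cell_a = [48, 50, 52, 50]    # hypothetical income samples (in $1000)
cell_b = [49, 51, 50, 52]
t = welch_t(cell_a, cell_b)  # small |t| -> similar cells
```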

Page 77: Data Mining:


Inter-Cuboid Expansion

If a query dimension is not correlated with the cube value, but is causing a small sample size by drilling down too much, remove the dimension (i.e., generalize it to *) and move to a more general cuboid

Can use the two-sample t-test to determine the similarity between two cells across cuboids

Can also use a different method, to be shown later

Page 78: Data Mining:


Query Expansion

Page 79: Data Mining:


Query Expansion Experiments (2)

Real-world sample data: 600 dimensions and 750,000 tuples

0.05% of the data is used to simulate the “sample” (allows error checking)

Page 80: Data Mining:


Query Expansion Experiments (3)

Page 81: Data Mining:


Sampling Cube Shell: Handling the “Curse of Dimensionality”

Real-world data may have > 600 dimensions; materializing the full sampling cube is unrealistic

Solution: only compute a “shell” around the full sampling cube

Method: selectively compute the best cuboids to include in the shell

Chosen cuboids will be low-dimensional

Size and depth of the shell depend on user preference (space vs. accuracy tradeoff)

Use the cuboids in the shell to answer queries

Page 82: Data Mining:


Sampling Cube Shell Construction

Top-down, iterative, greedy algorithm:

1. Top-down: start at the apex cuboid and slowly expand to higher dimensions, following the cuboid lattice structure

2. Iterative: add one cuboid per iteration

3. Greedy: at each iteration, choose the best cuboid to add to the shell

Stop when either the size limit is reached or it is no longer beneficial to add another cuboid

Page 83: Data Mining:


Sampling Cube Shell Construction (2)

How to measure the quality of a cuboid? Cuboid Standard Deviation (CSD)

s(ci): the sample standard deviation of the samples in cell ci

n(ci): the number of samples in ci; n(B): the total number of samples in B

small(ci) returns 1 if n(ci) ≥ min_sup and 0 otherwise

CSD measures the amount of variance in the cells of a cuboid

Low CSD indicates high correlation with the cube value (good); high CSD indicates little correlation (bad)


Sampling Cube Shell Construction (3)

Goal (Cuboid Standard Deviation Reduction, CSDR): reduce CSD to increase query information, i.e., find the cuboids with the least amount of CSD

Overall algorithm
  Start with the apex cuboid and compute its CSD
  Choose as the next cuboid to build the one that reduces the CSD the most relative to the apex
  Iteratively choose more cuboids that most reduce the CSD of their parent cuboids (greedy selection: at each iteration, choose the cuboid with the largest CSDR to add to the shell)


Computing the Sampling Cube Shell


Query Processing

If the query matches a cuboid in the shell, use it
If the query does not match:

1. A more specific cuboid exists: aggregate to the query's dimension level and answer the query

2. A more general cuboid exists: use the value in the cell

3. Multiple more general cuboids exist: use the most confident value
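The cases can be sketched as a small dispatcher. Everything here is hypothetical scaffolding: `shell` maps frozensets of dimension names to cuboid objects with assumed `value`, `aggregate`, and `confidence` methods.

```python
class ShellCuboid:
    """Hypothetical stand-in for a materialized cuboid in the shell."""
    def __init__(self, v, conf=1.0):
        self._v, self._conf = v, conf
    def value(self, q): return self._v
    def aggregate(self, q): return self._v   # would aggregate samples up to q
    def confidence(self, q): return self._conf

def answer_query(query_dims, shell):
    """Answer a query from a sampling-cube shell (a sketch)."""
    q = frozenset(query_dims)
    if q in shell:                              # exact match: use it
        return shell[q].value(q)
    specific = [d for d in shell if d > q]      # supersets: more specific cuboids
    if specific:                                # case 1: aggregate down to q
        return shell[min(specific, key=len)].aggregate(q)
    general = [d for d in shell if d < q]       # subsets: more general cuboids
    if general:                                 # cases 2-3: most confident value
        best = max(general, key=lambda d: shell[d].confidence(q))
        return shell[best].value(q)
    return None                                 # nothing usable in the shell
```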


Query Accuracy


Sampling Cube: Sampling-Aware OLAP

1. Confidence intervals in query processing
     Integration with OLAP queries
     Efficient algebraic query processing

2. Expand queries to increase confidence
     Solves the sparse data problem
     Inter-/intra-cuboid query expansion

3. Cube materialization with limited space
     Sampling Cube Shell


Chapter Summary

Efficient algorithms for computing data cubes
  Multiway array aggregation
  BUC
  H-cubing
  Star-cubing
  High-D OLAP by minimal cubing

Further development of data cube technology
  Discovery-driven cubes
  Multi-feature cubes
  Sampling cubes


References (I)

S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R. Ramakrishnan, and S. Sarawagi. On the computation of multidimensional aggregates. VLDB'96

D. Agrawal, A. E. Abbadi, A. Singh, and T. Yurek. Efficient view maintenance in data warehouses. SIGMOD'97

R. Agrawal, A. Gupta, and S. Sarawagi. Modeling multidimensional databases. ICDE'97

K. Beyer and R. Ramakrishnan. Bottom-up computation of sparse and iceberg CUBEs. SIGMOD'99

Y. Chen, G. Dong, J. Han, B. W. Wah, and J. Wang. Multi-dimensional regression analysis of time-series data streams. VLDB'02

G. Dong, J. Han, J. Lam, J. Pei, and K. Wang. Mining multi-dimensional constrained gradients in data cubes. VLDB'01

J. Han, Y. Cai, and N. Cercone. Knowledge discovery in databases: An attribute-oriented approach. VLDB'92

J. Han, J. Pei, G. Dong, and K. Wang. Efficient computation of iceberg cubes with complex measures. SIGMOD'01

L. V. S. Lakshmanan, J. Pei, and J. Han. Quotient cube: How to summarize the semantics of a data cube. VLDB'02


References (II)

X. Li, J. Han, and H. Gonzalez. High-dimensional OLAP: A minimal cubing approach. VLDB'04

X. Li, J. Han, Z. Yin, J.-G. Lee, and Y. Sun. Sampling cube: A framework for statistical OLAP over sampling data. SIGMOD'08

K. Ross and D. Srivastava. Fast computation of sparse datacubes. VLDB'97

K. A. Ross, D. Srivastava, and D. Chatziantoniou. Complex aggregation at multiple granularities. EDBT'98

S. Sarawagi, R. Agrawal, and N. Megiddo. Discovery-driven exploration of OLAP data cubes. EDBT'98

G. Sathe and S. Sarawagi. Intelligent rollups in multidimensional OLAP data. VLDB'01

D. Xin, J. Han, X. Li, and B. W. Wah. Star-Cubing: Computing iceberg cubes by top-down and bottom-up integration. VLDB'03

D. Xin, J. Han, Z. Shao, and H. Liu. C-Cubing: Efficient computation of closed cubes by aggregation-based checking. ICDE'06

W. Wang, H. Lu, J. Feng, and J. X. Yu. Condensed cube: An effective approach to reducing data cube size. ICDE'02

Y. Zhao, P. M. Deshpande, and J. F. Naughton. An array-based algorithm for simultaneous multidimensional aggregates. SIGMOD'97

Explanation of Multi-Way Array Aggregation

[Condensed diagram walkthrough: a 3-D array over A (a0–a3), B (b0–b3), C (c0–c3) is scanned one chunk at a time in ABC order, and each chunk is aggregated simultaneously into the AB, AC, and BC planes.
  Chunk a0b0: its values (x) are added to the plane cells.
  Chunk a0b1: its values (y) are aggregated; a0b0 is now done, so its memory can be reused.
  Chunk a0b2 (values z): a0b1 is done.
  Chunk a0b3 (values u): a0b2 is done.
  Chunk a1b0: a0b3 is done, and with it all a0c* plane cells.
  …
  Chunk a3b3 (the last one): finish; all b*c* plane cells are done as well.]

Memory Used

A: 40 distinct values; B: 400; C: 4000 (each dimension is partitioned into 4 chunks, so the chunk sizes are 10, 100, and 1000)

ABC scan order:
  Plane AB: need 1 chunk (10 × 100 × 1)
  Plane AC: need 4 chunks (10 × 1000 × 4)
  Plane BC: need 16 chunks (100 × 1000 × 16)

Total memory: 1,641,000


Memory Used

A: 40 distinct values; B: 400; C: 4000

CBA scan order:
  Plane CB: need 1 chunk (1000 × 100 × 1)
  Plane CA: need 4 chunks (1000 × 10 × 4)
  Plane BA: need 16 chunks (100 × 10 × 16)

Total memory: 156,000. Scanning the dimensions from largest to smallest cardinality minimizes memory, since the plane over the two largest dimensions then needs only one chunk.
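Both totals can be reproduced mechanically. The sketch below assumes each dimension is split into 4 chunks (so chunk sizes 10, 100, 1000 for A, B, C), matching the numbers on the slides:

```python
# Chunk sizes per dimension (each dimension split into 4 chunks)
chunk = {"A": 40 // 4, "B": 400 // 4, "C": 4000 // 4}   # 10, 100, 1000

def plane_memory(order, chunk, n_chunks=4):
    """Memory to hold the three 2-D planes while scanning a 3-D array in
    the given chunk order (multi-way array aggregation).

    For scan order d0 d1 d2: the d0-d1 plane needs 1 chunk, the d0-d2
    plane needs a row of n_chunks chunks, and the d1-d2 plane needs the
    whole plane of n_chunks**2 chunks.
    """
    d0, d1, d2 = order
    return (chunk[d0] * chunk[d1] * 1
            + chunk[d0] * chunk[d2] * n_chunks
            + chunk[d1] * chunk[d2] * n_chunks ** 2)

print(plane_memory("ABC", chunk))  # -> 1641000
print(plane_memory("CBA", chunk))  # -> 156000
```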

H-Cubing Example

[Condensed diagram walkthrough. Side-links are used to create a virtual tree; for clarity the slides print the resulting virtual tree, not the real tree.
  H-Tree 1 (condition ???): root:4; a1:3 (b1:2 → c1:1, c2:1; b2:1 → c3:1); a2:1 (b1:1 → c1:1). H-table: a1:3, a2:1, b1:3, b2:1, c1:2, c2:1, c3:1. Output: (*,*,*):4
  H-Tree 1.1, project C (condition ??c1): root:4; a1:1 (b1:1); a2:1 (b1:1). H-table: a1:1, a2:1, b1:2, b2:1. Output adds: (*,*,c1):2
  H-Tree 1.1.1, project ?b1c1: root:4; a1:1, a2:1. Output adds: (*,b1,c1):2
  H-Tree 1.1.2, aggregate ?*c1: same tree; nothing new to output
  H-Tree 1.2, aggregate ??*: root:4; a1:3 (b1:2, b2:1); a2:1 (b1:1)
  H-Tree 1.2.1, project ?b1*: root:4; a1:2, a2:1. Output adds: (*,b1,*):3, (a1,b1,*):2. After this we also project on a1b1*
  H-Tree 1.2.2, aggregate ?**: root:4; a1:3, a2:1. Output adds: (a1,*,*):3. Here we also project on a1. Finish.
Final output: (*,*,*):4, (*,*,c1):2, (*,b1,c1):2, (*,b1,*):3, (a1,b1,*):2, (a1,*,*):3]

3. Explanation of Star Cubing

[Condensed diagram walkthrough. The base ABCD-tree is: root:5; a1:3 with branches b*:1 → c*:1 → d*:1 and b1:2 → c*:2 → d*:2; a2:2 with branch b*:2 → c3:2 → d4:2 (star nodes * stand for values that cannot pass min_sup). A depth-first traversal of the ABCD-tree builds the side cuboid trees simultaneously:
  Step 1: create the BCD-cuboid tree root BCD:5.
  Step 2: visiting a1 creates a1CD/a1:3. Output: (a1,*,*):3
  Steps 3–5: descending a1 → b* → c* → d* creates a1b*D/a1b*:1 and a1b*c*/a1b*c*:1 and adds the path b*:1, c*:1, d*:1 to the side trees.
  Backtracking, the subtrees a1b*c* and a1b*D are mined and removed, but there is nothing to do (all interior nodes are star nodes).
  Steps 6–8: the a1 → b1 → c* → d* path creates a1b1D/a1b1:2 and a1b1c*/a1b1c*:2 and increments the shared counts (c*:3, d*:3). Output: (a1,b1,*):2. Again the star-node subtrees are mined (nothing to do) and removed.
  Steps 9–11: visiting a2 creates a2CD/a2:2 (output: (a2,*,*):2), then a2b*D/a2b*:2 and a2b*c3/a2b*c3:2 along the a2 → b* → c3 → d4 path; the a2b*c3 and a2b*D subtrees are mined (nothing to do) and removed.
  Recursive mining of the ACD/A tree yields a2D/a2:2 and a2c3/a2c3:2. Output: (a2,*,c3):2 and, on backtracking (as we backtrack, recursively mine the child trees), (a2,c3,d4):2.
  Mining the BCD subtree (BC/BC, BD/B, CD) is skipped here; you may do it as an exercise.
Finish. Final output: (a1,*,*):3, (a1,b1,*):2, (a2,*,*):2, (a2,*,c3):2, (a2,c3,d4):2, plus the BCD-tree patterns.]

Efficient Computation of Data Cubes

General heuristics
Multi-way array aggregation
BUC
H-cubing
Star-Cubing
High-Dimensional OLAP
Computing non-monotonic measures
Compressed and closed cubes


Computing Cubes with Non-Antimonotonic Iceberg Conditions

J. Han, J. Pei, G. Dong, K. Wang. Efficient Computation of Iceberg Cubes With Complex Measures. SIGMOD’01

Most cubing algorithms cannot compute cubes with non-antimonotonic iceberg conditions efficiently

Example:

  CREATE CUBE Sales_Iceberg AS
  SELECT month, city, cust_grp, AVG(price), COUNT(*)
  FROM Sales_Infor
  CUBEBY month, city, cust_grp
  HAVING AVG(price) >= 800 AND COUNT(*) >= 50

How to push the constraint into the iceberg cube computation?


Non-Anti-Monotonic Iceberg Condition

Anti-monotonic: if a cell fails a condition, continued processing of its descendants will still fail

The cubing query with AVG is non-anti-monotonic:
  (Mar, *, *, 600, 1800) fails the HAVING clause
  (Mar, *, Bus, 1300, 360) passes the clause

  CREATE CUBE Sales_Iceberg AS
  SELECT month, city, cust_grp, AVG(price), COUNT(*)
  FROM Sales_Infor
  CUBEBY month, city, cust_grp
  HAVING AVG(price) >= 800 AND COUNT(*) >= 50

  Month | City | Cust_grp | Prod    | Cost | Price
  Jan   | Tor  | Edu      | Printer | 500  | 485
  Jan   | Tor  | Hld      | TV      | 800  | 1200
  Jan   | Tor  | Edu      | Camera  | 1160 | 1280
  Feb   | Mon  | Bus      | Laptop  | 1500 | 2500
  Mar   | Van  | Edu      | HD      | 540  | 520
  …     | …    | …        | …       | …    | …
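The failure is easy to reproduce with toy numbers (illustrative values, not the rows above): an ancestor cell can fail the AVG threshold while a descendant passes, so failing cells cannot be pruned.

```python
# Hypothetical price lists for a cell and one of its descendant cells.
prices_mar = [485, 520, 300, 1300, 1300]   # all (Mar, *, *) sales
prices_mar_bus = [1300, 1300]              # the (Mar, *, Bus) subset

avg = lambda xs: sum(xs) / len(xs)
assert avg(prices_mar) < 800       # the ancestor fails AVG(price) >= 800
assert avg(prices_mar_bus) >= 800  # yet its descendant passes
```

With SUM or COUNT this cannot happen: a subset can never have a larger sum of non-negative values, which is exactly the anti-monotonicity that AVG lacks.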


From Average to Top-k Average

Let (*, Van, *) cover 1,000 records
  Avg(price) is the average price of those 1,000 sales
  Avg50(price) is the average price of the top-50 sales (top-50 according to the sale price)

Top-k average is anti-monotonic:
  if the top-50 sales in Vancouver have avg(price) <= 800, then the top-50 sales in Vancouver during Feb must also have avg(price) <= 800
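A top-k average can only shrink when restricted to a subset of the records, which is what makes it anti-monotonic. A sketch with a hypothetical helper and made-up prices:

```python
def avg_top_k(prices, k):
    """Average of the k highest prices (all of them if there are fewer than k)."""
    top = sorted(prices, reverse=True)[:k]
    return sum(top) / len(top)

van = [900, 850, 700, 650, 400, 300]   # hypothetical Vancouver sales
van_feb = [700, 400, 300]              # the February subset of those sales
k = 2

# A subset's top-k average can never exceed the superset's:
assert avg_top_k(van_feb, k) <= avg_top_k(van, k)
```

So if avg_top_k for a cell already falls below the threshold, every descendant cell can be pruned.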


Binning for Top-k Average

Computing the top-k avg is costly when k is large

Binning idea (for the condition avg50(c) ≥ 800):
  Large value collapsing: use one sum and one count to summarize all records with measure ≥ 800; if that count ≥ 50, there is no need to check the "small" records
  Small value binning: a group of bins, each covering a range (e.g., 600~800, 400~600, etc.); register a sum and a count for each bin


Computing Approximate top-k average

Suppose for (*, Van, *), we have the bins:

  Range    | Sum   | Count
  Over 800 | 28000 | 20
  600~800  | 10600 | 15
  400~600  | 15200 | 30
  …        | …     | …

The top 50 records are the 20 + 15 in the first two bins plus 15 more from the 400~600 bin, credited at that bin's upper bound (600):

  Approximate avg50() = (28000 + 10600 + 600 × 15) / 50 = 952

The cell may pass the HAVING clause
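The arithmetic above can be wrapped into a small helper (a sketch of the binning idea; the bin layout and the upper-bound crediting rule follow the slide's example):

```python
def approx_top_k_avg(bins, k):
    """Approximate top-k average from binned summaries (a sketch).

    `bins` is ordered from the highest range to the lowest; each entry is
    (range_upper_bound, bin_sum, bin_count). Fully consumed bins contribute
    their exact sum; a partially consumed bin is credited at its upper
    bound, so the result is an optimistic (upper) estimate.
    """
    remaining, total = k, 0.0
    for upper, bin_sum, bin_count in bins:
        if bin_count <= remaining:        # whole bin lies inside the top k
            total += bin_sum
            remaining -= bin_count
        else:                             # bin is only partially inside
            total += upper * remaining
            remaining = 0
        if remaining == 0:
            break
    return total / k

# Bins from the slide for (*, Van, *); the ">800" bin is assumed to be
# fully consumed, so its (unknown) upper bound is irrelevant here.
bins = [(float("inf"), 28000, 20), (800, 10600, 15), (600, 15200, 30)]
print(approx_top_k_avg(bins, 50))  # -> 952.0
```

Because the estimate never undershoots the real avg50(), pruning on it is safe: a cell whose estimate falls below the threshold cannot pass.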


Weakened Conditions Facilitate Pushing

Accumulate quant-info for cells to compute average iceberg cubes efficiently
  Three pieces: sum, count, top-k bins
  Use the top-k bins to estimate/prune descendants
  Use sum and count to consolidate the current cell

From weakest to strongest condition:
  approximate avg50(): anti-monotonic, can be computed efficiently
  real avg50(): anti-monotonic, but computationally costly
  avg(): not anti-monotonic


Computing Iceberg Cubes with Other Complex Measures

Computing other complex measures

Key point: find a function that is weaker but ensures certain anti-monotonicity

Examples
  Avg() ≤ v: avg_k(c) ≤ v (bottom-k avg)
  Avg() ≥ v only (no count): max(price) ≥ v
  Sum(profit) ≥ v (profit can be negative): p_sum(c) ≥ v if p_count(c) ≥ k; otherwise, sum_k(c) ≥ v

Others: conjunctions of multiple conditions


Efficient Computation of Data Cubes

General heuristics
Multi-way array aggregation
BUC
H-cubing
Star-Cubing
High-Dimensional OLAP
Computing non-monotonic measures
Compressed and closed cubes


Compressed Cubes: Condensed or Closed Cubes

W. Wang, H. Lu, J. Feng, and J. X. Yu. Condensed Cube: An Effective Approach to Reducing Data Cube Size. ICDE'02

This is challenging even for an iceberg cube: suppose 100 dimensions and only 1 base cell with count = 10. How many aggregate cells are there if count >= 10? (2^100 - 1, since every aggregate of that single cell also has count 10)

Condensed cube
  Only need to store one cell (a1, a2, …, a100, 10), which represents all the corresponding aggregate cells
  Efficient computation of the minimal condensed cube

Closed cube
  D. Xin, J. Han, Z. Shao, and H. Liu. C-Cubing: Efficient Computation of Closed Cubes by Aggregation-Based Checking. ICDE'06


Exploration and Discovery in Multidimensional Databases

Discovery-Driven Exploration of Data Cubes

Complex Aggregation at Multiple Granularities: Multi-Feature Cubes

Cube-Gradient Analysis


Cube-Gradient (Cubegrade)

Analysis of changes of sophisticated measures in multi-dimensional spaces
  Query: changes of the average house price in Vancouver in '00 compared against '99
  Answer: apartments in the West went down 20%, houses in Metrotown went up 10%

Cubegrade problem by Imielinski et al.
  Changes in dimensions → changes in measures
  Drill-down, roll-up, and mutation


From Cubegrade to Multi-dimensional Constrained Gradients in Data Cubes

Significantly more expressive than association rules
  Captures trends in user-specified measures

Serious challenges
  Many trivial cells in a cube → a "significance constraint" to prune trivial cells
  Too many pairs of cells to enumerate → a "probe constraint" to select a subset of cells to examine
  Only interesting changes wanted → a "gradient constraint" to capture significant changes


MD Constrained Gradient Mining

Significance constraint Csig: (cnt ≥ 100)
Probe constraint Cprb: (city = "Van", cust_grp = "busi", prod_grp = "*")
Gradient constraint Cgrad(cg, cp): (avg_price(cg) / avg_price(cp) ≥ 1.3)

  Dimensions                      Measures
  cid | Yr | City | Cst_grp | Prd_grp | Cnt   | Avg_price
  c1  | 00 | Van  | Busi    | PC      | 300   | 2100    (base cell)
  c2  | *  | Van  | Busi    | PC      | 2800  | 1800    (aggregated cell; sibling of c3)
  c3  | *  | Tor  | Busi    | PC      | 7900  | 2350
  c4  | *  | *    | Busi    | PC      | 58600 | 2250    (ancestor of c1–c3)

Probe cell: c2 satisfies Cprb; the pair (c4, c2) satisfies Cgrad!
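Checking the three constraints against such cells is mechanical. The sketch below encodes the slide's constants, with cells as plain dicts (a hypothetical representation):

```python
# Two of the slide's cells as plain dicts (hypothetical representation).
c1 = dict(yr="00", city="Van", cust_grp="Busi", prod_grp="PC", cnt=300, avg_price=2100)
c2 = dict(yr="*", city="Van", cust_grp="Busi", prod_grp="PC", cnt=2800, avg_price=1800)

def c_sig(c):
    """Significance constraint Csig: cnt >= 100."""
    return c["cnt"] >= 100

def c_prb(c):
    """Probe constraint Cprb; prod_grp = "*" matches any product group."""
    return c["city"] == "Van" and c["cust_grp"].lower() == "busi"

def c_grad(cg, cp, ratio=1.3):
    """Gradient constraint Cgrad(cg, cp) on avg_price."""
    return cg["avg_price"] / cp["avg_price"] >= ratio

assert c_sig(c2) and c_prb(c2)   # c2 qualifies as a probe cell
assert not c_grad(c1, c2)        # 2100 / 1800 < 1.3: no significant gradient
```

In the actual mining algorithm these predicates are pushed into the cube computation rather than evaluated pairwise after the fact, as the next slide outlines.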


Efficient Computing Cube-gradients

Compute probe cells using Csig and Cprb
  The set of probe cells P is often very small

Use the probe cells P and the constraints to find gradients
  Push selections deeply
  Set-oriented processing for probe cells
  Iceberg growing from low to high dimensionalities
  Dynamic pruning of probe cells during growth
  Incorporate an efficient iceberg cubing method

