Page 1: Clustering 2: Hierarchical clustering

Clustering 2: Hierarchical clustering

Rebecca C. SteortsPredictive Modeling: STA 521

September 2015

Optional reading: ISL 10.3, ESL 14.3

Page 2: Clustering 2: Hierarchical clustering

From K-means to hierarchical clustering

Recall two properties of K-means (K-medoids) clustering:

1. It fits exactly K clusters (as specified)

2. Final clustering assignment depends on the chosen initial cluster centers

- Assume pairwise dissimilarities d_ij between data points.

- Hierarchical clustering produces a consistent result, without the need to choose initial starting positions (number of clusters).

Catch: choose a way to measure the dissimilarity between groups, called the linkage

- Given the linkage, hierarchical clustering produces a sequence of clustering assignments.

- At one end, all points are in their own cluster; at the other end, all points are in one cluster

Page 3: Clustering 2: Hierarchical clustering

Agglomerative vs divisive

Two types of hierarchical clustering algorithms

Agglomerative (i.e., bottom-up):

- Start with all points in their own group

- Until there is only one cluster, repeatedly: merge the two groups that have the smallest dissimilarity

Divisive (i.e., top-down):

- Start with all points in one cluster

- Until all points are in their own cluster, repeatedly: split the group into two, resulting in the biggest dissimilarity

Agglomerative strategies are simpler, so we'll focus on them. Divisive methods are still important, but we won't be able to cover them in lecture

Page 4: Clustering 2: Hierarchical clustering

Simple example

Given these data points, an agglomerative algorithm might decide on a clustering sequence as follows:

[Figure: scatter plot of seven data points, labeled 1 through 7, with axes Dimension 1 and Dimension 2]

Step 1: {1}, {2}, {3}, {4}, {5}, {6}, {7};
Step 2: {1}, {2, 3}, {4}, {5}, {6}, {7};
Step 3: {1, 7}, {2, 3}, {4}, {5}, {6};
Step 4: {1, 7}, {2, 3}, {4, 5}, {6};
Step 5: {1, 7}, {2, 3, 6}, {4, 5};
Step 6: {1, 7}, {2, 3, 4, 5, 6};
Step 7: {1, 2, 3, 4, 5, 6, 7}.
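
As a rough illustration, this kind of merge sequence can be produced directly in R. The coordinates below are hypothetical stand-ins (not read off the figure), so the exact merge order may differ from the sequence above:

# Hypothetical coordinates for seven points (stand-ins, not the figure's values)
x = matrix(c(0.25, 0.80,
             0.45, 0.60,
             0.50, 0.62,
             0.60, 0.30,
             0.65, 0.25,
             0.55, 0.50,
             0.30, 0.85), ncol = 2, byrow = TRUE)
rownames(x) = 1:7
d = dist(x)                          # pairwise distances d_ij
tree = hclust(d, method = "single")  # agglomerative clustering, single linkage
tree$merge                           # which groups were merged at each step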

Page 5: Clustering 2: Hierarchical clustering

We can also represent the sequence of clustering assignments as a dendrogram:

[Figure: the seven data points and the corresponding dendrogram, with leaves ordered 1, 7, 4, 5, 6, 2, 3 and Height on the vertical axis]

Note that cutting the dendrogram horizontally partitions the data points into clusters

Page 6: Clustering 2: Hierarchical clustering

What’s a dendrogram?

Dendrogram: convenient graphic to display a hierarchical sequence of clustering assignments. This is simply a tree where:

- Each node represents a group

- Each leaf node is a singleton (i.e., a group containing a single data point)

- The root node is the group containing the whole data set

- Each internal node has two daughter nodes (children), representing the groups that were merged to form it

Remember: the choice of linkage determines how we measure dissimilarity between groups of points

If we fix the leaf nodes at height zero, then each internal node is drawn at a height proportional to the dissimilarity between its two daughter nodes
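
In R, the object returned by hclust stores exactly this structure; a small sketch, assuming a dissimilarity object d has already been built (e.g. with dist):

tree = hclust(d, method = "complete")
tree$merge    # (n-1) x 2 matrix: the two groups joined at each merge
tree$height   # dissimilarity at each merge, i.e. the height the node is drawn at
plot(tree)    # draw the dendrogram, with leaves at height zero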

Page 7: Clustering 2: Hierarchical clustering

Linkages

- Given points X_1, ..., X_n, and dissimilarities d_ij between each pair X_i and X_j.

- (Think of X_i ∈ R^p and d_ij = ‖X_i − X_j‖_2; note: this is distance, not squared distance)

- At any level, clustering assignments can be expressed by sets G = {i_1, i_2, ..., i_r}, giving the indices of points in this group.

- Let n_G be the size of G (here n_G = r).

- Bottom level: each group looks like G = {i}
- Top level: only one group, G = {1, ..., n}

Page 8: Clustering 2: Hierarchical clustering

Linkages

Linkage: a function d(G, H) that takes two groups G, H and returns a dissimilarity score between them

Agglomerative clustering, given the linkage:

- Start with all points in their own group

- Until there is only one cluster, repeatedly: merge the two groups G, H such that d(G, H) is smallest (a naive sketch of this loop follows below)
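
A naive sketch of this loop, assuming a full symmetric dissimilarity matrix D (e.g. as.matrix(dist(x))) and using single linkage as the group dissimilarity d(G, H). Real implementations such as hclust are far more efficient, but the idea is the same:

naive_agglomerative = function(D) {
  # D: n x n symmetric dissimilarity matrix
  groups = as.list(seq_len(nrow(D)))   # start with every point in its own group
  merges = list()
  while (length(groups) > 1) {
    best = list(score = Inf, g = NA, h = NA)
    for (g in seq_along(groups)) {
      for (h in seq_along(groups)) {
        if (g < h) {
          # single linkage: smallest dissimilarity across the two groups
          score = min(D[groups[[g]], groups[[h]]])
          if (score < best$score) best = list(score = score, g = g, h = h)
        }
      }
    }
    merges[[length(merges) + 1]] = list(height = best$score,
                                        members = c(groups[[best$g]], groups[[best$h]]))
    groups[[best$g]] = c(groups[[best$g]], groups[[best$h]])
    groups[[best$h]] = NULL            # drop the absorbed group (note best$g < best$h)
  }
  merges                               # the sequence of merges and their heights
}
# usage: naive_agglomerative(as.matrix(dist(x)))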

Page 9: Clustering 2: Hierarchical clustering

Single linkage

In single linkage (i.e., nearest-neighbor linkage), the dissimilarity between G, H is the smallest dissimilarity between two points in opposite groups:

d_single(G, H) = min_{i ∈ G, j ∈ H} d_ij

Example (dissimilarities d_ij are distances, groups are marked by colors): the single linkage score d_single(G, H) is the distance of the closest pair

[Figure: scatter plot of two groups of points, marked by colors]
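
For a small illustration, suppose x is a hypothetical data matrix (such as the seven points sketched earlier); then the single linkage score between two index sets G and H is just the minimum over the corresponding block of the distance matrix (a sketch, not how hclust computes it internally):

D = as.matrix(dist(x))      # full n x n distance matrix
G = c(1, 7); H = c(2, 3)    # hypothetical group index sets
min(D[G, H])                # d_single(G, H): distance of the closest cross-group pair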

Page 10: Clustering 2: Hierarchical clustering

Single linkage example

Here n = 60, X_i ∈ R^2, d_ij = ‖X_i − X_j‖_2. Cutting the tree at h = 0.9 gives the clustering assignments marked by colors

[Figure: the data points colored by cluster, alongside the single linkage dendrogram with Height on the vertical axis]

Cut interpretation: for each point X_i, there is another point X_j in its cluster with d_ij ≤ 0.9
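
In R this kind of cut is done with cutree; a sketch, assuming a data matrix x like the one in this example:

tree.single = hclust(dist(x), method = "single")
labels = cutree(tree.single, h = 0.9)   # cut the dendrogram at height 0.9
table(labels)                           # number of points in each resulting cluster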

Page 11: Clustering 2: Hierarchical clustering

Complete linkage

In complete linkage (i.e., furthest-neighbor linkage), the dissimilarity between G, H is the largest dissimilarity between two points in opposite groups:

d_complete(G, H) = max_{i ∈ G, j ∈ H} d_ij

Example (dissimilarities d_ij are distances, groups are marked by colors): the complete linkage score d_complete(G, H) is the distance of the furthest pair

[Figure: scatter plot of two groups of points, marked by colors]
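
Analogously, for the same hypothetical D, G, and H as before, the complete linkage score is the maximum of that block:

max(D[G, H])   # d_complete(G, H): distance of the furthest cross-group pair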

Page 12: Clustering 2: Hierarchical clustering

Complete linkage example

Same data as before. Cutting the tree at h = 5 gives the clustering assignments marked by colors

[Figure: the data points colored by cluster, alongside the complete linkage dendrogram with Height on the vertical axis]

Cut interpretation: for each point X_i, every other point X_j in its cluster satisfies d_ij ≤ 5

Page 13: Clustering 2: Hierarchical clustering

Average linkage

In average linkage, the dissimilarity between G, H is the average dissimilarity over all pairs of points in opposite groups:

d_average(G, H) = (1 / (n_G · n_H)) ∑_{i ∈ G, j ∈ H} d_ij

Example (dissimilarities d_ij are distances, groups are marked by colors): the average linkage score d_average(G, H) is the average distance across all pairs

(The plot here only shows distances between the blue points and one red point)

[Figure: scatter plot of two groups of points, marked by colors]
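
And, continuing the same sketch, the average linkage score is the mean over all n_G · n_H cross-group pairs:

mean(D[G, H])   # d_average(G, H): average distance over all pairs across groups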

Page 14: Clustering 2: Hierarchical clustering

Average linkage example

Same data as before. Cutting the tree at h = 1.5 gives the clustering assignments marked by the colors

[Figure: the data points colored by cluster, alongside the average linkage dendrogram with Height on the vertical axis]

Cut interpretation: there really isn't a good one!

Page 15: Clustering 2: Hierarchical clustering

Common properties

Single, complete, average linkage share the following properties:

- These linkages operate on dissimilarities d_ij, and don't need the points X_1, ..., X_n to be in Euclidean space

- Running agglomerative clustering with any of these linkages produces a dendrogram with no inversions

Second property, in words: dissimilarity scores between merged clusters only increase as we run the algorithm

This means that we can draw a proper dendrogram, where the height of a parent is always higher than the height of its daughters
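
This is easy to check on an hclust fit: the merge heights come back in nondecreasing order for these linkages (a small sketch, assuming a dissimilarity object d):

tree = hclust(d, method = "average")
all(diff(tree$height) >= 0)   # TRUE: heights only increase, so no inversions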

Page 16: Clustering 2: Hierarchical clustering

Example of a dendrogram with no inversions

[Figure: dendrogram with leaves ordered 1, 7, 4, 5, 6, 2, 3; the successive merge heights 0.11, 0.26, 0.37, 0.49, 0.64, 0.72 only increase, so there are no inversions]

Page 17: Clustering 2: Hierarchical clustering

Shortcomings of single, complete linkage

Single and complete linkage can have some practical problems:

- Single linkage suffers from chaining. In order to merge two groups, we only need one pair of points to be close, irrespective of all others. Therefore clusters can be too spread out, and not compact enough

- Complete linkage avoids chaining, but suffers from crowding. Because its score is based on the worst-case dissimilarity between pairs, a point can be closer to points in other clusters than to points in its own cluster. Clusters are compact, but not far enough apart

Average linkage tries to strike a balance. It uses average pairwise dissimilarity, so clusters tend to be relatively compact and relatively far apart

Page 18: Clustering 2: Hierarchical clustering

Example of chaining and crowding

[Figure: the same data clustered with single, complete, and average linkage, illustrating chaining (single) and crowding (complete)]

Page 19: Clustering 2: Hierarchical clustering

Shortcomings of average linkage

Average linkage isn't perfect; it has its own problems:

- It is not clear what properties the resulting clusters have when we cut an average linkage tree at a given height h. Single and complete linkage trees each had simple interpretations

- Results of average linkage clustering can change with a monotone increasing transformation of the dissimilarities d_ij. I.e., if h is such that h(x) ≤ h(y) whenever x ≤ y, and we used dissimilarities h(d_ij) instead of d_ij, then we could get different answers

Depending on the context, the second problem may be important or unimportant. E.g., it could be very clear what dissimilarities should be used, or not

Note: results of single and complete linkage clustering are unchanged under monotone transformations (Homework 1)
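
A quick way to see this in R, assuming some data matrix x: squaring the distances is a monotone increasing transformation, and it can change the average linkage answer but not the single (or complete) linkage one:

d = dist(x)
cut_avg  = cutree(hclust(d,   method = "average"), k = 3)
cut_avg2 = cutree(hclust(d^2, method = "average"), k = 3)
identical(cut_avg, cut_avg2)   # may be FALSE: average linkage can change
cut_sgl  = cutree(hclust(d,   method = "single"),  k = 3)
cut_sgl2 = cutree(hclust(d^2, method = "single"),  k = 3)
identical(cut_sgl, cut_sgl2)   # should be TRUE: single linkage is unchanged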

Page 20: Clustering 2: Hierarchical clustering

Example of a change with a monotone increasing transformation

[Figure: average linkage clusterings of the same data using distance (left panel) versus squared distance (right panel) as the dissimilarity; the cluster assignments differ]

Page 21: Clustering 2: Hierarchical clustering

Hierarchical agglomerative clustering in R

The function hclust (in the stats package, loaded by default in R) performs hierarchical agglomerative clustering using single, complete, or average linkage

E.g.,

d = dist(x)                              # pairwise dissimilarities (Euclidean distances)
tree.avg = hclust(d, method="average")   # agglomerative clustering with average linkage
plot(tree.avg)                           # draw the dendrogram
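
To turn the fitted tree into cluster labels, cutree can cut either into a fixed number of clusters or at a given height, e.g.:

cutree(tree.avg, k = 3)     # exactly 3 clusters
cutree(tree.avg, h = 1.5)   # clusters obtained by cutting the dendrogram at height 1.5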

Page 22: Clustering 2: Hierarchical clustering

Recap: hierarchical agglomerative clustering

Hierarchical agglomerative clustering: start with all data points in their own groups, and repeatedly merge groups, based on a linkage function. Stop when all points are in one group (this is agglomerative; there is also divisive)

This produces a sequence of clustering assignments, visualized by a dendrogram (i.e., a tree). Each node in the tree represents a group, and its height is proportional to the dissimilarity of its daughters

The three most common linkage functions are single, complete, and average linkage. Single linkage measures the least dissimilar pair between groups, complete linkage measures the most dissimilar pair, and average linkage measures the average dissimilarity over all pairs

Each linkage has its strengths and weaknesses

Page 23: Clustering 2: Hierarchical clustering

Next time: more hierarchical clustering, and choosing the number of clusters

Choosing the number of clusters: an open problem in statistics

[Figure: the same data clustered into K = 3, K = 4, and K = 5 clusters, alongside a plot of within-cluster variation against K]
