
Page 1: CSE 5243 INTRO. TO DATA MINING

CSE 5243 INTRO. TO DATA MINING

Slides adapted from UIUC CS412, Fall 2017, by Prof. Jiawei Han

Cluster Analysis: Basic Concepts and Methods

Huan Sun, CSE@The Ohio State University

Page 2: CSE 5243 INTRO. TO DATA MINING

2

Chapter 10. Cluster Analysis: Basic Concepts and Methods

Cluster Analysis: An Introduction

Partitioning Methods

Hierarchical Methods

Density- and Grid-Based Methods

Evaluation of Clustering

Summary

Page 3: CSE 5243 INTRO. TO DATA MINING

3

K-means Clustering

Partitional clustering approach

Each cluster is associated with a centroid (center point)

Each point is assigned to the cluster with the closest centroid

Number of clusters, K, must be specified

The basic algorithm is very simple:

Initial centroids are often chosen randomly

The centroid is typically the mean of the points in the cluster

‘Closest’ is measured by Euclidean distance, cosine similarity, etc.

Page 4: CSE 5243 INTRO. TO DATA MINING

4

Initial centroids are often chosen randomly.

Clusters produced vary from one run to another.

The centroid is (typically) the mean of the points in the cluster.

‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc.

K-means will converge for common similarity measures mentioned above.

Most of the convergence happens in the first few iterations.

Often the stopping condition is changed to ‘Until relatively few points change clusters’

K-means Clustering – Details

Page 5: CSE 5243 INTRO. TO DATA MINING

5

Example: K-Means Clustering

The original data points and K = 2 randomly selected centroids

Select K points as initial centroids

Repeat

• Form K clusters by assigning each point to its closest centroid

• Re-compute the centroids (i.e., mean point) of each cluster

Until convergence criterion is satisfied

[Figure: execution of the K-Means clustering algorithm — assign points to clusters, recompute cluster centers, redo point assignment, and continue to the next iteration.]
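A minimal sketch of this loop in Python/NumPy (illustrative names; assumes Euclidean distance and random initialization, not the course's own code):

    import numpy as np

    def kmeans(X, K, max_iter=100, seed=0):
        # X: (n, d) data matrix; K: number of clusters
        rng = np.random.default_rng(seed)
        # Select K points as initial centroids (chosen randomly here)
        centroids = X[rng.choice(len(X), size=K, replace=False)]
        for _ in range(max_iter):
            # Assign each point to its closest centroid (Euclidean distance)
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Re-compute each centroid as the mean of its assigned points
            new_centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                      else centroids[k] for k in range(K)])
            # Stop when no centroid moves (a simple convergence criterion)
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return labels, centroids

For example, kmeans(np.random.rand(100, 2), K=2) returns a label for each point and the two final centroids.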

Page 6: CSE 5243 INTRO. TO DATA MINING

6

Evaluating K-means Clusters

Most common measure is Sum of Squared Error (SSE)

For each point, the error is the distance to the nearest cluster centroid

To get SSE, we square these errors and sum them.

x_i is a data point in cluster C_k and c_k is the representative point for cluster C_k

◼ One can show that c_k corresponds to the center (mean) of the cluster

SSE = \sum_{k=1}^{K} \sum_{x_i \in C_k} \| x_i - c_k \|^2    (using Euclidean distance)

=> attempt to minimize SSE
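A short sketch of computing SSE for a given assignment, assuming Euclidean distance (illustrative names):

    import numpy as np

    def sse(X, labels, centroids):
        # Sum of squared Euclidean distances from each point to its cluster centroid
        return sum(np.sum((X[labels == k] - c) ** 2)
                   for k, c in enumerate(centroids))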

Page 7: CSE 5243 INTRO. TO DATA MINING

7

Derivation of K-means to Minimize SSE

Example: one-dimensional data. Step 4: how to update the centroid
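A sketch of the update step this slide refers to, for one-dimensional data with squared Euclidean distance (illustrative, not the original worked example): treating SSE as a function of the centroid c_k and setting the derivative to zero,

    \frac{\partial}{\partial c_k} \sum_{x_i \in C_k} (x_i - c_k)^2
      = -2 \sum_{x_i \in C_k} (x_i - c_k) = 0
      \;\Rightarrow\; c_k = \frac{1}{|C_k|} \sum_{x_i \in C_k} x_i

so the SSE-minimizing centroid is the mean of the points in the cluster.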

Page 8: CSE 5243 INTRO. TO DATA MINING

8

Other distance measures

https://www-users.cs.umn.edu/~kumar001/dmbook/ch7_clustering.pdf

Page 9: CSE 5243 INTRO. TO DATA MINING

9

Derivation of K-means to Minimize SSE

Example: What if we choose Manhattan distance? Step 4: how to update the centroid
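A sketch of the corresponding step under Manhattan (L1) distance (illustrative): the per-cluster objective is \sum_{x_i \in C_k} | x_i - c_k |, whose subgradient with respect to c_k vanishes when c_k has as many cluster points above it as below it; hence the per-dimension median of the points in C_k minimizes the L1 objective, which is exactly the update used by K-Medians.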

Page 10: CSE 5243 INTRO. TO DATA MINING

10

Partitioning Algorithms: From Optimization Angle

Partitioning method: Discovering the groupings in the data by optimizing a specific

objective function and iteratively improving the quality of partitions

K-partitioning method: Partitioning a dataset D of n objects into a set of K clusters so

that an objective function is optimized (e.g., the sum of squared distances is

minimized, where ck is the "center" of cluster Ck)

A typical objective function: Sum of Squared Errors (SSE)

Problem definition: Given K, find a partition of K clusters that optimizes the chosen

partitioning criterion

Global optimal: Needs to exhaustively enumerate all partitions

Heuristic methods (i.e., greedy algorithms): K-Means, K-Medians, K-Medoids, etc.

SSE = \sum_{k=1}^{K} \sum_{x_i \in C_k} \| x_i - c_k \|^2

Page 11: CSE 5243 INTRO. TO DATA MINING

11

Importance of Choosing Initial Centroids (1)

[Figure: six scatter plots (Iteration 1 through Iteration 6) of the same data, showing K-means with a good choice of initial centroids converging to the optimal clustering.]

Page 12: CSE 5243 INTRO. TO DATA MINING

12

Importance of Choosing Initial Centroids (2)

[Figure: five scatter plots (Iteration 1 through Iteration 5) of the same data, showing K-means with a poor choice of initial centroids converging to a sub-optimal clustering.]

Page 13: CSE 5243 INTRO. TO DATA MINING

13

Solutions to Initial Centroids Problem

Multiple runs

Helps, but probability is not on your side

Sample to determine initial centroids

Select more than k initial centroids and then select among these initial centroids

Select the most widely separated ones

Page 14: CSE 5243 INTRO. TO DATA MINING

14

Pre-processing and Post-processing

Pre-processing

Normalize the data

Eliminate outliers

Post-processing

Eliminate small clusters that may represent outliers

Split ‘loose’ clusters, i.e., clusters with relatively high SSE

Merge clusters that are ‘close’ and that have relatively low SSE

Can use these steps during the clustering process

◼ ISODATA

Page 15: CSE 5243 INTRO. TO DATA MINING

15

K-Means++

Original proposal (MacQueen’67): Select K seeds randomly

Need to run the algorithm multiple times using different seeds

❑ There are many methods proposed for better initialization of k seeds

❑ K-Means++ (Arthur & Vassilvitskii’07):

❑ The first centroid is selected at random

❑ Each subsequent centroid is chosen with probability proportional to its squared distance from the nearest already-selected centroid, so points far from the current centroids are more likely to be picked (selection is based on a weighted probability score)

❑ The selection continues until K centroids are obtained
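A sketch of K-Means++ seeding under the D² weighting described above (illustrative names, not a reference implementation):

    import numpy as np

    def kmeans_pp_init(X, K, seed=0):
        # K-Means++ seeding (Arthur & Vassilvitskii'07), illustrative sketch
        rng = np.random.default_rng(seed)
        # The first centroid is selected uniformly at random
        centroids = [X[rng.integers(len(X))]]
        for _ in range(1, K):
            # Squared distance from each point to its nearest chosen centroid
            d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
            # The next centroid is sampled with probability proportional to d2
            probs = d2 / d2.sum()
            centroids.append(X[rng.choice(len(X), p=probs)])
        return np.array(centroids)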

Page 16: CSE 5243 INTRO. TO DATA MINING

16

K-Means++

Page 17: CSE 5243 INTRO. TO DATA MINING

17

Handling Outliers: From K-Means to K-Medoids

The K-Means algorithm is sensitive to outliers, since an object with an extremely large value may substantially distort the distribution of the data

K-Medoids: Instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in the cluster

The K-Medoids clustering algorithm:

◼ Select K points as the initial representative objects (i.e., as initial K medoids)

◼ Repeat

◼ Assign each point to the cluster with the closest medoid

◼ Randomly select a non-representative object oi

◼ Compute the total cost S of swapping the medoid m with oi

◼ If S < 0, then swap m with oi to form the new set of medoids

◼Until convergence criterion is satisfied
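A sketch of this randomized swap scheme (illustrative names; uses Euclidean distance and a fixed number of random swap trials):

    import numpy as np

    def total_cost(X, medoid_idx):
        # Sum of distances from each point to its nearest medoid
        d = np.linalg.norm(X[:, None, :] - X[medoid_idx][None, :, :], axis=2)
        return d.min(axis=1).sum()

    def k_medoids(X, K, n_trials=200, seed=0):
        # Randomized swap-based K-Medoids, a sketch of the scheme on the slide
        rng = np.random.default_rng(seed)
        medoids = list(rng.choice(len(X), size=K, replace=False))
        cost = total_cost(X, medoids)
        for _ in range(n_trials):
            # Randomly pick a medoid position and a non-medoid object o_i, try swapping
            m_pos = rng.integers(K)
            non_medoids = [i for i in range(len(X)) if i not in medoids]
            o_i = int(rng.choice(non_medoids))
            candidate = medoids.copy()
            candidate[m_pos] = o_i
            new_cost = total_cost(X, candidate)
            if new_cost < cost:           # swap only if it lowers the total cost
                medoids, cost = candidate, new_cost
        # Final assignment of points to the nearest medoid
        d = np.linalg.norm(X[:, None, :] - X[medoids][None, :, :], axis=2)
        return np.array(medoids), d.argmin(axis=1)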

Page 18: CSE 5243 INTRO. TO DATA MINING

18

Limitations of K-means

K-means has problems when clusters are of differing

Sizes

Densities

Non-globular shapes

K-means has problems when the data contains outliers.

Page 19: CSE 5243 INTRO. TO DATA MINING

19

Limitations of K-means: Differing Size

https://www-users.cs.umn.edu/~kumar001/dmbook/ch7_clustering.pdf

Page 20: CSE 5243 INTRO. TO DATA MINING

20

Limitations of K-means: Differing Density

https://www-users.cs.umn.edu/~kumar001/dmbook/ch7_clustering.pdf

Page 21: CSE 5243 INTRO. TO DATA MINING

21

Limitations of K-means: Non-globular Clusters

https://www-users.cs.umn.edu/~kumar001/dmbook/ch7_clustering.pdf

Page 22: CSE 5243 INTRO. TO DATA MINING

22

Overcoming K-means Limitations: Breaking Clusters into Subclusters

[Figure: original points vs. the resulting K-means clusters.]

Page 23: CSE 5243 INTRO. TO DATA MINING

23

K-Medians: Handling Outliers by Computing Medians

Medians are less sensitive to outliers than means

Think of the median salary vs. mean salary of a large firm when adding a few top

executives!

Page 24: CSE 5243 INTRO. TO DATA MINING

24

K-Medians: Handling Outliers by Computing Medians

Medians are less sensitive to outliers than means

Think of the median salary vs. mean salary of a large firm when adding a few top

executives!

K-Medians: Instead of taking the mean value of the object in a cluster as a reference

point, medians are used (corresponding to L1-norm as the distance measure)

The criterion function for the K-Medians algorithm:

The K-Medians clustering algorithm:

◼ Select K points as the initial representative objects (i.e., as initial K medians)

◼ Repeat

◼ Assign every point to its nearest median

◼ Re-compute the median using the median of each individual feature

◼Until convergence criterion is satisfied

S = \sum_{k=1}^{K} \sum_{x_i \in C_k} \sum_{j} | x_{ij} - med_{kj} |
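A sketch of one K-Medians iteration, assuming Manhattan (L1) distance and feature-wise medians (illustrative names):

    import numpy as np

    def k_medians_step(X, medians):
        # One iteration of K-Medians: assign points by L1 (Manhattan) distance,
        # then re-compute each cluster's feature-wise median.
        # X: (n, d) data matrix; medians: (K, d) current medians
        d = np.abs(X[:, None, :] - medians[None, :, :]).sum(axis=2)   # L1 distances
        labels = d.argmin(axis=1)
        new_medians = np.array([np.median(X[labels == k], axis=0) if np.any(labels == k)
                                else medians[k] for k in range(len(medians))])
        return labels, new_medians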

Page 25: CSE 5243 INTRO. TO DATA MINING

25

K-Medoids: PAM (Partitioning around Medoids)

In general, pick actual data points as the "cluster centers"

Page 26: CSE 5243 INTRO. TO DATA MINING

26

K-Medoids: PAM (Partitioning around Medoids)

[Figure: PAM on a small 2-D dataset with K = 2 — arbitrarily choose K objects as initial medoids; assign each remaining object to the nearest medoid; randomly select a non-medoid object O_random; compute the total cost of swapping; swap if quality is improved.]

Select initial K medoids randomly

Repeat

Object re-assignment: assign each remaining object to the nearest medoid

Randomly select a non-medoid object, O_random

Swap medoid m with O_random if it improves the clustering quality

Until convergence criterion is satisfied

Page 27: CSE 5243 INTRO. TO DATA MINING

27

K-Medoids: PAM (Partitioning around Medoids)

Which one is more robust in the presence of noise and outliers?

A. K-Means

B. K-Medoids

Page 28: CSE 5243 INTRO. TO DATA MINING

28

K-Modes: Clustering Categorical Data

K-Means cannot handle non-numerical (categorical) data

Mapping categorical values to 1/0 cannot generate quality clusters for high-dimensional

data

K-Modes: An extension to K-Means by replacing means of clusters with modes

Dissimilarity measure between object X and the center Z_l of a cluster:

Φ(x_j, z_j) = 1 − n_j^r / n_l  when x_j = z_j ;  1  when x_j ≠ z_j

◼ where z_j is the categorical value of attribute j in Z_l, n_l is the number of objects in cluster l, and n_j^r is the number of objects whose attribute-j value is r

This dissimilarity measure (distance function) is frequency-based

Algorithm is still based on iterative object cluster assignment and centroid update

A fuzzy K-Modes method is proposed to calculate a fuzzy cluster membership value for each object to each cluster

A mixture of categorical and numerical data: Using a K-Prototype method
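A sketch of the frequency-based dissimilarity above (illustrative names and data layout, not the original formulation's code):

    def kmodes_dissimilarity(x, z, cluster_objects):
        # Frequency-based dissimilarity between object x and cluster mode z.
        # x, z: sequences of categorical values; cluster_objects: objects in cluster l.
        n_l = len(cluster_objects)
        total = 0.0
        for j, (xj, zj) in enumerate(zip(x, z)):
            if xj != zj:
                total += 1.0
            else:
                # n_j_r: number of objects in the cluster whose attribute j equals zj
                n_j_r = sum(1 for obj in cluster_objects if obj[j] == zj)
                total += 1.0 - n_j_r / n_l
        return total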

Page 29: CSE 5243 INTRO. TO DATA MINING

29

Chapter 10. Cluster Analysis: Basic Concepts and Methods

Cluster Analysis: An Introduction

Partitioning Methods

Hierarchical Methods

Density- and Grid-Based Methods

Evaluation of Clustering

Summary

Page 30: CSE 5243 INTRO. TO DATA MINING

30

Hierarchical Clustering

Produces a set of nested clusters organized as a hierarchical tree

Can be visualized as a dendrogram

A tree-like diagram that records the sequences of merges or splits

[Figure: a hierarchical clustering of six points (1–6) shown as nested clusters and as a dendrogram with merge heights between 0 and 0.2.]

Page 31: CSE 5243 INTRO. TO DATA MINING

31

Dendrogram: Shows How Clusters Are Merged/Split

Dendrogram: Decompose a set of data objects into a tree of clusters by multi-level

nested partitioning

A clustering of the data objects is obtained by cutting the dendrogram at the desired

level, then each connected component forms a cluster

Hierarchical clustering generates a dendrogram (a hierarchy of clusters)
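For illustration, a dendrogram can be built and cut with SciPy (a sketch; the data, linkage method, and cut height are arbitrary):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

    X = np.random.rand(20, 2)                    # toy data
    Z = linkage(X, method='single')              # agglomerative clustering (single link)
    labels = fcluster(Z, t=0.15, criterion='distance')  # cut the dendrogram at height 0.15
    # dendrogram(Z) draws the tree itself (when matplotlib is available)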

Page 32: CSE 5243 INTRO. TO DATA MINING

32

Strengths of Hierarchical Clustering

Do not have to assume any particular number of clusters

Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level

They may correspond to meaningful taxonomies

Example in biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)

Page 33: CSE 5243 INTRO. TO DATA MINING

33

Hierarchical Clustering

Two main types of hierarchical clustering

Agglomerative:

Divisive:

Page 34: CSE 5243 INTRO. TO DATA MINING

34

Hierarchical Clustering

Two main types of hierarchical clustering

Agglomerative:

◼ Start with the points as individual clusters

◼ At each step, merge the closest pair of clusters

until only one cluster (or k clusters) left

◼ Build a bottom-up hierarchy of clusters

Divisive:

[Figure: agglomerative clustering of five objects a–e over steps 0–4 — a and b merge into {a, b}, d and e merge into {d, e}, then {c, d, e}, and finally {a, b, c, d, e}.]

Page 35: CSE 5243 INTRO. TO DATA MINING

35

Hierarchical Clustering

Two main types of hierarchical clustering

Agglomerative:

◼ Start with the points as individual clusters

◼ At each step, merge the closest pair of clusters

until only one cluster (or k clusters) left

◼ Build a bottom-up hierarchy of clusters

Divisive:

◼ Start with one, all-inclusive cluster

◼ At each step, split a cluster until each cluster

contains a point (or there are k clusters)

◼ Generate a top-down hierarchy of clusters

[Figure: the same five objects a–e, showing the agglomerative direction (step 0 to step 4, bottom-up merges) and the divisive direction (step 4 to step 0, top-down splits).]

Page 36: CSE 5243 INTRO. TO DATA MINING

36

Hierarchical Clustering

Two main types of hierarchical clustering

Agglomerative:

◼ Start with the points as individual clusters

◼ At each step, merge the closest pair of clusters until only one cluster (or k clusters) left

Divisive:

◼ Start with one, all-inclusive cluster

◼ At each step, split a cluster until each cluster contains a point (or there are k clusters)

Traditional hierarchical algorithms use a similarity or distance matrix

Merge or split one cluster at a time

Page 37: CSE 5243 INTRO. TO DATA MINING

37

Agglomerative Clustering Algorithm

More popular hierarchical clustering technique

Basic algorithm is straightforward

1. Compute the proximity matrix

2. Let each data point be a cluster

3. Repeat

4. Merge the two closest clusters

5. Update the proximity matrix

6. Until only a single cluster remains

[Figure: agglomerative merging of objects a–e over steps 0–4.]

Page 38: CSE 5243 INTRO. TO DATA MINING

38

Agglomerative Clustering Algorithm

More popular hierarchical clustering technique

Basic algorithm is straightforward

1. Compute the proximity matrix

2. Let each data point be a cluster

3. Repeat

4. Merge the two closest clusters

5. Update the proximity matrix

6. Until only a single cluster remains

Key operation is the computation of the proximity of two clusters

Different approaches to defining the distance/similarity between clusters distinguish the different algorithms
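A sketch of this basic algorithm on a precomputed distance matrix, with the linkage function left pluggable (illustrative, not optimized):

    import numpy as np

    def agglomerative(D, linkage=min):
        # D: symmetric (n, n) distance matrix.
        # linkage: min (single link), max (complete link), or e.g.
        #          lambda g: float(np.mean(list(g))) for group average.
        clusters = [[i] for i in range(len(D))]      # each point starts as its own cluster
        merges = []
        while len(clusters) > 1:                     # repeat until one cluster remains
            # find the two closest clusters under the chosen linkage
            best = None
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    d = linkage(D[i][j] for i in clusters[a] for j in clusters[b])
                    if best is None or d < best[0]:
                        best = (d, a, b)
            d, a, b = best
            merges.append((clusters[a], clusters[b], d))
            # "update the proximity matrix": here we simply merge the member lists
            clusters[a] = clusters[a] + clusters[b]
            del clusters[b]
        return merges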

Page 39: CSE 5243 INTRO. TO DATA MINING

39

Starting Situation

Start with clusters of individual points and a proximity matrix

[Figure: 12 data points p1, …, p12, each starting as its own cluster, together with the point-to-point proximity matrix.]

Page 40: CSE 5243 INTRO. TO DATA MINING

40

Intermediate Situation

After some merging steps, we have some clusters

[Figure: after some merging steps, five clusters C1–C5 remain, with the corresponding C1–C5 proximity matrix.]

Page 41: CSE 5243 INTRO. TO DATA MINING

41

Intermediate Situation

We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.

[Figure: the two closest clusters, C2 and C5, are about to be merged; the C1–C5 proximity matrix must then be updated.]

Page 42: CSE 5243 INTRO. TO DATA MINING

42

How do we update the proximity matrix?

After Merging

[Figure: after merging, the clusters are C1, C3, C4, and C2 ∪ C5; the rows and columns of the proximity matrix for C2 ∪ C5 are marked “?” and must be recomputed.]

Page 43: CSE 5243 INTRO. TO DATA MINING

43

How to Define Inter-Cluster Similarity

MIN

MAX

Group Average

Distance Between Centroids

[Figure: two clusters of points p1–p5 with “Similarity?” marked between them, alongside the proximity matrix.]

Pages 44–47 repeat the inter-cluster similarity figure and the same list of definitions (MIN, MAX, Group Average, Distance Between Centroids).

Page 48: CSE 5243 INTRO. TO DATA MINING

48

Cluster Similarity: MIN or Single Link

Similarity of two clusters is based on the two most similar (closest)

points in the different clusters

Determined by one pair of points, i.e., by one link in the proximity graph.

Page 49: CSE 5243 INTRO. TO DATA MINING

49

Cluster Similarity: MIN or Single Link

Similarity of two clusters is based on the two most similar (closest)

points in the different clusters

Determined by one pair of points, i.e., by one link in the proximity graph.

I1 I2 I3 I4 I5

I1 1.00 0.90 0.10 0.65 0.20

I2 0.90 1.00 0.70 0.60 0.50

I3 0.10 0.70 1.00 0.40 0.30

I4 0.65 0.60 0.40 1.00 0.80

I5 0.20 0.50 0.30 0.80 1.00

Page 50: CSE 5243 INTRO. TO DATA MINING

50

Cluster Similarity: MIN or Single Link

Similarity of two clusters is based on the two most similar (closest)

points in the different clusters

Determined by one pair of points, i.e., by one link in the proximity graph.

I1 I2 I3 I4 I5

I1 1.00 0.90 0.10 0.65 0.20

I2 0.90 1.00 0.70 0.60 0.50

I3 0.10 0.70 1.00 0.40 0.30

I4 0.65 0.60 0.40 1.00 0.80

I5 0.20 0.50 0.30 0.80 1.00

Page 51: CSE 5243 INTRO. TO DATA MINING

51

Cluster Similarity: MIN or Single Link

Similarity of two clusters is based on the two most similar (closest)

points in the different clusters

Determined by one pair of points, i.e., by one link in the proximity graph.

I1 I2 I3 I4 I5

I1 1.00 0.90 0.10 0.65 0.20

I2 0.90 1.00 0.70 0.60 0.50

I3 0.10 0.70 1.00 0.40 0.30

I4 0.65 0.60 0.40 1.00 0.80

I5 0.20 0.50 0.30 0.80 1.00

Page 52: CSE 5243 INTRO. TO DATA MINING

52

Cluster Similarity: MIN or Single Link

Similarity of two clusters is based on the two most similar (closest)

points in the different clusters

Determined by one pair of points, i.e., by one link in the proximity graph.

I1 I2 I3 I4 I5

I1 1.00 0.90 0.10 0.65 0.20

I2 0.90 1.00 0.70 0.60 0.50

I3 0.10 0.70 1.00 0.40 0.30

I4 0.65 0.60 0.40 1.00 0.80

I5 0.20 0.50 0.30 0.80 1.00

Page 53: CSE 5243 INTRO. TO DATA MINING

53

Cluster Similarity: MIN or Single Link

Similarity of two clusters is based on the two most similar (closest)

points in the different clusters

Determined by one pair of points, i.e., by one link in the proximity graph.

{I1,I2} I3 I4 I5

{I1,I2} 1.00 0.70 0.65 0.50

I3 0.70 1.00 0.40 0.30

I4 0.65 0.40 1.00 0.80

I5 0.50 0.30 0.80 1.00

Update proximity matrix with new cluster {I1, I2}

Page 54: CSE 5243 INTRO. TO DATA MINING

54

Cluster Similarity: MIN or Single Link

Similarity of two clusters is based on the two most similar (closest)

points in the different clusters

Determined by one pair of points, i.e., by one link in the proximity graph.

{I1,I2} I3 I4 I5

{I1,I2} 1.00 0.70 0.65 0.50

I3 0.70 1.00 0.40 0.30

I4 0.65 0.40 1.00 0.80

I5 0.50 0.30 0.80 1.00

Update proximity matrix with new cluster {I1, I2}

Page 55: CSE 5243 INTRO. TO DATA MINING

55

Cluster Similarity: MIN or Single Link

Similarity of two clusters is based on the two most similar (closest)

points in the different clusters

Determined by one pair of points, i.e., by one link in the proximity graph.

{I1,I2} I3 {I4,I5}

{I1,I2} 1.00 0.70 0.65

I3 0.70 1.00 0.40

{I4,I5} 0.65 0.40 1.00

Update proximity matrix with new clusters {I1, I2} and {I4, I5}

Page 56: CSE 5243 INTRO. TO DATA MINING

56

Cluster Similarity: MIN or Single Link

Similarity of two clusters is based on the two most similar (closest)

points in the different clusters

Determined by one pair of points, i.e., by one link in the proximity graph.


{I1,I2, I3} {I4,I5}

{I1,I2, I3} 1.00 0.65

{I4,I5} 0.65 1.00

Only two clusters are left.
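A small sketch reproducing this single-link example on the similarity matrix above (with similarities, the closest clusters are those with the largest maximum pairwise similarity):

    import numpy as np

    S = np.array([[1.00, 0.90, 0.10, 0.65, 0.20],
                  [0.90, 1.00, 0.70, 0.60, 0.50],
                  [0.10, 0.70, 1.00, 0.40, 0.30],
                  [0.65, 0.60, 0.40, 1.00, 0.80],
                  [0.20, 0.50, 0.30, 0.80, 1.00]])

    clusters = [[0], [1], [2], [3], [4]]          # I1..I5 (0-indexed)
    while len(clusters) > 2:
        # single link on similarities: cluster similarity = max pairwise similarity
        best = max(((max(S[i, j] for i in A for j in B), a, b)
                    for a, A in enumerate(clusters)
                    for b, B in enumerate(clusters) if b > a), key=lambda t: t[0])
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]

    print(clusters)   # [[0, 1, 2], [3, 4]]  i.e., {I1, I2, I3} and {I4, I5}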

Page 57: CSE 5243 INTRO. TO DATA MINING

57

Hierarchical Clustering: MIN

Nested Clusters Dendrogram

[Figure: nested single-link clusters of points 1–6 and the corresponding dendrogram (leaf order 3, 6, 2, 5, 4, 1; merge heights up to about 0.2).]

Page 58: CSE 5243 INTRO. TO DATA MINING

58

Strength of MIN

Original Points Two Clusters

• Can handle non-elliptical shapes

Page 59: CSE 5243 INTRO. TO DATA MINING

59

Limitations of MIN

Original Points Two Clusters

• Sensitive to noise and outliers

Page 60: CSE 5243 INTRO. TO DATA MINING

60

Cluster Similarity: MAX or Complete Linkage

Similarity of two clusters is based on the two least similar (most distant)

points in the different clusters

Determined by all pairs of points in the two clusters

I1 I2 I3 I4 I5

I1 1.00 0.90 0.10 0.65 0.20

I2 0.90 1.00 0.70 0.60 0.50

I3 0.10 0.70 1.00 0.40 0.30

I4 0.65 0.60 0.40 1.00 0.80

I5 0.20 0.50 0.30 0.80 1.00

Page 61: CSE 5243 INTRO. TO DATA MINING

61

Cluster Similarity: MAX or Complete Linkage

Similarity of two clusters is based on the two least similar (most distant)

points in the different clusters

Determined by all pairs of points in the two clusters

{I1,I2} I3 I4 I5

{I1,I2} 1.00 0.10 0.60 0.20

I3 0.10 1.00 0.40 0.30

I4 0.60 0.40 1.00 0.80

I5 0.20 0.30 0.80 1.00

Page 62: CSE 5243 INTRO. TO DATA MINING

62

Cluster Similarity: MAX or Complete Linkage

Similarity of two clusters is based on the two least similar (most distant)

points in the different clusters

Determined by all pairs of points in the two clusters

{I1,I2} I3 I4 I5

{I1,I2} 1.00 0.10 0.60 0.20

I3 0.10 1.00 0.40 0.30

I4 0.60 0.40 1.00 0.80

I5 0.20 0.30 0.80 1.00

Which two clusters should be merged next?

Page 63: CSE 5243 INTRO. TO DATA MINING

63

Cluster Similarity: MAX or Complete Linkage

Similarity of two clusters is based on the two least similar (most distant)

points in the different clusters

Determined by all pairs of points in the two clusters

{I1,I2} I3 I4 I5

{I1,I2} 1.00 0.10 0.60 0.20

I3 0.10 1.00 0.40 0.30

I4 0.60 0.40 1.00 0.80

I5 0.20 0.30 0.80 1.00

Merge {I3} with {I4, I5}, why? Under complete linkage, sim({I1,I2}, {I3}) = 0.10, sim({I1,I2}, {I4,I5}) = min(0.60, 0.20) = 0.20, and sim({I3}, {I4,I5}) = min(0.40, 0.30) = 0.30; the last is the largest similarity, so {I3} and {I4,I5} are merged next.

Page 64: CSE 5243 INTRO. TO DATA MINING

64

Hierarchical Clustering: MAX

Nested Clusters Dendrogram

[Figure: nested complete-link clusters of points 1–6 and the corresponding dendrogram (leaf order 3, 6, 4, 1, 2, 5; merge heights up to about 0.4).]

Page 65: CSE 5243 INTRO. TO DATA MINING

65

Strength of MAX

Original Points Two Clusters

• Less susceptible to noise and outliers

Page 66: CSE 5243 INTRO. TO DATA MINING

66

Limitations of MAX

Original Points Two Clusters

•Tends to break large clusters

•Biased towards globular clusters

Page 67: CSE 5243 INTRO. TO DATA MINING

67

Cluster Similarity: Group Average

Proximity of two clusters is the average of pairwise proximity between points in the two clusters.

Need to use average connectivity for scalability since total proximity favors large clusters

proximity(Cluster_i, Cluster_j) = \frac{\sum_{p_i \in Cluster_i,\; p_j \in Cluster_j} proximity(p_i, p_j)}{|Cluster_i| \times |Cluster_j|}

I1 I2 I3 I4 I5

I1 1.00 0.90 0.10 0.65 0.20

I2 0.90 1.00 0.70 0.60 0.50

I3 0.10 0.70 1.00 0.40 0.30

I4 0.65 0.60 0.40 1.00 0.80

I5 0.20 0.50 0.30 0.80 1.00
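A small sketch of the group-average proximity on the similarity matrix above (illustrative):

    import numpy as np

    S = np.array([[1.00, 0.90, 0.10, 0.65, 0.20],
                  [0.90, 1.00, 0.70, 0.60, 0.50],
                  [0.10, 0.70, 1.00, 0.40, 0.30],
                  [0.65, 0.60, 0.40, 1.00, 0.80],
                  [0.20, 0.50, 0.30, 0.80, 1.00]])

    def group_average(S, A, B):
        # Average pairwise proximity between clusters A and B (lists of 0-based indices)
        return sum(S[i, j] for i in A for j in B) / (len(A) * len(B))

    # e.g., proximity({I1, I2}, {I4, I5}) = (0.65 + 0.20 + 0.60 + 0.50) / 4 = 0.4875
    print(group_average(S, [0, 1], [3, 4]))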

Page 68: CSE 5243 INTRO. TO DATA MINING

68

Hierarchical Clustering: Group Average

Nested Clusters Dendrogram

[Figure: nested group-average clusters of points 1–6 and the corresponding dendrogram (leaf order 3, 6, 4, 1, 2, 5; merge heights up to about 0.25).]

Page 69: CSE 5243 INTRO. TO DATA MINING

69

Hierarchical Clustering: Group Average

Compromise between Single and Complete Link

Strengths

Less susceptible to noise and outliers

Limitations

Biased towards globular clusters

Page 70: CSE 5243 INTRO. TO DATA MINING

70

Hierarchical Clustering: Time and Space requirements

O(N²) space, since it uses the proximity matrix.

N is the number of points.

O(N³) time in many cases

There are N steps, and at each step the proximity matrix (of size N²) must be updated and searched

Complexity can be reduced to O(N² log N) time for some approaches

Page 71: CSE 5243 INTRO. TO DATA MINING

71

Hierarchical Clustering: Problems and Limitations

Once a decision is made to combine two clusters, it cannot be undone

No objective function is directly minimized

Different schemes have problems with one or more of the following:

Sensitivity to noise and outliers

Difficulty handling different sized clusters and convex shapes

Breaking large clusters