
KDD --- 2004 Lectures

March 4+9: Introduction to KDD
March 11: Association Rule Mining
March 23: Similarity Assessment
March 25: Clustering and UHDM2
March 30: Data Warehouses and OLAP


Clustering and Similarity Assessment

© Jiawei Han and Micheline Kamber, with major additions and modifications by Ch. Eick

Organization for COSC 6340:
1. What is Clustering?
2. Object Similarity Assessment
3. K-means/medoid Clustering
4. Grid-based Clustering
5. Work at UH


Motivation: Why Clustering?

Problem: Identify (a small number of) groups of similar objects in a given (large) set of objects.

Goals:
- Find representatives for homogeneous groups → Data Compression
- Find "natural" clusters and describe their properties → "natural" Data Types
- Find suitable and useful groupings → "useful" Data Classes
- Find unusual data objects → Outlier Detection


Examples of Clustering Applications

- Plant/Animal Classification
- Book Ordering
- Cloth Sizes
- Fraud Detection (find outliers)


Requirements of Clustering in Data Mining

- Scalability
- Ability to deal with different types of attributes
- Discovery of clusters with arbitrary shape
- Minimal requirements for domain knowledge to determine input parameters
- Able to deal with noise and outliers
- Insensitive to the order of input records
- High dimensionality
- Incorporation of user-specified constraints
- Interpretability and usability


Data Structures for Clustering

Data matrix (n objects, p attributes):

$$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$$

(Dis)similarity matrix (n × n):

$$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & \ddots & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$$


Quality Evaluation of Clusters

Dissimilarity/similarity metric: similarity is expressed in terms of a normalized distance function d, which is typically metric; typically δ(oi, oj) = 1 - d(oi, oj).

There is a separate "quality" function that measures the "goodness" of a cluster.

The definitions of similarity functions are usually very different for interval-scaled, boolean, categorical, ordinal, and ratio-scaled variables.

Weights should be associated with different variables based on applications and data semantics.

It is hard to define "similar enough" or "good enough"; the answer is typically highly subjective.


Challenges in Obtaining Object Similarity Measures

Many types of variables:
- Interval-scaled variables
- Binary variables and nominal variables
- Ordinal variables
- Ratio-scaled variables

Objects are characterized by variables belonging to different types (a mixture of variables).


Case Study: Patient Similarity

The following relation is given (with 10000 tuples):
Patient(ssn, weight, height, cancer-sev, eye-color, age)

Attribute domains:
- ssn: 9 digits
- weight: between 30 and 650; m_weight = 158, s_weight = 24.20
- height: between 0.30 and 2.20 (in meters); m_height = 1.52, s_height = 19.2
- cancer-sev: 4 = serious, 3 = quite_serious, 2 = medium, 1 = minor
- eye-color: {brown, blue, green, grey}
- age: between 3 and 100; m_age = 45, s_age = 13.2

Task: Define Patient Similarity


Generating a Global Similarity Measure from Single-Variable Similarity Measures

Assumption: A database may contain up to six types of variables: symmetric binary, asymmetric binary, nominal, ordinal, interval, and ratio.

1. Standardize each variable, associate a similarity measure δ_f with the standardized f-th variable, and determine the weight w_f of the f-th variable.
2. Create the following global (dis)similarity measure δ:

$$\delta(o_i, o_j) = \frac{\sum_{f=1}^{p} w_f \cdot \delta_f(o_i, o_j)}{\sum_{f=1}^{p} w_f}$$
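To make the combination concrete, here is a minimal Python sketch of this weighted measure; the per-variable similarity functions and the two-variable example values are illustrative assumptions, not part of the slides.

```python
def global_similarity(o1, o2, per_var_sims, weights):
    """Weighted combination of per-variable similarities:
    delta(o1, o2) = sum_f w_f * delta_f(o1[f], o2[f]) / sum_f w_f."""
    total = sum(w * sim(a, b)
                for sim, w, a, b in zip(per_var_sims, weights, o1, o2))
    return total / sum(weights)

# Example with two variables: a normalized interval-scaled one and a nominal one.
sims = [lambda a, b: 1 - abs(a - b),      # interval-scaled (values in [0, 1])
        lambda a, b: 1 if a == b else 0]  # nominal
print(global_similarity((0.2, "blue"), (0.5, "blue"), sims, [1.0, 0.2]))  # 0.75
```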


A Methodology to Obtain a Similarity Matrix

1. Understand the variables.
2. Remove (non-relevant and redundant) variables.
3. (Standardize and) normalize variables (typically using z-scores, or variable values are transformed to numbers in [0, 1]).
4. Associate a (dis)similarity measure d_f/δ_f with each variable.
5. Associate a weight (measuring its importance) with each variable.
6. Compute the (dis)similarity matrix.
7. Apply a similarity-based data mining technique (e.g., clustering, nearest neighbor, multi-dimensional scaling, …).


Interval-Scaled Variables

Standardize data using z-scores:

Calculate the mean absolute deviation

$$s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$$

where

$$m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$$

Calculate the standardized measurement (z-score)

$$z_{if} = \frac{x_{if} - m_f}{s_f}$$

Using the mean absolute deviation is more robust than using the standard deviation.
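A short Python sketch of this standardization, assuming the values of one interval-scaled variable f are given as a list (the sample ages are made up):

```python
def mad_z_scores(values):
    """Standardize one interval-scaled variable f: z_if = (x_if - m_f) / s_f,
    where s_f is the mean absolute deviation (more robust than the std dev)."""
    n = len(values)
    m = sum(values) / n                       # mean m_f
    s = sum(abs(x - m) for x in values) / n   # mean absolute deviation s_f
    return [(x - m) / s for x in values]

print(mad_z_scores([30, 45, 60, 100]))  # z-scores of four sample ages
```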


Normalization in [0, 1]

Problem: If non-normalized variables are used, the maximum distance between two values can be greater than 1.

Solution: Normalize interval-scaled variables using

$$z_{if} = \frac{x_{if} - \min_f}{(\max_f - \min_f) \cdot c}$$

where min_f denotes the minimum value and max_f the maximum value of the f-th attribute in the data set, and c is a constant that is chosen depending on the similarity measure (e.g., if Manhattan distance is used, c is chosen to be 1).
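A corresponding sketch in Python; c is the constant from the formula above, defaulting to the Manhattan-distance choice c = 1:

```python
def min_max_normalize(values, c=1.0):
    """Normalize an interval-scaled variable:
    z_if = (x_if - min_f) / ((max_f - min_f) * c)."""
    lo, hi = min(values), max(values)
    return [(x - lo) / ((hi - lo) * c) for x in values]

print(min_max_normalize([30, 158, 650]))  # weights mapped into [0, 1]
```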


Other Normalizations

Goal: Limit the maximum distance to 1.
- Start using a distance measure d_f(x, y).
- Determine the maximum distance dmax_f that can occur for two values of the f-th attribute (e.g., dmax_f = max_f - min_f).
- Define δ_f(x, y) = 1 - (d_f(x, y) / dmax_f).

Advantage: Negative similarities cannot occur.


Similarity Between Objects

Distances are normally used to measure the similarity or dissimilarity between two data objects.

Some popular ones include the Minkowski distance:

$$d(i,j) = \sqrt[q]{|x_{i1}-x_{j1}|^q + |x_{i2}-x_{j2}|^q + \cdots + |x_{ip}-x_{jp}|^q}$$

where i = (x_{i1}, x_{i2}, …, x_{ip}) and j = (x_{j1}, x_{j2}, …, x_{jp}) are two p-dimensional data objects, and q is a positive integer.

If q = 1, d is the Manhattan distance:

$$d(i,j) = |x_{i1}-x_{j1}| + |x_{i2}-x_{j2}| + \cdots + |x_{ip}-x_{jp}|$$


Similarity Between Objects (Cont.)

If q = 2, d is the Euclidean distance:

$$d(i,j) = \sqrt{|x_{i1}-x_{j1}|^2 + |x_{i2}-x_{j2}|^2 + \cdots + |x_{ip}-x_{jp}|^2}$$

Properties:
- d(i,j) ≥ 0
- d(i,i) = 0
- d(i,j) = d(j,i)
- d(i,j) ≤ d(i,k) + d(k,j)

One can also use a weighted distance, the parametric Pearson product-moment correlation, or other dissimilarity measures.
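Both distances in a few lines of Python; q = 1 and q = 2 reproduce the Manhattan and Euclidean special cases:

```python
def minkowski(x, y, q=2):
    """Minkowski distance between two p-dimensional objects x and y;
    q = 1 gives Manhattan distance, q = 2 gives Euclidean distance."""
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1.0 / q)

print(minkowski((0, 0), (3, 4), q=1))  # 7.0 (Manhattan)
print(minkowski((0, 0), (3, 4), q=2))  # 5.0 (Euclidean)
```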


Similarity with respect to a Set of Binary Variables

A contingency table for binary data:

             Object j
              1     0    sum
Object i  1   a     b    a+b
          0   c     d    c+d
        sum  a+c   b+d    p

Simple matching coefficient (symmetric; considers agreements in 0's and 1's to be equivalent):

$$\delta_{sym}(i,j) = \frac{a+d}{a+b+c+d}$$

Jaccard coefficient (ignores agreements in 0's):

$$\delta_{Jaccard}(i,j) = \frac{a}{a+b+c}$$


Similarity between Binary Variable Sets

Example:

Name  Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
Jack  M       Y      N      P       N       N       N
Mary  F       Y      N      P       N       P       N
Jim   M       Y      P      N       N       N       N

- gender is a symmetric attribute
- the remaining attributes are asymmetric binary
- let the values Y and P be set to 1, and the value N be set to 0

Jacc(jack, mary) = 2 / (2 + 0 + 1) = 0.67
Jacc(jack, jim) = 1 / (1 + 1 + 1) = 0.33
Jacc(jim, mary) = 1 / (1 + 1 + 2) = 0.25
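The three coefficients can be checked directly from the table; a small Python sketch, encoding Y/P as 1 and N as 0 and omitting the symmetric gender attribute:

```python
def jaccard(i, j):
    """Jaccard coefficient for asymmetric binary vectors:
    a / (a + b + c), ignoring 0-0 matches (d)."""
    a = sum(1 for x, y in zip(i, j) if x == 1 and y == 1)
    b = sum(1 for x, y in zip(i, j) if x == 1 and y == 0)
    c = sum(1 for x, y in zip(i, j) if x == 0 and y == 1)
    return a / (a + b + c)

# Fever, Cough, Test-1..Test-4 only (asymmetric attributes)
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]
print(jaccard(jack, mary))  # 2/3 ≈ 0.67
print(jaccard(jack, jim))   # 1/3 ≈ 0.33
print(jaccard(jim, mary))   # 1/4 = 0.25
```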


Nominal Variables

A generalization of the binary variable in that it can take more than 2 states, e.g., red, yellow, blue, green.

Method 1: simple matching (m: # of matches, p: total # of variables):

$$d(o_i, o_j) = \frac{p - m}{p}$$

Method 2: use a large number of binary variables, creating a new binary variable for each of the M nominal states.


Ordinal Variables

An ordinal variable can be discrete or continuous; order is important (e.g., UH grade, hotel rating).

Ordinal variables can be treated like interval-scaled variables:
- replace x_if by its rank r_if ∈ {1, …, M_f}
- map the range of each variable onto [0, 1] by replacing the f-th variable of the i-th object by

$$z_{if} = \frac{r_{if} - 1}{M_f - 1}$$

- compute the dissimilarity using methods for interval-scaled variables
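A small sketch of this rank mapping in Python; the three-level grade scale in the example is hypothetical:

```python
def ordinal_to_interval(ranks, M):
    """Map ranks r_if in {1, ..., M_f} onto [0, 1]:
    z_if = (r_if - 1) / (M_f - 1)."""
    return [(r - 1) / (M - 1) for r in ranks]

# e.g. grades C < B < A encoded as ranks 1 < 2 < 3 (M = 3)
print(ordinal_to_interval([3, 2, 1], M=3))  # [1.0, 0.5, 0.0]
```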


Ratio-Scaled Variables

Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as Ae^{Bt} or Ae^{-Bt}.

Methods:
- treat them like interval-scaled variables (not a good choice! why?)
- apply a logarithmic transformation, y_if = log(x_if)
- treat them as continuous ordinal data and treat their rank as interval-scaled


Case Study --- Normalization

Patient(ssn, weight, height, cancer-sev, eye-color, age)

Attribute relevance: ssn no; eye-color minor; others major.

Attribute normalization:
- ssn: remove!
- weight: between 30 and 650; m_weight = 158, s_weight = 24.20; transform to z_weight = (x_weight - 158) / 24.20 (alternatively, z_weight = (x_weight - 30) / 620)
- height: normalize like weight!
- cancer-sev: 4 = serious, 3 = quite_serious, 2 = medium, 1 = minor; transform 4 to 1, 3 to 2/3, 2 to 1/3, 1 to 0, and then normalize like weight!
- age: normalize like weight!


Case Study --- Weight Selection and Similarity Measure Selection

Patient(ssn, weight, height, cancer-sev, eye-color, age)

For normalized weight, height, cancer-sev, and age values, use a Manhattan distance function; e.g.:

δ_weight(w1, w2) = 1 - |(w1 - 158)/24.20 - (w2 - 158)/24.20|

For eye-color use: δ_eye-color(c1, c2) = if c1 = c2 then 1 else 0

Weight assignment: 0.2 for eye-color; 1 for all others.

Final solution (the chosen similarity measure δ): let o1 = (s1, w1, h1, cs1, e1, a1) and o2 = (s2, w2, h2, cs2, e2, a2); then

δ(o1, o2) := (δ_weight(w1, w2) + δ_height(h1, h2) + δ_cancer-sev(cs1, cs2) + δ_age(a1, a2) + 0.2 · δ_eye-color(e1, e2)) / 4.2
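A Python sketch of the chosen measure; following the case study, height and age are normalized "like weight" with the m and s values given earlier, and cancer-sev values are first mapped to {0, 1/3, 2/3, 1} (exactly how cancer-sev is further normalized is left open on the slide, so using the mapped values directly is an assumption):

```python
def d1(z1, z2):
    """Per-variable similarity from Manhattan distance on normalized values."""
    return 1 - abs(z1 - z2)

CS = {4: 1.0, 3: 2/3, 2: 1/3, 1: 0.0}  # cancer-sev mapped into [0, 1]

def patient_similarity(o1, o2):
    """o = (ssn, weight, height, cancer_sev, eye_color, age); ssn is ignored.
    Weighted combination: 0.2 for eye-color, 1 for the rest, divided by 4.2."""
    zw = lambda w: (w - 158) / 24.20   # weight z-score (m=158, s=24.20)
    zh = lambda h: (h - 1.52) / 19.2   # height normalized like weight
    za = lambda a: (a - 45) / 13.2     # age normalized like weight
    eye = 1 if o1[4] == o2[4] else 0
    s = (d1(zw(o1[1]), zw(o2[1])) + d1(zh(o1[2]), zh(o2[2]))
         + d1(CS[o1[3]], CS[o2[3]]) + d1(za(o1[5]), za(o2[5])) + 0.2 * eye)
    return s / 4.2

p1 = ("123456789", 160, 1.60, 2, "blue", 40)
p2 = ("987654321", 150, 1.55, 3, "blue", 45)
print(patient_similarity(p1, p2))
```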


Major Clustering Approaches

Partitioning algorithms: construct various partitions and then evaluate them by some criterion.

Hierarchical algorithms: create a hierarchical decomposition of the set of data (or objects) using some criterion.

Density-based: based on connectivity and density functions.

Grid-based: based on a multiple-level granularity structure.

Model-based: a model is hypothesized for each of the clusters, and the idea is to find the best fit of the data to the given model.


Partitioning Algorithms: Basic Concept

Partitioning method: construct a partition of a database D of n objects into a set of k clusters.

Given k, find a partition into k clusters that optimizes the chosen partitioning criterion:
- Global optimum: exhaustively enumerate all partitions
- Heuristic methods: k-means and k-medoids algorithms
  - k-means (MacQueen’67): each cluster is represented by the center of the cluster
  - k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw’87): each cluster is represented by one of the objects in the cluster


The K-Means Clustering Method

Given k, the k-means algorithm is implemented in 4 steps:
1. Partition objects into k nonempty subsets.
2. Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e., the mean point, of the cluster).
3. Assign each object to the cluster with the nearest seed point.
4. Go back to Step 2; stop when there are no more new assignments.
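A compact Python sketch of these steps; it seeds the iteration with k randomly chosen objects (a common variant of step 1) and uses squared Euclidean distance:

```python
import random

def kmeans(points, k, max_iter=100):
    """Minimal k-means for points given as tuples of numbers."""
    centroids = random.sample(points, k)              # seed with k objects
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]             # assign to nearest seed
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        new = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl
               else centroids[i] for i, cl in enumerate(clusters)]
        if new == centroids:                          # stop: no more changes
            break
        centroids = new                               # recompute and repeat
    return clusters, centroids
```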


The K-Means Clustering Method

Example: [Four figures omitted: successive snapshots of k-means on a 2-D data set (both axes ranging from 0 to 10), showing objects reassigned to the nearest centroid and centroids recomputed until the assignments stabilize.]


Comments on the K-Means Method

Strength:
- Relatively efficient: O(tkn), where n is # objects, k is # clusters, and t is # iterations; normally, k, t << n.
- Often terminates at a local optimum. The global optimum may be found using techniques such as deterministic annealing and genetic algorithms.

Weakness:
- Applicable only when the mean is defined; what about categorical data?
- Need to specify k, the number of clusters, in advance.
- Unable to handle noisy data and outliers.
- Not suitable for discovering clusters with non-convex shapes.


PAM (Partitioning Around Medoids) (1987)

PAM (Kaufman and Rousseeuw, 1987), built into S-Plus; uses real objects to represent the clusters:
1. Select k representative objects arbitrarily.
2. For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TC_ih.
3. For each pair of i and h: if TC_ih < 0, i is replaced by h; then assign each non-selected object to the most similar representative object.
4. Repeat steps 2-3 until there is no change.
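A sketch of the swap evaluation in Python; it computes TC_ih as the change in total distance to the closest medoid, which is the sum of the per-object contributions C_jih shown on the next slide. A swap with TC_ih < 0 improves the clustering:

```python
def total_swap_cost(objects, medoids, i, h, d):
    """TC_ih = sum_j C_jih: change in clustering cost when medoid i is
    replaced by the non-selected object h; d is a distance function."""
    before = list(medoids)                        # current medoid set
    after = [m for m in medoids if m != i] + [h]  # medoid set after the swap
    return sum(min(d(j, m) for m in after) - min(d(j, m) for m in before)
               for j in objects if j not in after)
```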


PAM Clustering: Total swapping cost TC_ih = Σ_j C_jih

[Four figures omitted; each shows a 2-D data set (both axes 0 to 10) illustrating one case for the contribution C_jih of a non-selected object j when medoid i is replaced by non-selected object h, with t another current medoid:]

- j stays assigned to another medoid t: C_jih = 0
- j is reassigned from i to h: C_jih = d(j, h) - d(j, i)
- j is reassigned from i to another medoid t: C_jih = d(j, t) - d(j, i)
- j is reassigned from t to h: C_jih = d(j, h) - d(j, t)


CLARANS (“Randomized” CLARA) (1994)

CLARANS (A Clustering Algorithm based on RANdomized Search) (Ng and Han’94):
- CLARANS draws a sample of neighbors dynamically.
- The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids.
- If a local optimum is found, CLARANS starts with a new randomly selected node in search of a new local optimum.
- It is more efficient and scalable than both PAM and CLARA.
- Focusing techniques and spatial access structures may further improve its performance (Ester et al.’95).


Grid-Based Clustering Method

Using a multi-resolution grid data structure. Several interesting methods:
- STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz (1997)
- WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB’98): a multi-resolution clustering approach using wavelets
- CLIQUE: Agrawal, et al. (SIGMOD’98)


STING: A Statistical Information Grid Approach

Wang, Yang and Muntz (VLDB’97):
- The spatial area is divided into rectangular cells.
- There are several levels of cells corresponding to different levels of resolution.


STING: A Statistical Information Grid Approach (2)

- Each cell at a high level is partitioned into a number of smaller cells at the next lower level.
- Statistical information for each cell is calculated and stored beforehand and is used to answer queries.
- Parameters of higher-level cells can be easily calculated from the parameters of lower-level cells: count, mean, s, min, max; type of distribution (normal, uniform, etc.).
- Use a top-down approach to answer spatial data queries:
  - Start from a pre-selected layer, typically with a small number of cells.
  - For each cell in the current level, compute the confidence interval.


STING: A Statistical Information Grid Approach (3)

- Remove the irrelevant cells from further consideration.
- When the current layer has been examined, proceed to the next lower level.
- Repeat this process until the bottom layer is reached.

Advantages:
- Query-independent, easy to parallelize, incremental update
- O(K), where K is the number of grid cells at the lowest level

Disadvantages:
- All the cluster boundaries are either horizontal or vertical; no diagonal boundary is detected.


CLIQUE (Clustering In QUEst)

Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98): automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space.

CLIQUE can be considered both density-based and grid-based:
- It partitions each dimension into the same number of equal-length intervals.
- It thereby partitions an m-dimensional data space into non-overlapping rectangular units.
- A unit is dense if the fraction of total data points contained in the unit exceeds an input model parameter (a density threshold).
- A cluster is a maximal set of connected dense units within a subspace.


CLIQUE: The Major Steps

1. Partition the data space and find the number of points that lie inside each cell of the partition.
2. Identify the subspaces that contain clusters using the Apriori principle.
3. Identify clusters:
   - Determine dense units in all subspaces of interest.
   - Determine connected dense units in all subspaces of interest.
4. Generate a minimal description for the clusters:
   - Determine maximal regions that cover a cluster of connected dense units, for each cluster.
   - Determine a minimal cover for each cluster.
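A sketch of step 1 for a single dimension in Python; the number of intervals and the density threshold tau are the input model parameters:

```python
from collections import Counter

def dense_units_1d(points, dim, n_intervals, lo, hi, tau):
    """Partition dimension `dim` into n_intervals equal-length intervals
    over [lo, hi] and return the indices of the dense units, i.e. the
    units whose fraction of the total points exceeds the threshold tau."""
    width = (hi - lo) / n_intervals
    counts = Counter(min(int((p[dim] - lo) / width), n_intervals - 1)
                     for p in points)
    return sorted(u for u, c in counts.items() if c / len(points) > tau)
```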


[Figures omitted: dense units found in the (age, salary) subspace and the (age, vacation) subspace, with age from 20 to 60, salary in units of $10,000, and vacation in weeks; intersecting the dense units of the two subspaces yields a candidate cluster in the 3-D (age, salary, vacation) space; τ = 3.]


Strength and Weakness of CLIQUE

Strength:
- It automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces.
- It is insensitive to the order of records in the input and does not presume any canonical data distribution.
- It scales linearly with the size of the input and has good scalability as the number of dimensions in the data increases.

Weakness:
- The accuracy of the clustering result may be degraded at the expense of the simplicity of the method.


Work at UH Related to Similarity Assessment and Clustering

- Creating environments for database clustering; problems related to multi-relational data mining [ER04]
- Distance function learning [EVR03]
- Supervised clustering [EZZ04]
- Using clustering to enhance classifiers [ICDM03], [ECAI04], [PKDD04] (not discussed)
- Using SQL queries for data summarization [KDD96], [RYU98] (not discussed)


CAL-FULL/UH Database Clustering and Similarity Assessment Environments

[Architecture diagram omitted; components: a DBMS accessed through a data extraction tool that provides an object view; a similarity measure tool drawing on a library of similarity measures, default choices and domain information, and type and weight information; a learning tool with training data; a clustering tool with a library of clustering algorithms; and a user interface. Outputs: a similarity measure and a set of clusters.]


Prototypes of Similarity Assessment Tools

Prototype 1 (CAL State Fullerton): supported the interactive definition of similarity measures; the knowledge representation format does not rely on modular units; provides a nearest-neighbor clustering algorithm for database clustering; functions were supported outside a DBMS.

Prototype 2 (UH 2002): similarity measures are defined using a special language (not interactively); the tool supports modular units, and functions are provided using a Java/SQL Server 2000 framework; functions were partially moved inside the DBMS (although some are still inside Java); analysis results are stored in the database and are therefore available for further analysis.

Prototype 3 (UH): learn distance functions for classification problems. Currently under investigation!


Objective of Supervised Clustering: maximize cluster purity while keeping the number of clusters low.


Research Goals: Supervised Clustering

- Develop representative-based supervised clustering algorithms.
- Show the benefits of supervised clustering in case studies that center on summary generation, distance function learning, and classification.


What is a good object distance function for supervised similarity assessment?

Objective: learn good distance functions for classification tasks.

Our approach: apply a clustering algorithm with the distance function d to be evaluated, returning a predetermined number of clusters k. The purer the obtained clusters are, the better the quality of d.

Our goal is to learn the weights of an object distance function such that all the clusters are pure (or as pure as possible); for more details see the [ERV03] paper.

$$d(o_i, o_j) = \frac{\sum_{f=1}^{p} w_f \cdot d_f(o_i, o_j)}{\sum_{f=1}^{p} w_f}$$


Idea: Coevolving Clusters and Similarity Functions

[Diagram omitted: a feedback loop in which clustering with the current similarity function produces clusters X; clustering evaluation computes q(X), the goodness of the similarity function; and reinforcement learning adjusts the similarity function.]

q(X) := percentage_of_minority_examples + penalty(k)
penalty(k) := if k ≤ c then 0 else sqrt((k - c) / n)
with
k := number of clusters generated
n := number of objects in the dataset
c := number of classes in the dataset
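The fitness function as a Python sketch, assuming each cluster is represented by the list of class labels of its examples:

```python
from collections import Counter
from math import sqrt

def q(clusters, n, c):
    """q(X) = percentage_of_minority_examples + penalty(k),
    where penalty(k) = 0 if k <= c, else sqrt((k - c) / n)."""
    k = len(clusters)
    minority = sum(len(cl) - Counter(cl).most_common(1)[0][1]
                   for cl in clusters)
    penalty = 0.0 if k <= c else sqrt((k - c) / n)
    return minority / n + penalty

# Three clusters over n=8 examples of c=2 classes; one minority example.
print(q([["a", "a", "b"], ["b", "b"], ["a", "a", "a"]], n=8, c=2))
```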


Idea: CR*-Approach

Let A be a clustering algorithm, w a weight vector, and Error(w, O) = Error'(A(w, O)) an error function that measures class purity in clusters and class coverage, and assigns a penalty for large numbers of clusters.

While not done do:
1. Cluster with respect to (w, O), receiving clusters C, and report Error'(A(w, O)).
2. If Error'(A(w, O)) is small enough, stop, reporting the error, C, and w.
3. For each cluster, determine the majority class.
4. For each c ∈ C, adjust the weights w_j locally.

[Figure omitted: two example clusters in which x marks examples belonging to the majority class and o marks non-majority-class examples; the weight of a modular unit is decreased or increased depending on whether that moves the majority-class examples closer together.]

Idea: move examples of the majority class closer to each other.


Weight Adjustment within a Cluster

Let w_i be the current weight of the i-th modular unit. Let Δ_i be the average absolute deviation, with respect to the i-th unit, over the examples that belong to the cluster, and let δ_i be the average absolute deviation over the examples of the cluster that belong to the majority class.

Learning: the weights are then adjusted as follows with respect to a particular cluster:

w_i' = w_i + α(Δ_i - δ_i), or better: w_i' = w_i + w_i · min(γ, max(-γ, α(Δ_i - δ_i)))

with α being the learning rate and γ the maximal adjustment per weight per cluster (e.g., γ = 0.2 if a weight can be increased/decreased by at most 20%).

Remark: If the cluster is 'pure' or does not contain 2 or more elements of a particular class, no weight adjustment takes place.
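A sketch of this update rule in Python; the default values for the learning rate alpha and the cap gamma are illustrative assumptions:

```python
def adjust_weight(w, dev_cluster, dev_majority, alpha=0.3, gamma=0.2):
    """w' = w + w * min(gamma, max(-gamma, alpha * (dev_cluster - dev_majority))):
    increase w when the majority class is tighter than the cluster as a whole."""
    change = alpha * (dev_cluster - dev_majority)
    return w + w * min(gamma, max(-gamma, change))

print(adjust_weight(1.0, dev_cluster=0.8, dev_majority=0.3))  # 1.15
```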


Summary: Problems and Challenges for Clustering

Considerable progress has been made in scalable clustering methods:
- Partitioning: k-means, k-medoids, CLARANS, EM
- Hierarchical: BIRCH, CURE
- Density-based: DBSCAN, CLIQUE, OPTICS
- Grid-based: STING, WaveCluster
- Model-based: Autoclass, Denclue, Cobweb

Current clustering techniques do not address all the requirements adequately.

Constraint-based clustering analysis: constraints exist in the data space (bridges and highways) or in user queries.


Summary: Object Similarity & Clustering

Cluster analysis groups objects based on their similarity and has wide applications.

Appropriate similarity measures have to be chosen for the various types of variables and combined into a global similarity measure.

Clustering algorithms can be categorized into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods.

Methods to measure, compute, and learn object similarity are quite important, not only for clustering, but also for nearest-neighbor approaches, information retrieval in general, and data visualization.


References (1)

R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98.
M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering points to identify the clustering structure. SIGMOD'99.
P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.
M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases. KDD'96.
M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing techniques for efficient class identification. SSD'95.
D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.
D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamical systems. VLDB'98.
S. Guha, R. Rastogi, and K. Shim. CURE: An efficient clustering algorithm for large databases. SIGMOD'98.
A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.


References (2)

L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons, 1990.
E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98.
G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley & Sons, 1988.
P. Michaud. Clustering techniques. Future Generation Computer Systems, 13, 1997.
R. Ng and J. Han. Efficient and effective clustering methods for spatial data mining. VLDB'94.
E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets. Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.
G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering approach for very large spatial databases. VLDB'98.
W. Wang, J. Yang, and R. Muntz. STING: A statistical information grid approach to spatial data mining. VLDB'97.
T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An efficient data clustering method for very large databases. SIGMOD'96.