
INFO 4300 / CS4300 Information Retrieval

slides adapted from Hinrich Schütze's, linked from http://informationretrieval.org/

IR 21/26: Linear Classifiers and Flat clustering

Paul Ginsparg

Cornell University, Ithaca, NY

12 Nov 2009

1 / 36


Overview

1 Recap

2 Evaluation

3 How many clusters?

4 Discussion

2 / 36


Outline

1 Recap

2 Evaluation

3 How many clusters?

4 Discussion

3 / 36


Linear classifiers

Linear classifiers compute a linear combination or weighted sum ∑_i w_i x_i of the feature values.

Classification decision: ∑_i w_i x_i > θ ?

. . . where θ (the threshold) is a parameter.

(First, we only consider binary classifiers.)

Geometrically, this corresponds to a line (2D), a plane (3D) or a hyperplane (higher dimensionalities)

Assumption: The classes are linearly separable.

Can find hyperplane (=separator) based on training set

Methods for finding separator: Perceptron, Rocchio, Naive Bayes – as we will explain on the next slides

4 / 36
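A minimal sketch of this decision rule in Python (the weights, feature values, and threshold below are invented for illustration; they are not from the slides):

# Linear decision rule: predict class 1 iff sum_i w_i * x_i > theta.
def classify(weights, x, theta):
    score = sum(w * xi for w, xi in zip(weights, x))
    return score > theta

# Hypothetical example with three features.
weights = [0.6, -0.2, 1.1]
theta = 0.5
print(classify(weights, [1.0, 0.0, 0.0], theta))  # 0.6 > 0.5 -> True
print(classify(weights, [0.0, 1.0, 0.0], theta))  # -0.2 > 0.5 -> False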


Which hyperplane?

5 / 36


Which hyperplane?

For linearly separable training sets: there are infinitely many separating hyperplanes.

They all separate the training set perfectly . . .

. . . but they behave differently on test data.

Error rates on new data are low for some, high for others.

How do we find a low-error separator?

Perceptron: generally bad; Naive Bayes, Rocchio: ok; linear SVM: good

6 / 36


Linear classifiers: Discussion

Many common text classifiers are linear classifiers: Naive Bayes, Rocchio, logistic regression, linear support vector machines, etc.

Each method has a different way of selecting the separatinghyperplane

Huge differences in performance on test documents

Can we get better performance with more powerful nonlinear classifiers?

Not in general: A given amount of training data may suffice for estimating a linear boundary, but not for estimating a more complex nonlinear boundary.

7 / 36


How to combine hyperplanes for > 2 classes?

?

(e.g.: rank and select top-ranked classes)

8 / 36
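One standard way to realize "rank and select top-ranked classes" is one-vs-rest: train one linear classifier per class, score a document with each, and rank the classes by score. A small sketch under that assumption (class names, weight vectors, and the document vector are made up):

def score(weights, x):
    return sum(w * xi for w, xi in zip(weights, x))

# One hypothetical weight vector per class (one-vs-rest).
class_weights = {
    "sports":   [1.2, -0.3, 0.0],
    "politics": [-0.1, 0.9, 0.4],
    "business": [0.2, 0.1, 1.0],
}

doc = [0.5, 0.8, 0.1]
ranked = sorted(class_weights, key=lambda c: score(class_weights[c], doc), reverse=True)
print(ranked[0])  # highest-scoring class for this document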


What is clustering?

(Document) clustering is the process of grouping a set of documents into clusters of similar documents.

Documents within a cluster should be similar.

Documents from different clusters should be dissimilar.

Clustering is the most common form of unsupervised learning.

Unsupervised = there are no labeled or annotated data.

9 / 36


Classification vs. Clustering

Classification: supervised learning

Clustering: unsupervised learning

Classification: Classes are human-defined and part of the input to the learning algorithm.

Clustering: Clusters are inferred from the data without human input.

However, there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, . . .

10 / 36


Flat vs. Hierarchical clustering

Flat algorithms

Usually start with a random (partial) partitioning of docs into groups
Refine iteratively
Main algorithm: K-means

Hierarchical algorithms

Create a hierarchy
Bottom-up, agglomerative
Top-down, divisive

11 / 36


Flat algorithms

Flat algorithms compute a partition of N documents into a set of K clusters.

Given: a set of documents and the number K

Find: a partition in K clusters that optimizes the chosen partitioning criterion

Global optimization: exhaustively enumerate partitions, pick optimal one

Not tractable

Effective heuristic method: K-means algorithm

12 / 36


Set of points to be clustered

[figure: scatter of unlabeled points]

13 / 36


Set of points to be clustered

[figure: the same scatter of unlabeled points]

14 / 36


K -means

Each cluster in K -means is defined by a centroid.

Objective/partitioning criterion: minimize the average squared difference from the centroid

Recall definition of centroid:

µ(ω) = (1/|ω|) ∑_{x ∈ ω} x

where we use ω to denote a cluster.

We try to find the minimum average squared difference by iterating two steps:

reassignment: assign each vector to its closest centroid
recomputation: recompute each centroid as the average of the vectors that were assigned to it in reassignment

15 / 36
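In code, the centroid is just the component-wise mean of the vectors in a cluster; a small NumPy illustration with made-up vectors:

import numpy as np

omega = np.array([[1.0, 2.0], [3.0, 0.0], [2.0, 4.0]])  # vectors assigned to one cluster
centroid = omega.mean(axis=0)  # (1/|ω|) * Σ x  ->  array([2., 2.])
print(centroid)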


Random selection of initial cluster centers

[figure: the scatter of points with two randomly chosen seeds marked ×]

Centroids after convergence?

16 / 36


Centroids and assignments after convergence

[figure: points labeled 1 or 2 according to their final cluster assignment, with the converged centroids marked ×]

17 / 36


k-means clustering

Goal

cluster similar data points

Approach: given data points and distance function

select k centroids µ_a

assign x_i to closest centroid µ_a

minimize ∑_{a,i} d(x_i, µ_a)

Algorithm:

1. randomly pick centroids, possibly from data points

2. assign points to closest centroid

3. average assigned points to obtain new centroids

4. repeat steps 2 and 3 until nothing changes

Issues:

- takes superpolynomial time on some inputs
- not guaranteed to find optimal solution

+ converges quickly in practice

18 / 36
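A compact NumPy sketch of this algorithm (the random seed, the convergence test, and the toy data are assumptions added for the example; the slides do not fix them):

import numpy as np

def kmeans(X, k, seed=0, max_iter=100):
    # Plain K-means: reassign points to nearest centroid, then recompute centroids.
    rng = np.random.default_rng(seed)
    # 1. randomly pick centroids from the data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # 2. assign each point to its closest centroid (squared Euclidean distance)
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assign = dists.argmin(axis=1)
        # 3. recompute each centroid as the mean of its assigned points
        new_centroids = np.array([
            X[assign == a].mean(axis=0) if np.any(assign == a) else centroids[a]
            for a in range(k)
        ])
        if np.allclose(new_centroids, centroids):  # nothing changed -> converged
            break
        centroids = new_centroids
    return centroids, assign

# Toy usage with made-up 2-D points:
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]])
centroids, assign = kmeans(X, k=2)
print(assign)  # two clusters: the two points near (0,0) and the two near (5,5)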


Outline

1 Recap

2 Evaluation

3 How many clusters?

4 Discussion

19 / 36


What is a good clustering?

Internal criteria

Example of an internal criterion: RSS in K -means

But an internal criterion often does not evaluate the actual utility of a clustering in the application.

Alternative: External criteria

Evaluate with respect to a human-defined classification

20 / 36


External criteria for clustering quality

Based on a gold standard data set, e.g., the Reuters collection we also used for the evaluation of classification

Goal: Clustering should reproduce the classes in the gold standard

(But we only want to reproduce how documents are divided into groups, not the class labels.)

First measure for how well we were able to reproduce the classes: purity

21 / 36


External criterion: Purity

purity(Ω, C) = (1/N) ∑_k max_j |ω_k ∩ c_j|

Ω = {ω_1, ω_2, . . . , ω_K} is the set of clusters and C = {c_1, c_2, . . . , c_J} is the set of classes.

For each cluster ω_k: find class c_j with most members n_kj in ω_k

Sum all n_kj and divide by total number of points

22 / 36


Example for computing purity

[figure: 17 points in three clusters; cluster 1: five x and one o; cluster 2: one x, four o, and one ⋄; cluster 3: two x and three ⋄]

To compute purity:
5 = max_j |ω_1 ∩ c_j| (class x, cluster 1);
4 = max_j |ω_2 ∩ c_j| (class o, cluster 2); and
3 = max_j |ω_3 ∩ c_j| (class ⋄, cluster 3).
Purity is (1/17) × (5 + 4 + 3) = 12/17 ≈ 0.71.

23 / 36
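The purity value above is easy to verify in Python. The lists below encode the o/⋄/x example from this slide (cluster 1: five x and one o; cluster 2: one x, four o, one ⋄; cluster 3: two x and three ⋄; "d" stands for ⋄):

from collections import Counter

# Gold-standard class label of each point, grouped by cluster (o/⋄/x example).
clusters = [
    ["x", "x", "x", "x", "x", "o"],   # cluster 1
    ["x", "o", "o", "o", "o", "d"],   # cluster 2
    ["x", "x", "d", "d", "d"],        # cluster 3
]

N = sum(len(c) for c in clusters)
# purity = (1/N) * sum over clusters of the size of the majority class in each cluster
purity = sum(Counter(c).most_common(1)[0][1] for c in clusters) / N
print(purity)  # (5 + 4 + 3) / 17 = 12/17 ≈ 0.71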


Rand index

Definition: RI = (TP + TN) / (TP + FP + FN + TN)

Based on 2x2 contingency table of all pairs of documents:

                      same cluster            different clusters
  same class          true positives (TP)     false negatives (FN)
  different classes   false positives (FP)    true negatives (TN)

TP+FN+FP+TN is the total number of pairs.

There are (N choose 2) pairs for N documents.

Example: (17 choose 2) = 136 in o/⋄/x example

Each pair is either positive or negative (the clustering puts the two documents in the same or in different clusters) . . .

. . . and either “true” (correct) or “false” (incorrect): the clustering decision is correct or incorrect.

24 / 36


As an example, we compute RI for the o/⋄/x example. We first compute TP + FP. The three clusters contain 6, 6, and 5 points, respectively, so the total number of “positives” or pairs of documents that are in the same cluster is:

TP + FP = (6 choose 2) + (6 choose 2) + (5 choose 2) = 40

Of these, the x pairs in cluster 1, the o pairs in cluster 2, the ⋄ pairs in cluster 3, and the x pair in cluster 3 are true positives:

TP = (5 choose 2) + (4 choose 2) + (3 choose 2) + (2 choose 2) = 20

Thus, FP = 40 − 20 = 20. FN and TN are computed similarly.

25 / 36


Rand measure for the o/⋄/x example

                      same cluster   different clusters
  same class          TP = 20        FN = 24
  different classes   FP = 20        TN = 72

RI is then (20 + 72)/(20 + 20 + 24 + 72) = 92/136 ≈ 0.68.

26 / 36
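The same counts can be verified by brute force over all (17 choose 2) = 136 pairs of the o/⋄/x example (a sketch; labels as in the purity example, with "d" standing for ⋄):

from itertools import combinations

# (cluster id, class label) for each of the 17 points in the o/⋄/x example.
points = (
    [(1, "x")] * 5 + [(1, "o")] +                 # cluster 1
    [(2, "x")] + [(2, "o")] * 4 + [(2, "d")] +    # cluster 2
    [(3, "x")] * 2 + [(3, "d")] * 3               # cluster 3
)

tp = fp = fn = tn = 0
for (c1, g1), (c2, g2) in combinations(points, 2):
    same_cluster, same_class = (c1 == c2), (g1 == g2)
    if same_cluster and same_class:
        tp += 1
    elif same_cluster and not same_class:
        fp += 1
    elif not same_cluster and same_class:
        fn += 1
    else:
        tn += 1

print(tp, fp, fn, tn)                   # 20 20 24 72
print((tp + tn) / (tp + fp + fn + tn))  # 92/136 ≈ 0.68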


Two other external evaluation measures

Two other measures

Normalized mutual information (NMI)

How much information does the clustering contain about the classification?
Singleton clusters (number of clusters = number of docs) have maximum MI
Therefore: normalize by entropy of clusters and classes

F measure

Like Rand, but “precision” and “recall” can be weighted

27 / 36
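A short sketch of NMI on the o/⋄/x example, normalizing the mutual information by the average of the cluster and class entropies (one common choice consistent with "normalize by entropy of clusters and classes"; base-2 logs; label data as in the purity example):

from collections import Counter
from math import log2

# Cluster id and class label of each point in the o/⋄/x example ("d" stands for ⋄).
pairs = (
    [(1, "x")] * 5 + [(1, "o")] +
    [(2, "x")] + [(2, "o")] * 4 + [(2, "d")] +
    [(3, "x")] * 2 + [(3, "d")] * 3
)
N = len(pairs)
cluster_sizes = Counter(c for c, _ in pairs)
class_sizes = Counter(g for _, g in pairs)
joint = Counter(pairs)

# Mutual information I(Ω; C)
mi = sum((n / N) * log2(N * n / (cluster_sizes[c] * class_sizes[g]))
         for (c, g), n in joint.items())
# Entropies H(Ω) and H(C)
h_clusters = -sum((n / N) * log2(n / N) for n in cluster_sizes.values())
h_classes = -sum((n / N) * log2(n / N) for n in class_sizes.values())

nmi = mi / ((h_clusters + h_classes) / 2)
print(round(nmi, 2))  # ≈ 0.36, matching the value in the table on the next slide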


Evaluation results for the o/⋄/x example

                     purity   NMI    RI     F5
  lower bound        0.0      0.0    0.0    0.0
  maximum            1.0      1.0    1.0    1.0
  value for example  0.71     0.36   0.68   0.46

All four measures range from 0 (really bad clustering) to 1 (perfectclustering).

28 / 36


Outline

1 Recap

2 Evaluation

3 How many clusters?

4 Discussion

29 / 36


How many clusters?

Either: Number of clusters K is given.

Then partition into K clusters
K might be given because there is some external constraint.
Example: In the case of Scatter-Gather, it was hard to show more than 10–20 clusters on a monitor in the 90s.

Or: Finding the “right” number of clusters is part of theproblem.

Given docs, find K for which an optimum is reached.
How to define “optimum”?
We can’t use RSS or average squared distance from centroid as criterion: always chooses K = N clusters.

30 / 36


Exercise

Suppose we want to analyze the set of all articles published by a major newspaper (e.g., New York Times or Süddeutsche Zeitung) in 2008.

Goal: write a two-page report about what the major news stories in 2008 were.

We want to use K-means clustering to find the major news stories.

How would you determine K?

31 / 36


Simple objective function for K (1)

Basic idea:

Start with 1 cluster (K = 1)
Keep adding clusters (= keep increasing K)
Add a penalty for each new cluster

Trade off cluster penalties against average squared distance from centroid

Choose K with best tradeoff

32 / 36


Simple objective function for K (2)

Given a clustering, define the cost for a document as (squared) distance to centroid

Define total distortion RSS(K) as sum of all individual document costs (corresponds to average distance)

Then: penalize each cluster with a cost λ

Thus for a clustering with K clusters, total cluster penalty is Kλ

Define the total cost of a clustering as distortion plus total cluster penalty: RSS(K) + Kλ

Select K that minimizes (RSS(K) + Kλ)

Still need to determine good value for λ . . .

33 / 36
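Putting this together with the K-means sketch shown after the algorithm slide, the selection loop might look as follows (λ and the toy data are placeholders; kmeans refers to that earlier sketch and is assumed to be in scope):

import numpy as np

def rss(X, centroids, assign):
    # Total distortion: sum of squared distances of each point to its assigned centroid.
    return float(((X - centroids[assign]) ** 2).sum())

def choose_k(X, k_max, lam):
    # Pick the K that minimizes RSS(K) + K * lam, where lam is the per-cluster penalty.
    best_k, best_cost = None, float("inf")
    for k in range(1, k_max + 1):
        centroids, assign = kmeans(X, k)  # kmeans as sketched after the algorithm slide
        cost = rss(X, centroids, assign) + k * lam
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Toy usage with made-up 2-D data and a made-up penalty lam.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9], [10.0, 0.1], [9.8, 0.0]])
print(choose_k(X, k_max=5, lam=1.0))  # chosen K (K-means local optima can vary with the seed)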


Finding the “knee” in the curve

[figure: residual sum of squares (roughly 1750–1950) plotted against the number of clusters (2–10)]

Pick the number of clusters where curve “flattens”. Here: 4 or 9.

34 / 36


Outline

1 Recap

2 Evaluation

3 How many clusters?

4 Discussion

35 / 36


Discussion 6

Jeffrey Dean and Sanjay Ghemawat, MapReduce: Simplified Data Processing on Large Clusters. USENIX OSDI '04, 2004. http://www.usenix.org/events/osdi04/tech/full_papers/dean/dean.pdf

See also (Jan 2009): http://michaelnielsen.org/blog/write-your-first-mapreduce-program-in-20-minutes/

part of lectures on “google technology stack”: http://michaelnielsen.org/blog/lecture-course-the-google-technology-stack/

(including PageRank, etc.)

See Recap Lecture 22 for slides

36 / 36