
Dept of EEE

K-means Algorithm


Abstract

k-means is a rather simple but well known algorithm for grouping objects, i.e. clustering. All objects need to be represented as a set of numerical features. In addition, the user has to specify the number of groups (referred to as k) to be identified. Each object can be thought of as a feature vector in an n-dimensional space, n being the number of features used to describe the objects to be clustered. The algorithm then randomly chooses k points in that vector space; these points serve as the initial centers of the clusters. Afterwards, every object is assigned to the center it is closest to. Usually the distance measure is chosen by the user and determined by the learning task. After that, a new center is computed for each cluster by averaging the feature vectors of all objects assigned to it. The process of assigning objects and recomputing centers is repeated until the process converges. The algorithm can be proven to converge after a finite number of iterations. Several tweaks concerning the distance measure, the choice of initial centers and the computation of new average centers have been explored, as well as the estimation of the number of clusters k, yet the main principle always remains the same. In this project we discuss the k-means clustering algorithm, its implementation, and its application to the problem of unsupervised learning.


Contents

Abstract
1. Introduction
2. The k-means algorithm
3. How the k-means clustering algorithm works
4. Task Formulation
   4.1 K-means implementation
   4.2 Estimation of parameters of a Gaussian mixture
   4.3 Unsupervised learning
5. Limitations
6. Difficulties with k-means
7. Available software
8. Applications of the k-Means Clustering Algorithm
9. Conclusion
References


The k-means Algorithm

1 Introduction

In this project, we describe the k-means algorithm, a straightforward and widely used clustering algorithm. Given a set of objects (records), the goal of clustering or segmentation is to divide these objects into groups or "clusters" such that objects within a group tend to be more similar to one another than to objects belonging to different groups. In other words, clustering algorithms place similar points in the same cluster while placing dissimilar points in different clusters. Note that, in contrast to supervised tasks such as regression or classification where there is a notion of a target value or class label, the objects that form the inputs to a clustering procedure do not come with an associated target. Therefore clustering is often referred to as unsupervised learning. Because there is no need for labelled data, unsupervised algorithms are suitable for many applications where labelled data is difficult to obtain. Unsupervised tasks such as clustering are also often used to explore and characterize a dataset before running a supervised learning task. Since clustering makes no use of class labels, some notion of similarity must be defined based on the attributes of the objects. The definition of similarity and the way in which points are clustered differ between clustering algorithms. Thus, different clustering algorithms are suited to different types of data sets and different purposes, and the "best" clustering algorithm to use depends on the application. It is not uncommon to try several different algorithms and choose the one that proves most useful.

The k-means algorithm is a simple iterative clustering algorithm that partitions a given dataset into a user-specified number of clusters, k. The algorithm is simple to implement and run, relatively fast, easy to adapt, and common in practice; it is historically one of the most important algorithms in data mining. k-means in its essential form has been discovered independently by several researchers across different disciplines, most notably Lloyd (1957, 1982), Forgy (1965), Friedman and Rubin (1967), and MacQueen (1967). A detailed history of k-means along with descriptions of several variations is given by Jain and Dubes, while Gray and Neuhoff provide a nice historical background for k-means placed in the larger context of hill-climbing algorithms. In the rest of this project, we describe how k-means works and discuss its limitations, difficulties and some applications.


2 The k-means algorithm

The k-means algorithm applies to objects that are represented by points in a d-dimensional vector space. Thus, it clusters a set of d-dimensional vectors, D = {x_i | i = 1, ..., N}, where x_i ∈ R^d denotes the i-th object or "data point". As discussed in the introduction, k-means is a clustering algorithm that partitions D into k clusters of points. That is, the k-means algorithm clusters all of the data points in D such that each point x_i falls in one and only one of the k partitions. One can keep track of which point is in which cluster by assigning each point a cluster ID: points with the same cluster ID are in the same cluster, while points with different cluster IDs are in different clusters. One can denote this with a cluster membership vector m of length N, where m_i is the cluster ID of x_i. The value of k is an input to the base algorithm. Typically, the value for k is based on criteria such as prior knowledge of how many clusters actually appear in D, how many clusters are desired for the current application, or the types of clusters found by exploring/experimenting with different values of k. How k is chosen is not necessary for understanding how k-means partitions the dataset D, and we discuss how to choose k when it is not pre-specified in a later section.

In k-means, each of the k clusters is represented by a single point in R^d. Let us denote this set of cluster representatives as the set C = {c_j | j = 1, ..., k}. These k cluster representatives are also called the cluster means or cluster centroids. In clustering algorithms, points are grouped by some notion of "closeness" or "similarity". In k-means, the default measure of closeness is the Euclidean distance. In particular, one can readily show that k-means attempts to minimize the following non-negative cost function:

    ∑_{i=1}^{N}  min_{j=1,...,k}  ‖x_i − c_j‖²        (1)

In other words, k-means attempts to minimize the total squared Euclidean distance between each point x_i and its closest cluster representative c_j. Equation 1 is often referred to as the k-means objective function.
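As a concrete illustration, the objective in Equation 1 can be evaluated directly. The following is a minimal Matlab sketch (the function name kmeans_cost and the variable names are illustrative, not taken from the project code); X holds one data point per row and C one cluster centroid per row.

% Evaluate the k-means objective (Eq. 1): for every point, take the squared
% Euclidean distance to its closest centroid and sum over all points.
function J = kmeans_cost(X, C)
    % X : N-by-d matrix, one data point per row
    % C : k-by-d matrix, one cluster centroid per row
    N = size(X, 1);
    J = 0;
    for i = 1:N
        diffs = C - repmat(X(i, :), size(C, 1), 1);  % difference to every centroid
        d2    = sum(diffs .^ 2, 2);                  % squared distances to all centroids
        J     = J + min(d2);                         % keep the closest one
    end
end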


3 How the k-means clustering algorithm works

The k-means clustering algorithm proceeds step by step as follows:

[Figure: flowchart of the k-means clustering algorithm]

Step 1. Begin with a decision on the value of k = the number of clusters.

Step 2. Choose any initial partition that classifies the data into k clusters. You may assign the training samples randomly, or systematically as follows:

1. Take the first k training samples as single-element clusters.
2. Assign each of the remaining (N − k) training samples to the cluster with the nearest centroid. After each assignment, recompute the centroid of the gaining cluster.

Step 3. Take each sample in sequence and compute its distance from the centroid of each of the clusters. If a sample is not currently in the cluster with the closest centroid, switch this sample to that cluster and update the centroids of both the cluster gaining the new sample and the cluster losing the sample.


Step 4. Repeat Step 3 until convergence is achieved, that is, until a pass through the training samples causes no new assignments.

If the number of data points is less than the number of clusters, we assign each data point as the centroid of a cluster, and each centroid gets a cluster number. If the number of data points is larger than the number of clusters, then for each data point we calculate the distance to all centroids and take the minimum; the point is said to belong to the cluster whose centroid is nearest.

Since we are not sure about the location of the centroids, we need to adjust the centroid locations based on the currently assigned data and then reassign all the data points to these new centroids. This process is repeated until no data point moves to another cluster any more (a minimal code sketch of this loop is given after the two conditions below). Mathematically, this loop can be proved to converge. Convergence will always occur because:

1. Each switch of a sample to a new cluster (Step 3) decreases the sum of distances from each training sample to that sample's cluster centroid.

2. There are only finitely many partitions of the training samples into k clusters.
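The sequential scheme described in Steps 1-4 can be written down directly. Below is a minimal Matlab sketch of that scheme; the function name kmeans_sequential and the guard that prevents emptying a cluster are our own additions, not part of the original description. X is an N-by-d matrix with one sample per row.

% Sequential k-means following Steps 1-4 above.
function [m, C] = kmeans_sequential(X, k)
    N = size(X, 1);
    % Step 2 (systematic initialisation): the first k samples form
    % single-element clusters; each remaining sample joins the cluster with
    % the nearest centroid, recomputing the centroid of the gaining cluster.
    m = zeros(N, 1);
    m(1:k) = (1:k)';
    C = X(1:k, :);
    for i = k+1:N
        [~, j] = min(sum((C - repmat(X(i, :), k, 1)).^2, 2));
        m(i) = j;
        C(j, :) = mean(X(m == j, :), 1);
    end
    % Steps 3-4: pass through the samples until no sample switches cluster.
    changed = true;
    while changed
        changed = false;
        for i = 1:N
            [~, j] = min(sum((C - repmat(X(i, :), k, 1)).^2, 2));
            if j ~= m(i) && sum(m == m(i)) > 1    % do not empty a cluster
                old = m(i);
                m(i) = j;
                C(j, :)   = mean(X(m == j, :), 1);    % update gaining cluster
                C(old, :) = mean(X(m == old, :), 1);  % update losing cluster
                changed = true;
            end
        end
    end
end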

4 Task Formulation

4.1 K-means implementation

To implement the K-means clustering algorithm, follow the description given below.

Tasks:

1. Download the test data data.mat and display it using ppatterns(). The file data.mat contains a single variable X, a 2×N matrix of 2D points.

2. Run the algorithm. In each iteration, display the locations of the means μ_j and the current classification of the test data. To display the classification, use the ppatterns function again.

3. In each iteration, plot the average distance between the points and their respective closest means μ_j.


4. Experiment with different numbers K of means, e.g. K = 2, 3, 4. Execute the algorithm repeatedly, initialising the mean values μ_j with random positions; use the function rand. (A sketch of such an experiment is given below.)
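The following is a minimal sketch of tasks 2-4, under some assumptions: a batch (Lloyd-style) assign-and-update loop is used instead of the course's kminovec routine, the ppatterns visualisation calls are omitted, and X is the 2×N matrix loaded from data.mat. Only built-in Matlab functions are used.

% Run k-means for K = 2, 3, 4 with random initial means and plot the average
% distance between the points and their closest means in every iteration.
load('data.mat');                       % provides X, a 2-by-N matrix of 2D points
P = X';                                 % one point per row
for K = [2 3 4]                         % task 4: different numbers of means
    lo = min(P); hi = max(P);
    mu = repmat(lo, K, 1) + rand(K, 2) .* repmat(hi - lo, K, 1);  % random init (rand)
    avgDist = zeros(1, 50);
    for it = 1:50                       % fixed number of iterations for simplicity
        % assignment step: squared distance from every point to every mean
        D = zeros(size(P, 1), K);
        for j = 1:K
            D(:, j) = sum((P - repmat(mu(j, :), size(P, 1), 1)).^2, 2);
        end
        [dmin, idx] = min(D, [], 2);
        avgDist(it) = mean(sqrt(dmin)); % task 3: average distance to closest mean
        % update step: move each mean to the centroid of the points assigned to it
        for j = 1:K
            if any(idx == j)
                mu(j, :) = mean(P(idx == j, :), 1);
            end
        end
    end
    figure; plot(avgDist);
    title(sprintf('K = %d', K)); xlabel('iteration'); ylabel('average distance');
end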


4.2 Estimation of parameters of a Gaussian mixture

Let us assume that the distribution of our data is a mixture of three Gaussians:

    p(x) = ∑_{j=1}^{3} P(j) N(x | μ_j, Σ_j),

where N(μ_j, Σ_j) denotes a normal distribution with mean value μ_j and covariance Σ_j, and P(j) denotes the weight of the j-th Gaussian within the mixture. The task is, for given input data x_1, x_2, ..., x_N, to estimate the mixture parameters μ_j, Σ_j, P(j).

Tasks:

1. In each iteration of the implemented k-means algorithm, re-estimate the means μ_j and covariances Σ_j using the maximum likelihood method. P(j) will be the relative number (percentage) of data points classified to the j-th cluster (a sketch of this re-estimation is given below).

2. In each iteration, plot the total likelihood L of the estimated parameters μ_j, Σ_j, P(j), i.e. the likelihood of the data under the estimated mixture (in practice the log-likelihood is usually plotted).
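The re-estimation step can be sketched as follows, assuming a hard assignment vector idx produced by k-means (idx(i) is the cluster of point i) and an N-by-d data matrix X; the function name and the direct evaluation of the Gaussian density are our own choices, so no toolbox functions are required.

% Maximum-likelihood re-estimation of the mixture parameters from a hard
% classification, plus the data log-likelihood under the resulting mixture.
function [mu, Sigma, P, L] = reestimate_gmm(X, idx, k)
    [N, d] = size(X);
    mu = zeros(k, d); Sigma = zeros(d, d, k); P = zeros(k, 1);
    for j = 1:k
        Xj = X(idx == j, :);
        P(j)     = size(Xj, 1) / N;            % relative size of the j-th cluster
        mu(j, :) = mean(Xj, 1);                % ML estimate of the mean
        centred  = Xj - repmat(mu(j, :), size(Xj, 1), 1);
        Sigma(:, :, j) = (centred' * centred) / size(Xj, 1);   % ML covariance
    end
    % log-likelihood: L = sum_i log( sum_j P(j) N(x_i | mu_j, Sigma_j) )
    L = 0;
    for i = 1:N
        p = 0;
        for j = 1:k
            diff = (X(i, :) - mu(j, :))';
            S = Sigma(:, :, j);
            p = p + P(j) * exp(-0.5 * diff' * (S \ diff)) ...
                    / ((2 * pi)^(d / 2) * sqrt(det(S)));
        end
        L = L + log(p);
    end
end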


4.3 Unsupervised learning

Apply k-means clustering to the problem of "unsupervised learning". The input consists of images of three letters, H, L and T. It is not known which letter is shown in which image. The task is to classify the images into three classes. The images are described by the two usual measurements:

x = (sum of pixel intensities in the left half of the image) − (sum of pixel intensities in the right half of the image)
y = (sum of pixel intensities in the upper half of the image) − (sum of pixel intensities in the lower half of the image)

Tasks:

1. Download the images of the letters (image_data.mat) and compute the measurements x and y.

2. Using the k-means method, classify the images into three classes. In each iteration, display the means μ_j, the current classification, and the likelihood L.

3. After the iteration stops, compute and display the average image of each of the three classes (a sketch is given below). To display the final classification, you can use the show_class function.
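A minimal sketch of task 3, assuming the images are stored as an H×W×N array called images (as in the Matlab code at the end of this report) and that class is an N×1 vector of labels produced by the k-means classification; both names are illustrative.

% Display the average image of each of the three classes.
figure;
for j = 1:3
    avgImg = mean(images(:, :, class == j), 3);   % pixel-wise mean over the class
    subplot(1, 3, j);
    imagesc(avgImg); colormap(gray); axis image off;
    title(sprintf('Class %d', j));
end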


[Figures: visualisation of the classification by show_class, and the average images of Class 1, Class 2 and Class 3]

5 Limitations

The greedy-descent nature of k-means on a non-convex cost implies that convergence is only to a local optimum, and indeed the algorithm is typically quite sensitive to the initial centroid locations. In other words, initializing the set of cluster representatives C differently can lead to very different clusters, even on the same dataset D, and a poor initialization can lead to very poor clusters. The local-minima problem can be countered to some extent by running the algorithm multiple times with different initial centroids and then selecting the best result, or by doing a limited local search about the converged solution. Other approaches include methods that attempt to keep k-means from converging to local minima, and a variety of initialization methods have been proposed, along with discussions of other limitations of k-means.

As mentioned, choosing the optimal value of k may be difficult. If one has knowledge about the dataset, such as the number of partitions that naturally comprise it, then that knowledge can be used to choose k. Otherwise, one must use some other criterion, thus solving a model selection problem. One naive solution is to try several different values of k and choose the clustering which minimizes the k-means objective function (Equation 1). Unfortunately, the value of the objective function is not as informative as one would hope in this case: the cost of the optimal solution decreases with increasing k until it hits zero when the number of clusters equals the number of distinct data points. This makes it difficult to use the objective function to (a) directly compare solutions with different numbers of clusters and (b) find the optimal value of k.

Thus, if the desired k is not known in advance, one will typically run k-means with different values of k, and then use some other, more suitable criterion to select one of the results (a sketch of such a sweep is given below). For example, SAS uses the cubic clustering criterion, while X-means adds a complexity term (which increases with k) to the original cost function (Eq. 1) and then identifies the k which minimizes this adjusted cost. Alternatively, one can progressively increase the number of clusters, in conjunction with a suitable stopping criterion. Bisecting k-means achieves this by first putting all the data into a single cluster and then recursively splitting the least compact cluster into two using 2-means. The celebrated LBG algorithm used for vector quantization doubles the number of clusters until a suitable codebook size is obtained. Both these approaches thus alleviate the need to know k beforehand.
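To illustrate the model-selection problem, the sketch below runs k-means for several values of k, using multiple random restarts to reduce the effect of local minima, and records the final objective value. It assumes the kmeans function from Matlab's Statistics and Machine Learning Toolbox and an N-by-d data matrix X.

% Sweep over k, keeping the best of 10 random restarts for each value.
ks = 1:10;
cost = zeros(size(ks));
for t = 1:numel(ks)
    [~, ~, sumd] = kmeans(X, ks(t), 'Replicates', 10);
    cost(t) = sum(sumd);     % total within-cluster sum of squared distances (Eq. 1)
end
figure; plot(ks, cost, 'o-');
xlabel('k'); ylabel('k-means objective');
% The curve decreases monotonically with k, so the raw objective alone cannot
% select k; a complexity penalty or another criterion is needed.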

6 Difficulties with k-means

k-means suffers from several other problems that can be understood by first noting that the problem of fitting data using a mixture of k Gaussians with identical, isotropic covariance matrices (Σ = σ²I, where I is the identity matrix) results in a "soft" version of k-means. More precisely, if the soft assignments of data points to the mixture components of such a model are instead hardened so that each data point is solely allocated to the most likely component, then one obtains the k-means algorithm. From this connection it is evident that k-means inherently assumes that the dataset is composed of a mixture of k balls or hyperspheres of data, and that each of the k clusters corresponds to one of the mixture components. Because of this implicit assumption, k-means will falter whenever the data is not well described by a superposition of reasonably separated spherical Gaussian distributions. For example, k-means will have trouble if there are non-convex shaped clusters in the data. This problem may be alleviated by rescaling the data to "whiten" it before clustering, or by using a different distance measure that is more appropriate for the dataset. For example, information-theoretic clustering uses the KL-divergence to measure the distance between two data points representing two discrete probability distributions. It has recently been shown that if one measures distance by selecting any member of a very large class of divergences called Bregman divergences during the assignment step, and makes no other changes, the essential properties of k-means, including guaranteed convergence, linear separation boundaries and scalability, are retained. This result makes k-means effective for a much larger class of datasets, so long as an appropriate divergence is used.

Another method of dealing with non-convex clusters is to pair k-means with another algorithm. For example, one can first cluster the data into a large number of groups using k-means. These groups are then agglomerated into larger clusters using single-link hierarchical clustering, which can detect complex shapes. This approach also makes the solution less sensitive to initialization, and since the hierarchical method provides results at multiple resolutions, one does not need to worry about choosing an exact value for k either; instead, one can simply use a large value for k when creating the initial clusters.

The algorithm is also sensitive to the presence of outliers, since the "mean" is not a robust statistic. A preprocessing step to remove outliers can be helpful, and post-processing the results, for example to eliminate small clusters or to merge close clusters into a larger cluster, is also desirable. Another potential issue is the problem of "empty" clusters. When running k-means, particularly with large values of k and/or when the data resides in a very high-dimensional space, it is possible that at some point of execution there exists a cluster representative c_j such that all points in D are closer to some other cluster representative. When the points in D are assigned to their closest cluster, the j-th cluster will then have zero points assigned to it; that is, cluster j is now an empty cluster. The standard algorithm does not guard against empty clusters, but simple extensions (such as reinitializing the cluster representative of the empty cluster, or "stealing" some points from the largest cluster) are possible; one such repair is sketched below.
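The sketch below reinitializes the representative of an empty cluster to the point that is currently farthest from its assigned centroid; this is one of several simple heuristics and is not prescribed by the basic algorithm. idx is the current assignment vector, C the k-by-d centroid matrix and X the N-by-d data matrix.

% Repair empty clusters by moving their representatives onto the worst-fitted point.
function [idx, C] = fix_empty_clusters(X, idx, C)
    for j = 1:size(C, 1)
        if ~any(idx == j)                              % cluster j is empty
            dists = zeros(size(X, 1), 1);
            for i = 1:size(X, 1)
                dists(i) = sum((X(i, :) - C(idx(i), :)).^2);
            end
            [~, far] = max(dists);                     % point farthest from its centroid
            C(j, :)  = X(far, :);                      % becomes the new representative
            idx(far) = j;
        end
    end
end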

7 Available software

Because of the k-means algorithm's simplicity, effectiveness, and historical importance, software to run the k-means algorithm is readily available in several forms. It is a standard feature in many popular data mining software packages; for example, it can be found in Weka and in SAS under the FASTCLUS procedure. It is also commonly included as an add-on to existing software: several implementations of k-means are available as parts of various toolboxes in Matlab, and k-means is also available in Microsoft Excel after adding XLMiner. Finally, several stand-alone versions of k-means exist and can easily be found on the Internet. The algorithm is also straightforward to code, and the reader is encouraged to create their own implementation of k-means as an exercise.

8 Applications of the k-Means Clustering Algorithm

Optical character recognition, speech recognition, and encoding/decoding are examples of applications of k-means. A survey of the literature on the subject, however, offers a more in-depth treatment of some other practical applications, such as "data detection … for burst-mode optical receiver[s]" and the recognition of musical genres. Researchers describe "burst-mode data-transmission systems": a "significant feature of burst-mode data transmissions is that due to unequal distances between" sender and receivers, "signal attenuation is not the same" for all receivers. Because of this, "conventional receivers are not suitable for burst-mode data transmissions." The importance, they note, is that many "high-speed optical multi-access network applications, [such as] optical bus networks [and] WDMA optical star networks" can use burst-mode receivers.

In their paper, they provide a "new, efficient burst-mode signal detection scheme" that utilizes "a two-step data clustering method based on a K-means algorithm." They go on to explain that "the burst-mode signal detection problem" can be expressed as a "binary hypothesis," determining whether a bit is 0 or 1. Further, although they could use maximum likelihood sequence estimation (MLSE) to determine the class, it "is very computationally complex, and not suitable for high-speed burst-mode data transmission." Thus, they use an approach based on k-means to solve the practical problem where simple MLSE is not enough.

9 Conclusion

This project has described the k-means clustering algorithm and its application to the problem of unsupervised learning. The k-means algorithm is a simple iterative clustering algorithm that partitions a dataset into k clusters. At its core, the algorithm works by iterating over two steps:

1) assigning every point in the dataset to its closest cluster representative, and

2) re-estimating the cluster representatives.

Limitations of the k-means algorithm include its sensitivity to initialization and the difficulty of determining the value of k. Despite its drawbacks, k-means remains the most widely used partitional clustering algorithm in practice. The algorithm is simple, easily understandable and reasonably scalable, and it can be easily modified to deal with different scenarios such as semi-supervised learning or streaming data. Continual improvements and generalizations of the basic algorithm have ensured its continued relevance and have gradually increased its effectiveness as well.

References

1. http://www.ideal.ece.utexas.edu/papers/km.pdf
2. http://www.science.uva.nl/research/ias/alumni/m.sc.theses/theses/NoahLaith.doc
3. http://cw.felk.cvut.cz/cmp/courses/ae4b33rpz/Labs/kmeans/index_en.html


Matlab code for the unsupervised learning tasks

clear; close all;
load('data.mat');                        % X = 2x140 matrix of 2D points

%% Part 1: k-means on the test data
model = kminovec(X, 4, 10, 1);           % course k-means routine (kminovec)

%% Part 2: Gaussian mixture model of the data
% clear
Gmodel.Mean = [-2, 1; 1, 1; 0, -1]';     % component means, one column per component
Gmodel.Cov(:, :, 1) = [0.1  0; 0 0.1];   % component covariance matrices
Gmodel.Cov(:, :, 2) = [0.3  0; 0 0.3];
Gmodel.Cov(:, :, 3) = [0.01 0; 0 0.5];
Gmodel.Prior = [0.4; 0.4; 0.2];          % mixture weights

gmm = gmmsamp(Gmodel, 100);              % sample 100 points from the mixture
figure(gcf); clf;
ppatterns(gmm.X, gmm.y);
axis([-3 3 -3 3]);

model = kminovec(gmm.X, 3, 10, 1, gmm);

figure(gcf); plot(model.L);              % plot the likelihood L over the iterations

%% Part 3: unsupervised learning on the letter images
data = load('image_data.mat');

for i = 1:size(data.images, 3)
    % x: sum of pixel intensities, left half minus right half of the image
    pX(i) = sum(sum(data.images(:, 1:floor(end/2), i))) ...
          - sum(sum(data.images(:, (floor(end/2)+1):end, i)));
    % y: sum of pixel intensities, upper half minus lower half of the image
    pY(i) = sum(sum(data.images(1:floor(end/2), :, i))) ...
          - sum(sum(data.images((floor(end/2)+1):end, :, i)));
end

model = kminovec([pX; pY], 3, 10, 1);
show_class(data.images, model.class');

%% Sampling from a Gaussian mixture and comparing with its density
model = struct('Mean', [-2 3; 5 8], 'Cov', [1 0.5], 'Prior', [0.4 0.6]);
figure; hold on;
plot([-4:0.1:5], pdfgmm([-4:0.1:5], model), 'r');
sample = gmmsamp(model, 500);
[Y, X] = hist(sample.X, 10);
bar(X, Y/500);
