How to determine the optimum number of clusters using K-Means Clustering

Hi everyone,
Does anyone know how to determine the optimum number of clusters in K-Means Clustering? If there is any MATLAB code for it, I would really appreciate it.
Thanks in advance, Lina

Answers (3)

Walter Roberson 2012-4-5
Basically, there isn't a way, not really.
There are papers on the topic that show algorithms that have been developed. The algorithms mostly involve running K-Means with a fixed number of clusters, running it again with 1 more cluster, then again with 1 more yet, and so on, and trying to figure out the "best" point in the downturn curve of classification effectiveness.
Since, after all, you can get better classification by running with as many clusters as you have points, the algorithms try to figure out a "reasonable" stopping place where the error rate "isn't too bad" and increasing the number of clusters "doesn't help much". There is of course a lot of subjectivity in that, and it depends upon having a good measure for "error rate", which you often don't have (and which tends to vary with context when you do have it).
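As a rough illustration of that sweep (not Walter's code, just a minimal sketch assuming a numeric data matrix X and the kmeans function from the Statistics and Machine Learning Toolbox), you can plot the total within-cluster sum of distances against k and look for the point where the curve flattens:

% Sweep candidate k values and record the total within-cluster sum of distances
X = rand(200, 2);                     % placeholder data for the sketch
kvals = 1:10;                         % candidate numbers of clusters
totalWithin = zeros(size(kvals));
for i = 1:numel(kvals)
    [~, ~, sumd] = kmeans(X, kvals(i), 'Replicates', 5);   % sumd: per-cluster sums of point-to-centroid distances
    totalWithin(i) = sum(sumd);
end
figure;
plot(kvals, totalWithin, '-o');
xlabel('Number of clusters k');
ylabel('Total within-cluster sum of distances');
title('Look for the point where adding clusters stops helping much');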

kira 2019-5-2
Old question, but I just found a way myself by looking at the MATLAB documentation:
klist = 2:n;                     % the numbers of clusters you want to try (n is your upper bound)
myfunc = @(X,K) kmeans(X, K);    % clustering function handle for evalclusters
eva = evalclusters(net.IW{1}, myfunc, 'CalinskiHarabasz', 'KList', klist)   % no semicolon, so the evaluation object is displayed
classes = kmeans(net.IW{1}, eva.OptimalK);
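Note that net.IW{1} above is just kira's own data (the input-weight matrix of a trained network). For an ordinary dataset you would pass your data matrix directly, and evalclusters also accepts the built-in 'kmeans' name instead of a function handle; a generic variant, assuming a numeric data matrix X:

eva = evalclusters(X, 'kmeans', 'CalinskiHarabasz', 'KList', 2:10);   % evaluate k = 2..10
classes = kmeans(X, eva.OptimalK);                                    % cluster with the suggested k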

Umar 2024-8-8

Hi @lina ,

Normally, to determine the optimal number of clusters in K-Means Clustering, you can use the Elbow Method or the Silhouette Method. These methods help identify an appropriate number of clusters based on the data distribution. The example code snippet further below implements the Elbow Method via the kmeans_opt function, which you can download from the following link:

https://www.mathworks.com/matlabcentral/fileexchange/65823-kmeans_opt

By following this guide, you can adapt and apply the method to various datasets and obtain accurate clustering results with clear visualizations.
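For the Silhouette Method, here is a minimal sketch (an assumption on my part, not part of the kmeans_opt example below) using the silhouette function from the Statistics and Machine Learning Toolbox: compute the mean silhouette value for each candidate k and pick the k that maximizes it.

X = rand(100, 2);                     % example data, as in the snippet below
kvals = 2:8;                          % candidate numbers of clusters
meanSil = zeros(size(kvals));
for i = 1:numel(kvals)
    idx = kmeans(X, kvals(i), 'Replicates', 5);
    s = silhouette(X, idx);           % per-point silhouette values in [-1, 1]
    meanSil(i) = mean(s);
end
[~, best] = max(meanSil);
fprintf('Silhouette suggests k = %d\n', kvals(best));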

% Example code snippet

% Load or generate sample data
X = rand(100, 2);                  % example data: 100 points in 2D

% Run k-means optimization (kmeans_opt from the File Exchange link above)
[IDX, C, SUMD, K] = kmeans_opt(X);

% Print results
fprintf('Optimal number of clusters: %d\n', K);
fprintf('Centroids:\n');
disp(C);
fprintf('Total sum of distances: %.4f\n', sum(SUMD));   % summed across clusters in case SUMD is returned per cluster

% Plot results
figure;
hold on;
gscatter(X(:,1), X(:,2), IDX);                                  % points colored by cluster
plot(C(:,1), C(:,2), 'kx', 'MarkerSize', 15, 'LineWidth', 3);   % centroids
title('K-Means Clustering Results');
xlabel('Feature 1');
ylabel('Feature 2');
labels = arrayfun(@(k) sprintf('Cluster %d', k), 1:K, 'UniformOutput', false);   % one label per detected cluster
legend([labels, {'Centroids'}]);
hold off;


The code snippet above provides a clear example of how to perform K-Means clustering: it randomly generates 100 data points in 2D, calls the custom function kmeans_opt to perform the clustering, displays the optimal number of clusters, the centroids, and the sum of distances, and visualizes the results with data points colored by cluster and centroids marked. Please let me know if you have any further questions.
