
ML: K-means Clustering


Source: Machine Learning, taught by Andrew Ng (Stanford University) on Coursera - Machine Learning | Coursera


Unsupervised Learning - Clustering - K-means Algorithm

notations:

$K$: the number of clusters

$\mu_k$: the $k$-th cluster centroid, $\mu_k \in \mathbb{R}^n$

$c^{(i)}$: the index of the cluster centroid nearest to the $i$-th example $x^{(i)}$, $c^{(i)} \in \{1,2,\dots,K\}$

algorithm process:

randomly initialize $K$ cluster centroids

repeat{

  for $i=1$ to $m$: $c^{(i)}$ := index of the cluster centroid closest to $x^{(i)}$

  for $k=1$ to $K$: $\mu_k$ := mean of the points assigned to cluster $k$

}
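Putting the two steps together, here is a minimal NumPy sketch of the loop above (the function name `kmeans`, the stopping rule, and the handling of empty clusters are my own choices, not part of the course material):

```python
import numpy as np

def kmeans(X, K, max_iters=100):
    """Plain K-means: X is an (m, n) data matrix, K the number of clusters."""
    m, n = X.shape
    # randomly initialize K cluster centroids by picking K distinct examples
    centroids = X[np.random.choice(m, K, replace=False)]
    for _ in range(max_iters):
        # cluster assignment step: c[i] = index of the centroid nearest to x^(i)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        c = dists.argmin(axis=1)
        # move centroid step: mu_k = mean of the points assigned to cluster k
        new_centroids = np.array([X[c == k].mean(axis=0) if np.any(c == k)
                                  else centroids[k]  # keep an empty cluster's centroid
                                  for k in range(K)])
        if np.allclose(new_centroids, centroids):
            break  # centroids stopped moving, so the assignments are stable
        centroids = new_centroids
    return centroids, c
```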

distortion function:

The optimization objective of the K-means algorithm is the distortion function:

$$ J(c^{(1)},c^{(2)},\cdots,c^{(m)},\mu_1,\mu_2,\cdots,\mu_K) = \frac{1}{m} \sum_{i=1}^{m}\left\| x^{(i)} - \mu_{c^{(i)}}\right\|^2 $$

In each iteration, the cluster-assignment step minimizes $J$ with respect to the $c^{(i)}$'s (holding the centroids fixed), and the move-centroid step minimizes $J$ with respect to the $\mu_k$'s (holding the assignments fixed). Thus the distortion never increases: it either decreases or stays the same after each iteration, which is why the algorithm converges.
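As a quick check, $J$ can be evaluated directly from the assignments and centroids; a small helper (my own naming, matching the `kmeans` sketch above) might look like:

```python
import numpy as np

def distortion(X, c, centroids):
    # J = (1/m) * sum_i || x^(i) - mu_{c^(i)} ||^2
    return np.mean(np.sum((X - centroids[c]) ** 2, axis=1))
```

Computing $J$ after every iteration is also a handy debugging aid: if it ever increases, there is a bug in the implementation.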

random initialization:

One common way to initialize the cluster centroids is to randomly pick $K$ training examples and use them as the initial centroids. However, different initial centroids may lead to different answers; in other words, the algorithm may end up in different local minima:

[figure: different random initializations converging to different local minima]

To deal with this, run the algorithm multiple times with different random initial centroids, and keep the clustering that gives the lowest value of the distortion function $J$.
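A sketch of this "best of several random restarts" idea, reusing the hypothetical `kmeans` and `distortion` helpers above:

```python
import numpy as np

def kmeans_best_of(X, K, n_init=50):
    # run K-means n_init times and keep the run with the lowest distortion J
    best_J, best_centroids, best_c = np.inf, None, None
    for _ in range(n_init):
        centroids, c = kmeans(X, K)
        J = distortion(X, c, centroids)
        if J < best_J:
            best_J, best_centroids, best_c = J, centroids, c
    return best_centroids, best_c, best_J
```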

choosing the number of clusters:

One way is the "elbow" method: plot the distortion $J$ against the number of clusters $K$, and choose the $K$ at the "elbow", i.e. the point before which $J$ decreases rapidly and after which it decreases much more slowly.

[figure: elbow method - distortion $J$ plotted against the number of clusters $K$]
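To make the elbow visible in practice, one can compute the best distortion found for each candidate $K$ and plot it; the snippet below is an illustrative sketch using random placeholder data and the helpers defined above:

```python
import numpy as np
import matplotlib.pyplot as plt

X = np.random.randn(300, 2)                 # placeholder data; substitute real examples
Ks = range(1, 11)
Js = [kmeans_best_of(X, K)[2] for K in Ks]  # best J found for each K

plt.plot(list(Ks), Js, marker='o')
plt.xlabel('number of clusters K')
plt.ylabel('distortion J')
plt.title('elbow method')
plt.show()
```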

Another way applies when the clustering is done for some downstream purpose: choose the $K$ that works best for that later task.
