Column: 新语数据故事汇
《新语数据故事汇,数说新语》 popularizes data science, tells data stories, and digs deep to uncover the value in data.

Key Metrics for Choosing the Number of Clusters in K-Means and Hierarchical Clustering, with Python Example Code

新语数据故事汇 · WeChat Official Account · 2024-07-15 19:48


In data analysis and machine learning, clustering is a core technique for discovering patterns and insights in unlabeled data. Clustering groups data points so that points within a group are more similar to each other than to points in other groups, which matters in applications ranging from market segmentation to social network analysis. One of the most challenging aspects of clustering, however, is determining the optimal number of clusters, a decision that strongly affects the quality of the analysis.

While most data scientists rely on elbow plots and dendrograms to choose the number of clusters for K-Means and hierarchical clustering, there is a further set of cluster-validation techniques for selecting the optimal number of groups (clusters). We will implement a set of cluster-validation metrics using K-Means and hierarchical clustering on the sklearn.datasets.load_wine dataset. Most of the code snippets below are reusable and can be applied to any dataset in Python.
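For comparison with the metrics that follow, the familiar elbow method can be sketched with plain scikit-learn. This is a minimal illustration, not part of the original article's code; the k range and random_state are arbitrary choices:

```python
# Elbow-method sketch: within-cluster sum of squares (inertia) vs. number of clusters k
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Standardize the wine data, as the article does later
X = StandardScaler().fit_transform(load_wine().data)

ks = range(1, 11)
inertias = [KMeans(n_clusters=k, random_state=11, n_init=10).fit(X).inertia_
            for k in ks]

# Inertia always shrinks as k grows; the "elbow" is where the decrease levels off
for k, inertia in zip(ks, inertias):
    print(f"k={k:2d}  inertia={inertia:10.1f}")
```

The elbow is read off visually (or plotted with matplotlib), which is exactly the subjectivity the metrics below try to reduce.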

We will focus on the following metrics:

  • Gap Statistics ( !pip install --upgrade gap-stat[rust] )

  • Calinski-Harabasz Index ( !pip install yellowbrick )

  • Davies-Bouldin Score (included in scikit-learn)

  • Silhouette Score ( !pip install yellowbrick )

Importing Packages and Loading the Data

# Libraries to help with reading and manipulating data
import pandas as pd
import numpy as np

# Libraries to help with data visualization
import matplotlib.pyplot as plt
import seaborn as sns

# Removes the limit for the number of displayed columns
pd.set_option("display.max_columns", None)
# Sets the limit for the number of displayed rows
pd.set_option("display.max_rows", 200)

# To scale the data using z-score
from sklearn.preprocessing import StandardScaler

# To compute distances
from scipy.spatial.distance import cdist, pdist

# To perform k-means clustering and compute silhouette scores
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# To visualize the elbow curve and silhouette scores
from yellowbrick.cluster import KElbowVisualizer, SilhouetteVisualizer

# To perform hierarchical clustering, compute cophenetic correlation, and create dendrograms
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram, linkage, cophenet

sns.set(color_codes=True)

from sklearn.datasets import load_iris, load_wine, load_digits, make_blobs

wine = load_wine()
X_wine = wine.data
X_wine

Standardize the data:

scaler = StandardScaler()
X_wine_int = X_wine.copy()
X_wine_interim = scaler.fit_transform(X_wine_int)
X_wine_scaled = pd.DataFrame(X_wine_interim)
X_wine_scaled.head(10)

Gap Statistics

from gap_statistic import OptimalK
from sklearn.cluster import KMeans

def KMeans_clustering_func(X, k):
    """
    K-Means clustering function, which uses the KMeans model from sklearn.
    These user-defined functions *must* take the X (input features) and a k
    when initializing OptimalK.
    """
    # Include any clustering algorithm that can return cluster centers
    m = KMeans(random_state=11, n_clusters=k)
    m.fit(X)
    return m.cluster_centers_, m.predict(X)

# Create a wrapper around OptimalK to extract cluster centers and cluster labels
optimalK = OptimalK(clusterer=KMeans_clustering_func)

# Run OptimalK on the input data (X_wine_scaled) and the candidate numbers of clusters
n_clusters = optimalK(X_wine_scaled, cluster_array=np.arange(1, 15))
print('Optimal clusters: ', n_clusters)

# Gap Statistics data frame
optimalK.gap_df[['n_clusters', 'gap_value']]

plt.figure(figsize=(10, 6))
n_clusters = 3
plt.plot(optimalK.gap_df.n_clusters.values, optimalK.gap_df.gap_value.values, linewidth=2)
plt.scatter(optimalK.gap_df[optimalK.gap_df.n_clusters == n_clusters].n_clusters,
            optimalK.gap_df[optimalK.gap_df.n_clusters == n_clusters].gap_value, s=250)
plt.xlabel('Number of clusters')
plt.ylabel('Gap value')
plt.show()
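The three remaining metrics from the list above (Calinski-Harabasz, Davies-Bouldin, and silhouette) are all available directly in scikit-learn, so they can be computed without yellowbrick. The sketch below is a minimal, self-contained illustration; the k range of 2-6 and random_state mirror the choices used elsewhere in this article and are not prescribed by the libraries:

```python
# Score K-Means labelings for several candidate k with scikit-learn's built-in metrics
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

X = StandardScaler().fit_transform(load_wine().data)

results = {}
for k in range(2, 7):  # all three metrics require at least 2 clusters
    labels = KMeans(n_clusters=k, random_state=11, n_init=10).fit_predict(X)
    results[k] = {
        "silhouette": silhouette_score(X, labels),                # higher is better, in [-1, 1]
        "calinski_harabasz": calinski_harabasz_score(X, labels),  # higher is better
        "davies_bouldin": davies_bouldin_score(X, labels),        # lower is better
    }

for k, scores in results.items():
    print(k, scores)
```

The same labels can come from AgglomerativeClustering instead of KMeans; the metric functions only need the feature matrix and a label vector, so the loop applies unchanged to hierarchical clustering.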





