Java Class Name: com.alibaba.alink.operator.batch.evaluation.EvalClusterBatchOp
Python Class Name: EvalClusterBatchOp

Description

Cluster evaluation assesses the prediction results of a clustering algorithm. The following metrics are supported.

- Compactness (CP): a lower CP indicates smaller within-cluster distances.
- Separation (SP): a higher SP indicates larger between-cluster distances.
- Davies-Bouldin Index (DB): a smaller DB indicates smaller within-cluster distances together with larger between-cluster distances.
- Calinski-Harabasz Index (VRC): a larger VRC indicates better clustering quality.
- Purity: Purity lies in [0, 1]; the closer to 1, the better the clustering result.
- Normalized Mutual Information (NMI): NMI lies in [0, 1]; the closer to 1, the better the clustering result.
- Rand Index (RI): RI lies in [0, 1]; the closer to 1, the better the clustering result.
- Adjusted Rand Index (ARI): ARI lies in [-1, 1]; the closer to 1, the better the clustering result.

Purity, NMI, RI, and ARI are computed against ground-truth labels (the labelCol parameter); a cross-check sketch for these four metrics follows this list.
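The sketch below recomputes the four label-based metrics with scikit-learn on a toy label/prediction pair, purely as an outside-of-Alink cross-check; scikit-learn >= 0.24 (for rand_score) is an assumption, and NMI normalization conventions differ between libraries, so values may not match Alink exactly.

```python
# Cross-check of the label-based metrics with scikit-learn (assumption: not part of Alink).
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score, rand_score
from sklearn.metrics.cluster import contingency_matrix

labels_true = [0, 0, 0, 1, 1, 1]   # ground-truth labels
labels_pred = [0, 0, 1, 1, 1, 1]   # predicted cluster ids

# Purity: fraction of samples that fall into the majority true class of their cluster.
cm = contingency_matrix(labels_true, labels_pred)   # rows: true classes, columns: clusters
purity = cm.max(axis=0).sum() / cm.sum()

print("Purity:", purity)
print("NMI:", normalized_mutual_info_score(labels_true, labels_pred))
print("RI:", rand_score(labels_true, labels_pred))
print("ARI:", adjusted_rand_score(labels_true, labels_pred))
```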

Parameters

| Name | Chinese Name | Description | Type | Required? | Value Range | Default Value |
|---|---|---|---|---|---|---|
| predictionCol | 预测结果列名 | Column name of the prediction result | String | ✓ | | |
| distanceType | 距离度量方式 | Distance measure type | String | | "EUCLIDEAN", "COSINE", "CITYBLOCK" | "EUCLIDEAN" |
| labelCol | 标签列名 | Name of the label column in the input table | String | | | null |
| vectorCol | 向量列名 | Name of the vector column in the input table | String | | The selected column must be of type DENSE_VECTOR, SPARSE_VECTOR, STRING, or VECTOR | null |
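A minimal sketch of combining the parameters above: setting labelCol enables the label-based metrics, and distanceType selects the distance used by the internal metrics. The getter names getPurity() and getNmi() are assumed by analogy with the getters shown in the example below and are not confirmed by this page; print(metrics) alone already shows the full summary.

```python
from pyalink.alink import *
import pandas as pd

useLocalEnv(1)

# Toy data with a ground-truth label column in addition to the predicted cluster id.
df = pd.DataFrame([
    [0, 0, "0 0 0"],
    [0, 0, "0.1,0.1,0.1"],
    [0, 0, "0.2,0.2,0.2"],
    [1, 1, "9 9 9"],
    [1, 1, "9.1 9.1 9.1"],
    [1, 1, "9.2 9.2 9.2"]
])
inOp = BatchOperator.fromDataframe(df, schemaStr='label int, pred int, vec string')

metrics = EvalClusterBatchOp() \
    .setVectorCol("vec") \
    .setPredictionCol("pred") \
    .setLabelCol("label") \
    .setDistanceType("EUCLIDEAN") \
    .linkFrom(inOp) \
    .collectMetrics()

print(metrics)                         # full metrics summary
print("Purity:", metrics.getPurity())  # assumed getter name
print("NMI:", metrics.getNmi())        # assumed getter name
```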

Code Example

Python Code

```python
from pyalink.alink import *
import pandas as pd

useLocalEnv(1)

# Dense vector strings may be separated by spaces or commas.
df = pd.DataFrame([
    [0, "0 0 0"],
    [0, "0.1,0.1,0.1"],
    [0, "0.2,0.2,0.2"],
    [1, "9 9 9"],
    [1, "9.1 9.1 9.1"],
    [1, "9.2 9.2 9.2"]
])
inOp = BatchOperator.fromDataframe(df, schemaStr='id int, vec string')

metrics = EvalClusterBatchOp() \
    .setVectorCol("vec") \
    .setPredictionCol("id") \
    .linkFrom(inOp) \
    .collectMetrics()

print("Total Samples Number:", metrics.getCount())
print("Cluster Number:", metrics.getK())
print("Cluster Array:", metrics.getClusterArray())
print("Cluster Count Array:", metrics.getCountArray())
print("CP:", metrics.getCp())
print("DB:", metrics.getDb())
print("SP:", metrics.getSp())
print("SSB:", metrics.getSsb())
print("SSW:", metrics.getSsw())
print("CH:", metrics.getVrc())
```

Java Code

```java
import org.apache.flink.types.Row;

import com.alibaba.alink.operator.batch.BatchOperator;
import com.alibaba.alink.operator.batch.evaluation.EvalClusterBatchOp;
import com.alibaba.alink.operator.batch.source.MemSourceBatchOp;
import com.alibaba.alink.operator.common.evaluation.ClusterMetrics;
import org.junit.Test;

import java.util.Arrays;
import java.util.List;

public class EvalClusterBatchOpTest {
    @Test
    public void testEvalClusterBatchOp() throws Exception {
        List <Row> df = Arrays.asList(
            Row.of(0, "0 0 0"),
            Row.of(0, "0.1,0.1,0.1"),
            Row.of(0, "0.2,0.2,0.2"),
            Row.of(1, "9 9 9"),
            Row.of(1, "9.1 9.1 9.1"),
            Row.of(1, "9.2 9.2 9.2")
        );
        BatchOperator <?> inOp = new MemSourceBatchOp(df, "id int, vec string");
        ClusterMetrics metrics = new EvalClusterBatchOp()
            .setVectorCol("vec")
            .setPredictionCol("id")
            .linkFrom(inOp)
            .collectMetrics();
        System.out.println("Total Samples Number:" + metrics.getCount());
        System.out.println("Cluster Number:" + metrics.getK());
        System.out.println("Cluster Array:" + Arrays.toString(metrics.getClusterArray()));
        System.out.println("Cluster Count Array:" + Arrays.toString(metrics.getCountArray()));
        System.out.println("CP:" + metrics.getCp());
        System.out.println("DB:" + metrics.getDb());
        System.out.println("SP:" + metrics.getSp());
        System.out.println("SSB:" + metrics.getSsb());
        System.out.println("SSW:" + metrics.getSsw());
        System.out.println("CH:" + metrics.getVrc());
    }
}
```

Results

```
Total Samples Number: 6
Cluster Number: 2
Cluster Array: ['0', '1']
Cluster Count Array: [3.0, 3.0]
CP: 0.11547005383792497
DB: 0.014814814814814791
SP: 15.588457268119896
SSB: 364.5
SSW: 0.1199999999999996
CH: 12150.000000000042
```
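As a sanity check, the printed CH value can be reproduced from SSB and SSW with the standard Calinski-Harabasz definition VRC = (SSB / (k - 1)) / (SSW / (n - k)); the snippet below is plain arithmetic on the output above, not an additional Alink API.

```python
# Recompute CH (VRC) from the printed SSB/SSW with n = 6 samples and k = 2 clusters.
n, k = 6, 2
ssb, ssw = 364.5, 0.1199999999999996
vrc = (ssb / (k - 1)) / (ssw / (n - k))
print(vrc)  # ~12150.0, matching the CH line above
```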