1. The Logistic Regression API

  • sklearn.linear_model.LogisticRegression(solver='liblinear', penalty='l2', C=1.0)
    • solver: optional, one of {'liblinear', 'sag', 'saga', 'newton-cg', 'lbfgs'}
      • Default: 'liblinear', the algorithm used for the optimization problem
      • 'liblinear' suits small datasets; 'sag' and 'saga' suit large ones
      • For multiclass problems, only 'newton-cg', 'sag', 'saga', and 'lbfgs' can handle the multinomial loss; 'liblinear' is limited to one-versus-rest classification
    • penalty: type of regularization
    • C: inverse of regularization strength (a smaller C means stronger regularization)
    • By default the minority class is treated as the positive class
  • LogisticRegression differs from SGDClassifier(loss='log') in the optimizer: the former uses solvers such as SAG, while the latter uses plain stochastic gradient descent (SGD)

2. Classification Evaluation Metrics

2.1 Accuracy, Precision, and Recall

  • Accuracy
    • (TP+TN)/(TP+TN+FN+FP)
  • Precision — of the samples predicted positive, how many really are positive
    • TP/(TP+FP)


  • Recall — of the samples that really are positive, how many are found
    • TP/(TP+FN)


  • F1-score
    • F1 = 2·Precision·Recall/(Precision+Recall) = 2TP/(2TP+FN+FP)
  • API: sklearn.metrics.classification_report(y_true, y_pred, labels=[], target_names=None)

    • y_true: ground-truth target values
    • y_pred: predicted target values
    • labels: the label values (class numbers) to include in the report
    • target_names: display names for the labels
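
The formulas above can be checked by hand against the API. The labels and predictions below are hypothetical toy values chosen so the confusion counts are easy to read off (TP=3, FP=1, FN=1, TN=3):

```python
from sklearn.metrics import classification_report, precision_score, recall_score, f1_score

# Toy ground truth and predictions (hypothetical, for illustration)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Confusion counts: TP=3, FP=1, FN=1, TN=3
p  = precision_score(y_true, y_pred)  # TP/(TP+FP) = 3/4
r  = recall_score(y_true, y_pred)     # TP/(TP+FN) = 3/4
f1 = f1_score(y_true, y_pred)         # 2PR/(P+R) = 0.75
print(p, r, f1)

# classification_report prints the same metrics per class
print(classification_report(y_true, y_pred, labels=[0, 1],
                            target_names=['negative', 'positive']))
```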

2.2 ROC Curve and the AUC Metric

  • TPR = TP/(TP+FN): among all samples whose true class is 1, the fraction predicted as class 1

  • FPR = FP/(FP+TN): among all samples whose true class is 0, the fraction predicted as class 1
  • Probabilistic meaning of AUC: given a randomly drawn positive–negative pair, the probability that the positive sample scores higher than the negative one; AUC ∈ [0.5, 1], larger is better
  • API: sklearn.metrics.roc_auc_score(y_true, y_score)
    • The area under the ROC curve, i.e. the AUC value
    • y_true: the true label of each sample; must be 0 for negatives and 1 for positives
    • y_score: predicted score — the estimated probability of the positive class, a confidence value, or the classifier's decision output
  • AUC can only be used to evaluate binary classification, and is particularly good at assessing classifier performance on imbalanced data.

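The pairwise interpretation of AUC can be verified directly. The labels and scores below are hypothetical values, chosen so the positive–negative pairs are easy to enumerate:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical binary labels (0 = negative, 1 = positive) and predicted scores
y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]

# Positive-vs-negative score pairs:
#   0.35 > 0.1 (win), 0.35 < 0.4 (loss), 0.8 > 0.1 (win), 0.8 > 0.4 (win)
# 3 wins out of 4 pairs -> AUC = 0.75
auc = roc_auc_score(y_true, y_score)
print(auc)  # 0.75
```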

3. Code

```python
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score

# Load the data
names = ['Sample code number', 'Clump Thickness', 'Uniformity of Cell Size', 'Uniformity of Cell Shape',
         'Marginal Adhesion', 'Single Epithelial Cell size', 'Bare Nuclei', 'Bland Chromatin',
         'Normal Nucleoli', 'Mitoses', 'Class']
data = pd.read_csv(
    "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data",
    names=names
)

# Handle missing values ('?' marks a missing entry)
data = data.replace('?', np.nan).dropna()
print(data.describe())

# Build the features and labels
x = data.iloc[:, 1:-1]
y = data['Class']

# Split the dataset
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)

# Standardize the data: fit the scaler on the training set only,
# then apply the same transform to the test set
transfer = StandardScaler()
x_train = transfer.fit_transform(x_train)
x_test = transfer.transform(x_test)

# Build and train the model
estimator = LogisticRegression()
estimator.fit(x_train, y_train)

# Evaluate the model
y_pre = estimator.predict(x_test)
print("Predictions:", y_pre)
print("Accuracy:", estimator.score(x_test, y_test))  # "Accuracy: 0.9562043795620438"
print(classification_report(y_test, y_pre, labels=(2, 4), target_names=('benign', 'malignant')))
#               precision    recall  f1-score   support
#
#       benign       0.97      0.97      0.97        87
#    malignant       0.94      0.94      0.94        50
#
#     accuracy                           0.96       137
#    macro avg       0.95      0.95      0.95       137
# weighted avg       0.96      0.96      0.96       137

# Map the labels {2: benign, 4: malignant} to {0, 1}, as roc_auc_score requires
y_test = np.where(y_test > 2.5, 1, 0)
y_pre = np.where(y_pre > 2.5, 1, 0)
print("AUC:", roc_auc_score(y_true=y_test, y_score=y_pre))  # "AUC: 0.9527586206896552"
```