
Evaluation Metrics in Machine Learning


from sklearn.metrics import classification_report
y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))

Output:
                precision    recall  f1-score   support

     class 0       0.50      1.00      0.67         1
     class 1       0.00      0.00      0.00         1
     class 2       1.00      0.67      0.80         3

    accuracy                           0.60         5
   macro avg       0.50      0.56      0.49         5
weighted avg       0.70      0.60      0.61         5

The leftmost column lists the class labels, and the support column on the right gives the number of true samples of each label. The precision, recall, and f1-score columns give each class's precision, recall, and F1 score.

Accuracy = (TP + TN) / (TP + FP + TN + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

Precision measures how many of the samples predicted as positive are actually positive; Recall measures how many of the actual positive samples are successfully found.

F1 = 2 * Precision * Recall / (Precision + Recall)
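As a quick sanity check, the formulas above can be verified against sklearn for a single class. The sketch below reuses the example's `y_true`/`y_pred` and checks class 2, where TP = 2, FP = 0, FN = 1:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]

# Per-class scores for class 2 only (average=None returns one value per label)
p = precision_score(y_true, y_pred, labels=[2], average=None)[0]  # TP/(TP+FP) = 2/2
r = recall_score(y_true, y_pred, labels=[2], average=None)[0]     # TP/(TP+FN) = 2/3
f1 = f1_score(y_true, y_pred, labels=[2], average=None)[0]

# F1 agrees with the harmonic-mean formula
print(p, r, f1, 2 * p * r / (p + r))
```

These match the 1.00 / 0.67 / 0.80 row for class 2 in the report above.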

macro avg is the unweighted arithmetic mean over the classes. Taking precision as an example: macro avg = (0.50 + 0 + 1.00) / 3 = 0.50

weighted avg weights each class by its share of the total sample count. Taking precision as an example: weighted avg = (0.5*1 + 0*1 + 1*3) / 5 = 0.70
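Both averages are available directly through the `average` parameter of sklearn's metric functions; a small sketch reproducing the hand computations above:

```python
from sklearn.metrics import precision_score

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]

# Unweighted mean of per-class precisions: (0.50 + 0.00 + 1.00) / 3
macro = precision_score(y_true, y_pred, average='macro')
# Mean weighted by class support: (0.5*1 + 0.0*1 + 1.0*3) / 5
weighted = precision_score(y_true, y_pred, average='weighted')
print(macro, weighted)  # approximately 0.5 and 0.7
```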

In addition, there is the micro average (micro avg), which pools the TP/FP/FN counts of all classes before computing the metric.

from sklearn.metrics import confusion_matrix, precision_score

y_true = ["A", "A", "A", "A", "B", "B", "C", "C", "C", "C", "C"]
y_pred = ["A", "B", "A", "A", "B", "A", "B", "C", "C", "C", "C"]
print(confusion_matrix(y_true, y_pred))
print(precision_score(y_true, y_pred, average='micro'))

[[3 1 0]
 [1 1 0]
 [0 1 4]]
0.7272727272727273

For class A, TP = 3 and FP = 1; for class B, TP = 1 and FP = 2; for class C, TP = 4 and FP = 0. The micro avg precision is therefore:

(3 + 1 + 4) / ((3 + 1) + (1 + 2) + (4 + 0)) = 8/11 ≈ 0.7273
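In single-label multiclass classification, every misclassified sample is simultaneously one class's false positive and another class's false negative, so micro precision, micro recall, micro F1, and accuracy all coincide. This can be checked directly on the same data:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = ["A", "A", "A", "A", "B", "B", "C", "C", "C", "C", "C"]
y_pred = ["A", "B", "A", "A", "B", "A", "B", "C", "C", "C", "C"]

# Micro averaging pools TP/FP/FN over all classes first, so all four
# metrics reduce to (number correct) / (total samples) = 8/11.
micro_p = precision_score(y_true, y_pred, average='micro')
micro_r = recall_score(y_true, y_pred, average='micro')
micro_f1 = f1_score(y_true, y_pred, average='micro')
acc = accuracy_score(y_true, y_pred)
print(micro_p, micro_r, micro_f1, acc)  # all 8/11 ≈ 0.7273
```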


Source: https://www.cnblogs.com/bonne-chance/p/16122037.html