Common Performance Measures in Machine Learning

Posted by 洛陽山 on 2020-12-23

1. Introduction

This article compiles the performance measures used in the paper "The Impact of Automated Parameter Optimization on Defect Prediction Models" [1].

When I first read this paper, I was impressed for quite a while. From a pool of 101 datasets, the authors selected 18 spanning multiple languages and domains, and used 12 performance measures to examine how much automated parameter optimization improves the performance of popular machine learning classifiers (models). The whole experimental process is rigorous. The awe only faded when I read the same group's 2017 paper "An Empirical Comparison of Model Validation Techniques for Defect Prediction Models" [2]: it applies the same methodology, and the comparative experimental setup is much the same, like products off an assembly line. I shed tears of envy (survey-style papers like these can only be published by experts in the field).

2. Summary of Performance Measures

These two blog posts (in Chinese) are useful companion reading:
  • sklearn—評價指標大全 (a complete guide to sklearn evaluation metrics)
  • 機器學習效能評估指標 (machine learning performance evaluation metrics)

Most of these measures are derived from the binary-classification confusion matrix. An example confusion matrix:

[Figure: example binary confusion matrix with cells TP, FP, FN, TN]
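
To pin down the four counts the table below relies on, here is a minimal Python/scikit-learn sketch with made-up toy labels (not data from the paper; variable names are my own):

```python
# A minimal sketch of the binary confusion matrix the table below builds on.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = defective module, 0 = clean
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # classifier output

# With labels=[0, 1], ravel() yields TN, FP, FN, TP in that order.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
print(tn, fp, fn, tp)  # 3 1 1 3
```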

| Metric | Formula | Meaning | References |
|---|---|---|---|
| Precision | $P=\frac{TP}{TP+FP}$ | Proportion of modules predicted as defective that actually are defective | [3] |
| Recall (TPR) | $R=\frac{TP}{TP+FN}$ | Proportion of positives (defective modules) correctly classified | [3] |
| F-measure ($F_1$) | $F=2 \times \frac{P \times R}{P+R}$ | Harmonic mean of precision and recall | [3] |
| Specificity (TNR) | $S=\frac{TN}{TN+FP}$ | Proportion of negatives (non-defective modules) correctly classified | |
| False positive rate (FPR) | $FPR=\frac{FP}{TN+FP}$ | Proportion of negatives (non-defective modules) incorrectly classified | [4] |
| G-mean | $G_{mean}=\sqrt{R \times S}$ | Geometric mean of recall and specificity | |
| G-measure | $G_{measure}=\frac{2 \times pd \times (1-pf)}{pd+(1-pf)}$ | Harmonic mean of $pd$ and $1-pf$, where $pd=$ TPR and $pf=$ FPR | |
| Balance | $Balance=1-\sqrt{\frac{(0-pf)^2+(1-pd)^2}{2}}$ | Normalized Euclidean distance from the ideal point ($pf=0$, $pd=1$), where $pd=$ TPR and $pf=$ FPR | [5], [6] |
| Matthews correlation coefficient (MCC) | $MCC=\frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$ | Correlation coefficient between the actual and predicted classifications | [7] |
| AUC | | Area under the ROC curve | [8], [9], [10], [11], [12], [13] |
| Brier score | $Brier=\frac{1}{N}\sum_{i=1}^{N}(p_i-y_i)^2$ | Mean squared difference between predicted probabilities and actual outcomes | [14], [15] |
| LogLoss | $logloss=-\frac{1}{N}\sum_{i=1}^{N}\left(y_i\log(p_i)+(1-y_i)\log(1-p_i)\right)$ | Logistic classification loss | |
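
The threshold-based measures in the table are simple arithmetic over the four confusion-matrix counts. Below is a minimal Python sketch (my own helper, not code from the paper); it reuses the `tn, fp, fn, tp` values from the snippet above and assumes no denominator is zero:

```python
# Threshold-based measures from the table, computed directly from the
# confusion-matrix counts. Helper name and structure are my own.
import math

def threshold_metrics(tn, fp, fn, tp):
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)   # pd, a.k.a. TPR
    specificity = tn / (tn + fp)   # TNR
    fpr         = fp / (tn + fp)   # pf
    f1        = 2 * precision * recall / (precision + recall)
    g_mean    = math.sqrt(recall * specificity)
    g_measure = 2 * recall * (1 - fpr) / (recall + (1 - fpr))
    balance   = 1 - math.sqrt(((0 - fpr) ** 2 + (1 - recall) ** 2) / 2)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"precision": precision, "recall": recall, "f1": f1,
            "specificity": specificity, "fpr": fpr, "g_mean": g_mean,
            "g_measure": g_measure, "balance": balance, "mcc": mcc}

print(threshold_metrics(tn=3, fp=1, fn=1, tp=3))
```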

Notes:

  • MCC: essentially a correlation coefficient between the actual and the predicted classifications. Its range is [-1, 1]: 1 indicates a perfect prediction, 0 indicates a prediction no better than random, and -1 indicates total disagreement between prediction and actual classification.
  • Brier score: $p_i$ is the predicted probability and $y_i$ is the true label (0 or 1). The score ranges over [0, 1]: 0 is the best possible performance, 1 the worst, and 0.25 corresponds to random guessing (always predicting 0.5).
  • LogLoss: $p_i$ is the predicted probability and $y_i$ is the true label (0 or 1). It is the standard performance measure in Kaggle competitions.
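
For the three probability-based measures (AUC, Brier score, LogLoss), scikit-learn ships ready-made functions. A minimal sketch with toy probabilities (not data from the paper):

```python
# Probability-based measures via scikit-learn; y_true/p_pred are toy values.
from sklearn.metrics import brier_score_loss, log_loss, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                  # true labels
p_pred = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3]  # predicted P(defective)

print("AUC:    ", roc_auc_score(y_true, p_pred))     # higher is better
print("Brier:  ", brier_score_loss(y_true, p_pred))  # lower is better
print("LogLoss:", log_loss(y_true, p_pred))          # lower is better
```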

3. References

[1] C. Tantithamthavorn, S. McIntosh, A. E. Hassan, and K. Matsumoto, "The Impact of Automated Parameter Optimization on Defect Prediction Models," IEEE Transactions on Software Engineering (TSE), vol. 45, no. 7, pp. 683–711, 2019, doi: 10.1109/TSE.2018.2794977.
[2] C. Tantithamthavorn, S. McIntosh, A. E. Hassan, and K. Matsumoto, "An Empirical Comparison of Model Validation Techniques for Defect Prediction Models," IEEE Transactions on Software Engineering (TSE), vol. 43, no. 1, pp. 1–18, 2017, doi: 10.1109/TSE.2016.2584050.
[3] W. Fu, T. Menzies, and X. Shen, "Tuning for software analytics: is it really necessary?" Information and Software Technology, vol. 76, pp. 135–146, 2016.
[4] T. Menzies, J. Greenwald, and A. Frank, "Data Mining Static Code Attributes to Learn Defect Predictors," IEEE Transactions on Software Engineering (TSE), vol. 33, no. 1, pp. 2–13, 2007.
[5] H. Zhang and X. Zhang, "Comments on 'Data Mining Static Code Attributes to Learn Defect Predictors'," IEEE Transactions on Software Engineering (TSE), vol. 33, no. 9, pp. 635–636, 2007.
[6] A. Tosun, "Ensemble of Software Defect Predictors: A Case Study," in Proceedings of the International Symposium on Empirical Software Engineering and Measurement (ESEM), 2008, pp. 318–320.
[7] M. Shepperd, D. Bowes, and T. Hall, "Researcher Bias: The Use of Machine Learning in Software Defect Prediction," IEEE Transactions on Software Engineering (TSE), vol. 40, no. 6, pp. 603–616, 2014.
[8] S. den Boer, N. F. de Keizer, and E. de Jonge, "Performance of prognostic models in critically ill cancer patients - a review," Critical Care, vol. 9, no. 4, pp. R458–R463, 2005.
[9] F. E. Harrell Jr., Regression Modeling Strategies, 1st ed. Springer, 2002.
[10] J. Huang and C. X. Ling, "Using AUC and accuracy in evaluating learning algorithms," IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 3, pp. 299–310, 2005.
[11] S. Lessmann, B. Baesens, C. Mues, and S. Pietsch, "Benchmarking Classification Models for Software Defect Prediction: A Proposed Framework and Novel Findings," IEEE Transactions on Software Engineering (TSE), vol. 34, no. 4, pp. 485–496, 2008.
[12] E. W. Steyerberg, Clinical Prediction Models: A Practical Approach to Development, Validation, and Updating. Springer Science & Business Media, 2008.
[13] E. W. Steyerberg, A. J. Vickers, N. R. Cook, T. Gerds, N. Obuchowski, M. J. Pencina, and M. W. Kattan, "Assessing the performance of prediction models: a framework for some traditional and novel measures," Epidemiology, vol. 21, no. 1, pp. 128–138, 2010.
[14] G. W. Brier, "Verification of Forecasts Expressed in Terms of Probability," Monthly Weather Review, vol. 78, no. 1, pp. 25–27, 1950.
[15] K. Rufibach, "Use of Brier score to assess binary predictions," Journal of Clinical Epidemiology, vol. 63, no. 8, pp. 938–939, 2010.
