Obtaining Feature Importance with XGBoost: Principle and Practice

Published on CSDN by 惠小惠, 2019-04-13

1. The principle behind XGBoost's feature importance ranking

XGBoost chooses which feature to split on based on the gain in the structure score, and a feature's importance is the total number of times it appears as a split point across all trees. In other words, the more often an attribute is used to build the decision trees in the model, the higher its relative importance.
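As a quick illustration of this counting definition, here is a minimal sketch, assuming bst is a trained xgboost.Booster (for example the one trained in the code below): it reproduces the 'weight' importance by simply counting the split nodes that use each feature in the text dump of the trees.

from collections import Counter

split_counts = Counter()
for tree in bst.get_dump():             # one text dump per boosted tree
    for node in tree.splitlines():
        if '[' in node:                 # split nodes look like "0:[age<54.5] yes=1,no=2,..."
            feature = node.split('[')[1].split('<')[0]
            split_counts[feature] += 1

print(dict(split_counts))                           # manual count per feature
print(bst.get_score(importance_type='weight'))      # should report the same numbers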
2. Methods for ranking feature importance with XGBoost

  1. XGBoost exposes feature importance through get_score:
    for importance_type in ('weight', 'gain', 'cover', 'total_gain', 'total_cover'):
        print('%s: ' % importance_type, bst.get_score(importance_type=importance_type))
    weight - the number of times the feature is used to split the data across all trees.
    gain - the average gain of the splits that use the feature, across all trees.
    cover - the average coverage of the feature when it is used in trees, i.e. the average number of samples affected by splits on that feature.

  2. Use plot_importance to plot the importance ranking of each feature.

  3. Features can be selected from the importance scores by testing multiple thresholds and judging each candidate subset with a classifier-quality metric.
    Below is an empirical analysis using the Kaggle heart disease data.

import numpy as np
import pandas as pd

from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.metrics import accuracy_score, confusion_matrix, mean_squared_error, roc_auc_score
from xgboost import plot_importance
from matplotlib import pyplot as plt
import xgboost as xgb

# Data preparation (assumed; the original post does not show this step):
# the Kaggle heart disease CSV with 'target' as the label column.
data = pd.read_csv('heart.csv')
X = data.drop('target', axis=1)
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)
xgtrain = xgb.DMatrix(X_train, label=y_train)

# Use get_score on a Booster trained with xgb.train to obtain weight, gain and cover.
params = {'max_depth': 7,
          'n_estimators': 80,        # sklearn-wrapper parameter; xgb.train ignores it and uses num_boost_round
          'learning_rate': 0.1,
          'nthread': 4,
          'subsample': 1.0,
          'colsample_bytree': 0.5,
          'min_child_weight': 3,
          'seed': 1301}
bst = xgb.train(params, xgtrain, num_boost_round=1)
for importance_type in ('weight', 'gain', 'cover', 'total_gain', 'total_cover'):
    print('%s: ' % importance_type, bst.get_score(importance_type=importance_type))

# Plot the first boosted tree (requires the graphviz package).
import graphviz
xgb.plot_tree(bst)
plt.show()

The output is as follows:
weight: {'slope': 2, 'sex': 2, 'age': 7, 'chol': 13, 'trestbps': 9, 'restecg': 2}
gain: {'slope': 4.296458304, 'sex': 2.208011625, 'age': 0.8395543860142858, 'chol': 0.6131722695384615, 'trestbps': 0.49512829022222227, 'restecg': 0.679761901}
cover: {'slope': 116.5, 'sex': 106.0, 'age': 24.714285714285715, 'chol': 22.846153846153847, 'trestbps': 18.555555555555557, 'restecg': 18.0}
total_gain: {'slope': 8.592916608, 'sex': 4.41602325, 'age': 5.8768807021, 'chol': 7.971239503999999, 'trestbps': 4.456154612000001, 'restecg': 1.359523802}
total_cover: {'slope': 233.0, 'sex': 212.0, 'age': 173.0, 'chol': 297.0, 'trestbps': 167.0, 'restecg': 36.0}
Note that total_gain = weight × gain and total_cover = weight × cover; for example, slope: 2 × 4.296458304 = 8.592916608.
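A minimal check of that relationship, assuming bst is the Booster trained above:

# Verify that total_gain = weight * gain for every feature (total_cover behaves the same way).
weight = bst.get_score(importance_type='weight')
gain = bst.get_score(importance_type='gain')
total_gain = bst.get_score(importance_type='total_gain')

for feat in weight:
    assert abs(weight[feat] * gain[feat] - total_gain[feat]) < 1e-4, feat
print("total_gain == weight * gain for all features")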

from sklearn.feature_selection import SelectFromModel   # used below for threshold-based feature selection

# Fit the sklearn wrapper on the training data.
model = xgb.XGBClassifier()
model.fit(X_train, y_train)

# Use plot_importance to plot the importance ranking of each feature.
from xgboost import plot_importance
plot_importance(model)
plt.show()

The result is as follows:
[Figure: plot_importance bar chart, features ranked by F score]
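By default plot_importance ranks features by 'weight'; the same chart can be drawn for the other importance types described in section 2. A minimal sketch, assuming model is the fitted XGBClassifier from the snippet above:

# Compare the ranking under 'weight' (split counts) with the ranking under 'gain' (average gain).
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
plot_importance(model, importance_type='weight', ax=axes[0], title='weight')
plot_importance(model, importance_type='gain', ax=axes[1], title='gain')
plt.tight_layout()
plt.show()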

# We can select features from the feature importances by testing multiple thresholds. Specifically,
# the importance score of each input variable essentially lets us test every importance-ranked feature subset.
# make predictions for test data and evaluate
y_pred = model.predict(X_test)
predictions = [round(value) for value in y_pred]
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
# Fit model using each importance as a threshold
thresholds = np.sort(model.feature_importances_)
for thresh in thresholds:
	# select features using threshold
	selection = SelectFromModel(model, threshold=thresh, prefit=True)
	select_X_train = selection.transform(X_train)
	# train model
	selection_model = xgb.XGBClassifier()
	selection_model.fit(select_X_train, y_train)
	# eval model
	select_X_test = selection.transform(X_test)
	y_pred = selection_model.predict(select_X_test)
	predictions = [round(value) for value in y_pred]
	accuracy = accuracy_score(y_test, predictions)
	print("Thresh=%.3f, n=%d, Accuracy: %.2f%%" % (thresh, select_X_train.shape[1], accuracy*100.0))

Accuracy: 84.62%
Thresh=0.025, n=13, Accuracy: 84.62%
Thresh=0.026, n=12, Accuracy: 80.22%
Thresh=0.026, n=11, Accuracy: 79.12%
Thresh=0.028, n=10, Accuracy: 76.92%
Thresh=0.032, n=9, Accuracy: 78.02%
Thresh=0.036, n=8, Accuracy: 80.22%
Thresh=0.041, n=7, Accuracy: 76.92%
Thresh=0.066, n=6, Accuracy: 76.92%
Thresh=0.085, n=5, Accuracy: 84.62%
Thresh=0.146, n=4, Accuracy: 80.22%
Thresh=0.151, n=3, Accuracy: 76.92%
Thresh=0.163, n=2, Accuracy: 74.73%
Thresh=0.174, n=1, Accuracy: 78.02%
From the results above, as the threshold increases and the number of features decreases, accuracy tends to drop. In general, using cross-validation as the model evaluation scheme is likely a more useful strategy; this will be implemented with xgb.cv in the next post.
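As a taste of that idea, here is a sketch using sklearn's cross_val_score (not the xgb.cv workflow the next post covers): one candidate feature subset is scored with 5-fold cross-validation instead of the single train/test split used above. The threshold 0.085 is the one that looked best in the scan; model, X_train and y_train come from the snippets above.

from sklearn.model_selection import cross_val_score

# Evaluate one candidate threshold with 5-fold CV instead of a single hold-out split.
selection = SelectFromModel(model, threshold=0.085, prefit=True)
select_X = selection.transform(X_train)
scores = cross_val_score(xgb.XGBClassifier(), select_X, y_train, cv=5, scoring='accuracy')
print("CV accuracy: %.2f%% (+/- %.2f%%)" % (scores.mean() * 100, scores.std() * 100))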
