Machine Learning Algorithm Notes, Part 3: Linear Models

Posted by marsjhao on 2020-04-06

I. Overview

The "linear" in a linear model refers to a linear combination of first-order features: in two-dimensional space the model is a straight line, in three-dimensional space it is a plane, and in n-dimensional space it is a hyperplane. Replacing the identity link with other link functions yields generalized linear models. Common linear and generalized linear models include ridge regression, Lasso regression, Elastic Net, logistic regression, and linear discriminant analysis.

II. Algorithm Notes

1. Ordinary Linear Regression

Linear regression is a regression-analysis technique, and regression analysis is essentially a function-estimation problem.

Given a dataset, we define the model and its loss function; the goal is to minimize the loss. The parameters w and b that minimize the loss can be found with gradient descent. Note that when using gradient descent, the features should be normalized (feature scaling). Feature scaling has two benefits: it speeds up convergence, and it can improve accuracy by letting every feature contribute to the result on a comparable scale. A small sketch follows.
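As an illustration (a minimal sketch, not from the original post), sklearn's StandardScaler can be paired with the gradient-descent-based SGDRegressor; without the scaling step, SGD on the raw features converges much more slowly:

from sklearn.datasets import load_diabetes
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)
# Standardize each feature to zero mean and unit variance before the
# gradient-descent-based regressor, so every feature is on a comparable scale.
model = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000, tol=1e-3, random_state=0))
model.fit(X, y)
print('Training R^2: %.2f' % model.score(X, y))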

2. Generalized Linear Models

Let h(y) = w^T x + b; this gives a generalized linear model. For example, ln y = w^T x + b fits y through exp(w^T x + b), so the relationship between x and y is in essence nonlinear.
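As a concrete illustration (a sketch on synthetic data, not from the original post), the log-linear model ln y = w^T x + b can be fitted by running ordinary linear regression on ln y and exponentiating the predictions:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(100, 1))
y = np.exp(1.5 * X[:, 0] + 0.3 + rng.normal(scale=0.05, size=100))  # y grows exponentially in x

# Fit ln y = w^T x + b with ordinary least squares, then map predictions back with exp
reg = LinearRegression().fit(X, np.log(y))
y_hat = np.exp(reg.predict(X))
print('w = %s, b = %.2f' % (reg.coef_, reg.intercept_))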


3. Logistic Regression

The logistic regression model is

P(Y=1|x) = exp(w·x + b) / (1 + exp(w·x + b)),    P(Y=0|x) = 1 / (1 + exp(w·x + b)),

where w and x may be written in augmented form, i.e. the bias b is absorbed into w by appending a constant 1 to x.

The odds of an event are the ratio of the probability that the event occurs to the probability that it does not. If the event occurs with probability p, its odds are p/(1-p); for example, p = 0.8 gives odds of 4. The log odds (logit) of the event is

logit(p) = ln( p / (1-p) ).

For logistic regression this yields

ln( P(Y=1|x) / (1 - P(Y=1|x)) ) = w·x + b.

That is, in the logistic regression model the log odds of the output Y=1 is a linear function of the input x.

For a given training set T = {(x_1, y_1), ..., (x_N, y_N)} with x_i ∈ R^n and y_i ∈ {0, 1}, the model parameters are estimated by maximum likelihood. Writing π(x) = P(Y=1|x) (with w and x in augmented form), the likelihood function is

∏_{i=1}^{N} [π(x_i)]^{y_i} [1 - π(x_i)]^{1 - y_i},

and the log-likelihood is

L(w) = Σ_{i=1}^{N} [ y_i (w·x_i) - ln(1 + exp(w·x_i)) ].

Maximizing L(w) gives the estimate of w, which is then substituted back into the logistic regression model.
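A minimal numpy sketch of this maximization (not from the original post), using plain gradient ascent on L(w) with the bias absorbed into w:

import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=1000):
    """Estimate w by gradient ascent on the log-likelihood L(w)."""
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # augmented form: append a constant 1
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # pi(x_i) = P(Y=1 | x_i)
        w += lr * X.T @ (y - p) / len(y)           # gradient of L(w) is sum_i (y_i - pi(x_i)) x_i
    return w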

4. Linear Discriminant Analysis

The idea behind Linear Discriminant Analysis (LDA) is:

Training: project the training samples onto a line such that projections of samples from the same class are as close together as possible, while projections of samples from different classes are as far apart as possible. This line is what the algorithm learns.

Prediction: project the sample onto the learned line and assign its class according to where its projection falls. A short sklearn example follows.
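The experiments in the next section do not include LDA, so here is a minimal sketch (an addition, not part of the original experiments) using sklearn's LinearDiscriminantAnalysis on the iris dataset:

from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0, stratify=y)
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
print('LDA score: %.2f' % lda.score(X_test, y_test))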

III. Sklearn Implementation and Experimental Results

import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.model_selection import train_test_split

# Load the diabetes dataset bundled with sklearn
def load_data():
    diabetes = datasets.load_diabetes()
    return train_test_split(diabetes.data, diabetes.target,
                            test_size=0.25, random_state=0)

# Ordinary linear regression
def test_LinearRegression(*data):
    X_train, X_test, y_train, y_test = data
    regr = linear_model.LinearRegression()
    regr.fit(X_train, y_train)
    print('Linear regression')
    # Weights and intercept of the fitted model
    print('Coefficients:%s, intercept %.2f' % (regr.coef_, regr.intercept_))
    # Mean squared error of the predictions on the test set
    print('Mean squared error: %.2f' % np.mean((regr.predict(X_test) - y_test) ** 2))
    # Prediction score (R^2): at most 1.0, higher is better; it can be negative when predictions are very poor
    print('Score: %.2f' % regr.score(X_test, y_test))
    
X_train, X_test, y_train, y_test = load_data()
test_LinearRegression(X_train, X_test, y_train, y_test)

# Ridge regression
def test_Ridge(*data):
    X_train, X_test, y_train, y_test = data
    regr = linear_model.Ridge()
    regr.fit(X_train, y_train)
    print('Ridge regression')
    print('Coefficients:%s, intercept %.2f' % (regr.coef_, regr.intercept_))
    print('Mean squared error: %.2f' % np.mean((regr.predict(X_test) - y_test) ** 2))
    print('Score: %.2f' % regr.score(X_test, y_test))

X_train, X_test, y_train, y_test = load_data()
test_Ridge(X_train, X_test, y_train, y_test)

# Examine how different alpha values affect ridge regression performance
def test_Ridge_alpha(*data):
    X_train, X_test, y_train, y_test = data
    print('Effect of alpha on ridge regression performance')
    alphas = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]
    scores = []
    # Fit one ridge model per candidate alpha and record its test score
    for alpha in alphas:
        regr = linear_model.Ridge(alpha=alpha)
        regr.fit(X_train, y_train)
        scores.append(regr.score(X_test, y_test))
    # Visualize the effect of alpha on the test score
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.plot(alphas, scores)
    ax.set_xlabel(r"$\alpha$")
    ax.set_ylabel('score')
    ax.set_xscale('log')
    ax.set_title('Ridge')
    plt.show()

X_train, X_test, y_train, y_test = load_data()
test_Ridge_alpha(X_train, X_test, y_train, y_test)

# Lasso regression
def test_Lasso(*data):
    X_train, X_test, y_train, y_test = data
    regr = linear_model.Lasso()
    regr.fit(X_train, y_train)
    print('Lasso regression')
    print('Coefficients:%s, intercept %.2f' % (regr.coef_, regr.intercept_))
    print('Mean squared error: %.2f' % np.mean((regr.predict(X_test) - y_test) ** 2))
    print('Score: %.2f' % regr.score(X_test, y_test))

X_train, X_test, y_train, y_test = load_data()
test_Lasso(X_train, X_test, y_train, y_test)

# Examine how different alpha values affect Lasso regression performance
def test_Lasso_alpha(*data):
    X_train, X_test, y_train, y_test = data
    print('Effect of alpha on Lasso regression performance')
    alphas = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]
    scores = []
    # Fit one Lasso model per candidate alpha and record its test score
    for alpha in alphas:
        regr = linear_model.Lasso(alpha=alpha)
        regr.fit(X_train, y_train)
        scores.append(regr.score(X_test, y_test))
    # Visualize the effect of alpha on the test score
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.plot(alphas, scores)
    ax.set_xlabel(r"$\alpha$")
    ax.set_ylabel('score')
    ax.set_xscale('log')
    ax.set_title('Lasso')
    plt.show()

X_train, X_test, y_train, y_test = load_data()
test_Lasso_alpha(X_train, X_test, y_train, y_test)

# ElasticNet regression
def test_ElasticNet(*data):
    X_train, X_test, y_train, y_test = data
    regr = linear_model.ElasticNet()
    regr.fit(X_train, y_train)
    print('ElasticNet regression')
    print('Coefficients:%s, intercept %.2f' % (regr.coef_, regr.intercept_))
    print('Mean squared error: %.2f' % np.mean((regr.predict(X_test) - y_test) ** 2))
    print('Score: %.2f' % regr.score(X_test, y_test))

X_train, X_test, y_train, y_test = load_data()
test_ElasticNet(X_train, X_test, y_train, y_test)

# Examine how alpha and l1_ratio (rho) jointly affect ElasticNet performance
def test_ElasticNet_alpha_rho(*data):
    X_train, X_test, y_train, y_test = data
    print('Effect of alpha and rho on ElasticNet regression performance')
    alphas = np.logspace(-2, 2)
    rhos = np.linspace(0.01, 1)
    scores = []
    for alpha in alphas:
        for rho in rhos:          
            regr = linear_model.ElasticNet(alpha=alpha, l1_ratio=rho)
            regr.fit(X_train, y_train)
            scores.append(regr.score(X_test, y_test))
    # Visualize the effect of alpha and rho on the test score.
    # The loop above is alpha-major, so reshape the scores to (n_alphas, n_rhos)
    # and transpose to match meshgrid's (rho, alpha) grid layout.
    scores = np.array(scores).reshape(len(alphas), len(rhos)).T
    alphas, rhos = np.meshgrid(alphas, rhos)
    from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the '3d' projection)
    from matplotlib import cm
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1, projection='3d')
    surf = ax.plot_surface(alphas, rhos, scores, rstride=1, cstride=1, 
                           cmap=cm.jet, linewidth=0, antialiased=False)
    fig.colorbar(surf, shrink=0.5, aspect=5)
    ax.set_xlabel(r"$\alpha$")
    ax.set_ylabel(r"$\rho$")
    ax.set_zlabel('score')
    ax.set_title('ElasticNet')
    plt.show()

X_train, X_test, y_train, y_test = load_data()
test_ElasticNet_alpha_rho(X_train, X_test, y_train, y_test)

# Load the iris dataset bundled with sklearn
def load_data1():
    iris = datasets.load_iris()
    X_train = iris.data
    y_train = iris.target
    return train_test_split(X_train, y_train,
                            test_size=0.25, random_state=0, stratify=y_train)

# Logistic regression
def test_LogisticRegression(*data):
    X_train, X_test, y_train, y_test = data
    regr = linear_model.LogisticRegression()
    regr.fit(X_train, y_train)
    print('Logistic regression')
    print('Coefficients:%s, intercept %s' % (regr.coef_, regr.intercept_))
    print('Score: %.2f' % regr.score(X_test, y_test))
    
X_train, X_test, y_train, y_test = load_data1()
test_LogisticRegression(X_train, X_test, y_train, y_test)

# Multiclass (multinomial) logistic regression
def test_LogisticRegression_multinomial(*data):
    X_train, X_test, y_train, y_test = data
    regr = linear_model.LogisticRegression(multi_class='multinomial', solver='lbfgs')
    regr.fit(X_train, y_train)
    print('Multinomial logistic regression')
    print('Coefficients:%s, intercept %s' % (regr.coef_, regr.intercept_))
    print('Score: %.2f' % regr.score(X_test, y_test))
    
X_train, X_test, y_train, y_test = load_data1()
test_LogisticRegression_multinomial(X_train, X_test, y_train, y_test)

Output:

Linear regression
Coefficients:[ -43.26774487 -208.67053951  593.39797213  302.89814903 -560.27689824
  261.47657106   -8.83343952  135.93715156  703.22658427   28.34844354], intercept 153.07
Mean squared error: 3180.20
Score: 0.36

Ridge regression
Coefficients:[  21.19927911  -60.47711393  302.87575204  179.41206395    8.90911449
  -28.8080548  -149.30722541  112.67185758  250.53760873   99.57749017], intercept 152.45
Mean squared error: 3192.33
Score: 0.36

Effect of alpha on ridge regression performance

Lasso regression
Coefficients:[   0.           -0.          442.67992538    0.            0.            0.
   -0.            0.          330.76014648    0.        ], intercept 152.52
Mean squared error: 3583.42
Score: 0.28

Effect of alpha on Lasso regression performance


ElasticNet regression
Coefficients:[ 0.40560736  0.          3.76542456  2.38531508  0.58677945  0.22891647
 -2.15858149  2.33867566  3.49846121  1.98299707], intercept 151.93
Mean squared error: 4922.36
Score: 0.01

Effect of alpha and rho on ElasticNet regression performance


Logistic regression
Coefficients:[[ 0.39310895  1.35470406 -2.12308303 -0.96477916]
 [ 0.22462128 -1.34888898  0.60067997 -1.24122398]
 [-1.50918214 -1.29436177  2.14150484  2.2961458 ]], intercept [ 0.24122458  1.13775782 -1.09418724]
Score: 0.97

Multinomial logistic regression
Coefficients:[[-0.38365872  0.85501328 -2.27224244 -0.98486171]
 [ 0.34359409 -0.37367647 -0.03043553 -0.86135577]
 [ 0.04006464 -0.48133681  2.30267797  1.84621747]], intercept [  8.79984878   2.46853199 -11.26838077]
Score: 1.00

