Deep Learning: Improving the Bird vs. Airplane Classifier with a Convolutional Neural Network

Posted by jp on 2023-04-08

Preparing the dataset: extracting the bird and airplane images from CIFAR-10

from torchvision import datasets
from torchvision import transforms
data_path = './data'

# Load the training set
cifar10 = datasets.CIFAR10(root=data_path, train=True, download=False)
# Load the validation set
cifar10_val = datasets.CIFAR10(root=data_path, train=False, download=False)

# ToTensor converts a 32x32x3 PIL image into a 3x32x32 tensor
to_tensor = transforms.ToTensor()

# Remap the labels (airplane: 0 -> 0, bird: 2 -> 1); otherwise training below
# fails with: IndexError: Target 2 is out of bounds
label_map={0:0, 2:1}

# Extract the bird and airplane images from the training and validation sets
cifar2 = [(to_tensor(img), label_map[label]) for img, label in cifar10 if label in [0, 2]]
cifar2_val = [(to_tensor(img), label_map[label]) for img, label in cifar10_val if label in [0, 2]]
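The remapping matters because nn.CrossEntropyLoss indexes the logits by the target class: with only two model outputs, the original CIFAR-10 label 2 for "bird" triggers exactly the error noted in the comment above. A minimal sketch (an aside, not part of the book's code):

```python
import torch
from torch import nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.randn(1, 2)                  # a model with two output classes

print(loss_fn(logits, torch.tensor([1])))   # targets 0 and 1 are valid
try:
    loss_fn(logits, torch.tensor([2]))      # unmapped CIFAR-10 "bird" label
except IndexError as err:
    print(err)                              # Target 2 is out of bounds.
```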

Check that the extraction succeeded:

import matplotlib.pyplot as plt
img, _ = cifar2[100]
plt.imshow(img.permute(1, 2, 0))
<matplotlib.image.AxesImage at 0x29bdaed6aa0>

Wrapping the datasets with DataLoader

from torch.utils.data import DataLoader

# Training data loader
train_loader = DataLoader(cifar2, batch_size=64, pin_memory=True, shuffle=True, num_workers=4, drop_last=True) # type: ignore
# Validation data loader
val_loader = DataLoader(cifar2_val, batch_size=64, pin_memory=True, num_workers=4, drop_last=True)

Subclassing nn.Module

We are going to give up the convenience of nn.Sequential in favor of the greater flexibility of subclassing nn.Module.

To subclass nn.Module, we need to define at least a forward() function, which takes the module's input and returns its output; this is where the module's computation takes place.

In PyTorch, as long as we use standard torch operations, autograd handles backpropagation automatically, so there is no need to define a backward() function.
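As a tiny illustration (not from the book), composing standard torch operations is enough for autograd to produce the gradient, with no hand-written backward():

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x      # standard torch ops record the computation graph
y.backward()            # autograd derives dy/dx = 2x + 3 for us
print(x.grad)           # tensor(7.)
```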

Redefining our model:

import torch
from torch import nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)    # convolutional layers
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=8, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(8*8*8, 32) # fully connected layers; the input is 8 channels of 8x8 feature maps
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)    # the 32x32 input shrinks to 16x16 after the first pooling
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)  # after the second pooling the feature maps are 8x8
        out = out.view(-1, 8*8*8)
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)
        return out

Suppose a convolutional layer has an input feature map of size \(W_{in}\times H_{in}\), kernel size \(K\), padding \(P\), and stride \(S\). Its output feature map size \(W_{out}\times H_{out}\) is then given by:

\(W_{out} = \lfloor \frac{W_{in}+2P-K}{S} \rfloor +1\)

\(H_{out} = \lfloor \frac{H_{in}+2P-K}{S} \rfloor +1\)
where \(\lfloor x \rfloor\) denotes \(x\) rounded down to the nearest integer.

In this code, the first convolutional layer has a 32x32 input feature map, kernel size 3, padding 1, and stride 1; plugging these into the formula gives:

\(W_{out} = \lfloor \frac{32+2\times1-3}{1} \rfloor +1 = 32\)

\(H_{out} = \lfloor \frac{32+2\times1-3}{1} \rfloor +1 = 32\)

So the first convolutional layer's output feature map is 32x32.
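The arithmetic above can be checked with a small helper function (a sketch for this post, not from the book):

```python
def conv_out_size(w_in, k, p=0, s=1):
    """Spatial output size of a conv/pool layer: floor((W + 2P - K) / S) + 1."""
    return (w_in + 2 * p - k) // s + 1

print(conv_out_size(32, k=3, p=1, s=1))  # 32: a 3x3 conv with padding 1 keeps the size
print(conv_out_size(32, k=2, s=2))       # 16: a 2x2 max pool with stride 2 halves it
print(conv_out_size(16, k=2, s=2))       # 8: the second pooling yields the 8x8 maps fc1 expects
```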

A quick check that the model runs:

model = Net()
model(img.unsqueeze(0))
tensor([[-0.0153, -0.1532]], grad_fn=<AddmmBackward0>)
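The two numbers are raw logits, one per class. For illustration, they can be read as probabilities by passing them through softmax:

```python
import torch

logits = torch.tensor([[-0.0153, -0.1532]])   # the untrained model's output above
probs = torch.softmax(logits, dim=1)
print(probs)            # both close to 0.5: the untrained model has no real preference
print(probs.sum())      # softmax outputs always sum to 1
```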

Training the convolutional neural network

The training process consists of two nested loops:

  • The outer loop: iterates over the epochs
  • The inner loop: trains on each batch of data delivered by the DataLoader

In each inner iteration:

  • Feed the inputs through the model (forward pass)
  • Compute the loss (forward pass)
  • Zero the old gradients
  • Call loss.backward() to compute the gradients of the loss with respect to all parameters (backward pass)
  • Step the optimizer toward lower loss

Define the training function and try training on the GPU:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Training on {device}.")
Training on cuda.
import datetime

def train_loop(n_epochs, optimizer, model, loss_fn, train_loader):
    for epoch in range(1, n_epochs+1):
        loss_train = 0.0
        for imgs, labels in train_loader:   # loop over the batches from the data loader

            imgs = imgs.to(device=device)   # move imgs and labels to the device
            labels = labels.to(device=device)

            outputs = model(imgs)           # run a batch through the model
            loss = loss_fn(outputs, labels) # compute the loss we want to minimize
            optimizer.zero_grad()           # zero the gradients from the previous round
            loss.backward()                 # run backpropagation
            optimizer.step()                # update the model
            loss_train += loss.item()       # accumulate the epoch loss; .item() detaches it from the graph

        if epoch == 1 or epoch % 10 == 0:
            print("{} Epoch {}, Train loss {}".             # total loss / number of batches = mean loss per batch
                  format(datetime.datetime.now(), epoch, loss_train / len(train_loader)))

We already have model and train_loader above; we still need an optimizer and a loss_fn.

import torch.optim as optim

# The model also has to be moved to the GPU, otherwise we get:
model = Net().to(device=device)    # RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

optimizer = optim.SGD(model.parameters(), lr=1e-2)  # stochastic gradient descent optimizer
loss_fn = nn.CrossEntropyLoss() # cross-entropy loss

# Run the training loop
train_loop(n_epochs=100,
            optimizer=optimizer,
            model=model,
            loss_fn=loss_fn,
            train_loader=train_loader)
2023-04-08 16:49:02.897419 Epoch 1, Train loss 0.6789790311684976
2023-04-08 16:50:12.260929 Epoch 10, Train loss 0.45727716023341203
2023-04-08 16:51:29.474510 Epoch 20, Train loss 0.3460641039105562
2023-04-08 16:52:45.412158 Epoch 30, Train loss 0.3255017975775095
2023-04-08 16:53:59.949844 Epoch 40, Train loss 0.3127688937462293
2023-04-08 16:55:14.758279 Epoch 50, Train loss 0.3003842735137695
2023-04-08 16:56:29.352129 Epoch 60, Train loss 0.2895182979603608
2023-04-08 16:57:44.294486 Epoch 70, Train loss 0.2761662933879938
2023-04-08 16:58:58.890680 Epoch 80, Train loss 0.2641859925710238
2023-04-08 17:00:13.058129 Epoch 90, Train loss 0.25313296078298336
2023-04-08 17:01:27.434814 Epoch 100, Train loss 0.2413799591266956
# Build a second, unshuffled training data loader for validation
train_loader_ = DataLoader(cifar2, batch_size=64, shuffle=False, num_workers=4, drop_last=True)

def validate(model, train_loader, val_loader):
    for name, loader in [('train', train_loader), ('val', val_loader)]:
        correct = 0
        total = 0
        with torch.no_grad():   # no gradients needed here: we are not updating parameters
            for imgs, labels in loader:

                imgs = imgs.to(device=device)
                labels = labels.to(device=device)

                outputs = model(imgs)
                _, predicted = torch.max(outputs, dim=1)    # take the index of the max output as the prediction

                total += labels.shape[0]
                correct += int((predicted == labels).sum())
        print("Accuracy: {}: {}".format(name, correct/total))

validate(model, train_loader_, val_loader)
Accuracy: train: 0.9037459935897436
Accuracy: val: 0.8765120967741935

The accuracy is decent, but the model is still too simple; let's keep tuning it, following the book!

Improving the neural network

Generally speaking, the quality of a trained model is determined mainly by three things: 1. the model architecture; 2. the training procedure; 3. the dataset.

Here we set the third factor aside. In practice, dataset quality strongly affects how well a model generalizes, but since we are working with a dataset designed for teaching, we only consider how the first two factors change the model's prediction accuracy.

Adding memory capacity: width

Width here means the width of the neural network: the number of neurons per layer, or the number of channels per convolution.

We only need to specify more output channels in the first convolutional layer, and grow the subsequent layers accordingly, to obtain longer intermediate vectors.

In addition, we pass the number of intermediate channels to __init__() as a parameter instead of hard-coding it.

Now rewrite the Net class:

class NetWidth(nn.Module):
    def __init__(self, n_channel=32):
        super().__init__()
        self.n_channel = n_channel
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=n_channel, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(in_channels=n_channel, out_channels=n_channel//2,        # the widened layers
                               kernel_size=3, padding=1)
        self.fc1 = nn.Linear((n_channel//2)*8*8, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = out.view(-1, (self.n_channel//2)*8*8)
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)
        return out

Now compare the number of parameters before and after widening:

n1 = sum(p.numel() for p in model.parameters())     # parameter count before widening
model2 = NetWidth().to(device=device)
n2 = sum(p.numel() for p in model2.parameters())    # parameter count after widening
print(n1)
print(n2)
18090
38386
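Both totals can be verified by hand, since each layer contributes its weights plus its biases (a check added for this post):

```python
# Original Net: (in_channels * out_channels * kH * kW + biases) per conv, plus the linear layers
n_net = (3*16*3*3 + 16) + (16*8*3*3 + 8) + ((8*8*8)*32 + 32) + (32*2 + 2)
print(n_net)        # 18090, matching n1

# NetWidth with the default n_channel=32
n_width = (3*32*3*3 + 32) + (32*16*3*3 + 16) + ((16*8*8)*32 + 32) + (32*2 + 2)
print(n_width)      # 38386, matching n2
```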

The greater the capacity, the more variability in its inputs the model can handle; but the chance of overfitting grows accordingly.

Besides enlarging the dataset, overfitting can also be countered by adjusting the training process.

Model convergence and generalization: regularization

  1. Weight penalties
    The first way to stabilize generalization is to add a regularization term to the loss. Here we use L2 regularization, the sum of the squares of all the weights in the model (L1 regularization is the sum of their absolute values).

    L2 regularization is also known as weight decay: its contribution to the negative gradient of a parameter \(w_i\) is \(-2\times \lambda\times w_i\), where \(\lambda\) is a hyperparameter, called weight decay in PyTorch.

    Adding L2 regularization to the loss function is therefore equivalent to decreasing each weight in proportion to its current value at the optimization step. Note that weight decay applies to all parameters of the network, including biases.
def training_loop_l2reg(n_epochs, optimizer, model, loss_fn, train_loader):
    for epoch in range(1, n_epochs+1):
        loss_train = 0.0
        for imgs, labels in train_loader:
            imgs = imgs.to(device=device)
            labels = labels.to(device=device)
            outputs = model(imgs)
            loss = loss_fn(outputs, labels)

            l2_lambda = 0.001       # add L2 regularization
            l2_norm = sum(p.pow(2.0).sum() for p in model.parameters())

            loss = loss+l2_lambda*l2_norm
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            loss_train += loss.item()

        if epoch==1 or epoch%10 == 0:
            print("{} Epoch {}, Training loss {}".format(
                datetime.datetime.now(), epoch, loss_train/len(train_loader)
            ))
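PyTorch's optimizers also build this in through the weight_decay argument of optim.SGD, which adds weight_decay * w to each parameter's gradient (the conventional factor of 2 is folded into the coefficient). A minimal sketch of the effect on a single parameter:

```python
import torch
from torch import nn, optim

w = nn.Parameter(torch.tensor([1.0]))
opt = optim.SGD([w], lr=0.1, weight_decay=0.5)

# With a zero loss, the only update comes from the decay term:
# w <- w - lr * weight_decay * w = 1.0 - 0.1 * 0.5 * 1.0 = 0.95
loss = (w * 0.0).sum()
opt.zero_grad()
loss.backward()
opt.step()
print(w.item())  # 0.95
```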
  2. Dropout

Dropout randomly zeroes neurons across the network at each training iteration. It effectively generates a model with a different neuron topology at every iteration, giving the neurons less chance to coordinate in the memorization that happens during overfitting. An alternative view is that dropout perturbs the features the model generates throughout the network, producing an effect close to data augmentation.
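Before wiring dropout into the model, its behavior can be seen in isolation (an illustration, not from the book): in training mode nn.Dropout2d zeroes whole channels and rescales the survivors by 1/(1-p), while in eval mode it is a no-op:

```python
import torch
from torch import nn

drop = nn.Dropout2d(p=0.5)
x = torch.ones(1, 8, 2, 2)

drop.train()
y = drop(x)     # each channel zeroed with probability 0.5; kept values become 1/(1-0.5) = 2
print(sorted(y.unique().tolist()))  # a subset of [0.0, 2.0]

drop.eval()     # dropout does nothing at evaluation time
print(torch.equal(drop(x), x))      # True
```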

class NetDropout(nn.Module):
    def __init__(self, n_channel=32):
        super().__init__()
        self.n_channel = n_channel
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=n_channel, kernel_size=3, padding=1)
        self.conv1_dropout = nn.Dropout2d(p=0.4)                                        # dropout; p is the probability of an element being zeroed
        self.conv2 = nn.Conv2d(in_channels=n_channel, out_channels=n_channel//2,        # the widened layers
                               kernel_size=3, padding=1)
        self.conv2_dropout = nn.Dropout2d(p=0.4)
        self.fc1 = nn.Linear((n_channel//2)*8*8, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = self.conv1_dropout(out)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = self.conv2_dropout(out)
        out = out.view(-1, (self.n_channel//2)*8*8)
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)
        return out
  3. Batch normalization

The main idea behind batch normalization is to rescale the inputs to the network's activations so that minibatches have a certain desirable distribution. This helps keep the activation inputs from drifting too far into the saturated part of the activation function, which would kill gradients and slow down training.
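The normalization can be observed directly (illustration only): in training mode, nn.BatchNorm2d standardizes each channel over the batch before applying a learnable affine transform that starts at identity:

```python
import torch
from torch import nn

bn = nn.BatchNorm2d(num_features=3)
bn.train()
x = torch.randn(16, 3, 8, 8) * 5 + 10   # channels with mean ~10 and std ~5
y = bn(x)

# per-channel mean ~0 and std ~1 after normalization
print(y.mean(dim=(0, 2, 3)))
print(y.std(dim=(0, 2, 3)))
```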

class NetBatchNorm(nn.Module):
    def __init__(self, n_channel=32):
        super().__init__()
        self.n_channel = n_channel
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=n_channel, kernel_size=3, padding=1)
        self.conv1_batchnorm = nn.BatchNorm2d(num_features=n_channel)                   # batch normalization
        self.conv2 = nn.Conv2d(in_channels=n_channel, out_channels=n_channel//2,        # the widened layers
                               kernel_size=3, padding=1)
        self.conv2_batchnorm = nn.BatchNorm2d(num_features=n_channel//2)
        self.fc1 = nn.Linear((n_channel//2)*8*8, 32)    
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        out = self.conv1_batchnorm(self.conv1(x))
        out = F.max_pool2d(torch.tanh(out), 2)
        out = self.conv2_batchnorm(self.conv2(out))
        out = F.max_pool2d(torch.tanh(out), 2)
        out = out.view(-1, (self.n_channel//2)*8*8)
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)
        return out

Now retrain and evaluate the model using NetBatchNorm and training_loop_l2reg, hoping for an improvement over before!

model = NetBatchNorm().to(device=device)
optimizer = optim.SGD(model.parameters(), lr=1e-2)  # stochastic gradient descent optimizer
loss_fn = nn.CrossEntropyLoss() # cross-entropy loss

training_loop_l2reg(
    n_epochs=100,
    optimizer=optimizer,
    model=model,
    loss_fn=loss_fn,
    train_loader=train_loader
)
2023-04-08 17:22:51.919275 Epoch 1, Training loss 0.5400954796335636
2023-04-08 17:24:01.077684 Epoch 10, Training loss 0.3433214044914796
2023-04-08 17:25:18.132063 Epoch 20, Training loss 0.2857391257316638
2023-04-08 17:26:34.441769 Epoch 30, Training loss 0.24476417631675035
2023-04-08 17:27:50.975030 Epoch 40, Training loss 0.21916839241599426
2023-04-08 17:29:09.751893 Epoch 50, Training loss 0.193350423557254
2023-04-08 17:30:26.556550 Epoch 60, Training loss 0.17405275838115278
2023-04-08 17:31:46.126329 Epoch 70, Training loss 0.15676446583790657
2023-04-08 17:33:06.333187 Epoch 80, Training loss 0.14270161565106648
2023-04-08 17:34:25.760439 Epoch 90, Training loss 0.13285309878679422
2023-04-08 17:35:45.502106 Epoch 100, Training loss 0.12409532667161563

Measure the model's accuracy again:

model.eval()
validate(model=model, train_loader=train_loader_, val_loader=val_loader)
Accuracy: train: 0.9859775641025641
Accuracy: val: 0.8805443548387096

Training accuracy is as high as 0.98 while validation accuracy is only 0.88, so some risk of overfitting remains.

Finally, save the model's parameters:

torch.save(model.state_dict(), "./models/birdsVsPlane.pt")  # saves only the model parameters

Since the model and data were trained on the GPU, we also need to specify the device when loading the model:

load_model = NetBatchNorm().to(device=device)
load_model.load_state_dict(torch.load("./models/birdsVsPlane.pt", map_location=device))
<All keys matched successfully>

Loading done; a quick test:

load_model.eval()   # put batch norm into inference mode before single-image predictions
img, label = cifar2[5]
img = img.to(device=device)
load_model(img.unsqueeze(0)), label
(tensor([[ 4.4285, -4.5254]], device='cuda:0', grad_fn=<AddmmBackward0>), 0)
img_ = img.to('cpu')    # move the image back to the CPU before plotting with plt
plt.imshow(img_.permute(1,2,0))
<matplotlib.image.AxesImage at 0x29d35a4b850>

References

[1] Eli Stevens. Deep Learning with PyTorch[M]. 人民郵電出版社, 2022.02: 144-163.
