[Deep Learning] Dropout

Posted by nannandbk on 2024-04-15

Dropout

I. Introduction

1. Motivation

  • A good model should be robust to perturbations of its input data
    • Training with noisy data is equivalent to Tikhonov regularization
    • Dropout: inject noise between the layers

2. Definition of dropout

\[ x_i^{'} = \begin{cases} 0 & \text{with probability } p \\ \dfrac{x_i}{1-p} & \text{otherwise} \end{cases} \]

Dividing by \(1-p\) here keeps the expectation of \(x_i^{'}\) equal to that of the original \(x_i\):

\[ 0\times p + (1-p)\times \dfrac{x_i}{1-p} = x_i \]
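
A quick numerical check of this identity, as a minimal sketch assuming PyTorch is available (the constant input value 2.0 and the sample size are arbitrary):

import torch

# Minimal sketch: empirically check that inverted dropout preserves the expectation
p = 0.5
x = torch.full((1_000_000,), 2.0)             # constant input, so E[x_i] = 2.0
mask = (torch.rand(x.shape) > p).float()      # keep each element with probability 1 - p
x_drop = mask * x / (1 - p)                   # survivors are scaled up by 1 / (1 - p)
print(x.mean().item(), x_drop.mean().item())  # both are close to 2.0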

3. Applying dropout

\[ \begin{aligned} h &= \sigma(W_1 x + b_1) \\ h^{'} &= \mathrm{dropout}(h) \\ o &= W_2 h^{'} + b_2 \\ y &= \mathrm{softmax}(o) \end{aligned} \]

where:

  • \(h\) is the hidden layer
  • \(\sigma\) is the activation function
  • \(o\) is the output
  • \(y\) is the classification result obtained by passing \(o\) through the \(softmax\) layer
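
A minimal PyTorch sketch of these equations (the shapes, the choice of ReLU as \(\sigma\), and the inline masking are illustrative only; the full implementation is in Part II below):

import torch

# Minimal sketch of the forward pass above; shapes are illustrative only
num_inputs, num_hiddens, num_outputs, p = 784, 256, 10, 0.5
x = torch.randn(1, num_inputs)
W1, b1 = torch.randn(num_inputs, num_hiddens) * 0.01, torch.zeros(num_hiddens)
W2, b2 = torch.randn(num_hiddens, num_outputs) * 0.01, torch.zeros(num_outputs)

h = torch.relu(x @ W1 + b1)                # h  = sigma(W1 x + b1)
mask = (torch.rand(h.shape) > p).float()   # drop each hidden unit with probability p
h_drop = mask * h / (1 - p)                # h' = dropout(h)
o = h_drop @ W2 + b2                       # o  = W2 h' + b2
y = torch.softmax(o, dim=1)                # y  = softmax(o)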


4. Summary

  • Dropout injects noise between layers by zeroing each element with probability \(p\) and scaling the survivors by \(\dfrac{1}{1-p}\), which leaves the expectation unchanged
  • It is usually applied to the outputs of the hidden layers of a multilayer perceptron; the output layer is left untouched
  • The dropout probability \(p\) is a hyperparameter (0.2 and 0.5 in the code below)
  • Dropout is only applied during training; at evaluation time the layer acts as the identity

II. Code

1. Dropout (implemented from scratch)

Implement the dropout_layer function, which drops out elements of the input tensor X with probability dropout.

# Implement the dropout_layer function, which drops out elements of the input tensor X with probability dropout
import torch
from torch import nn
from d2l import torch as d2l

def dropout_layer(X, dropout):
    assert 0 <= dropout <= 1  # dropout must lie in [0, 1], otherwise raise an error
    if dropout == 1:
        return torch.zeros_like(X)  # if dropout is 1, every element is dropped: return all zeros
    if dropout == 0:
        return X  # if dropout is 0, nothing is dropped: return X unchanged
    mask = (torch.rand(X.shape) > dropout).float()  # draw uniform [0, 1) values with X's shape; keep positions whose value exceeds dropout
    # print(torch.rand(X.shape) > dropout)  # a boolean tensor, converted to 0/1 by .float()
    return mask * X / (1.0 - dropout)

X = torch.arange(16, dtype=torch.float32).reshape((2, 8))
print(X)
print(dropout_layer(X, 0.))
print(dropout_layer(X, 0.5))  # each element is zeroed with probability 0.5
print(dropout_layer(X, 1.))

Output

tensor([[ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.],
        [ 8.,  9., 10., 11., 12., 13., 14., 15.]])
tensor([[ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.],
        [ 8.,  9., 10., 11., 12., 13., 14., 15.]])
tensor([[ 0.,  0.,  4.,  6.,  0.,  0.,  0., 14.],
        [16.,  0.,  0.,  0.,  0.,  0.,  0., 30.]])
tensor([[0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0.]])

Define a multilayer perceptron with two hidden layers, each containing 256 units.

# Define a multilayer perceptron with two hidden layers, each containing 256 units
num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256

dropout1, dropout2 = 0.2, 0.5

class Net(nn.Module):
    def __init__(self, num_inputs, num_outputs, num_hiddens1, num_hiddens2, is_training=True):
        super(Net, self).__init__()
        self.num_inputs = num_inputs
        self.training = is_training
        self.lin1 = nn.Linear(num_inputs, num_hiddens1)
        self.lin2 = nn.Linear(num_hiddens1, num_hiddens2)
        self.lin3 = nn.Linear(num_hiddens2, num_outputs)
        self.relu = nn.ReLU()
        
    def forward(self, X):
        H1 = self.relu(self.lin1(X.reshape((-1,self.num_inputs))))
        if self.training == True:  # apply dropout only during training, otherwise skip it
            H1 = dropout_layer(H1, dropout1)
        H2 = self.relu(self.lin2(H1))
        if self.training == True:
            H2 = dropout_layer(H2, dropout2)
        out = self.lin3(H2)  # no dropout on the output layer
        return out
        
net = Net(num_inputs, num_outputs, num_hiddens1, num_hiddens2)

# Training and testing
num_epochs, lr, batch_size = 10, 0.5, 256
loss = nn.CrossEntropyLoss()
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
trainer = torch.optim.SGD(net.parameters(), lr=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)

(Output: training loss and accuracy curves plotted by d2l.train_ch3)
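
Since nn.Module already maintains a training attribute that net.train() and net.eval() toggle, the custom Net above skips dropout_layer in evaluation mode. A minimal sketch reusing net and num_inputs from above (X_test is a made-up input):

X_test = torch.randn(1, num_inputs)

net.train()  # sets self.training = True, so forward() applies dropout_layer
print(torch.allclose(net(X_test), net(X_test)))  # almost always False: random masks differ

net.eval()   # sets self.training = False, so forward() skips dropout_layer
print(torch.allclose(net(X_test), net(X_test)))  # True: the forward pass is deterministic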

2. Dropout (concise implementation with the framework)

import torch
from torch import nn
from d2l import torch as d2l

# Concise implementation
num_epochs, lr, batch_size = 10, 0.5, 256
dropout1, dropout2 = 0.2, 0.5
loss = nn.CrossEntropyLoss()
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)

net = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                    nn.Dropout(dropout1), nn.Linear(256, 256), nn.ReLU(),
                    nn.Dropout(dropout2), nn.Linear(256, 10))

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.normal_(m.weight, std=0.01)
    
net.apply(init_weights)

trainer = torch.optim.SGD(net.parameters(), lr=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)

(Output: training loss and accuracy curves plotted by d2l.train_ch3)
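
With the built-in nn.Dropout, the same switch is handled automatically: in evaluation mode the dropout layers act as the identity. A minimal sketch reusing net from above (X_test is a made-up input):

X_test = torch.randn(1, 784)

net.eval()   # dropout layers become the identity: deterministic predictions
print(torch.allclose(net(X_test), net(X_test)))  # True

net.train()  # dropout layers are active again
print(torch.allclose(net(X_test), net(X_test)))  # almost always False: random masks differ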
