Preface
This article covers the second half of how to use convolutional neural networks.
Also, the previous article gained a few extra code comments, mainly explaining how the formula (w-f+2p)/s+1 is used.
So if the code in this article is hard to follow, it is worth flipping back to the previous one.
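As a quick refresher before the code: with input width W, filter size F, padding P, and stride S, a convolution's output width is (W - F + 2P)/S + 1. A small helper (my own sketch, not part of the article's script) makes it easy to verify the sizes used below:

def conv_out(w, f, p=0, s=1):
    # one spatial dimension: (w - f + 2p) / s + 1
    return (w - f + 2 * p) // s + 1

print(conv_out(32, 5))       # conv1, 5x5 kernel: 32 -> 28
print(conv_out(28, 2, s=2))  # 2x2 max pool:      28 -> 14
print(conv_out(14, 5))       # conv2, 5x5 kernel: 14 -> 10
print(conv_out(10, 2, s=2))  # 2x2 max pool:      10 -> 5

Sixteen of those final 5x5 feature maps are exactly where the 16*5*5 input size of the first fully connected layer comes from.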
Code Implementation
We have already covered the concepts, so combining them with what we learned before, we can read the code below directly.
The code uses the torchvision.datasets.CIFAR10 dataset.
CIFAR-10 consists of 60000 32x32 color images in 10 classes: plane, car, bird, cat, deer, dog, frog, horse, ship, and truck.
Each class contains 6000 images: 50000 are used for training and 10000 for testing.
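If you want to look at a single sample before diving in, a minimal sketch (separate from the main script below) is:

import torchvision
import torchvision.transforms as transforms

ds = torchvision.datasets.CIFAR10(root='./data', train=True, download=True,
                                  transform=transforms.ToTensor())
img, label = ds[0]
print(img.shape)          # torch.Size([3, 32, 32]): channels, height, width
print(ds.classes[label])  # the human-readable class name, e.g. 'frog'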
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
import torch.nn.functional as F  # look here for activation functions when nn does not have what you need
# device config
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# hyper parameters
input_size = 784   # 28x28; not actually used in this script
hidden_size = 100  # likewise not used in this script
num_classes = 10
batch_size = 100
learning_rate = 0.001
num_epochs = 2
transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
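# Note: ToTensor scales pixel values into [0, 1], and Normalize then computes
# (x - mean) / std per channel, so mean=0.5 and std=0.5 map [0, 1] to [-1, 1]:
# (0.0 - 0.5) / 0.5 = -1.0 and (1.0 - 0.5) / 0.5 = 1.0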
train_dataset = torchvision.datasets.CIFAR10(
    root='./data', train=True, download=True, transform=transform)
test_dataset = torchvision.datasets.CIFAR10(
    root='./data', train=False, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(
    dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    dataset=test_dataset, batch_size=batch_size, shuffle=False)
print('number of batches of 100 in the test set:', len(test_loader))
classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')
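# Optional peek (an extra check of mine, safe to delete): confirm one batch's shape
examples = iter(train_loader)
example_images, example_labels = next(examples)
print(example_images.shape)  # torch.Size([100, 3, 32, 32]): batch, channels, h, w
print(example_labels.shape)  # torch.Size([100]): one class index per image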
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16*5*5, 120)  # the 16*5*5 is explained in forward
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))  # here x has become torch.Size([100, 16, 5, 5]) with batch_size = 100
        # print("x.shape after two conv + pool stages:", x.shape)
        x = x.view(-1, 16*5*5)  # 16*5*5 is the product of x's last three dimensions
        x = F.relu(self.fc1(x))  # fc1 was defined with input size 16*5*5
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
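# Optional sanity check (extra, not needed for training): trace a dummy batch
# through the conv/pool stack to see where fc1's 16*5*5 comes from.
# 32x32 -> conv1 (5x5) -> 28x28 -> pool -> 14x14 -> conv2 (5x5) -> 10x10 -> pool -> 5x5
_net = ConvNet()
_x = torch.zeros(1, 3, 32, 32)
_x = _net.pool(F.relu(_net.conv1(_x)))
print(_x.shape)  # torch.Size([1, 6, 14, 14])
_x = _net.pool(F.relu(_net.conv2(_x)))
print(_x.shape)  # torch.Size([1, 16, 5, 5]) -> flattens to 16*5*5 = 400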
model = ConvNet().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
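# Note: nn.CrossEntropyLoss applies log-softmax internally, which is why
# forward() returns raw scores (logits) without a softmax at the end.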
n_total_steps = len(train_loader)  # 50000 / 100 = 500 steps per epoch
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # origin shape: [100, 3, 32, 32] = batch_size, 3 channels, 32x32 pixels
        # input layer: 3 input channels, 6 output channels, 5 kernel size
        images = images.to(device)
        labels = labels.to(device)
        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (i+1) % 100 == 0:  # with only 500 steps per epoch, the original "% 2000" never fired
            print(
                f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}')
print('Finished Training')
# test
with torch.no_grad():
    n_correct = 0
    n_samples = 0
    n_class_correct = [0 for i in range(10)]  # a list of 10 zeros
    n_class_samples = [0 for i in range(10)]
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        # print('test-images.shape:', images.shape)  # prints once per batch; uncomment to inspect
        outputs = model(images)
        # max returns (value, index)
        _, predicted = torch.max(outputs, 1)
        n_samples += labels.size(0)
        n_correct += (predicted == labels).sum().item()
        for i in range(batch_size):
            label = labels[i]
            # label holds a class index 0~9, printed as e.g. label: tensor(2); predicted[i] looks the same
            pred = predicted[i]
            if label == pred:
                n_class_correct[label] += 1
            n_class_samples[label] += 1
    acc = 100.0 * n_correct / n_samples  # overall accuracy
    print(f'accuracy ={acc}')
    for i in range(10):
        acc = 100.0 * n_class_correct[i] / n_class_samples[i]
        print(f'Accuracy of {classes[i]}: {acc} %')
The output looks like this:
accuracy =10.26
Accuracy of plane: 0.0 %
Accuracy of car: 0.0 %
Accuracy of bird: 0.0 %
Accuracy of cat: 0.0 %
Accuracy of deer: 0.0 %
Accuracy of dog: 0.0 %
Accuracy of frog: 0.0 %
Accuracy of horse: 0.0 %
Accuracy of ship: 89.6 %
Accuracy of truck: 13.0 %
The accuracy is this low because I set num_epochs=2, i.e. the training loop makes far too few passes over the data.
Simply increasing num_epochs lets the network train longer and raises the accuracy.
Link:
Learning AI from Scratch - Python - PyTorch - Complete Series
And with that, we have finished learning convolutional neural networks.