Header image from: medium
This article demonstrates how to use the Inception v3 model for transfer learning, to build a new image classifier.
01 - Simple Linear Model | 02 - Convolutional Neural Network | 03 - PrettyTensor | 04 - Save & Restore
05 - Ensemble Learning | 06 - CIFAR-10 | 07 - Inception Model
by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube
Chinese translation: thrillerist / GitHub
If you republish this article, please include a link to it.
Introduction
In the previous Tutorial #07, we saw how to use the pre-trained Inception model for image classification. Unfortunately, the Inception model seemed unable to classify images of people. The reason was the data set it was trained on, which had some confusing class labels.
The Inception model is actually quite capable of extracting useful information from images, so we could instead train it on another data set. But training such a model on a new data set takes several weeks on a very powerful and expensive computer.
Instead, we can reuse the pre-trained Inception model and merely replace the final layer that does the classification. This method is called transfer learning.
This tutorial builds on the previous ones. You should be familiar with the Inception model from Tutorial #07, as well as with how to build and train neural networks in TensorFlow from the earlier tutorials. Part of the code for this tutorial is located in the inception.py file.
Flowchart
The chart below shows how the data flows when using the Inception model for transfer learning. First, we input and process an image with the Inception model. Just before the model's final classification layer, we save the so-called transfer-values to a cache file.
The reason for using a cache file is that it takes a long time for the Inception model to process an image. My laptop with a Quad-Core 2 GHz CPU can process about 3 images per second with the Inception model. If each image has to be processed more than once, saving the transfer-values can save a lot of time.
The transfer-values are sometimes also called bottleneck-values, but that term can be confusing, so it is not used here.
Once all the images in the new data set have been processed with the Inception model and the resulting transfer-values saved to a cache file, we can use those transfer-values as the input to another neural network. We then train the second neural network to classify the new data set, so the network learns to classify images based on the transfer-values from the Inception model.
In this way, the Inception model extracts useful information from the images, and another neural network then does the actual classification.
from IPython.display import Image, display
Image('images/08_transfer_learning_flowchart.png')
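To make the data flow concrete, here is a tiny self-contained sketch of the two phases with toy numpy arrays (an illustration only; the real transfer-values come from the Inception model as shown later in this tutorial):

import numpy as np

num_images = 5
transfer_len = 2048   # length of the Inception transfer-values
num_classes = 10      # number of classes in the new data set

# Phase 1 (expensive, done once and cached): pretend these
# transfer-values came from running images through Inception.
transfer_values = np.random.rand(num_images, transfer_len)

# Phase 2 (cheap to train): a new classifier maps the transfer-values
# to class-scores, here shown as a single random linear layer.
weights = np.random.rand(transfer_len, num_classes)
class_scores = transfer_values.dot(weights)
print(class_scores.shape)   # (5, 10)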
Imports
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import time
from datetime import timedelta
import os
# Functions and classes for loading and using the Inception model.
import inception
# We use Pretty Tensor to define the new classifier.
import prettytensor as pt
This was developed using Python 3.5.2 (Anaconda) with TensorFlow version:
tf.__version__
'0.12.0-rc0'
PrettyTensor version:
pt.__version__
'0.7.1'
Load the CIFAR-10 data
import cifar10
The data dimensions are already defined in the cifar10 module, so we just import what we need.
from cifar10 import num_classes
Set the path for storing the data set on your computer.
# cifar10.data_path = "data/CIFAR-10/"
The CIFAR-10 data set is about 163 MB and will be downloaded automatically if it is not found at the given path.
cifar10.maybe_download_and_extract()
Data has apparently already been downloaded and unpacked.
Load the class-names.
class_names = cifar10.load_class_names()
class_names
Loading data: data/CIFAR-10/cifar-10-batches-py/batches.meta
['airplane',
'automobile',
'bird',
'cat',
'deer',
'dog',
'frog',
'horse',
'ship',
'truck']
Load the training set. This function returns the images, the integer class-numbers, and the class-numbers as one-hot encoded arrays, which are called labels.
images_train, cls_train, labels_train = cifar10.load_training_data()
Loading data: data/CIFAR-10/cifar-10-batches-py/data_batch_1
Loading data: data/CIFAR-10/cifar-10-batches-py/data_batch_2
Loading data: data/CIFAR-10/cifar-10-batches-py/data_batch_3
Loading data: data/CIFAR-10/cifar-10-batches-py/data_batch_4
Loading data: data/CIFAR-10/cifar-10-batches-py/data_batch_5
Load the test set.
images_test, cls_test, labels_test = cifar10.load_test_data()
Loading data: data/CIFAR-10/cifar-10-batches-py/test_batch
The CIFAR-10 data set has now been loaded. It consists of 60,000 images with associated labels (the classes of the images). The data set is split into two mutually independent sub-sets: the training set and the test set.
print("Size of:")
print("- Training-set:\t\t{}".format(len(images_train)))
print("- Test-set:\t\t{}".format(len(images_test)))複製程式碼
Size of:
- Training-set: 50000
- Test-set: 10000
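As a quick sanity check (not part of the original notebook), we can confirm how the integer class-numbers relate to the one-hot encoded labels:

# The one-hot label has a single 1 at the index given by the class-number.
print(cls_train[0])      # e.g. 6, meaning 'frog'
print(labels_train[0])   # e.g. [ 0.  0.  0.  0.  0.  0.  1.  0.  0.  0.]

# The argmax of the one-hot label recovers the class-number.
print(np.argmax(labels_train[0]) == cls_train[0])   # True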
Helper-function for plotting images
This function plots 9 images in a 3x3 grid and writes the true and predicted classes below each image.
def plot_images(images, cls_true, cls_pred=None, smooth=True):
    assert len(images) == len(cls_true)

    # Create figure with sub-plots.
    fig, axes = plt.subplots(3, 3)

    # Adjust vertical spacing.
    if cls_pred is None:
        hspace = 0.3
    else:
        hspace = 0.6
    fig.subplots_adjust(hspace=hspace, wspace=0.3)

    # Interpolation type.
    if smooth:
        interpolation = 'spline16'
    else:
        interpolation = 'nearest'

    for i, ax in enumerate(axes.flat):
        # There may be less than 9 images, ensure it doesn't crash.
        if i < len(images):
            # Plot image.
            ax.imshow(images[i],
                      interpolation=interpolation)

            # Name of the true class.
            cls_true_name = class_names[cls_true[i]]

            # Show true and predicted classes.
            if cls_pred is None:
                xlabel = "True: {0}".format(cls_true_name)
            else:
                # Name of the predicted class.
                cls_pred_name = class_names[cls_pred[i]]
                xlabel = "True: {0}\nPred: {1}".format(cls_true_name, cls_pred_name)

            # Show the classes as the label on the x-axis.
            ax.set_xlabel(xlabel)

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Plot a few images to see if the data is correct
# Get the first images from the test-set.
images = images_test[0:9]
# Get the true classes for those images.
cls_true = cls_test[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=False)
Download the Inception model
Download the Inception model from the internet. This is the default folder where the data files are saved. The folder is created automatically if it does not exist.
# inception.data_dir = 'inception/'
The Inception model is downloaded automatically if it does not already exist in the folder. It is about 85 MB.
See Tutorial #07 for more details.
inception.maybe_download()
Downloading Inception v3 Model ...
Data has apparently already been downloaded and unpacked.
Load the Inception model
Load the Inception model so it is ready for classifying images.
Note the warning messages, which may cause the program to fail in the future.
model = inception.Inception()
Calculate the Transfer-Values
Import the helper-function for caching the transfer-values from the Inception model.
from inception import transfer_values_cache
Set the file-paths for the cache files of the training set and the test set.
file_path_cache_train = os.path.join(cifar10.data_path, 'inception_cifar10_train.pkl')
file_path_cache_test = os.path.join(cifar10.data_path, 'inception_cifar10_test.pkl')
print("Processing Inception transfer-values for training-images ...")
# Scale images because Inception needs pixels to be between 0 and 255,
# while the CIFAR-10 functions return pixels between 0.0 and 1.0
images_scaled = images_train * 255.0
# If transfer-values have already been calculated then reload them,
# otherwise calculate them and save them to a cache-file.
transfer_values_train = transfer_values_cache(cache_path=file_path_cache_train,
images=images_scaled,
model=model)複製程式碼
Processing Inception transfer-values for training-images ...
- Data loaded from cache-file: data/CIFAR-10/inception_cifar10_train.pkl
print("Processing Inception transfer-values for test-images ...")
# Scale images because Inception needs pixels to be between 0 and 255,
# while the CIFAR-10 functions return pixels between 0.0 and 1.0
images_scaled = images_test * 255.0
# If transfer-values have already been calculated then reload them,
# otherwise calculate them and save them to a cache-file.
transfer_values_test = transfer_values_cache(cache_path=file_path_cache_test,
images=images_scaled,
model=model)複製程式碼
Processing Inception transfer-values for test-images ...
- Data loaded from cache-file: data/CIFAR-10/inception_cifar10_test.pkl
Check the shape of the array holding the transfer-values. There are 50,000 images in the training set, and each image has 2048 transfer-values.
transfer_values_train.shape
(50000, 2048)
Similarly, there are 10,000 images in the test set, each with 2048 transfer-values.
transfer_values_test.shape
(10000, 2048)
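For a single new image there is no need for the cache; the transfer-values can be computed directly. A small sketch, assuming the Inception class from the inception module exposes the transfer_values() method that transfer_values_cache() uses internally:

# Compute the transfer-values for one image without using the cache.
# The Inception model expects pixels in the range [0, 255].
single_image = images_test[0] * 255.0
single_transfer_values = model.transfer_values(image=single_image)
print(single_transfer_values.shape)   # (2048,)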
Helper-function for plotting transfer-values
def plot_transfer_values(i):
    print("Input image:")

    # Plot the i'th image from the test-set.
    plt.imshow(images_test[i], interpolation='nearest')
    plt.show()

    print("Transfer-values for the image using Inception model:")

    # Transform the transfer-values into an image.
    img = transfer_values_test[i]
    img = img.reshape((32, 64))

    # Plot the image for the transfer-values.
    plt.imshow(img, interpolation='nearest', cmap='Reds')
    plt.show()
plot_transfer_values(i=16)
Input image:
Transfer-values for the image using Inception model:
plot_transfer_values(i=17)
Input image:
Transfer-values for the image using Inception model:
PCA analysis of the transfer-values
Use Principal Component Analysis (PCA) from scikit-learn to reduce the transfer-values from 2048 dimensions down to 2, so they can be plotted.
from sklearn.decomposition import PCA
Create a new PCA object and set the target array dimensionality to 2.
pca = PCA(n_components=2)
Computing the PCA takes a while, so the number of samples is limited to 3,000. You can use the whole training set if you like.
transfer_values = transfer_values_train[0:3000]
Get the class-numbers for the selected samples.
cls = cls_train[0:3000]
Make sure the array has 3,000 samples with 2048 transfer-values each.
transfer_values.shape
(3000, 2048)
Use PCA to reduce the transfer-values from 2048 to 2 dimensions.
transfer_values_reduced = pca.fit_transform(transfer_values)
The array now has 3,000 samples with two values each.
transfer_values_reduced.shape
(3000, 2)
Helper-function for plotting the reduced transfer-values.
def plot_scatter(values, cls):
    # Create a color-map with a different color for each class.
    import matplotlib.cm as cm
    cmap = cm.rainbow(np.linspace(0.0, 1.0, num_classes))

    # Get the color for each sample.
    colors = cmap[cls]

    # Extract the x- and y-values.
    x = values[:, 0]
    y = values[:, 1]

    # Plot it.
    plt.scatter(x, y, color=colors)
    plt.show()
Plot the transfer-values reduced with PCA. The 10 classes in the CIFAR-10 data set are shown in 10 different colors. The colors do group together, but with a lot of overlap. This may be because PCA cannot properly separate the transfer-values.
plot_scatter(transfer_values_reduced, cls)
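To check how much of the information in the transfer-values the 2-D projection actually retains, we can look at the fitted PCA object's explained_variance_ratio_ attribute (an optional check, not part of the original tutorial; a low total would help explain the heavy overlap):

# Fraction of the variance captured by each of the two principal
# components, and their total.
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())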
t-SNE analysis of the transfer-values
from sklearn.manifold import TSNE
Another method for dimensionality reduction is t-SNE. Unfortunately, t-SNE is very slow, so we first use PCA to reduce the dimensionality from 2048 to 50.
pca = PCA(n_components=50)
transfer_values_50d = pca.fit_transform(transfer_values)
Create a new t-SNE object for the final dimensionality reduction, with the target set to 2 dimensions.
tsne = TSNE(n_components=2)
Perform the final reduction with t-SNE. The t-SNE implementation currently in scikit-learn may not be able to handle data with many samples, so the program may crash if you use the whole training set.
transfer_values_reduced = tsne.fit_transform(transfer_values_50d)
Make sure the array has 3,000 samples with two values each.
transfer_values_reduced.shape
(3000, 2)
Plot the transfer-values reduced to 2 dimensions with t-SNE. Compared to the PCA plot above, it shows better separation.
This means the transfer-values obtained from the Inception model appear to contain enough information to classify the CIFAR-10 images. There is still some overlap, though, so the separation is not perfect.
plot_scatter(transfer_values_reduced, cls)
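If you want to quantify the separation instead of judging it visually, one option (not part of the original tutorial) is scikit-learn's silhouette score on the 2-D embedding, where values closer to 1.0 mean better-separated classes:

from sklearn.metrics import silhouette_score

# Average silhouette of the 2-D t-SNE points, grouped by true class.
print(silhouette_score(transfer_values_reduced, cls))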
New classifier in TensorFlow
We will now build a new neural network in TensorFlow. It takes the transfer-values from the Inception model as input and outputs the predicted classes for the CIFAR-10 images.
It is assumed you are already familiar with building neural networks in TensorFlow; otherwise see Tutorial #03.
Placeholder variables
First we need the array-length of the transfer-values, which is stored as a variable in the Inception model object.
transfer_len = model.transfer_len
Now create a placeholder variable for the transfer-values that are input to the new network we are building. Its shape is [None, transfer_len], where None means the input array can hold an arbitrary number of samples, and each sample has 2048 elements, i.e. transfer_len.
x = tf.placeholder(tf.float32, shape=[None, transfer_len], name='x')
Define another placeholder variable for the true class-labels of the input images. These are one-hot encoded arrays with 10 elements, one for each possible class in the data set.
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
Calculate the true classes as integers. These could also have been a placeholder variable.
y_true_cls = tf.argmax(y_true, dimension=1)
Neural network
Create the neural network for classifying the CIFAR-10 data set. It takes the transfer-values from the Inception model as input, stored in the placeholder variable x. The network outputs the predicted classes, y_pred.
Tutorial #03 has more details on constructing neural networks with Pretty Tensor.
# Wrap the transfer-values as a Pretty Tensor object.
x_pretty = pt.wrap(x)

with pt.defaults_scope(activation_fn=tf.nn.relu):
    y_pred, loss = x_pretty.\
        fully_connected(size=1024, name='layer_fc1').\
        softmax_classifier(num_classes=num_classes, labels=y_true)
Optimization method
Create a variable for keeping track of the current optimization iteration.
global_step = tf.Variable(initial_value=0,
                          name='global_step', trainable=False)
Method for optimizing the new neural network.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step)
Classification accuracy
The output of the network, y_pred, is an array with 10 elements. The class-number is the index of the largest element in the array.
y_pred_cls = tf.argmax(y_pred, dimension=1)
Create a vector of booleans telling us whether the predicted class equals the true class of each image.
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the mean of these numbers.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Running TensorFlow
Create the TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
session = tf.Session()
Initialize variables
The variables for the weights and biases must be initialized before we start optimizing them.
session.run(tf.global_variables_initializer())
Helper-function for getting a random training batch
There are 50,000 images (and the corresponding arrays of transfer-values) in the training set. It would take a long time to calculate the gradient of the model using all of these images (transfer-values), so we only use a small batch of images (transfer-values) in each iteration of the optimizer.
If your computer crashes or becomes very slow because it runs out of RAM, try lowering this number, but you may then need more optimization iterations.
train_batch_size = 64
Function for selecting a random batch of transfer-values from the training set.
def random_batch():
    # Number of images (transfer-values) in the training-set.
    num_images = len(transfer_values_train)

    # Create a random index.
    idx = np.random.choice(num_images,
                           size=train_batch_size,
                           replace=False)

    # Use the random index to select random x and y-values.
    # We use the transfer-values instead of images as x-values.
    x_batch = transfer_values_train[idx]
    y_batch = labels_train[idx]

    return x_batch, y_batch
Helper-function for performing optimization iterations
This function performs a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training set, and then TensorFlow executes the optimizer on those training samples. The progress is printed every 100 iterations.
def optimize(num_iterations):
    # Start-time used for printing time-usage below.
    start_time = time.time()

    for i in range(num_iterations):
        # Get a batch of training examples.
        # x_batch now holds a batch of images (transfer-values) and
        # y_true_batch are the true labels for those images.
        x_batch, y_true_batch = random_batch()

        # Put the batch into a dict with the proper names
        # for placeholder variables in the TensorFlow graph.
        feed_dict_train = {x: x_batch,
                           y_true: y_true_batch}

        # Run the optimizer using this batch of training data.
        # TensorFlow assigns the variables in feed_dict_train
        # to the placeholder variables and then runs the optimizer.
        # We also want to retrieve the global_step counter.
        i_global, _ = session.run([global_step, optimizer],
                                  feed_dict=feed_dict_train)

        # Print status to screen every 100 iterations (and last).
        if (i_global % 100 == 0) or (i == num_iterations - 1):
            # Calculate the accuracy on the training-batch.
            batch_acc = session.run(accuracy,
                                    feed_dict=feed_dict_train)

            # Print status.
            msg = "Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
            print(msg.format(i_global, batch_acc))

    # Ending time.
    end_time = time.time()

    # Difference between start and end-times.
    time_dif = end_time - start_time

    # Print the time-usage.
    print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
Helper-functions for showing results
Helper-function for plotting example errors
Function for plotting examples of images from the test set that have been mis-classified.
def plot_example_errors(cls_pred, correct):
    # This function is called from print_test_accuracy() below.

    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.

    # correct is a boolean array whether the predicted class
    # is equal to the true class for each image in the test-set.

    # Negate the boolean array.
    incorrect = (correct == False)

    # Get the images from the test-set that have been
    # incorrectly classified.
    images = images_test[incorrect]

    # Get the predicted classes for those images.
    cls_pred = cls_pred[incorrect]

    # Get the true classes for those images.
    cls_true = cls_test[incorrect]

    n = min(9, len(images))

    # Plot the first n images.
    plot_images(images=images[0:n],
                cls_true=cls_true[0:n],
                cls_pred=cls_pred[0:n])
Helper-function for plotting the confusion matrix
# Import a function from sklearn to calculate the confusion-matrix.
from sklearn.metrics import confusion_matrix

def plot_confusion_matrix(cls_pred):
    # This is called from print_test_accuracy() below.

    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.

    # Get the confusion matrix using sklearn.
    cm = confusion_matrix(y_true=cls_test,  # True class for test-set.
                          y_pred=cls_pred)  # Predicted class.

    # Print the confusion matrix as text.
    for i in range(num_classes):
        # Append the class-name to each line.
        class_name = "({}) {}".format(i, class_names[i])
        print(cm[i, :], class_name)

    # Print the class-numbers for easy reference.
    class_numbers = [" ({0})".format(i) for i in range(num_classes)]
    print("".join(class_numbers))
Helper-function for calculating classifications
This function calculates the predicted classes of images and also returns a boolean array indicating whether the classification of each image is correct.
The calculation is done in batches because it might otherwise use too much RAM. If your computer crashes, try lowering the batch-size.
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256

def predict_cls(transfer_values, labels, cls_true):
    # Number of images.
    num_images = len(transfer_values)

    # Allocate an array for the predicted classes which
    # will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_images, dtype=np.int)

    # Now calculate the predicted classes for the batches.
    # We will just iterate through all the batches.
    # There might be a more clever and Pythonic way of doing this.

    # The starting index for the next batch is denoted i.
    i = 0

    while i < num_images:
        # The ending index for the next batch is denoted j.
        j = min(i + batch_size, num_images)

        # Create a feed-dict with the images and labels
        # between index i and j.
        feed_dict = {x: transfer_values[i:j],
                     y_true: labels[i:j]}

        # Calculate the predicted class using TensorFlow.
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)

        # Set the start-index for the next batch to the
        # end-index of the current batch.
        i = j

    # Create a boolean array whether each image is correctly classified.
    correct = (cls_true == cls_pred)

    return correct, cls_pred
Calculate the predicted classes for the test set.
def predict_cls_test():
    return predict_cls(transfer_values = transfer_values_test,
                       labels = labels_test,
                       cls_true = cls_test)
Helper-function for calculating the classification accuracy
This function calculates the classification accuracy given a boolean array of whether each image was correctly classified. E.g. cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4.
def classification_accuracy(correct):
    # When averaging a boolean array, False means 0 and True means 1.
    # So we are calculating: number of True / len(correct) which is
    # the same as the classification accuracy.

    # Return the classification accuracy
    # and the number of correct classifications.
    return correct.mean(), correct.sum()
Helper-function for showing the classification accuracy
Function for printing the classification accuracy on the test set.
It takes a while to compute the classifications for all the images in the test set, so we call the functions above directly from this one, and the classifications do not have to be recalculated by each function.
def print_test_accuracy(show_example_errors=False,
                        show_confusion_matrix=False):

    # For all the images in the test-set,
    # calculate the predicted classes and whether they are correct.
    correct, cls_pred = predict_cls_test()

    # Classification accuracy and the number of correct classifications.
    acc, num_correct = classification_accuracy(correct)

    # Number of images being classified.
    num_images = len(correct)

    # Print the accuracy.
    msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
    print(msg.format(acc, num_correct, num_images))

    # Plot some examples of mis-classifications, if desired.
    if show_example_errors:
        print("Example errors:")
        plot_example_errors(cls_pred=cls_pred, correct=correct)

    # Plot the confusion matrix, if desired.
    if show_confusion_matrix:
        print("Confusion Matrix:")
        plot_confusion_matrix(cls_pred=cls_pred)
Results
Performance before any optimization
The accuracy on the test set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
print_test_accuracy(show_example_errors=False,
                    show_confusion_matrix=False)
Accuracy on Test-Set: 9.4% (939 / 10000)
Performance after 10,000 optimization iterations
After 10,000 optimization iterations, the classification accuracy on the test set is about 90%. Compare this to the accuracy of below 80% in the earlier Tutorial #06.
optimize(num_iterations=10000)
Global Step: 100, Training Batch Accuracy: 82.8%
Global Step: 200, Training Batch Accuracy: 90.6%
Global Step: 300, Training Batch Accuracy: 90.6%
Global Step: 400, Training Batch Accuracy: 95.3%
Global Step: 500, Training Batch Accuracy: 85.9%
Global Step: 600, Training Batch Accuracy: 84.4%
Global Step: 700, Training Batch Accuracy: 90.6%
Global Step: 800, Training Batch Accuracy: 93.8%
Global Step: 900, Training Batch Accuracy: 92.2%
Global Step: 1000, Training Batch Accuracy: 95.3%
Global Step: 1100, Training Batch Accuracy: 93.8%
Global Step: 1200, Training Batch Accuracy: 90.6%
Global Step: 1300, Training Batch Accuracy: 95.3%
Global Step: 1400, Training Batch Accuracy: 90.6%
Global Step: 1500, Training Batch Accuracy: 90.6%
Global Step: 1600, Training Batch Accuracy: 92.2%
Global Step: 1700, Training Batch Accuracy: 90.6%
Global Step: 1800, Training Batch Accuracy: 92.2%
Global Step: 1900, Training Batch Accuracy: 84.4%
Global Step: 2000, Training Batch Accuracy: 85.9%
Global Step: 2100, Training Batch Accuracy: 87.5%
Global Step: 2200, Training Batch Accuracy: 90.6%
Global Step: 2300, Training Batch Accuracy: 92.2%
Global Step: 2400, Training Batch Accuracy: 95.3%
Global Step: 2500, Training Batch Accuracy: 89.1%
Global Step: 2600, Training Batch Accuracy: 93.8%
Global Step: 2700, Training Batch Accuracy: 87.5%
Global Step: 2800, Training Batch Accuracy: 90.6%
Global Step: 2900, Training Batch Accuracy: 92.2%
Global Step: 3000, Training Batch Accuracy: 96.9%
Global Step: 3100, Training Batch Accuracy: 96.9%
Global Step: 3200, Training Batch Accuracy: 92.2%
Global Step: 3300, Training Batch Accuracy: 95.3%
Global Step: 3400, Training Batch Accuracy: 93.8%
Global Step: 3500, Training Batch Accuracy: 89.1%
Global Step: 3600, Training Batch Accuracy: 89.1%
Global Step: 3700, Training Batch Accuracy: 95.3%
Global Step: 3800, Training Batch Accuracy: 98.4%
Global Step: 3900, Training Batch Accuracy: 89.1%
Global Step: 4000, Training Batch Accuracy: 92.2%
Global Step: 4100, Training Batch Accuracy: 96.9%
Global Step: 4200, Training Batch Accuracy: 100.0%
Global Step: 4300, Training Batch Accuracy: 100.0%
Global Step: 4400, Training Batch Accuracy: 90.6%
Global Step: 4500, Training Batch Accuracy: 95.3%
Global Step: 4600, Training Batch Accuracy: 96.9%
Global Step: 4700, Training Batch Accuracy: 96.9%
Global Step: 4800, Training Batch Accuracy: 96.9%
Global Step: 4900, Training Batch Accuracy: 92.2%
Global Step: 5000, Training Batch Accuracy: 98.4%
Global Step: 5100, Training Batch Accuracy: 93.8%
Global Step: 5200, Training Batch Accuracy: 92.2%
Global Step: 5300, Training Batch Accuracy: 98.4%
Global Step: 5400, Training Batch Accuracy: 98.4%
Global Step: 5500, Training Batch Accuracy: 100.0%
Global Step: 5600, Training Batch Accuracy: 92.2%
Global Step: 5700, Training Batch Accuracy: 98.4%
Global Step: 5800, Training Batch Accuracy: 92.2%
Global Step: 5900, Training Batch Accuracy: 92.2%
Global Step: 6000, Training Batch Accuracy: 93.8%
Global Step: 6100, Training Batch Accuracy: 95.3%
Global Step: 6200, Training Batch Accuracy: 98.4%
Global Step: 6300, Training Batch Accuracy: 98.4%
Global Step: 6400, Training Batch Accuracy: 96.9%
Global Step: 6500, Training Batch Accuracy: 95.3%
Global Step: 6600, Training Batch Accuracy: 96.9%
Global Step: 6700, Training Batch Accuracy: 96.9%
Global Step: 6800, Training Batch Accuracy: 92.2%
Global Step: 6900, Training Batch Accuracy: 96.9%
Global Step: 7000, Training Batch Accuracy: 100.0%
Global Step: 7100, Training Batch Accuracy: 95.3%
Global Step: 7200, Training Batch Accuracy: 96.9%
Global Step: 7300, Training Batch Accuracy: 96.9%
Global Step: 7400, Training Batch Accuracy: 95.3%
Global Step: 7500, Training Batch Accuracy: 95.3%
Global Step: 7600, Training Batch Accuracy: 93.8%
Global Step: 7700, Training Batch Accuracy: 93.8%
Global Step: 7800, Training Batch Accuracy: 95.3%
Global Step: 7900, Training Batch Accuracy: 95.3%
Global Step: 8000, Training Batch Accuracy: 93.8%
Global Step: 8100, Training Batch Accuracy: 95.3%
Global Step: 8200, Training Batch Accuracy: 98.4%
Global Step: 8300, Training Batch Accuracy: 93.8%
Global Step: 8400, Training Batch Accuracy: 98.4%
Global Step: 8500, Training Batch Accuracy: 96.9%
Global Step: 8600, Training Batch Accuracy: 96.9%
Global Step: 8700, Training Batch Accuracy: 98.4%
Global Step: 8800, Training Batch Accuracy: 95.3%
Global Step: 8900, Training Batch Accuracy: 98.4%
Global Step: 9000, Training Batch Accuracy: 98.4%
Global Step: 9100, Training Batch Accuracy: 98.4%
Global Step: 9200, Training Batch Accuracy: 96.9%
Global Step: 9300, Training Batch Accuracy: 100.0%
Global Step: 9400, Training Batch Accuracy: 90.6%
Global Step: 9500, Training Batch Accuracy: 92.2%
Global Step: 9600, Training Batch Accuracy: 98.4%
Global Step: 9700, Training Batch Accuracy: 96.9%
Global Step: 9800, Training Batch Accuracy: 98.4%
Global Step: 9900, Training Batch Accuracy: 98.4%
Global Step: 10000, Training Batch Accuracy: 100.0%
Time usage: 0:00:32
print_test_accuracy(show_example_errors=True,
                    show_confusion_matrix=True)
Accuracy on Test-Set: 90.7% (9069 / 10000)
Example errors:
Confusion Matrix:
[926 6 13 2 3 0 1 1 29 19] (0) airplane
[ 9 921 2 5 0 1 1 1 2 58] (1) automobile
[ 18 1 883 31 32 4 22 5 1 3] (2) bird
[ 7 2 19 855 23 57 24 9 2 2] (3) cat
[ 5 0 21 25 896 4 24 22 2 1] (4) deer
[ 2 0 12 97 18 843 10 15 1 2] (5) dog
[ 2 1 16 17 17 4 940 1 2 0] (6) frog
[ 8 0 10 19 28 14 1 914 2 4] (7) horse
[ 42 6 1 4 1 0 2 0 932 12] (8) ship
[ 6 19 2 2 1 0 1 1 9 959] (9) truck
(0) (1) (2) (3) (4) (5) (6) (7) (8) (9)
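The diagonal of the confusion matrix counts the correctly classified images for each class, so per-class accuracy follows from dividing it by the row-sums. A small optional snippet (it recomputes the predictions, because plot_confusion_matrix() keeps its matrix local):

# Per-class accuracy: correct predictions (diagonal) divided by
# the total number of test-images in each class (row-sums).
_, cls_pred = predict_cls_test()
cm = confusion_matrix(y_true=cls_test, y_pred=cls_pred)

for i, class_acc in enumerate(cm.diagonal() / cm.sum(axis=1)):
    print("({0}) {1}: {2:.1%}".format(i, class_names[i], class_acc))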
Close the TensorFlow session
We are now done using TensorFlow, so we close the sessions to release their resources. Note that there are two TensorFlow sessions to close, one inside each model object.
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# model.close()
# session.close()
Conclusion
In the earlier Tutorial #06, it took about 15 hours on a laptop to train a neural network for classifying the CIFAR-10 data set, and it achieved an accuracy of about 80% on the test set.
In this tutorial, we used the Inception model from Tutorial #07 to obtain a classification accuracy of about 90% on the CIFAR-10 data set. We fed all the images from the CIFAR-10 data set through the Inception model and took the transfer-values just before the final classification layer. We then built another neural network that takes these transfer-values as input and produces a CIFAR-10 class as output.
The CIFAR-10 data set consists of 60,000 images. On a computer without a GPU, it took about 6 hours to calculate the transfer-values of the Inception model for these images. Training a new classifier on top of these transfer-values then only took a few minutes. Adding up the two parts, this kind of transfer learning is more than twice as fast as training a neural network directly on the CIFAR-10 data set, and it gives higher classification accuracy.
Transfer learning with the Inception model is therefore useful for building an image classifier on your own data set.
Exercises
Below are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.
You may want to back up this Notebook before making any changes to it.
- Try using the whole training set in the PCA and t-SNE analyses. What happens?
- Try changing the neural network for the new classifier. What happens if you remove the fully-connected layer, or add more fully-connected layers? (A possible starting point is sketched after this list.)
- What happens if you perform more or fewer optimization iterations?
- What happens if you change the optimizer's learning_rate?
- What if you distort the CIFAR-10 images as in Tutorial #06? You will not be able to use the cache, because every image is different each time.
- Try using the MNIST data set instead of the CIFAR-10 data set.
- Explain to a friend how the program works.
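As a possible starting point for the exercise on changing the network, here is a sketch of the classifier with a second fully-connected layer (the layer sizes are arbitrary choices, not values from the original tutorial):

# Same classifier as above, but with an extra fully-connected layer.
x_pretty = pt.wrap(x)

with pt.defaults_scope(activation_fn=tf.nn.relu):
    y_pred, loss = x_pretty.\
        fully_connected(size=1024, name='layer_fc1').\
        fully_connected(size=512, name='layer_fc2').\
        softmax_classifier(num_classes=num_classes, labels=y_true)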