Header image from: github
This article mainly introduces PrettyTensor, which can be used to build neural networks quickly.
Note that the original was written in 2016; more convenient APIs exist today, and they will be covered in later posts.
Large parts of the text and code are identical to the previous tutorial; if you have already read that one, you can skip ahead to the PrettyTensor implementation section below.
by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube
Chinese translation by thrillerist / Github
If you repost this article, please include a link back to it.
Introduction
The previous tutorial showed how to implement a convolutional neural network directly in TensorFlow, which required some knowledge of the low-level workings of TensorFlow. It was somewhat complicated and easy to get wrong.
This tutorial shows how to use an add-on package for TensorFlow called PrettyTensor, which is also developed by Google. PrettyTensor offers a much simpler way of constructing neural networks in TensorFlow, allowing us to focus on the ideas we want to implement rather than on low-level implementation details. It also makes the code shorter and easier to read and modify.
Apart from constructing the graph with PrettyTensor, most of the code in this tutorial is the same as in Tutorial #02, with a few minor changes.
This tutorial builds on Tutorial #02; if you are new to TensorFlow, it is recommended to work through that tutorial first. You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor.
Flowchart
The chart below shows roughly how data flows in the convolutional neural network implemented further down. See the previous tutorial for a detailed description of convolution.
from IPython.display import Image
Image('images/02_network_flowchart.png')
The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled, so the image resolution is decreased from 28x28 to 14x14.
These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of the 16 input channels, and we need a set of filter-weights for each output channel of this layer. There are 36 output channels, so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again, to 7x7 pixels.
The output of the second convolutional layer is 36 images of 7x7 pixels each. These are flattened into a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each class, which is used to determine the class of the image, that is, which digit is depicted in the image.
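To make the shape bookkeeping above concrete, here is a tiny illustrative sketch (not part of the original notebook) that simply traces the sizes through the network; it assumes each 2x2 max-pooling halves the spatial resolution:
# Illustrative shape arithmetic for the network described above.
img_size = 28
size_after_conv1 = img_size // 2           # 28x28 -> 14x14 after the first pooling
size_after_conv2 = size_after_conv1 // 2   # 14x14 -> 7x7 after the second pooling
num_filters2 = 36
flat_size = size_after_conv2 * size_after_conv2 * num_filters2
print(flat_size)                           # 7 * 7 * 36 = 1764, the input to the fully-connected layer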
The convolutional filters are initially chosen at random, so the classification is also done randomly. The error between the predicted and the true class of the input image is measured using the cross-entropy. The optimizer then automatically propagates this error back through the convolutional network using the chain rule and updates the filter-weights so as to improve the classification quality. This is done iteratively thousands of times, until the classification error is sufficiently low.
Note that these particular filter-weights and intermediate images are the result of one optimization run and may look different when you run this code yourself.
Also note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means that, when implemented in TensorFlow, the flowchart actually has one more data-dimension.
Imports
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
# We also need PrettyTensor.
import prettytensor as pt
This was developed using Python 3.5.2 (Anaconda), with the following TensorFlow version:
tf.__version__
'0.12.0-rc0'
PrettyTensor version:
pt.__version__
'0.7.1'
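If PrettyTensor is not installed, it can normally be obtained from PyPI before running the imports above. The package name prettytensor below is my assumption of the PyPI name and is not stated in the original article:
!pip install prettytensor  # run in a Notebook cell, or run 'pip install prettytensor' in a terminal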
Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
The MNIST data-set has now been loaded; it consists of 70,000 images and associated labels (i.e. the classifications of the images). The data-set is split into three mutually independent sub-sets. In this tutorial we only use the training-set and the test-set.
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))複製程式碼
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
The class-labels are One-Hot encoded, which means that each label is a vector of length 10 in which all elements are zero except one. The index of that element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers of the test-set as integers, which are calculated as follows.
data.test.cls = np.argmax(data.test.labels, axis=1)
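As a quick sanity check (a small illustrative sketch, not part of the original notebook), we could print the first one-hot label together with its integer class-number; for the standard MNIST test-set the first image is expected to show the digit 7:
print(data.test.labels[0])  # one-hot vector: all zeros except a single 1 (expected at index 7)
print(data.test.cls[0])     # the corresponding integer class-number (expected 7)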
Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once here, so we can use these variables throughout the code instead of raw numbers.
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, writing the true and predicted classes below each image.
def plot_images(images, cls_true, cls_pred=None):
    assert len(images) == len(cls_true) == 9
    # Create figure with 3x3 sub-plots.
    fig, axes = plt.subplots(3, 3)
    fig.subplots_adjust(hspace=0.3, wspace=0.3)
    for i, ax in enumerate(axes.flat):
        # Plot image.
        ax.imshow(images[i].reshape(img_shape), cmap='binary')
        # Show true and predicted classes.
        if cls_pred is None:
            xlabel = "True: {0}".format(cls_true[i])
        else:
            xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
        # Show the classes as the label on the x-axis.
        ax.set_xlabel(xlabel)
        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])
    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Plot a few images to see if the data is correct.
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the graph's variables so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions, so the gradient of the entire graph can be derived using the chain-rule for derivatives.
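The following tiny, self-contained sketch (not part of the original notebook) illustrates both points: the graph is merely constructed first, and TensorFlow can derive its gradients automatically via the chain-rule:
# Build a tiny graph; nothing is computed at this point.
a = tf.placeholder(tf.float32)
b = a * a + 3.0
grad = tf.gradients(b, [a])   # db/da = 2*a, derived automatically
# Only executing the graph in a session performs the actual computation.
with tf.Session() as sess:
    print(sess.run([b, grad], feed_dict={a: 4.0}))   # expected: [19.0, [8.0]]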
TensorFlow can also take advantage of multi-core CPUs as well as GPUs, and Google has built special chips for TensorFlow called TPUs (Tensor Processing Units) that are even faster than GPUs.
A TensorFlow graph consists of the following parts, which are described in more detail below:
- Placeholder variables used for feeding input into the graph.
- Model variables that are going to be optimized so as to make the model perform better.
- The model, which is essentially just a mathematical function that calculates some output given the input in the placeholder variables and the model variables.
- A cost measure that can be used to guide the optimization of the variables.
- An optimization method which updates the variables of the model.
In addition, the TensorFlow graph may also contain various debugging statements, e.g. for logging data to be displayed with TensorBoard, which is not covered in this tutorial.
Placeholder variables
Placeholder variables serve as the input to the graph, and we may change them each time we execute the graph. We call this feeding the placeholder variables, which is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are fed into the TensorFlow graph. It is a so-called tensor, which just means a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images, with each image being a vector of length img_size_flat.
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
The convolutional layers expect x to be encoded as a 4-dim tensor, so we have to reshape it so its shape is [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size, and num_images is inferred automatically by setting the size of the first dimension to -1. The reshape operation is:
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
Next we define the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes], which means it may hold an arbitrary number of labels, and each label is a vector of length num_classes, which is 10 in this case.
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is just a TensorFlow operator, so nothing is calculated at this point.
y_true_cls = tf.argmax(y_true, dimension=1)
TensorFlow Implementation
This section shows the source-code from Tutorial #02 for implementing the convolutional neural network directly in TensorFlow. The code is not actually used in this Notebook; it is only shown here so it can easily be compared to the PrettyTensor implementation below.
The thing to note here is how much code there is, and the low-level details of how TensorFlow stores its data and performs the computations. It is easy to make mistakes even for small neural networks like this one.
Helper-functions
For the direct TensorFlow implementation we created a few helper-functions that were needed repeatedly when constructing the graph.
These two functions create new variables in the TensorFlow graph and initialize them with random values.
def new_weights(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
    return tf.Variable(tf.constant(0.05, shape=[length]))
The following helper-function creates a new convolutional layer. The input and output are 4-rank tensors. Note the low-level details of the TensorFlow API, such as the shape of the weight variable. It is easy to make a mistake here, which may result in strange error messages that are difficult to debug.
def new_conv_layer(input,              # The previous layer.
                   num_input_channels, # Num. channels in prev. layer.
                   filter_size,        # Width and height of filters.
                   num_filters,        # Number of filters.
                   use_pooling=True):  # Use 2x2 max-pooling.
    # Shape of the filter-weights for the convolution.
    # This format is determined by the TensorFlow API.
    shape = [filter_size, filter_size, num_input_channels, num_filters]
    # Create new weights aka. filters with the given shape.
    weights = new_weights(shape=shape)
    # Create new biases, one for each filter.
    biases = new_biases(length=num_filters)
    # Create the TensorFlow operation for convolution.
    # Note the strides are set to 1 in all dimensions.
    # The first and last stride must always be 1,
    # because the first is for the image-number and
    # the last is for the input-channel.
    # But e.g. strides=[1, 2, 2, 1] would mean that the filter
    # is moved 2 pixels across the x- and y-axis of the image.
    # The padding is set to 'SAME' which means the input image
    # is padded with zeroes so the size of the output is the same.
    layer = tf.nn.conv2d(input=input,
                         filter=weights,
                         strides=[1, 1, 1, 1],
                         padding='SAME')
    # Add the biases to the results of the convolution.
    # A bias-value is added to each filter-channel.
    layer += biases
    # Use pooling to down-sample the image resolution?
    if use_pooling:
        # This is 2x2 max-pooling, which means that we
        # consider 2x2 windows and select the largest value
        # in each window. Then we move 2 pixels to the next window.
        layer = tf.nn.max_pool(value=layer,
                               ksize=[1, 2, 2, 1],
                               strides=[1, 2, 2, 1],
                               padding='SAME')
    # Rectified Linear Unit (ReLU).
    # It calculates max(x, 0) for each input pixel x.
    # This adds some non-linearity to the formula and allows us
    # to learn more complicated functions.
    layer = tf.nn.relu(layer)
    # Note that ReLU is normally executed before the pooling,
    # but since relu(max_pool(x)) == max_pool(relu(x)) we can
    # save 75% of the relu-operations by max-pooling first.
    # We return both the resulting layer and the filter-weights
    # because we will plot the weights later.
    return layer, weights
The following helper-function converts a 4-dim tensor to a 2-dim tensor, so we can add a fully-connected layer after the convolutional layers.
def flatten_layer(layer):
    # Get the shape of the input layer.
    layer_shape = layer.get_shape()
    # The shape of the input layer is assumed to be:
    # layer_shape == [num_images, img_height, img_width, num_channels]
    # The number of features is: img_height * img_width * num_channels
    # We can use a function from TensorFlow to calculate this.
    num_features = layer_shape[1:4].num_elements()
    # Reshape the layer to [num_images, num_features].
    # Note that we just set the size of the second dimension
    # to num_features and the size of the first dimension to -1
    # which means the size in that dimension is calculated
    # so the total size of the tensor is unchanged from the reshaping.
    layer_flat = tf.reshape(layer, [-1, num_features])
    # The shape of the flattened layer is now:
    # [num_images, img_height * img_width * num_channels]
    # Return both the flattened layer and the number of features.
    return layer_flat, num_features
The next helper-function creates a fully-connected layer.
def new_fc_layer(input,          # The previous layer.
                 num_inputs,     # Num. inputs from prev. layer.
                 num_outputs,    # Num. outputs.
                 use_relu=True): # Use Rectified Linear Unit (ReLU)?
    # Create new weights and biases.
    weights = new_weights(shape=[num_inputs, num_outputs])
    biases = new_biases(length=num_outputs)
    # Calculate the layer as the matrix multiplication of
    # the input and weights, and then add the bias-values.
    layer = tf.matmul(input, weights) + biases
    # Use ReLU?
    if use_relu:
        layer = tf.nn.relu(layer)
    return layer
Constructing the Graph
The helper-functions above would now be used to construct the convolutional neural network. Without these functions, the code would be long and hard to understand.
Note that we will not actually run the following code; it is only shown here so it can be compared to the PrettyTensor code below.
The previous tutorial used constants that were defined in one place, so they were easy to change. For example, instead of passing filter_size=5 directly as an argument to new_conv_layer(), we would pass filter_size=filter_size1 and define filter_size1=5 elsewhere. That makes it easy to change all the constants in a single place.
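For comparison, those constants from Tutorial #02 could look like the following sketch (the names filter_size1, num_filters1, etc. come from that tutorial and are not defined anywhere in this Notebook):
# Convolutional Layer 1.
filter_size1 = 5          # Convolution filters are 5 x 5 pixels.
num_filters1 = 16         # There are 16 of these filters.
# Convolutional Layer 2.
filter_size2 = 5          # Convolution filters are 5 x 5 pixels.
num_filters2 = 36         # There are 36 of these filters.
# Fully-connected layer.
fc_size = 128             # Number of neurons in the fully-connected layer.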
if False:  # Don't execute this! Just show it for easy comparison.
    # First convolutional layer.
    layer_conv1, weights_conv1 = \
        new_conv_layer(input=x_image,
                       num_input_channels=num_channels,
                       filter_size=5,
                       num_filters=16,
                       use_pooling=True)
    # Second convolutional layer.
    layer_conv2, weights_conv2 = \
        new_conv_layer(input=layer_conv1,
                       num_input_channels=16,
                       filter_size=5,
                       num_filters=36,
                       use_pooling=True)
    # Flatten layer.
    layer_flat, num_features = flatten_layer(layer_conv2)
    # First fully-connected layer.
    layer_fc1 = new_fc_layer(input=layer_flat,
                             num_inputs=num_features,
                             num_outputs=128,
                             use_relu=True)
    # Second fully-connected layer.
    layer_fc2 = new_fc_layer(input=layer_fc1,
                             num_inputs=128,
                             num_outputs=num_classes,
                             use_relu=False)
    # Predicted class-label.
    y_pred = tf.nn.softmax(layer_fc2)
    # Cross-entropy for the classification of each image.
    cross_entropy = \
        tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
                                                labels=y_true)
    # Loss aka. cost-measure.
    # This is the scalar value that must be minimized.
    loss = tf.reduce_mean(cross_entropy)
PrettyTensor Implementation
This section shows how to implement the exact same convolutional neural network using PrettyTensor.
The basic idea is to wrap the input tensor x_image in a PrettyTensor object, which has helper-functions for adding new layers, so the entire neural network can be built this way. It is somewhat similar to the helper-functions we implemented above, but it is even simpler because PrettyTensor keeps track of each layer's input and output dimensions, and so on.
x_pretty = pt.wrap(x_image)
Now that we have wrapped the input image in a PrettyTensor object, we can add the convolutional and fully-connected layers in just a few lines of code.
Note that, inside the with-block, pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed in the block, so each of these layers uses Rectified Linear Units (ReLU). The defaults_scope makes it easy to change arguments for all of the layers.
with pt.defaults_scope(activation_fn=tf.nn.relu):
    y_pred, loss = x_pretty.\
        conv2d(kernel=5, depth=16, name='layer_conv1').\
        max_pool(kernel=2, stride=2).\
        conv2d(kernel=5, depth=36, name='layer_conv2').\
        max_pool(kernel=2, stride=2).\
        flatten().\
        fully_connected(size=128, name='layer_fc1').\
        softmax_classifier(num_classes=num_classes, labels=y_true)
That's it! We have now created the exact same convolutional neural network in a few simple lines of code, where the direct TensorFlow implementation required a large amount of rather complicated code.
Using PrettyTensor instead of TensorFlow, we can clearly see how the network is constructed and how the data flows through it. This allows us to focus on the key ideas of the neural network rather than on low-level implementation details. It is simple and elegant!
Getting the Weights
Unfortunately, not everything is elegant when using PrettyTensor.
Further below, we want to plot the weights of the convolutional layers. In the direct TensorFlow implementation we created the variables ourselves, so we could refer to them directly. But when the network is constructed using PrettyTensor, all the variables of the layers are created indirectly by PrettyTensor. We therefore have to retrieve the variables from TensorFlow.
We used the names layer_conv1 and layer_conv2 for the two convolutional layers. These are also called variable scopes (not to be confused with the defaults_scope described above). PrettyTensor automatically names the variables it creates for each layer, so we can retrieve a layer's weights using the layer's scope-name and the variable-name.
The implementation is somewhat awkward because we have to use the TensorFlow function get_variable(), which was really designed for another purpose: creating a new variable or re-using an existing one. The easiest approach is to make the following helper-function.
def get_weights_variable(layer_name):
    # Retrieve an existing variable named 'weights' in the scope
    # with the given layer_name.
    # This is awkward because the TensorFlow function was
    # really intended for another purpose.
    with tf.variable_scope(layer_name, reuse=True):
        variable = tf.get_variable('weights')
    return variable
Using this helper-function we can retrieve the variables. These are TensorFlow objects. To get the contents of a variable you must do something like contents = session.run(weights_conv1), which is demonstrated further below.
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
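As a small sanity check (an illustrative sketch, not part of the original notebook), the static shapes of the retrieved variables can be inspected even before a session exists; they should match the filter shapes [filter_size, filter_size, num_input_channels, num_filters]:
print(weights_conv1.get_shape())   # expected: (5, 5, 1, 16)
print(weights_conv2.get_shape())   # expected: (5, 5, 16, 36)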
Optimization Method
PrettyTensor gave us the predicted class-labels (y_pred) as well as a loss-measure that must be minimized, so as to improve the neural network's ability to classify the input images.
The documentation for PrettyTensor does not say whether its loss-measure is cross-entropy or something else. We now use the AdamOptimizer to minimize this loss.
Note that optimization is not performed at this point. In fact, nothing is calculated at all; we just add the optimizer-object to the TensorFlow graph so it can be run later.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
Performance Measures
We need a few more performance measures to display the progress to the user.
First we calculate the predicted class-number from the network's output y_pred, which is a vector with 10 elements. The class-number is the index of the largest element.
y_pred_cls = tf.argmax(y_pred, dimension=1)
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
session = tf.Session()
Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
session.run(tf.global_variables_initializer())
Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It would take a long time to calculate the gradient of the model using all of these images, so we use Stochastic Gradient Descent, which only uses a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because it runs out of RAM, you can try lowering this batch-size, but you may then need to perform more optimization iterations.
train_batch_size = 64
This function performs a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set, and TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
# Counter for total number of iterations performed so far.
total_iterations = 0

def optimize(num_iterations):
    # Ensure we update the global variable rather than a local copy.
    global total_iterations
    # Start-time used for printing time-usage below.
    start_time = time.time()
    for i in range(total_iterations,
                   total_iterations + num_iterations):
        # Get a batch of training examples.
        # x_batch now holds a batch of images and
        # y_true_batch are the true labels for those images.
        x_batch, y_true_batch = data.train.next_batch(train_batch_size)
        # Put the batch into a dict with the proper names
        # for placeholder variables in the TensorFlow graph.
        feed_dict_train = {x: x_batch,
                           y_true: y_true_batch}
        # Run the optimizer using this batch of training data.
        # TensorFlow assigns the variables in feed_dict_train
        # to the placeholder variables and then runs the optimizer.
        session.run(optimizer, feed_dict=feed_dict_train)
        # Print status every 100 iterations.
        if i % 100 == 0:
            # Calculate the accuracy on the training-set.
            acc = session.run(accuracy, feed_dict=feed_dict_train)
            # Message for printing.
            msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
            # Print it.
            print(msg.format(i + 1, acc))
    # Update the total number of iterations performed.
    total_iterations += num_iterations
    # Ending time.
    end_time = time.time()
    # Difference between start and end-times.
    time_dif = end_time - start_time
    # Print the time-usage.
    print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
def plot_example_errors(cls_pred, correct):
    # This function is called from print_test_accuracy() below.
    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.
    # correct is a boolean array whether the predicted class
    # is equal to the true class for each image in the test-set.
    # Negate the boolean array.
    incorrect = (correct == False)
    # Get the images from the test-set that have been
    # incorrectly classified.
    images = data.test.images[incorrect]
    # Get the predicted classes for those images.
    cls_pred = cls_pred[incorrect]
    # Get the true classes for those images.
    cls_true = data.test.cls[incorrect]
    # Plot the first 9 images.
    plot_images(images=images[0:9],
                cls_true=cls_true[0:9],
                cls_pred=cls_pred[0:9])
Helper-function to plot the confusion matrix
def plot_confusion_matrix(cls_pred):
    # This is called from print_test_accuracy() below.
    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.
    # Get the true classifications for the test-set.
    cls_true = data.test.cls
    # Get the confusion matrix using sklearn.
    cm = confusion_matrix(y_true=cls_true,
                          y_pred=cls_pred)
    # Print the confusion matrix as text.
    print(cm)
    # Plot the confusion matrix as an image.
    plt.matshow(cm)
    # Make various adjustments to the plot.
    plt.colorbar()
    tick_marks = np.arange(num_classes)
    plt.xticks(tick_marks, range(num_classes))
    plt.yticks(tick_marks, range(num_classes))
    plt.xlabel('Predicted')
    plt.ylabel('True')
    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Helper-function to show the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used by calling the above helper-functions directly from this function, so the classifications don't have to be recalculated by each function.
Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If your computer has little RAM and it crashes, you can try lowering the batch-size.
# Split the test-set into smaller batches of this size.
test_batch_size = 256

def print_test_accuracy(show_example_errors=False,
                        show_confusion_matrix=False):
    # Number of images in the test-set.
    num_test = len(data.test.images)
    # Allocate an array for the predicted classes which
    # will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_test, dtype=np.int)
    # Now calculate the predicted classes for the batches.
    # We will just iterate through all the batches.
    # There might be a more clever and Pythonic way of doing this.
    # The starting index for the next batch is denoted i.
    i = 0
    while i < num_test:
        # The ending index for the next batch is denoted j.
        j = min(i + test_batch_size, num_test)
        # Get the images from the test-set between index i and j.
        images = data.test.images[i:j, :]
        # Get the associated labels.
        labels = data.test.labels[i:j, :]
        # Create a feed-dict with these images and labels.
        feed_dict = {x: images,
                     y_true: labels}
        # Calculate the predicted class using TensorFlow.
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
        # Set the start-index for the next batch to the
        # end-index of the current batch.
        i = j
    # Convenience variable for the true class-numbers of the test-set.
    cls_true = data.test.cls
    # Create a boolean array whether each image is correctly classified.
    correct = (cls_true == cls_pred)
    # Calculate the number of correctly classified images.
    # When summing a boolean array, False means 0 and True means 1.
    correct_sum = correct.sum()
    # Classification accuracy is the number of correctly classified
    # images divided by the total number of images in the test-set.
    acc = float(correct_sum) / num_test
    # Print the accuracy.
    msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
    print(msg.format(acc, correct_sum, num_test))
    # Plot some examples of mis-classifications, if desired.
    if show_example_errors:
        print("Example errors:")
        plot_example_errors(cls_pred=cls_pred, correct=correct)
    # Plot the confusion matrix, if desired.
    if show_confusion_matrix:
        print("Confusion Matrix:")
        plot_confusion_matrix(cls_pred=cls_pred)
Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so the images are just classified randomly.
print_test_accuracy()
Accuracy on Test-Set: 9.1% (909 / 10000)
Performance after 1 optimization iteration
The classification accuracy barely improves after a single optimization iteration, because the learning-rate for the optimizer is set very low.
optimize(num_iterations=1)
Optimization Iteration: 1, Training Accuracy: 6.2%
Time usage: 0:00:00
print_test_accuracy()
Accuracy on Test-Set: 8.9% (892 / 10000)
Performance after 100 optimization iterations
After 100 optimization iterations, the model has significantly improved its classification accuracy.
optimize(num_iterations=99)  # We already performed 1 iteration above.
Time usage: 0:00:00
print_test_accuracy(show_example_errors=True)
Accuracy on Test-Set: 83.9% (8393 / 10000)
Example errors:
Performance after 1000 optimization iterations
After 1000 optimization iterations, the model's accuracy on the test-set exceeds 90%.
optimize(num_iterations=900)  # We performed 100 iterations above.
Optimization Iteration: 101, Training Accuracy: 93.8%
Optimization Iteration: 201, Training Accuracy: 89.1%
Optimization Iteration: 301, Training Accuracy: 85.9%
Optimization Iteration: 401, Training Accuracy: 87.5%
Optimization Iteration: 501, Training Accuracy: 92.2%
Optimization Iteration: 601, Training Accuracy: 95.3%
Optimization Iteration: 701, Training Accuracy: 95.3%
Optimization Iteration: 801, Training Accuracy: 90.6%
Optimization Iteration: 901, Training Accuracy: 98.4%
Time usage: 0:00:03
print_test_accuracy(show_example_errors=True)
Accuracy on Test-Set: 96.3% (9634 / 10000)
Example errors:
Performance after 10,000 optimization iterations
After 10,000 optimization iterations, the model's classification accuracy on the test-set is close to 99%.
optimize(num_iterations=9000)  # We performed 1000 iterations above.
Optimization Iteration: 1001, Training Accuracy: 98.4%
Optimization Iteration: 1101, Training Accuracy: 95.3%
Optimization Iteration: 1201, Training Accuracy: 98.4%
Optimization Iteration: 1301, Training Accuracy: 96.9%
Optimization Iteration: 1401, Training Accuracy: 100.0%
Optimization Iteration: 1501, Training Accuracy: 95.3%
Optimization Iteration: 1601, Training Accuracy: 96.9%
Optimization Iteration: 1701, Training Accuracy: 96.9%
Optimization Iteration: 1801, Training Accuracy: 98.4%
Optimization Iteration: 1901, Training Accuracy: 96.9%
Optimization Iteration: 2001, Training Accuracy: 98.4%
Optimization Iteration: 2101, Training Accuracy: 95.3%
Optimization Iteration: 2201, Training Accuracy: 98.4%
Optimization Iteration: 2301, Training Accuracy: 98.4%
Optimization Iteration: 2401, Training Accuracy: 98.4%
Optimization Iteration: 2501, Training Accuracy: 93.8%
Optimization Iteration: 2601, Training Accuracy: 98.4%
Optimization Iteration: 2701, Training Accuracy: 98.4%
Optimization Iteration: 2801, Training Accuracy: 95.3%
Optimization Iteration: 2901, Training Accuracy: 98.4%
Optimization Iteration: 3001, Training Accuracy: 98.4%
Optimization Iteration: 3101, Training Accuracy: 100.0%
Optimization Iteration: 3201, Training Accuracy: 96.9%
Optimization Iteration: 3301, Training Accuracy: 100.0%
Optimization Iteration: 3401, Training Accuracy: 98.4%
Optimization Iteration: 3501, Training Accuracy: 96.9%
Optimization Iteration: 3601, Training Accuracy: 98.4%
Optimization Iteration: 3701, Training Accuracy: 96.9%
Optimization Iteration: 3801, Training Accuracy: 100.0%
Optimization Iteration: 3901, Training Accuracy: 98.4%
Optimization Iteration: 4001, Training Accuracy: 96.9%
Optimization Iteration: 4101, Training Accuracy: 98.4%
Optimization Iteration: 4201, Training Accuracy: 100.0%
Optimization Iteration: 4301, Training Accuracy: 100.0%
Optimization Iteration: 4401, Training Accuracy: 100.0%
Optimization Iteration: 4501, Training Accuracy: 100.0%
Optimization Iteration: 4601, Training Accuracy: 98.4%
Optimization Iteration: 4701, Training Accuracy: 96.9%
Optimization Iteration: 4801, Training Accuracy: 95.3%
Optimization Iteration: 4901, Training Accuracy: 100.0%
Optimization Iteration: 5001, Training Accuracy: 96.9%
Optimization Iteration: 5101, Training Accuracy: 100.0%
Optimization Iteration: 5201, Training Accuracy: 98.4%
Optimization Iteration: 5301, Training Accuracy: 98.4%
Optimization Iteration: 5401, Training Accuracy: 100.0%
Optimization Iteration: 5501, Training Accuracy: 98.4%
Optimization Iteration: 5601, Training Accuracy: 96.9%
Optimization Iteration: 5701, Training Accuracy: 100.0%
Optimization Iteration: 5801, Training Accuracy: 96.9%
Optimization Iteration: 5901, Training Accuracy: 100.0%
Optimization Iteration: 6001, Training Accuracy: 98.4%
Optimization Iteration: 6101, Training Accuracy: 98.4%
Optimization Iteration: 6201, Training Accuracy: 98.4%
Optimization Iteration: 6301, Training Accuracy: 98.4%
Optimization Iteration: 6401, Training Accuracy: 100.0%
Optimization Iteration: 6501, Training Accuracy: 100.0%
Optimization Iteration: 6601, Training Accuracy: 100.0%
Optimization Iteration: 6701, Training Accuracy: 100.0%
Optimization Iteration: 6801, Training Accuracy: 96.9%
Optimization Iteration: 6901, Training Accuracy: 100.0%
Optimization Iteration: 7001, Training Accuracy: 100.0%
Optimization Iteration: 7101, Training Accuracy: 100.0%
Optimization Iteration: 7201, Training Accuracy: 100.0%
Optimization Iteration: 7301, Training Accuracy: 96.9%
Optimization Iteration: 7401, Training Accuracy: 100.0%
Optimization Iteration: 7501, Training Accuracy: 100.0%
Optimization Iteration: 7601, Training Accuracy: 96.9%
Optimization Iteration: 7701, Training Accuracy: 100.0%
Optimization Iteration: 7801, Training Accuracy: 100.0%
Optimization Iteration: 7901, Training Accuracy: 100.0%
Optimization Iteration: 8001, Training Accuracy: 98.4%
Optimization Iteration: 8101, Training Accuracy: 100.0%
Optimization Iteration: 8201, Training Accuracy: 100.0%
Optimization Iteration: 8301, Training Accuracy: 100.0%
Optimization Iteration: 8401, Training Accuracy: 100.0%
Optimization Iteration: 8501, Training Accuracy: 98.4%
Optimization Iteration: 8601, Training Accuracy: 100.0%
Optimization Iteration: 8701, Training Accuracy: 100.0%
Optimization Iteration: 8801, Training Accuracy: 100.0%
Optimization Iteration: 8901, Training Accuracy: 100.0%
Optimization Iteration: 9001, Training Accuracy: 98.4%
Optimization Iteration: 9101, Training Accuracy: 98.4%
Optimization Iteration: 9201, Training Accuracy: 100.0%
Optimization Iteration: 9301, Training Accuracy: 100.0%
Optimization Iteration: 9401, Training Accuracy: 98.4%
Optimization Iteration: 9501, Training Accuracy: 100.0%
Optimization Iteration: 9601, Training Accuracy: 100.0%
Optimization Iteration: 9701, Training Accuracy: 100.0%
Optimization Iteration: 9801, Training Accuracy: 98.4%
Optimization Iteration: 9901, Training Accuracy: 100.0%
Time usage: 0:00:27
print_test_accuracy(show_example_errors=True,
                    show_confusion_matrix=True)
Accuracy on Test-Set: 98.8% (9881 / 10000)
Example errors:
Confusion Matrix:
[[ 975 0 0 0 0 0 1 1 3 0]
[ 0 1127 2 0 0 0 1 2 3 0]
[ 2 2 1019 1 1 0 1 2 4 0]
[ 0 0 0 1005 0 1 0 1 3 0]
[ 0 0 0 0 977 0 1 0 1 3]
[ 2 0 0 13 0 870 1 0 6 0]
[ 5 2 0 0 1 3 943 0 4 0]
[ 0 2 8 2 1 0 0 1007 1 7]
[ 2 0 2 3 1 1 0 0 964 1]
[ 0 2 0 4 5 1 0 1 2 994]]
Visualization of Weights and Layers
When the convolutional neural network was implemented directly in TensorFlow, we could easily plot both the convolutional weights and the images output by the different layers. When using PrettyTensor, we can still retrieve the weights as shown above, but we cannot easily retrieve the images output by the convolutional layers. So below we only plot the weights.
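If you really need the layer outputs, one possible workaround (shown here only as an untested sketch, not part of the original notebook) is to look up the layer's output tensor by name in the default graph and run it. The operation name used below is an assumption; the actual name depends on how PrettyTensor names its internal operations:
# Hypothetical example: fetch the output of the first conv-layer for one image.
# The name 'layer_conv1/Relu:0' is an assumption; list the available names with
# [op.name for op in tf.get_default_graph().get_operations()] to find the right one.
layer1_output = tf.get_default_graph().get_tensor_by_name('layer_conv1/Relu:0')
values = session.run(layer1_output, feed_dict={x: data.test.images[0:1]})
print(values.shape)   # presumably (1, 28, 28, 16) before the max-pooling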
Helper-function for plotting convolutional weights
def plot_conv_weights(weights, input_channel=0):
    # Assume weights are TensorFlow ops for 4-dim variables
    # e.g. weights_conv1 or weights_conv2.
    # Retrieve the values of the weight-variables from TensorFlow.
    # A feed-dict is not necessary because nothing is calculated.
    w = session.run(weights)
    # Get the lowest and highest values for the weights.
    # This is used to correct the colour intensity across
    # the images so they can be compared with each other.
    w_min = np.min(w)
    w_max = np.max(w)
    # Number of filters used in the conv. layer.
    num_filters = w.shape[3]
    # Number of grids to plot.
    # Rounded-up, square-root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_filters))
    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)
    # Plot all the filter-weights.
    for i, ax in enumerate(axes.flat):
        # Only plot the valid filter-weights.
        if i < num_filters:
            # Get the weights for the i'th filter of the input channel.
            # See new_conv_layer() for details on the format
            # of this 4-dim tensor.
            img = w[:, :, input_channel, i]
            # Plot image.
            ax.imshow(img, vmin=w_min, vmax=w_max,
                      interpolation='nearest', cmap='seismic')
        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])
    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Convolution Layer 1
Now plot the filter-weights for the first convolutional layer.
Positive weights are red and negative weights are blue.
plot_conv_weights(weights=weights_conv1)
Convolution Layer 2
Now plot the filter-weights for the second convolutional layer.
The first convolutional layer has 16 output channels, which means the second convolutional layer has 16 input channels, and it has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first input channel.
Again, positive weights are red and negative weights are blue.
plot_conv_weights(weights=weights_conv2, input_channel=0)
The second convolutional layer has 16 input channels in total, so we could make another 15 plots of filter-weights like this one. Here we just plot one more, for the second input channel.
plot_conv_weights(weights=weights_conv2, input_channel=1)
Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Conclusion
Compared to a direct implementation in TensorFlow, PrettyTensor allows a neural network to be implemented with much simpler code. This lets you focus on your own ideas rather than on low-level implementation details. It makes the code easier to understand and reduces the chance of making mistakes.
However, PrettyTensor has some inconsistent and awkward design choices, and its documentation is brief and confusing, so it is not easy to learn. Hopefully this will improve in the future (this was written in July 2016).
There are also alternatives to PrettyTensor, including TFLearn and Keras.
Exercises
Below are a few suggested exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.
You may want to back up this Notebook before making any changes to it.
- Change the activation function to sigmoid for all the layers.
- Use sigmoid in some layers and relu in others. Can you use a defaults_scope here?
- Use l2loss in all the layers. Then try using it only in some of the layers.
- Use PrettyTensor's reshape function instead of TensorFlow's. Is one of them better than the other?
- Add a dropout-layer after the fully-connected layer. If you want a different keep_prob during training and testing, you will need a placeholder variable and set it in the feed-dict.
- Replace the 2x2 max-pooling layers with stride=2 in the convolutional layers. Is there a difference in classification accuracy? What if you optimize them several times? The difference is random, so how would you measure whether there really is a difference? What are the pros and cons of using max-pooling vs. stride in the convolutional layers?
- Change the parameters for the layers, e.g. the kernel, depth, size, etc. How do the time usage and classification accuracy differ?
- Add and remove some convolutional and fully-connected layers.
- What is the simplest network you can design that still performs well?
- Retrieve the bias-values for the convolutional layers and print them. See the implementation of get_weights_variable() for inspiration.
- Remake the program yourself without looking at this source-code.
- Explain to a friend how the program works.