This tutorial demonstrates how to find adversarial noise for MNIST images, and how to make a neural network immune to such noise.
01 - Simple Linear Model | 02 - Convolutional Neural Network | 03 - PrettyTensor | 04 - Save & Restore
05 - Ensemble Learning | 06 - CIFAR-10 | 07 - Inception Model | 08 - Transfer Learning
09 - Video Data | 11 - Adversarial Examples
by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube
Chinese translation by thrillerist / GitHub
If you repost this, please include a link to this article.
Introduction
The previous Tutorial #11 showed how to find adversarial examples for state-of-the-art neural networks. These cause the networks to mis-classify images even though they look identical to the human eye. For example, an image of a parrot was mis-classified as a bookcase after adversarial noise was added, even though the image looked completely unchanged to the human eye.
Tutorial #11 found the adversarial noise through an optimization process for each individual image. Because the noise was generated specifically for a single image, it may not be general and may not work on other images.
This tutorial will instead find adversarial noise that causes nearly all input images to be mis-classified as a desired target-class. The MNIST data-set of hand-written digits is used as an example. The adversarial noise is now clearly visible to the human eye, but humans can still easily identify the digits, whereas the neural network mis-classifies almost all of the images.
In this tutorial we will also try to make the neural network immune to adversarial noise.
Tutorial #11 used NumPy for the adversarial optimization. In this tutorial we will implement the optimization process directly in TensorFlow. This is faster, especially when using a GPU, because the data does not have to be copied to and from the GPU in every iteration.
It is recommended that you study Tutorial #11 first. You should also be roughly familiar with neural networks in general, see Tutorials #01 and #02.
Flowchart
The following chart shows roughly how the data flows in the convolutional neural network implemented below.
The example shows an input image of the digit 7. Adversarial noise is then added to the image. Red noise-points are positive values, which make the pixels darker; blue noise-points are negative values, which make the input image lighter at those points.
The noisy image is fed to the neural network, which then outputs a predicted digit. In this case the adversarial noise makes the neural network believe that this image of the digit 7 shows the digit 3. The noise is clearly visible to the human eye, but a human can still easily identify the digit 7.
It is remarkable that a single noise pattern can cause the neural network to mis-classify almost all input images as the desired target-class.
There are two separate optimization procedures in this neural network. First we optimize the variables of the neural network so it can classify the images in the training-set. This is the normal optimization procedure for a neural network. Once the classification accuracy is high enough, we switch to the second optimization procedure, which finds a single pattern of adversarial noise that causes all input images to be mis-classified as the given target-class.
The two optimization procedures are completely separate. The first procedure only modifies the variables of the neural network, while the second procedure only modifies the adversarial noise.
from IPython.display import Image
Image('images/12_adversarial_noise_flowchart.png')
Imports
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
# We also need PrettyTensor.
import prettytensor as pt
This was developed using Python 3.5.2 (Anaconda) with TensorFlow version:
tf.__version__
'0.12.0-rc0'
PrettyTensor version:
pt.__version__
'0.7.1'
Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. the classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training-set and the test-set in this tutorial.
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
Size of:
- Training-set:		55000
- Test-set:		10000
- Validation-set:	5000
The class-labels are One-Hot encoded, which means that each label is a vector of length 10 with all elements being zero except for one. The index of that one element is the class-number, i.e. the digit shown in the corresponding image. We also need the class-numbers of the test-set as integers, so we calculate them now.

data.test.cls = np.argmax(data.test.labels, axis=1)
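As a minimal standalone illustration (using hypothetical label data, not the MNIST arrays themselves), np.argmax recovers the class-number from a One-Hot encoded label:

```python
import numpy as np

# Two hypothetical One-Hot encoded labels for classes 3 and 7.
labels = np.array([[0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]])

# The index of the single 1-element is the class-number.
cls = np.argmax(labels, axis=1)
print(cls)  # [3 7]
```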
Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once, so we can use these variables in the code instead of numbers.
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
Helper-function for plotting images
This function is used to plot 9 images in a 3x3 grid, and write the true and predicted classes below each image. If noise is supplied, it is added to all the images.
def plot_images(images, cls_true, cls_pred=None, noise=0.0):
    assert len(images) == len(cls_true) == 9

    # Create figure with 3x3 sub-plots.
    fig, axes = plt.subplots(3, 3)
    fig.subplots_adjust(hspace=0.3, wspace=0.3)

    for i, ax in enumerate(axes.flat):
        # Get the i'th image and reshape the array.
        image = images[i].reshape(img_shape)

        # Add the adversarial noise to the image.
        image += noise

        # Ensure the noisy pixel-values are between 0 and 1.
        image = np.clip(image, 0.0, 1.0)

        # Plot image.
        ax.imshow(image,
                  cmap='binary', interpolation='nearest')

        # Show true and predicted classes.
        if cls_pred is None:
            xlabel = "True: {0}".format(cls_true[i])
        else:
            xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])

        # Show the classes as the label on the x-axis.
        ax.set_xlabel(xlabel)

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Plot a few images to see if the data is correct.
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
TensorFlow Graph
We will now construct the computational graph for the neural network using TensorFlow and PrettyTensor. As usual, we create placeholder variables for the input images, which are fed into the graph. The adversarial noise is then added to the input images, and the noisy images are used as input to the convolutional neural network.
The network has two separate optimization procedures: the normal optimization of the variables of the neural network itself, and another optimization procedure for the adversarial noise. Both optimization procedures are implemented directly in TensorFlow.
Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph, which we may change each time we execute the graph. We call this feeding the placeholder variables.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. It is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images, with each image being a vector of length img_size_flat.

x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
The convolutional layers expect x to be encoded as a 4-dim tensor, so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size, and num_images is inferred automatically when the first dimension is set to -1. The reshape operation is:

x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
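A small NumPy sketch (with a made-up batch size, not part of the tutorial itself) of how the -1 dimension is inferred when reshaping flattened images:

```python
import numpy as np

img_size = 28
num_channels = 1

# A hypothetical batch of 5 flattened images.
batch = np.zeros((5, img_size * img_size))

# -1 lets the first dimension (the number of images) be inferred.
batch_4d = batch.reshape(-1, img_size, img_size, num_channels)
print(batch_4d.shape)  # (5, 28, 28, 1)
```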
Next we define the placeholder variable for the true labels associated with the images that are input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes], which means that it may hold an arbitrary number of labels, with each label being a vector of length num_classes, which is 10 in this case.

y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is just a TensorFlow operator, so nothing is actually calculated at this point.

y_true_cls = tf.argmax(y_true, dimension=1)
Adversarial Noise
The pixel-values of the input images are between 0.0 and 1.0. The adversarial noise consists of values that are added to or subtracted from the pixels of the input image.
The limit of the adversarial noise is set to 0.35, so the noise will be between ±0.35.

noise_limit = 0.35
The optimizer for the adversarial noise will try to minimize two loss-measures: (1) the normal loss-measure for the neural network, so we find the noise that gives the best classification accuracy for the target-class; and (2) the so-called L2-loss-measure, which keeps the noise as low as possible.
The following weight determines how important the L2-loss is compared to the normal loss-measure. An L2-weight close to zero usually performs best.

noise_l2_weight = 0.02
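For intuition, tf.nn.l2_loss computes half the sum of the squared elements; a minimal NumPy sketch of the weighted noise penalty, using a made-up noise array that is not part of the tutorial:

```python
import numpy as np

noise_l2_weight = 0.02

# Hypothetical adversarial noise values.
noise = np.array([0.1, -0.2, 0.3])

# tf.nn.l2_loss(x) is defined as sum(x**2) / 2.
l2_loss = np.sum(noise ** 2) / 2

# The penalty added to the adversarial loss-function.
weighted = noise_l2_weight * l2_loss
print(round(weighted, 6))  # 0.0014
```

A larger weight would push the optimizer toward smaller noise at the cost of classification accuracy for the target-class.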
When we create the variable for the noise, we must tell TensorFlow which variable-collections it belongs to, so we can later tell the two optimizers which variables to update.
First we define a name for our new variable-collection. This is just a string.

ADVERSARY_VARIABLES = 'adversary_variables'
Next we create a list of the collections that the noise variable belongs to. If we add the noise variable to the collection tf.GraphKeys.VARIABLES, it will also get initialized with all the other variables in the TensorFlow graph, but it will not get optimized. This is a bit confusing.

collections = [tf.GraphKeys.VARIABLES, ADVERSARY_VARIABLES]
Now we can create the new variable for the adversarial noise. It will be initialized to zero. It will not be trainable, so it will not be optimized along with the other variables of the neural network. This allows us to create two separate optimization procedures.

x_noise = tf.Variable(tf.zeros([img_size, img_size, num_channels]),
                      name='x_noise', trainable=False,
                      collections=collections)
The adversarial noise will be limited/clipped to the noise-limit that we set above. Note that this is not actually executed in the graph at this point; it will be executed after the optimization-step, see further below.

x_noise_clip = tf.assign(x_noise, tf.clip_by_value(x_noise,
                                                   -noise_limit,
                                                   noise_limit))
The noisy image is just the sum of the input image and the adversarial noise.

x_noisy_image = x_image + x_noise
When adding the noise to the input image, the result may overflow the boundaries for valid image pixels, so we clip/limit the noisy image to ensure its pixel-values are between 0 and 1.

x_noisy_image = tf.clip_by_value(x_noisy_image, 0.0, 1.0)
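A NumPy sketch of what this clipping does, using made-up pixel and noise values rather than the graph tensors:

```python
import numpy as np

# Hypothetical pixel values and adversarial noise.
image = np.array([0.0, 0.5, 0.9])
noise = np.array([-0.35, 0.35, 0.35])

# Adding noise can leave the valid [0, 1] pixel range ...
noisy = image + noise

# ... so clip back into range, like tf.clip_by_value does.
noisy = np.clip(noisy, 0.0, 1.0)
print(noisy)  # [0.   0.85 1.  ]
```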
Convolutional Neural Network
We will use PrettyTensor to construct the convolutional neural network. First we need to wrap the tensor for the noisy image in a PrettyTensor object, which provides functions that construct the neural network.

x_pretty = pt.wrap(x_noisy_image)
Now that we have wrapped the input image in a PrettyTensor object, we can add the convolutional and fully-connected layers in just a few lines of code.

with pt.defaults_scope(activation_fn=tf.nn.relu):
    y_pred, loss = x_pretty.\
        conv2d(kernel=5, depth=16, name='layer_conv1').\
        max_pool(kernel=2, stride=2).\
        conv2d(kernel=5, depth=36, name='layer_conv2').\
        max_pool(kernel=2, stride=2).\
        flatten().\
        fully_connected(size=128, name='layer_fc1').\
        softmax_classifier(num_classes=num_classes, labels=y_true)
Note that pt.defaults_scope(activation_fn=tf.nn.relu) in the with-block makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the block, so Rectified Linear Units (ReLU) are used for all of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
Optimizer for normal training
This is a list of the variables of the neural network that will be trained during the normal optimization procedure. Note that 'x_noise:0' is not in the list, so the adversarial noise is not optimized in this procedure.

[var.name for var in tf.trainable_variables()]
['layer_conv1/weights:0',
'layer_conv1/bias:0',
'layer_conv2/weights:0',
'layer_conv2/bias:0',
'layer_fc1/weights:0',
'layer_fc1/bias:0',
'fully_connected/weights:0',
'fully_connected/bias:0']
The optimization of these variables of the neural network is done with the Adam-optimizer, using the loss-measure that was returned from the PrettyTensor construction of the neural network above.
Note that optimization is not performed at this point. In fact, nothing is calculated at all; we just add the optimizer-object to the TensorFlow graph for later execution.

optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
Optimizer for adversarial noise
Get the list of variables that must be optimized in the second procedure for the adversarial noise.

adversary_variables = tf.get_collection(ADVERSARY_VARIABLES)
Show the list of variable-names. There is only one element, which is the adversarial noise variable that we created above.

[var.name for var in adversary_variables]
['x_noise:0']
We will combine the normal loss-function with the so-called L2-loss for the noise. This should give the minimum adversarial noise that achieves the best classification accuracy.
The L2-loss is scaled by a weight that is typically set close to zero.

l2_loss_noise = noise_l2_weight * tf.nn.l2_loss(x_noise)
Combine the normal loss-function with the L2-loss for the adversarial noise.

loss_adversary = loss + l2_loss_noise
We can now create the optimizer for the adversarial noise. Because this optimizer is not supposed to update all the variables of the neural network, we must give it a list of the variables that we want updated, which is the variable for the adversarial noise. Note that the learning-rate is much greater than for the normal optimizer above.

optimizer_adversary = tf.train.AdamOptimizer(learning_rate=1e-2).minimize(loss_adversary, var_list=adversary_variables)
We have now created two optimizers for the neural network: one for the variables of the neural network, and one for the single variable with the adversarial noise.
Performance Measures
We need a few more operations in the TensorFlow graph so we can show the progress to the user during optimization.
First we calculate the predicted class-number from the output of the neural network y_pred, which is a vector with 10 elements. The class-number is the index of the largest element.

y_pred_cls = tf.argmax(y_pred, dimension=1)
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.

correct_prediction = tf.equal(y_pred_cls, y_true_cls)
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
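The same computation can be sketched in plain NumPy, with a hypothetical vector of correct/incorrect predictions:

```python
import numpy as np

# Hypothetical comparison of predicted vs. true classes:
# 3 out of 4 images were classified correctly.
correct_prediction = np.array([True, True, False, True])

# Cast booleans to floats (False -> 0.0, True -> 1.0) and average.
accuracy = np.mean(correct_prediction.astype(np.float32))
print(accuracy)  # 0.75
```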
TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.

session = tf.Session()
Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.

session.run(tf.global_variables_initializer())
Helper-function to initialize / reset the adversarial noise to zero.

def init_noise():
    session.run(tf.variables_initializer([x_noise]))
Call the function to initialize the adversarial noise.

init_noise()
Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It would take a long time to calculate the gradient of the model using all of these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because it runs out of RAM, then you may try to lower this number, but you may then need to perform more optimization iterations.

train_batch_size = 64
The following function performs a number of optimization iterations so as to gradually improve the variables of the neural network. In each iteration, a new batch of data is selected from the training-set, and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
This function is similar to those in the previous tutorials, except that it now takes an argument for the adversarial target-class. When this argument is set to an integer, it is used instead of the true class-numbers for the training-set. The adversarial optimizer is also used instead of the normal optimizer, and after each optimization step the noise is limited/clipped to the allowed range. This optimizes the adversarial noise while ignoring the other variables of the neural network.
def optimize(num_iterations, adversary_target_cls=None):
    # Start-time used for printing time-usage below.
    start_time = time.time()

    for i in range(num_iterations):

        # Get a batch of training examples.
        # x_batch now holds a batch of images and
        # y_true_batch are the true labels for those images.
        x_batch, y_true_batch = data.train.next_batch(train_batch_size)

        # If we are searching for the adversarial noise, then
        # use the adversarial target-class instead.
        if adversary_target_cls is not None:
            # The class-labels are One-Hot encoded.

            # Set all the class-labels to zero.
            y_true_batch = np.zeros_like(y_true_batch)

            # Set the element for the adversarial target-class to 1.
            y_true_batch[:, adversary_target_cls] = 1.0

        # Put the batch into a dict with the proper names
        # for placeholder variables in the TensorFlow graph.
        feed_dict_train = {x: x_batch,
                           y_true: y_true_batch}

        # If doing normal optimization of the neural network.
        if adversary_target_cls is None:
            # Run the optimizer using this batch of training data.
            # TensorFlow assigns the variables in feed_dict_train
            # to the placeholder variables and then runs the optimizer.
            session.run(optimizer, feed_dict=feed_dict_train)
        else:
            # Run the adversarial optimizer instead.
            # Note that we have 'faked' the class above to be
            # the adversarial target-class instead of the true class.
            session.run(optimizer_adversary, feed_dict=feed_dict_train)

            # Clip / limit the adversarial noise. This executes
            # another TensorFlow operation. It cannot be executed
            # in the same session.run() as the optimizer, because
            # it may run in parallel so the execution order is not
            # guaranteed. We need the clip to run after the optimizer.
            session.run(x_noise_clip)

        # Print status every 100 iterations.
        if (i % 100 == 0) or (i == num_iterations - 1):
            # Calculate the accuracy on the training-set.
            acc = session.run(accuracy, feed_dict=feed_dict_train)

            # Message for printing.
            msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"

            # Print it.
            print(msg.format(i, acc))

    # Ending time.
    end_time = time.time()

    # Difference between start and end-times.
    time_dif = end_time - start_time

    # Print the time-usage.
    print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
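The label-faking step inside the function above can be sketched on its own, with a tiny hypothetical batch of two labels in place of the real MNIST batch:

```python
import numpy as np

num_classes = 10
adversary_target_cls = 3

# Hypothetical One-Hot labels for a batch of 2 images (classes 7 and 1).
y_true_batch = np.zeros((2, num_classes))
y_true_batch[0, 7] = 1.0
y_true_batch[1, 1] = 1.0

# Replace all labels with the adversarial target-class,
# as done when adversary_target_cls is not None.
y_fake_batch = np.zeros_like(y_true_batch)
y_fake_batch[:, adversary_target_cls] = 1.0
print(np.argmax(y_fake_batch, axis=1))  # [3 3]
```

Feeding these faked labels to the adversarial optimizer is what drives the noise toward mis-classifying every image as the target-class.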
Helper-functions for getting and plotting the noise
This function gets the adversarial noise from inside the TensorFlow graph.

def get_noise():
    # Run the TensorFlow session to retrieve the contents of
    # the x_noise variable inside the graph.
    noise = session.run(x_noise)

    return np.squeeze(noise)
This function plots the adversarial noise and prints some statistics.

def plot_noise():
    # Get the adversarial noise from inside the TensorFlow graph.
    noise = get_noise()

    # Print statistics.
    print("Noise:")
    print("- Min:", noise.min())
    print("- Max:", noise.max())
    print("- Std:", noise.std())

    # Plot the noise.
    plt.imshow(noise, interpolation='nearest', cmap='seismic',
               vmin=-1.0, vmax=1.0)
Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.

def plot_example_errors(cls_pred, correct):
    # This function is called from print_test_accuracy() below.

    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.

    # correct is a boolean array whether the predicted class
    # is equal to the true class for each image in the test-set.

    # Negate the boolean array.
    incorrect = (correct == False)

    # Get the images from the test-set that have been
    # incorrectly classified.
    images = data.test.images[incorrect]

    # Get the predicted classes for those images.
    cls_pred = cls_pred[incorrect]

    # Get the true classes for those images.
    cls_true = data.test.cls[incorrect]

    # Get the adversarial noise from inside the TensorFlow graph.
    noise = get_noise()

    # Plot the first 9 images.
    plot_images(images=images[0:9],
                cls_true=cls_true[0:9],
                cls_pred=cls_pred[0:9],
                noise=noise)
Helper-function to plot the confusion matrix

def plot_confusion_matrix(cls_pred):
    # This is called from print_test_accuracy() below.

    # cls_pred is an array of the predicted class-number for
    # all images in the test-set.

    # Get the true classifications for the test-set.
    cls_true = data.test.cls

    # Get the confusion matrix using sklearn.
    cm = confusion_matrix(y_true=cls_true,
                          y_pred=cls_pred)

    # Print the confusion matrix as text.
    print(cm)
Helper-function to show performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used by calling the helper-functions above directly from this function, so the classifications do not have to be recalculated by each function.
This function may use a lot of computer memory, which is why the test-set is split into smaller batches. If your computer has little RAM and crashes, you can try lowering the batch-size.
# Split the test-set into smaller batches of this size.
test_batch_size = 256

def print_test_accuracy(show_example_errors=False,
                        show_confusion_matrix=False):

    # Number of images in the test-set.
    num_test = len(data.test.images)

    # Allocate an array for the predicted classes which
    # will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_test, dtype=np.int)

    # Now calculate the predicted classes for the batches.
    # We will just iterate through all the batches.
    # There might be a more clever and Pythonic way of doing this.

    # The starting index for the next batch is denoted i.
    i = 0

    while i < num_test:
        # The ending index for the next batch is denoted j.
        j = min(i + test_batch_size, num_test)

        # Get the images from the test-set between index i and j.
        images = data.test.images[i:j, :]

        # Get the associated labels.
        labels = data.test.labels[i:j, :]

        # Create a feed-dict with these images and labels.
        feed_dict = {x: images,
                     y_true: labels}

        # Calculate the predicted class using TensorFlow.
        cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)

        # Set the start-index for the next batch to the
        # end-index of the current batch.
        i = j

    # Convenience variable for the true class-numbers of the test-set.
    cls_true = data.test.cls

    # Create a boolean array whether each image is correctly classified.
    correct = (cls_true == cls_pred)

    # Calculate the number of correctly classified images.
    # When summing a boolean array, False means 0 and True means 1.
    correct_sum = correct.sum()

    # Classification accuracy is the number of correctly classified
    # images divided by the total number of images in the test-set.
    acc = float(correct_sum) / num_test

    # Print the accuracy.
    msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
    print(msg.format(acc, correct_sum, num_test))

    # Plot some examples of mis-classifications, if desired.
    if show_example_errors:
        print("Example errors:")
        plot_example_errors(cls_pred=cls_pred, correct=correct)

    # Plot the confusion matrix, if desired.
    if show_confusion_matrix:
        print("Confusion Matrix:")
        plot_confusion_matrix(cls_pred=cls_pred)
Normal optimization of neural network
The adversarial noise has no effect at this point, because it was merely initialized to zero above and is not updated during this optimization.

optimize(num_iterations=1000)
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 90.6%
Optimization Iteration: 200, Training Accuracy: 84.4%
Optimization Iteration: 300, Training Accuracy: 84.4%
Optimization Iteration: 400, Training Accuracy: 89.1%
Optimization Iteration: 500, Training Accuracy: 87.5%
Optimization Iteration: 600, Training Accuracy: 93.8%
Optimization Iteration: 700, Training Accuracy: 93.8%
Optimization Iteration: 800, Training Accuracy: 93.8%
Optimization Iteration: 900, Training Accuracy: 96.9%
Optimization Iteration: 999, Training Accuracy: 92.2%
Time usage: 0:00:03
The classification accuracy on the test-set is about 96-97%. (The results vary each time the Python Notebook is run.)

print_test_accuracy(show_example_errors=True)
Accuracy on Test-Set: 96.3% (9633 / 10000)
Example errors:
Find the Adversarial Noise
Before we start optimizing the adversarial noise, we first initialize it to zero. This was already done above, but it is repeated here in case you want to re-run this code with another target-class.

init_noise()
Now run the optimization of the adversarial noise. This uses the adversarial optimizer instead of the normal optimizer, which means that it only optimizes the adversarial noise variable, while ignoring the other variables of the neural network.

optimize(num_iterations=1000, adversary_target_cls=3)
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 200, Training Accuracy: 96.9%
Optimization Iteration: 300, Training Accuracy: 98.4%
Optimization Iteration: 400, Training Accuracy: 95.3%
Optimization Iteration: 500, Training Accuracy: 96.9%
Optimization Iteration: 600, Training Accuracy: 100.0%
Optimization Iteration: 700, Training Accuracy: 98.4%
Optimization Iteration: 800, Training Accuracy: 95.3%
Optimization Iteration: 900, Training Accuracy: 93.8%
Optimization Iteration: 999, Training Accuracy: 100.0%
Time usage: 0:00:03
Now that the adversarial noise has been optimized, it can be shown in an image. Red pixels show positive noise-values and blue pixels show negative noise-values. This noise pattern is added to every input image: the positive (red) noise-values make the pixels darker and the negative (blue) noise-values make the pixels lighter. Examples are shown below.

plot_noise()
Noise:
- Min: -0.35
- Max: 0.35
- Std: 0.195455
The classification accuracy is typically between 10-15% when this noise is added to all the images in the test-set, depending on the chosen target-class. We can also see from the confusion matrix that most of the images in the test-set are now classified as the desired target-class, although some target-classes require more adversarial noise than others.
So we have found adversarial noise that makes the neural network mis-classify almost all of the images in the test-set as the desired target-class.
We can also plot some examples of mis-classified images with the adversarial noise added. The noise is clearly visible, but the digits are still easily identified by the human eye.

print_test_accuracy(show_example_errors=True,
                    show_confusion_matrix=True)
Accuracy on Test-Set: 13.2% (1323 / 10000)
Example errors:
Confusion Matrix:
[[ 85 0 0 895 0 0 0 0 0 0]
[ 0 0 0 1135 0 0 0 0 0 0]
[ 0 0 46 986 0 0 0 0 0 0]
[ 0 0 0 1010 0 0 0 0 0 0]
[ 0 0 0 959 20 0 0 0 3 0]
[ 0 0 0 847 0 45 0 0 0 0]
[ 0 0 0 914 0 1 42 0 1 0]
[ 0 0 0 977 0 0 0 51 0 0]
[ 0 0 0 952 0 0 0 0 22 0]
[ 0 0 1 1006 0 0 0 0 0 2]]
Adversarial Noise for All Target-Classes
This is a helper-function for finding the adversarial noise for all target-classes. The function loops over the class-numbers from 0 to 9 and runs the optimization above. The results are then stored in an array.
def find_all_noise(num_iterations=1000):
    # Adversarial noise for all target-classes.
    all_noise = []

    # For each target-class.
    for i in range(num_classes):
        print("Finding adversarial noise for target-class:", i)

        # Reset the adversarial noise to zero.
        init_noise()

        # Optimize the adversarial noise.
        optimize(num_iterations=num_iterations,
                 adversary_target_cls=i)

        # Get the adversarial noise from inside the TensorFlow graph.
        noise = get_noise()

        # Append the noise to the array.
        all_noise.append(noise)

        # Print newline.
        print()

    return all_noise

all_noise = find_all_noise(num_iterations=300)
Finding adversarial noise for target-class: 0
Optimization Iteration: 0, Training Accuracy: 9.4%
Optimization Iteration: 100, Training Accuracy: 90.6%
Optimization Iteration: 200, Training Accuracy: 92.2%
Optimization Iteration: 299, Training Accuracy: 93.8%
Time usage: 0:00:01
Finding adversarial noise for target-class: 1
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 62.5%
Optimization Iteration: 200, Training Accuracy: 62.5%
Optimization Iteration: 299, Training Accuracy: 75.0%
Time usage: 0:00:01
Finding adversarial noise for target-class: 2
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 200, Training Accuracy: 95.3%
Optimization Iteration: 299, Training Accuracy: 96.9%
Time usage: 0:00:01
Finding adversarial noise for target-class: 3
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 200, Training Accuracy: 96.9%
Optimization Iteration: 299, Training Accuracy: 98.4%
Time usage: 0:00:01
Finding adversarial noise for target-class: 4
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 81.2%
Optimization Iteration: 200, Training Accuracy: 82.8%
Optimization Iteration: 299, Training Accuracy: 82.8%
Time usage: 0:00:01
Finding adversarial noise for target-class: 5
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 200, Training Accuracy: 96.9%
Optimization Iteration: 299, Training Accuracy: 98.4%
Time usage: 0:00:01
Finding adversarial noise for target-class: 6
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 200, Training Accuracy: 92.2%
Optimization Iteration: 299, Training Accuracy: 96.9%
Time usage: 0:00:01
Finding adversarial noise for target-class: 7
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 200, Training Accuracy: 93.8%
Optimization Iteration: 299, Training Accuracy: 92.2%
Time usage: 0:00:01
Finding adversarial noise for target-class: 8
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 200, Training Accuracy: 93.8%
Optimization Iteration: 299, Training Accuracy: 96.9%
Time usage: 0:00:01
Finding adversarial noise for target-class: 9
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 84.4%
Optimization Iteration: 200, Training Accuracy: 87.5%
Optimization Iteration: 299, Training Accuracy: 90.6%
Time usage: 0:00:01
Plot the adversarial noise for all target-classes
This helper-function plots the adversarial noise for all target-classes (0 to 9) in a grid.

def plot_all_noise(all_noise):
    # Create figure with 10 sub-plots.
    fig, axes = plt.subplots(2, 5)
    fig.subplots_adjust(hspace=0.2, wspace=0.1)

    # For each sub-plot.
    for i, ax in enumerate(axes.flat):
        # Get the adversarial noise for the i'th target-class.
        noise = all_noise[i]

        # Plot the noise.
        ax.imshow(noise,
                  cmap='seismic', interpolation='nearest',
                  vmin=-1.0, vmax=1.0)

        # Show the classes as the label on the x-axis.
        ax.set_xlabel(i)

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()

plot_all_noise(all_noise)
Red pixels show positive noise-values, and blue pixels show negative noise-values.
In some of these noise-images you can see traces of the digits. For example, the noise for target-class 0 shows a red circle surrounded by blue. This means that a bit of noise in the shape of a circle is added to the image, while the other pixels are suppressed. This is enough for most of the images in the MNIST data-set to be mis-classified as 0. Another example is the noise for 3, which also shows traces of the digit 3 in the red pixels of the image. But the noise for the other classes is less obvious.
Immunity to Adversarial Noise
We will now try to make the neural network immune to adversarial noise. We do this by re-training the neural network to ignore the adversarial noise. This process can be repeated a number of times.
Helper-function to make a neural network immune to noise
This is the helper-function for making the neural network immune to adversarial noise. It first runs the optimization to find the adversarial noise. It then runs the normal optimization to make the neural network immune to that noise.
def make_immune(target_cls, num_iterations_adversary=500,
                num_iterations_immune=200):

    print("Target-class:", target_cls)
    print("Finding adversarial noise ...")

    # Find the adversarial noise.
    optimize(num_iterations=num_iterations_adversary,
             adversary_target_cls=target_cls)

    # Newline.
    print()

    # Print classification accuracy.
    print_test_accuracy(show_example_errors=False,
                        show_confusion_matrix=False)

    # Newline.
    print()

    print("Making the neural network immune to the noise ...")

    # Try and make the neural network immune to this noise.
    # Note that the adversarial noise has not been reset to zero
    # so the x_noise variable still holds the noise.
    # So we are training the neural network to ignore the noise.
    optimize(num_iterations=num_iterations_immune)

    # Newline.
    print()

    # Print classification accuracy.
    print_test_accuracy(show_example_errors=False,
                        show_confusion_matrix=False)
Make immune to noise for target-class 3
First try to make the neural network immune to the adversarial noise for target-class 3.
We first find the adversarial noise that causes the neural network to mis-classify most of the images in the test-set. We then run the normal optimization, which fine-tunes the variables of the network to ignore the noise, so the classification accuracy reaches 95-97% again.

make_immune(target_cls=3)
Target-class: 3
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 3.1%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 200, Training Accuracy: 93.8%
Optimization Iteration: 300, Training Accuracy: 96.9%
Optimization Iteration: 400, Training Accuracy: 96.9%
Optimization Iteration: 499, Training Accuracy: 96.9%
Time usage: 0:00:02
Accuracy on Test-Set: 14.4% (1443 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 42.2%
Optimization Iteration: 100, Training Accuracy: 90.6%
Optimization Iteration: 199, Training Accuracy: 89.1%
Time usage: 0:00:01
Accuracy on Test-Set: 95.3% (9529 / 10000)
Now try to run it again. It is now harder to find the adversarial noise for target-class 3. The neural network seems to have become somewhat immune to adversarial noise.

make_immune(target_cls=3)
Target-class: 3
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 32.8%
Optimization Iteration: 200, Training Accuracy: 32.8%
Optimization Iteration: 300, Training Accuracy: 29.7%
Optimization Iteration: 400, Training Accuracy: 34.4%
Optimization Iteration: 499, Training Accuracy: 26.6%
Time usage: 0:00:02
Accuracy on Test-Set: 72.1% (7207 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 75.0%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 92.2%
Time usage: 0:00:01
Accuracy on Test-Set: 95.2% (9519 / 10000)
Make immune to noise for all target-classes
Now try to make the neural network immune to the noise for all target-classes. Unfortunately, this does not seem to work so well.

for i in range(10):
    make_immune(target_cls=i)

    # Print newline.
    print()
Target-class: 0
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 73.4%
Optimization Iteration: 200, Training Accuracy: 75.0%
Optimization Iteration: 300, Training Accuracy: 85.9%
Optimization Iteration: 400, Training Accuracy: 81.2%
Optimization Iteration: 499, Training Accuracy: 90.6%
Time usage: 0:00:02
Accuracy on Test-Set: 23.3% (2326 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 34.4%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 95.6% (9559 / 10000)
Target-class: 1
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 57.8%
Optimization Iteration: 200, Training Accuracy: 62.5%
Optimization Iteration: 300, Training Accuracy: 62.5%
Optimization Iteration: 400, Training Accuracy: 67.2%
Optimization Iteration: 499, Training Accuracy: 67.2%
Time usage: 0:00:02
Accuracy on Test-Set: 42.2% (4218 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 59.4%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 95.5% (9555 / 10000)
Target-class: 2
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 43.8%
Optimization Iteration: 200, Training Accuracy: 57.8%
Optimization Iteration: 300, Training Accuracy: 70.3%
Optimization Iteration: 400, Training Accuracy: 68.8%
Optimization Iteration: 499, Training Accuracy: 71.9%
Time usage: 0:00:02
Accuracy on Test-Set: 46.4% (4639 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 59.4%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 199, Training Accuracy: 92.2%
Time usage: 0:00:01
Accuracy on Test-Set: 95.5% (9545 / 10000)
Target-class: 3
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 48.4%
Optimization Iteration: 200, Training Accuracy: 46.9%
Optimization Iteration: 300, Training Accuracy: 53.1%
Optimization Iteration: 400, Training Accuracy: 50.0%
Optimization Iteration: 499, Training Accuracy: 48.4%
Time usage: 0:00:02
Accuracy on Test-Set: 56.5% (5648 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 54.7%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 96.9%
Time usage: 0:00:01
Accuracy on Test-Set: 95.8% (9581 / 10000)
Target-class: 4
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 9.4%
Optimization Iteration: 100, Training Accuracy: 85.9%
Optimization Iteration: 200, Training Accuracy: 85.9%
Optimization Iteration: 300, Training Accuracy: 87.5%
Optimization Iteration: 400, Training Accuracy: 95.3%
Optimization Iteration: 499, Training Accuracy: 92.2%
Time usage: 0:00:02
Accuracy on Test-Set: 15.6% (1557 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 18.8%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 96.9%
Time usage: 0:00:01
Accuracy on Test-Set: 95.6% (9557 / 10000)
Target-class: 5
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 18.8%
Optimization Iteration: 100, Training Accuracy: 71.9%
Optimization Iteration: 200, Training Accuracy: 90.6%
Optimization Iteration: 300, Training Accuracy: 95.3%
Optimization Iteration: 400, Training Accuracy: 89.1%
Optimization Iteration: 499, Training Accuracy: 92.2%
Time usage: 0:00:02
Accuracy on Test-Set: 17.4% (1745 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 15.6%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.0% (9601 / 10000)
Target-class: 6
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 10.9%
Optimization Iteration: 100, Training Accuracy: 81.2%
Optimization Iteration: 200, Training Accuracy: 93.8%
Optimization Iteration: 300, Training Accuracy: 92.2%
Optimization Iteration: 400, Training Accuracy: 89.1%
Optimization Iteration: 499, Training Accuracy: 92.2%
Time usage: 0:00:02
Accuracy on Test-Set: 17.6% (1762 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 20.3%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 95.7% (9570 / 10000)
Target-class: 7
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 14.1%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 200, Training Accuracy: 98.4%
Optimization Iteration: 300, Training Accuracy: 100.0%
Optimization Iteration: 400, Training Accuracy: 96.9%
Optimization Iteration: 499, Training Accuracy: 100.0%
Time usage: 0:00:02
Accuracy on Test-Set: 12.8% (1281 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 98.4%
Time usage: 0:00:01
Accuracy on Test-Set: 95.9% (9587 / 10000)
Target-class: 8
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 64.1%
Optimization Iteration: 200, Training Accuracy: 81.2%
Optimization Iteration: 300, Training Accuracy: 71.9%
Optimization Iteration: 400, Training Accuracy: 78.1%
Optimization Iteration: 499, Training Accuracy: 84.4%
Time usage: 0:00:02
Accuracy on Test-Set: 24.9% (2493 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 25.0%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 96.9%
Time usage: 0:00:01
Accuracy on Test-Set: 96.0% (9601 / 10000)
Target-class: 9
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 9.4%
Optimization Iteration: 100, Training Accuracy: 48.4%
Optimization Iteration: 200, Training Accuracy: 50.0%
Optimization Iteration: 300, Training Accuracy: 53.1%
Optimization Iteration: 400, Training Accuracy: 64.1%
Optimization Iteration: 499, Training Accuracy: 65.6%
Time usage: 0:00:02
Accuracy on Test-Set: 45.5% (4546 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 51.6%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.2% (9615 / 10000)
Immunity to All Target Classes (Run Twice)
Now try running the procedure twice, to make the neural network immune to the noise for all target classes. Unfortunately, the results are not much better.
Making the network immune to one adversarial target class appears to remove its immunity to another target class.
for i in range(10):
    make_immune(target_cls=i)

    # Print newline.
    print()

    make_immune(target_cls=i)

    # Print newline.
    print()
Target-class: 0
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 53.1%
Optimization Iteration: 200, Training Accuracy: 73.4%
Optimization Iteration: 300, Training Accuracy: 79.7%
Optimization Iteration: 400, Training Accuracy: 84.4%
Optimization Iteration: 499, Training Accuracy: 95.3%
Time usage: 0:00:02
Accuracy on Test-Set: 29.2% (2921 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 29.7%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.2% (9619 / 10000)
Target-class: 0
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 1.6%
Optimization Iteration: 100, Training Accuracy: 12.5%
Optimization Iteration: 200, Training Accuracy: 7.8%
Optimization Iteration: 300, Training Accuracy: 18.8%
Optimization Iteration: 400, Training Accuracy: 9.4%
Optimization Iteration: 499, Training Accuracy: 9.4%
Time usage: 0:00:02
Accuracy on Test-Set: 94.4% (9437 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 89.1%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 93.8%
Time usage: 0:00:01
Accuracy on Test-Set: 96.4% (9635 / 10000)
Target-class: 1
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 42.2%
Optimization Iteration: 200, Training Accuracy: 60.9%
Optimization Iteration: 300, Training Accuracy: 75.0%
Optimization Iteration: 400, Training Accuracy: 70.3%
Optimization Iteration: 499, Training Accuracy: 85.9%
Time usage: 0:00:02
Accuracy on Test-Set: 28.7% (2875 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 39.1%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.4% (9643 / 10000)
Target-class: 1
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 15.6%
Optimization Iteration: 200, Training Accuracy: 18.8%
Optimization Iteration: 300, Training Accuracy: 12.5%
Optimization Iteration: 400, Training Accuracy: 9.4%
Optimization Iteration: 499, Training Accuracy: 12.5%
Time usage: 0:00:02
Accuracy on Test-Set: 94.3% (9428 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 95.3%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 92.2%
Time usage: 0:00:01
Accuracy on Test-Set: 96.9% (9685 / 10000)
Target-class: 2
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 60.9%
Optimization Iteration: 200, Training Accuracy: 64.1%
Optimization Iteration: 300, Training Accuracy: 71.9%
Optimization Iteration: 400, Training Accuracy: 75.0%
Optimization Iteration: 499, Training Accuracy: 82.8%
Time usage: 0:00:02
Accuracy on Test-Set: 34.3% (3427 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 31.2%
Optimization Iteration: 100, Training Accuracy: 100.0%
Optimization Iteration: 199, Training Accuracy: 98.4%
Time usage: 0:00:01
Accuracy on Test-Set: 96.6% (9657 / 10000)
Target-class: 2
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 9.4%
Optimization Iteration: 200, Training Accuracy: 14.1%
Optimization Iteration: 300, Training Accuracy: 10.9%
Optimization Iteration: 400, Training Accuracy: 7.8%
Optimization Iteration: 499, Training Accuracy: 17.2%
Time usage: 0:00:02
Accuracy on Test-Set: 94.3% (9435 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 96.9%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 96.9%
Time usage: 0:00:01
Accuracy on Test-Set: 96.6% (9664 / 10000)
Target-class: 3
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 14.1%
Optimization Iteration: 100, Training Accuracy: 20.3%
Optimization Iteration: 200, Training Accuracy: 40.6%
Optimization Iteration: 300, Training Accuracy: 57.8%
Optimization Iteration: 400, Training Accuracy: 54.7%
Optimization Iteration: 499, Training Accuracy: 64.1%
Time usage: 0:00:02
Accuracy on Test-Set: 48.4% (4837 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 54.7%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 100.0%
Time usage: 0:00:01
Accuracy on Test-Set: 96.5% (9650 / 10000)
Target-class: 3
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 10.9%
Optimization Iteration: 200, Training Accuracy: 17.2%
Optimization Iteration: 300, Training Accuracy: 15.6%
Optimization Iteration: 400, Training Accuracy: 1.6%
Optimization Iteration: 499, Training Accuracy: 9.4%
Time usage: 0:00:02
Accuracy on Test-Set: 95.7% (9570 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 95.3%
Optimization Iteration: 100, Training Accuracy: 90.6%
Optimization Iteration: 199, Training Accuracy: 98.4%
Time usage: 0:00:01
Accuracy on Test-Set: 96.7% (9667 / 10000)
Target-class: 4
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 67.2%
Optimization Iteration: 200, Training Accuracy: 78.1%
Optimization Iteration: 300, Training Accuracy: 79.7%
Optimization Iteration: 400, Training Accuracy: 81.2%
Optimization Iteration: 499, Training Accuracy: 96.9%
Time usage: 0:00:02
Accuracy on Test-Set: 23.7% (2373 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 26.6%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 96.9%
Time usage: 0:00:01
Accuracy on Test-Set: 96.3% (9632 / 10000)
Target-class: 4
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 7.8%
Optimization Iteration: 200, Training Accuracy: 12.5%
Optimization Iteration: 300, Training Accuracy: 15.6%
Optimization Iteration: 400, Training Accuracy: 7.8%
Optimization Iteration: 499, Training Accuracy: 14.1%
Time usage: 0:00:02
Accuracy on Test-Set: 92.0% (9197 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 92.2%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.3% (9632 / 10000)
Target-class: 5
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 57.8%
Optimization Iteration: 200, Training Accuracy: 76.6%
Optimization Iteration: 300, Training Accuracy: 85.9%
Optimization Iteration: 400, Training Accuracy: 89.1%
Optimization Iteration: 499, Training Accuracy: 85.9%
Time usage: 0:00:02
Accuracy on Test-Set: 23.0% (2297 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 28.1%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 98.4%
Time usage: 0:00:01
Accuracy on Test-Set: 96.6% (9663 / 10000)
Target-class: 5
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 10.9%
Optimization Iteration: 200, Training Accuracy: 18.8%
Optimization Iteration: 300, Training Accuracy: 18.8%
Optimization Iteration: 400, Training Accuracy: 20.3%
Optimization Iteration: 499, Training Accuracy: 21.9%
Time usage: 0:00:02
Accuracy on Test-Set: 88.2% (8824 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 93.8%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 93.8%
Time usage: 0:00:01
Accuracy on Test-Set: 96.7% (9665 / 10000)
Target-class: 6
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 40.6%
Optimization Iteration: 200, Training Accuracy: 53.1%
Optimization Iteration: 300, Training Accuracy: 51.6%
Optimization Iteration: 400, Training Accuracy: 56.2%
Optimization Iteration: 499, Training Accuracy: 62.5%
Time usage: 0:00:02
Accuracy on Test-Set: 44.0% (4400 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 39.1%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 199, Training Accuracy: 93.8%
Time usage: 0:00:01
Accuracy on Test-Set: 96.4% (9642 / 10000)
Target-class: 6
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 17.2%
Optimization Iteration: 200, Training Accuracy: 12.5%
Optimization Iteration: 300, Training Accuracy: 14.1%
Optimization Iteration: 400, Training Accuracy: 20.3%
Optimization Iteration: 499, Training Accuracy: 7.8%
Time usage: 0:00:02
Accuracy on Test-Set: 94.6% (9457 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 93.8%
Optimization Iteration: 100, Training Accuracy: 100.0%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.8% (9682 / 10000)
Target-class: 7
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 65.6%
Optimization Iteration: 200, Training Accuracy: 89.1%
Optimization Iteration: 300, Training Accuracy: 82.8%
Optimization Iteration: 400, Training Accuracy: 85.9%
Optimization Iteration: 499, Training Accuracy: 90.6%
Time usage: 0:00:02
Accuracy on Test-Set: 18.1% (1809 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 23.4%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 93.8%
Time usage: 0:00:01
Accuracy on Test-Set: 96.8% (9682 / 10000)
Target-class: 7
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 10.9%
Optimization Iteration: 200, Training Accuracy: 18.8%
Optimization Iteration: 300, Training Accuracy: 18.8%
Optimization Iteration: 400, Training Accuracy: 28.1%
Optimization Iteration: 499, Training Accuracy: 18.8%
Time usage: 0:00:02
Accuracy on Test-Set: 84.1% (8412 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 84.4%
Optimization Iteration: 100, Training Accuracy: 100.0%
Optimization Iteration: 199, Training Accuracy: 100.0%
Time usage: 0:00:01
Accuracy on Test-Set: 97.0% (9699 / 10000)
Target-class: 8
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 48.4%
Optimization Iteration: 200, Training Accuracy: 46.9%
Optimization Iteration: 300, Training Accuracy: 71.9%
Optimization Iteration: 400, Training Accuracy: 70.3%
Optimization Iteration: 499, Training Accuracy: 75.0%
Time usage: 0:00:02
Accuracy on Test-Set: 36.8% (3678 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 48.4%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 199, Training Accuracy: 93.8%
Time usage: 0:00:01
Accuracy on Test-Set: 97.0% (9699 / 10000)
Target-class: 8
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 14.1%
Optimization Iteration: 200, Training Accuracy: 12.5%
Optimization Iteration: 300, Training Accuracy: 7.8%
Optimization Iteration: 400, Training Accuracy: 4.7%
Optimization Iteration: 499, Training Accuracy: 9.4%
Time usage: 0:00:02
Accuracy on Test-Set: 96.2% (9625 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 96.9%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 97.2% (9720 / 10000)
Target-class: 9
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 9.4%
Optimization Iteration: 100, Training Accuracy: 23.4%
Optimization Iteration: 200, Training Accuracy: 43.8%
Optimization Iteration: 300, Training Accuracy: 37.5%
Optimization Iteration: 400, Training Accuracy: 45.3%
Optimization Iteration: 499, Training Accuracy: 39.1%
Time usage: 0:00:02
Accuracy on Test-Set: 64.9% (6494 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 67.2%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 98.4%
Time usage: 0:00:01
Accuracy on Test-Set: 97.5% (9746 / 10000)
Target-class: 9
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 9.4%
Optimization Iteration: 100, Training Accuracy: 7.8%
Optimization Iteration: 200, Training Accuracy: 10.9%
Optimization Iteration: 300, Training Accuracy: 15.6%
Optimization Iteration: 400, Training Accuracy: 12.5%
Optimization Iteration: 499, Training Accuracy: 4.7%
Time usage: 0:00:02
Accuracy on Test-Set: 97.1% (9709 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 98.4%
Optimization Iteration: 100, Training Accuracy: 100.0%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 97.7% (9768 / 10000)
Plot the Adversarial Noise
We have now performed many optimizations of both the neural network and the adversarial noise. Let us see what the adversarial noise looks like.
plot_noise()
Noise:
- Min: -0.35
- Max: 0.35
- Std: 0.270488
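The min/max values of ±0.35 correspond to the noise_limit used earlier in the tutorial: after each optimization step, the adversarial noise is kept within the interval [-noise_limit, noise_limit]. A minimal NumPy sketch of this clipping idea (the noise values here are made up for illustration):

```python
import numpy as np

# Assumed limit, matching the Min/Max printed above.
noise_limit = 0.35

# Hypothetical noise values after a gradient step, before clipping.
noise = np.array([-0.9, -0.2, 0.0, 0.5, 1.2])

# Clip the noise so it stays within [-noise_limit, noise_limit].
clipped = np.clip(noise, -noise_limit, noise_limit)
print(clipped)  # -> [-0.35 -0.2   0.    0.35  0.35]
```

In the actual TensorFlow graph this would be done with the equivalent TensorFlow clipping operation on the noise variable, but the arithmetic is the same.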
Interestingly, the neural network now has a higher classification accuracy on these noisy images than it had on the clean images before the optimization.
print_test_accuracy(show_example_errors=True,
                    show_confusion_matrix=True)
Accuracy on Test-Set: 97.7% (9768 / 10000)
Example errors:
Confusion Matrix:
[[ 972 0 1 0 0 0 2 1 3 1]
[ 0 1119 4 0 0 2 2 0 8 0]
[ 3 0 1006 9 1 1 1 5 4 2]
[ 1 0 1 997 0 5 0 4 2 0]
[ 0 1 3 0 955 0 3 1 2 17]
[ 1 0 0 9 0 876 3 0 2 1]
[ 6 4 0 0 3 6 934 0 5 0]
[ 2 4 18 3 1 0 0 985 2 13]
[ 4 0 4 3 4 1 1 3 950 4]
[ 6 6 0 7 4 5 0 4 3 974]]
Performance on Clean Images
Now reset the adversarial noise to zero and see how the neural network performs on clean images.
init_noise()
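Conceptually, init_noise resets the adversarial-noise variable to all zeros, so the images pass through unchanged (the noisy input is the image plus the noise). A hedged NumPy sketch of that idea, with stand-in arrays rather than the tutorial's actual TensorFlow variable:

```python
import numpy as np

# Pretend this is the noise variable after the adversarial optimization.
noise = np.random.uniform(-0.35, 0.35, size=(28, 28))

# What init_noise does conceptually: reset the noise to zero.
noise = np.zeros_like(noise)

# With zero noise, a (stand-in) MNIST image is unchanged by the addition.
image = np.random.rand(28, 28)
print(np.array_equal(image + noise, image))  # -> True
```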
The neural network now performs slightly worse on clean images than it did on the noisy images.
print_test_accuracy(show_example_errors=True,
                    show_confusion_matrix=True)
Accuracy on Test-Set: 92.2% (9222 / 10000)
Example errors:
Confusion Matrix:
[[ 970 0 1 0 0 1 8 0 0 0]
[ 0 1121 5 0 0 0 9 0 0 0]
[ 2 1 1028 0 0 0 1 0 0 0]
[ 1 0 27 964 0 13 2 2 1 0]
[ 0 2 3 0 957 0 20 0 0 0]
[ 3 0 2 2 0 875 10 0 0 0]
[ 4 1 0 0 1 1 951 0 0 0]
[ 10 21 61 3 14 3 0 913 3 0]
[ 29 2 91 7 7 26 70 1 741 0]
[ 20 18 10 12 150 65 11 12 9 702]]
Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Discussion
The experiments above show that we could make the neural network immune to adversarial noise for a single target class, making it impossible to find adversarial noise that causes misclassification as that target class. However, it was apparently not possible to make the network immune to all target classes at the same time. Perhaps this could be achieved in some other way.
One suggestion would be to interleave the immunity training for the different target classes, rather than fully optimizing each target class in turn. This should be possible with small modifications to the code above.
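The interleaving idea can be sketched by comparing the order in which target classes would be visited. This is only an illustration of the schedule, not the tutorial's code: make_immune is not called here, num_passes is an assumed value, and the actual modification would also need make_immune to accept a reduced number of iterations per call.

```python
def interleaved_schedule(num_classes=10, num_passes=5):
    """Visit every target class briefly in each of several short passes."""
    return [cls for _ in range(num_passes) for cls in range(num_classes)]

def sequential_schedule(num_classes=10, repeats=2):
    """The order used by the loop above: each class fully optimized twice."""
    return [cls for cls in range(num_classes) for _ in range(repeats)]

# Interleaving cycles through all classes before revisiting any of them,
# so no single class gets to completely overwrite the others' immunity.
print(interleaved_schedule()[:12])  # -> [0, 1, 2, ..., 9, 0, 1]
print(sequential_schedule()[:6])    # -> [0, 0, 1, 1, 2, 2]
```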
Another suggestion would be a two-level setup with 11 neural networks in total. A first-level network classifies the input image; this network is not immune to adversarial noise. A second-level network is then selected based on the first-level prediction, where each second-level network is immune to adversarial noise for its own target class. An adversarial example might fool the first-level network, but the chosen second-level network would be immune to noise targeting that particular class.
This might work when the number of classes is small, but it becomes infeasible for larger numbers. ImageNet, for example, has 1000 classes, so we would need to train 1000 second-level networks, which is impractical.
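The dispatch logic of the two-level scheme can be sketched as follows. This is purely hypothetical: plain functions stand in for trained networks, and none of these names exist in the tutorial's code.

```python
def two_level_predict(image, first_level, immune_nets):
    """First-level network picks a class; the second-level network that is
    immune to noise targeting that class makes the final prediction."""
    cls = first_level(image)        # may be fooled by adversarial noise
    return immune_nets[cls](image)  # immune to noise targeting `cls`

# Toy stand-ins for MNIST's 10 classes: the adversary has forced the
# first-level prediction to class 3, but the immune second-level network
# still recognizes the true digit 7.
first = lambda img: 3
immune_nets = {c: (lambda img, c=c: 7) for c in range(10)}
print(two_level_predict(None, first, immune_nets))  # -> 7
```

The cost of this design is exactly what the paragraph above notes: one immune network per class, which scales poorly with the number of classes.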
Conclusion
This tutorial showed how to find adversarial noise for the hand-written digits of the MNIST dataset. A single noise pattern was found for each target class, which caused almost all input images to be misclassified as that target class.
The noise patterns for the MNIST dataset were clearly visible to the human eye. But larger neural networks working on higher-resolution images, such as the ImageNet dataset, might find much more subtle noise patterns.
This tutorial also experimented with making the neural network immune to adversarial noise. This worked for a single target class, but the methods tested could not make the network immune to all adversarial target classes simultaneously.
Exercises
Below are a few suggested exercises that may help improve your TensorFlow skills. Practical hands-on experience is important for learning how to use TensorFlow properly.
You may want to back up this Notebook before making any changes to it.
- Try using fewer or more optimization iterations for the adversarial noise.
- Tutorial #11 needed fewer than 30 iterations to find adversarial noise. Why does this tutorial need so many more iterations?
- Try different settings for noise_limit and noise_l2_weight. How does this affect the adversarial noise and the classification accuracy?
- Try finding adversarial noise for target class 1. Does it also work for target class 3?
- Can you find a better way to make the neural network immune to adversarial noise?
- Can the neural network be made immune to adversarial noise generated for individual images, as was done in Tutorial #11?
- Try building another neural network with a different configuration. Does adversarial noise for one network also work on the other?
- Use the CIFAR-10 dataset instead of MNIST. You can reuse some of the code from Tutorial #06.
- How would you find adversarial noise for the Inception model and the ImageNet dataset?
- Explain to a friend how the program works.