Deep Learning (DeepLearning.ai) Course Series Notes: 14. Introduction to TensorFlow

Posted by WangZhe0912 on 2017-09-17

Note: some images were lost and the code formatting was garbled when this post was republished.

For the full content, please visit the original version:

http://www.missshi.cn/api/view/blog/59bbcb46e519f50d04000206

P.S. The first visit may take about 8 seconds while a large JS file loads; please be patient.

 

In the previous posts, we always implemented our neural networks with numpy.

For large neural network models, however, that approach is very time-consuming.

Fortunately, there are now many mature deep-learning frameworks that can help us. This post covers one of them: Google's TensorFlow.

 

When working with TensorFlow, the usual steps are:

1. Initialize the variables

2. Start a Session

3. Train the algorithm

4. Complete the neural network (e.g., make predictions)

 

The TensorFlow library

First, let's look at the imports and helper functions we will use:

    import math
    import numpy as np
    import h5py
    import matplotlib.pyplot as plt
    import tensorflow as tf
    from tensorflow.python.framework import ops
    from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

    %matplotlib inline
    np.random.seed(1)

where the helper functions in tf_utils are defined as follows:

    def load_dataset():
        train_dataset = h5py.File('datasets/train_signs.h5', "r")
        train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
        train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels

        test_dataset = h5py.File('datasets/test_signs.h5', "r")
        test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
        test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels

        classes = np.array(test_dataset["list_classes"][:]) # the list of classes

        train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
        test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

        return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

    def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
        """
        Creates a list of random minibatches from (X, Y)

        Arguments:
        X -- input data, of shape (input size, number of examples)
        Y -- true "label" vector, of shape (1, number of examples)
        mini_batch_size -- size of the mini-batches, integer
        seed -- this is only for the purpose of grading, so that your "random" minibatches are the same as ours.

        Returns:
        mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
        """

        m = X.shape[1]                  # number of training examples
        mini_batches = []
        np.random.seed(seed)

        # Step 1: Shuffle (X, Y)
        permutation = list(np.random.permutation(m))
        shuffled_X = X[:, permutation]
        shuffled_Y = Y[:, permutation].reshape((Y.shape[0], m))

        # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
        num_complete_minibatches = math.floor(m / mini_batch_size) # number of mini batches of size mini_batch_size in your partitioning
        for k in range(0, num_complete_minibatches):
            mini_batch_X = shuffled_X[:, k * mini_batch_size : k * mini_batch_size + mini_batch_size]
            mini_batch_Y = shuffled_Y[:, k * mini_batch_size : k * mini_batch_size + mini_batch_size]
            mini_batch = (mini_batch_X, mini_batch_Y)
            mini_batches.append(mini_batch)

        # Handling the end case (last mini-batch < mini_batch_size)
        if m % mini_batch_size != 0:
            mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
            mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
            mini_batch = (mini_batch_X, mini_batch_Y)
            mini_batches.append(mini_batch)

        return mini_batches

    def convert_to_one_hot(Y, C):
        Y = np.eye(C)[Y.reshape(-1)].T
        return Y

    def predict(X, parameters):

        W1 = tf.convert_to_tensor(parameters["W1"])
        b1 = tf.convert_to_tensor(parameters["b1"])
        W2 = tf.convert_to_tensor(parameters["W2"])
        b2 = tf.convert_to_tensor(parameters["b2"])
        W3 = tf.convert_to_tensor(parameters["W3"])
        b3 = tf.convert_to_tensor(parameters["b3"])

        params = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2,
                  "W3": W3,
                  "b3": b3}

        x = tf.placeholder("float", [12288, 1])

        # forward_propagation_for_predict is also defined in tf_utils; it mirrors
        # the forward_propagation function shown later in this post.
        z3 = forward_propagation_for_predict(x, params)
        p = tf.argmax(z3)

        sess = tf.Session()
        prediction = sess.run(p, feed_dict = {x: X})

        return prediction

P.S. To help you follow along, we provide the original dataset train_signs.h5.

Visit http://www.missshi.cn/#/books and search for train_signs.h5 to download it. The first visit may be slow (about 10 s) while the JS loads; please be patient.

If you find the site useful, please help spread the word! Please do not share the training set directly in QQ groups or on CSDN.


Now that we have imported the required libraries,

let's start by computing the loss for a single training example:

    y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
    y = tf.constant(39, name='y')                    # Define y. Set to 39

    loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss

    init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                     # the loss variable will be initialized and ready to be computed
    with tf.Session() as session:                    # Create a session and print the output
        session.run(init)                            # Initializes the variables
        print(session.run(loss))                     # Prints the loss
        # 9

A TensorFlow program is typically structured as follows:

1. Create tensors (variables) that are not yet executed/evaluated

2. Define the operations between those tensors

3. Initialize the tensors

4. Create a Session

5. Run the Session; only at this step are the operations defined above actually executed


Let's go through a few more examples to get a feel for this:

    a = tf.constant(2)
    b = tf.constant(10)
    c = tf.multiply(a, b)
    print(c)
    # Tensor("Mul:0", shape=(), dtype=int32)

As explained above, no computation happens while the graph is being defined; therefore c is not 20 but an int32 tensor that has no value yet.

    sess = tf.Session()
    print(sess.run(c))
    # 20

Next, let's look at placeholders.

A placeholder is a variable whose value is only supplied later, when the graph is run:

    x = tf.placeholder(tf.int64, name = 'x')
    print(sess.run(2 * x, feed_dict = {x: 3}))
    # 6
    sess.close()
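
Several comments in the code below refer to "method 1" and "method 2" for running a session; the explanation of those methods was lost in the repost. As a minimal sketch, they are the two common session idioms:

    # Method 1: create the session explicitly and close it yourself.
    sess = tf.Session()
    result = sess.run(c)
    sess.close()

    # Method 2: use a context manager, which closes the session automatically.
    with tf.Session() as sess:
        result = sess.run(c)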


Linear function

Next, let's use TensorFlow to implement one of the most common computations in a neural network: the linear function.

    def linear_function():
        """
        Implements a linear function:
                Initializes W to be a random tensor of shape (4,3)
                Initializes X to be a random tensor of shape (3,1)
                Initializes b to be a random tensor of shape (4,1)
        Returns:
        result -- runs the session for Y = WX + b
        """

        np.random.seed(1)

        ### START CODE HERE ### (4 lines of code)
        X = tf.constant(np.random.randn(3,1), name = "X")
        W = tf.constant(np.random.randn(4,3), name = "W")
        b = tf.constant(np.random.randn(4,1), name = "b")
        Y = tf.matmul(W, X) + b
        ### END CODE HERE ###

        # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate

        ### START CODE HERE ###
        sess = tf.Session()
        result = sess.run(Y)
        ### END CODE HERE ###

        # close the session
        sess.close()

        return result
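
A quick check; since the numpy seed is fixed above, the call returns a deterministic (4, 1) array:

    print("result = " + str(linear_function()))
    # result = [[-2.15657382]
    #           [ 2.95891446]
    #           [-1.08926781]
    #           [-0.84538042]]   (approximately)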


The sigmoid function

    def sigmoid(z):
        """
        Computes the sigmoid of z

        Arguments:
        z -- input value, scalar or vector

        Returns:
        results -- the sigmoid of z
        """

        ### START CODE HERE ### (approx. 4 lines of code)
        # Create a placeholder for x. Name it 'x'.
        x = tf.placeholder(tf.float32, name = "x")

        # compute sigmoid(x)
        sigmoid = tf.sigmoid(x)

        # Create a session, and run it. Please use the method 2 explained above.
        # You should use a feed_dict to pass z's value to x.
        with tf.Session() as sess:
            # Run session and call the output "result"
            result = sess.run(sigmoid, feed_dict = {x: z})

        ### END CODE HERE ###

        return result
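
For example:

    print("sigmoid(0)  = " + str(sigmoid(0)))   # 0.5
    print("sigmoid(12) = " + str(sigmoid(12)))  # approximately 0.999994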


Computing the cost

For logits z and labels y, the sigmoid cross-entropy cost (formula (2) of the original post, whose image was lost in reposting) is

J = -(1/m) * sum( y * log(sigmoid(z)) + (1 - y) * log(1 - sigmoid(z)) )

Its implementation is as follows:

    def cost(logits, labels):
        """
        Computes the cost using the sigmoid cross entropy

        Arguments:
        logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
        labels -- vector of labels y (1 or 0)

        Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
        in the TensorFlow documentation. So logits will feed into z, and labels into y.

        Returns:
        cost -- runs the session of the cost (formula (2))
        """

        ### START CODE HERE ###

        # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
        z = tf.placeholder(tf.float32, name = "logits")
        y = tf.placeholder(tf.float32, name = "labels")

        # Use the loss function (approx. 1 line)
        cost = tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y)

        # Create a session (approx. 1 line). See method 1 above.
        sess = tf.Session()

        # Run the session (approx. 1 line).
        cost = sess.run(cost, feed_dict = {z: logits, y: labels})

        # Close the session (approx. 1 line). See method 1 above.
        sess.close()
        ### END CODE HERE ###

        return cost
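
For example, reusing the sigmoid helper above:

    logits = sigmoid(np.array([0.2, 0.4, 0.7, 0.9]))
    cost_value = cost(logits, np.array([0., 0., 1., 1.]))
    print("cost = " + str(cost_value))
    # approximately [ 1.005  1.037  0.414  0.400 ]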

See that?

With a single function, tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y), we implemented this fairly involved cost function.

That is the appeal of deep-learning frameworks!


One-hot encoding

For a multi-class problem, the labels we are given are usually integers from 0 to C-1, where C is the number of classes.

Before training, however, we need to convert each of these integers into a C-dimensional one-hot vector:

    def one_hot_matrix(labels, C):
        """
        Creates a matrix where the i-th row corresponds to the ith class number and the jth column
        corresponds to the jth training example. So if example j has label i, then entry (i, j)
        will be 1.
        Arguments:
        labels -- vector containing the labels
        C -- number of classes, the depth of the one hot dimension
        Returns:
        one_hot -- one hot matrix
        """
        ### START CODE HERE ###
        # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
        C = tf.constant(C, name = "C")
        # Use tf.one_hot, be careful with the axis (approx. 1 line)
        # axis=0 puts the classes along the rows, giving shape (C, number of examples)
        one_hot_matrix = tf.one_hot(labels, C, axis = 0)
        # Create the session (approx. 1 line)
        sess = tf.Session()
        # Run the session (approx. 1 line)
        one_hot = sess.run(one_hot_matrix)
        # Close the session (approx. 1 line). See method 1 above.
        sess.close()
        ### END CODE HERE ###
        return one_hot
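
For example:

    labels = np.array([1, 2, 3, 0, 2, 1])
    one_hot = one_hot_matrix(labels, C = 4)
    print(one_hot)
    # [[ 0.  0.  0.  1.  0.  0.]
    #  [ 1.  0.  0.  0.  0.  1.]
    #  [ 0.  1.  0.  0.  1.  0.]
    #  [ 0.  0.  1.  0.  0.  0.]]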


Initializing with zeros and ones

    def zeros(shape):
        """
        Creates an array of zeros of dimension shape
        Arguments:
        shape -- shape of the array you want to create
        Returns:
        zeros -- array containing only zeros
        """
        ### START CODE HERE ###
        # Create "zeros" tensor using tf.zeros(...). (approx. 1 line)
        zeros = tf.zeros(shape)
        # Create the session (approx. 1 line)
        sess = tf.Session()
        # Run the session to compute 'zeros' (approx. 1 line)
        zeros = sess.run(zeros)
        # Close the session (approx. 1 line). See method 1 above.
        sess.close()
        ### END CODE HERE ###
        return zeros

    def ones(shape):
        """
        Creates an array of ones of dimension shape
        Arguments:
        shape -- shape of the array you want to create
        Returns:
        ones -- array containing only ones
        """
        ### START CODE HERE ###
        # Create "ones" tensor using tf.ones(...). (approx. 1 line)
        ones = tf.ones(shape)
        # Create the session (approx. 1 line)
        sess = tf.Session()
        # Run the session to compute 'ones' (approx. 1 line)
        ones = sess.run(ones)
        # Close the session (approx. 1 line). See method 1 above.
        sess.close()
        ### END CODE HERE ###
        return ones
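
For example:

    print(zeros([3]))  # [ 0.  0.  0.]
    print(ones([3]))   # [ 1.  1.  1.]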


Building a neural network model with TensorFlow

Building a neural network model with TensorFlow has two main phases:

1. Build the computation graph

2. Train and run it

Problem statement:

We want to build a neural network that recognizes six hand signs representing the digits 0 to 5.

Each image is 64 x 64 pixels; the training set contains 1080 images and the test set contains 120 images.

    # Load the dataset
    X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

Let's look at one of the images:

    # Example of a picture
    index = 0
    plt.imshow(X_train_orig[index])
    print ("y = " + str(np.squeeze(Y_train_orig[:, index])))

Next, we preprocess the dataset:

flattening each image, normalizing the pixel values, and applying the one-hot encoding described above.

    # Flatten the training and test images
    X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
    X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
    # Normalize image vectors
    X_train = X_train_flatten / 255.
    X_test = X_test_flatten / 255.
    # Convert training and test labels to one hot matrices
    Y_train = convert_to_one_hot(Y_train_orig, 6)
    Y_test = convert_to_one_hot(Y_test_orig, 6)

The model we need to build (the original figure was lost in reposting) is:

LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

where the softmax layer is the standard output layer for multi-class problems.

First, we create some placeholders:

    def create_placeholders(n_x, n_y):
        """
        Creates the placeholders for the tensorflow session.
        Arguments:
        n_x -- scalar, size of an image vector (num_px * num_px * 3 = 64 * 64 * 3 = 12288)
        n_y -- scalar, number of classes (from 0 to 5, so -> 6)
        Returns:
        X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
        Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
        Tips:
        - Using None lets us stay flexible about the number of examples fed to the placeholders.
          In fact, the number of examples during test/train is different.
        """

        ### START CODE HERE ### (approx. 2 lines)
        X = tf.placeholder(tf.float32, [n_x, None], name = "X")
        Y = tf.placeholder(tf.float32, [n_y, None], name = "Y")
        ### END CODE HERE ###
        return X, Y
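
For example:

    X, Y = create_placeholders(12288, 6)
    print("X = " + str(X))  # Tensor("X:0", shape=(12288, ?), dtype=float32)
    print("Y = " + str(Y))  # Tensor("Y:0", shape=(6, ?), dtype=float32)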

Next, we initialize the parameters:

    def initialize_parameters():
        """
        Initializes parameters to build a neural network with tensorflow. The shapes are:
        W1 : [25, 12288]
        b1 : [25, 1]
        W2 : [12, 25]
        b2 : [12, 1]
        W3 : [6, 12]
        b3 : [6, 1]
        Returns:
        parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
        """
        tf.set_random_seed(1)  # so that your "random" numbers match ours
        ### START CODE HERE ### (approx. 6 lines of code)
        W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
        b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
        W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
        b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer())
        W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
        b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer())
        ### END CODE HERE ###

        parameters = {"W1": W1,
                      "b1": b1,
                      "W2": W2,
                      "b2": b2,
                      "W3": W3,
                      "b3": b3}
        return parameters
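
A quick sanity check, run in a fresh graph:

    tf.reset_default_graph()
    with tf.Session() as sess:
        parameters = initialize_parameters()
        print("W1 = " + str(parameters["W1"]))
        # W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>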

Then we implement the forward propagation. Note that it stops at Z3, the output of the last linear unit: the TensorFlow cost function used below applies the softmax itself.

    def forward_propagation(X, parameters):
        """
        Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
        Arguments:
        X -- input dataset placeholder, of shape (input size, number of examples)
        parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                      the shapes are given in initialize_parameters

        Returns:
        Z3 -- the output of the last LINEAR unit
        """
        # Retrieve the parameters from the dictionary "parameters"
        W1 = parameters['W1']
        b1 = parameters['b1']
        W2 = parameters['W2']
        b2 = parameters['b2']
        W3 = parameters['W3']
        b3 = parameters['b3']
        ### START CODE HERE ### (approx. 5 lines)    # Numpy Equivalents:
        Z1 = tf.matmul(W1, X) + b1                   # Z1 = np.dot(W1, X) + b1
        A1 = tf.nn.relu(Z1)                          # A1 = relu(Z1)
        Z2 = tf.matmul(W2, A1) + b2                  # Z2 = np.dot(W2, A1) + b2
        A2 = tf.nn.relu(Z2)                          # A2 = relu(Z2)
        Z3 = tf.matmul(W3, A2) + b3                  # Z3 = np.dot(W3, A2) + b3
        ### END CODE HERE ###
        return Z3

Finally, we compute the cost:

    def compute_cost(Z3, Y):
        """
        Computes the cost
        Arguments:
        Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
        Y -- "true" labels vector placeholder, same shape as Z3
        Returns:
        cost -- Tensor of the cost function
        """
        # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
        logits = tf.transpose(Z3)
        labels = tf.transpose(Y)
        ### START CODE HERE ### (1 line of code)
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))
        ### END CODE HERE ###
        return cost
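
Putting the graph-building pieces together as a quick check (no training yet, so cost is still just a tensor):

    tf.reset_default_graph()
    with tf.Session() as sess:
        X, Y = create_placeholders(12288, 6)
        parameters = initialize_parameters()
        Z3 = forward_propagation(X, parameters)
        cost = compute_cost(Z3, Y)
        print("cost = " + str(cost))
        # cost = Tensor("Mean:0", shape=(), dtype=float32)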

Note that we do not have to write the backward propagation or the parameter updates ourselves: TensorFlow derives both automatically from the forward propagation and cost function we defined. All we add is an optimizer, as sketched below.
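
A minimal sketch (the model below uses Adam instead; minibatch_X and minibatch_Y stand for whatever batch you feed in): a single sess.run of the optimizer op performs forward propagation, backpropagation, and one parameter update.

    # Define the optimizer on the cost tensor; gradients are derived automatically.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.01).minimize(cost)
    # One training step on one minibatch:
    _, c = sess.run([optimizer, cost], feed_dict = {X: minibatch_X, Y: minibatch_Y})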

Now let's assemble the full model from the functions we just implemented:

    def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
              num_epochs = 1500, minibatch_size = 32, print_cost = True):
        """
        Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
        Arguments:
        X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
        Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
        X_test -- test set, of shape (input size = 12288, number of test examples = 120)
        Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
        learning_rate -- learning rate of the optimization
        num_epochs -- number of epochs of the optimization loop
        minibatch_size -- size of a minibatch
        print_cost -- True to print the cost every 100 epochs
        Returns:
        parameters -- parameters learnt by the model. They can then be used to predict.
        """
        ops.reset_default_graph()   # to be able to rerun the model without overwriting tf variables
        tf.set_random_seed(1)       # to keep consistent results
        seed = 3                    # to keep consistent results
        (n_x, m) = X_train.shape    # (n_x: input size, m : number of examples in the train set)
        n_y = Y_train.shape[0]      # n_y : output size
        costs = []                  # To keep track of the cost

        # Create Placeholders of shape (n_x, n_y)
        ### START CODE HERE ### (1 line)
        X, Y = create_placeholders(n_x, n_y)
        ### END CODE HERE ###

        # Initialize parameters
        ### START CODE HERE ### (1 line)
        parameters = initialize_parameters()
        ### END CODE HERE ###

        # Forward propagation: Build the forward propagation in the tensorflow graph
        ### START CODE HERE ### (1 line)
        Z3 = forward_propagation(X, parameters)
        ### END CODE HERE ###

        # Cost function: Add cost function to tensorflow graph
        ### START CODE HERE ### (1 line)
        cost = compute_cost(Z3, Y)
        ### END CODE HERE ###

        # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
        ### START CODE HERE ### (1 line)
        optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
        ### END CODE HERE ###

        # Initialize all the variables
        init = tf.global_variables_initializer()

        # Start the session to compute the tensorflow graph
        with tf.Session() as sess:
            # Run the initialization
            sess.run(init)

            # Do the training loop
            for epoch in range(num_epochs):

                epoch_cost = 0.                            # Defines a cost related to an epoch
                num_minibatches = int(m / minibatch_size)  # number of minibatches of size minibatch_size in the train set
                seed = seed + 1
                minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

                for minibatch in minibatches:

                    # Select a minibatch
                    (minibatch_X, minibatch_Y) = minibatch
                    # IMPORTANT: The line that runs the graph on a minibatch.
                    # Run the session to execute the "optimizer" and the "cost";
                    # the feed_dict should contain a minibatch for (X, Y).
                    ### START CODE HERE ### (1 line)
                    _, minibatch_cost = sess.run([optimizer, cost], feed_dict = {X: minibatch_X, Y: minibatch_Y})
                    ### END CODE HERE ###
                    epoch_cost += minibatch_cost / num_minibatches

                # Print the cost every 100 epochs
                if print_cost == True and epoch % 100 == 0:
                    print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
                if print_cost == True and epoch % 5 == 0:
                    costs.append(epoch_cost)

            # plot the cost
            plt.plot(np.squeeze(costs))
            plt.ylabel('cost')
            plt.xlabel('iterations (per fives)')
            plt.title("Learning rate =" + str(learning_rate))
            plt.show()

            # lets save the parameters in a variable
            parameters = sess.run(parameters)
            print ("Parameters have been trained!")

            # Calculate the correct predictions
            correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))

            # Calculate accuracy on the test set
            accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

            print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
            print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
            return parameters

Let's train and test the model:

    parameters = model(X_train, Y_train, X_test, Y_test)

After training for a while, the model reaches about 99.9% accuracy on the training set but only about 71.7% on the test set.

There is clearly some overfitting! Think about how you would address it (for example, with L2 regularization or dropout).


Testing with your own images

Besides the images in the training and test sets, we can also try the model on some other pictures.

    import scipy
    from PIL import Image
    from scipy import ndimage

    ## START CODE HERE ## (PUT YOUR IMAGE NAME)
    my_image = "thumbs_up.jpg"
    ## END CODE HERE ##

    # We preprocess your image to fit your algorithm.
    # (ndimage.imread and scipy.misc.imresize require an older SciPy; in newer
    # versions you would read and resize the image with PIL or imageio instead.)
    fname = "images/" + my_image
    image = np.array(ndimage.imread(fname, flatten=False))
    my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
    my_image_prediction = predict(my_image, parameters)

    plt.imshow(image)
    print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))

That wraps up our introduction to TensorFlow; most of the hands-on work in later posts will be done with it!


 
