Improving Deep Neural Networks (Course 2) - Initialization - Week 1 Programming Assignment 1
We initialize the weight parameters W with three different methods:

Zeros: parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
Random: parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
He: parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * (math.sqrt(2./layers_dims[l-1]))

In all three cases the bias b is always initialized to zeros: parameters['b' + str(l)] = np.zeros((layers_dims[l], 1)).
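Why the scale matters: the sketch below is my own quick check, not part of the assignment (the layer width n is made up). It pushes a batch of standard-normal inputs through five ReLU layers under each scheme and prints the spread of the activations. Zeros kill the signal outright, the ×10 scale makes it explode exponentially, and the He scale keeps it roughly constant.

import numpy as np

np.random.seed(0)
n = 100                        # hypothetical layer width
a0 = np.random.randn(n, 1000)  # 1000 fake standard-normal examples

for name, scale in [("zeros", 0.0), ("random*10", 10.0), ("he", np.sqrt(2.0 / n))]:
    a = a0
    for _ in range(5):
        W = np.random.randn(n, n) * scale
        a = np.maximum(0, np.dot(W, a))  # LINEAR -> RELU
    print(name, "-> activation std after 5 layers:", a.std())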
#coding=utf-8
import math
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (2, number of examples)
    Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
    learning_rate -- learning rate for gradient descent
    num_iterations -- number of iterations to run gradient descent
    print_cost -- if True, print the cost every 1000 iterations
    initialization -- flag to choose which initialization to use ("zeros", "random" or "he")

    Returns:
    parameters -- parameters learnt by the model
    """
    grads = {}
    costs = []  # to keep track of the loss
    m = X.shape[1]  # number of examples
    layers_dims = [X.shape[0], 10, 5, 1]

    # Initialize parameters dictionary.
    if initialization == "zeros":
        parameters = initialize_parameters_zeros(layers_dims)
    elif initialization == "random":
        parameters = initialize_parameters_random(layers_dims)
    elif initialization == "he":
        parameters = initialize_parameters_he(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        a3, cache = forward_propagation(X, parameters)
        # Loss
        cost = compute_loss(a3, Y)
        # Backward propagation.
        grads = backward_propagation(X, Y, cache)
        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)
        # Print and record the loss every 1000 iterations
        if print_cost and i % 1000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
            costs.append(cost)

    # plot the loss
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (per thousands)')  # one cost value is recorded every 1000 iterations
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    parameters = {}
    L = len(layers_dims)  # number of layers in the network

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###
    return parameters
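With all-zero weights the network can never break symmetry: every linear step outputs 0, ReLU(0) = 0, and the final sigmoid(0) = 0.5, so every example gets the same prediction and the gradients preserve the symmetry forever. A minimal check of my own (hypothetical, not part of the graded code):

demo_p = initialize_parameters_zeros([2, 10, 5, 1])
X_demo = np.random.randn(2, 5)                                      # 5 fake examples
a1 = np.maximum(0, np.dot(demo_p["W1"], X_demo) + demo_p["b1"])     # LINEAR -> RELU
a2 = np.maximum(0, np.dot(demo_p["W2"], a1) + demo_p["b2"])         # LINEAR -> RELU
a3 = 1 / (1 + np.exp(-(np.dot(demo_p["W3"], a2) + demo_p["b3"])))   # LINEAR -> SIGMOID
print(a3)  # every entry is 0.5, so the cost stays stuck at -log(0.5) ≈ 0.693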
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)  # This seed makes sure your "random" numbers will be the same as ours
    parameters = {}
    L = len(layers_dims)  # integer representing the number of layers

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###
    return parameters
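The ×10 scale is deliberately too large: the pre-activations grow with every layer, so the final sigmoid saturates at 0 or 1 and the cross-entropy cost starts out huge (log of a value near 0). A quick illustration of my own (hypothetical, not part of the graded code):

demo_p = initialize_parameters_random([2, 10, 5, 1])
X_demo = np.random.randn(2, 5)                                   # 5 fake examples
a1 = np.maximum(0, np.dot(demo_p["W1"], X_demo) + demo_p["b1"])  # LINEAR -> RELU
a2 = np.maximum(0, np.dot(demo_p["W2"], a1) + demo_p["b2"])      # LINEAR -> RELU
z3 = np.dot(demo_p["W3"], a2) + demo_p["b3"]
print(np.abs(z3).max())  # very large, so sigmoid(z3) saturates at 0 or 1
                         # and log(a3) in the cross-entropy cost explodes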
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1  # integer representing the number of layers, excluding the input layer

    for l in range(1, L + 1):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * (math.sqrt(2./layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###
    return parameters
if __name__ == '__main__':
    parameters = initialize_parameters_zeros([3, 2, 1])
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))

    parameters = model(train_X, train_Y, initialization = "zeros")
    print("On the train set:")
    predictions_train = predict(train_X, train_Y, parameters)
    print("On the test set:")
    predictions_test = predict(test_X, test_Y, parameters)
    print("predictions_train = " + str(predictions_train))
    print("predictions_test = " + str(predictions_test))
    plt.title("Model with Zeros initialization")
    axes = plt.gca()
    axes.set_xlim([-1.5, 1.5])
    axes.set_ylim([-1.5, 1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

    parameters = initialize_parameters_random([3, 2, 1])
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))

    parameters = model(train_X, train_Y, initialization = "random")
    print("On the train set:")
    predictions_train = predict(train_X, train_Y, parameters)
    print("On the test set:")
    predictions_test = predict(test_X, test_Y, parameters)
    print(predictions_train)
    print(predictions_test)
    plt.title("Model with large random initialization")
    axes = plt.gca()
    axes.set_xlim([-1.5, 1.5])
    axes.set_ylim([-1.5, 1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

    parameters = initialize_parameters_he([2, 4, 1])
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))

    parameters = model(train_X, train_Y, initialization = "he")
    print("On the train set:")
    predictions_train = predict(train_X, train_Y, parameters)
    print("On the test set:")
    predictions_test = predict(test_X, test_Y, parameters)
    plt.title("Model with He initialization")
    axes = plt.gca()
    axes.set_xlim([-1.5, 1.5])
    axes.set_ylim([-1.5, 1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Execution results:
(1) Zeros initialization (figure: cost curve and decision boundary)
(2) Random initialization (figure: cost curve and decision boundary)
(3) He initialization (figure: cost curve and decision boundary)
Note:
I found that if W is randomly initialized as parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]), i.e. without any scaling factor at all, it actually performs better than He here: accuracy reaches 0.997, versus 0.993 for He.
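This is less surprising than it looks: for this network's layer sizes the He factors are already close to 1. A quick check of my own, using the assignment's layers_dims = [2, 10, 5, 1]:

import math
layers_dims = [2, 10, 5, 1]
for l in range(1, len(layers_dims)):
    print("layer", l, "He factor:", math.sqrt(2.0 / layers_dims[l-1]))
# layer 1 He factor: 1.0
# layer 2 He factor: 0.447...
# layer 3 He factor: 0.632...

So an unscaled standard normal is within a small constant of the He scale on such a shallow network, which may explain why the two runs land so close; on deeper networks the per-layer factor compounds and He initialization matters far more.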