Celebration! *Grokking Deep Learning* is now open source: 16 chapters of frustration-free deep learning, and high-school math is all you need!

Posted by 紅色石頭 on 2019-04-08

Today I'd like to introduce an excellent tutorial for getting started with, and progressing in, deep learning: *Grokking Deep Learning* (Chinese title: 《圖解深度學習》). The book is published by Manning under its MEAP (early-access subscription) program and has been released in irregular installments since August 2016. As of today, the book is finally complete. It is aimed squarely at beginners, and its many lively illustrations make it a great first book on deep learning.

About the Author

The book's author, Andrew Trask, is a scientist at DeepMind, the leader of OpenMined, and earned his doctorate at the University of Oxford.

His personal homepage is: https://iamtrask.github.io/

About the Book

The book teaches the fundamentals of deep learning from an intuitive standpoint, so that you understand how machines learn using deep learning. It does not focus on frameworks such as Torch, TensorFlow, or Keras; instead, it focuses on teaching you the deep learning methods behind those frameworks. Everything is built from scratch using only Python and NumPy, so you come to understand every detail of training a neural network rather than just how to use a code library. Think of this book as a prerequisite to mastering one of the major frameworks.
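To give a feel for that from-scratch style, here is a small sketch of my own (not code from the book): a tiny two-layer NumPy network whose backpropagated gradient is verified against a finite-difference estimate, which is exactly the kind of detail the book makes you work through by hand.

```python
import numpy as np

np.random.seed(1)

def tanh(x):
    return np.tanh(x)

def tanh2deriv(out):           # derivative of tanh, given its output
    return 1 - out ** 2

# A tiny two-layer network and its squared-error loss, plain NumPy only.
X = np.random.random((5, 3))   # 5 samples, 3 features
y = np.random.random((5, 2))   # 2 targets per sample
w0 = 0.2 * np.random.random((3, 4)) - 0.1
w1 = 0.2 * np.random.random((4, 2)) - 0.1

def loss(w0, w1):
    hidden = tanh(X.dot(w0))
    pred = hidden.dot(w1)
    return np.sum((pred - y) ** 2) / 2

# Backpropagation: the analytic gradient of the loss w.r.t. w0.
hidden = tanh(X.dot(w0))
pred = hidden.dot(w1)
delta_out = pred - y
delta_hidden = delta_out.dot(w1.T) * tanh2deriv(hidden)
grad_w0 = X.T.dot(delta_hidden)

# Check backprop against a numerical (finite-difference) gradient.
eps = 1e-6
num_grad = np.zeros_like(w0)
for i in range(w0.shape[0]):
    for j in range(w0.shape[1]):
        w_plus = w0.copy();  w_plus[i, j] += eps
        w_minus = w0.copy(); w_minus[i, j] -= eps
        num_grad[i, j] = (loss(w_plus, w1) - loss(w_minus, w1)) / (2 * eps)

# The difference should be tiny (numerical noise only).
print(np.max(np.abs(grad_w0 - num_grad)))
```

If backprop is implemented correctly, the analytic and numerical gradients agree to many decimal places; this kind of gradient check is a standard sanity test when writing training code without a framework.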

The book is divided into two parts. Part 1 introduces the fundamentals of neural networks, in 9 chapters:

Part 2 covers the more advanced layers and architectures of deep learning, in 7 chapters:

The greatest strength of *Grokking Deep Learning*, at a time when "just import the library" books are everywhere, is how honest it is: after more than ten chapters of build-up, the author completes a miniature deep learning library, which is arguably the book's greatest value.

Book Resources

*Grokking Deep Learning* is now available to read online, and all of the book's source code has been open-sourced.

Read online:

https://livebook.manning.com/#!/book/grokking-deep-learning/brief-contents/v-12/

Source code:

https://github.com/iamtrask/Grokking-Deep-Learning

All of the code in the book is implemented in Python rather than by simply calling libraries, which goes a long way toward helping you understand the concepts and principles of deep learning. For example, the book's Python implementation of a CNN model:

import numpy as np, sys
np.random.seed(1)

# Keras is used only to download the MNIST data; the model itself
# is plain NumPy.
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Use the first 1,000 training images, flattened and scaled to [0, 1].
images, labels = (x_train[0:1000].reshape(1000, 28*28) / 255,
                  y_train[0:1000])

# One-hot encode the training labels.
one_hot_labels = np.zeros((len(labels), 10))
for i, l in enumerate(labels):
    one_hot_labels[i][l] = 1
labels = one_hot_labels

test_images = x_test.reshape(len(x_test), 28*28) / 255
test_labels = np.zeros((len(y_test), 10))
for i, l in enumerate(y_test):
    test_labels[i][l] = 1

def tanh(x):
    return np.tanh(x)

def tanh2deriv(output):
    return 1 - (output ** 2)

def softmax(x):
    temp = np.exp(x)
    return temp / np.sum(temp, axis=1, keepdims=True)

alpha, iterations = (2, 300)
pixels_per_image, num_labels = (784, 10)
batch_size = 128

input_rows = 28
input_cols = 28

kernel_rows = 3
kernel_cols = 3
num_kernels = 16

hidden_size = ((input_rows - kernel_rows) *
               (input_cols - kernel_cols)) * num_kernels

# Convolutional kernels, stored as one (9 x 16) weight matrix.
kernels = 0.02 * np.random.random((kernel_rows * kernel_cols,
                                   num_kernels)) - 0.01

weights_1_2 = 0.2 * np.random.random((hidden_size,
                                      num_labels)) - 0.1

def get_image_section(layer, row_from, row_to, col_from, col_to):
    # Slice the same patch out of every image in the batch.
    section = layer[:, row_from:row_to, col_from:col_to]
    return section.reshape(-1, 1, row_to - row_from, col_to - col_from)

for j in range(iterations):
    correct_cnt = 0
    for i in range(int(len(images) / batch_size)):
        batch_start, batch_end = ((i * batch_size), ((i + 1) * batch_size))
        layer_0 = images[batch_start:batch_end]
        layer_0 = layer_0.reshape(layer_0.shape[0], 28, 28)

        # Gather every 3x3 patch of every image in the batch.
        sects = list()
        for row_start in range(layer_0.shape[1] - kernel_rows):
            for col_start in range(layer_0.shape[2] - kernel_cols):
                sect = get_image_section(layer_0,
                                         row_start,
                                         row_start + kernel_rows,
                                         col_start,
                                         col_start + kernel_cols)
                sects.append(sect)

        expanded_input = np.concatenate(sects, axis=1)
        es = expanded_input.shape
        flattened_input = expanded_input.reshape(es[0] * es[1], -1)

        # The convolution becomes a single matrix multiply.
        kernel_output = flattened_input.dot(kernels)
        layer_1 = tanh(kernel_output.reshape(es[0], -1))
        # Dropout: zero out half the hidden units, scale the rest by 2.
        dropout_mask = np.random.randint(2, size=layer_1.shape)
        layer_1 *= dropout_mask * 2
        layer_2 = softmax(np.dot(layer_1, weights_1_2))

        for k in range(batch_size):
            labelset = labels[batch_start + k:batch_start + k + 1]
            _inc = int(np.argmax(layer_2[k:k + 1]) ==
                       np.argmax(labelset))
            correct_cnt += _inc

        # Backpropagate the error and update both weight matrices.
        layer_2_delta = (labels[batch_start:batch_end] - layer_2) \
                        / (batch_size * layer_2.shape[0])
        layer_1_delta = layer_2_delta.dot(weights_1_2.T) * \
                        tanh2deriv(layer_1)
        layer_1_delta *= dropout_mask
        weights_1_2 += alpha * layer_1.T.dot(layer_2_delta)
        l1d_reshape = layer_1_delta.reshape(kernel_output.shape)
        k_update = flattened_input.T.dot(l1d_reshape)
        kernels -= alpha * k_update

    # Evaluate on the test set (no dropout at test time).
    test_correct_cnt = 0

    for i in range(len(test_images)):
        layer_0 = test_images[i:i + 1]
        layer_0 = layer_0.reshape(layer_0.shape[0], 28, 28)

        sects = list()
        for row_start in range(layer_0.shape[1] - kernel_rows):
            for col_start in range(layer_0.shape[2] - kernel_cols):
                sect = get_image_section(layer_0,
                                         row_start,
                                         row_start + kernel_rows,
                                         col_start,
                                         col_start + kernel_cols)
                sects.append(sect)

        expanded_input = np.concatenate(sects, axis=1)
        es = expanded_input.shape
        flattened_input = expanded_input.reshape(es[0] * es[1], -1)

        kernel_output = flattened_input.dot(kernels)
        layer_1 = tanh(kernel_output.reshape(es[0], -1))
        layer_2 = np.dot(layer_1, weights_1_2)

        test_correct_cnt += int(np.argmax(layer_2) ==
                                np.argmax(test_labels[i:i + 1]))
    if (j % 1 == 0):
        sys.stdout.write("\n" +
                         "I:" + str(j) +
                         " Test-Acc:" + str(test_correct_cnt / float(len(test_images))) +
                         " Train-Acc:" + str(correct_cnt / float(len(images))))
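One detail worth pulling out of the listing: `get_image_section` plus `np.concatenate` rewrites the convolution as a single matrix multiply over flattened patches, a technique commonly known as im2col. This small sketch of my own (not code from the book) checks the trick against a direct loop-based convolution:

```python
import numpy as np

np.random.seed(1)

# One 6x6 "image" and one 3x3 kernel.
image = np.random.random((6, 6))
kernel = np.random.random((3, 3))

# im2col: copy every 3x3 patch (stride 1, valid padding) into one row.
rows = []
for r in range(6 - 3 + 1):
    for c in range(6 - 3 + 1):
        rows.append(image[r:r + 3, c:c + 3].reshape(-1))
patches = np.array(rows)                       # shape (16, 9)

# The convolution then becomes a single matrix product.
out_im2col = patches.dot(kernel.reshape(-1))   # shape (16,)

# Reference: the same convolution written as explicit loops.
out_direct = np.array([np.sum(image[r:r + 3, c:c + 3] * kernel)
                       for r in range(4) for c in range(4)])

print(np.allclose(out_im2col, out_direct))  # True
```

With many kernels, `kernel.reshape(-1)` becomes a (9 x num_kernels) matrix and the same single `dot` computes all feature maps at once, which is exactly what the book's `flattened_input.dot(kernels)` line does.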

Downloads

Finally, the PDF of the book's first 11 chapters and all of the source code have been packaged up. To get them:

1. Scan the QR code below to follow the "AI有道" WeChat official account.

2. Reply to the account with the keyword: GDL
