API overview
- Implementing a fully connected network in TF 1.x
- placeholder, tf.layers.dense, tf.train.AdamOptimizer
- tf.losses.sparse_softmax_cross_entropy
- tf.global_variables_initializer (initializes the parameters)
- feed_dict (feeds data into the graph)
- Dataset
  - Initializing a Dataset (both APIs below are deprecated in TF 2.0; see the sketch after this list):
    - Dataset.make_one_shot_iterator
    - Dataset.make_initializable_iterator
- Custom estimator
  - tf.feature_column.input_layer
  - tf.estimator.EstimatorSpec
  - tf.metrics.accuracy
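Of the APIs above, the two Dataset iterator constructors are not used in the code below, so here is a minimal, self-contained TF 1.x sketch of both (illustrative data; both methods still work in 1.13 but are deprecated in TF 2.0):

import numpy as np
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices(np.arange(10))
dataset = dataset.repeat().batch(5)

# 1. One-shot iterator: no explicit initialization step is needed.
it = dataset.make_one_shot_iterator()
next_batch = it.get_next()
with tf.Session() as sess:
    print(sess.run(next_batch))  # [0 1 2 3 4]

# 2. Initializable iterator: its initializer must be run before use.
it2 = dataset.make_initializable_iterator()
next_batch2 = it2.get_next()
with tf.Session() as sess:
    sess.run(it2.initializer)
    print(sess.run(next_batch2))  # [0 1 2 3 4]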
Image classification
Importing packages
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
    print(module.__name__, module.__version__)
Output:
1.13.1
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
matplotlib 3.1.3
numpy 1.18.1
pandas 1.0.1
sklearn 0.22.1
tensorflow 1.13.1
tensorflow._api.v1.keras 2.2.4-tf
Data preprocessing:
fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]
print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
Output:
(5000, 28, 28) (5000,)
(55000, 28, 28) (55000,)
(10000, 28, 28) (10000,)
print(np.max(x_train), np.min(x_train))
Output: 255 0
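The integer labels 0-9 correspond to clothing categories. For reference when inspecting predictions (not required by the training code below), the standard Fashion-MNIST class names are:

# Fashion-MNIST label names; the index equals the integer label in y_*.
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print(class_names[y_train[0]])  # name of the first training example's label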
Normalization (z-score standardization):
from sklearn.preprocessing import StandardScaler
# x_train: [None, 28, 28] -> [None, 784]
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(
    x_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)
x_valid_scaled = scaler.transform(
    x_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)
x_test_scaled = scaler.transform(
    x_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28 * 28)
print(np.max(x_train_scaled), np.min(x_train_scaled))
Output: 2.0231433 -0.8105136
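Note the reshape(-1, 1) above: it stacks all training pixels into a single column, so StandardScaler fits one global mean and standard deviation rather than per-pixel statistics. A quick sanity check of the equivalent manual computation (a sketch, using the variables defined above):

# The scaler's fitted statistics are the global mean/std of all training pixels.
x_train_f = x_train.astype(np.float32)
print(scaler.mean_, scaler.scale_)        # statistics fitted by StandardScaler
print(x_train_f.mean(), x_train_f.std())  # should match, up to float precision
manual = (x_train_f - x_train_f.mean()) / x_train_f.std()
print(np.allclose(manual.reshape(-1, 28 * 28), x_train_scaled, atol=1e-4))  # expect True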
Building the computation graph:
# Define the network structure
hidden_units = [100, 100]  # a list: two hidden layers, 100 units each
class_num = 10             # 10 classes

x = tf.placeholder(tf.float32, [None, 28 * 28])  # float placeholder; None is the batch size
y = tf.placeholder(tf.int64, [None])

# temporary variable used while iterating over the layers
input_for_next_layer = x
# define the hidden layers of the network
for hidden_unit in hidden_units:
    input_for_next_layer = tf.layers.dense(input_for_next_layer,   # input
                                           hidden_unit,            # number of units
                                           activation=tf.nn.relu)  # activation function
# define the output layer
logits = tf.layers.dense(input_for_next_layer,
                         class_num)

# Define the loss function
# logits = last_hidden_output * W(logits) -> softmax -> prob
# 1. logits -> softmax -> prob
# 2. labels -> one_hot
# 3. calculate cross entropy
loss = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)

# Compute accuracy
prediction = tf.argmax(logits, 1)             # index of the largest logit
correct_prediction = tf.equal(prediction, y)  # boolean vector; True = correct prediction
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))  # cast to float, then average

# One training step
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
print(x)
print(logits)
Output:
Tensor("Placeholder:0", shape=(?, 784), dtype=float32)
Tensor("dense_2/BiasAdd:0", shape=(?, 10), dtype=float32)
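The three numbered steps in the loss comments above can be spelled out as graph ops. A sketch of what sparse_softmax_cross_entropy computes (the built-in uses a numerically more stable fused kernel, so prefer it for training; this version is for understanding only):

# Manual equivalent of tf.losses.sparse_softmax_cross_entropy (illustrative).
probs = tf.nn.softmax(logits)                    # 1. logits -> softmax -> prob
labels_one_hot = tf.one_hot(y, depth=class_num)  # 2. labels -> one_hot
manual_loss = tf.reduce_mean(                    # 3. cross entropy, averaged over the batch
    -tf.reduce_sum(labels_one_hot * tf.log(probs + 1e-8), axis=1))
# Evaluated with the same feed_dict, manual_loss should match loss
# up to numerical precision.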
Defining and running the training loop:
init = tf.global_variables_initializer()
batch_size = 20
epochs = 10
train_steps_per_epoch = x_train.shape[0] // batch_size
valid_steps = x_valid.shape[0] // batch_size  # unused below; eval_with_sess recomputes it

def eval_with_sess(sess, x, y, accuracy, images, labels, batch_size):
    eval_steps = images.shape[0] // batch_size
    eval_accuracies = []
    for step in range(eval_steps):
        batch_data = images[step * batch_size : (step + 1) * batch_size]
        batch_label = labels[step * batch_size : (step + 1) * batch_size]
        accuracy_val = sess.run(accuracy,
                                feed_dict={
                                    x: batch_data,
                                    y: batch_label
                                })
        eval_accuracies.append(accuracy_val)
    return np.mean(eval_accuracies)

# Open a session
with tf.Session() as sess:
    # initialize the variables
    sess.run(init)
    for epoch in range(epochs):
        for step in range(train_steps_per_epoch):
            # slice out the samples for this batch
            batch_data = x_train_scaled[
                step * batch_size:(step + 1) * batch_size]
            batch_label = y_train[
                step * batch_size:(step + 1) * batch_size]
            # feed the batch into the graph and run one training step
            loss_val, accuracy_val, _ = sess.run(
                [loss, accuracy, train_op],
                feed_dict={
                    x: batch_data,
                    y: batch_label
                })
            # print training progress
            print('\r[Train] epoch: %d, step: %d, loss: %3.5f, accuracy: %2.2f' %
                  (epoch, step, loss_val, accuracy_val), end="")
        valid_accuracy = eval_with_sess(sess, x, y, accuracy,
                                        x_valid_scaled, y_valid,
                                        batch_size)
        print("\t[Valid] acc: %2.2f" % valid_accuracy)
Output:
[Train] epoch: 0, step: 2749, loss: 0.28534, accuracy: 0.85 [Valid] acc: 0.86
[Train] epoch: 1, step: 2749, loss: 0.16660, accuracy: 0.90 [Valid] acc: 0.87
[Train] epoch: 2, step: 2749, loss: 0.13731, accuracy: 0.95 [Valid] acc: 0.88
[Train] epoch: 3, step: 2749, loss: 0.12373, accuracy: 0.95 [Valid] acc: 0.88
[Train] epoch: 4, step: 2749, loss: 0.12829, accuracy: 0.95 [Valid] acc: 0.88
[Train] epoch: 5, step: 2749, loss: 0.11939, accuracy: 1.00 [Valid] acc: 0.88
[Train] epoch: 6, step: 2749, loss: 0.10637, accuracy: 0.95 [Valid] acc: 0.88
[Train] epoch: 7, step: 2749, loss: 0.11134, accuracy: 0.95 [Valid] acc: 0.88
[Train] epoch: 8, step: 2749, loss: 0.09228, accuracy: 0.95 [Valid] acc: 0.89
[Train] epoch: 9, step: 2749, loss: 0.12499, accuracy: 0.95 [Valid] acc: 0.88
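With the session still open, the same eval_with_sess helper can report test accuracy (a sketch; place it inside the with tf.Session() block, after the epoch loop):

    # Evaluate on the held-out test set after training finishes.
    test_accuracy = eval_with_sess(sess, x, y, accuracy,
                                   x_test_scaled, y_test,
                                   batch_size)
    print("[Test] acc: %2.2f" % test_accuracy)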