Deep Learning Code Snippets
For convenient print debugging
Log the file name, line number, variable value, and variable type
def print_variables(var, line_no):
    import inspect
    frame = inspect.currentframe()
    # __FILE__
    fileName = frame.f_code.co_filename
    # __LINE__
    # fileNo = frame.f_lineno
    print('value: ', var, 'File:', fileName, 'at Line:', line_no)
    print('type: ', type(var), 'File:', fileName, 'at Line:', line_no)
import inspect
frame = inspect.currentframe()
print_variables(val_batches, frame.f_lineno)
print_variables(val_batches // 5, frame.f_lineno)
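The line number can also be recovered automatically from the caller's frame, so it does not need to be passed in. A small variant (a sketch; print_variables_auto is not part of the original helper):
import inspect

def print_variables_auto(var):
    # Look one frame up to get the caller's file name and line number.
    caller = inspect.currentframe().f_back
    print('value: ', var, 'type: ', type(var),
          'File:', caller.f_code.co_filename, 'at Line:', caller.f_lineno)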
Plot a 3x3 grid
Plot images from the dataset
import matplotlib.pyplot as plt

def plot_9_img():
    plt.figure(figsize=(10, 10))
    for images, labels in train_dataset.take(1):
        for i in range(9):
            ax = plt.subplot(3, 3, i + 1)
            plt.imshow(images[i].numpy().astype("uint8"))
            plt.title(class_names[labels[i]])
            plt.axis("off")
    plt.show()
# plot_9_img()
Get the class folder names of the dataset
class_names = train_dataset.class_names
print(class_names, 'is class_names')
Build a dataset from a directory
train_dataset = image_dataset_from_directory(train_dir,
                                             shuffle=True,
                                             batch_size=BATCH_SIZE,
                                             image_size=IMG_SIZE)
validation_dataset = image_dataset_from_directory(validation_dir,
                                                  shuffle=True,
                                                  batch_size=BATCH_SIZE,
                                                  image_size=IMG_SIZE)
If the final layer has no activation function, the model outputs raw logits, so pass from_logits=True to the loss:
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
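Because the outputs are logits, apply a sigmoid at prediction time to turn them into probabilities (a minimal sketch; the variable names are illustrative):
logits = model.predict(validation_dataset.take(1))    # raw logits, shape (batch, 1)
probabilities = tf.nn.sigmoid(logits)                 # map to (0, 1)
predictions = tf.cast(probabilities > 0.5, tf.int32)  # binary class labels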
next(iter(dataset)) pulls just a single batch from the dataset
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print_variables(feature_batch.shape, frame.f_lineno)
Custom model
Pass training=False so the frozen base model runs in inference mode (this keeps its BatchNormalization layers from updating their statistics):
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')

def build_model():
    inputs = tf.keras.Input(shape=(160, 160, 3))
    x = data_augmentation(inputs)
    x = preprocess_input(x)
    x = base_model(x, training=False)
    x = global_average_layer(x)
    x = tf.keras.layers.Dropout(0.2)(x)
    outputs = prediction_layer(x)
    model = tf.keras.Model(inputs, outputs)
    return model

data_augmentation = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.RandomFlip('horizontal'),
    tf.keras.layers.experimental.preprocessing.RandomRotation(0.2),
])
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
At this point only the Dense(1) head is trainable: 1280 weights plus 1 bias, which gives the 1,281 trainable parameters in the summary below.
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 160, 160, 3)]     0
_________________________________________________________________
sequential (Sequential)      (None, 160, 160, 3)       0
_________________________________________________________________
tf.math.truediv (TFOpLambda) (None, 160, 160, 3)       0
_________________________________________________________________
tf.math.subtract (TFOpLambda (None, 160, 160, 3)       0
_________________________________________________________________
mobilenetv2_1.00_160 (Functi (None, 5, 5, 1280)        2257984
_________________________________________________________________
global_average_pooling2d (Gl (None, 1280)              0
_________________________________________________________________
dropout (Dropout)            (None, 1280)               0
_________________________________________________________________
dense (Dense)                (None, 1)                 1281
=================================================================
Total params: 2,259,265
Trainable params: 1,281
Non-trainable params: 2,257,984
_________________________________________________________________
Save and restore the model
checkpoint_path = 'checkpoints/' + dataset_modelname
checkpoint = tf.train.Checkpoint(myAwesomeModel=model)
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
# Use tf.train.CheckpointManager to manage checkpoints
manager = tf.train.CheckpointManager(checkpoint, directory=checkpoint_path, max_to_keep=3)
history = model.fit(train_ds, epochs=max_epochs, validation_data=test_ds, steps_per_epoch=steps_per_epoch)
path = manager.save(checkpoint_number=20)
print("model 儲存在 %s" % path)
Save the training results
dataset_modelname = 'flower_photos_mobile'
history = compile_and_fit_v1(model, dataset_modelname=dataset_modelname, max_epochs=10,
                             train_ds=ds, test_ds=vs, steps_per_epoch=steps_per_epoch)
npz_path = 'plot_npz/' + dataset_modelname + '.npz'
npz_save(npz_path, history=history)
plot_from_npz_loss(npz_path)
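npz_save and plot_from_npz_loss are helper functions that are not defined in this note; a minimal sketch of what they might look like, assuming history is the object returned by model.fit (the function names and npz keys are assumptions):
import numpy as np
import matplotlib.pyplot as plt

def npz_save(npz_path, history):
    # Persist the loss curves of a Keras History object as numpy arrays.
    np.savez(npz_path,
             epoch=history.epoch,
             loss=history.history['loss'],
             val_loss=history.history['val_loss'])

def plot_from_npz_loss(npz_path):
    # Reload the saved curves and plot training vs. validation loss.
    data = np.load(npz_path)
    plt.plot(data['epoch'], data['loss'], label='loss')
    plt.plot(data['epoch'], data['val_loss'], label='val_loss')
    plt.xlabel('epoch')
    plt.legend()
    plt.show()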
checkpoint_path = 'checkpoints/' + dataset_modelname
checkpoint = tf.train.Checkpoint(myAwesomeModel=model)
# checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
# Use tf.train.CheckpointManager to manage checkpoints
manager = tf.train.CheckpointManager(checkpoint, directory=checkpoint_path, max_to_keep=3)
history = model.fit(train_dataset,
                    epochs=initial_epochs,
                    validation_data=validation_dataset)
path = manager.save(checkpoint_number=initial_epochs)
print("model saved at %s" % path)
Save as JSON
import json
history_json = {}
history_json['history'] = history.history
history_json['epoch'] = history.epoch
with open('plot_npz/' + dataset_modelname + '.json', 'w') as file:
    file.write(json.dumps(history_json))
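To read the saved history back for plotting later, the JSON file can simply be loaded again (a minimal sketch):
import json
import matplotlib.pyplot as plt

with open('plot_npz/' + dataset_modelname + '.json', 'r') as file:
    history_json = json.load(file)
plt.plot(history_json['epoch'], history_json['history']['loss'], label='loss')
plt.plot(history_json['epoch'], history_json['history']['val_loss'], label='val_loss')
plt.legend()
plt.show()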
Preprocessing: build datasets from an image directory
from tensorflow.keras.preprocessing import image_dataset_from_directory
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
print(PATH, 'is PATH')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
BATCH_SIZE = 32
IMG_SIZE = (160, 160)
train_dataset = image_dataset_from_directory(train_dir,
                                             shuffle=True,
                                             seed=10,
                                             batch_size=BATCH_SIZE,
                                             image_size=IMG_SIZE)
validation_dataset = image_dataset_from_directory(validation_dir,
                                                  shuffle=True,
                                                  seed=10,
                                                  batch_size=BATCH_SIZE,
                                                  image_size=IMG_SIZE)
class_names = train_dataset.class_names
print(class_names, 'is class_names')
PATH = os.path.join('datasets', 'flower_photos')
print(PATH, 'is PATH')
BATCH_SIZE = 32
IMG_SIZE = (160, 160)
train_dataset = image_dataset_from_directory(PATH,
                                             shuffle=True,
                                             seed=123,
                                             validation_split=0.2,
                                             subset="training",
                                             batch_size=BATCH_SIZE,
                                             image_size=IMG_SIZE)
validation_dataset = image_dataset_from_directory(PATH,
                                                  shuffle=True,
                                                  seed=123,
                                                  validation_split=0.2,
                                                  subset="validation",
                                                  batch_size=BATCH_SIZE,
                                                  image_size=IMG_SIZE)
class_names = train_dataset.class_names
print(class_names, 'is class_names')
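The prefetch snippet below also touches a test_dataset that is never built above; following the common practice from the TensorFlow transfer-learning tutorial, it can be carved out of the validation set (a sketch, assuming a 1-in-5 split):
# Move one fifth of the validation batches into a separate test set.
val_batches = tf.data.experimental.cardinality(validation_dataset)
test_dataset = validation_dataset.take(val_batches // 5)
validation_dataset = validation_dataset.skip(val_batches // 5)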
data augmentation
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE)
validation_dataset = validation_dataset.prefetch(buffer_size=AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE)
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.RandomFlip('horizontal'),
    tf.keras.layers.experimental.preprocessing.RandomRotation(0.2),
])
def plot_aug():
    for image, _ in train_dataset.take(1):
        plt.figure(figsize=(10, 10))
        first_image = image[0]
        import inspect
        frame = inspect.currentframe()
        print_variables(first_image, frame.f_lineno)
        print_variables(tf.expand_dims(first_image, 0), frame.f_lineno)
        for i in range(9):
            ax = plt.subplot(3, 3, i + 1)
            augmented_image = data_augmentation(tf.expand_dims(first_image, 0))
            plt.imshow(augmented_image[0] / 255)
            plt.axis('off')
    plt.show()
# plot_aug()
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
# Rescaling(1./127.5, offset=-1) is an alternative that maps pixels from [0, 255] to [-1, 1]
rescale = tf.keras.layers.experimental.preprocessing.Rescaling(1. / 127.5, offset=-1)
# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print_variables(feature_batch.shape, frame.f_lineno)
base_model.trainable = False
# Let's take a look at the base model architecture
# base_model.summary()
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print_variables(feature_batch_average.shape, frame.f_lineno)
prediction_layer = tf.keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print_variables(prediction_batch.shape, frame.f_lineno)
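Putting the pieces together, a full training run could look like the sketch below; base_learning_rate and initial_epochs are assumed hyper-parameters, and this matches the binary cats-vs-dogs setup with a Dense(1) head. A multi-class dataset such as flower_photos would instead need Dense(num_classes) and SparseCategoricalCrossentropy.
# Assumed hyper-parameters for the sketch
base_learning_rate = 0.0001
initial_epochs = 10

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_dataset,
                    epochs=initial_epochs,
                    validation_data=validation_dataset)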