Notes from running mnist_cnn.py on Ubuntu 18.04
(Despite its filename, the script is the classic Keras MLP example: a fully connected network on MNIST, not a convolutional one.)
# -*- coding: utf-8 -*-
'''Trains a simple deep NN on the MNIST dataset.
Gets to 98.40% test accuracy after 20 epochs
(there is *a lot* of margin for parameter tuning).
2 seconds per epoch on a K520 GPU.
'''
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop
batch_size = 128
num_classes = 10
epochs = 20
# the data, shuffled and split between train and test sets
# (x_train, y_train), (x_test, y_test) = mnist.load_data()
import numpy as np
path='src/mnist.npz'
f = np.load(path)
x_train, y_train = f['x_train'], f['y_train']
x_test, y_test = f['x_test'], f['y_test']
f.close()
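# Note added for this write-up: with network access, the commented-out
# mnist.load_data() call above does the same job -- it downloads mnist.npz
# once and caches it (by default under ~/.keras/datasets/). Loading the
# .npz by hand here simply avoids the download on an offline box.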
# flatten the 28x28 images to 784-dim vectors and scale pixels to [0, 1]
x_train = x_train.reshape(60000, 784).astype('float32')
x_test = x_test.reshape(10000, 784).astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
# the labels are the 10 classes 0-9; Keras wants them as binary class matrices (one-hot)
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
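# For illustration: to_categorical turns each integer label into a one-hot
# row, e.g. keras.utils.to_categorical([3], num_classes) gives
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]], so y_train ends up with shape (60000, 10).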
# added by hcq, 2017-11-06
# Keras's Dense layer is a fully connected layer.
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
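# Parameter counts, for reference (they match the model.summary() output below):
#   dense_1: 784*512 + 512 = 401,920
#   dense_2: 512*512 + 512 = 262,656
#   dense_3: 512*10  + 10  =   5,130   -> 669,706 total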
model.summary()
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
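For a quick sanity check after training, a single test image can be pushed through the fitted model (a minimal sketch, not part of the original script; model, x_test and y_test are the variables defined above):

predictions = model.predict(x_test[:1])           # shape (1, 10): softmax scores
print('predicted digit:', np.argmax(predictions)) # index of the largest score
print('true digit:', np.argmax(y_test[0]))        # y_test rows are one-hot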
The directory listing before the run; note that mnist_cnn.py has no permission bits set, which is why it is run with sudo below:
wang@ws:/usr$ ll
total 12164
drwxr-xr-x 1 root root 4096 Nov 25 00:24 ./
drwxr-xr-x 1 root root 4096 Nov 24 21:57 ../
drwxr-xr-x 1 root root 4096 Nov 24 23:30 bin/
drwxr-xr-x 1 root root 4096 Apr 24 2018 games/
drwxr-xr-x 1 root root 4096 Nov 24 22:39 include/
drwxr-xr-x 1 root root 4096 Nov 24 22:37 lib/
drwxr-xr-x 1 root root 4096 Aug 22 01:20 local/
-r-xr-xr-x 1 root root 11490434 Nov 25 00:24 mnist.npz*
---------- 1 root root 2000 Nov 25 00:06 mnist_cnn.py
drwxr-xr-x 1 root root 4096 Nov 24 22:29 sbin/
drwxr-xr-x 1 root root 4096 Nov 24 23:30 share/
drwxr-xr-x 1 root root 4096 Nov 25 00:21 src/
wang@ws:/usr$ sudo python3 mnist_cnn.py
Using TensorFlow backend.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
  (the same FutureWarning repeats for quint8, qint16, quint16, qint32 and np_resource, from both tensorflow/python/framework/dtypes.py and tensorboard/compat/tensorflow_stub/dtypes.py)
60000 train samples
10000 test samples
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:148: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3733: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_1 (Dropout) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 512) 262656
_________________________________________________________________
dropout_2 (Dropout) (None, 512) 0
_________________________________________________________________
dense_3 (Dense) (None, 10) 5130
=================================================================
Total params: 669,706
Trainable params: 669,706
Non-trainable params: 0
_________________________________________________________________
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:793: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3576: The name tf.log is deprecated. Please use tf.math.log instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
2020-11-25 00:24:42.263649: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-11-25 00:24:42.317977: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2301000000 Hz
2020-11-25 00:24:42.320236: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4eb3940 executing computations on platform Host. Devices:
2020-11-25 00:24:42.321184: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
2020-11-25 00:24:42.491731: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
60000/60000 [==============================] - 5s 86us/step - loss: 0.2516 - acc: 0.9222 - val_loss: 0.0942 - val_acc: 0.9705
Epoch 2/20
60000/60000 [==============================] - 5s 77us/step - loss: 0.1035 - acc: 0.9685 - val_loss: 0.0932 - val_acc: 0.9714
Epoch 3/20
60000/60000 [==============================] - 5s 77us/step - loss: 0.0750 - acc: 0.9772 - val_loss: 0.0825 - val_acc: 0.9777
Epoch 4/20
60000/60000 [==============================] - 5s 77us/step - loss: 0.0614 - acc: 0.9818 - val_loss: 0.0822 - val_acc: 0.9785
Epoch 5/20
60000/60000 [==============================] - 5s 77us/step - loss: 0.0503 - acc: 0.9850 - val_loss: 0.0783 - val_acc: 0.9800
Epoch 6/20
60000/60000 [==============================] - 5s 79us/step - loss: 0.0428 - acc: 0.9869 - val_loss: 0.0874 - val_acc: 0.9790
Epoch 7/20
60000/60000 [==============================] - 5s 78us/step - loss: 0.0387 - acc: 0.9882 - val_loss: 0.0931 - val_acc: 0.9765
Epoch 8/20
60000/60000 [==============================] - 5s 81us/step - loss: 0.0353 - acc: 0.9899 - val_loss: 0.0802 - val_acc: 0.9824
Epoch 9/20
60000/60000 [==============================] - 5s 77us/step - loss: 0.0324 - acc: 0.9906 - val_loss: 0.0886 - val_acc: 0.9813
Epoch 10/20
60000/60000 [==============================] - 5s 77us/step - loss: 0.0295 - acc: 0.9919 - val_loss: 0.0824 - val_acc: 0.9823
Epoch 11/20
60000/60000 [==============================] - 5s 76us/step - loss: 0.0260 - acc: 0.9923 - val_loss: 0.0928 - val_acc: 0.9832
Epoch 12/20
60000/60000 [==============================] - 5s 77us/step - loss: 0.0244 - acc: 0.9930 - val_loss: 0.1064 - val_acc: 0.9811
Epoch 13/20
60000/60000 [==============================] - 5s 78us/step - loss: 0.0239 - acc: 0.9935 - val_loss: 0.0933 - val_acc: 0.9835
Epoch 14/20
60000/60000 [==============================] - 5s 78us/step - loss: 0.0225 - acc: 0.9938 - val_loss: 0.0983 - val_acc: 0.9822
Epoch 15/20
60000/60000 [==============================] - 5s 77us/step - loss: 0.0216 - acc: 0.9941 - val_loss: 0.1015 - val_acc: 0.9826
Epoch 16/20
60000/60000 [==============================] - 5s 80us/step - loss: 0.0204 - acc: 0.9947 - val_loss: 0.1101 - val_acc: 0.9832
Epoch 17/20
60000/60000 [==============================] - 5s 79us/step - loss: 0.0198 - acc: 0.9947 - val_loss: 0.1044 - val_acc: 0.9837
Epoch 18/20
60000/60000 [==============================] - 5s 77us/step - loss: 0.0203 - acc: 0.9945 - val_loss: 0.1180 - val_acc: 0.9838
Epoch 19/20
60000/60000 [==============================] - 5s 76us/step - loss: 0.0203 - acc: 0.9950 - val_loss: 0.1022 - val_acc: 0.9841
Epoch 20/20
60000/60000 [==============================] - 5s 77us/step - loss: 0.0188 - acc: 0.9953 - val_loss: 0.1186 - val_acc: 0.9833
Test loss: 0.11857890235050786
Test accuracy: 0.9833
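The final test accuracy of 98.33% lines up with the 98.40% cited in the script's docstring. Note, though, that val_loss bottoms out around epoch 5 (0.0783) and then drifts upward while the training loss keeps falling, a mild overfit. Below is a minimal sketch of how Keras's EarlyStopping callback could cut training off near that point (the patience value is an arbitrary choice; restore_best_weights needs a reasonably recent Keras 2.x):

from keras.callbacks import EarlyStopping

# stop once val_loss has failed to improve for 3 consecutive epochs,
# and roll back to the best weights seen so far
early_stop = EarlyStopping(monitor='val_loss', patience=3,
                           restore_best_weights=True)
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test),
                    callbacks=[early_stop])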