Keras Exercise 2 - CNN
The complete example from Chapter 8 of 《Tensorflow + Keras深度學習人工智慧實踐應用》. Compared with the multilayer perceptron of Chapter 7, Chapter 8 uses a convolutional neural network, which further improves accuracy.
On the implementation side, the main additions are the convolution and pooling layers. A convolution layer is configured with `filters` (how many convolution kernels to use), the kernel size, and the padding mode for the convolution. Pooling mainly shrinks the feature maps.
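As a quick sanity check on those shapes (my own sketch, not from the book): with `padding='same'` a 5×5 convolution keeps the 28×28 spatial size, and each 2×2 max-pool halves it, so the feature maps go 28×28×16 → 14×14×16 → 14×14×36 → 7×7×36, and `Flatten` then yields 7·7·36 = 1764 features.

```python
def conv2d_out(size, kernel, padding="same", stride=1):
    """Output spatial size of a 2-D convolution (square input and kernel)."""
    if padding == "same":
        return -(-size // stride)          # ceil(size / stride)
    return (size - kernel) // stride + 1   # 'valid' padding

def maxpool_out(size, pool=2):
    """Output spatial size of a non-overlapping max-pool."""
    return size // pool

side = 28                    # MNIST images are 28x28
side = conv2d_out(side, 5)   # Conv2D(16, (5,5), padding='same') -> 28
side = maxpool_out(side)     # MaxPooling2D((2,2))               -> 14
side = conv2d_out(side, 5)   # Conv2D(36, (5,5), padding='same') -> 14
side = maxpool_out(side)     # MaxPooling2D((2,2))               -> 7
flat = side * side * 36      # Flatten()                         -> 1764
print(side, flat)            # 7 1764
```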
```python
import numpy as np
import pandas as pd
from keras.utils import np_utils
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout, Flatten, Conv2D, MaxPooling2D
import matplotlib.pyplot as plt
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
np.random.seed(10)

# Load MNIST, reshape to (n, 28, 28, 1) and normalize to [0, 1]
(x_train_image, y_train_label), (x_test_image, y_test_label) = mnist.load_data()
x_train = x_train_image.reshape(x_train_image.shape[0], 28, 28, 1).astype("float32")
x_test = x_test_image.reshape(x_test_image.shape[0], 28, 28, 1).astype("float32")
x_train_normal = x_train / 255
x_test_normal = x_test / 255
y_train_onehot = np_utils.to_categorical(y_train_label)
y_test_onehot = np_utils.to_categorical(y_test_label)

# Build the CNN: two conv/pool stages, then a fully connected classifier
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(5, 5), padding='same',
                 input_shape=(28, 28, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=36, kernel_size=(5, 5), padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(units=128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=10, kernel_initializer='normal', activation='softmax'))
print(model.summary())

model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])
history = model.fit(x=x_train_normal, y=y_train_onehot, validation_split=0.2,
                    epochs=10, batch_size=300, verbose=2)

def show_train_history(train_history, train, val):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[val])
    plt.title("Train History")
    plt.ylabel(train)
    plt.xlabel("Epochs")
    plt.legend(["train", "validation"], loc="upper left")
    plt.show()

def plot_image_label_prediction(images, labels, prediction, idx=0, num=10):
    fig = plt.gcf()
    fig.set_size_inches(12, 14)
    if num > 25:
        num = 25
    for i in range(0, num):
        ax = plt.subplot(5, 5, 1 + i)
        ax.imshow(images[idx], cmap="binary")
        title = "label = " + str(labels[idx])
        if len(prediction) > 0:
            title += ", prediction = " + str(prediction[idx])
        ax.set_title(title, fontsize=12)
        ax.set_xticks([])
        ax.set_yticks([])
        idx += 1
    plt.show()

show_train_history(history, "acc", "val_acc")
show_train_history(history, "loss", "val_loss")

# Evaluate on the test set and inspect misclassifications
scores = model.evaluate(x_test_normal, y_test_onehot)
print("accuracy = ", scores[1])
prediction = model.predict_classes(x_test_normal)
#plot_image_label_prediction(x_test_image, y_test_label, prediction, idx=340, num=25)
print(pd.crosstab(y_test_label, prediction, rownames=["label"], colnames=["predict"]))
df = pd.DataFrame({"label": y_test_label, "predict": prediction})
print(df[(df.label == 5) & (df.predict == 3)])
```
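To interpret the `model.summary()` output, the parameter counts can be reproduced by hand (my own arithmetic, assuming the layer sizes above): each `Conv2D` has kernel_h·kernel_w·in_channels·filters weights plus one bias per filter, and each `Dense` has in·out weights plus one bias per output unit.

```python
# Parameter counts for the layers above (weights + biases)
conv1  = 5 * 5 * 1  * 16 + 16     # 416
conv2  = 5 * 5 * 16 * 36 + 36     # 14,436
dense1 = 7 * 7 * 36 * 128 + 128   # 225,920 (Flatten gives 7*7*36 = 1764 inputs)
dense2 = 128 * 10 + 10            # 1,290
total  = conv1 + conv2 + dense1 + dense2
print(total)                      # 242062 trainable parameters
```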
Training log and accuracy:
```
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
 - 43s - loss: 0.5479 - acc: 0.8286 - val_loss: 0.1138 - val_acc: 0.9670
Epoch 2/10
 - 44s - loss: 0.1516 - acc: 0.9541 - val_loss: 0.0766 - val_acc: 0.9765
Epoch 3/10
 - 46s - loss: 0.1127 - acc: 0.9663 - val_loss: 0.0575 - val_acc: 0.9827
Epoch 4/10
 - 46s - loss: 0.0937 - acc: 0.9721 - val_loss: 0.0502 - val_acc: 0.9857
Epoch 5/10
 - 46s - loss: 0.0812 - acc: 0.9756 - val_loss: 0.0457 - val_acc: 0.9870
Epoch 6/10
 - 45s - loss: 0.0693 - acc: 0.9789 - val_loss: 0.0426 - val_acc: 0.9879
Epoch 7/10
 - 46s - loss: 0.0617 - acc: 0.9808 - val_loss: 0.0410 - val_acc: 0.9883
Epoch 8/10
 - 44s - loss: 0.0568 - acc: 0.9830 - val_loss: 0.0375 - val_acc: 0.9896
Epoch 9/10
 - 44s - loss: 0.0523 - acc: 0.9845 - val_loss: 0.0358 - val_acc: 0.9898
Epoch 10/10
 - 43s - loss: 0.0468 - acc: 0.9859 - val_loss: 0.0360 - val_acc: 0.9895
10000/10000 [==============================] - 4s 389us/step
accuracy =  0.9922
```
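Note that the listing targets the standalone Keras API of the book's era: in current tf.keras, `np_utils.to_categorical` lives at `tf.keras.utils.to_categorical`, and `Sequential.predict_classes` has been removed. Both replacements can be sketched with plain NumPy (the arrays below are made-up illustrations, not outputs of the model above):

```python
import numpy as np

# One-hot encoding, equivalent to np_utils.to_categorical(labels, 10)
labels = np.array([3, 0, 9])        # hypothetical digit labels
onehot = np.eye(10, dtype="float32")[labels]

# Class prediction, replacing the removed model.predict_classes(x):
# take the argmax over the softmax probabilities from model.predict(x)
probs = np.array([[0.1, 0.7, 0.2],  # hypothetical softmax outputs
                  [0.8, 0.1, 0.1]])
pred = probs.argmax(axis=1)
print(onehot.shape, pred)           # (3, 10) [1 0]
```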
Author: YANWeichuan
Source: the "ITPUB 部落格" blog, link: http://blog.itpub.net/2524/viewspace-2817210/. Please credit the source when reposting; otherwise legal liability may be pursued.