TensorFlow 2.0 Tutorial - Text Classification
The TensorFlow 2.0 tutorial series is continuously updated at: https://blog.csdn.net/qq_31456593/article/details/88606284
TensorFlow 2.0 Tutorial - Keras Quick Start
TensorFlow 2.0 Tutorial - Keras Functional API
TensorFlow 2.0 Tutorial - Training Models with Keras
TensorFlow 2.0 Tutorial - Building Custom Layers with Keras
TensorFlow 2.0 Tutorial - Keras Model Saving and Serialization
TensorFlow 2.0 Tutorial - Eager Mode
TensorFlow 2.0 Tutorial - Variables
TensorFlow 2.0 Tutorial - AutoGraph
TensorFlow 2.0 Deep Learning in Practice
TensorFlow 2.0 Tutorial - Image Classification
TensorFlow 2.0 Tutorial - Text Classification
TensorFlow 2.0 Tutorial - Overfitting and Underfitting
For the complete TensorFlow 2.0 tutorial code, see the Chinese tutorial repository tensorflow2_tutorials_chinese (stars welcome).
We will build a simple text classifier and train and evaluate it on the IMDB dataset.
from __future__ import absolute_import, division, print_function
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
2.0.0-alpha0
1. The IMDB dataset
Download
imdb = keras.datasets.imdb
(train_x, train_y), (test_x, test_y) = imdb.load_data(num_words=10000)
Explore the IMDB data
print("Training entries: {}, labels: {}".format(len(train_x), len(train_y)))
print(train_x[0])
print('len: ',len(train_x[0]), len(train_x[1]))
Training entries: 25000, labels: 25000
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
len: 218 189
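As a quick extra check (not in the original tutorial), the labels are binary and the two classes are balanced (12,500 positive and 12,500 negative reviews in the training set):

# Optional check: labels are 0/1 and the classes are balanced
print(np.unique(train_y, return_counts=True))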
Build the dictionary mapping ids to words
word_index = imdb.get_word_index()
# The first few ids are reserved for special tokens, so shift every word id by 3
word2id = {k: (v + 3) for k, v in word_index.items()}
word2id['<PAD>'] = 0
word2id['<START>'] = 1
word2id['<UNK>'] = 2
word2id['<UNUSED>'] = 3
id2word = {v: k for k, v in word2id.items()}

def get_words(sent_ids):
    return ' '.join([id2word.get(i, '?') for i in sent_ids])
sent = get_words(train_x[0])
print(sent)
<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all
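Going the other direction is just as easy. Below is a minimal, hypothetical encoder (not part of the original tutorial) that turns a raw review string into ids using the word2id dictionary built above, mapping any word outside the top-10,000 vocabulary to <UNK>:

# Hypothetical helper: encode a raw review into ids with the word2id dict above.
# Words outside the top num_words vocabulary are mapped to <UNK> (id 2).
def get_ids(sentence, num_words=10000):
    ids = [word2id['<START>']]
    for w in sentence.lower().split():
        i = word2id.get(w, word2id['<UNK>'])
        ids.append(i if i < num_words else word2id['<UNK>'])
    return ids

print(get_ids('this film was just brilliant'))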
2. Prepare the data
# Pad each sentence at the end so all sequences have the same length
train_x = keras.preprocessing.sequence.pad_sequences(
train_x, value=word2id['<PAD>'],
padding='post', maxlen=256
)
test_x = keras.preprocessing.sequence.pad_sequences(
test_x, value=word2id['<PAD>'],
padding='post', maxlen=256
)
print(train_x[0])
print('len: ',len(train_x[0]), len(train_x[1]))
[ 1 14 22 16 43 530 973 1622 1385 65 458 4468 66 3941
4 173 36 256 5 25 100 43 838 112 50 670 2 9
35 480 284 5 150 4 172 112 167 2 336 385 39 4
172 4536 1111 17 546 38 13 447 4 192 50 16 6 147
2025 19 14 22 4 1920 4613 469 4 22 71 87 12 16
43 530 38 76 15 13 1247 4 22 17 515 17 12 16
626 18 2 5 62 386 12 8 316 8 106 5 4 2223
5244 16 480 66 3785 33 4 130 12 16 38 619 5 25
124 51 36 135 48 25 1415 33 6 22 12 215 28 77
52 5 14 407 16 82 2 8 4 107 117 5952 15 256
4 2 7 3766 5 723 36 71 43 530 476 26 400 317
46 7 4 2 1029 13 104 88 4 381 15 297 98 32
2071 56 26 141 6 194 7486 18 4 226 22 21 134 476
26 480 5 144 30 5535 18 51 36 28 224 92 25 104
4 226 65 16 38 1334 88 12 16 283 5 16 4472 113
103 32 15 16 5345 19 178 32 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0]
len: 256 256
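After padding, every review has exactly maxlen tokens: sequences shorter than 256 get <PAD> (id 0) appended because padding='post', and longer ones are truncated. A quick sanity check (assumed, not in the original):

# Sanity check: pad_sequences returns a 2-D array of shape (num_reviews, maxlen)
print(train_x.shape, test_x.shape)   # (25000, 256) (25000, 256)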
3. Build the model
import tensorflow.keras.layers as layers

vocab_size = 10000
model = keras.Sequential()
# Map each word id to a 16-dimensional embedding vector
model.add(layers.Embedding(vocab_size, 16))
# Average the embeddings across the sequence to get one fixed-length vector per review
model.add(layers.GlobalAveragePooling1D())
model.add(layers.Dense(16, activation='relu'))
# Sigmoid output: probability that the review is positive
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, None, 16) 160000
_________________________________________________________________
global_average_pooling1d (Gl (None, 16) 0
_________________________________________________________________
dense (Dense) (None, 16) 272
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________
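The parameter counts in the summary are easy to verify by hand: the Embedding layer holds 10000 x 16 = 160,000 weights, the hidden Dense layer 16 x 16 + 16 = 272, and the output layer 16 + 1 = 17, for 160,289 in total. A small, assumed sanity check (not in the original tutorial):

# Assumed check: reproduce the parameter counts reported by model.summary()
embedding_params = vocab_size * 16      # 10000 * 16 = 160000
hidden_params = 16 * 16 + 16            # weights + biases = 272
output_params = 16 * 1 + 1              # 17
assert embedding_params + hidden_params + output_params == model.count_params()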
4. Train and validate the model
# Hold out the first 10,000 training examples as a validation set
x_val = train_x[:10000]
x_train = train_x[10000:]
y_val = train_y[:10000]
y_train = train_y[10000:]
history = model.fit(x_train,y_train,
epochs=40, batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
result = model.evaluate(test_x, test_y)
print(result)
Train on 15000 samples, validate on 10000 samples
Epoch 1/40
15000/15000 [==============================] - 1s 73us/sample - loss: 0.6919 - accuracy: 0.5071 - val_loss: 0.6901 - val_accuracy: 0.5101
Epoch 2/40
15000/15000 [==============================] - 1s 44us/sample - loss: 0.6864 - accuracy: 0.6242 - val_loss: 0.6829 - val_accuracy: 0.6380
Epoch 3/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.6752 - accuracy: 0.6881 - val_loss: 0.6691 - val_accuracy: 0.7091
Epoch 4/40
15000/15000 [==============================] - 1s 45us/sample - loss: 0.6559 - accuracy: 0.7162 - val_loss: 0.6471 - val_accuracy: 0.7509
Epoch 5/40
15000/15000 [==============================] - 1s 44us/sample - loss: 0.6274 - accuracy: 0.7697 - val_loss: 0.6175 - val_accuracy: 0.7724
Epoch 6/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.5909 - accuracy: 0.8049 - val_loss: 0.5821 - val_accuracy: 0.7869
Epoch 7/40
15000/15000 [==============================] - 1s 45us/sample - loss: 0.5490 - accuracy: 0.8208 - val_loss: 0.5418 - val_accuracy: 0.8158
Epoch 8/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.5054 - accuracy: 0.8437 - val_loss: 0.5030 - val_accuracy: 0.8285
Epoch 9/40
15000/15000 [==============================] - 1s 45us/sample - loss: 0.4630 - accuracy: 0.8557 - val_loss: 0.4662 - val_accuracy: 0.8400
Epoch 10/40
15000/15000 [==============================] - 1s 49us/sample - loss: 0.4239 - accuracy: 0.8707 - val_loss: 0.4345 - val_accuracy: 0.8470
Epoch 11/40
15000/15000 [==============================] - 1s 46us/sample - loss: 0.3896 - accuracy: 0.8772 - val_loss: 0.4070 - val_accuracy: 0.8563
Epoch 12/40
15000/15000 [==============================] - 1s 47us/sample - loss: 0.3599 - accuracy: 0.8867 - val_loss: 0.3856 - val_accuracy: 0.8594
Epoch 13/40
15000/15000 [==============================] - 1s 44us/sample - loss: 0.3352 - accuracy: 0.8925 - val_loss: 0.3660 - val_accuracy: 0.8646
Epoch 14/40
15000/15000 [==============================] - 1s 44us/sample - loss: 0.3131 - accuracy: 0.8978 - val_loss: 0.3517 - val_accuracy: 0.8697
Epoch 15/40
15000/15000 [==============================] - 1s 48us/sample - loss: 0.2947 - accuracy: 0.9013 - val_loss: 0.3392 - val_accuracy: 0.8716
Epoch 16/40
15000/15000 [==============================] - 1s 44us/sample - loss: 0.2782 - accuracy: 0.9077 - val_loss: 0.3293 - val_accuracy: 0.8747
Epoch 17/40
15000/15000 [==============================] - 1s 45us/sample - loss: 0.2632 - accuracy: 0.9126 - val_loss: 0.3208 - val_accuracy: 0.8757
Epoch 18/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.2500 - accuracy: 0.9159 - val_loss: 0.3132 - val_accuracy: 0.8800
Epoch 19/40
15000/15000 [==============================] - 1s 46us/sample - loss: 0.2381 - accuracy: 0.9197 - val_loss: 0.3073 - val_accuracy: 0.8792
Epoch 20/40
15000/15000 [==============================] - 1s 44us/sample - loss: 0.2274 - accuracy: 0.9229 - val_loss: 0.3029 - val_accuracy: 0.8801
Epoch 21/40
15000/15000 [==============================] - 1s 44us/sample - loss: 0.2167 - accuracy: 0.9277 - val_loss: 0.2992 - val_accuracy: 0.8811
Epoch 22/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.2077 - accuracy: 0.9299 - val_loss: 0.2951 - val_accuracy: 0.8835
Epoch 23/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1986 - accuracy: 0.9335 - val_loss: 0.2931 - val_accuracy: 0.8827
Epoch 24/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1907 - accuracy: 0.9371 - val_loss: 0.2911 - val_accuracy: 0.8835
Epoch 25/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1828 - accuracy: 0.9415 - val_loss: 0.2885 - val_accuracy: 0.8841
Epoch 26/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.1756 - accuracy: 0.9436 - val_loss: 0.2884 - val_accuracy: 0.8840
Epoch 27/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1689 - accuracy: 0.9463 - val_loss: 0.2870 - val_accuracy: 0.8836
Epoch 28/40
15000/15000 [==============================] - 1s 41us/sample - loss: 0.1624 - accuracy: 0.9497 - val_loss: 0.2870 - val_accuracy: 0.8853
Epoch 29/40
15000/15000 [==============================] - 1s 46us/sample - loss: 0.1568 - accuracy: 0.9523 - val_loss: 0.2872 - val_accuracy: 0.8840
Epoch 30/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.1509 - accuracy: 0.9534 - val_loss: 0.2864 - val_accuracy: 0.8858
Epoch 31/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.1449 - accuracy: 0.9567 - val_loss: 0.2866 - val_accuracy: 0.8858
Epoch 32/40
15000/15000 [==============================] - 1s 45us/sample - loss: 0.1395 - accuracy: 0.9595 - val_loss: 0.2874 - val_accuracy: 0.8856
Epoch 33/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.1343 - accuracy: 0.9600 - val_loss: 0.2888 - val_accuracy: 0.8863
Epoch 34/40
15000/15000 [==============================] - 1s 44us/sample - loss: 0.1297 - accuracy: 0.9623 - val_loss: 0.2903 - val_accuracy: 0.8843
Epoch 35/40
15000/15000 [==============================] - 1s 43us/sample - loss: 0.1255 - accuracy: 0.9630 - val_loss: 0.2915 - val_accuracy: 0.8870
Epoch 36/40
15000/15000 [==============================] - 1s 42us/sample - loss: 0.1208 - accuracy: 0.9659 - val_loss: 0.2928 - val_accuracy: 0.8862
Epoch 37/40
15000/15000 [==============================] - 1s 48us/sample - loss: 0.1162 - accuracy: 0.9679 - val_loss: 0.2949 - val_accuracy: 0.8851
Epoch 38/40
15000/15000 [==============================] - 1s 49us/sample - loss: 0.1121 - accuracy: 0.9691 - val_loss: 0.2975 - val_accuracy: 0.8848
Epoch 39/40
15000/15000 [==============================] - 1s 49us/sample - loss: 0.1088 - accuracy: 0.9697 - val_loss: 0.3003 - val_accuracy: 0.8840
Epoch 40/40
15000/15000 [==============================] - 1s 45us/sample - loss: 0.1046 - accuracy: 0.9721 - val_loss: 0.3022 - val_accuracy: 0.8843
25000/25000 [==============================] - 1s 22us/sample - loss: 0.3216 - accuracy: 0.8729
[0.32155542838573453, 0.87292]
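The model reaches roughly 87% accuracy on the test set. Individual reviews can also be scored directly; here is a minimal sketch (the 0.5 decision threshold is an assumption, not from the original tutorial):

# Sketch: score a few test reviews; the sigmoid output is the probability of a
# positive review, thresholded at 0.5 (assumed) to get a class prediction.
probs = model.predict(test_x[:3])
for p, label in zip(probs, test_y[:3]):
    print('prob={:.3f} pred={} true={}'.format(float(p[0]), int(p[0] >= 0.5), label))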
5. Plot the loss and accuracy curves
import matplotlib.pyplot as plt
history_dict = history.history
history_dict.keys()
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc)+1)
plt.plot(epochs, loss, 'bo', label='train loss')
plt.plot(epochs, val_loss, 'b', label='val loss')
plt.title('Train and val loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
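The curves show validation loss levelling off around epoch 25-30 while training loss keeps falling, the classic sign of overfitting (covered in the next tutorial in this series). One common mitigation, shown here only as a hedged sketch rather than part of the original code, is to stop training early with a Keras callback:

# Hedged sketch (not in the original): stop training once val_loss stops improving
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=3,
                                           restore_best_weights=True)
# model.fit(x_train, y_train, epochs=40, batch_size=512,
#           validation_data=(x_val, y_val), callbacks=[early_stop])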