Fun with Deep Learning | 15 A Brief Look at Chinese Word Segmentation

Published by Zhang Honglun on 2018-09-20

Introduction

A quick look at the basics of Chinese word segmentation, followed by two segmenter implementations on a standard dataset: an LSTM-based one in Keras and a CNN-based one in TensorFlow

Principles

Chinese word segmentation splits a sentence into words according to its semantics:

我來到北京清華大學 -> 我  來到  北京  清華大學

The two major challenges in Chinese word segmentation:

  • Ambiguity and polysemous words
  • Out-of-vocabulary (OOV) words and new-word recognition

The two major families of segmentation methods:

  • Dictionary-based: use an existing dictionary plus heuristic rules, e.g. maximum matching, reverse maximum matching, minimum word count, and maximum-probability combinations over a directed acyclic graph
  • Tagging-based: treat segmentation as character tagging, a form of sequence labeling with the four tags SBME, e.g. Hidden Markov Models (HMM), Maximum Entropy models (ME), Conditional Random Fields (CRF), and neural networks
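As an illustration of the dictionary-based family, forward maximum matching can be sketched in a few lines of Python; the toy dictionary below is hypothetical and only for demonstration:

```python
# Forward maximum matching: greedily take the longest dictionary word
# starting at the current position, falling back to a single character.
def forward_max_match(sentence, dictionary, max_word_len=4):
    words = []
    i = 0
    while i < len(sentence):
        # try the longest candidate first, shrink until a hit (or one char)
        for j in range(min(max_word_len, len(sentence) - i), 0, -1):
            if sentence[i:i + j] in dictionary or j == 1:
                words.append(sentence[i:i + j])
                i += j
                break
    return words

toy_dict = {'我', '來到', '北京', '清華大學'}
print(forward_max_match('我來到北京清華大學', toy_dict))
# → ['我', '來到', '北京', '清華大學']
```

Reverse maximum matching runs the same loop from the end of the sentence backwards; the two variants tend to disagree exactly on ambiguous spans.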

Sequence labeling is one form of Seq2Seq learning, namely the last case in the figure below

Common cases of Seq2Seq learning

An example from the fifth course of Andrew Ng's Deep Learning Specialization

Sequence model examples from Andrew Ng's Deep Learning Specialization

The full-stack course introduced jieba, whose approach is:

  • Build a directed acyclic graph (DAG) of all possible word combinations in the sentence via an efficient word-graph scan over a prefix dictionary
  • Use dynamic programming to find the maximum-probability path, i.e. the best frequency-based segmentation
  • For OOV words, apply a character-based HMM decoded with the Viterbi algorithm
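The first two steps can be sketched on a toy prefix dictionary: enumerate candidate words implicitly and run dynamic programming from right to left for the maximum-probability path. The word frequencies below are invented for illustration; this is a sketch of the idea, not jieba's actual implementation:

```python
# Toy maximum-probability segmentation over a word DAG (made-up frequencies).
freq = {'我': 0.1, '來到': 0.05, '北京': 0.08, '清華': 0.04,
        '大學': 0.06, '清華大學': 0.05}

def best_cut(s):
    n = len(s)
    # best[i] = (best score of s[i:], index right after the first word)
    best = [(0.0, 0)] * (n + 1)
    best[n] = (1.0, n)
    for i in range(n - 1, -1, -1):
        cands = [(freq[s[i:j]] * best[j][0], j)
                 for j in range(i + 1, n + 1) if s[i:j] in freq]
        # unknown single characters get a tiny penalty probability
        best[i] = max(cands) if cands else (1e-9 * best[i + 1][0], i + 1)
    words, i = [], 0
    while i < n:
        words.append(s[i:best[i][1]])
        i = best[i][1]
    return words

print(best_cut('我來到北京清華大學'))
# → ['我', '來到', '北京', '清華大學']
```

Note that '清華大學' beats the two-word path '清華' + '大學' because 0.05 > 0.04 × 0.06; the DP compares whole paths, not individual words.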

Data

We use the annotated corpora provided by Bakeoff 2005, which come from four sources

sighan.cs.uchicago.edu/bakeoff2005…

  • Academia Sinica:as
  • CityU:cityu
  • Peking University:pku
  • Microsoft Research:msr

Taking msr as an example, there are four files

  • msr_training.utf8: segmented training data
  • msr_training_words.utf8: word list of the training data
  • msr_test.utf8: unsegmented test data
  • msr_test_gold.utf8: segmented (gold) test data

BiLSTM

After preparing the data and performing character embedding, we implement a bidirectional LSTM in Keras for sequence labeling
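The SBME scheme tags every character by its position in its word: s for a single-character word, b/m/e for the begin/middle/end of a longer word. A quick sketch of the mapping that the data-preparation code below performs:

```python
# Map a segmented sentence to per-character SBME tags.
def word_to_tags(word):
    if len(word) == 1:
        return ['s']
    return ['b'] + ['m'] * (len(word) - 2) + ['e']

sentence = ['我', '來到', '北京', '清華大學']
tags = [t for w in sentence for t in word_to_tags(w)]
print(tags)
# → ['s', 'b', 'e', 'b', 'e', 'b', 'm', 'm', 'e']
```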

Load the libraries

# -*- coding: utf-8 -*-

from keras.layers import Input, Dense, Embedding, LSTM, Dropout, TimeDistributed, Bidirectional
from keras.models import Model, load_model
from keras.utils import np_utils
import numpy as np
import re

Prepare the vocabulary

# read the word list and flatten it into characters
vocab = open('data/msr/msr_training_words.utf8').read().rstrip('\n').split('\n')
vocab = list(''.join(vocab))
stat = {}
for v in vocab:
    stat[v] = stat.get(v, 0) + 1
stat = sorted(stat.items(), key=lambda x: x[1], reverse=True)
vocab = [s[0] for s in stat]
# 5167 characters
print(len(vocab))
# mappings
char2id = {c: i + 1 for i, c in enumerate(vocab)}
id2char = {i + 1: c for i, c in enumerate(vocab)}
tags = {'s': 0, 'b': 1, 'm': 2, 'e': 3, 'x': 4}

Define some hyperparameters

embedding_size = 128
maxlen = 32 # truncate if longer than 32, pad with 0 if shorter
hidden_size = 64
batch_size = 64
epochs = 50

Define a function that reads and prepares the data

def load_data(path):
    data = open(path).read().rstrip('\n')
    # split on punctuation and newlines
    data = re.split('[,。!?、\n]', data)
    print('%d samples in total' % len(data))
    print('average length:', np.mean([len(d.replace(' ', '')) for d in data]))

    # prepare the data
    X_data = []
    y_data = []

    for sentence in data:
        sentence = sentence.split(' ')
        X = []
        y = []

        try:
            for s in sentence:
                s = s.strip()
                # skip empty tokens
                if len(s) == 0:
                    continue
                # s
                elif len(s) == 1:
                    X.append(char2id[s])
                    y.append(tags['s'])
                elif len(s) > 1:
                    # b
                    X.append(char2id[s[0]])
                    y.append(tags['b'])
                    # m
                    for i in range(1, len(s) - 1):
                        X.append(char2id[s[i]])
                        y.append(tags['m'])
                    # e
                    X.append(char2id[s[-1]])
                    y.append(tags['e'])

            # unify the length
            if len(X) > maxlen:
                X = X[:maxlen]
                y = y[:maxlen]
            else:
                for i in range(maxlen - len(X)):
                    X.append(0)
                    y.append(tags['x'])
        except KeyError:
            # skip sentences containing characters outside the vocabulary
            continue
        else:
            if len(X) > 0:
                X_data.append(X)
                y_data.append(y)

    X_data = np.array(X_data)
    y_data = np_utils.to_categorical(y_data, 5)

    return X_data, y_data

X_train, y_train = load_data('data/msr/msr_training.utf8')
X_test, y_test = load_data('data/msr/msr_test_gold.utf8')
print('X_train size:', X_train.shape)
print('y_train size:', y_train.shape)
print('X_test size:', X_test.shape)
print('y_test size:', y_test.shape)

Define the model, train it, and save it

X = Input(shape=(maxlen,), dtype='int32')
embedding = Embedding(input_dim=len(vocab) + 1, output_dim=embedding_size, input_length=maxlen, mask_zero=True)(X)
blstm = Bidirectional(LSTM(hidden_size, return_sequences=True), merge_mode='concat')(embedding)
blstm = Dropout(0.6)(blstm)
blstm = Bidirectional(LSTM(hidden_size, return_sequences=True), merge_mode='concat')(blstm)
blstm = Dropout(0.6)(blstm)
output = TimeDistributed(Dense(5, activation='softmax'))(blstm)

model = Model(X, output)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs)
model.save('msr_bilstm.h5')

Check segmentation accuracy on the training and test sets

print(model.evaluate(X_train, y_train, batch_size=batch_size))
print(model.evaluate(X_test, y_test, batch_size=batch_size))

Define a viterbi function that uses dynamic programming to find the maximum-probability path

def viterbi(nodes):
    trans = {'be': 0.5, 'bm': 0.5, 'eb': 0.5, 'es': 0.5, 'me': 0.5, 'mm': 0.5, 'sb': 0.5, 'ss': 0.5}
    paths = {'b': nodes[0]['b'], 's': nodes[0]['s']}
    for l in range(1, len(nodes)):
        paths_ = paths.copy()
        paths = {}
        for i in nodes[l].keys():
            nows = {}
            for j in paths_.keys():
                if j[-1] + i in trans.keys():
                    nows[j + i] = paths_[j] + nodes[l][i] + trans[j[-1] + i]
            nows = sorted(nows.items(), key=lambda x: x[1], reverse=True)
            paths[nows[0][0]] = nows[0][1]
    
    paths = sorted(paths.items(), key=lambda x: x[1], reverse=True)
    return paths[0][0]
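Since every transition in the table above scores a flat 0.5, a path's score is simply the sum of its per-character emissions plus its transitions. A hand-check for a two-character input, with hypothetical network outputs favoring 'b' then 'e':

```python
# Score all legal two-tag paths by hand (emission values are made up).
nodes = [{'s': 0.1, 'b': 0.9, 'm': 0.0, 'e': 0.0},
         {'s': 0.1, 'b': 0.0, 'm': 0.1, 'e': 0.8}]
trans = {'be': 0.5, 'bm': 0.5, 'eb': 0.5, 'es': 0.5,
         'me': 0.5, 'mm': 0.5, 'sb': 0.5, 'ss': 0.5}
scores = {j + i: nodes[0][j] + nodes[1][i] + trans[j + i]
          for j in 'sb' for i in 'sbme' if j + i in trans}
best = max(scores, key=scores.get)
print(best)
# → be: the two characters are tagged as one two-character word
```

viterbi generalizes this to arbitrary length by keeping, at every step, only the best-scoring path ending in each tag.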

Define a segmentation function using the trained model

def cut_words(data):
    data = re.split('[,。!?、\n]', data)
    sens = []
    Xs = []
    for sentence in data:
        sen = []
        X = []
        sentence = list(sentence)
        for s in sentence:
            s = s.strip()
            if not s == '' and s in char2id:
                sen.append(s)
                X.append(char2id[s])
        if len(X) > maxlen:
            sen = sen[:maxlen]
            X = X[:maxlen]
        else:
            for i in range(maxlen - len(X)):
                X.append(0)
        
        if len(sen) > 0:
            Xs.append(X)
            sens.append(sen)
    
    Xs = np.array(Xs)
    ys = model.predict(Xs)
    
    results = ''
    for i in range(ys.shape[0]):
        nodes = [dict(zip(['s', 'b', 'm', 'e'], d[:4])) for d in ys[i]]
        ts = viterbi(nodes)
        for x in range(len(sens[i])):
            if ts[x] in ['s', 'e']:
                results += sens[i][x] + '/'
            else:
                results += sens[i][x]
        
    return results[:-1]

Call the segmentation function and test it

print(cut_words('中國共產黨第十九次全國代表大會,是在全面建成小康社會決勝階段、中國特色社會主義進入新時代的關鍵時期召開的一次十分重要的大會。'))
print(cut_words('把這本書推薦給,具有一定程式設計基礎,希望瞭解資料分析、人工智慧等知識領域,進一步提升個人技術能力的社會各界人士。'))
print(cut_words('結婚的和尚未結婚的。'))

On a CPU each epoch takes over 1500 seconds; after 50 epochs, training accuracy is 98.91% and test accuracy is 95.47%

Here is a standalone script that loads the trained model and performs segmentation

# -*- coding: utf-8 -*-

from keras.models import Model, load_model
import numpy as np
import re

# read the word list and flatten it into characters
vocab = open('data/msr/msr_training_words.utf8').read().rstrip('\n').split('\n')
vocab = list(''.join(vocab))
stat = {}
for v in vocab:
    stat[v] = stat.get(v, 0) + 1
stat = sorted(stat.items(), key=lambda x: x[1], reverse=True)
vocab = [s[0] for s in stat]
# 5167 characters
print(len(vocab))
# mappings
char2id = {c: i + 1 for i, c in enumerate(vocab)}
id2char = {i + 1: c for i, c in enumerate(vocab)}
tags = {'s': 0, 'b': 1, 'm': 2, 'e': 3, 'x': 4}

maxlen = 32 # truncate if longer than 32, pad with 0 if shorter
model = load_model('msr_bilstm.h5')

def viterbi(nodes):
    trans = {'be': 0.5, 'bm': 0.5, 'eb': 0.5, 'es': 0.5, 'me': 0.5, 'mm': 0.5, 'sb': 0.5, 'ss': 0.5}
    paths = {'b': nodes[0]['b'], 's': nodes[0]['s']}
    for l in range(1, len(nodes)):
        paths_ = paths.copy()
        paths = {}
        for i in nodes[l].keys():
            nows = {}
            for j in paths_.keys():
                if j[-1] + i in trans.keys():
                    nows[j + i] = paths_[j] + nodes[l][i] + trans[j[-1] + i]
            nows = sorted(nows.items(), key=lambda x: x[1], reverse=True)
            paths[nows[0][0]] = nows[0][1]
    
    paths = sorted(paths.items(), key=lambda x: x[1], reverse=True)
    return paths[0][0]

def cut_words(data):
    data = re.split('[,。!?、\n]', data)
    sens = []
    Xs = []
    for sentence in data:
        sen = []
        X = []
        sentence = list(sentence)
        for s in sentence:
            s = s.strip()
            if not s == '' and s in char2id:
                sen.append(s)
                X.append(char2id[s])
        if len(X) > maxlen:
            sen = sen[:maxlen]
            X = X[:maxlen]
        else:
            for i in range(maxlen - len(X)):
                X.append(0)
        
        if len(sen) > 0:
            Xs.append(X)
            sens.append(sen)
    
    Xs = np.array(Xs)
    ys = model.predict(Xs)
    
    results = ''
    for i in range(ys.shape[0]):
        nodes = [dict(zip(['s', 'b', 'm', 'e'], d[:4])) for d in ys[i]]
        ts = viterbi(nodes)
        for x in range(len(sens[i])):
            if ts[x] in ['s', 'e']:
                results += sens[i][x] + '/'
            else:
                results += sens[i][x]
        
    return results[:-1]

print(cut_words('中國共產黨第十九次全國代表大會,是在全面建成小康社會決勝階段、中國特色社會主義進入新時代的關鍵時期召開的一次十分重要的大會。'))
print(cut_words('把這本書推薦給,具有一定程式設計基礎,希望瞭解資料分析、人工智慧等知識領域,進一步提升個人技術能力的社會各界人士。'))
print(cut_words('結婚的和尚未結婚的。'))

FCN

The advantage of Fully Convolutional Networks (FCN) is that the shape of the input can vary

This makes them a good fit for tasks where the input length varies but input and output lengths are equal, such as sequence labeling

  • Images: 4-D tensors, NHWC, i.e. batch, height, width, channels. conv2d convolves over the two middle dimensions, height and width
  • Text sequences: 3-D tensors, NTE, i.e. batch, sequence length, embedding dimension. conv1d convolves over the single middle dimension, the sequence length, much like N-grams; the embedding dimension plays the role of channels

We implement the FCN in TensorFlow, using conv1d with kernel size 3 to reduce the channel count from the embedding dimension down to the number of tag classes, here the four classes SBME
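To make the shapes concrete, here is a small NumPy sketch of a width-3, 'SAME'-padded conv1d over an NTE tensor, with hypothetical sizes; tf.nn.conv1d performs the same computation natively:

```python
import numpy as np

# NTE input: (batch, time, embedding); kernel: (width, in_ch, out_ch)
N, T, E, out_ch, width = 2, 10, 8, 4, 3
x = np.random.rand(N, T, E)
w = np.random.rand(width, E, out_ch)

# 'SAME' padding: one zero step on each side of the time axis for width 3
pad = np.pad(x, ((0, 0), (1, 1), (0, 0)))
y = np.stack([sum(pad[:, t + k] @ w[k] for k in range(width))
              for t in range(T)], axis=1)
print(y.shape)
# → (2, 10, 4): sequence length preserved, channels reduced from 8 to 4
```

Stacking such layers only shrinks the channel dimension, so the output keeps one prediction per input character regardless of sentence length.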

Load the libraries

# -*- coding: utf-8 -*-

import tensorflow as tf
import numpy as np
import re
import time

Prepare the vocabulary

# read the word list and flatten it into characters
vocab = open('data/msr/msr_training_words.utf8').read().rstrip('\n').split('\n')
vocab = list(''.join(vocab))
stat = {}
for v in vocab:
    stat[v] = stat.get(v, 0) + 1
stat = sorted(stat.items(), key=lambda x: x[1], reverse=True)
vocab = [s[0] for s in stat]
# 5167 characters
print(len(vocab))
# mappings
char2id = {c: i + 1 for i, c in enumerate(vocab)}
id2char = {i + 1: c for i, c in enumerate(vocab)}
tags = {'s': [1, 0, 0, 0], 'b': [0, 1, 0, 0], 'm': [0, 0, 1, 0], 'e': [0, 0, 0, 1]}

Define a function that loads the data and yields batches

batch_size = 64

def load_data(path):
    data = open(path).read().rstrip('\n')
    # split on punctuation and newlines
    data = re.split('[,。!?、\n]', data)

    # prepare the data
    X_data = []
    Y_data = []

    for sentence in data:
        sentence = sentence.split(' ')
        X = []
        Y = []

        try:
            for s in sentence:
                s = s.strip()
                # skip empty tokens
                if len(s) == 0:
                    continue
                # s
                elif len(s) == 1:
                    X.append(char2id[s])
                    Y.append(tags['s'])
                elif len(s) > 1:
                    # b
                    X.append(char2id[s[0]])
                    Y.append(tags['b'])
                    # m
                    for i in range(1, len(s) - 1):
                        X.append(char2id[s[i]])
                        Y.append(tags['m'])
                    # e
                    X.append(char2id[s[-1]])
                    Y.append(tags['e'])
        except KeyError:
            # skip sentences containing characters outside the vocabulary
            continue
        else:
            if len(X) > 0:
                X_data.append(X)
                Y_data.append(Y)

    # sort by length so every batch contains sequences of equal length
    order = np.argsort([len(X) for X in X_data])
    X_data = [X_data[i] for i in order]
    Y_data = [Y_data[i] for i in order]

    current_length = len(X_data[0])
    X_batch = []
    Y_batch = []
    for i in range(len(X_data)):
        if len(X_data[i]) != current_length or len(X_batch) == batch_size:
            yield np.array(X_batch), np.array(Y_batch)

            current_length = len(X_data[i])
            X_batch = []
            Y_batch = []

        X_batch.append(X_data[i])
        Y_batch.append(Y_data[i])

    # don't drop the final batch
    if len(X_batch) > 0:
        yield np.array(X_batch), np.array(Y_batch)

Define the model

embedding_size = 128

embeddings = tf.Variable(tf.random_uniform([len(char2id) + 1, embedding_size], -1.0, 1.0))
X_input = tf.placeholder(dtype=tf.int32, shape=[None, None], name='X_input')
embedded = tf.nn.embedding_lookup(embeddings, X_input)

W_conv1 = tf.Variable(tf.random_uniform([3, embedding_size, embedding_size // 2], -1.0, 1.0))
b_conv1 = tf.Variable(tf.random_uniform([embedding_size // 2], -1.0, 1.0))
Y_conv1 = tf.nn.relu(tf.nn.conv1d(embedded, W_conv1, stride=1, padding='SAME') + b_conv1)

W_conv2 = tf.Variable(tf.random_uniform([3, embedding_size // 2, embedding_size // 4], -1.0, 1.0))
b_conv2 = tf.Variable(tf.random_uniform([embedding_size // 4], -1.0, 1.0))
Y_conv2 = tf.nn.relu(tf.nn.conv1d(Y_conv1, W_conv2, stride=1, padding='SAME') + b_conv2)

W_conv3 = tf.Variable(tf.random_uniform([3, embedding_size // 4, 4], -1.0, 1.0))
b_conv3 = tf.Variable(tf.random_uniform([4], -1.0, 1.0))
Y_pred = tf.nn.softmax(tf.nn.conv1d(Y_conv2, W_conv3, stride=1, padding='SAME') + b_conv3, name='Y_pred')

Y_true = tf.placeholder(dtype=tf.float32, shape=[None, None, 4], name='Y_true')
cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y_true * tf.log(Y_pred + 1e-20), axis=[2]))
optimizer = tf.train.AdamOptimizer().minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(Y_pred, 2), tf.argmax(Y_true, 2))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

Train the model and save it

saver = tf.train.Saver()
max_test_acc = -np.inf

epochs = 50
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    train = load_data('data/msr/msr_training.utf8')
    accs = []
    i = 0
    t0 = int(time.time())
    for X_batch, Y_batch in train:
        sess.run(optimizer, feed_dict={X_input: X_batch, Y_true: Y_batch})
        i += 1
        if i % 100 == 0:
            acc = sess.run(accuracy, feed_dict={X_input: X_batch, Y_true: Y_batch})
            accs.append(acc)
    print('Epoch %d time %ds' % (e + 1, int(time.time()) - t0))
    print('- train accuracy: %f' % (np.mean(accs)))

    test = load_data('data/msr/msr_test_gold.utf8')
    accs = []
    for X_batch, Y_batch in test:
        acc = sess.run(accuracy, feed_dict={X_input: X_batch, Y_true: Y_batch})
        accs.append(acc)
    mean_test_acc = np.mean(accs)
    print('- test accuracy: %f' % mean_test_acc)

    if mean_test_acc > max_test_acc:
        max_test_acc = mean_test_acc
        print('Saving Model......')
        saver.save(sess, './msr_fcn/msr_fcn')

Define the viterbi function

def viterbi(nodes):
    trans = {'be': 0.5, 'bm': 0.5, 'eb': 0.5, 'es': 0.5, 'me': 0.5, 'mm': 0.5, 'sb': 0.5, 'ss': 0.5}
    paths = {'b': nodes[0]['b'], 's': nodes[0]['s']}
    for l in range(1, len(nodes)):
        paths_ = paths.copy()
        paths = {}
        for i in nodes[l].keys():
            nows = {}
            for j in paths_.keys():
                if j[-1] + i in trans.keys():
                    nows[j + i] = paths_[j] + nodes[l][i] + trans[j[-1] + i]
            nows = sorted(nows.items(), key=lambda x: x[1], reverse=True)
            paths[nows[0][0]] = nows[0][1]
    
    paths = sorted(paths.items(), key=lambda x: x[1], reverse=True)
    return paths[0][0]

Define the segmentation function

def cut_words(data):
    data = re.split('[,。!?、\n]', data)
    sens = []
    Xs = []
    for sentence in data:
        sen = []
        X = []
        sentence = list(sentence)
        for s in sentence:
            s = s.strip()
            if not s == '' and s in char2id:
                sen.append(s)
                X.append(char2id[s])
        
        if len(X) > 0:
            Xs.append(X)
            sens.append(sen)
    
    results = ''
    for i in range(len(Xs)):
        X_d = np.array([Xs[i]])
        Y_d = sess.run(Y_pred, feed_dict={X_input: X_d})
        nodes = [dict(zip(['s', 'b', 'm', 'e'], d)) for d in Y_d[0]]
        ts = viterbi(nodes)
        for x in range(len(sens[i])):
            if ts[x] in ['s', 'e']:
                results += sens[i][x] + '/'
            else:
                results += sens[i][x]
    
    return results[:-1]

Call the segmentation function and test it

print(cut_words('中國共產黨第十九次全國代表大會,是在全面建成小康社會決勝階段、中國特色社會主義進入新時代的關鍵時期召開的一次十分重要的大會。'))
print(cut_words('把這本書推薦給,具有一定程式設計基礎,希望瞭解資料分析、人工智慧等知識領域,進一步提升個人技術能力的社會各界人士。'))
print(cut_words('結婚的和尚未結婚的。'))

Since GPUs accelerate CNNs dramatically, each epoch takes only about 20 seconds on a GPU; after 50 epochs, training accuracy is 99.01% and test accuracy is 92.26%

Here is a standalone script that loads the trained model and performs segmentation

# -*- coding: utf-8 -*-

import tensorflow as tf
import numpy as np
import re
import time

# read the word list and flatten it into characters
vocab = open('data/msr/msr_training_words.utf8').read().rstrip('\n').split('\n')
vocab = list(''.join(vocab))
stat = {}
for v in vocab:
    stat[v] = stat.get(v, 0) + 1
stat = sorted(stat.items(), key=lambda x: x[1], reverse=True)
vocab = [s[0] for s in stat]
# 5167 characters
print(len(vocab))
# mappings
char2id = {c: i + 1 for i, c in enumerate(vocab)}
id2char = {i + 1: c for i, c in enumerate(vocab)}
tags = {'s': [1, 0, 0, 0], 'b': [0, 1, 0, 0], 'm': [0, 0, 1, 0], 'e': [0, 0, 0, 1]}

sess = tf.Session()
sess.run(tf.global_variables_initializer())

saver = tf.train.import_meta_graph('./msr_fcn/msr_fcn.meta')
saver.restore(sess, tf.train.latest_checkpoint('./msr_fcn'))

graph = tf.get_default_graph()
X_input = graph.get_tensor_by_name('X_input:0')
Y_pred = graph.get_tensor_by_name('Y_pred:0')

def viterbi(nodes):
    trans = {'be': 0.5, 'bm': 0.5, 'eb': 0.5, 'es': 0.5, 'me': 0.5, 'mm': 0.5, 'sb': 0.5, 'ss': 0.5}
    paths = {'b': nodes[0]['b'], 's': nodes[0]['s']}
    for l in range(1, len(nodes)):
        paths_ = paths.copy()
        paths = {}
        for i in nodes[l].keys():
            nows = {}
            for j in paths_.keys():
                if j[-1] + i in trans.keys():
                    nows[j + i] = paths_[j] + nodes[l][i] + trans[j[-1] + i]
            nows = sorted(nows.items(), key=lambda x: x[1], reverse=True)
            paths[nows[0][0]] = nows[0][1]
    
    paths = sorted(paths.items(), key=lambda x: x[1], reverse=True)
    return paths[0][0]

def cut_words(data):
    data = re.split('[,。!?、\n]', data)
    sens = []
    Xs = []
    for sentence in data:
        sen = []
        X = []
        sentence = list(sentence)
        for s in sentence:
            s = s.strip()
            if not s == '' and s in char2id:
                sen.append(s)
                X.append(char2id[s])
        
        if len(X) > 0:
            Xs.append(X)
            sens.append(sen)
    
    results = ''
    for i in range(len(Xs)):
        X_d = np.array([Xs[i]])
        Y_d = sess.run(Y_pred, feed_dict={X_input: X_d})
        nodes = [dict(zip(['s', 'b', 'm', 'e'], d)) for d in Y_d[0]]
        ts = viterbi(nodes)
        for x in range(len(sens[i])):
            if ts[x] in ['s', 'e']:
                results += sens[i][x] + '/'
            else:
                results += sens[i][x]
    
    return results[:-1]

print(cut_words('中國共產黨第十九次全國代表大會,是在全面建成小康社會決勝階段、中國特色社會主義進入新時代的關鍵時期召開的一次十分重要的大會。'))
print(cut_words('把這本書推薦給,具有一定程式設計基礎,希望瞭解資料分析、人工智慧等知識領域,進一步提升個人技術能力的社會各界人士。'))
print(cut_words('結婚的和尚未結婚的。'))

Miscellaneous

Further improvements could come from three directions:

  • Modifying the network architecture
  • Tuning hyperparameters
  • Polishing the functionality, e.g. handling punctuation

References

Video course

Fun with Deep Learning (1)
