Reading Comprehension and Cloze Tests with TensorFlow

Posted by Andrew.Hann on 2017-02-24

catalogue

0. Preface
1. Dataset
2. Data Preprocessing
3. Training
4. Testing the Model: Running an Actual Cloze Example

 

0. Preface

It was past midnight when I started writing this article; a few new ideas had just occurred to me, so I wrote them down before they slipped away. When working with deep-learning frameworks such as TensorFlow, we should deliberately train our modeling and abstraction skills, i.e., the ability to abstract a complex business problem into a mathematical model. Essentially, cloze-style reading comprehension is the same problem as conversational AI; the difference is that here the input is a long conversation, with its full context, while the output may be just a short phrase, so the network must learn an optimal "long conversational context -> short answer" model.

Relevant Link:

http://blog.topspeedsnail.com/archives/11062
http://wiki.jikexueyuan.com/project/tensorflow-zh/tutorials/mnist_pros.html

 

1. Dataset

For deep learning, the coverage of the training set matters a great deal. TensorFlow's stochastic gradient descent (SGD) keeps adjusting the parameters until it finds a set that best fits the training data. To help the loss function converge to a good optimum, the corpus is split into two parts: one half is used by SGD to train the parameters, and the other half serves as a validation set to check, at any time, how well the current parameters perform (mathematically, whether the loss is approaching its minimum).

0x1: Children’s Book Test
Data is in the included "data" folder. Questions are separated according to whether the missing word is a named entity (NE), common noun (CN), verb (V) or preposition (P)

cbtest_NE_train.txt : 67128 questions
cbtest_NE_valid_2000ex.txt : 2000
cbtest_NE_test_2500ex.txt : 2500

cbtest_CN_train.txt : 121176 questions
cbtest_CN_valid_2000ex.txt : 2000
cbtest_CN_test_2500ex.txt : 2500

cbtest_V_train.txt : 109111 questions
cbtest_V_valid_2000ex.txt : 2000
cbtest_V_test_2500ex.txt : 2500

cbtest_P_train.txt : 67128 questions
cbtest_P_valid_2000ex.txt : 2000
cbtest_P_test_2500ex.txt : 2500

0x2: CBT questions

Questions are built from sets of 21 consecutive sentences from the books. A sentence is defined by the Stanford Core NLP sentence splitter.
A Named Entity (NE) is any entity identified by the Stanford Core NLP NER system. A Common Noun (CN) is any word tagged as a noun by the Stanford Core NLP POS tagger that is not already a NE. Verbs and Prepositions are identified similarly.

The training corpus is organized so that every 20 context sentences correspond to one question, together with a set of candidate answers; any member of that set may be the correct answer, which makes this an open-ended question-answering task.
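
For orientation, here is an illustrative (made-up) question block in the layout the preprocessing code below expects: 20 numbered context sentences, then a 21st line holding the question (blank marked as XXXXX), the answer, an empty field, and the pipe-separated candidate list, separated by tabs (written here as <TAB>):

1 Once upon a time there lived a king .
2 The king had a daughter .
... (sentences 3-19 omitted in this sketch) ...
20 The princess smiled at the cat .
21 The XXXXX was pleased .<TAB>princess<TAB><TAB>cat|daughter|dog|king|princess|queen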

Relevant Link:

https://research.fb.com/projects/babi/
http://cs.nyu.edu/~kcho/DMQA/

 

2. Data Preprocessing

0x1: Tokenize the sentences and convert the training set into 20 context sentences + 1 question + a set of candidate answers

def preprocess_data(data_file, out_file):
    # each element of stories is (story, question, answer): stories[x][0], stories[x][1], stories[x][2]
    stories = []
    with open(data_file) as f:
        story = []
        for line in f:
            line = line.strip()
            if not line:
                story = []
            else:
                _, line = line.split(' ', 1)
                if line:
                    if '\t' in line:
                        q, a, _, answers = line.split('\t')
                        # tokenize
                        q = [s.strip() for s in re.split(r'(\W+)+', q) if s.strip()]
                        stories.append((story, q, a))
                    else:
                        line = [s.strip() for s in re.split(r'(\W+)+', line) if s.strip()]
                        story.append(line)

    #print stories
    samples = []
    for story in stories:
        story_tmp = []
        content = []
        for c in story[0]:
            content += c
        story_tmp.append(content)
        story_tmp.append(story[1])
        story_tmp.append(story[2])

        samples.append(story_tmp)

    #print samples

    # shuffle the order of the reading/cloze samples
    random.shuffle(samples)
    print(len(samples))

    with open(out_file, "w") as f:
        for sample in samples:
            f.write(str(sample))
            f.write('\n')

0x2: Generate the vocabulary from the training corpus

This still follows the word-vector (word2vec-style) space model: we assume that the [20 context sentences, 1 question, answer] in each training sample are correlated, similar to the associative-prediction idea behind Markov chains, i.e., once word A has appeared, the transition A -> B is more probable than any other combination. For words and phrases, word vectors play a role analogous to image-region weights in image recognition.

# generate word vocabulary table
def read_data(data_file):
    stories = []
    with open(data_file) as f:
        for line in f:
            line = ast.literal_eval(line.strip())
            stories.append(line)
    return stories

# generate word vocabulary table
stories = read_data(train_data_token_file) + read_data(valid_data_token_file)

content_length = max([len(s) for s, _, _ in stories])
question_length = max([len(q) for _, q, _ in stories])
print(content_length, question_length)

vocab = sorted(set(itertools.chain(*(story + q + [answer] for story, q, answer in stories))))
vocab_size = len(vocab) + 1
print(vocab_size)
word2idx = dict((w, i + 1) for i, w in enumerate(vocab))
pickle.dump((word2idx, content_length, question_length, vocab_size), open(train_vocab_data_file, "wb"))

By index-encoding the vocabulary, the word sequences are converted into integer sequences, which prepares the data for the vector-distance computations that follow.
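
As a toy illustration (made-up words and indices, not the real vocabulary), the index encoding produced above works like this:

# hypothetical miniature vocabulary and the encoding step
word2idx = {'cat': 1, 'sat': 2, 'the': 3, 'XXXXX': 4}
question = ['the', 'cat', 'XXXXX']
encoded = [word2idx[w] for w in question]
print(encoded)  # [3, 1, 4]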

0x3: Vector representation of the data

The list of [20-sentence context, question, answer] samples (one per passage) is decomposed column-wise and, using the vocabulary indices, converted into row-matrix form: X (context), Q (question) and A (answer). Every element of these matrices is an index, i.e., the position of that word in the "vector vocabulary" (the vocabulary whose entries will later carry vector features).

# From keras, padding
def pad_sequences(sequences, maxlen=None, dtype='int32',
                  padding='post', truncating='post', value=0.):
    lengths = [len(s) for s in sequences]

    nb_samples = len(sequences)
    if maxlen is None:
        maxlen = np.max(lengths)

    # take the sample shape from the first non empty sequence
    # checking for consistency in the main loop below.
    sample_shape = tuple()
    for s in sequences:
        if len(s) > 0:
            sample_shape = np.asarray(s).shape[1:]
            break

    x = (np.ones((nb_samples, maxlen) + sample_shape) * value).astype(dtype)
    for idx, s in enumerate(sequences):
        if len(s) == 0:
            continue  # empty list was found
        if truncating == 'pre':
            trunc = s[-maxlen:]
        elif truncating == 'post':
            trunc = s[:maxlen]
        else:
            raise ValueError('Truncating type "%s" not understood' % truncating)

        # check `trunc` has expected shape
        trunc = np.asarray(trunc, dtype=dtype)
        if trunc.shape[1:] != sample_shape:
            raise ValueError('Shape of sample %s of sequence at position %s is different from expected shape %s' %
                             (trunc.shape[1:], idx, sample_shape))

        if padding == 'post':
            x[idx, :len(trunc)] = trunc
        elif padding == 'pre':
            x[idx, -len(trunc):] = trunc
        else:
            raise ValueError('Padding type "%s" not understood' % padding)
    return x


# convert to vector
def to_vector(data_file, output_file):
    word2idx, content_length, question_length, _ = pickle.load(open(train_vocab_data_file, "rb"))

    X = []
    Q = []
    A = []
    with open(data_file) as f_i:
        for line in f_i:
            line = ast.literal_eval(line.strip())
            x = [word2idx[w] for w in line[0]]
            q = [word2idx[w] for w in line[1]]
            a = [word2idx[line[2]]]

            X.append(x)
            Q.append(q)
            A.append(a)

    X = pad_sequences(X, content_length)
    Q = pad_sequences(Q, question_length)

    with open(output_file, "w") as f_o:
        for i in range(len(X)):
            f_o.write(str([X[i].tolist(), Q[i].tolist(), A[i]]))
            f_o.write('\n')
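
As a quick sanity check on the padding helper (toy index sequences, not real data), pad_sequences right-pads short sequences with zeros and truncates long ones to maxlen:

# toy demonstration of pad_sequences (hypothetical index sequences)
seqs = [[3, 7, 2], [5, 1, 4, 9, 8, 6]]
print(pad_sequences(seqs, maxlen=5))
# [[3 7 2 0 0]
#  [5 1 4 9 8]]   <- 'post' padding and 'post' truncation keep the first maxlen items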

 

0x4: code

# -*- coding:utf-8 -*-

import re
import random
import ast
import itertools
import pickle
import numpy as np

train_data_file = './CBTest/data/cbtest_NE_train.txt'
train_data_token_file = 'train.data'

valid_data_file = './CBTest/data/cbtest_NE_valid_2000ex.txt'
valid_data_token_file = 'valid.data'

train_vocab_data_file = 'vocab.data'

train_vec_data_file = 'train.vec'
valid_vec_data_file = 'valid.vec'


def preprocess_data(data_file, out_file):
    # each element of stories is (story, question, answer): stories[x][0], stories[x][1], stories[x][2]
    stories = []
    with open(data_file) as f:
        story = []
        for line in f:
            line = line.strip()
            if not line:
                story = []
            else:
                _, line = line.split(' ', 1)
                if line:
                    if '\t' in line:
                        q, a, _, answers = line.split('\t')
                        # tokenize
                        q = [s.strip() for s in re.split(r'(\W+)+', q) if s.strip()]
                        stories.append((story, q, a))
                    else:
                        line = [s.strip() for s in re.split(r'(\W+)+', line) if s.strip()]
                        story.append(line)

    #print stories
    samples = []
    for story in stories:
        story_tmp = []
        content = []
        for c in story[0]:
            content += c
        story_tmp.append(content)
        story_tmp.append(story[1])
        story_tmp.append(story[2])

        samples.append(story_tmp)

    #print samples

    # shuffle the order of the reading/cloze samples
    random.shuffle(samples)
    print(len(samples))

    with open(out_file, "w") as f:
        for sample in samples:
            f.write(str(sample))
            f.write('\n')


# generate word vocabulary table
def read_data(data_file):
    stories = []
    with open(data_file) as f:
        for line in f:
            line = ast.literal_eval(line.strip())
            stories.append(line)
    return stories


# From keras, padding
def pad_sequences(sequences, maxlen=None, dtype='int32',
                  padding='post', truncating='post', value=0.):
    lengths = [len(s) for s in sequences]

    nb_samples = len(sequences)
    if maxlen is None:
        maxlen = np.max(lengths)

    # take the sample shape from the first non empty sequence
    # checking for consistency in the main loop below.
    sample_shape = tuple()
    for s in sequences:
        if len(s) > 0:
            sample_shape = np.asarray(s).shape[1:]
            break

    x = (np.ones((nb_samples, maxlen) + sample_shape) * value).astype(dtype)
    for idx, s in enumerate(sequences):
        if len(s) == 0:
            continue  # empty list was found
        if truncating == 'pre':
            trunc = s[-maxlen:]
        elif truncating == 'post':
            trunc = s[:maxlen]
        else:
            raise ValueError('Truncating type "%s" not understood' % truncating)

        # check `trunc` has expected shape
        trunc = np.asarray(trunc, dtype=dtype)
        if trunc.shape[1:] != sample_shape:
            raise ValueError('Shape of sample %s of sequence at position %s is different from expected shape %s' %
                             (trunc.shape[1:], idx, sample_shape))

        if padding == 'post':
            x[idx, :len(trunc)] = trunc
        elif padding == 'pre':
            x[idx, -len(trunc):] = trunc
        else:
            raise ValueError('Padding type "%s" not understood' % padding)
    return x


# convert to vector
def to_vector(data_file, output_file):
    word2idx, content_length, question_length, _ = pickle.load(open(train_vocab_data_file, "rb"))

    X = []
    Q = []
    A = []
    with open(data_file) as f_i:
        for line in f_i:
            line = ast.literal_eval(line.strip())
            x = [word2idx[w] for w in line[0]]
            q = [word2idx[w] for w in line[1]]
            a = [word2idx[line[2]]]

            X.append(x)
            Q.append(q)
            A.append(a)

    X = pad_sequences(X, content_length)
    Q = pad_sequences(Q, question_length)

    with open(output_file, "w") as f_o:
        for i in range(len(X)):
            f_o.write(str([X[i].tolist(), Q[i].tolist(), A[i]]))
            f_o.write('\n')


if __name__ == "__main__":
    preprocess_data(train_data_file, train_data_token_file)
    preprocess_data(valid_data_file, valid_data_token_file)

    # generate word vocabulary table
    stories = read_data(train_data_token_file) + read_data(valid_data_token_file)

    content_length = max([len(s) for s, _, _ in stories])
    question_length = max([len(q) for _, q, _ in stories])
    print(content_length, question_length)

    vocab = sorted(set(itertools.chain(*(story + q + [answer] for story, q, answer in stories))))
    vocab_size = len(vocab) + 1
    print(vocab_size)
    word2idx = dict((w, i + 1) for i, w in enumerate(vocab))
    pickle.dump((word2idx, content_length, question_length, vocab_size), open(train_vocab_data_file, "wb"))

    to_vector(train_data_token_file, train_vec_data_file)
    to_vector(valid_data_token_file, valid_vec_data_file)


 

3. Training

0x1: Word Embeddings

Vector space models (VSMs) embed words in a continuous vector space in which semantically similar words are mapped to nearby points. VSMs have a long, rich history in natural language processing, but nearly all methods that use them rely on the distributional hypothesis, whose core idea is that words appearing in similar contexts have similar meanings. Approaches based on this assumption fall roughly into two categories: count-based methods (e.g., latent semantic analysis) and predictive methods (e.g., neural probabilistic language models).
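
As a minimal sketch of what this looks like in the TF 0.12/1.0-era API used throughout this post (the sizes are made up; the real embedding matrix is the embeddings variable defined later in neural_attention):

# minimal embedding-lookup sketch (hypothetical sizes, same API style as the model code)
import tensorflow as tf

vocab_size, embedding_dim = 1000, 384
embeddings = tf.Variable(tf.random_normal([vocab_size, embedding_dim], stddev=0.22))
word_ids = tf.placeholder(tf.int32, [None])                   # a batch of word indices
word_vectors = tf.nn.embedding_lookup(embeddings, word_ids)   # shape: [batch, embedding_dim]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(word_vectors, feed_dict={word_ids: [1, 5, 9]}).shape)  # (3, 384)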

0x2: Defining the loss function

loss = -tf.reduce_mean(
        tf.log(tf.reduce_sum(tf.to_float(tf.equal(tf.expand_dims(A, -1), X)) * X_attentions, 1) + tf.constant(0.00001)))

During training, the answer the model derives from the context X should agree with the labeled (ground-truth) answer, just as in image classification there is exactly one correct label. Concretely, the loss above sums, for each sample, the attention mass placed on the context positions whose word equals the answer, and minimizes the negative mean log of that sum; the small constant 0.00001 avoids log(0).
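
A toy NumPy sketch of the same computation (made-up attention weights, not the model's actual numbers):

# toy re-implementation of the loss for a batch of 2 samples (hypothetical values)
import numpy as np

X = np.array([[4, 7, 4, 2],               # context word indices
              [5, 5, 3, 9]])
A = np.array([4, 9])                      # answer word indices
attn = np.array([[0.1, 0.2, 0.6, 0.1],    # attention over context positions (each row sums to 1)
                 [0.3, 0.3, 0.2, 0.2]])

mask = (X == A[:, None]).astype(float)    # 1 where the context word equals the answer
p_answer = (mask * attn).sum(axis=1)      # attention mass on the answer: [0.7, 0.2]
loss = -np.mean(np.log(p_answer + 1e-5))
print(loss)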

0x3: Optimizer

Here an optimizer based on the Adam algorithm is used to iteratively train the model parameters.

optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
grads_and_vars = optimizer.compute_gradients(loss)
capped_grads_and_vars = [(tf.clip_by_norm(g, 5), v) for g, v in grads_and_vars]
train_op = optimizer.apply_gradients(capped_grads_and_vars)

TensorFlow's optimizers are very convenient to use and come with a lot of high-level encapsulation; we only need to instantiate the corresponding class, pass in the desired parameters, and run it. Note that the gradients are clipped to a norm of 5 before being applied, which helps keep training stable.

0x4: Dropout

To reduce overfitting, dropout is added before the output layer. A placeholder holds the probability that a neuron's output is kept, so dropout can be enabled during training and disabled during testing. Besides masking neuron outputs, TensorFlow's tf.nn.dropout also rescales the remaining outputs automatically, so no manual scaling is needed. In this model, keep_prob is fed as 0.7 during training and 1.0 during evaluation.

keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

0x5: code

# -*- coding: utf-8 -*-

import tensorflow as tf
import pickle
import numpy as np
import ast
from collections import defaultdict

train_vec_data_file = 'train.vec'
valid_vec_data_file = 'valid.vec'


train_vocab_data_file = 'vocab.data'


def get_next_batch():
    # read batch_size samples from the training vector file, one line per sample;
    # when the file is exhausted, rewind it and build a fresh batch
    X = []
    Q = []
    A = []
    for i in range(batch_size):
        for line in train_file:
            line = ast.literal_eval(line.strip())
            X.append(line[0])
            Q.append(line[1])
            A.append(line[2][0])
            break

    if len(X) == batch_size:
        return X, Q, A
    else:
        train_file.seek(0)
        return get_next_batch()


def get_test_batch():
    with open(valid_vec_data_file) as f:
        X = []
        Q = []
        A = []
        for line in f:
            line = ast.literal_eval(line.strip())
            X.append(line[0])
            Q.append(line[1])
            A.append(line[2][0])
        return X, Q, A


def glimpse(weights, bias, encodings, inputs):
    weights = tf.nn.dropout(weights, keep_prob)
    inputs = tf.nn.dropout(inputs, keep_prob)
    attention = tf.transpose(tf.matmul(weights, tf.transpose(inputs)) + bias)
    attention = tf.batch_matmul(encodings, tf.expand_dims(attention, -1))
    attention = tf.nn.softmax(tf.squeeze(attention, -1))
    return attention, tf.reduce_sum(tf.expand_dims(attention, -1) * encodings, 1)


def neural_attention(embedding_dim=384, encoding_dim=128):
    embeddings = tf.Variable(tf.random_normal([vocab_size, embedding_dim], stddev=0.22), dtype=tf.float32)
    tf.contrib.layers.apply_regularization(tf.contrib.layers.l2_regularizer(1e-4), [embeddings])

    with tf.variable_scope('encode'):
        with tf.variable_scope('X'):
            X_lens = tf.reduce_sum(tf.sign(tf.abs(X)), 1)
            embedded_X = tf.nn.embedding_lookup(embeddings, X)
            encoded_X = tf.nn.dropout(embedded_X, keep_prob)
            gru_cell = tf.nn.rnn_cell.GRUCell(encoding_dim)
            outputs, output_states = tf.nn.bidirectional_dynamic_rnn(gru_cell, gru_cell, encoded_X,
                                                                     sequence_length=X_lens, dtype=tf.float32,
                                                                     swap_memory=True)
            encoded_X = tf.concat(2, outputs)
        with tf.variable_scope('Q'):
            Q_lens = tf.reduce_sum(tf.sign(tf.abs(Q)), 1)
            embedded_Q = tf.nn.embedding_lookup(embeddings, Q)
            encoded_Q = tf.nn.dropout(embedded_Q, keep_prob)
            gru_cell = tf.nn.rnn_cell.GRUCell(encoding_dim)
            outputs, output_states = tf.nn.bidirectional_dynamic_rnn(gru_cell, gru_cell, encoded_Q,
                                                                     sequence_length=Q_lens, dtype=tf.float32,
                                                                     swap_memory=True)
            encoded_Q = tf.concat(2, outputs)

    W_q = tf.Variable(tf.random_normal([2 * encoding_dim, 4 * encoding_dim], stddev=0.22), dtype=tf.float32)
    b_q = tf.Variable(tf.random_normal([2 * encoding_dim, 1], stddev=0.22), dtype=tf.float32)
    W_d = tf.Variable(tf.random_normal([2 * encoding_dim, 6 * encoding_dim], stddev=0.22), dtype=tf.float32)
    b_d = tf.Variable(tf.random_normal([2 * encoding_dim, 1], stddev=0.22), dtype=tf.float32)
    g_q = tf.Variable(tf.random_normal([10 * encoding_dim, 2 * encoding_dim], stddev=0.22), dtype=tf.float32)
    g_d = tf.Variable(tf.random_normal([10 * encoding_dim, 2 * encoding_dim], stddev=0.22), dtype=tf.float32)

    with tf.variable_scope('attend') as scope:
        infer_gru = tf.nn.rnn_cell.GRUCell(4 * encoding_dim)
        infer_state = infer_gru.zero_state(batch_size, tf.float32)
        for iter_step in range(8):
            if iter_step > 0:
                scope.reuse_variables()

            _, q_glimpse = glimpse(W_q, b_q, encoded_Q, infer_state)
            d_attention, d_glimpse = glimpse(W_d, b_d, encoded_X, tf.concat_v2([infer_state, q_glimpse], 1))

            gate_concat = tf.concat_v2([infer_state, q_glimpse, d_glimpse, q_glimpse * d_glimpse], 1)

            r_d = tf.sigmoid(tf.matmul(gate_concat, g_d))
            r_d = tf.nn.dropout(r_d, keep_prob)
            r_q = tf.sigmoid(tf.matmul(gate_concat, g_q))
            r_q = tf.nn.dropout(r_q, keep_prob)

            combined_gated_glimpse = tf.concat_v2([r_q * q_glimpse, r_d * d_glimpse], 1)
            _, infer_state = infer_gru(combined_gated_glimpse, infer_state)

    return tf.to_float(tf.sign(tf.abs(X))) * d_attention


def train_neural_attention():
    X_attentions = neural_attention()
    loss = -tf.reduce_mean(
        tf.log(tf.reduce_sum(tf.to_float(tf.equal(tf.expand_dims(A, -1), X)) * X_attentions, 1) + tf.constant(0.00001)))

    optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
    grads_and_vars = optimizer.compute_gradients(loss)
    capped_grads_and_vars = [(tf.clip_by_norm(g, 5), v) for g, v in grads_and_vars]
    train_op = optimizer.apply_gradients(capped_grads_and_vars)

    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        # writer = tf.summary.FileWriter()
        # restore from a previous checkpoint if one exists
        ckpt = tf.train.get_checkpoint_state('.')
        if ckpt is not None:
            print(ckpt.model_checkpoint_path)
            saver.restore(sess, ckpt.model_checkpoint_path)
        else:
            print("checkpoint not found!!")

        for step in range(20000):
            train_x, train_q, train_a = get_next_batch()
            loss_, _ = sess.run([loss, train_op], feed_dict={X: train_x, Q: train_q, A: train_a, keep_prob: 0.7})
            print(loss_)

            # save the model and evaluate accuracy on a validation batch
            if step % 1000 == 0:
                path = saver.save(sess, 'machine_reading.model', global_step=step)
                print(path)

                test_x, test_q, test_a = get_test_batch()
                test_x, test_q, test_a = np.array(test_x[:batch_size]), np.array(test_q[:batch_size]), np.array(
                    test_a[:batch_size])
                attentions = sess.run(X_attentions, feed_dict={X: test_x, Q: test_q, keep_prob: 1.})
                correct_count = 0
                for x in range(test_x.shape[0]):
                    probs = defaultdict(int)
                    for idx, word in enumerate(test_x[x, :]):
                        probs[word] += attentions[x, idx]
                    guess = max(probs, key=probs.get)
                    if guess == test_a[x]:
                        correct_count += 1
                print(correct_count / float(test_x.shape[0]))  # validation accuracy



# load the vocabulary table
word2idx, content_length, question_length, vocab_size = pickle.load(open(train_vocab_data_file, "rb"))
print(content_length, question_length, vocab_size)
#print word2idx

batch_size = 64

train_file = open(train_vec_data_file)

X = tf.placeholder(tf.int32, [batch_size, content_length])  # reading passage (context)
Q = tf.placeholder(tf.int32, [batch_size, question_length])  # question
A = tf.placeholder(tf.int32, [batch_size])  # answer

# dropout keep probability
keep_prob = tf.placeholder(tf.float32)

train_neural_attention()

Relevant Link:

http://docs.pythontab.com/tensorflow/tutorials/word2vec/
http://blog.csdn.net/lenbow/article/details/52218551
http://wiki.jikexueyuan.com/project/tensorflow-zh/get_started/basic_usage.html
http://wiki.jikexueyuan.com/project/tensorflow-zh/tutorials/mnist_tf.html
http://wiki.jikexueyuan.com/project/tensorflow-zh/how_tos/variable_scope.html
http://www.jianshu.com/p/45dbfe5809d4
https://www.zhihu.com/question/51325408
http://www.jianshu.com/p/c9f66bc8f96c

 

4. Testing the Model: Running an Actual Cloze Example

Validating the model simply means feeding the network input in the same format as during training and taking the output with the highest predicted probability.

0x1: Test passage

We did manage to get the taffy made but before we could sample the result satisfactorily , and just as the girls were finishing with the washing of the dishes , Felicity glanced out of the window and exclaimed in tones of dismay , `` Oh , dear me , here ' s Great - aunt Eliza coming up the lane ! Now , is n ' t that too mean ? '' We all looked out to see a tall , gray - haired lady approaching the house , looking about her with the slightly puzzled air of a stranger . We had been expecting Great - aunt Eliza ' s advent for some weeks , for she was visiting relatives in Markdale . We knew she was liable to pounce down on us any time , being one of those delightful folk who like to `` surprise '' people , but we had never thought of her coming that particular day . It must be confessed that we did not look forward to her visit with any pleasure . None of us had ever seen her , but we knew she was very deaf , and had very decided opinions as to the way in which children should behave . `` Whew ! '' whistled Dan . `` We ' re in for a jolly afternoon . She ' s deaf as a post and we ' ll have to split our throats to make her hear at all . I ' ve a notion to skin out . '' `` Oh , do n ' t talk like that , Dan , '' said Cecily reproachfully . `` She ' s old and lonely and has had a great deal of trouble . She has buried three husbands . We must be kind to her and do the best we can to make her visit pleasant . '' `` She ' s coming to the back door , '' said Felicity , with an agitated glance around the kitchen . `` I told you , Dan , that you should have shovelled the snow away from the front door this morning . Cecily , set those pots in the pantry quick -- hide those boots , Felix -- shut the cupboard door , Peter -- Sara , straighten up the lounge . She ' s awfully particular and ma says her house is always as neat as wax . ''

The model pushes the test data through the same pipeline used during training and returns the index with the highest probability, i.e., the model's predicted cloze answer.
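
As a rough sketch (not a drop-in script), prediction can reuse the attention-aggregation logic from the accuracy check in train_neural_attention: preprocess the passage and question into padded index arrays, run X_attentions, sum the attention mass each passage word receives, and pick the word with the most mass. The helper idx2word is a hypothetical name, and the sketch assumes the graph, the placeholders X/Q/keep_prob, and a restored checkpoint from the training script are already in place.

# sketch: predict the blank for one preprocessed sample (assumes X, Q, keep_prob and the
# X_attentions tensor from train_neural_attention exist and a checkpoint has been restored)
from collections import defaultdict

def predict_answer(sess, X_attentions, test_x, test_q, idx2word):
    # test_x: [batch_size, content_length], test_q: [batch_size, question_length]
    attentions = sess.run(X_attentions, feed_dict={X: test_x, Q: test_q, keep_prob: 1.})
    probs = defaultdict(float)
    for idx, word in enumerate(test_x[0, :]):
        probs[word] += attentions[0, idx]      # accumulate attention mass per word index
    guess = max(probs, key=probs.get)          # word index with the largest total attention
    return idx2word[guess]

# idx2word can be rebuilt from the saved vocabulary, e.g.:
# word2idx, _, _, _ = pickle.load(open(train_vocab_data_file, "rb"))
# idx2word = {i: w for w, i in word2idx.items()}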


Copyright (c) 2017 LittleHann All rights reserved
