Understanding Deep Learning, Neural Networks, and the High-Dimensional Structure of Input Data through Visualizing Representations

Posted by Andrew.Hann on 2017-07-11

catalogue

1. Introduction
2. Neural Networks Transform Space - the spatial structure inside a neural network
3. Understand the data itself by visualizing the high-dimensional input dataset - the spatial structure hidden in the input samples
4. Example 1: Word Embeddings in NLP - the spatial structure hidden in text word sequences
5. Example 2: Paragraph Vectors in NLP - the spatial structure hidden in vectorized paragraphs
6. Example 3: Visualizing images as compact (4096-dim) feature vectors derived from a trained convolutional neural network with t-SNE - visualizing the high-dimensional CNN abstraction of images
7. Example 4: Cluster analysis of AV actresses found on the web
8. Embedding-based vector representation of long, multi-word text

 

1. Introduction

0x1: Looking at NLP problems from the word-frequency perspective

This article discusses problems in the NLP domain. We know that the bag-of-words model is one candidate approach to them; for background on bag-of-words see the link below. I hope to cover it in a dedicated article, since it is also very interesting.

https://en.wikipedia.org/wiki/Bag-of-words_model

Here is a brief description of the bag-of-words model as I understand it:

1. Treat the words of the input samples as an unordered list and count their frequencies; word frequency is the foundation of this model (a code sketch covering all three variants in this list follows below).
For example, suppose our input data contains two sentences:
(1) John likes to watch movies. Mary likes movies too.
(2) John also likes to watch football games.
Split into an unordered list:
[
    "John",
    "likes",
    "to",
    "watch",
    "movies",
    "Mary",
    "too",
    "also",
    "football",
    "games"
]
Count how often each word appears to obtain a frequency map, then encode the original input data against this map:
(1) [1, 2, 1, 1, 2, 1, 1, 0, 0, 0]
(2) [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
The resulting vector is a representation of the original data in the word-frequency dimensions (and since the word frequencies take the whole sample set into account, it can to some extent be regarded as a lift into a higher-dimensional space).

2. The traditional bag-of-words model runs into the "stop-word problem": words such as the/he and punctuation like commas and periods appear with high frequency in any sample set (malicious or benign). The tf-idf model addresses this by estimating a word's importance from its frequency in the current document relative to the whole corpus, and weighting the word counts by that importance.

3. The N-gram model
The traditional bag-of-words model can be viewed as a 1-gram model. Its drawback is that it cannot capture the order between words: all the frequency statistics are over single words, so much of the sequence information is lost. An n-gram is a Markov language model. Taking the two sentences above again:
(1) John likes to watch movies. Mary likes movies too.
(2) John also likes to watch football games.
2-gram segmentation of sentence (1) gives:
[
    "John likes",
    "likes to",
    "to watch",
    "watch movies",
    "Mary likes",
    "likes movies",
    "movies too",
]
By counting the phrase list produced by the 2-gram segmentation, we retain some of the sequence information contained in the original input data.
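
The three variants above (raw word counts, tf-idf weighting, and n-gram counting) can be reproduced in a few lines with scikit-learn. The following is a minimal sketch added for illustration (it assumes a recent scikit-learn; the column order differs from the hand-built vectors above):

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "John likes to watch movies. Mary likes movies too.",
    "John also likes to watch football games.",
]

# 1-gram raw term counts (classic bag-of-words)
bow = CountVectorizer()
print(bow.fit_transform(docs).toarray())
print(bow.get_feature_names_out())

# tf-idf: down-weights words that occur in every document (the "stop-word problem")
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(docs).toarray())

# 2-grams: keeps some local word order ("john likes", "likes to", ...)
bigram = CountVectorizer(ngram_range=(2, 2))
print(bigram.fit_transform(docs).toarray())
print(bigram.get_feature_names_out())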

0x2: Looking at NLP problems from the spatial topology of word vectors

This is the focus of this chapter. We know that unsupervised clustering algorithms such as SOM/GNG can learn (imprint) the spatial structure that already exists in the input samples (note: it must already exist) and expose it through their weight vectors, and that deep neural networks such as CNNs can recognize the "spatial texture structure" in the input samples. But we would like to look more deeply into two questions:

1. Why is a neural network able to recognize and classify this data? What actually happens inside the intermediate layers?
2. Is the network able to do this because the input samples themselves already contain some "class-discriminating information" that is merely hidden in the high-dimensional structure of the data and hard to see directly, so that what the deep network does is likewise reach into that high-dimensional space and recognize the patterns of this "class-discriminating information"?

To answer these questions, dimensionality-reduction visualization techniques let us look inside both the data and the network's weight vectors.

Relevant Link:

https://en.wikipedia.org/wiki/Bag-of-words_model 
http://colah.github.io/posts/2015-01-Visualizing-Representations

 

2. Neural Networks Transform Space - the spatial structure inside a neural network

0x1: low-dimensional neural networks – networks which have only two neurons in each layer

We start with a low-dimensional setup: a 2-dimensional input (the input layer is 2-dimensional) and a network consisting of only an input layer and an output layer, each with 2 neurons (x/y corresponding to the coordinates of the input and output points).

The input is a 2D plane (the input layer is a set of 2-dimensional points). The two curves in the figure represent two classes, and the points on them form our input dataset; we want the neural network to separate the two classes correctly, i.e. to model the classification.

Since our network has only an input layer and an output layer, each with 2 neurons representing x and y, the network can only try to "find" a straight line to separate the classes, because the mapping between the input and output layers is linear.

Obviously this cannot give a "relatively perfect" result, because the input dataset in the figure is not linearly separable. But if we add a hidden layer (3-dimensional) to the network and use it to "stretch" the input vectors into a 3-dimensional space, this stretching essentially adds one more dimension to the original dataset and lets us see meaning in the data that was previously invisible: a case that is linearly inseparable in 2D becomes separable in 3D.

In 3D space we have more useful information to work with, so the decisions we can make become more accurate.
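
A minimal sketch of this effect (an added illustration, not the author's original figure; make_moons stands in for the two interleaved curves and scikit-learn is assumed to be installed):

from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# two interleaved curves: not linearly separable in the 2D input space
X, y = make_moons(n_samples=1000, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# no hidden layer: only a straight-line decision boundary is possible
linear = LogisticRegression().fit(X_train, y_train)

# one hidden layer with 3 units: the input is "stretched" into a 3D hidden space
# where the two classes become (almost) linearly separable
mlp = MLPClassifier(hidden_layer_sizes=(3,), activation='tanh',
                    max_iter=5000, random_state=0).fit(X_train, y_train)

print("linear model accuracy:", linear.score(X_test, y_test))
print("2-3-2 network accuracy:", mlp.score(X_test, y_test))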

0x2: visualizing the classification of high-dimensional data

For networks of 1/2/3 dimensions we can get a feel for how the network works by direct observation, but beyond 4 dimensions (multi-class or complex decision problems) we can no longer probe the network's inner workings through observation and imagination alone. For MNIST, for instance, the input dimensionality is the number of pixels in an image, i.e. 784. In such cases we need dimensionality-reduction visualization to probe how, layer by layer, the network adjusts its weight vectors to gradually fit the true topological structure of the input data in high-dimensional space.

We can see that at the input layer (784 dimensions) the weight vectors are still rather "mixed together" in space, but as the hidden layers are trained, the "forced adjustment" of gradient descent pushes the neurons' weight vectors in the high-dimensional space (here at least 784 dimensions) as far away from each other as possible: the larger the separation, the easier the decision at the following activation layer (e.g. sigmoid), and the smaller the resulting loss.
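
A minimal sketch of this kind of probing (an added illustration, not the author's original MNIST experiment: it uses scikit-learn's small 8x8 digits dataset instead of the 784-dim MNIST images):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.manifold import TSNE

digits = load_digits()
X, y = digits.data, digits.target   # 64-dim inputs, 10 classes

# train a small MLP classifier with one hidden layer
mlp = MLPClassifier(hidden_layer_sizes=(128,), activation='relu',
                    max_iter=300, random_state=0).fit(X, y)

# recompute the hidden-layer activations by hand: relu(X W1 + b1)
hidden = np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

# project the raw input space and the hidden space to 2D and compare how the
# classes separate: the hidden representation is usually much better clustered
for name, features in [("input (64-dim)", X), ("hidden layer (128-dim)", hidden)]:
    emb = TSNE(n_components=2, random_state=0).fit_transform(features)
    plt.figure()
    plt.scatter(emb[:, 0], emb[:, 1], c=y, cmap='tab10', s=5)
    plt.title("t-SNE of " + name)
plt.show()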

Let me add a digression here: my thoughts on why the number of neurons in a hidden layer should exceed that of the input layer.

1. First, I believe the essence of a hidden layer is to "stretch" the low-dimensional input into a higher-dimensional space and probe whether the data exhibits some spatial/topological structure there.
2. At the same time we know that stretching from low to high dimensions is conditional: once high-dimensional information has been projected down to a low dimension, part of it is lost, and stretching back from low to high dimensions has to recover that lost part.
3. Recall the loose geometric rule of thumb: two points determine a line, two lines a plane, two planes a space. It suggests that stretching from a lower-dimensional to a higher-dimensional space requires redundant information, i.e. several redundant low-dimensional vectors are needed to recover one higher-dimensional vector.

Relevant Link:

http://blog.csdn.net/unoboros/article/details/30451213
http://www.cnblogs.com/boostable/p/iage_high_space_sphere.html

 

3. Understand the data itself by visualizing the high-dimensional input dataset - the spatial structure hidden in the input samples

Dimensionality-reduction visualization not only helps us probe how a neural network works internally, it also helps us understand the "true spatial/topological meaning" of our input dataset. Here is a made-up example of my own: take an attack log captured on a host or by a gateway product and gather all of its fields, e.g. url/ip/port/cmd_line/path/time, and so on. These fields are what we can see; suppose together they form a "10-dimensional 10-tuple". But allow me a bold assumption: suppose all of these attack-event logs are originally 1024-dimensional, and the 10 dimensions we see are their "projection" from 1024 dimensions down to 10, in which some information potentially valuable for our judgment has already been lost.

Thinking along these lines, I find the physics notion of higher-dimensional spaces somewhat inspiring:

1. A 2-dimensional space contains infinitely many 1-dimensional spaces (e.g. a line contains infinitely many points), a 3-dimensional space contains infinitely many 2-dimensional ones, and so on. In a 10-dimensional space the notion of time as we experience it disappears: if we lived in such a space, then at a single "moment" we could simultaneously make unlimited decisions along every time direction, i.e. decide at the same instant to paint, sing, play ball, read, and soak our feet.
2. Conversely, this suggests that if we want to stretch low-dimensional inputs into a high-dimensional space for decision making, we should not take just a single event record from the low-dimensional space; instead we should aggregate all the events in a whole time window and stretch them into the high-dimensional space together. This idea appears frequently in, for example, "probability and statistics" and "cluster analysis".

If an anomalous event/entity seems inseparable from other so-called normal events, it may well be because it really is inseparable in the low dimensions we currently observe; by stretching these events into a higher-dimensional space we stand a better chance of finding a separating "hyperplane".

Relevant Link: 

http://cs.stanford.edu/people/karpathy/cnnembed/

 

4. Example 1: Word Embeddings in NLP - the spatial structure hidden in text word sequences

Embedding is a very effective vectorized representation for stretching low-dimensional data into a high-dimensional space. It is especially well suited to NLP problems, and I believe the embedding idea can also be applied to intrusion detection.

In NLP problems, the samples fed to the network are usually a word or a word list. Every "word" in the sample set can be viewed as a vector in a high-dimensional space whose original dimensionality is the total number of words, with each direction of that space corresponding to one word in the vocabulary. What an embedding does is compress and map these high-dimensional vectors into a relatively low-dimensional space (typically 100 to 1000 dimensions); this compressed space is called the embedding space.

The most critical piece of an embedding is the vocabulary, which is a table of weight vectors. The vocabulary is produced by training on the input samples over several rounds, letting the embedding's weight vectors gradually fit the input data. This may sound abstract, so let us use a simple example to show how embeddings are applied in practice:

1. We have a training set that, for simplicity, contains only 2 records (already segmented into word lists):
  1) the wall is blue
  2) the wall is red
2. The goal of the embedding is, from a global perspective and following the maximum-likelihood principle, to map every word into a high-dimensional space while preserving the original syntax and word meaning.
3. Since both "the wall is blue" and "the wall is red" appear in the training set, during gradient-descent training the embedding network gradually adjusts its weights W so that blue and red end up close together and similarly oriented in the space.
4. Generalizing from this simplest example, every word in the training samples is mapped into its own "category": words of the same category end up close together and pointing in similar directions.

In this word embedding space, every word is a high-dimensional vector (of the same dimensionality as the embedding space).

Looking at a single word or phrase on its own, it is just a word; but after converting it into an embedding vector, every word we see is actually a macroscopic summary derived from the whole dataset: the embedding vector encodes the word's relations to the rest of the data.

This kind of vectorized representation brings several benefits:

1. Words that are semantically close should also be close in the embedding space.
2. Semantic directionality: embedding vectors carry not only similarity but also directional relations, e.g. v("woman") − v("man") ≃ v("queen") − v("king")
  1) words with related meanings point in similar directions
  2) for analogous word pairs, the offsets between the pairs are roughly the same

As an aside, this consistency of relative offsets in the embedding can be used as a kind of grammar/semantics check, for example (see the sketch below):

she is a woman / he is a man -> she is an aunt / he is an uncle
# both of these fit the expected pattern, but if we encounter
# "he is an aunt", we can infer that the sentence does not fit
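
A minimal sketch of this analogy arithmetic with gensim (an added illustration; it assumes the gensim downloader and the pretrained "glove-wiki-gigaword-100" vectors are available, and any trained word2vec/GloVe model containing these words would work just as well):

import gensim.downloader as api

# load pretrained 100-dim GloVe vectors (downloaded on first use)
vectors = api.load("glove-wiki-gigaword-100")

# king - man + woman: "queen" should rank near the top of the results
for word, score in vectors.most_similar(positive=['king', 'woman'], negative=['man'], topn=5):
    print(word, score)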

0x1: Visualizing the spatial structure of word embeddings

We can use t-SNE to probe what a word2vec embedding has learned internally: word2vec maps the data into a high-dimensional vector space (preserving syntactic/semantic relations), while t-SNE projects those high-dimensional vectors into 2 or 3 dimensions with as little distortion as possible.

Note that the object being visualized is the embedding vocabulary, which contains the vector (in R^n) of every word learned by word2vec.

# -*- coding: utf-8 -*-

from gensim.models.word2vec import Word2Vec
from sklearn.manifold import TSNE
from sklearn.datasets import fetch_20newsgroups
import re
import matplotlib.pyplot as plt


def clean(text):
    """Remove posting header, split by sentences and words, keep only letters"""
    lines = re.split('[?!.:]\s', re.sub('^.*Lines: \d+', '', re.sub('\n', ' ', text)))
    return [re.sub('[^a-zA-Z]', ' ', line).lower().split() for line in lines]



if __name__ == '__main__':
    # download example data ( may take a while)
    train = fetch_20newsgroups()

    sentences = [line for text in train.data for line in clean(text)]

    model = Word2Vec(sentences, workers=4, size=100, min_count=50, window=10, sample=1e-3)

    print (model.most_similar('memory'))

    X = model[model.wv.vocab]

    tsne = TSNE(n_components=2)
    X_tsne = tsne.fit_transform(X)

    plt.scatter(X_tsne[:, 0], X_tsne[:, 1])
    plt.show()

As the plot shows, in the embedding vocabulary the words closest to the computing term memory are cpu and cache, which matches their real-world meaning.

Zooming in on a local region of the plot:

Relevant Link:

http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/
http://www.iro.umontreal.ca/~lisa/pointeurs/turian-wordrepresentations-acl10.pdf
https://stackoverflow.com/questions/43166762/what-is-relation-between-tsne-and-word2vec
https://stackoverflow.com/questions/40581010/how-to-run-tsne-on-word2vec-created-from-gensim
http://learningaboutdata.blogspot.com/2014/06/plotting-word-embedding-using-tsne-with.html
https://stackoverflow.com/questions/43776572/visualise-word2vec-generated-from-gensim
https://www.quora.com/How-do-I-visualise-word2vec-word-vectors
http://nlp.yvespeirsman.be/blog/visualizing-word-embeddings-with-tsne/
《word2vec_中的數學原理詳解》
http://blog.csdn.net/u014595019/article/details/51884529
http://download.csdn.net/detail/mzg12345678/7988741
https://www.tensorflow.org/versions/r0.12/tutorials/word2vec

 

5. Example 2: Paragraph Vectors in NLP - the spatial structure hidden in vectorized paragraphs

The core idea of the Paragraph/Sentence Vector model is to extract a paragraph from a document or a large block of sentences (often a string such as the title, description, or lead-in of the article) and map that paragraph into the word-vector space, thereby obtaining a vector representation of the document/sentences.

The Paragraph/Sentence Vector model does not compute a vector for a variable-length text directly. Instead, on top of the sentence's word vectors, it takes a paragraph extracted from the original text (similar to a summary), treats that paragraph as a pseudo-word (initializing a weight vector for it), and trains it unsupervised from the surrounding context; the weights are adjusted with gradient descent + backpropagation. A short sketch of the same idea using gensim's built-in Doc2Vec follows, before the custom implementation.
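
For comparison, a minimal sketch of the paragraph-vector idea using gensim's built-in Doc2Vec class (an added illustration, not part of the original Sent2Vec code below; it assumes gensim 4.x, where the paragraph vectors live in model.dv, and uses a toy corpus):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# toy corpus: each "paragraph" gets a tag and is trained jointly with its words
corpus = [
    TaggedDocument(words="the wall is blue".split(), tags=["doc_0"]),
    TaggedDocument(words="the wall is red".split(), tags=["doc_1"]),
    TaggedDocument(words="john likes to watch movies".split(), tags=["doc_2"]),
]

model = Doc2Vec(corpus, vector_size=50, window=2, min_count=1, epochs=200)

# every paragraph now has its own vector in the same space as the word vectors
print(model.dv["doc_0"])               # the 50-dim paragraph vector
print(model.dv.most_similar("doc_0"))  # paragraphs closest to doc_0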

0x1: Code

demo.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html


import logging
import sys
import os
from word2vec import Word2Vec, Sent2Vec, LineSentence

logging.basicConfig(format='%(asctime)s : %(threadName)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.info("running %s" % " ".join(sys.argv))

input_file = './modleTrain/test.txt'
# Embedding dimension = 100
# The maximum distance between the current and predicted word within a sentence = 5
# Model = CBOW
# Ignore total frequency lower than = 5.
model = Word2Vec(LineSentence(input_file), size=100, window=5, sg=0, min_count=5, workers=8)
model.save(input_file + '.model')
# get the word embedding vector vocabulary
model.save_word2vec_format(input_file + '.vec')

sent_file = './modleTrain/sent.txt'
model = Sent2Vec(LineSentence(sent_file), model_file=input_file + '.model')
model.save_sent2vec_format(sent_file + '.vec')

program = os.path.basename(sys.argv[0])
logging.info("finished running %s" % program)

word2vec.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2013 Radim Rehurek <me@radimrehurek.com>
# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html


"""
Deep learning via word2vec's "skip-gram and CBOW models", using either
hierarchical softmax or negative sampling [1]_ [2]_.

The training algorithms were originally ported from the C package https://code.google.com/p/word2vec/
and extended with additional functionality.

For a blog tutorial on gensim word2vec, with an interactive web app trained on GoogleNews, visit http://radimrehurek.com/2014/02/word2vec-tutorial/

**Install Cython with `pip install cython` to use optimized word2vec training** (70x speedup [3]_).

Initialize a model with e.g.::

>>> model = Word2Vec(sentences, size=100, window=5, min_count=5, workers=4)

Persist a model to disk with::

>>> model.save(fname)
>>> model = Word2Vec.load(fname)  # you can continue training with the loaded model!

The model can also be instantiated from an existing file on disk in the word2vec C format::

  >>> model = Word2Vec.load_word2vec_format('/tmp/vectors.txt', binary=False)  # C text format
  >>> model = Word2Vec.load_word2vec_format('/tmp/vectors.bin', binary=True)  # C binary format

You can perform various syntactic/semantic NLP word tasks with the model. Some of them
are already built-in::

  >>> model.most_similar(positive=['woman', 'king'], negative=['man'])
  [('queen', 0.50882536), ...]

  >>> model.doesnt_match("breakfast cereal dinner lunch".split())
  'cereal'

  >>> model.similarity('woman', 'man')
  0.73723527

  >>> model['computer']  # raw numpy vector of a word
  array([-0.00449447, -0.00310097,  0.02421786, ...], dtype=float32)

and so on.

If you're finished training a model (=no more updates, only querying), you can do

  >>> model.init_sims(replace=True)

to trim unneeded model memory = use (much) less RAM.

.. [1] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013.
.. [2] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality.
       In Proceedings of NIPS, 2013.
.. [3] Optimizing word2vec in gensim, http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/
"""

import logging
import sys
import os
import heapq
import time
from copy import deepcopy
import threading

try:
    from queue import Queue
except ImportError:
    from Queue import Queue

from numpy import exp, dot, zeros, outer, random, dtype, get_include, float32 as REAL, \
    uint32, seterr, array, uint8, vstack, argsort, fromstring, sqrt, newaxis, ndarray, empty, sum as np_sum

# logger = logging.getLogger("gensim.models.word2vec")
logger = logging.getLogger("sent2vec")

# from gensim import utils, matutils  # utility fnc for pickling, common scipy operations etc
import utils, matutils  # utility fnc for pickling, common scipy operations etc
from six import iteritems, itervalues, string_types
from six.moves import xrange

try:
    from gensim_addons.models.word2vec_inner import train_sentence_sg, train_sentence_cbow, FAST_VERSION
except ImportError:
    try:
        # try to compile and use the faster cython version
        import pyximport

        models_dir = os.path.dirname(__file__) or os.getcwd()
        pyximport.install(setup_args={"include_dirs": [models_dir, get_include()]})
        from word2vec_inner import train_sentence_sg, train_sentence_cbow, FAST_VERSION
    except:
        # failed... fall back to plain numpy (20-80x slower training than the above)
        FAST_VERSION = -1


        def train_sentence_sg(model, sentence, alpha, work=None):
            """
            Update skip-gram model by training on a single sentence.

            The sentence is a list of Vocab objects (or None, where the corresponding
            word is not in the vocabulary. Called internally from `Word2Vec.train()`.

            This is the non-optimized, Python version. If you have cython installed, gensim
            will use the optimized version from word2vec_inner instead.

            """
            if model.negative:
                # precompute negative labels
                labels = zeros(model.negative + 1)
                labels[0] = 1.0

            for pos, word in enumerate(sentence):
                if word is None:
                    continue  # OOV word in the input sentence => skip
                reduced_window = random.randint(model.window)  # `b` in the original word2vec code

                # now go over all words from the (reduced) window, predicting each one in turn
                start = max(0, pos - model.window + reduced_window)
                for pos2, word2 in enumerate(sentence[start: pos + model.window + 1 - reduced_window], start):
                    # don't train on OOV words and on the `word` itself
                    if word2 and not (pos2 == pos):
                        l1 = model.syn0[word2.index]
                        neu1e = zeros(l1.shape)

                        if model.hs:
                            # work on the entire tree at once, to push as much work into numpy's C routines as possible (performance)
                            l2a = deepcopy(model.syn1[word.point])  # 2d matrix, codelen x layer1_size
                            fa = 1.0 / (1.0 + exp(-dot(l1, l2a.T)))  # propagate hidden -> output
                            ga = (
                                 1 - word.code - fa) * alpha  # vector of error gradients multiplied by the learning rate
                            model.syn1[word.point] += outer(ga, l1)  # learn hidden -> output
                            neu1e += dot(ga, l2a)  # save error

                        if model.negative:
                            # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)
                            word_indices = [word.index]
                            while len(word_indices) < model.negative + 1:
                                w = model.table[random.randint(model.table.shape[0])]
                                if w != word.index:
                                    word_indices.append(w)
                            l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size
                            fb = 1. / (1. + exp(-dot(l1, l2b.T)))  # propagate hidden -> output
                            gb = (labels - fb) * alpha  # vector of error gradients multiplied by the learning rate
                            model.syn1neg[word_indices] += outer(gb, l1)  # learn hidden -> output
                            neu1e += dot(gb, l2b)  # save error

                        model.syn0[word2.index] += neu1e  # learn input -> hidden

            return len([word for word in sentence if word is not None])


        def train_sentence_cbow(model, sentence, alpha, work=None, neu1=None):
            """
            Update CBOW model by training on a single sentence.

            The sentence is a list of Vocab objects (or None, where the corresponding
            word is not in the vocabulary. Called internally from `Word2Vec.train()`.

            This is the non-optimized, Python version. If you have cython installed, gensim
            will use the optimized version from word2vec_inner instead.

            """
            if model.negative:
                # precompute negative labels
                labels = zeros(model.negative + 1)
                labels[0] = 1.

            for pos, word in enumerate(sentence):
                if word is None:
                    continue  # OOV word in the input sentence => skip
                reduced_window = random.randint(model.window)  # `b` in the original word2vec code
                start = max(0, pos - model.window + reduced_window)
                window_pos = enumerate(sentence[start: pos + model.window + 1 - reduced_window], start)
                word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)]
                l1 = np_sum(model.syn0[word2_indices], axis=0)  # 1 x layer1_size
                if word2_indices and model.cbow_mean:
                    l1 /= len(word2_indices)
                neu1e = zeros(l1.shape)

                if model.hs:
                    l2a = model.syn1[word.point]  # 2d matrix, codelen x layer1_size
                    fa = 1. / (1. + exp(-dot(l1, l2a.T)))  # propagate hidden -> output
                    ga = (1. - word.code - fa) * alpha  # vector of error gradients multiplied by the learning rate
                    model.syn1[word.point] += outer(ga, l1)  # learn hidden -> output
                    neu1e += dot(ga, l2a)  # save error

                if model.negative:
                    # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)
                    word_indices = [word.index]
                    while len(word_indices) < model.negative + 1:
                        w = model.table[random.randint(model.table.shape[0])]
                        if w != word.index:
                            word_indices.append(w)
                    l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size
                    fb = 1. / (1. + exp(-dot(l1, l2b.T)))  # propagate hidden -> output
                    gb = (labels - fb) * alpha  # vector of error gradients multiplied by the learning rate
                    model.syn1neg[word_indices] += outer(gb, l1)  # learn hidden -> output
                    neu1e += dot(gb, l2b)  # save error

                model.syn0[word2_indices] += neu1e  # learn input -> hidden, here for all words in the window separately

            return len([word for word in sentence if word is not None])


class Vocab(object):
    """A single vocabulary item, used internally for constructing binary trees (incl. both word leaves and inner nodes)."""

    def __init__(self, **kwargs):
        self.count = 0
        self.__dict__.update(kwargs)

    def __lt__(self, other):  # used for sorting in a priority queue
        return self.count < other.count

    def __str__(self):
        vals = ['%s:%r' % (key, self.__dict__[key]) for key in sorted(self.__dict__) if not key.startswith('_')]
        return "<" + ', '.join(vals) + ">"


class Word2Vec(utils.SaveLoad):
    """
    Class for training, using and evaluating neural networks described in https://code.google.com/p/word2vec/

    The model can be stored/loaded via its `save()` and `load()` methods, or stored/loaded in a format
    compatible with the original word2vec implementation via `save_word2vec_format()` and `load_word2vec_format()`.

    """

    def __init__(self, sentences=None, size=100, alpha=0.025, window=5, min_count=5,
                 sample=0, seed=1, workers=1, min_alpha=0.0001, sg=1, hs=1, negative=0, cbow_mean=0):
        """
        Initialize the model from an iterable of `sentences`. Each sentence is a
        list of words (unicode strings) that will be used for training.

        The `sentences` iterable can be simply a list, but for larger corpora,
        consider an iterable that streams the sentences directly from disk/network.
        See :class:`BrownCorpus`, :class:`Text8Corpus` or :class:`LineSentence` in
        this module for such examples.

        If you don't supply `sentences`, the model is left uninitialized -- use if
        you plan to initialize it in some other way.

        `sg` defines the training algorithm. By default (`sg=1`), skip-gram is used. Otherwise, `cbow` is employed.
        `size` is the dimensionality of the feature vectors.
        `window` is the maximum distance between the current and predicted word within a sentence.
        `alpha` is the initial learning rate (will linearly drop to zero as training progresses).
        `seed` = for the random number generator.
        `min_count` = ignore all words with total frequency lower than this.
        `sample` = threshold for configuring which higher-frequency words are randomly downsampled;
                default is 0 (off), useful value is 1e-5.
        `workers` = use this many worker threads to train the model (=faster training with multicore machines)
        `hs` = if 1 (default), hierarchical sampling will be used for model training (else set to 0)
        `negative` = if > 0, negative sampling will be used, the int for negative
                specifies how many "noise words" should be drawn (usually between 5-20)
        `cbow_mean` = if 0 (default), use the sum of the context word vectors. If 1, use the mean.
                Only applies when cbow is used.
        """
        self.vocab = {}  # mapping from a word (string) to a Vocab object
        self.index2word = []  # map from a word's matrix index (int) to word (string)
        self.sg = int(sg)
        self.table = None  # for negative sampling --> this needs a lot of RAM! consider setting back to None before saving
        self.layer1_size = int(size)
        if size % 4 != 0:
            logger.warning("consider setting layer size to a multiple of 4 for greater performance")
        self.alpha = float(alpha)
        self.window = int(window)
        self.seed = seed
        self.min_count = min_count
        self.sample = sample
        self.workers = workers
        self.min_alpha = min_alpha
        self.hs = hs
        self.negative = negative
        self.cbow_mean = int(cbow_mean)
        if sentences is not None:
            self.build_vocab(sentences)
            self.train(sentences)

    def make_table(self, table_size=100000000, power=0.75):
        """
        Create a table using stored vocabulary word counts for drawing random words in the negative
        sampling training routines.

        Called internally from `build_vocab()`.

        """
        logger.info("constructing a table with noise distribution from %i words" % len(self.vocab))
        # table (= list of words) of noise distribution for negative sampling
        vocab_size = len(self.index2word)
        self.table = zeros(table_size, dtype=uint32)

        if not vocab_size:
            logger.warning("empty vocabulary in word2vec, is this intended?")
            return

        # compute sum of all power (Z in paper)
        train_words_pow = float(sum([self.vocab[word].count ** power for word in self.vocab]))
        # go through the whole table and fill it up with the word indexes proportional to a word's count**power
        widx = 0
        # normalize count^0.75 by Z
        d1 = self.vocab[self.index2word[widx]].count ** power / train_words_pow
        for tidx in xrange(table_size):
            self.table[tidx] = widx
            if 1.0 * tidx / table_size > d1:
                widx += 1
                d1 += self.vocab[self.index2word[widx]].count ** power / train_words_pow
            if widx >= vocab_size:
                widx = vocab_size - 1

    def create_binary_tree(self):
        """
        Create a binary Huffman tree using stored vocabulary word counts. Frequent words
        will have shorter binary codes. Called internally from `build_vocab()`.

        """
        logger.info("constructing a huffman tree from %i words" % len(self.vocab))

        # build the huffman tree
        heap = list(itervalues(self.vocab))
        heapq.heapify(heap)

        # Build the Huffman binary tree: repeatedly pop the two lowest-count nodes, make them the left/right children
        # (smaller on the left), and push the merged node back into the heap as their parent. The tree grows bottom-up,
        # so the lower a word's count, the deeper its leaf.
        for i in xrange(len(self.vocab) - 1):
            min1, min2 = heapq.heappop(heap), heapq.heappop(heap)
            heapq.heappush(heap, Vocab(count=min1.count + min2.count, index=i + len(self.vocab), left=min1, right=min2))

        # recurse over the tree, assigning a binary code to each vocabulary word
        if heap:
            max_depth, stack = 0, [(heap[0], [], [])]
            while stack:
                node, codes, points = stack.pop()
                if node.index < len(self.vocab):
                    # leaf node => store its path from the root
                    node.code, node.point = codes, points
                    max_depth = max(len(codes), max_depth)
                else:
                    # inner node => continue recursion
                    points = array(list(points) + [node.index - len(self.vocab)], dtype=uint32)
                    stack.append((node.left, array(list(codes) + [0], dtype=uint8), points))
                    stack.append((node.right, array(list(codes) + [1], dtype=uint8), points))

            logger.info("built huffman tree with maximum node depth %i" % max_depth)

    def precalc_sampling(self):
        """Precalculate each vocabulary item's threshold for sampling"""
        if self.sample:
            logger.info(
                "frequent-word downsampling, threshold %g; progress tallies will be approximate" % (self.sample))
            total_words = sum(v.count for v in itervalues(self.vocab))
            threshold_count = float(self.sample) * total_words
        # compute each word's downsampling probability from its occurrence count
        for v in itervalues(self.vocab):
            prob = (sqrt(v.count / threshold_count) + 1) * (threshold_count / v.count) if self.sample else 1.0
            v.sample_probability = min(prob, 1.0)
            # print v

    def build_vocab(self, sentences):
        """
        Build vocabulary from a sequence of sentences (can be a once-only generator stream).
        Each sentence must be a list of unicode strings.

        """
        logger.info("collecting all words and their counts")
        sentence_no, vocab = -1, {}
        total_words = 0
        # count the number of occurrences of each word in the training set
        for sentence_no, sentence in enumerate(sentences):
            if sentence_no % 10000 == 0:
                logger.info("PROGRESS: at sentence #%i, processed %i words and %i word types" % (
                sentence_no, total_words, len(vocab)))
            for word in sentence:
                total_words += 1
                if word in vocab:
                    vocab[word].count += 1
                else:
                    vocab[word] = Vocab(count=1)
        logger.info("collected %i word types from a corpus of %i words and %i sentences" % (
        len(vocab), total_words, sentence_no + 1))

        # assign a unique index to each word
        # assign each word an index in the order encountered (not sorted by frequency), so that words and indexes
        # in the vocabulary can be converted back and forth
        self.vocab, self.index2word = {}, []
        for word, v in iteritems(vocab):
            if v.count >= self.min_count:
                v.index = len(self.vocab)
                self.index2word.append(word)
                self.vocab[word] = v
                # print "word: ", word
                # print "v:", v
        logger.info("total %i word types after removing those with count<%s" % (len(self.vocab), self.min_count))
        # print self.vocab
        # print self.index2word

        # hierarchical softmax
        if self.hs:
            # add info about each word's Huffman encoding
            self.create_binary_tree()
        if self.negative:
            # build the table for drawing random words (for negative sampling)
            self.make_table()
        # precalculate downsampling thresholds
        self.precalc_sampling()
        self.reset_weights()

    def train(self, sentences, total_words=None, word_count=0, chunksize=100):
        """
        Update the model's neural weights from a sequence of sentences (can be a once-only generator stream).
        Each sentence must be a list of unicode strings.

        """
        if FAST_VERSION < 0:
            import warnings
            warnings.warn(
                "Cython compilation failed, training will be slow. Do you have Cython installed? `pip install cython`")
        logger.info("training model with %i workers on %i vocabulary and %i features, "
                    "using 'skipgram'=%s 'hierarchical softmax'=%s 'subsample'=%s and 'negative sampling'=%s" %
                    (self.workers, len(self.vocab), self.layer1_size, self.sg, self.hs, self.sample, self.negative))

        if not self.vocab:
            raise RuntimeError("you must first build vocabulary before training the model")

        start, next_report = time.time(), [1.0]
        word_count = [word_count]
        total_words = total_words or int(sum(v.count * v.sample_probability for v in itervalues(self.vocab)))
        jobs = Queue(
            maxsize=2 * self.workers)  # buffer ahead only a limited number of jobs.. this is the reason we can't simply use ThreadPool :(
        lock = threading.Lock()  # for shared state (=number of words trained so far, log reports...)

        def worker_train():
            """Train the model, lifting lists of sentences from the jobs queue."""
            work = zeros(self.layer1_size, dtype=REAL)  # each thread must have its own work memory
            neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL)

            while True:
                job = jobs.get()
                if job is None:  # data finished, exit
                    break
                # update the learning rate before every job
                alpha = max(self.min_alpha, self.alpha * (1 - 1.0 * word_count[0] / total_words))
                # how many words did we train on? out-of-vocabulary (unknown) words do not count
                if self.sg:
                    job_words = sum(train_sentence_sg(self, sentence, alpha, work) for sentence in job)
                else:
                    job_words = sum(train_sentence_cbow(self, sentence, alpha, work, neu1) for sentence in job)
                with lock:
                    word_count[0] += job_words
                    elapsed = time.time() - start
                    if elapsed >= next_report[0]:
                        logger.info("PROGRESS: at %.2f%% words, alpha %.05f, %.0f words/s" %
                                    (100.0 * word_count[0] / total_words, alpha,
                                     word_count[0] / elapsed if elapsed else 0.0))
                        next_report[
                            0] = elapsed + 1.0  # don't flood the log, wait at least a second between progress reports

        workers = [threading.Thread(target=worker_train) for _ in xrange(self.workers)]
        for thread in workers:
            thread.daemon = True  # make interrupting the process with ctrl+c easier
            thread.start()

        def prepare_sentences():
            for sentence in sentences:
                # avoid calling random_sample() where prob >= 1, to speed things up a little:
                sampled = [self.vocab[word] for word in sentence
                           if word in self.vocab and (self.vocab[word].sample_probability >= 1.0 or self.vocab[
                        word].sample_probability >= random.random_sample())]
                yield sampled

        # convert input strings to Vocab objects (eliding OOV/downsampled words), and start filling the jobs queue
        for job_no, job in enumerate(utils.grouper(prepare_sentences(), chunksize)):
            logger.debug("putting job #%i in the queue, qsize=%i" % (job_no, jobs.qsize()))
            jobs.put(job)
        logger.info("reached the end of input; waiting to finish %i outstanding jobs" % jobs.qsize())
        for _ in xrange(self.workers):
            jobs.put(None)  # give the workers heads up that they can finish -- no more work!

        for thread in workers:
            thread.join()

        elapsed = time.time() - start
        logger.info("training on %i words took %.1fs, %.0f words/s" %
                    (word_count[0], elapsed, word_count[0] / elapsed if elapsed else 0.0))

        return word_count[0]

    def reset_weights(self):
        """Reset all projection weights to an initial (untrained) state, but keep the existing vocabulary."""
        logger.info("resetting layer weights")
        random.seed(self.seed)
        self.syn0 = empty((len(self.vocab), self.layer1_size), dtype=REAL)
        # randomize weights vector by vector, rather than materializing a huge random matrix in RAM at once
        for i in xrange(len(self.vocab)):
            self.syn0[i] = (random.rand(self.layer1_size) - 0.5) / self.layer1_size
        if self.hs:
            self.syn1 = zeros((len(self.vocab), self.layer1_size), dtype=REAL)
        if self.negative:
            self.syn1neg = zeros((len(self.vocab), self.layer1_size), dtype=REAL)
        self.syn0norm = None

    def save_word2vec_format(self, fname, fvocab=None, binary=False):
        """
        Store the input-hidden weight matrix in the same format used by the original
        C word2vec-tool, for compatibility.

        """
        if fvocab is not None:
            logger.info("Storing vocabulary in %s" % (fvocab))
            with utils.smart_open(fvocab, 'wb') as vout:
                for word, vocab in sorted(iteritems(self.vocab), key=lambda item: -item[1].count):
                    vout.write(utils.to_utf8("%s %s\n" % (word, vocab.count)))
        logger.info("storing %sx%s projection weights into %s" % (len(self.vocab), self.layer1_size, fname))
        assert (len(self.vocab), self.layer1_size) == self.syn0.shape
        with utils.smart_open(fname, 'wb') as fout:
            fout.write(utils.to_utf8("%s %s\n" % self.syn0.shape))
            # store in sorted order: most frequent words at the top
            for word, vocab in sorted(iteritems(self.vocab), key=lambda item: -item[1].count):
                row = self.syn0[vocab.index]
                if binary:
                    fout.write(utils.to_utf8(word) + b" " + row.tostring())
                else:
                    fout.write(utils.to_utf8("%s %s\n" % (word, ' '.join("%f" % val for val in row))))

    @classmethod
    def load_word2vec_format(cls, fname, fvocab=None, binary=False, norm_only=True):
        """
        Load the input-hidden weight matrix from the original C word2vec-tool format.

        Note that the information stored in the file is incomplete (the binary tree is missing),
        so while you can query for word similarity etc., you cannot continue training
        with a model loaded this way.

        `binary` is a boolean indicating whether the data is in binary word2vec format.
        `norm_only` is a boolean indicating whether to only store normalised word2vec vectors in memory.
        Word counts are read from `fvocab` filename, if set (this is the file generated
        by `-save-vocab` flag of the original C tool).
        """
        counts = None
        if fvocab is not None:
            logger.info("loading word counts from %s" % (fvocab))
            counts = {}
            with utils.smart_open(fvocab) as fin:
                for line in fin:
                    word, count = utils.to_unicode(line).strip().split()
                    counts[word] = int(count)

        logger.info("loading projection weights from %s" % (fname))
        with utils.smart_open(fname) as fin:
            header = utils.to_unicode(fin.readline())
            vocab_size, layer1_size = map(int, header.split())  # throws for invalid file format
            result = Word2Vec(size=layer1_size)
            result.syn0 = zeros((vocab_size, layer1_size), dtype=REAL)
            if binary:
                binary_len = dtype(REAL).itemsize * layer1_size
                for line_no in xrange(vocab_size):
                    # mixed text and binary: read text first, then binary
                    word = []
                    while True:
                        ch = fin.read(1)
                        if ch == b' ':
                            break
                        if ch != b'\n':  # ignore newlines in front of words (some binary files have newline, some don't)
                            word.append(ch)
                    word = utils.to_unicode(b''.join(word))
                    if counts is None:
                        result.vocab[word] = Vocab(index=line_no, count=vocab_size - line_no)
                    elif word in counts:
                        result.vocab[word] = Vocab(index=line_no, count=counts[word])
                    else:
                        logger.warning("vocabulary file is incomplete")
                        result.vocab[word] = Vocab(index=line_no, count=None)
                    result.index2word.append(word)
                    result.syn0[line_no] = fromstring(fin.read(binary_len), dtype=REAL)
            else:
                for line_no, line in enumerate(fin):
                    parts = utils.to_unicode(line).split()
                    if len(parts) != layer1_size + 1:
                        raise ValueError("invalid vector on line %s (is this really the text format?)" % (line_no))
                    word, weights = parts[0], map(REAL, parts[1:])
                    if counts is None:
                        result.vocab[word] = Vocab(index=line_no, count=vocab_size - line_no)
                    elif word in counts:
                        result.vocab[word] = Vocab(index=line_no, count=counts[word])
                    else:
                        logger.warning("vocabulary file is incomplete")
                        result.vocab[word] = Vocab(index=line_no, count=None)
                    result.index2word.append(word)
                    result.syn0[line_no] = weights
        logger.info("loaded %s matrix from %s" % (result.syn0.shape, fname))
        result.init_sims(norm_only)
        return result

    def most_similar(self, positive=[], negative=[], topn=10):
        """
        Find the top-N most similar words. Positive words contribute positively towards the
        similarity, negative words negatively.

        This method computes cosine similarity between a simple mean of the projection
        weight vectors of the given words, and corresponds to the `word-analogy` and
        `distance` scripts in the original word2vec implementation.

        Example::

          >>> trained_model.most_similar(positive=['woman', 'king'], negative=['man'])
          [('queen', 0.50882536), ...]

        """
        self.init_sims()

        if isinstance(positive, string_types) and not negative:
            # allow calls like most_similar('dog'), as a shorthand for most_similar(['dog'])
            positive = [positive]

        # add weights for each word, if not already present; default to 1.0 for positive and -1.0 for negative words
        positive = [(word, 1.0) if isinstance(word, string_types + (ndarray,))
                    else word for word in positive]
        negative = [(word, -1.0) if isinstance(word, string_types + (ndarray,))
                    else word for word in negative]

        # compute the weighted average of all words
        all_words, mean = set(), []
        for word, weight in positive + negative:
            if isinstance(word, ndarray):
                mean.append(weight * word)
            elif word in self.vocab:
                mean.append(weight * self.syn0norm[self.vocab[word].index])
                all_words.add(self.vocab[word].index)
            else:
                raise KeyError("word '%s' not in vocabulary" % word)
        if not mean:
            raise ValueError("cannot compute similarity with no input")
        mean = matutils.unitvec(array(mean).mean(axis=0)).astype(REAL)

        dists = dot(self.syn0norm, mean)
        if not topn:
            return dists
        best = argsort(dists)[::-1][:topn + len(all_words)]
        # ignore (don't return) words from the input
        result = [(self.index2word[sim], float(dists[sim])) for sim in best if sim not in all_words]
        return result[:topn]

    def doesnt_match(self, words):
        """
        Which word from the given list doesn't go with the others?

        Example::

          >>> trained_model.doesnt_match("breakfast cereal dinner lunch".split())
          'cereal'

        """
        self.init_sims()

        words = [word for word in words if word in self.vocab]  # filter out OOV words
        logger.debug("using words %s" % words)
        if not words:
            raise ValueError("cannot select a word from an empty list")
        vectors = vstack(self.syn0norm[self.vocab[word].index] for word in words).astype(REAL)
        mean = matutils.unitvec(vectors.mean(axis=0)).astype(REAL)
        dists = dot(vectors, mean)
        return sorted(zip(dists, words))[0][1]

    def __getitem__(self, word):
        """
        Return a word's representations in vector space, as a 1D numpy array.

        Example::

          >>> trained_model['woman']
          array([ -1.40128313e-02, ...]

        """
        return self.syn0[self.vocab[word].index]

    def __contains__(self, word):
        return word in self.vocab

    def similarity(self, w1, w2):
        """
        Compute cosine similarity between two words.

        Example::

          >>> trained_model.similarity('woman', 'man')
          0.73723527

          >>> trained_model.similarity('woman', 'woman')
          1.0

        """
        return dot(matutils.unitvec(self[w1]), matutils.unitvec(self[w2]))

    def init_sims(self, replace=False):
        """
        Precompute L2-normalized vectors.

        If `replace` is set, forget the original vectors and only keep the normalized
        ones = saves lots of memory!

        Note that you **cannot continue training** after doing a replace. The model becomes
        effectively read-only = you can call `most_similar`, `similarity` etc., but not `train`.

        """
        if getattr(self, 'syn0norm', None) is None or replace:
            logger.info("precomputing L2-norms of word weight vectors")
            if replace:
                for i in xrange(self.syn0.shape[0]):
                    self.syn0[i, :] /= sqrt((self.syn0[i, :] ** 2).sum(-1))
                self.syn0norm = self.syn0
                if hasattr(self, 'syn1'):
                    del self.syn1
            else:
                self.syn0norm = (self.syn0 / sqrt((self.syn0 ** 2).sum(-1))[..., newaxis]).astype(REAL)

    def accuracy(self, questions, restrict_vocab=30000):
        """
        Compute accuracy of the model. `questions` is a filename where lines are
        4-tuples of words, split into sections by ": SECTION NAME" lines.
        See https://code.google.com/p/word2vec/source/browse/trunk/questions-words.txt for an example.

        The accuracy is reported (=printed to log and returned as a list) for each
        section separately, plus there's one aggregate summary at the end.

        Use `restrict_vocab` to ignore all questions containing a word whose frequency
        is not in the top-N most frequent words (default top 30,000).

        This method corresponds to the `compute-accuracy` script of the original C word2vec.

        """
        ok_vocab = dict(sorted(iteritems(self.vocab),
                               key=lambda item: -item[1].count)[:restrict_vocab])
        ok_index = set(v.index for v in itervalues(ok_vocab))

        def log_accuracy(section):
            correct, incorrect = section['correct'], section['incorrect']
            if correct + incorrect > 0:
                logger.info("%s: %.1f%% (%i/%i)" %
                            (section['section'], 100.0 * correct / (correct + incorrect),
                             correct, correct + incorrect))

        sections, section = [], None
        for line_no, line in enumerate(utils.smart_open(questions)):
            # TODO: use level3 BLAS (=evaluate multiple questions at once), for speed
            line = utils.to_unicode(line)
            if line.startswith(': '):
                # a new section starts => store the old section
                if section:
                    sections.append(section)
                    log_accuracy(section)
                section = {'section': line.lstrip(': ').strip(), 'correct': 0, 'incorrect': 0}
            else:
                if not section:
                    raise ValueError("missing section header before line #%i in %s" % (line_no, questions))
                try:
                    a, b, c, expected = [word.lower() for word in
                                         line.split()]  # TODO assumes vocabulary preprocessing uses lowercase, too...
                except:
                    logger.info("skipping invalid line #%i in %s" % (line_no, questions))
                    continue  # malformed line: skip it rather than reusing stale a/b/c/expected values
                if a not in ok_vocab or b not in ok_vocab or c not in ok_vocab or expected not in ok_vocab:
                    logger.debug("skipping line #%i with OOV words: %s" % (line_no, line))
                    continue

                ignore = set(self.vocab[v].index for v in [a, b, c])  # indexes of words to ignore
                predicted = None
                # find the most likely prediction, ignoring OOV words and input words
                for index in argsort(self.most_similar(positive=[b, c], negative=[a], topn=False))[::-1]:
                    if index in ok_index and index not in ignore:
                        predicted = self.index2word[index]
                        if predicted != expected:
                            logger.debug("%s: expected %s, predicted %s" % (line.strip(), expected, predicted))
                        break
                section['correct' if predicted == expected else 'incorrect'] += 1
        if section:
            # store the last section, too
            sections.append(section)
            log_accuracy(section)

        total = {'section': 'total', 'correct': sum(s['correct'] for s in sections),
                 'incorrect': sum(s['incorrect'] for s in sections)}
        log_accuracy(total)
        sections.append(total)
        return sections

    def __str__(self):
        return "Word2Vec(vocab=%s, size=%s, alpha=%s)" % (len(self.index2word), self.layer1_size, self.alpha)

    def save(self, *args, **kwargs):
        kwargs['ignore'] = kwargs.get('ignore', ['syn0norm'])  # don't bother storing the cached normalized vectors
        super(Word2Vec, self).save(*args, **kwargs)


class Sent2Vec(utils.SaveLoad):
    def __init__(self, sentences, model_file=None, alpha=0.025, window=5, sample=0, seed=1,
                 workers=1, min_alpha=0.0001, sg=1, hs=1, negative=0, cbow_mean=0, iteration=1):
        self.sg = int(sg)
        self.table = None  # for negative sampling --> this needs a lot of RAM! consider setting back to None before saving
        self.alpha = float(alpha)
        self.window = int(window)
        self.seed = seed
        self.sample = sample
        self.workers = workers
        self.min_alpha = min_alpha
        self.hs = hs
        self.negative = negative
        self.cbow_mean = int(cbow_mean)
        self.iteration = iteration

        if model_file and sentences:
            self.w2v = Word2Vec.load(model_file)
            self.vocab = self.w2v.vocab
            self.layer1_size = self.w2v.layer1_size
            self.reset_sent_vec(sentences)
            for i in range(iteration):
                self.train_sent(sentences)

    def reset_sent_vec(self, sentences):
        """Reset all projection weights to an initial (untrained) state, but keep the existing vocabulary."""
        logger.info("resetting vectors for sentences")
        random.seed(self.seed)
        self.sents_len = 0
        for sent in sentences:
            self.sents_len += 1
        self.sents = empty((self.sents_len, self.layer1_size), dtype=REAL)
        # randomize weights vector by vector, rather than materializing a huge random matrix in RAM at once
        for i in xrange(self.sents_len):
            self.sents[i] = (random.rand(self.layer1_size) - 0.5) / self.layer1_size

    def train_sent(self, sentences, total_words=None, word_count=0, sent_count=0, chunksize=100):
        """
        Update the model's neural weights from a sequence of sentences (can be a once-only generator stream).
        Each sentence must be a list of unicode strings.

        """
        logger.info("training model with %i workers on %i sentences and %i features, "
                    "using 'skipgram'=%s 'hierarchical softmax'=%s 'subsample'=%s and 'negative sampling'=%s" %
                    (self.workers, self.sents_len, self.layer1_size, self.sg, self.hs, self.sample, self.negative))

        if not self.vocab:
            raise RuntimeError("you must first build vocabulary before training the model")

        start, next_report = time.time(), [1.0]
        word_count = [word_count]
        sent_count = [sent_count]
        total_words = total_words or sum(v.count for v in itervalues(self.vocab))
        total_sents = self.sents_len * self.iteration
        jobs = Queue(
            maxsize=2 * self.workers)  # buffer ahead only a limited number of jobs.. this is the reason we can't simply use ThreadPool :(
        lock = threading.Lock()  # for shared state (=number of words trained so far, log reports...)

        def worker_train():
            """Train the model, lifting lists of sentences from the jobs queue."""
            work = zeros(self.layer1_size, dtype=REAL)  # each thread must have its own work memory
            neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL)

            while True:
                job = jobs.get()
                if job is None:  # data finished, exit
                    break
                    # update the learning rate before every job
                alpha = max(self.min_alpha, self.alpha * (1 - 1.0 * word_count[0] / total_words))
                if self.sg:
                    job_words = sum(self.train_sent_vec_sg(self.w2v, sent_no, sentence, alpha, work)
                                    for sent_no, sentence in job)
                else:
                    job_words = sum(self.train_sent_vec_cbow(self.w2v, sent_no, sentence, alpha, work, neu1)
                                    for sent_no, sentence in job)
                with lock:
                    word_count[0] += job_words
                    sent_count[0] += chunksize
                    elapsed = time.time() - start
                    if elapsed >= next_report[0]:
                        logger.info("PROGRESS: at %.2f%% sents, alpha %.05f, %.0f words/s" %
                                    (100.0 * sent_count[0] / total_sents, alpha,
                                     word_count[0] / elapsed if elapsed else 0.0))
                        next_report[
                            0] = elapsed + 1.0  # don't flood the log, wait at least a second between progress reports

        workers = [threading.Thread(target=worker_train) for _ in xrange(self.workers)]
        for thread in workers:
            thread.daemon = True  # make interrupting the process with ctrl+c easier
            thread.start()

        def prepare_sentences():
            for sent_no, sentence in enumerate(sentences):
                # avoid calling random_sample() where prob >= 1, to speed things up a little:
                # sampled = [self.vocab[word] for word in sentence
                #            if word in self.vocab and (self.vocab[word].sample_probability >= 1.0 or self.vocab[word].sample_probability >= random.random_sample())]
                sampled = [self.vocab.get(word, None) for word in sentence]
                yield (sent_no, sampled)

        # convert input strings to Vocab objects (eliding OOV/downsampled words), and start filling the jobs queue
        for job_no, job in enumerate(utils.grouper(prepare_sentences(), chunksize)):
            logger.debug("putting job #%i in the queue, qsize=%i" % (job_no, jobs.qsize()))
            jobs.put(job)
        logger.info("reached the end of input; waiting to finish %i outstanding jobs" % jobs.qsize())
        for _ in xrange(self.workers):
            jobs.put(None)  # give the workers heads up that they can finish -- no more work!

        for thread in workers:
            thread.join()

        elapsed = time.time() - start
        logger.info("training on %i words took %.1fs, %.0f words/s" %
                    (word_count[0], elapsed, word_count[0] / elapsed if elapsed else 0.0))

        return word_count[0]

    def train_sent_vec_cbow(self, model, sent_no, sentence, alpha, work=None, neu1=None):
        """
        Update CBOW model by training on a single sentence.

        The sentence is a list of Vocab objects (or None, where the corresponding
        word is not in the vocabulary. Called internally from `Word2Vec.train()`.

        This is the non-optimized, Python version. If you have cython installed, gensim
        will use the optimized version from word2vec_inner instead.

        """
        sent_vec = self.sents[sent_no]
        if self.negative:
            # precompute negative labels
            labels = zeros(self.negative + 1)
            labels[0] = 1.

        for pos, word in enumerate(sentence):
            if word is None:
                continue  # OOV word in the input sentence => skip
            reduced_window = random.randint(self.window)  # `b` in the original word2vec code
            start = max(0, pos - self.window + reduced_window)
            window_pos = enumerate(sentence[start: pos + self.window + 1 - reduced_window], start)
            word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)]
            l1 = np_sum(model.syn0[word2_indices], axis=0)  # 1 x layer1_size
            l1 += sent_vec
            if word2_indices and self.cbow_mean:
                l1 /= len(word2_indices)
            neu1e = zeros(l1.shape)

            if self.hs:
                l2a = model.syn1[word.point]  # 2d matrix, codelen x layer1_size
                fa = 1. / (1. + exp(-dot(l1, l2a.T)))  # propagate hidden -> output
                ga = (1. - word.code - fa) * alpha  # vector of error gradients multiplied by the learning rate
                # model.syn1[word.point] += outer(ga, l1) # learn hidden -> output
                neu1e += dot(ga, l2a)  # save error

            if self.negative:
                # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)
                word_indices = [word.index]
                while len(word_indices) < self.negative + 1:
                    w = model.table[random.randint(model.table.shape[0])]
                    if w != word.index:
                        word_indices.append(w)
                l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size
                fb = 1. / (1. + exp(-dot(l1, l2b.T)))  # propagate hidden -> output
                gb = (labels - fb) * alpha  # vector of error gradients multiplied by the learning rate
                # model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output
                neu1e += dot(gb, l2b)  # save error

            # model.syn0[word2_indices] += neu1e # learn input -> hidden, here for all words in the window separately
            self.sents[sent_no] += neu1e  # learn input -> hidden, here for all words in the window separately

        return len([word for word in sentence if word is not None])

    def train_sent_vec_sg(self, model, sent_no, sentence, alpha, work=None):
        """
        Update skip-gram model by training on a single sentence.

        The sentence is a list of Vocab objects (or None, where the corresponding
        word is not in the vocabulary). Called internally from `Word2Vec.train()`.

        This is the non-optimized, Python version. If you have cython installed, gensim
        will use the optimized version from word2vec_inner instead.

        """
        if self.negative:
            # precompute negative labels
            labels = zeros(self.negative + 1)
            labels[0] = 1.0

        for pos, word in enumerate(sentence):
            if word is None:
                continue  # OOV word in the input sentence => skip
            reduced_window = random.randint(model.window)  # `b` in the original word2vec code

            # now go over all words from the (reduced) window, predicting each one in turn
            start = max(0, pos - model.window + reduced_window)
            for pos2, word2 in enumerate(sentence[start: pos + model.window + 1 - reduced_window], start):
                # don't train on OOV words (the centre word itself is kept here, since l1 is the sentence vector)
                if word2:
                    # l1 = model.syn0[word.index]
                    l1 = self.sents[sent_no]
                    neu1e = zeros(l1.shape)

                    if self.hs:
                        # work on the entire tree at once, to push as much work into numpy's C routines as possible (performance)
                        l2a = deepcopy(model.syn1[word2.point])  # 2d matrix, codelen x layer1_size
                        fa = 1.0 / (1.0 + exp(-dot(l1, l2a.T)))  # propagate hidden -> output
                        ga = (1 - word2.code - fa) * alpha  # vector of error gradients multiplied by the learning rate
                        # model.syn1[word2.point] += outer(ga, l1)  # learn hidden -> output
                        neu1e += dot(ga, l2a)  # save error

                    if self.negative:
                        # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)
                        word_indices = [word2.index]
                        while len(word_indices) < model.negative + 1:
                            w = model.table[random.randint(model.table.shape[0])]
                            if w != word2.index:
                                word_indices.append(w)
                        l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size
                        fb = 1. / (1. + exp(-dot(l1, l2b.T)))  # propagate hidden -> output
                        gb = (labels - fb) * alpha  # vector of error gradients multiplied by the learning rate
                        # model.syn1neg[word_indices] += outer(gb, l1) # learn hidden -> output
                        neu1e += dot(gb, l2b)  # save error

                    # model.syn0[word.index] += neu1e  # learn input -> hidden
                    self.sents[sent_no] += neu1e  # learn input -> hidden

        return len([word for word in sentence if word is not None])

    def save_sent2vec_format(self, fname):
        """
        Store the input-hidden weight matrix in the same format used by the original
        C word2vec-tool, for compatibility.

        """
        logger.info("storing %sx%s projection weights into %s" % (self.sents_len, self.layer1_size, fname))
        assert (self.sents_len, self.layer1_size) == self.sents.shape
        with utils.smart_open(fname, 'wb') as fout:
            fout.write(utils.to_utf8("%s %s\n" % self.sents.shape))
            # one line per sentence vector, written out in index order
            for sent_no in xrange(self.sents_len):
                row = self.sents[sent_no]
                fout.write(utils.to_utf8("sent_%d %s\n" % (sent_no, ' '.join("%f" % val for val in row))))

    def similarity(self, sent1, sent2):
        """
        Compute cosine similarity between two sentences. `sent1` and `sent2` are
        the indices of the sentences in the training file.

        Example::

          >>> trained_model.similarity(0, 0)
          1.0

          >>> trained_model.similarity(1, 3)
          0.73

        """
        return dot(matutils.unitvec(self.sents[sent1]), matutils.unitvec(self.sents[sent2]))


class BrownCorpus(object):
    """Iterate over sentences from the Brown corpus (part of NLTK data)."""

    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            fname = os.path.join(self.dirname, fname)
            if not os.path.isfile(fname):
                continue
            for line in utils.smart_open(fname):
                line = utils.to_unicode(line)
                # each file line is a single sentence in the Brown corpus
                # each token is WORD/POS_TAG
                token_tags = [t.split('/') for t in line.split() if len(t.split('/')) == 2]
                # ignore words with non-alphabetic tags like ",", "!" etc (punctuation, weird stuff)
                words = ["%s/%s" % (token.lower(), tag[:2]) for token, tag in token_tags if tag[:2].isalpha()]
                if not words:  # don't bother sending out empty sentences
                    continue
                yield words


class Text8Corpus(object):
    """Iterate over sentences from the "text8" corpus, unzipped from http://mattmahoney.net/dc/text8.zip ."""

    def __init__(self, fname):
        self.fname = fname

    def __iter__(self):
        # the entire corpus is one gigantic line -- there are no sentence marks at all
        # so just split the sequence of tokens arbitrarily: 1 sentence = 1000 tokens
        sentence, rest, max_sentence_length = [], b'', 1000
        with utils.smart_open(self.fname) as fin:
            while True:
                text = rest + fin.read(8192)  # avoid loading the entire file (=1 line) into RAM
                if text == rest:  # EOF
                    sentence.extend(rest.split())  # return the last chunk of words, too (may be shorter/longer)
                    if sentence:
                        yield sentence
                    break
                last_token = text.rfind(b' ')  # the last token may have been split in two... keep it for the next iteration
                words, rest = (utils.to_unicode(text[:last_token]).split(), text[last_token:].strip()) if last_token >= 0 else ([], text)
                sentence.extend(words)
                while len(sentence) >= max_sentence_length:
                    yield sentence[:max_sentence_length]
                    sentence = sentence[max_sentence_length:]


class LineSentence(object):
    """Simple format: one sentence = one line; words already preprocessed and separated by whitespace."""

    def __init__(self, source):
        """
        `source` can be either a string or a file object.

        Example::

            sentences = LineSentence('myfile.txt')

        Or for compressed files::

            sentences = LineSentence('compressed_text.txt.bz2')
            sentences = LineSentence('compressed_text.txt.gz')

        """
        self.source = source

    def __iter__(self):
        """Iterate through the lines in the source."""
        try:
            # Assume it is a file-like object and try treating it as such
            # Things that don't have seek will trigger an exception
            self.source.seek(0)
            for line in self.source:
                yield utils.to_unicode(line).split()
        except AttributeError:
            # If it didn't work like a file, use it as a string filename
            with utils.smart_open(self.source) as fin:
                for line in fin:
                    yield utils.to_unicode(line).split()


# Example: ./word2vec.py ~/workspace/word2vec/text8 ~/workspace/word2vec/questions-words.txt ./text8
if __name__ == "__main__":
    logging.basicConfig(format='%(asctime)s : %(threadName)s : %(levelname)s : %(message)s', level=logging.INFO)
    logging.info("running %s" % " ".join(sys.argv))
    logging.info("using optimization %s" % FAST_VERSION)

    # check and process cmdline input
    program = os.path.basename(sys.argv[0])
    if len(sys.argv) < 2:
        print(globals()['__doc__'] % locals())
        sys.exit(1)

    seterr(all='raise')  # don't ignore numpy errors

    if len(sys.argv) > 3:
        input_file = sys.argv[1]
        model_file = sys.argv[2]
        out_file = sys.argv[3]
        model = Sent2Vec(LineSentence(input_file), model_file=model_file, iteration=100)
        model.save_sent2vec_format(out_file)
    elif len(sys.argv) > 1:
        input_file = sys.argv[1]
        model = Word2Vec(LineSentence(input_file), size=100, window=5, min_count=5, workers=8)
        model.save(input_file + '.model')
        model.save_word2vec_format(input_file + '.vec')
    else:
        pass

    program = os.path.basename(sys.argv[0])
    logging.info("finished running %s" % program)

Relevant Link:

https://www.zhihu.com/question/21661274
https://fb56552f-a-62cb3a1a-s-sites.googlegroups.com/site/deeplearningworkshopnips2014/68.pdf?attachauth=ANoY7cq83cA2A-ZgTWKF9vIxGRQs96O5OGXbt8n_GqRuU_4IellDNS17z_56Wa6aafihhDHuNHM_7d_jitkT27Cy_RnspiY8Dms5w_eBXFrVBFoFqSdzPmUbHaAblYPGHNA3mCAYn4whKO5w9uk7w9BLyMIX-QNco591gprLzPTM_XHLYa5U2YtIBhVptFj4LMedeKki_hxk2UkHCN0_MwrLwAgZneBihpOAWSX8GgRb5-uqUWpq3CI%3D&attredirects=2
https://www.zhihu.com/question/27689129
https://github.com/hassyGo/paragraph-vector
https://arxiv.org/pdf/1405.4053.pdf 
https://github.com/jiyfeng/ParagraphVector/tree/master/ParaVector
https://github.com/JonathanRaiman/PVDM
https://github.com/thunlp/paragraph2vec
https://github.com/dennybritz/deeplearning-papernotes/blob/master/notes/distributed-representations-of-sentences-and-documents.md
https://github.com/klb3713/sentence2vec

 

6. Example 3: Visualizing images as compact (4096-dim) feature vectors derived from a trained convolutional neural network with t-SNE - high-dimensional visualization of CNN-abstracted images

As we know, in NLP the word embedding vector is a by-product of training a shallow neural network, but we can treat this by-product as the mapping of a word into a high-dimensional space.
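As a tiny illustration only (the toy corpus below is made up, and it assumes the older gensim API that the code in this post is based on), the "by-product" is simply a dense coordinate vector per word:

# toy example: train a throwaway word2vec model and read back one word's vector
from gensim.models import Word2Vec

sentences = [["john", "likes", "movies"], ["mary", "likes", "football"]]
model = Word2Vec(sentences, size=50, min_count=1, window=2)
print(model["movies"].shape)   # (50,) -- the word's coordinates in the embedding space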

A similar situation exists for images. We build a deep convolutional network such as VGG, truncated before the final classification/softmax layer, because what we want are the activations that would otherwise be fed into that layer. After training (or loading pre-trained weights) and pushing an image through the network, the activation of the last fully-connected layer is just a weight vector, but it can be regarded as a vectorized representation of the image in a high-dimensional space - exactly the high-dimensional image vector we need.

# Keras 1.x-style VGG16, truncated before the softmax layer so that the activations of the
# final Dense(4096) layer can be used directly as the image's feature vector
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Convolution2D, MaxPooling2D, ZeroPadding2D

def VGG_16():
    model = Sequential()
    model.add(ZeroPadding2D((1, 1), input_shape=(3, 224, 224)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(64, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(128, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(256, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(ZeroPadding2D((1, 1)))
    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation='relu'))
    return model
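The t-SNE code further down starts from pre-computed features; purely as a hedged sketch (the weights file 'vgg16_weights.h5' and the image 'example.jpg' are assumed file names, not assets referenced in this post), extracting the 4096-dim vector for a single image with the model above could look like this:

# illustrative sketch only: pull out the final Dense(4096) activation for one image
import numpy as np
from keras.preprocessing import image

model = VGG_16()
model.load_weights('vgg16_weights.h5')        # assumed pre-trained weights in Theano dim ordering

img = image.load_img('example.jpg', target_size=(224, 224))
x = image.img_to_array(img)                   # (3, 224, 224) when image_dim_ordering is 'th'
x = np.expand_dims(x, axis=0)                 # add a batch dimension -> (1, 3, 224, 224)

feature_vector = model.predict(x)[0]          # the image's 4096-dim representation
print(feature_vector.shape)                   # (4096,)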

This last-layer activation vector can be fed directly to t-SNE as a high-dimensional point for visualization. Here we use pre-computed image feature vectors and the corresponding Caltech-101 dataset.

# -*- coding: utf-8 -*-

import os
import random
import numpy as np
import json
import matplotlib.pyplot
import cPickle as pickle
from matplotlib.pyplot import imshow,show
from PIL import Image
from sklearn.manifold import TSNE
from tqdm import tqdm

if __name__ == '__main__':
    images, pca_features = pickle.load(open('../data/features_caltech101.p', 'r'))
    for i, f in zip(images, pca_features):
        print("image: %s, features: %0.2f,%0.2f,%0.2f,%0.2f... " % (i, f[0], f[1], f[2], f[3]))
    # Although in principle t-SNE works with any number of images, it is difficult to place that many tiles on a single canvas, so we plot only a random subset of `num_images_to_plot` images. This step is optional.
    num_images_to_plot = 6000

    '''
    It is usually a good idea to first run the vectors through a faster dimensionality reduction technique like principal component analysis to project your data into an intermediate lower-dimensional space before using t-SNE.
    This improves accuracy, and cuts down on runtime since PCA is more efficient than t-SNE. Since we have already projected our data down with PCA in the previous notebook, we can proceed straight to running the t-SNE on the feature vectors.
    '''
    if len(images) > num_images_to_plot:
        sort_order = sorted(random.sample(xrange(len(images)), num_images_to_plot))
        images = [images[i] for i in sort_order]
        pca_features = [pca_features[i] for i in sort_order]

    # Internally, t-SNE uses an iterative approach, making small (or sometimes large) adjustments to the points. By default, t-SNE will go a maximum of 1000 iterations, but in practice, it often terminates early because it has found a locally optimal (good enough) embedding.
    X = np.array(pca_features)
    tsne = TSNE(n_components=2, learning_rate=150, perplexity=30, angle=0.2, verbose=2).fit_transform(X)

    # The variable tsne contains an array of unnormalized 2d points, corresponding to the embedding. In the next cell, we normalize the embedding so that it lies entirely in the range (0,1).
    tx, ty = tsne[:, 0], tsne[:, 1]
    tx = (tx - np.min(tx)) / (np.max(tx) - np.min(tx))
    ty = (ty - np.min(ty)) / (np.max(ty) - np.min(ty))

    # Finally, we will compose a new RGB image where the set of images have been drawn according to the t-SNE results.
    # Adjust width and height to set the size in pixels of the full image, and set max_dim to the pixel size (on the largest size) to scale images to.
    width = 4000
    height = 3000
    max_dim = 100

    full_image = Image.new('RGB', (width, height))
    for img, x, y in tqdm(zip(images, tx, ty)):
        tile = Image.open(img)
        rs = max(1, tile.width / max_dim, tile.height / max_dim)
        tile = tile.resize((int(tile.width / rs), int(tile.height / rs)), Image.ANTIALIAS)
        full_image.paste(tile, (int((width - max_dim) * x), int((height - max_dim) * y)))

    matplotlib.pyplot.figure(figsize=(16, 12))
    imshow(full_image)
    #show()

    # we can save the image to disk:
    full_image.save("../assets/example-tSNE-caltech101.jpg")

In the resulting image, motorcycles, chairs, airplanes and elephants each end up clustered together, which shows that the VGG CNN has captured fine-grained, high-dimensional information about these objects; t-SNE simply makes that structure visible.

0x2: handwritten digits MNIST t-SNE

# -*- coding: utf-8 -*-

import numpy as np
from skdata.mnist.views import OfficialImageClassification
from matplotlib import pyplot as plt
from tsne import bh_sne

# load up data
data = OfficialImageClassification(x_dtype="float32")
x_data = data.all_images
y_data = data.all_labels

# convert image data to a float64 matrix. float64 is needed for bh_sne
x_data = np.asarray(x_data).astype('float64')
x_data = x_data.reshape((x_data.shape[0], -1))

# For speed of computation, only run on a subset
n = 2000
x_data = x_data[:n]
y_data = y_data[:n]

# perform t-SNE embedding
vis_data = bh_sne(x_data)

# plot the result
vis_x = vis_data[:, 0]
vis_y = vis_data[:, 1]

plt.scatter(vis_x, vis_y, c=y_data, cmap=plt.cm.get_cmap("jet", 10))
plt.colorbar(ticks=range(10))
plt.clim(-0.5, 9.5)
plt.show()

The 10 MNIST digit classes [0-9] are drawn in 10 different colors. t-SNE shows that the different handwritten digits already occupy distinguishable regions of the high-dimensional input space, which goes some way towards explaining why a model such as a CNN can classify MNIST so accurately:

Only when the data itself contains separable structure can a suitable model turn that structure into an accurate classifier; how the input data is represented is sometimes just as important as, or even more important than, which model is chosen.

Relevant Link:

https://github.com/genekogan/image-tSNE
https://indico.io/blog/visualizing-with-t-sne/
https://github.com/oreillymedia/t-SNE-tutorial
https://drive.google.com/drive/folders/0B3WXSfqxKDkFYm9GMzlnemdEbEE
http://www.vision.caltech.edu/Image_Datasets/Caltech101/#Download
https://github.com/ml4a/ml4a-guides/blob/master/notebooks/image-tsne.ipynb
https://github.com/genekogan/ofxTSNE
http://ml4a.github.io/guides/ImageTSNEViewer/
http://ml4a.github.io/guides/ImageTSNELive/
https://github.com/ml4a/ml4a-ofx

 

7. Example 4: Clustering AV actress images collected from the web

import argparse
import sys
import numpy as np
import json
import os
from os.path import isfile, join
import keras
from keras.preprocessing import image
from keras.applications.imagenet_utils import decode_predictions, preprocess_input
from keras.models import Model
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from scipy.spatial import distance

from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

def process_arguments(args):
    parser = argparse.ArgumentParser(description='tSNE on images')
    parser.add_argument('--images_path', action='store', help='path to directory of images')
    parser.add_argument('--output_path', action='store', help='path to where to put output json file')
    parser.add_argument('--num_dimensions', action='store', default=2, help='dimensionality of t-SNE points (default 2)')
    parser.add_argument('--perplexity', action='store', default=30, help='perplexity of t-SNE (default 30)')
    parser.add_argument('--learning_rate', action='store', default=150, help='learning rate of t-SNE (default 150)')
    params = vars(parser.parse_args(args))
    return params

def get_image(path, input_shape):
    img = image.load_img(path, target_size=input_shape)
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return x

def analyze_images(images_path):
    # make feature_extractor
    model = keras.applications.VGG16(weights='imagenet', include_top=True)
    feat_extractor = Model(input=model.input, output=model.get_layer("fc2").output)
    input_shape = model.input_shape[1:3]
    # get images
    candidate_images = [f for f in os.listdir(images_path) if os.path.splitext(f)[1].lower() in ['.jpg','.png','.jpeg']]
    # analyze images and grab activations
    activations = []
    images = []
    for idx,image_path in enumerate(candidate_images):
        file_path = join(images_path,image_path)
        img = get_image(file_path, input_shape);
        if img is not None:
            print("getting activations for %s %d/%d" % (image_path,idx,len(candidate_images)))
            acts = feat_extractor.predict(img)[0]
            activations.append(acts)
            images.append(image_path)
    # run PCA first
    print("Running PCA on %d images..." % len(activations))
    features = np.array(activations)
    pca = PCA(n_components=300)
    pca.fit(features)
    pca_features = pca.transform(features)
    return images, pca_features

def run_tsne(images_path, output_path, tsne_dimensions, tsne_perplexity, tsne_learning_rate):
    images, pca_features = analyze_images(images_path)
    print("Running t-SNE on %d images..." % len(images))
    X = np.array(pca_features)
    tsne = TSNE(n_components=tsne_dimensions, learning_rate=tsne_learning_rate, perplexity=tsne_perplexity, verbose=2).fit_transform(X)
    # save data to json
    data = []
    for i,f in enumerate(images):
        point = [ (tsne[i,k] - np.min(tsne[:,k]))/(np.max(tsne[:,k]) - np.min(tsne[:,k])) for k in range(tsne_dimensions) ]
        data.append({"path":os.path.abspath(join(images_path,images[i])), "point":point})
    with open(output_path, 'w') as outfile:
        json.dump(data, outfile)


if __name__ == '__main__':
    params = process_arguments(sys.argv[1:])
    images_path = params['images_path']
    output_path = params['output_path']
    tsne_dimensions = int(params['num_dimensions'])
    tsne_perplexity = int(params['perplexity'])
    tsne_learning_rate = int(params['learning_rate'])
    run_tsne(images_path, output_path, tsne_dimensions, tsne_perplexity, tsne_learning_rate)
    print("finished saving %s" % output_path)

Search Baidu for "***" and download 1,000 images:

python tSNE-images.py --images_path ../data/av/ --output_path ../module/ImageTSNEViewer/av_points.json

Then lay the tiles out on one large canvas with matplotlib:

# -*- coding: utf-8 -*-
import json
from matplotlib.pyplot import imshow
import matplotlib.pyplot
from PIL import Image

if __name__ == '__main__':
    # show on Display Board
    width = 4000
    height = 3000
    max_dim = 100

    full_image = Image.new('RGB', (width, height))

    # reading pre-trained image pointer
    with open('../module/ImageTSNEViewer/av_points.json', 'r') as f:
        data = json.load(f)
    for line in data:
        img = line['path']
        x, y = line['point'][0], line['point'][1]
        print img, x, y
        tile = Image.open(img)
        rs = max(1, tile.width / max_dim, tile.height / max_dim)
        tile = tile.resize((int(tile.width / rs), int(tile.height / rs)), Image.ANTIALIAS)
        full_image.paste(tile, (int((width - max_dim) * x), int((height - max_dim) * y)))

    matplotlib.pyplot.figure(figsize=(16, 12))
    imshow(full_image)

    # we can save the image to disk:
    full_image.save("../assets/example-tSNE-av.jpg")

Zooming in on local details:

We can see that VGGNet has captured the fine-grained, high-dimensional detail in these images.

 

8. Embedding vector representations for long, multi-word text

First set up the environment (the Torch distro, then a PyTorch wheel):

git clone https://github.com/torch/distro.git --recursive
bash install-deps;
./install.sh
source ~/.bashrc

pytorch
pip install http://download.pytorch.org/whl/cu75/torch-0.1.12.post2-cp27-none-linux_x86_64.whl 

0x1: A Structured Self-attentive Sentence Embedding

The attention mechanism was first proposed in computer vision, motivated by human visual attention: when people look at an image they do not take in every pixel at once, but focus on the parts relevant to the task at hand. The most representative early work is the paper 《Recurrent Models of Visual Attention》; the figure below shows that paper's core model.

The model adds an attention mechanism (the part circled in red) on top of a conventional RNN. At each step, the network uses the location l learned from the previous state, together with the current input image, to process only the attended region of pixels rather than the whole image. The benefit is that far fewer pixels need to be processed, reducing the complexity of the task; attention over images works very much like human visual attention.
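Purely as a loose illustration of this "look only where the previous state tells you to" idea (not the paper's model; all sizes, the GRU core, and a recent PyTorch rather than the 0.1.12 wheel installed above are assumptions):

# hedged sketch: a recurrent "glimpse" loop over single-channel images
import torch
import torch.nn as nn

class TinyRecurrentAttention(nn.Module):
    def __init__(self, glimpse=8, hidden=128):
        super(TinyRecurrentAttention, self).__init__()
        self.glimpse = glimpse
        self.hidden = hidden
        self.encode = nn.Linear(glimpse * glimpse, hidden)   # glimpse network
        self.core = nn.GRUCell(hidden, hidden)               # recurrent state
        self.locate = nn.Linear(hidden, 2)                   # predicts the next location l in [-1, 1]^2

    def extract_glimpse(self, images, loc):
        # crop a glimpse x glimpse patch around the normalised location loc
        n, _, H, W = images.size()
        g = self.glimpse
        patches = []
        for i in range(n):
            cy = int((loc[i, 1].item() + 1) / 2 * (H - g))
            cx = int((loc[i, 0].item() + 1) / 2 * (W - g))
            patches.append(images[i:i + 1, :, cy:cy + g, cx:cx + g])
        return torch.cat(patches, dim=0)

    def forward(self, images, steps=4):
        n = images.size(0)
        h = torch.zeros(n, self.hidden)
        loc = torch.zeros(n, 2)                              # start by looking at the centre
        for _ in range(steps):
            patch = self.extract_glimpse(images, loc).view(n, -1)
            h = self.core(torch.tanh(self.encode(patch)), h)
            loc = torch.tanh(self.locate(h))                 # decide where to look next
        return h                                             # final state summarises what was seen

# usage sketch: h = TinyRecurrentAttention()(torch.randn(2, 1, 28, 28))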

https://openreview.net/pdf?id=BJC_jUqxe

The overall pipeline is roughly as follows (a minimal code sketch follows the list):

1. Running the sentence through an RNN:
    1) Encode the input sentence with word embeddings, giving S = (w1, w2, ..., wn)
    2) Feed this variable-length sequence of word vectors to a bidirectional LSTM, giving the hidden-state sequence H = (h1, h2, ..., hn), where each h is the concatenation of the forward and backward hidden states

2. Learning multiple attention values for each RNN state:
Here n is the (variable) length of the input, while our goal is a fixed-length embedding. To get one, a self-attention mechanism is introduced: it takes all n LSTM hidden states H as input and outputs a vector of weights a (a 1-D tensor, i.e. a row vector of attention weights), and the weighted combination m = aH summarises the sentence.

3. Encouraging each attention vector to focus on different parts of the sentence by adding a penalty term:
A sentence usually has more than one salient part, so a single weighted sum is not enough; the attention layer (an MLP) therefore performs multiple hops. If we want r different parts to be extracted from the sentence, the attention weights become an r x n matrix A, the sentence representation becomes the matrix M = AH, and a penalty term ||AA^T - I||_F^2 pushes the r hops to attend to different words.

In other words, the method maps a piece of sentence text to an embedding matrix rather than a single embedding vector.
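A minimal PyTorch sketch of this structure is shown below (the hidden size, d_a, r and the toy usage at the end are illustrative assumptions rather than the paper's configuration, and it assumes a recent PyTorch rather than the 0.1.12 wheel installed above):

# hedged sketch of a structured self-attentive sentence embedding (Lin et al., 2017)
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentiveEmbedding(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=150, d_a=64, r=5):
        super(SelfAttentiveEmbedding, self).__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)              # step 1.1: word embeddings
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)                   # step 1.2: H = (h1 ... hn)
        self.ws1 = nn.Linear(2 * hidden, d_a, bias=False)           # step 2: attention MLP
        self.ws2 = nn.Linear(d_a, r, bias=False)                    # r attention hops

    def forward(self, tokens):
        # tokens: (batch, n) integer word ids
        H, _ = self.bilstm(self.embed(tokens))                      # (batch, n, 2*hidden)
        A = F.softmax(self.ws2(torch.tanh(self.ws1(H))), dim=1)     # (batch, n, r), softmax over positions
        A = A.transpose(1, 2)                                       # (batch, r, n)
        M = torch.bmm(A, H)                                         # (batch, r, 2*hidden): an embedding *matrix*
        # step 3: penalty ||A A^T - I||_F^2 keeps the r hops focused on different words
        I = torch.eye(A.size(1), device=A.device).unsqueeze(0)
        penalty = ((torch.bmm(A, A.transpose(1, 2)) - I) ** 2).sum(dim=(1, 2)).mean()
        return M, penalty

# toy usage: 2 "sentences" of length 7 over a 1000-word vocabulary
# M, penalty = SelfAttentiveEmbedding(vocab_size=1000)(torch.randint(0, 1000, (2, 7)))
# M.shape -> torch.Size([2, 5, 300])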

Relevant Link:

https://openreview.net/pdf?id=BJC_jUqxe
http://pytorch.org/docs/master/tensors.html
https://github.com/torch/ezinstall
http://torch.ch/docs/getting-started.html
http://www.cnblogs.com/robert-dlut/p/5952032.html
https://github.com/yufengm/SelfAttentive
https://arxiv.org/pdf/1703.03130.pdf
https://nlp.stanford.edu/projects/glove/
https://github.com/Diego999/SelfSent
https://github.com/dennybritz/deeplearning-papernotes/blob/master/notes/self_attention_embedding.md

0x2: context2vec

Relevant Link:

https://github.com/orenmel/context2vec
http://u.cs.biu.ac.il/~nlp/resources/downloads/context2vec/
https://www.slideshare.net/BhaskarMitra3/neural-text-embeddings-for-information-retrieval-wsdm-2017
https://github.com/jhlau/doc2vec
https://u.cs.biu.ac.il/~melamuo/publications/context2vec_conll16.pdf

Copyright (c) 2017 LittleHann All rights reserved
