Chinese and English Word Segmentation in Python in a Few Dozen Lines of Code

Published by pythontab on 2013-01-05

Word segmentation is usually assumed to be deep technology, but here the author gets it done in just a few dozen lines of code. Python is powerful, and so is the author! Note that this is plain forward maximum matching: at each position the tokenizer greedily takes the longest dictionary word that starts there. There is no machine learning involved.
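To see what forward maximum matching does before reading the full listing, here is a minimal, self-contained sketch (Python 3 syntax; the toy dictionary and sentence are made up purely for illustration):

# Toy forward-maximum-matching demo: at each position, try the
# longest dictionary entry first and fall back to a single character.
toy_dict = sorted([u'花園', u'花', u'美麗', u'小動物'], key=len, reverse=True)

def fmm(text):
    tokens, i = [], 0
    while i < len(text):
        for word in toy_dict:
            if text[i:i+len(word)] == word:
                break
        else:
            word = text[i]  # no dictionary hit: emit a single character
        tokens.append(word)
        i += len(word)
    return tokens

print('/'.join(fmm(u'美麗的花園')))  # prints: 美麗/的/花園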

Note: download the Sogou dictionary before using the code.
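The loader below expects words.dic to be a UTF-8 text file with one entry per line, each line carrying two whitespace-separated fields: a length field (parsed but never used) and the word itself. This format is inferred from the unpacking in load_dict, so verify it against the dictionary file you actually download. A few hypothetical lines:

2 花園
2 美麗
3 小動物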

# -*- coding:utf-8 -*-
  
#A simple mechanical tokenizer for Chinese using forward maximum matching; at just a few dozen lines, it needs little further explanation
#Appendix: Sogou dictionary download: http://vdisk.weibo.com/s/7RlE5
  
import string
__dict = {}  # maps a word's first character to the list of words that start with it
  
def load_dict(dict_file='words.dic'):
    #Load the dictionary into a dict keyed by each word's first character, with the list of words starting with that character as the value
  
    words = [unicode(line, 'utf-8').split() for line in open(dict_file)]  # each line: '<length> <word>'
  
    for word_len, word in words:  # the length field is never used
        first_char = word[0]
        __dict.setdefault(first_char, [])
        __dict[first_char].append(word)
     
    #Sort each list by word length, descending, so the longest match is tried first
    for first_char, words in __dict.items():
        __dict[first_char] = sorted(words, key=lambda x:len(x), reverse=True)
  
def __match_ascii(i, input):
    #Return the run of consecutive ASCII letters starting at position i
    result = ''
    for i in range(i, len(input)):
        if not input[i] in string.ascii_letters: break
        result += input[i]
    return result
  
  
def __match_word(first_char, i, input):
    #Tokenize at the current position: ASCII letters consume a whole run of letters; anything else is looked up in the dictionary
  
    if first_char not in __dict:
        if first_char in string.ascii_letters:
            return __match_ascii(i, input)
        return first_char
  
    words = __dict[first_char]
    for word in words:
        if input[i:i+len(word)] == word:
            return word
  
    return first_char
  
def tokenize(input):
    #Tokenize input, which must be a unicode string
  
    if not input: return []
  
    tokens = []
    i = 0
    while i < len(input):
        first_char = input[i] 
        matched_word = __match_word(first_char, i, input)
        tokens.append(matched_word)
        i += len(matched_word)
  
    return tokens
  
  
if __name__ == '__main__':
    def get_test_text():
        import urllib2
        url = "http://news.baidu.com/n?cmd=4&class=rolling&pn=1&from=tab&sub=0"
        text = urllib2.urlopen(url).read()
        return unicode(text, 'gbk')
  
    def load_dict_test():
        load_dict()
        for first_char, words in __dict.items():
            print '%s:%s' % (first_char, ' '.join(words))
  
    def tokenize_test(text):
        load_dict()
        tokens = tokenize(text)
        for token in tokens:
            print token
  
    tokenize_test(u'美麗的花園裡有各種各樣的小動物')
    tokenize_test(get_test_text())
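
The listing above is Python 2 throughout (unicode, dict.has_key, urllib2, print statements). As a convenience, here is a minimal Python 3 sketch of the same forward-maximum-matching tokenizer; it assumes the same words.dic format described above and drops the URL-fetching test:

import string

_dict = {}  # first character -> words starting with it, longest first

def load_dict(dict_file='words.dic'):
    with open(dict_file, encoding='utf-8') as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue  # skip malformed lines
            _length, word = parts
            _dict.setdefault(word[0], []).append(word)
    for words in _dict.values():
        words.sort(key=len, reverse=True)

def _match_ascii(i, text):
    # Consume a run of consecutive ASCII letters starting at i.
    j = i
    while j < len(text) and text[j] in string.ascii_letters:
        j += 1
    return text[i:j]

def _match_word(first_char, i, text):
    if first_char not in _dict:
        if first_char in string.ascii_letters:
            return _match_ascii(i, text)
        return first_char
    for word in _dict[first_char]:
        if text[i:i+len(word)] == word:
            return word
    return first_char

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        word = _match_word(text[i], i, text)
        tokens.append(word)
        i += len(word)
    return tokens

if __name__ == '__main__':
    load_dict()
    print('/'.join(tokenize('美麗的花園裡有各種各樣的小動物')))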