After unpacking, take out the following files:
Training data: icwb2-data/training/pku_training.utf8
Test data: icwb2-data/testing/pku_test.utf8
Gold-standard segmentation: icwb2-data/gold/pku_test_gold.utf8
Scoring tool: icwb2-data/scripts/score
2 Algorithm Description
The algorithm is the simplest forward maximum matching (FMM):
Build a dictionary from the training data
Scan the test data from left to right; at each position cut off the longest matching word, and repeat until the sentence ends (see the example below)
Note: this was the initial algorithm, which keeps the code within 60 lines. Looking at the test results later, I found that numbers were not handled well, so handling for numbers was added.
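For example, suppose the dictionary contains 北京, 大学, 北 and 大 (a made-up toy dictionary, not the real PKU one). Scanning 北京大学 from the left, the longest match at position 0 is 北京 (beating the single character 北), and the longest match at position 2 is 大学, so the sentence is segmented as 北京 / 大学.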
3 Source Code and Comments
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# author: minix
# date: 2013-03-20

import codecs
import sys

# Special symbols handled by hand-written rules
nummath = [u'0', u'1', u'2', u'3', u'4', u'5', u'6', u'7', u'8', u'9']
nummath_suffix = [u'.', u'%', u'亿', u'万', u'千', u'百', u'十', u'个']
numcn = [u'一', u'二', u'三', u'四', u'五', u'六', u'七', u'八', u'九', u'〇', u'零']
numcn_suffix_date = [u'年', u'月', u'日']
numcn_suffix_unit = [u'亿', u'万', u'千', u'百', u'十', u'个']
special_char = [u'(', u')']   # reserved for rule handling; not used below

def proc_num_math(line, start):
    """ Handle Arabic numerals appearing in the sentence """
    oldstart = start
    # Bounds checks guard against a number at the very end of the text
    while start < len(line) and (line[start] in nummath or line[start] in nummath_suffix):
        start = start + 1
    if start < len(line) and line[start] in numcn_suffix_date:
        start = start + 1
    return start - oldstart

def proc_num_cn(line, start):
    """ Handle Chinese numerals appearing in the sentence """
    oldstart = start
    while start < len(line) and (line[start] in numcn or line[start] in numcn_suffix_unit):
        start = start + 1
    if start < len(line) and line[start] in numcn_suffix_date:
        start = start + 1
    return start - oldstart

def rules(line, start):
    """ Dispatch to the special-rule handlers """
    if line[start] in nummath:
        return proc_num_math(line, start)
    elif line[start] in numcn:
        return proc_num_cn(line, start)

def gendict(path):
    """ Build the dictionary from the training data """
    f = codecs.open(path, 'r', 'utf-8')
    contents = f.read()
    f.close()
    # Treat line breaks as word separators (so the last word of one
    # line is not glued to the first word of the next), then split on spaces
    contents = contents.replace(u'\r', u' ')
    contents = contents.replace(u'\n', u' ')
    mydict = contents.split(u' ')
    # Remove duplicates and empty strings
    newdict = [item for item in set(mydict) if item]
    # Build the dictionary:
    # key is a word's first character, value is the list of words
    # starting with that character
    truedict = {}
    for item in newdict:
        if item[0] in truedict:
            truedict[item[0]].append(item)
        else:
            truedict[item[0]] = [item]
    return truedict

def print_unicode_list(uni_list):
    for item in uni_list:
        print item,

def pidewords(mydict, sentence):
    """
    Segment the sentence using the dictionary with forward maximum
    matching: scan from left to right, cut off the longest word found
    at each position, until the sentence is fully segmented.
    """
    rulechar = []
    rulechar.extend(numcn)
    rulechar.extend(nummath)
    result = []
    start = 0
    senlen = len(sentence)
    while start < senlen:
        curword = sentence[start]
        maxlen = 1
        # First check whether a special rule applies
        if curword in rulechar:
            maxlen = rules(sentence, start)
        # Then look for the longest dictionary word starting here
        if curword in mydict:
            words = mydict[curword]
            for item in words:
                itemlen = len(item)
                if sentence[start:start+itemlen] == item and itemlen > maxlen:
                    maxlen = itemlen
        result.append(sentence[start:start+maxlen])
        start = start + maxlen
    return result

def main():
    args = sys.argv[1:]
    if len(args) < 3:
        print 'usage: python dw.py dict_path test_path result_path'
        sys.exit(1)
    dict_path = args[0]
    test_path = args[1]
    result_path = args[2]
    dicts = gendict(dict_path)
    fr = codecs.open(test_path, 'r', 'utf-8')
    test = fr.read()
    fr.close()
    result = pidewords(dicts, test)
    fw = codecs.open(result_path, 'w', 'utf-8')
    for item in result:
        fw.write(item + ' ')
    fw.close()

if __name__ == "__main__":
    main()
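As a quick sanity check, here is a minimal sketch (with a made-up toy dictionary and sentence, not the PKU data) of the first-character-indexed dictionary that gendict builds and pidewords consumes:

# Toy dictionary: key is a word's first character,
# value is the list of dictionary words starting with it
toydict = {u'北': [u'北京', u'北'], u'大': [u'大学', u'大']}
print_unicode_list(pidewords(toydict, u'北京大学'))
# prints: 北京 大学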
4 Testing and Scoring Results
Run dw.py on the training data and test data to generate the result file
Run score with the training data, the gold-standard segmentation, and our generated result to compute the scores
Use tail to view the overall scores in the last few lines of the output; score.utf8 also contains a large number of per-line comparisons, which are useful for spotting where your segmentation falls short
Note: the whole test was done under Ubuntu
$ python dw.py pku_training.utf8 pku_test.utf8 pku_result.utf8
$ perl score pku_training.utf8 pku_test_gold.utf8 pku_result.utf8 > score.utf8
$ tail -22 score.utf8
insertions: 0
deletions: 0
substitutions: 0
nchange: 0
ntruth: 27
ntest: 27
true words recall: 1.000
test words precision: 1.000
=== summary:
=== total insertions: 4623
=== total deletions: 1740
=== total substitutions: 6650
=== total nchange: 13013
=== total true word count: 104372
=== total test word count: 107255
=== total true words recall: 0.920
=== total test words precision: 0.895
=== f measure: 0.907
=== oov rate: 0.940
=== oov recall rate: 0.917
=== iv recall rate: 0.966
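As a cross-check, the F-measure is the harmonic mean of the reported precision and recall: f = 2 * 0.895 * 0.920 / (0.895 + 0.920) ≈ 0.907, which agrees with the summary line above.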
The dictionary-based FMM algorithm is a very basic segmentation algorithm. Its results are not that good, but it is simple enough and an easy place to start, and as my study goes deeper I may implement other segmentation algorithms in Python. Another takeaway: when reading a book, try to implement as much as you can. It gives you enough enthusiasm to pay attention to every detail of the theory, so the reading never feels dry and tedious.