A Word Segmentation Algorithm for Chinese Language Based on N-gram Models and Machine Learning
Cite this article: Wu Yingliang, Wei Gang, Li Haizhou. A Word Segmentation Algorithm for Chinese Language Based on N-gram Models and Machine Learning[J]. Journal of Electronics & Information Technology, 2001, 23(11): 1148-1153.
Authors: Wu Yingliang, Wei Gang, Li Haizhou
作者单位:1. 华南理工大学工商管理学院,
2. 华南理工大学电子与通信工程系,
Abstract: Automatic word segmentation of Chinese is a fundamental yet difficult problem in computer-based Chinese information processing. This paper presents a new method for segmenting Chinese text sentences into words; the method is based on an N-gram model combined with an efficient Viterbi search algorithm to segment Chinese sentences. Because a machine-learning-based self-organizing word-formation algorithm is adopted, no manually compiled domain dictionary is required. The paper also discusses two quantitative metrics for evaluating word segmentation algorithms, namely the definitions of precision and recall, and on this basis tests the proposed Chinese word segmentation model on both closed and open corpora; the experiments show that the model and algorithm achieve high precision and recall.

Keywords: Chinese word segmentation, N-gram model, machine learning, precision, recall
Received: 1999-09-29
Revised: 1999-09-29
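
For context, the two evaluation metrics named in the abstract can be written, under the conventional word-level definitions (assumed here; the paper's exact formulations are not reproduced), as

\[ \text{Precision} = \frac{N_{\text{correct}}}{N_{\text{output}}}, \qquad \text{Recall} = \frac{N_{\text{correct}}}{N_{\text{reference}}} \]

where N_correct is the number of correctly segmented words, N_output is the number of words produced by the segmenter, and N_reference is the number of words in the gold-standard segmentation.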

A Word Segmentation Algorithm for Chinese Language Based on N-gram Models and Machine Learning
Wu Yingliang, Wei Gang, Li Haizhou. A Word Segmentation Algorithm for Chinese Language Based on N-gram Models and Machine Learning[J]. Journal of Electronics & Information Technology, 2001, 23(11): 1148-1153.
Authors: Wu Yingliang, Wei Gang, Li Haizhou
Affiliation: 1. School of Business Administration, South China University of Technology, Guangzhou 510641, China; 2. Department of Electronic and Communication Engineering, South China University of Technology, Guangzhou 510641, China
Abstract: Automatic word segmentation for the Chinese language is a fundamental and difficult problem in the field of computer Chinese-language information processing. This paper presents a new method for segmenting an input Chinese text sentence into words, which combines a character-based N-gram model with an efficient Viterbi search algorithm. Because a machine-learning-based self-organizing word-formation algorithm is used, no manually compiled domain dictionary is required. In addition, two quantitative performance metrics for word segmentation algorithms, i.e. recall and precision, are discussed. The effectiveness of the proposed model and algorithm has been confirmed by evaluation experiments on both closed and open test corpora.
Keywords: Chinese language word segmentation, N-gram model, machine learning, precision, recall
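
The abstract describes segmentation as a search over an N-gram model using the Viterbi algorithm. The Python sketch below is only a rough illustration of that idea, not the authors' implementation: it uses a hypothetical toy lexicon with made-up unigram probabilities and a simple back-off for unknown single characters, whereas the paper learns its word model from data without a hand-built dictionary.

# Minimal sketch (not the authors' implementation): word segmentation by
# Viterbi-style dynamic programming over a unigram word model.
# The lexicon and probabilities below are hypothetical illustrations only.
import math

word_prob = {
    "研究": 0.004, "生命": 0.003, "研究生": 0.002, "命": 0.001, "生": 0.001,
}
UNK_LOGP = math.log(1e-8)   # back-off score for an unknown single character
MAX_WORD_LEN = 4            # longest candidate word considered

def viterbi_segment(sentence):
    n = len(sentence)
    best = [0.0] + [float("-inf")] * n   # best[i]: best log-prob of sentence[:i]
    back = [0] * (n + 1)                 # back[i]: start index of the last word
    for i in range(1, n + 1):
        for j in range(max(0, i - MAX_WORD_LEN), i):
            w = sentence[j:i]
            if w in word_prob:
                logp = math.log(word_prob[w])
            elif len(w) == 1:
                logp = UNK_LOGP
            else:
                continue                 # unknown multi-character strings are not words
            if best[j] + logp > best[i]:
                best[i] = best[j] + logp
                back[i] = j
    # Recover the best segmentation by following the back-pointers.
    words, i = [], n
    while i > 0:
        words.append(sentence[back[i]:i])
        i = back[i]
    return list(reversed(words))

print(viterbi_segment("研究生命"))   # -> ['研究', '生命'] on this toy lexicon

A real system would replace word_prob with probabilities estimated from a corpus and would score word-to-word transitions with higher-order N-gram terms rather than unigram scores alone.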
This article is indexed in CNKI, VIP, Wanfang Data, and other databases.