19 similar documents found; search took 109 ms.
3.
Automatic extraction of domain terminology from Chinese text is a fundamental problem in Chinese information processing, with wide applications in natural language generation, information retrieval, text summarization, and related fields. To address domain term extraction, this paper adopts a method that fuses rules with multiple statistical strategies, evaluating candidates from two angles: unithood (how well a string forms a word) and domain relevance. It proposes a domain term extraction algorithm and builds a corresponding extraction system. The pipeline comprises candidate acquisition via left/right information-entropy expansion, a unithood filter combining part-of-speech collocation rules with a knowledge base of boundary-occurrence probabilities, and a domain-relevance filter based on term frequency-inverse document frequency (TF-IDF). The algorithm extracts not only common domain words but also uncovers novel domain terms. Experiments show that the extraction system built on this method achieves a precision of 84.33%, demonstrating that it effectively supports automatic extraction of Chinese domain terminology.
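The left/right information-entropy expansion step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the character-level scan are assumptions, and real systems would index occurrences rather than rescan the corpus.

```python
import math
from collections import Counter

def boundary_entropy(neighbors):
    """Shannon entropy of the distribution of characters adjacent to a
    candidate string; high entropy on both sides suggests the string is
    a free-standing term rather than a fragment of a longer one."""
    total = sum(neighbors.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in neighbors.values())

def left_right_entropy(corpus, candidate):
    """Collect the left and right neighbor characters of every occurrence
    of `candidate` in `corpus` and return (left_entropy, right_entropy)."""
    left, right = Counter(), Counter()
    start = corpus.find(candidate)
    while start != -1:
        if start > 0:
            left[corpus[start - 1]] += 1
        end = start + len(candidate)
        if end < len(corpus):
            right[corpus[end]] += 1
        start = corpus.find(candidate, start + 1)
    return boundary_entropy(left), boundary_entropy(right)
```

A candidate with low entropy on one side (always preceded or followed by the same character) would be expanded in that direction before being scored again.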
4.
This paper describes how to extract bilingual phrase translation pairs from a parallel corpus. First, Chinese phrases are extracted from a Chinese patent corpus using the statistical measure regularized expectation; the extracted phrases are then filtered with statistical and linguistic knowledge so that the remaining Chinese phrases have high precision. Second, the word-alignment tool Giza++ is applied to the Chinese-English parallel corpus to obtain word alignments, on top of which the open-source toolkit Moses extracts Chinese-English phrase alignments; candidate source-language phrases for translation are taken as the intersection of these phrase alignments with the high-quality Chinese phrases extracted earlier. Finally, the English phrase translations are filtered using stop words, the log-likelihood ratio (LLR), and context entropy. Experiments show that, after filtering, the extracted Chinese phrases reach a precision of 97.6% and the Chinese-English phrase translation pairs a precision of 92.4%.
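The LLR filter mentioned above is commonly computed in Dunning's 2x2 contingency-table form; the sketch below is an illustrative assumption about that computation, not the paper's code. `k11` counts sentence pairs containing both phrases, `k12` and `k21` the one-sided occurrences, and `k22` the remainder.

```python
import math

def llr(k11, k12, k21, k22):
    """Dunning's log-likelihood ratio for a 2x2 contingency table.
    Larger values indicate stronger association between the two events
    (here: a source phrase and a candidate translation co-occurring)."""
    def h(*ks):
        # unnormalised entropy term: sum k * log(k / n), skipping zero cells
        n = sum(ks)
        return sum(k * math.log(k / n) for k in ks if k > 0)
    return 2 * (h(k11, k12, k21, k22)
                - h(k11 + k12, k21 + k22)    # row marginals
                - h(k11 + k21, k12 + k22))   # column marginals
```

Under independence the statistic is near zero, so a threshold on `llr` separates genuine translation pairs from chance co-occurrences.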
6.
Paraphrase extraction is an important branch of natural language processing; high-quality paraphrase resources substantially improve information retrieval, question answering, machine translation, and other tasks. This paper restricts the task to Chinese phrase-level paraphrase extraction and proposes a 2BiLSTM+CNN+CRF sequence-labeling model for phrase segmentation of monolingual Chinese corpora, applying several filtering rules to obtain high-quality Chinese phrases. It then proposes a representation-learning method for acquiring candidate paraphrases: phrase vector representations are obtained with the BattRAE model, and the semantic distance between phrases is measured by cosine similarity. Phrase pairs are filtered by this distance, semantically close pairs are treated as candidate paraphrases, and rule-based filtering removes erroneous candidates. From the final results, 500 randomly sampled phrase paraphrase pairs were manually evaluated, yielding a precision of 0.814 and an MRR of 0.826.
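The cosine-similarity filtering step described above can be sketched as follows. The function names and the threshold value are illustrative assumptions; the paper derives its phrase vectors from the BattRAE model, whereas any embedding dictionary would work here.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def candidate_paraphrases(phrase_vecs, threshold=0.8):
    """Return phrase pairs whose cosine similarity meets `threshold`;
    these become candidate paraphrases for later rule-based filtering."""
    phrases = list(phrase_vecs)
    pairs = []
    for i, p in enumerate(phrases):
        for q in phrases[i + 1:]:
            if cosine(phrase_vecs[p], phrase_vecs[q]) >= threshold:
                pairs.append((p, q))
    return pairs
```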
7.
Lin Xiaoqing (林晓庆), Computer Engineering and Design (《计算机工程与设计》), 2010, 31(8)
This paper proposes a method combining syntactic parsing with word relatedness to translate noun phrases in English patent documents. A bilingual instance base of noun phrases oriented to patent literature is built, forming a noun-phrase (NP) treebank. A technical NP to be translated is first parsed syntactically, and the NP treebank is searched for NP trees matching it. For the matching trees, HowNet (《知网》) is used to compute semantic similarity between words and find the most similar NP tree; word translations are then selected by computing the relatedness between translation candidates, and finally the word order is adjusted to generate the translation. If no matching NP tree exists, the treebank is searched for trees matching the sub-NPs of the given NP tree and the translation is generated recursively. Using BLEU as the automatic evaluation metric, experiments show that the method outperforms the phrase-based statistical translation system Pharaoh.
10.
Design of a multi-strategy term extractor for specialized domains. Total citations: 9 (self-citations: 0, external citations: 9)
This paper designs a term extraction algorithm for specialized domains that combines statistical and rule-based methods. Targeting the characteristics of domain terminology, it uses several statistics measuring how tightly the characters in a string bind together: a threshold classifier first extracts two-character candidates; these candidates are then expanded leftward and rightward to a limited extent, and the multi-character candidates meeting the requirements are selected; finally, the candidates are filtered to produce the result. On this basis, an extraction program was implemented that takes raw, unsegmented and unannotated corpus text as input and outputs domain terms. After testing on corpora from several domains, the paper analyzes the results, points out remaining problems, and outlines future work.
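One common "tightness" statistic of the kind described above is pointwise mutual information (PMI) over adjacent characters; the paper uses several such statistics, so the sketch below, including its function name and threshold, is an illustration of the general idea rather than the paper's exact measures.

```python
import math
from collections import Counter

def pmi_bigrams(tokens, threshold=3.0):
    """Score each adjacent token pair by pointwise mutual information;
    pairs scoring at or above `threshold` become two-character candidate
    terms for later expansion and filtering."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    candidates = {}
    for (a, b), c in bigrams.items():
        # PMI = log2( P(a, b) / (P(a) * P(b)) )
        pmi = math.log2((c / (n - 1)) / ((unigrams[a] / n) * (unigrams[b] / n)))
        if pmi >= threshold:
            candidates[a + b] = pmi
    return candidates
```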
11.
The Naive Bayes algorithm is widely used in spam filtering, and feature extraction is an essential step of the algorithm. Previous Chinese spam filtering methods have extracted features at the word level; with large-scale email training samples, the time efficiency of this approach becomes a bottleneck for filtering. This paper therefore proposes a phrase-based Bayesian method for filtering Chinese spam: in the feature extraction stage it incorporates a phrase-analysis approach recently introduced in text classification, extracting features at the phrase level according to base noun phrases, base verb phrases, and basic semantic-analysis rules. Comparative spam-filtering experiments using word-level versus phrase-level features confirm the effectiveness of the proposed method.
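The classifier side of the method above is standard multinomial Naive Bayes; only the feature units (phrases instead of words) change. The sketch below assumes phrase extraction has already happened upstream, and the class name and example phrases are hypothetical.

```python
import math
from collections import Counter, defaultdict

class PhraseNaiveBayes:
    """Multinomial Naive Bayes over phrase features with Laplace smoothing.
    Each document is given as a list of extracted phrase features."""
    def __init__(self):
        self.class_counts = Counter()
        self.feature_counts = defaultdict(Counter)
        self.vocab = set()

    def fit(self, docs, labels):
        for feats, y in zip(docs, labels):
            self.class_counts[y] += 1
            self.feature_counts[y].update(feats)
            self.vocab.update(feats)

    def predict(self, feats):
        total_docs = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for y, cc in self.class_counts.items():
            lp = math.log(cc / total_docs)  # log prior
            denom = sum(self.feature_counts[y].values()) + len(self.vocab)
            for f in feats:
                # add-one smoothed log likelihood of each phrase feature
                lp += math.log((self.feature_counts[y][f] + 1) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

With phrases as features the vocabulary is smaller and more discriminative than a word vocabulary, which is the source of the efficiency gain the abstract claims.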
12.
Chinese word segmentation is a difficult and challenging task because Chinese has no white space to mark word boundaries, and its result depends largely on the quality of the segmentation dictionary. Many domain phrases are cut into single words because they are not contained in the general dictionary. This paper presents a Chinese domain phrase identification algorithm based on atomic word formation. First, the atomic word formation algorithm extracts candidate strings from the corpus after preprocessing; these strings are stored as the candidate domain phrase set. Second, several strategies, such as repeated-substring screening, part-of-speech (POS) combination filtering, and prefix/suffix filtering, are applied to filter the candidate domain phrases. Third, a refining step determines whether a string is a domain phrase by calculating its domain relevance. Finally, all identified strings are sorted and exported to the user. With the help of morphological rules, the method combines statistical information with rules rather than relying on corpus machine learning. Experiments show that this method obtains better results than traditional n-gram methods.
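The domain-relevance step above is often formulated as a relative-frequency ratio between a domain corpus and a general contrast corpus; the abstract does not give its exact formula, so the following is a hedged sketch with a hypothetical function name and add-one smoothing.

```python
def domain_relevance(term, domain_freq, domain_size, contrast_freq, contrast_size):
    """Relative-frequency ratio of a candidate string between a domain
    corpus and a general contrast corpus, with add-one smoothing so unseen
    strings do not divide by zero. Values well above 1 suggest the string
    is specific to the domain."""
    p_domain = (domain_freq.get(term, 0) + 1) / (domain_size + 1)
    p_general = (contrast_freq.get(term, 0) + 1) / (contrast_size + 1)
    return p_domain / p_general
```

A string frequent in both corpora (a general word) scores near 1, while a string concentrated in the domain corpus scores much higher and is kept as a domain phrase.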
14.
This paper proposes a prepositional phrase recognition method that incorporates simple noun phrase information. The method first uses a CRF model to recognize simple noun phrases in the corpus and corrects the results with transformation rules so that they better match the internal phrase structure of prepositional phrases; it then merges the word segmentation of the corpus according to the simple-noun-phrase results; finally, a multi-layer CRF model recognizes prepositional phrases in the test corpus, with rule-based correction applied. The precision, recall, and F-measure of prepositional phrase recognition reach 93.02%, 92.95%, and 92.99% respectively, 1.03 percentage points higher than the best previously published result. These results demonstrate the effectiveness of the simple-noun-phrase-based prepositional phrase recognition algorithm.
16.
Key phrases extracted by the traditional TF*PDF method can describe a topic accurately and support the tracking of news reports, but noise is sometimes misidentified as key phrases. This paper proposes a two-stage key phrase extraction method based on position-weighted TF*PDF to filter out such noise. The method combines the traditional TF*PDF algorithm with position weights to compute the weights of words and phrases and obtain a candidate key-phrase list; the burst value of each key phrase is then used to filter noise from the list. A key-phrase identification process combines hot words into phrases according to position and frequency information. The position-weighted TF*PDF algorithm is also used to assign weights to phrases, and the top-K phrases are taken as hot key phrases. Experiments on real web data show that, compared with traditional TF*PDF extraction, the proposed method better removes absolute noise from the key phrases and improves the accuracy of hot topic detection.
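The classical TF*PDF weight of a term sums, over news channels, its normalized frequency times the exponential of the fraction of the channel's documents containing it. The sketch below follows that formula; the `position_weight` multiplier is a hypothetical stand-in for the paper's position weighting, whose exact form the abstract does not give.

```python
import math

def tf_pdf(term_freqs, doc_counts, channel_sizes, position_weight=None):
    """TF*PDF weight of a term.
    term_freqs[c]    : normalised frequency of the term in channel c
    doc_counts[c]    : number of documents in channel c containing the term
    channel_sizes[c] : total number of documents in channel c
    position_weight  : optional multiplier (e.g. a headline boost), an
                       illustrative approximation of the position-weighted
                       variant described in the paper."""
    w = 0.0
    for c in term_freqs:
        f = term_freqs[c]
        n, big_n = doc_counts[c], channel_sizes[c]
        w += f * math.exp(n / big_n)  # exponential "proportional document frequency"
    return w * (position_weight or 1.0)
```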
17.
Pattern Recognition, 2002, 35(1): 245-252
Research in handwriting recognition has thus far focused primarily on recognizing words and phrases. In fact, phrases are usually treated as a concatenation of their constituent words, making the phrase recognizer in essence an enhanced word recognizer. In this paper we present a methodology that takes advantage of the spacing between the words in a phrase to aid the recognition process. The novelty of our approach lies in the fact that word breaks are determined in a manner that adapts to the writing style of the individual: the parameters that decide whether a particular gap between components is an inter-word gap or an inter-character gap are computed without generalizing over a large training set, and are instead tuned to the distribution of the gaps within the very instance of the phrase image being examined. We compare our approach to methods described in the literature that simply ignore the significance of gaps in a phrase. Our experiments show an improvement of about 5% in recognition rates. On a test set of about 1400 phrase images, the segmentation method "misses" only 2% of the true word-break points.
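One way to tune a word-break threshold to the gap distribution of a single phrase image, as described above, is to place it at the largest relative jump in the sorted gaps. This heuristic is an illustrative stand-in for the paper's instance-adapted parameters, not its actual method.

```python
def classify_gaps(gaps):
    """Split the component gaps of one phrase image into inter-character
    ("char") and inter-word ("word") gaps using a threshold derived from
    this instance's own gap distribution, not a global training set."""
    if len(gaps) < 2:
        # a lone gap between two components is taken as a word break
        return [(g, "word") for g in gaps]
    s = sorted(gaps)
    # find the biggest multiplicative jump between consecutive sorted gaps
    jumps = [(s[i + 1] / s[i], i) for i in range(len(s) - 1) if s[i] > 0]
    if not jumps:
        return [(g, "char") for g in gaps]
    _, i = max(jumps)
    threshold = (s[i] + s[i + 1]) / 2
    return [(g, "word" if g > threshold else "char") for g in gaps]
```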
18.
Mehrnoosh Sadrzadeh, Dimitri Kartsaklis, Esma Balkır, Annals of Mathematics and Artificial Intelligence, 2018, 82(4): 189-218
Distributional semantic models provide vector representations for words by gathering co-occurrence frequencies from corpora of text. Compositional distributional models extend these from words to phrases and sentences. In categorical compositional distributional semantics, phrase and sentence representations are functions of their grammatical structure and representations of the words therein. In this setting, grammatical structures are formalised by morphisms of a compact closed category and meanings of words are formalised by objects of the same category. These can be instantiated in the form of vectors or density matrices. This paper concerns the applications of this model to phrase and sentence level entailment. We argue that entropy-based distances of vectors and density matrices provide a good candidate to measure word-level entailment, show the advantage of density matrices over vectors for word level entailments, and prove that these distances extend compositionally from words to phrases and sentences. We exemplify our theoretical constructions on real data and a toy entailment dataset and provide preliminary experimental evidence.
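The entropy-based distances above rest on the von Neumann entropy S(ρ) = -tr(ρ log ρ) of a density matrix. As a small self-contained illustration (not the paper's construction), the entropy of a 2x2 real symmetric density matrix can be computed from its eigenvalues in closed form:

```python
import math

def vn_entropy_2x2(a, b, d):
    """Von Neumann entropy -tr(rho log rho) of the 2x2 real density
    matrix [[a, b], [b, d]] (symmetric, trace a + d = 1), computed from
    its two eigenvalues m +/- r."""
    m = (a + d) / 2.0
    r = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    ent = 0.0
    for w in (m + r, m - r):
        if w > 1e-12:  # 0 * log 0 is taken as 0
            ent -= w * math.log(w)
    return ent
```

A pure state has entropy 0 and the maximally mixed state has entropy log 2; the entailment distances in the paper compare such entropies between word and phrase representations.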