Similar Documents
20 similar documents found.
1.
Bilingual dictionaries are a crucial resource in cross-lingual natural language processing. Current bilingual dictionary extraction methods are mainly based on parallel corpora or comparable corpora, but both suffer from a scarcity of bilingual resources when extracting neologisms or certain technical terms. By contrast, methods based on partially bilingual corpora exploit news or encyclopedic knowledge and can thus alleviate this problem well; however, existing methods based on partially bilingual corpora mainly focus on the extraction of textual content…

2.
Bilingual dictionaries are an important resource for natural language processing applications such as cross-lingual information retrieval and machine translation. Existing algorithms for extracting bilingual dictionaries from comparable corpora are not yet mature, their extraction quality leaves room for improvement, and most studies concentrate on domain-specific terminology. To address these shortcomings, this paper proposes a bilingual dictionary extraction algorithm based on word vectors and comparable corpora. It first states the algorithm's basic assumptions and the related research methods, then describes the concrete steps for extracting a bilingual dictionary from a comparable corpus using an inter-word relation matrix built from word vectors, and finally compares the method with the classic vector space model, analyzing experimentally how factors such as context window size, seed dictionary size, and word frequency affect the two models. Experiments show that the proposed algorithm clearly outperforms the vector-space-model baseline, with the most significant precision gains on high-frequency words.
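To make the word-vector approach concrete, here is a minimal sketch of one standard realization of the idea: learn a linear map between the two embedding spaces from the seed dictionary, then take cosine-nearest neighbours as translation candidates. The abstract's inter-word relation matrix is not specified, so this is a related standard technique rather than the paper's exact algorithm; all vectors and word pairs are toy data.

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.linalg.qr(rng.normal(size=(5, 5)))[0]   # hidden "true" cross-lingual map

# Toy 5-dimensional embeddings; real ones would be trained on the
# comparable corpus.
src_vecs = {w: rng.normal(size=5) for w in ["猫", "狗", "水", "火"]}
gold = {"猫": "cat", "狗": "dog", "水": "water", "火": "fire"}
tgt_vecs = {gold[w]: src_vecs[w] @ R for w in src_vecs}
tgt_vecs["tree"] = rng.normal(size=5)          # a distractor word

# Fit a linear map from source space to target space on the seed pairs.
seed = [("猫", "cat"), ("狗", "dog"), ("水", "water")]
X = np.array([src_vecs[s] for s, _ in seed])
Y = np.array([tgt_vecs[t] for _, t in seed])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def translate(word: str) -> str:
    """Map a source word through W and return the cosine-nearest
    target word as its translation candidate."""
    v = src_vecs[word] @ W
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(tgt_vecs, key=lambda t: cos(v, tgt_vecs[t]))

# "火" is outside the seed dictionary; ideally this prints "fire",
# though with only three seed pairs the mapping is approximate.
print(translate("火"))
```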

3.
One deficiency of mutual information is its poor capacity to measure the association of words with asymmetric co-occurrence, which is common among multiword expressions in texts. Moreover, threshold setting, which is decisive for any practical implementation of mutual information for multiword extraction, requires many parameters to be predefined manually when extracting multiword expressions with different numbers of individual words. In this paper, we propose a new method, EMICO (Enhanced Mutual Information and Collocation Optimization), to extract substantival multiword expressions from text. Specifically, enhanced mutual information is proposed to measure the association of words, and collocation optimization is proposed to automatically determine the number of individual words contained in a multiword expression when it occurs in a candidate set. Our experiments show that EMICO significantly improves the performance of substantival multiword expression extraction in comparison with a classic extraction method based on mutual information.
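For reference, a minimal sketch of the classic pointwise-mutual-information baseline that EMICO is compared against (the enhanced-MI formula itself is not given in the abstract); the corpus is a stand-in.

```python
import math
from collections import Counter

tokens = ("the stock market fell as the stock exchange reported "
          "heavy losses on the stock market").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def pmi(w1: str, w2: str) -> float:
    """Classic pointwise mutual information of an adjacent word pair."""
    p_xy = bigrams[(w1, w2)] / (n - 1)
    p_x, p_y = unigrams[w1] / n, unigrams[w2] / n
    return math.log2(p_xy / (p_x * p_y))

# Rank recurring adjacent pairs; MWE candidates score highest.
for (w1, w2), c in bigrams.most_common():
    if c > 1:
        print(f"{w1} {w2}: PMI={pmi(w1, w2):.2f}")
```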

4.
In automatic machine translation evaluation, matching words or phrases that share the same meaning but differ in surface form is a major challenge. Much prior work extracts paraphrases from bilingual parallel or comparable corpora to strengthen the matching between machine translations and human references. However, bilingual parallel or comparable corpora are costly to build and hard to obtain in quantity for many language pairs. We propose a method that extracts paraphrases from monolingual text in the target language by constructing a Markov network over words, and we use these paraphrases to improve the correlation between automatic and human evaluation of machine translation. Experimental results on the WMT14 Metrics task show that the performance of our monolingual paraphrase extraction method is highly comparable to that of methods extracting paraphrases from bilingual parallel corpora. The proposed method therefore reduces the cost of paraphrase extraction while preserving paraphrase quality.
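The abstract does not detail how the Markov network is built, so the following sketch is only a guess at the spirit of the approach: words are nodes, sentence-level co-occurrence adds weighted edges, and words with strongly overlapping neighbourhoods become paraphrase candidates. All sentences are invented, and a real system would filter function words.

```python
from collections import defaultdict
from itertools import combinations

sentences = [
    "the car sped down the road".split(),
    "the automobile sped down the street".split(),
    "a car stopped on the road".split(),
    "an automobile stopped on the street".split(),
]

# Nodes are words; edge weights count sentence-level co-occurrence.
neighbors = defaultdict(lambda: defaultdict(int))
for sent in sentences:
    for w1, w2 in combinations(set(sent), 2):
        neighbors[w1][w2] += 1
        neighbors[w2][w1] += 1

def overlap(a: str, b: str) -> float:
    """Jaccard overlap of two words' neighborhoods in the graph."""
    na, nb = set(neighbors[a]) - {b}, set(neighbors[b]) - {a}
    return len(na & nb) / len(na | nb) if na | nb else 0.0

words = sorted(neighbors)
pairs = [(overlap(a, b), a, b) for a, b in combinations(words, 2)]
for score, a, b in sorted(pairs, reverse=True)[:3]:
    print(f"{a} ~ {b}: {score:.2f}")   # top paraphrase candidates
```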

5.
The bilingual example base is one of the most important components of a translation memory system. Retrieving more associated examples that match the user's current translation need from an example base of limited size is a central topic in translation memory research. This paper first analyzes the limitations of existing retrieval algorithms that rely on a single method, and then, on top of a knowledge-enriched representation of the bilingual example base, proposes a multi-strategy mechanism for retrieving associated examples. It combines sentence-level syntactic structure matching, sentence edit distance, phrase fragment matching, lexical semantic generalization, and preference ranking based on extended information (e.g., sentence source, subject domain, and usage frequency). Experimental results show that this method effectively increases both the number and the quality of recalled examples and noticeably improves the assistance offered to users.
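A minimal sketch of two of the strategies named above, sentence similarity (an edit-distance proxy via difflib) combined with a usage-frequency preference; the example base and weights are placeholders.

```python
import difflib

# Invented translation-memory entries: (source, target, usage frequency).
examples = [
    ("I would like to book a room", "我想订一个房间", 12),
    ("I would like to book a flight", "我想订一张机票", 30),
    ("Please cancel my reservation", "请取消我的预订", 5),
]

def retrieve(query: str, w_sim: float = 0.8, w_freq: float = 0.2):
    """Rank examples by a weighted mix of word-level string similarity
    and normalized usage frequency (the 'extended information')."""
    max_freq = max(f for _, _, f in examples)
    def score(entry):
        src, _, freq = entry
        sim = difflib.SequenceMatcher(None, query.split(), src.split()).ratio()
        return w_sim * sim + w_freq * freq / max_freq
    return sorted(examples, key=score, reverse=True)

best_src, best_tgt, _ = retrieve("I would like to book a ticket")[0]
print(best_src, "->", best_tgt)
```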

6.
Liu Yu, Hua Wen, Zhou Xiaofang. World Wide Web, 2021, 24(1): 135-156.

Knowledge, in practice, is time-variant, and many relations are only valid for a certain period of time. This phenomenon highlights the importance of harvesting temporal-aware knowledge, i.e., relational facts coupled with their valid temporal interval. Inspired by pattern-based information extraction systems, we resort to temporal patterns to extract time-aware knowledge from free text. However, pattern design is extremely laborious and time-consuming even for a single relation, and free text is usually ambiguous, which makes temporal instance extraction extremely difficult. Therefore, in this work, we study the problem of temporal knowledge extraction in two steps: (1) temporal pattern extraction, by automatically analysing a large-scale text corpus with a small number of seed temporal facts; (2) temporal instance extraction, by applying the identified temporal patterns. For pattern extraction, we introduce various techniques, including corpus annotation, pattern generation, scoring, and clustering, to improve both the accuracy and the coverage of the extracted patterns. For instance extraction, we propose a double-check strategy to improve accuracy and a set of node-extension rules to improve coverage. We conduct extensive experiments on real-world datasets and compare with state-of-the-art systems. Experimental results verify the effectiveness of our proposed methods for temporal knowledge harvesting.

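A minimal sketch of the two-step pipeline described above, under strong simplifying assumptions: exact string slots, a toy corpus, and none of the paper's scoring, clustering, double-check, or node-extension machinery.

```python
import re

# Seed temporal facts for the relation "worksFor": (subject, object, year).
seeds = [("Alice", "Acme", "2010"), ("Bob", "Globex", "2015")]

corpus = [
    "Alice joined Acme in 2010.",
    "Bob joined Globex in 2015.",
    "Carol joined Initech in 2018.",
]

# Step 1: pattern extraction -- find sentences containing all three
# seed arguments and replace them with slots.
patterns = set()
for subj, obj, year in seeds:
    for sent in corpus:
        if subj in sent and obj in sent and year in sent:
            patterns.add(sent.replace(subj, "<SUBJ>")
                             .replace(obj, "<OBJ>")
                             .replace(year, "<TIME>"))

# Step 2: instance extraction -- compile each pattern into a regex
# and apply it to the corpus to harvest new temporal facts.
for pat in patterns:
    regex = re.escape(pat)
    for slot, group in [("<SUBJ>", r"(?P<subj>\w+)"),
                        ("<OBJ>", r"(?P<obj>\w+)"),
                        ("<TIME>", r"(?P<time>\d{4})")]:
        regex = regex.replace(re.escape(slot), group)
    for sent in corpus:
        m = re.fullmatch(regex, sent)
        if m and (m.group("subj"), m.group("obj"), m.group("time")) not in seeds:
            print(m.group("subj"), "worksFor", m.group("obj"),
                  "valid from", m.group("time"))
```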

7.
Phrase alignment in a Korean-Chinese bilingual corpus is important for example-based Korean-Chinese machine translation. Starting from the structural characteristics of Korean noun phrases, and based on experimental analysis of statistics-based and dictionary-based word alignment methods, this paper proposes a noun phrase alignment method for Korean-Chinese bilingual corpora that exploits word alignment position information. The method first obtains word alignment position information with a statistical approach, and then corrects the word alignment using dictionary-based similarity computation. On this basis, it extracts Korean noun phrases and their Chinese translations with rules for the left and right boundaries of Korean noun phrases, and filters the candidates with an association measure to complete the alignment. Experimental results show that the method achieves good phrase alignment on a relatively large corpus.

8.
A person's speaking style, consisting of such attributes as voice, choice of vocabulary, and the physical motions employed, not only expresses the speaker's identity but also emphasizes the content of an utterance. Speech that combines these aspects of speaking style becomes more vivid and expressive to listeners. Recent research on speaking style modeling has paid most attention to speech signal processing; this study instead focuses on text processing for idiolect extraction and generation, modeling a specific person's speaking style for text-to-speech (TTS) conversion. The first stage of this study adopts a statistical method to automatically detect candidate idiolects from a personalized, transcribed speech corpus. Based on the categorization of the detected candidates, superfluous idiolects are extracted using a fluency measure, while the remaining candidates are regarded as nonsuperfluous idiolects. In idiolect generation, the input text is converted into a target text with a particular speaker's speaking style via insertion of superfluous idiolects or synonym substitution of nonsuperfluous idiolects. To evaluate the proposed methods, experiments were conducted on a Chinese corpus collected and transcribed from the speech files of three Taiwanese politicians. The results show that the proposed method can effectively convert a source text into a target text with a personalized speaking style.

9.
Bilingual termbanks are important for many natural language processing applications, especially in translation workflows in industrial settings. In this paper, we apply a log-likelihood comparison method to extract monolingual terminology from the source and target sides of a parallel corpus. The initial candidate terminology list is prepared by taking all arbitrary n-gram word sequences from the corpus. Then, a well-known statistical measure (the Dice coefficient) is employed in order to remove any multi-word terms with weak associations from the candidate term list. Thereafter, the log-likelihood comparison method is applied to rank the phrasal candidate term list. Then, using a phrase-based statistical machine translation model, we create a bilingual terminology with the extracted monolingual term lists. We integrate an external knowledge source—the Wikipedia cross-language link databases—into the terminology extraction (TE) model to assist two processes: (a) the ranking of the extracted terminology list, and (b) the selection of appropriate target terms for a source term. First, we report the performance of our monolingual TE model compared to a number of the state-of-the-art TE models on English-to-Turkish and English-to-Hindi data sets. Then, we evaluate our novel bilingual TE model on an English-to-Turkish data set, and report the automatic evaluation results. We also manually evaluate our novel TE model on English-to-Spanish and English-to-Hindi data sets, and observe excellent performance for all domains.
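A minimal sketch of the Dice-coefficient filter mentioned above, over toy counts; the log-likelihood ranking and the Wikipedia integration are omitted.

```python
from collections import Counter

tokens = ("machine translation systems use machine translation models "
          "while translation memory systems reuse translation memory").split()

uni = Counter(tokens)
bi = Counter(zip(tokens, tokens[1:]))

def dice(w1: str, w2: str) -> float:
    """Dice coefficient: 2*f(w1 w2) / (f(w1) + f(w2)).
    Values near 1 indicate the words rarely occur apart."""
    return 2 * bi[(w1, w2)] / (uni[w1] + uni[w2])

# Candidate multi-word terms below a Dice threshold would be discarded.
for pair in [("machine", "translation"), ("translation", "memory")]:
    print(pair, f"{dice(*pair):.2f}")
```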

10.
Non-contiguous Phrase Template Extraction and Its Application in Statistical Machine Translation
孙越恒, 段楠, 侯越先. Computer Science (计算机科学), 2009, 36(10): 192-196.
Current phrase-based statistical machine translation models rarely take non-contiguous phrases into account, which causes the meaning of the translation to shift or be lost in the target language. Taking non-contiguous prepositional phrases as an example, this paper presents a phrase template extraction algorithm. A rule-based method first extracts Chinese non-contiguous prepositional phrase templates; the corresponding English translations are then obtained with the help of word-aligned bilingual corpora and a preposition-localizer translation table. The resulting bilingual templates are added to the phrase translation table. Comparative experiments on a standard test corpus show that after the non-contiguous phrase templates are added, the output conforms better to the grammar of the target language, and translation quality improves consistently.

11.
Key concept extraction is a major step for ontology learning that aims to build an ontology by identifying relevant domain concepts and their semantic relationships from a text corpus. The success of ontology development using key concept extraction strongly relies on the degree of relevance of the key concepts identified. If the identified key concepts are not closely relevant to the domain, the constructed ontology will not be able to correctly and fully represent the domain knowledge. In this paper, we propose a novel method, named CFinder, for key concept extraction. Given a text corpus in the target domain, CFinder first extracts noun phrases using their linguistic patterns based on Part-Of-Speech (POS) tags as candidates for key concepts. To calculate the weights (or importance) of these candidates within the domain, CFinder combines their statistical knowledge and domain-specific knowledge indicating their relative importance within the domain. The calculated weights are further enhanced by considering an inner structural pattern of the candidates. The effectiveness of CFinder is evaluated with a recently developed ontology for the domain of 'emergency management for mass gatherings' against the state-of-the-art methods for key concept extraction, including Text2Onto, KP-Miner, and Moki. The comparative evaluation results show that CFinder statistically significantly outperforms all three methods in terms of F-measure and average precision.
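A minimal sketch of CFinder's first step, POS-pattern noun-phrase candidate extraction; the hard-coded toy tagger and the pattern JJ* NN+ are illustrative assumptions, not the paper's exact configuration.

```python
import re

# Toy POS lexicon standing in for a trained tagger (2-letter tags).
POS = {"the": "DT", "emergency": "NN", "management": "NN", "plan": "NN",
       "requires": "VB", "careful": "JJ", "planning": "NN",
       "for": "XX", "a": "DT", "large": "JJ", "crowd": "NN"}

def np_candidates(sentence: str):
    """Yield noun-phrase candidates matching the POS pattern JJ* NN+,
    located by a regex over the space-joined tag sequence."""
    words = sentence.lower().split()
    tags = " ".join(POS.get(w, "XX") for w in words)
    for m in re.finditer(r"(?:JJ )*NN(?: NN)*", tags):
        start = tags[:m.start()].count(" ")    # word index of match start
        length = m.group(0).count(" ") + 1     # number of words matched
        yield " ".join(words[start:start + length])

print(list(np_candidates(
    "The emergency management plan requires careful planning for a large crowd")))
```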

12.
Question classification is a fundamental task in question answering research. Previous work trains question classification models only on monolingual corpora and suffers from limited data and short question texts. To address these problems, this paper proposes a dual-channel LSTM question classification method that fuses bilingual corpora. First, translated corpora are used to augment the Chinese and English corpora respectively; second, each sample in both corpora is represented by both its question text and its translation; finally, a dual-channel LSTM classifier is proposed to make full use of these two feature sets. Experimental results show that the proposed method effectively improves question classification performance.
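A minimal PyTorch sketch of such a dual-channel architecture: one LSTM encoder per language channel, with the final hidden states concatenated into a classifier. Vocabulary sizes, dimensions, and the class count are placeholders; the paper's exact configuration is not given in the abstract.

```python
import torch
import torch.nn as nn

class DualChannelLSTM(nn.Module):
    """Two LSTM encoders, one per language channel; their final
    hidden states are concatenated and fed to a linear classifier."""
    def __init__(self, vocab_a=5000, vocab_b=5000, emb=100,
                 hidden=128, n_classes=6):
        super().__init__()
        self.emb_a = nn.Embedding(vocab_a, emb)
        self.emb_b = nn.Embedding(vocab_b, emb)
        self.lstm_a = nn.LSTM(emb, hidden, batch_first=True)
        self.lstm_b = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, question, translation):
        _, (h_a, _) = self.lstm_a(self.emb_a(question))
        _, (h_b, _) = self.lstm_b(self.emb_b(translation))
        joint = torch.cat([h_a[-1], h_b[-1]], dim=-1)
        return self.out(joint)

model = DualChannelLSTM()
q = torch.randint(0, 5000, (2, 12))   # batch of question token ids
t = torch.randint(0, 5000, (2, 15))   # token ids of their translations
print(model(q, t).shape)              # torch.Size([2, 6])
```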

13.
Rules and Distributional Characteristics of Chinese Verb-Verb Collocations
Collocation is an important knowledge source for automatic Chinese syntactic analysis, and verbs are the core and prerequisite of parsing. Oriented toward Chinese information processing, this paper induces rules for automatic collocation acquisition through statistical analysis of real-world text, and studies the distribution of relation types in verb-verb collocations as well as the positional distribution of collocating words. On this basis, it proposes suitable observation windows for extracting verb-verb collocations of four relation types: verb-object, verb-complement, serial-verb, and coordinate.
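A minimal sketch of window-based verb-verb pair collection on a toy tagged sentence; the window value and the tag set are illustrative, not the paper's measured optima.

```python
from collections import Counter
from itertools import combinations

# Toy segmented-and-tagged Chinese sentence: (word, POS), "v" = verb.
sentence = [("他", "r"), ("决定", "v"), ("继续", "v"), ("学习", "v"),
            ("新", "a"), ("知识", "n"), ("并", "c"), ("开始", "v"),
            ("练习", "v"), ("写作", "v")]

def verb_verb_pairs(tagged, window=2):
    """Count verb-verb co-occurrences whose token distance falls within
    the observation window; the paper tunes this window per relation type."""
    pairs = Counter()
    verbs = [(i, w) for i, (w, t) in enumerate(tagged) if t == "v"]
    for (i, v1), (j, v2) in combinations(verbs, 2):
        if j - i <= window:
            pairs[(v1, v2)] += 1
    return pairs

print(verb_verb_pairs(sentence, window=2))
```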

14.
Automatic extraction of multiword expressions (MWEs) presents a tough challenge for the NLP community and corpus linguistics. Indeed, although numerous knowledge-based symbolic approaches and statistically driven algorithms have been proposed, efficient MWE extraction still remains an unsolved issue. In this paper, we evaluate the Lancaster UCREL Semantic Analysis System (henceforth USAS (Rayson, P., Archer, D., Piao, S., McEnery, T., 2004. The UCREL semantic analysis system. In: Proceedings of the LREC-04 Workshop, Beyond Named Entity Recognition Semantic labelling for NLP tasks, Lisbon, Portugal. pp. 7–12)) for MWE extraction, and explore the possibility of improving USAS by incorporating a statistical algorithm. Developed at Lancaster University, the USAS system automatically annotates English corpora with semantic category information. Employing a large-scale semantically classified multi-word expression template database, the system is also capable of detecting many multiword expressions, as well as assigning semantic field information to the MWEs extracted. Whilst USAS therefore offers a unique tool for MWE extraction, allowing us to both extract and semantically classify MWEs, it can sometimes suffer from low recall. Consequently, we have been comparing USAS, which employs a symbolic approach, to a statistical tool, which is based on collocational information, in order to determine the pros and cons of these different tools, and more importantly, to examine the possibility of improving MWE extraction by combining them. As we report in this paper, we have found a highly complementary relation between the different tools: USAS missed many domain-specific MWEs (law/court terms in this case), and the statistical tool missed many commonly used MWEs that occur in low frequencies (lower than three in this case). Due to their complementary relation, we are proposing that MWE coverage can be significantly increased by combining a lexicon-based symbolic approach and a collocation-based statistical approach.

15.
This paper reviews the classification of corpora and the history of research on extracting translation equivalents from comparable corpora. It builds on the basic assumption underlying such extraction: a word preserves its co-occurrence patterns with surrounding words when mapped into another language. To improve the precision of translation equivalent extraction, the paper computes equivalents bidirectionally and intersects the results, weights words with TF(iw)*IDF(i) values, and exploits the part-of-speech information of context words. It describes the experimental procedure for translation equivalent extraction and briefly analyzes the results, which show that these methods effectively improve the precision of the computed translation equivalents. Finally, issues requiring further study are raised.
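A minimal sketch of the bidirectional-extraction-plus-intersection step, using raw co-occurrence context vectors over a tiny invented seed dictionary; the TF(iw)*IDF(i) weighting and POS filtering are omitted.

```python
import numpy as np

# Seed dictionary defining shared context dimensions across languages.
seed = {"经济": "economy", "增长": "growth", "市场": "market"}

# Invented context-vector counts: how often each headword co-occurs
# with each seed context word in its own half of the comparable corpus.
zh = {"银行": [8, 2, 9], "政策": [7, 6, 1]}
en = {"bank": [9, 1, 8], "policy": [6, 7, 2], "river": [0, 0, 1]}

def best(vec, side):
    """Return the word on `side` whose context vector is cosine-nearest."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    v = np.array(vec, dtype=float)
    return max(side, key=lambda w: cos(v, np.array(side[w], dtype=float)))

# Bidirectional extraction: keep a pair only if each word is the
# other's nearest neighbour (the intersection step in the abstract).
for zw in zh:
    ew = best(zh[zw], en)
    if best(en[ew], zh) == zw:
        print(zw, "<->", ew)
```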

16.
From the perspective of building a large-scale Uyghur corpus, this paper surveys existing techniques for extracting the main content of web pages and proposes a content extraction method based on sentence-length features. The method defines a series of filtering and replacement rules to preprocess the page source, and judges whether a text block belongs to the page's main content according to sentence-length features. The whole process does not depend on the DOM tree, avoiding the performance drawbacks of DOM-based extraction methods. Experimental results show that the method achieves high accuracy and good generality for content extraction from various types of Uyghur web pages.
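A minimal sketch of the sentence-length heuristic, assuming rule-based tag stripping and a single length threshold; the threshold and rules here are invented, not the paper's.

```python
import re

html = """<html><body>
<div>Home | About | Contact</div>
<p>This is the first real paragraph of the article, and it is long
enough to look like body text rather than navigation.</p>
<div>Copyright 2024</div>
</body></html>"""

# Rule-based preprocessing: drop scripts/styles, split on block tags,
# then strip any remaining inline tags.
text = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)
blocks = [re.sub(r"<[^>]+>", " ", b).strip()
          for b in re.split(r"</?(?:div|p|body|html)[^>]*>", text)]

def is_content(block: str, min_len: int = 40) -> bool:
    """Sentence-length heuristic: body text tends to contain long
    sentences, while navigation and footers are short fragments."""
    sentences = [s for s in re.split(r"[.!?。！？]", block) if s.strip()]
    return bool(sentences) and max(len(s) for s in sentences) >= min_len

print([b for b in blocks if b and is_content(b)])
```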

17.
A Linguistic Approach to Extracting Acronym Expansions from Text
We propose a linguistically aided approach for the extraction of acronyms from text, with implications at the discourse, lexicon, morphologic and phonologic levels. A system implemented using this approach achieves excellent performance (f = 95.22%) on a limited corpus. Our work indicates the adequacy of linguistic methods for acronym discovery.
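A minimal sketch of one common linguistic heuristic for this task, matching a parenthesized acronym against the initials of the immediately preceding words; the paper's actual method operates at more levels than this.

```python
import re

text = ("The World Health Organization (WHO) issued new guidance, "
        "and the European Central Bank (ECB) adjusted rates.")

def acronym_expansions(text: str):
    """For each parenthesized uppercase token, check whether the
    initials of the immediately preceding words spell it out."""
    for m in re.finditer(r"\(([A-Z]{2,})\)", text):
        acro = m.group(1)
        before = re.findall(r"[A-Za-z]+", text[:m.start()])
        window = before[-len(acro):]
        if "".join(w[0] for w in window).upper() == acro:
            yield acro, " ".join(window)

print(dict(acronym_expansions(text)))
# {'WHO': 'World Health Organization', 'ECB': 'European Central Bank'}
```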

18.
Machine translation units extracted from bilingual corpora cover real text better. This paper presents an algorithm that obtains translation units by finding the "same and different" parts, not consisting entirely of high-frequency function words, between two bilingual sentence pairs, and aligning those parts using a translation dictionary and a dynamic programming algorithm. For the candidate units obtained, the algorithm applies three filters to check their correctness: a bilingual word-string similarity filter for semantic correspondence, a part-of-speech similarity filter for grammatical correspondence, and a sentence-initial/final stop-word filter for collocational correctness. Sampling inspection shows a precision of 86% and a recall of about 61.34% for the extracted units; the algorithm offers a new and practical way to acquire machine translation units.

19.
A novel approach is introduced in this paper for the implementation of a question-answering based tool for the extraction of information and knowledge from texts. This effort resulted in the computer implementation of a system answering bilingual questions directly from a text using Natural Language Processing. The system uses domain knowledge concerning categories of actions and implicit semantic relations. The present state of the art in information extraction is based on the template approach, which relies on a predefined user model. The model guides the extraction of information and the instantiation of a template, similar to a frame or set of attribute-value pairs, as the result of the extraction process. Our question-answering based approach aims to create flexible information extraction tools that accept natural language questions and generate answers containing information extracted from text either directly or after applying deductive inference. Our approach also addresses the problem of implicit semantic relations occurring either in the questions or in the texts from which information is extracted. These relations are made explicit with the use of domain knowledge. Examples of the application of our methods are presented concerning four domains of quite different nature: oceanography, medical physiology, aspirin pharmacology, and ancient Greek law. Questions are expressed both in Greek and English. Another important point of our method is that text is processed directly, avoiding any kind of formal representation when inference is required for the extraction of facts not mentioned explicitly in the text. This idea of using text as a knowledge base was first presented in Kontos [7] and further elaborated in [9,11,12] as the ARISTA method, a new method for knowledge acquisition from texts that is based on using natural language itself for knowledge representation.

20.
In current statistical translation methods, the size of the bilingual corpus and the accuracy of word alignment strongly affect system performance. Although a large corpus improves word alignment accuracy and thus system performance, it does so at the cost of a heavier system load; research on statistical machine translation therefore seeks, beyond simply using large corpora, other ways of improving performance. To this end, this paper proposes a method for applying a bilingual dictionary in statistical machine translation, which not only improves word alignment accuracy but also yields higher-quality translations, alleviating the data sparseness problem to some extent.
