Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
We report experimental results on automatic extraction of an English-Chinese translation lexicon, by statistical analysis of a large parallel corpus, using limited amounts of linguistic knowledge. To our knowledge, these are the first empirical results of the kind between an Indo-European and non-Indo-European language for any significant vocabulary and corpus size. The learned vocabulary size is about 6,500 English words, achieving translation precision in the 86–96% range, with alignment proceeding at paragraph, sentence, and word levels. Specifically, we report (1) progress on the HKUST English-Chinese Parallel Bilingual Corpus, (2) experiments supporting the usefulness of restricted lexical cues for statistical paragraph and sentence alignment, and (3) experiments that question the role of hand-derived monolingual lexicons for automatic word translation acquisition. Using a hand-derived monolingual lexicon, the learned translation lexicon averages 2.33 Chinese translations per English entry, with a manually-filtered precision of 95.1%, and an automatically-filtered weighted precision of 86.0%. We then introduce a fully automatic two-stage statistical methodology that is able to learn translations for collocations. A statistically-learned monolingual Chinese lexicon is first used to segment the Chinese text, before applying bilingual training to produce 6,429 English entries with 2.25 Chinese translations per entry. This method improves the manually-filtered precision to 96.0% and the automatically-filtered weighted precision to 91.0%, an error rate reduction of 35.7% from using a hand-derived monolingual lexicon.
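The abstract does not spell out the underlying statistical model, so the following is only a minimal co-occurrence sketch of how candidate Chinese translations might be ranked for each English word from pre-aligned sentence pairs; the function name, toy corpus, and top-k cutoff are illustrative assumptions, not the authors' method.

```python
from collections import defaultdict

def cooccurrence_lexicon(aligned_pairs, top_k=2):
    """Toy co-occurrence-based translation lexicon.

    aligned_pairs: iterable of (english_tokens, chinese_tokens) for sentence
    pairs that are already aligned.  Returns, for each English word, the
    top_k Chinese words it most often co-occurs with.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for en_tokens, zh_tokens in aligned_pairs:
        for e in set(en_tokens):
            for z in set(zh_tokens):
                counts[e][z] += 1
    lexicon = {}
    for e, zh_counts in counts.items():
        ranked = sorted(zh_counts.items(), key=lambda kv: kv[1], reverse=True)
        lexicon[e] = [z for z, _ in ranked[:top_k]]
    return lexicon

# Tiny hypothetical corpus of pre-aligned sentence pairs.
pairs = [
    (["hong", "kong", "government"], ["香港", "政府"]),
    (["the", "government", "said"], ["政府", "表示"]),
]
print(cooccurrence_lexicon(pairs))
```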

2.
Text sentiment analysis is currently a hot research topic in natural language processing, with broad practical value and theoretical significance. Sentiment lexicon construction is a fundamental task in text sentiment analysis: classifying words by sentiment orientation as positive, neutral, or negative. However, Chinese sentiment lexicon construction faces two main problems: (1) many sentiment words are polysemous or ambiguous, i.e., a word's sentiment orientation varies with context, which makes computing its sentiment difficult; and (2) as domestic and international research shows, relatively few resources are available for building Chinese sentiment lexicons. Since English sentiment analysis research offers abundant corpora and lexicons, this paper uses a machine translation system, combined with constraint information from the bilingual resources, and applies the label propagation (LP) algorithm to compute word sentiment. Experimental results in four domains show that our method yields a Chinese sentiment lexicon with high classification accuracy and good coverage of domain contexts.
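As a rough illustration of the label propagation (LP) step, here is a minimal monolingual sketch that propagates seed polarities over a word-similarity graph; the actual method additionally uses machine translation and bilingual constraints, and the graph, seed labels, and iteration count below are hypothetical.

```python
import numpy as np

def label_propagation(W, labels, n_iter=50):
    """Minimal label propagation over a word graph.

    W:      (n, n) symmetric non-negative similarity matrix between words.
    labels: length-n array, +1 (positive seed), -1 (negative seed), 0 (unlabeled).
    Seed polarities are clamped after every propagation step.
    """
    W = np.asarray(W, dtype=float)
    labels = np.asarray(labels, dtype=float)
    row_sums = np.maximum(W.sum(axis=1), 1e-12)
    P = W / row_sums[:, None]        # row-normalized transition matrix
    f = labels.copy()
    seeds = labels != 0
    for _ in range(n_iter):
        f = P @ f
        f[seeds] = labels[seeds]     # clamp the seed polarities
    return f                         # sign of f gives the predicted polarity

# Hypothetical 4-word graph: two seed words and two unlabeled words.
W = [[0, 1, 1, 0],
     [1, 0, 0, 1],
     [1, 0, 0, 0],
     [0, 1, 0, 0]]
print(label_propagation(W, [1, -1, 0, 0]))
```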

3.
The lexicon is a major part of any Machine Translation (MT) system. If the lexicon of an MT system is not adequate, this will affect the quality of the whole system. Building a comprehensive lexicon, i.e., one with a high lexical coverage, is a major activity in the process of developing a good MT system. As such, the evaluation of the lexicon of an MT system is clearly a pivotal issue for the process of evaluating MT systems. In this paper, we introduce a new methodology that was devised to enable developers and users of MT systems to evaluate their lexicons semi-automatically. This new methodology is based on the idea of the importance of a specific word or, more precisely, word sense, to a given application domain. This importance, or weight, determines how the presence of such a word in, or its absence from, the lexicon affects the MT system's lexical quality, which in turn will naturally affect the overall output quality. The method, which adopts a black-box approach to evaluation, was implemented and applied to evaluating the lexicons of three commercial English–Arabic MT systems. A specific domain was chosen in which the various word-sense weights were determined by feeding sample texts from the domain into a system developed specifically for that purpose. Once this database of word senses and weights was built, test suites were presented to each of the MT systems under evaluation and their output rated by a human operator as either correct or incorrect. Based on this rating, an overall automated evaluation of the lexicons of the systems was deduced.
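The abstract describes weighting each word sense by its importance to the domain; one plausible (assumed) reading of the resulting lexical-quality score is sketched below, with hypothetical sense names and weights.

```python
def weighted_lexicon_score(ratings, weights):
    """Weighted lexical quality of an MT lexicon.

    ratings: dict mapping a word sense to 1 (output judged correct)
             or 0 (incorrect or missing from the lexicon).
    weights: dict mapping each sense to its domain-importance weight.
    """
    total = sum(weights[s] for s in ratings)
    correct = sum(weights[s] * ratings[s] for s in ratings)
    return correct / total if total else 0.0

# Hypothetical ratings and weights for three domain senses.
print(weighted_lexicon_score(
    {"bank/finance": 1, "interest/finance": 1, "share/finance": 0},
    {"bank/finance": 5.0, "interest/finance": 3.0, "share/finance": 2.0}))
```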

4.
This paper explores a method for constructing a social-emotion lexicon for microblogs and applies it to emotion analysis of public events. First, a small-scale seed emotion lexicon is built manually; the deep learning tool Word2vec is then used, with incremental learning on microblog corpora about trending social events, to expand the seed lexicon, and the final emotion lexicon is produced by matching against the HowNet dictionary and manual filtering. Next, a lexicon-based method and an SVM-based method are both applied to emotion analysis on an annotated corpus; the comparison shows that the lexicon-based method outperforms the SVM-based method, with average precision and recall higher by 13.9% and 1.5%, respectively. Finally, the constructed lexicon is used for emotion analysis of trending public events, and the experimental results show that the method is effective.
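A hedged sketch of the Word2vec expansion step using gensim (an assumed toolkit; the paper only names "Word2vec"): train on segmented microblog posts and propose the nearest neighbours of seed emotion words as lexicon candidates. Incremental updates on corpora for new events could be done with gensim's build_vocab(..., update=True) followed by train(); the toy sentences and seed words below are hypothetical.

```python
from gensim.models import Word2Vec

# Hypothetical tokenized microblog posts; in practice these would be the
# segmented posts about a trending event.
sentences = [["今天", "非常", "开心"], ["这件", "事", "让", "人", "愤怒"]]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)

def expand_lexicon(model, seed_words, topn=10):
    """Propose lexicon candidates: words closest to any seed emotion word."""
    candidates = set()
    for w in seed_words:
        if w in model.wv:
            candidates.update(t for t, _ in model.wv.most_similar(w, topn=topn))
    return candidates

print(expand_lexicon(model, ["开心", "愤怒"], topn=2))
```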

5.
Learning finite-state models for machine translation (total citations: 1; self-citations: 0; citations by others: 1)
In formal language theory, finite-state transducers are well-known models for simple "input-output" mappings between two languages. Even if more powerful, recursive models can be used to account for more complex mappings, it has been argued that the input-output relations underlying most usual natural language pairs can essentially be modeled by finite-state devices. Moreover, the relative simplicity of these mappings has recently led to the development of techniques for learning finite-state transducers from a training set of input-output sentence pairs of the languages considered. In recent years, these techniques have led to the development of a number of machine translation systems. Under the statistical statement of machine translation, we overview here how modeling, learning and search problems can be solved by using stochastic finite-state transducers. We also review the results achieved by the systems we have developed under this paradigm. As a main conclusion of this review we argue that, as task complexity and training data scarcity increase, those systems which rely more on statistical techniques tend to produce the best results. This work was partially supported by the European Union project TT2 (IST-2001-32091) and by the Spanish project ITEFTE (TIC 2003-08681-C02-02). Editors: Georgios Paliouras and Yasubumi Sakakibara

6.
Sentiment analysis is an active research area in today's era due to the abundance of opinionated data present on online social networks. Semantic detection is a sub-category of sentiment analysis which deals with the identification of sentiment orientation in any text. Many sentiment applications rely on lexicons to supply features to a model. Various machine learning algorithms and sentiment lexicons have been proposed in research in order to improve sentiment categorization. Supervised machine learning algorithms and domain-specific sentiment lexicons generally perform better as compared to the unsupervised or semi-supervised domain-independent lexicon-based approaches. The core hindrance in the application of supervised algorithms or domain-specific sentiment lexicons is the unavailability of sentiment-labeled training datasets for every domain. On the other hand, the performance of algorithms based on general-purpose sentiment lexicons needs improvement. This research is focused on building a general-purpose sentiment lexicon in a semi-supervised manner. The proposed lexicon defines word semantics based on an Expected Likelihood Estimate Smoothed Odds Ratio, which is then incorporated into a supervised machine learning based model selection approach. A comprehensive performance comparison verifies the superiority of our proposed approach.
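The abstract names the Expected Likelihood Estimate Smoothed Odds Ratio without giving the formula; the sketch below shows one common formulation (add-0.5 smoothing of the class-conditional probabilities before taking the log odds ratio), with hypothetical counts, and should be read as an assumption rather than the paper's exact definition.

```python
import math

def ele_smoothed_odds_ratio(pos_count, neg_count, pos_total, neg_total, smoothing=0.5):
    """Odds ratio of a word with Expected Likelihood Estimate smoothing.

    Adds 0.5 to each count (the classic ELE / Jeffreys-Perks correction) so
    that words unseen in one class still get a finite score.  A score above 0
    leans positive, below 0 leans negative.
    """
    p_pos = (pos_count + smoothing) / (pos_total + 2 * smoothing)
    p_neg = (neg_count + smoothing) / (neg_total + 2 * smoothing)
    odds_pos = p_pos / (1 - p_pos)
    odds_neg = p_neg / (1 - p_neg)
    return math.log(odds_pos / odds_neg)

# Hypothetical counts: "great" appears in 120 of 1000 positive documents
# and in 10 of 1000 negative documents.
print(ele_smoothed_odds_ratio(120, 10, 1000, 1000))
```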

7.
Concrete concepts are often easier to understand than abstract concepts. The notion of abstractness is thus closely tied to the organisation of our semantic memory, and more specifically our internal lexicon, which underlies our word sense disambiguation (WSD) mechanisms. State-of-the-art automatic WSD systems often draw on a variety of contextual cues and assign word senses by an optimal combination of statistical classifiers. The validity of various lexico-semantic resources as models of our internal lexicon and the cognitive aspects pertinent to the lexical sensitivity of WSD are seldom questioned. We attempt to address these issues by examining psychological evidence of the internal lexicon and its compatibility with the information available from computational lexicons. In particular, we compare the responses from a word association task against existing lexical resources, WordNet and SUMO, to explore the relation between sense abstractness and semantic activation, and thus the implications on semantic network models and the lexical sensitivity of WSD. Our results suggest that concrete senses are more readily activated than abstract senses, and broad associations are more easily triggered than narrow paradigmatic associations. The results are expected to inform the construction of lexico-semantic resources and WSD strategies.

8.
This article presents statistical language translation models, called dependency transduction models, based on collections of head transducers. Head transducers are middle-out finite-state transducers which translate a head word in a source string into its corresponding head in the target language, and further translate sequences of dependents of the source head into sequences of dependents of the target head. The models are intended to capture the lexical sensitivity of direct statistical translation models, while at the same time taking account of the hierarchical phrasal structure of language. Head transducers are suitable for direct recursive lexical translation, and are simple enough to be trained fully automatically. We present a method for fully automatic training of dependency transduction models for which the only input is transcribed and translated speech utterances. The method has been applied to create English–Spanish and English–Japanese translation models for speech translation applications. The dependency transduction model gives around 75% accuracy for an English–Spanish translation task (using a simple string edit-distance measure) and 70% for an English–Japanese translation task. Enhanced with target n-grams and a case-based component, English–Spanish accuracy is over 76%; for English–Japanese it is 73% for transcribed speech, and 60% for translation from recognition word lattices.
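Since accuracy is reported "using a simple string edit-distance measure", a small word-level edit-distance sketch may help make that measure concrete; the example sentences are invented and the accuracy convention (1 minus edits over reference length) is one common choice, not necessarily the authors'.

```python
def edit_distance(ref_tokens, hyp_tokens):
    """Word-level Levenshtein distance between a reference and a hypothesis."""
    m, n = len(ref_tokens), len(hyp_tokens)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref_tokens[i - 1] == hyp_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[m][n]

ref = "quiero una habitación doble".split()
hyp = "quiero una habitación".split()
print(1 - edit_distance(ref, hyp) / len(ref))   # edit-distance-based accuracy
```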

9.
The currently popular approach to Chinese word segmentation is machine learning based on statistical models. Statistical methods are usually trained on manually annotated sentence-level corpora, but they often ignore dictionary information accumulated over many years of manual annotation. This information is especially valuable in cross-domain settings, where sentence-level annotated resources for the target domain are scarce. How to exploit dictionary information fully and effectively in statistical models is therefore well worth studying. Recent work on this problem falls roughly into two categories according to how the dictionary information is incorporated: one incorporates dictionary features into character-based sequence labeling models, and the other incorporates such features into word-based beam-search models. This paper compares the two approaches and further combines them. Experiments show that after combining the two, dictionary information is exploited more fully, and better performance is achieved in both in-domain and cross-domain tests.
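A minimal sketch of the first family of methods mentioned above, dictionary features for a character-based sequence labeler: for each character, record the length of the longest dictionary word starting or ending there. The feature names, maximum word length, and mini-dictionary are assumptions for illustration only.

```python
def dictionary_features(sentence, dictionary, max_len=4):
    """Character-level dictionary features for sequence labeling.

    For each character position, record the length of the longest dictionary
    word that begins or ends at that position.  These values can be appended
    to the usual character n-gram features of a CRF-style tagger.
    """
    n = len(sentence)
    begin = [0] * n
    end = [0] * n
    for i in range(n):
        for L in range(1, max_len + 1):
            if i + L <= n and sentence[i:i + L] in dictionary:
                begin[i] = max(begin[i], L)
                end[i + L - 1] = max(end[i + L - 1], L)
    return [{"dict_begin": begin[i], "dict_end": end[i]} for i in range(n)]

# Hypothetical mini-dictionary.
print(dictionary_features("北京大学生", {"北京", "大学", "大学生", "北京大学"}))
```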

10.
On the dependence of handwritten word recognizers on lexicons (total citations: 1; self-citations: 0; citations by others: 1)
The performance of any word recognizer depends on the lexicon presented. Usually, large lexicons or lexicons containing similar entries pose difficulty for recognizers. However, the literature lacks any quantitative methodology for capturing the precise dependence between word recognizers and lexicons. This paper presents a performance model that views word recognition as a function of character recognition and statistically "discovers" the relation between a word recognizer and the lexicon. It uses model parameters that capture a recognizer's ability to distinguish characters (of the alphabet) and its sensitivity to lexicon size. These parameters are determined by a multiple regression model which is derived from the performance model. Such a model is very useful in comparing word recognizers by predicting their performance based on the lexicon presented. We demonstrate the performance model with extensive experiments on five different word recognizers, thousands of images, and tens of lexicons. The results show that the model is a good fit not only on the training data but also in predicting the recognizers' performance on testing data.
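The abstract does not list the regression's exact predictors, so the sketch below only illustrates the general idea of fitting such a performance model: accuracy regressed on log lexicon size and a character-confusability term, with invented data points that stand in for measured recognizer results.

```python
import numpy as np

# Hypothetical observations: (lexicon size, mean pairwise character
# confusability, measured word recognition accuracy).  The paper's actual
# predictors differ; this only shows fitting a regression performance model.
data = np.array([
    [10,    0.10, 0.96],
    [100,   0.12, 0.90],
    [1000,  0.15, 0.81],
    [10000, 0.20, 0.70],
])
X = np.column_stack([np.ones(len(data)), np.log(data[:, 0]), data[:, 1]])
y = data[:, 2]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_accuracy(lexicon_size, confusability):
    """Predict recognition accuracy for a new lexicon from the fitted model."""
    return coef @ np.array([1.0, np.log(lexicon_size), confusability])

print(predict_accuracy(5000, 0.18))
```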

11.
Research on acquiring multiword-unit translation dictionaries for EBMT systems (total citations: 2; self-citations: 0; citations by others: 2)
EBMT is a corpus-based machine translation approach whose main idea is to translate by analogy. How to extract a practical translation dictionary from a corpus to assist the system has attracted growing attention. This paper investigates acquiring a multiword-unit translation dictionary by combining threshold-based and association-based extraction. Of these two methods, threshold extraction is overly subjective and association-based extraction is too inefficient, so neither alone satisfies the requirements of translation dictionary extraction. The proposed algorithm uses a threshold to extract candidate multiword units, introducing four rules that weaken the subjective influence while ensuring full coverage of multiword units, thereby reducing the imprecision introduced by the threshold itself; the results are then passed through three layers of filtering, further improving precision. The algorithm also merges the extraction of two dictionaries, word-to-multiword-unit translations and multiword-unit-to-multiword-unit translations, improving efficiency.
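A hedged sketch of combining a frequency threshold with an association score for multiword-unit candidates; it uses pointwise mutual information over bigrams as a stand-in, not the paper's four rules or three-layer filtering, and the corpus is a toy example.

```python
import math
from collections import Counter

def multiword_candidates(tokens, min_freq=2, min_pmi=1.0):
    """Bigram multiword-unit candidates via a frequency threshold plus PMI.

    A bigram is kept only if it occurs at least min_freq times (the threshold
    step) and its pointwise mutual information exceeds min_pmi (the
    association step).
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)
    candidates = []
    for (w1, w2), c in bigrams.items():
        if c < min_freq:
            continue
        pmi = math.log((c / total) / ((unigrams[w1] / total) * (unigrams[w2] / total)))
        if pmi >= min_pmi:
            candidates.append(((w1, w2), c, round(pmi, 2)))
    return sorted(candidates, key=lambda x: -x[2])

tokens = "machine translation is fun and machine translation is useful".split()
print(multiword_candidates(tokens))
```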

12.
Statistical approaches in speech technology, whether used for statistical language models, trees, hidden Markov models or neural networks, represent the driving forces for the creation of language resources (LR), e.g., text corpora, pronunciation and morphology lexicons, and speech databases. This paper presents a system architecture for the rapid construction of morphologic and phonetic lexicons, two of the most important written language resources for the development of ASR (automatic speech recognition) and TTS (text-to-speech) systems. The presented architecture is modular and is particularly suitable for the development of written language resources for inflectional languages. In this paper an implementation is presented for the Slovenian language. The integrated graphic user interface focuses on the morphological and phonetic aspects of language and allows experts to produce good performances during analysis. In multilingual TTS systems, many extensive external written language resources are used, especially in the text processing part. It is very important, therefore, that representation of these resources is time and space efficient. It is also very important that language resources for new languages can be easily incorporated into the system, without modifying the common algorithms developed for multiple languages. In this regard the use of large external language resources (e.g., morphology and phonetic lexicons) represents an important problem because of the required space and slow look-up time. This paper presents a method and its results for compiling large lexicons, using examples for compiling German phonetic and morphology lexicons (CISLEX), and Slovenian phonetic (SIflex) and morphology (SImlex) lexicons, into corresponding finite-state transducers (FSTs). The German lexicons consisted of about 300,000 words, SIflex consisted of about 60,000 and SImlex of about 600,000 words (where 40,000 words were used for representation using finite-state transducers). Representation of large lexicons using finite-state transducers is mainly motivated by considerations of space and time efficiency. A great reduction in size and optimal access time was achieved for all lexicons. The starting size for the German phonetic lexicon was 12.53 MB and 18.49 MB for the morphology lexicon. The starting size for the Slovenian phonetic lexicon was 1.8 MB and 1.4 MB for the morphology lexicon. The final size of the corresponding FSTs was 2.78 MB for the German phonetic lexicon, 6.33 MB for the German morphology lexicon, 253 KB for SIflex and 662 KB for the SImlex lexicon. The achieved look-up time is optimal, since it only depends on the length of the input word and not on the size of the lexicon. Integration of lexicons for new languages into the multilingual TTS system is easy when using such representations and does not require any changes in the algorithms used for such lexicons.
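To make the access-time claim concrete, here is a minimal trie lookup whose cost depends only on the input word's length, not on the lexicon size; a real finite-state transducer additionally shares suffixes and carries output strings such as pronunciations, and the entries shown are hypothetical.

```python
class TrieLexicon:
    """Minimal trie standing in for an FST-compiled lexicon: look-up time
    depends only on the length of the input word, not on the lexicon size."""

    def __init__(self):
        self.root = {}

    def add(self, word, annotation):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["#"] = annotation          # "#" marks end of word plus its entry

    def lookup(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return None
            node = node[ch]
        return node.get("#")

lex = TrieLexicon()
lex.add("haus", "h aU s")               # hypothetical phonetic entries
lex.add("hausarzt", "h aU s a R ts t")
print(lex.lookup("haus"))
```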

13.
In this paper, we describe a first version of a system for statistical translation and present experimental results. The statistical translation approach uses two types of information: a translation model and a language model. The language model used is a standard bigram model. The translation model is decomposed into lexical and alignment models. After presenting the details of the alignment model, we describe the search problem and present a dynamic programming-based solution for the special case of monotone alignments. So far, the system has been tested on two limited-domain tasks for which a bilingual corpus is available: the EuTrans traveller task (Spanish–English, 500-word vocabulary) and the Verbmobil task (German–English, 3000-word vocabulary). We present experimental results on these tasks. In addition to the translation of text input, we also address the problem of speech translation and suitable integration of the acoustic recognition process and the translation process.
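A toy version of the monotone-alignment dynamic program described above, maximizing lexical translation log-probabilities under the constraint that alignment positions never decrease; it omits the bigram language model, and the lexical table and example sentences are invented.

```python
import math

def monotone_align(src, tgt, lex_prob):
    """Best monotone word alignment under a toy lexical translation model.

    Dynamic program over (target position j, source position i): choose
    a_1 <= a_2 <= ... <= a_J maximizing sum_j log p(tgt[j] | src[a_j]).
    lex_prob maps (target_word, source_word) to a probability.
    """
    J, I = len(tgt), len(src)
    best = [[float("-inf")] * I for _ in range(J)]
    back = [[0] * I for _ in range(J)]
    for j in range(J):
        for i in range(I):
            emit = math.log(lex_prob.get((tgt[j], src[i]), 1e-6))
            if j == 0:
                best[j][i] = emit
            else:
                prev_i = max(range(i + 1), key=lambda k: best[j - 1][k])
                best[j][i] = best[j - 1][prev_i] + emit
                back[j][i] = prev_i
    i = max(range(I), key=lambda k: best[J - 1][k])
    alignment = [i]
    for j in range(J - 1, 0, -1):
        i = back[j][i]
        alignment.append(i)
    return list(reversed(alignment))

lex = {("quiero", "I"): 0.4, ("quiero", "want"): 0.5,
       ("una", "a"): 0.9, ("habitación", "room"): 0.9}
# -> [1, 2, 3]: quiero->want, una->a, habitación->room
print(monotone_align(["I", "want", "a", "room"], ["quiero", "una", "habitación"], lex))
```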

14.
We propose a novel approach to cross-lingual language model and translation lexicon adaptation for statistical machine translation (SMT) based on bilingual latent semantic analysis. Bilingual LSA enables latent topic distributions to be efficiently transferred across languages by enforcing a one-to-one topic correspondence during training. Using the proposed bilingual LSA framework, model adaptation can be performed by, first, inferring the topic posterior distribution of the source text and then applying the inferred distribution to an n-gram language model of the target language and translation lexicon via marginal adaptation. The background phrase table is enhanced with the additional phrase scores computed using the adapted translation lexicon. The proposed framework also features rapid bootstrapping of LSA models for new languages based on a source LSA model of another language. Our approach is evaluated on the Chinese–English MT06 test set using the medium-scale SMT system and the GALE SMT system measured in BLEU and NIST scores. Improvement in both scores is observed on both systems when the adapted language model and the adapted translation lexicon are applied individually. When the adapted language model and the adapted translation lexicon are applied simultaneously, the gain is additive. At the 95% confidence interval of the unadapted baseline system, the gain in both scores is statistically significant using the medium-scale SMT system, while the gain in the NIST score is statistically significant using the GALE SMT system.
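For readers unfamiliar with marginal adaptation, one common form of it is sketched below; this is an assumption about the general technique, since the abstract does not give the paper's exact formulation.

```latex
% One common form of unigram marginal adaptation: the background n-gram
% probability is rescaled by the ratio of the adapted (topic-inferred)
% unigram marginal to the background unigram marginal, raised to a tuning
% power beta, and renormalized by Z(h) over the history h.
\[
p_{\text{adapt}}(w \mid h) \;=\;
\frac{1}{Z(h)}
\left( \frac{p_{\text{LSA}}(w)}{p_{\text{bg}}(w)} \right)^{\beta}
p_{\text{bg}}(w \mid h)
\]
```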

15.
We present a unified probabilistic framework for statistical language modeling which can simultaneously incorporate various aspects of natural language, such as local word interaction, syntactic structure and semantic document information. Our approach is based on a recent statistical inference principle we have proposed—the latent maximum entropy principle—which allows relationships over hidden features to be effectively captured in a unified model. Our work extends previous research on maximum entropy methods for language modeling, which only allow observed features to be modeled. The ability to conveniently incorporate hidden variables allows us to extend the expressiveness of language models while alleviating the necessity of pre-processing the data to obtain explicitly observed features. We describe efficient algorithms for marginalization, inference and normalization in our extended models. We then use these techniques to combine two standard forms of language models: local lexical models (Markov N-gram models) and global document-level semantic models (probabilistic latent semantic analysis). Our experimental results on the Wall Street Journal corpus show that we obtain an 18.5% reduction in perplexity compared to the baseline tri-gram model with Good-Turing smoothing. Editors: Dan Roth and Pascale Fung

16.
Use of lexicon density in evaluating word recognizers (total citations: 1; self-citations: 0; citations by others: 1)
We have developed the notion of lexicon density as a metric to measure the expected accuracy of handwritten word recognizers. Thus far, researchers have used the size of the lexicon as a gauge for the difficulty of the handwritten word recognition task. For example, the literature mentions recognizers with accuracies for lexicons of sizes 10, 100, 1000, and so forth, implying that the difficulty of the task increases (and hence recognition accuracy decreases) with increasing lexicon size across recognizers. Lexicon density is an alternate measure which is quite dependent on the recognizer. There are many applications, such as address interpretation, where such a recognizer-dependent measure can be useful. We have conducted experiments with two different types of recognizers. A segmentation-based and a grapheme-based recognizer have been selected to show how the measure of lexicon density can be developed in general for any recognizer. Experimental results show that the lexicon density measure described is more suitable than lexicon size or a simple string edit distance.
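The precise density definition is recognizer-dependent, so the sketch below uses a generic character-similarity stand-in (difflib's ratio) to show why two lexicons of equal size can differ sharply in difficulty; it illustrates the concept only and is not the paper's metric.

```python
from difflib import SequenceMatcher

def lexicon_density(lexicon):
    """Crude lexicon-density proxy: mean pairwise similarity of entries.

    Denser lexicons (more mutually confusable entries) score closer to 1.
    """
    pairs = [(a, b) for i, a in enumerate(lexicon) for b in lexicon[i + 1:]]
    if not pairs:
        return 0.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Same size, very different difficulty for a word recognizer.
print(lexicon_density(["clear", "clean", "clears"]))      # dense, confusable
print(lexicon_density(["zebra", "mountain", "quickly"]))  # sparse
```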

17.
Despite several decades of research in document analysis, recognition of unconstrained handwritten documents is still considered a challenging task. Previous research in this area has shown that word recognizers perform adequately on constrained handwritten documents which typically use a restricted vocabulary (lexicon). But in the case of unconstrained handwritten documents, state-of-the-art word recognition accuracy is still below the acceptable limits. The objective of this research is to improve word recognition accuracy on unconstrained handwritten documents by applying a post-processing or OCR correction technique to the word recognition output. In this paper, we present two different methods for this purpose. First, we describe a lexicon reduction-based method by topic categorization of handwritten documents which is used to generate smaller topic-specific lexicons for improving the recognition accuracy. Second, we describe a method which uses topic-specific language models and a maximum-entropy based topic categorization model to refine the recognition output. We present the relative merits of each of these methods and report results on the publicly available IAM database.

18.
Automatic extraction of bilingual terminology dictionaries from parallel corpora (total citations: 7; self-citations: 5; citations by others: 2)
This paper proposes an algorithm for automatically extracting a terminology dictionary from an English-Chinese parallel corpus. First, an improved character-length-based statistical method aligns the parallel corpus at the sentence level; the English text is POS-tagged, and the Chinese text is segmented and POS-tagged. Nouns and noun phrases in the aligned and tagged bilingual corpus are collected to form the candidate term set. Then, for each English candidate term, the translation probabilities of its associated Chinese translations are computed. Finally, Chinese translations are selected using a threshold that varies with word frequency. Good results were obtained in term extraction experiments on real corpora.
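A small sketch of the final selection step, where a Chinese translation is kept if its translation probability clears a threshold that varies with the English term's corpus frequency; the threshold schedule, probabilities, and terms below are assumptions, since the abstract does not give the actual values.

```python
def select_translations(candidates, en_freq, base_threshold=0.3):
    """Select Chinese translations whose probability clears a frequency-dependent threshold.

    candidates: dict mapping a Chinese term to P(zh | en) for one English term.
    en_freq:    corpus frequency of the English term; rarer terms get a more
                permissive threshold (a hypothetical schedule).
    """
    threshold = base_threshold if en_freq >= 10 else base_threshold / 2
    return [zh for zh, p in sorted(candidates.items(), key=lambda kv: -kv[1])
            if p >= threshold]

# Hypothetical candidate translations of "neural network".
print(select_translations({"神经网络": 0.62, "神经": 0.21, "网络": 0.12}, en_freq=25))
```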

19.
Sentiment lexicons and word embeddings constitute well-established sources of information for sentiment analysis in online social media. Although their effectiveness has been demonstrated in state-of-the-art sentiment analysis and related tasks in the English language, such publicly available resources are much less developed and evaluated for the Greek language. In this paper, we tackle the problems arising when analyzing text in such an under-resourced language. We present and make publicly available a rich set of such resources, ranging from a manually annotated lexicon, to semi-supervised word embedding vectors and annotated datasets for different tasks. Our experiments using different algorithms and parameters on our resources show promising results over standard baselines; on average, we achieve a 24.9% relative improvement in F-score on the cross-domain sentiment analysis task when training the same algorithms with our resources, compared to training them on more traditional feature sources, such as n-grams. Importantly, while our resources were built with the primary focus on the cross-domain sentiment analysis task, they also show promising results in related tasks, such as emotion analysis and sarcasm detection.

20.