Similar Documents
20 similar documents retrieved.
1.
This work, divided into Parts I and II, describes the development of GorUP, a Semantic Speech Recognition System in the Basque context. Part I analyses cross-lingual approaches oriented to under-resourced languages, and Part II the development of the Language Identification system. During development, data optimization methods and Soft Computing methodologies suited to complex environments are used to overcome the lack of resources. Moreover, three languages coexist in this context: French, Spanish and Basque. Our main goal is the development of robust Automatic Speech Recognition (ASR) systems for Basque, but all of this language variability has to be analyzed. In particular, Basque speakers mix not only sounds but also words of the three languages during speech, which results in a strong presence of cross-lingual elements. Besides, Basque is an agglutinative language with a special morpho-syntactic structure inside words that may lead to intractable vocabularies. Our work is currently oriented to Information Retrieval, mainly for small internet mass media. In these cases the available resources for Basque in general, and for this task in particular, are very scarce and complex to process because of the noisy environment. Thus, the methods employed in this development (an ontology-based approach and cross-lingual methodologies that profit from better-resourced languages) could suit the requirements of many under-resourced languages.

2.
Current research on sentence summarization mainly targets the monolingual setting, in which the source sentence and the target summary phrase belong to the same language; this severely limits fast access to textual information across languages. To address this problem, a cross-lingual sentence summarization system is proposed. Borrowing the idea of back-translation, the source side of a monolingual sentence-summarization parallel corpus is translated into another language with a neural machine translation system and combined with the target-side summary phrases of the corpus, forming a cross-lingual pseudo-parallel corpus. On this basis, a contrastive attention mechanism is used to capture the information in the source sequence that is irrelevant to the target, addressing the source-target length mismatch that affects conventional attention mechanisms. Experimental results show that, compared with a pipeline-based monolingual sentence summarization system, the summaries generated by the cross-lingual system are more fluent and closer to human phrasing, approaching the quality of monolingual sentence summarization.
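A minimal sketch of the pseudo-parallel corpus construction step, assuming an existing NMT system is available behind a `translate_fn` callable (an assumption here); the summaries stay in the original language while the source side is machine-translated:

```python
# Sketch of pseudo-parallel corpus construction via back-translation of the
# source side. `translate_fn` stands in for a real NMT system (assumption).

def build_pseudo_parallel(mono_pairs, translate_fn):
    """mono_pairs: list of (source_sentence, summary) in the same language.
    Returns (translated_source, original_summary) pairs, i.e. a cross-lingual
    pseudo-parallel corpus."""
    return [(translate_fn(src), summ) for src, summ in mono_pairs]

if __name__ == "__main__":
    # Dummy "translator" for illustration only.
    fake_translate = lambda s: "[ZH] " + s
    corpus = [("the cat sat on the mat", "cat sits"),
              ("stocks fell sharply today", "stocks fall")]
    for pair in build_pseudo_parallel(corpus, fake_translate):
        print(pair)
```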

3.
向露  朱军楠  周玉  宗成庆 《自动化学报》2021,47(8):1855-1866
Cross-lingual dialogue systems are currently a hot and challenging topic in international research. When building practical systems, a translation engine is usually needed as the bridge between the languages of the dialogue. However, translation engines are built from different training samples, and both their domains and the linguistic characteristics they handle well differ considerably from the actual needs of the dialogue system, which leads to poor robustness and low response performance of the overall system. Enhancing the robustness of cross-lingual dialogue systems is therefore important for their practicality. This paper proposes a method for building robust cross-lingual dialogue systems based on multi-granularity adversarial training. The method first constructs multi-granularity noisy data for machine translation, generating adversarial samples at the word, phrase, and sentence levels; it then performs adversarial training on the noisy and clean data to update the dialogue system's parameters, guiding the system to learn noise-independent hidden representations and ultimately improving cross-lingual dialogue performance. Experiments on two languages over public dialogue datasets show that the proposed method significantly improves the performance, and especially the robustness, of cross-lingual dialogue systems.
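A rough illustration of what multi-granularity noise generation could look like; the word-, phrase-, and sentence-level perturbations below are simple string edits chosen for illustration, not the paper's adversarial sample construction:

```python
import random

# Three granularities of noise: word-level (random drop), phrase-level
# (reverse a short span), sentence-level (shuffle clause order).
# These are generic perturbations, not model-targeted adversarial samples.

def word_noise(tokens, p=0.1):
    return [t for t in tokens if random.random() > p] or tokens

def phrase_noise(tokens, span=3):
    if len(tokens) <= span:
        return tokens
    i = random.randrange(len(tokens) - span)
    return tokens[:i] + tokens[i:i + span][::-1] + tokens[i + span:]

def sentence_noise(sentence):
    clauses = sentence.split(",")
    random.shuffle(clauses)
    return ",".join(clauses)

sent = "please book a table for two, near the window, at seven"
tokens = sent.split()
print(" ".join(word_noise(tokens)))
print(" ".join(phrase_noise(tokens)))
print(sentence_noise(sent))
```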

4.
VerbNet—the most extensive online verb lexicon currently available for English—has proved useful in supporting a variety of NLP tasks. However, its exploitation in multilingual NLP has been limited by the fact that such classifications are available for few languages only. Since manual development of VerbNet is a major undertaking, researchers have recently translated VerbNet classes from English to other languages. However, no systematic investigation has been conducted into the applicability and accuracy of such a translation approach across different, typologically diverse languages. Our study is aimed at filling this gap. We develop a systematic method for translation of VerbNet classes from English to other languages, which we first apply to Polish and subsequently to Croatian, Mandarin, Japanese, Italian, and Finnish. Our results on Polish demonstrate high translatability with all the classes (96% of English member verbs successfully translated into Polish) and strong inter-annotator agreement, revealing a promising degree of overlap in the resultant classifications. The results on other languages are equally promising. This demonstrates that VerbNet classes have strong cross-lingual potential and the proposed method could be applied to obtain gold standards for automatic verb classification in different languages. We make our annotation guidelines and the six language-specific verb classifications available with this paper.

5.
Unsupervised neural machine translation uses only large amounts of monolingual data and can train a model without any parallel data, but it is difficult to establish a connection between two languages from distant language families. To address this problem, a new neural machine translation training method that uses no parallel sentence pairs is proposed: a bilingual dictionary is used to replace words in the monolingual data, establishing a connection between the two languages, while two techniques, word-embedding fusion initialization and dual-encoder fusion training, are used to strengthen the alignment of the two languages in a shared semantic space and improve the performance of the machine translation system. Experiments show that the proposed method improves BLEU over the baseline unsupervised translation system by 2.39 and 1.29 points on Chinese-English and English-Chinese, respectively, and also yields clear improvements in monolingual-data experiments such as English-Russian and English-Arabic.
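A small sketch of the dictionary-based replacement step, with a toy Chinese-English lexicon standing in for a real bilingual dictionary:

```python
import random

# Words of a monolingual sentence that appear in a bilingual dictionary are
# swapped for their translations, creating a rough code-switched signal that
# links the two languages. The two-entry lexicon is illustrative only.

zh_en = {"我": "I", "喜欢": "like", "苹果": "apples"}

def dict_replace(tokens, lexicon, rate=1.0):
    return [lexicon.get(t, t) if random.random() < rate else t for t in tokens]

print(dict_replace(["我", "喜欢", "吃", "苹果"], zh_en))
# ['I', 'like', '吃', 'apples']
```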

6.
Cross-lingual short-text sentiment analysis, an important task in natural language processing, has attracted much attention in recent years. Cross-lingual sentiment analysis can exploit annotated data in a resource-rich source language to analyse sentiment in a resource-poor target language, and establishing the connection between the languages is the core of the task. Compared with the traditional approach of building this connection through machine translation, transfer learning performs better, and high-quality cross-lingual text vectors further improve the transfer. This paper proposes an LAAE network…

7.
With the growing demand for multilingual information on the Internet, cross-lingual word embeddings have become an important basic tool and have been successfully applied in machine translation, information retrieval, text sentiment analysis, and other areas of natural language processing. Cross-lingual word embeddings are a natural extension of monolingual word embeddings: by mapping different languages into a shared low-dimensional vector space, cross-lingual word representations transfer knowledge between languages and capture word meaning accurately in a multilingual setting. Research on cross-lingual word embedding models has been prolific in recent years, and many methods for learning them have been proposed. Based on a review of the existing literature, this paper surveys the recent development of cross-lingual word embedding models, methods, and techniques. According to how the embeddings are trained, the methods are divided into supervised, unsupervised, and semi-supervised approaches, and the principles and representative studies of each class are summarized and compared in detail. Finally, the evaluation and applications of cross-lingual word embeddings are outlined, and the remaining challenges and future directions are analysed.
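As one concrete example of the supervised family the survey covers, the classic orthogonal-mapping (Procrustes) approach can be sketched in a few lines; the random matrices below stand in for real monolingual embeddings and a seed dictionary:

```python
import numpy as np

# Given embeddings X (source) and Y (target) for seed dictionary pairs, the
# orthogonal map W minimising ||XW - Y|| has the closed-form Procrustes
# solution W = U V^T, where U S V^T = SVD(X^T Y). Toy data only.

rng = np.random.default_rng(0)
d, n = 50, 200                       # embedding dim, number of seed pairs
X = rng.normal(size=(n, d))          # source-language seed embeddings
true_W, _ = np.linalg.qr(rng.normal(size=(d, d)))
Y = X @ true_W                       # target-language seed embeddings

U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt                           # learned orthogonal mapping
print(np.allclose(X @ W, Y, atol=1e-6))   # True: the mapping is recovered
```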

8.
In this paper we present results of unsupervised cross-lingual speaker adaptation applied to text-to-speech synthesis. The application of our research is the personalisation of speech-to-speech translation, in which we employ an HMM statistical framework for both speech recognition and synthesis. This framework provides a logical mechanism to adapt synthesised speech output to the voice of the user by way of speech recognition. In this work we present results of several different unsupervised and cross-lingual adaptation approaches as well as an end-to-end speaker-adaptive speech-to-speech translation system. Our experiments show that we can successfully apply speaker adaptation in both unsupervised and cross-lingual scenarios, and our proposed algorithms seem to generalise well across several language pairs. We also discuss important future directions, including the need for better evaluation metrics.

9.
To address the poor alignment of traditional cross-lingual word embedding methods on distant low-resource language pairs such as Chinese-Vietnamese, a Chinese-Vietnamese cross-lingual word embedding method that incorporates word-cluster alignment constraints is proposed. Monolingual Chinese and Vietnamese embeddings are first trained on independent monolingual corpora. Three types of relation (near-synonyms, words of the same category, and words of the same topic) are then used to fully mine word-cluster alignment information from the bilingual dictionary and incorporate it into the training of the mapping matrix, so that the matrix further learns the common features and mapping relations shared by related words across the two languages. The monolingual embeddings are mapped into a single shared space according to the cross-lingual mapping, so that Chinese and Vietnamese words with the same meaning lie close to each other, and cosine similarity is used to find the Vietnamese translation of each unannotated Chinese word, building aligned Chinese-Vietnamese word pairs and realizing cross-lingual word embedding. Experimental results show that, compared with the traditional supervised and unsupervised cross-lingual word embedding methods Multi_w2v, Orthogonal, VecMap, and Muse, the method effectively improves the generalization of the mapping matrix on unannotated words and alleviates the poor alignment in the low-resource Chinese-Vietnamese scenario; its alignment accuracy on the Chinese-Vietnamese bilingual lexicon induction task improves P@1 and P@5 by 2.2 percentage points over the best baseline.
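The cosine-similarity lexicon-induction step can be sketched as nearest-neighbour retrieval in the shared space; the Vietnamese vocabulary and vectors below are toy placeholders:

```python
import numpy as np

# After both languages are mapped into a shared space, each unannotated
# Chinese word is paired with the Vietnamese word whose embedding has the
# highest cosine similarity. Vectors here are random stand-ins.

def cosine_topk(query, candidates, k=5):
    q = query / np.linalg.norm(query)
    C = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = C @ q
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

rng = np.random.default_rng(1)
vi_vocab = ["mèo", "chó", "nhà", "trường"]
vi_vecs = rng.normal(size=(len(vi_vocab), 8))
zh_vec = vi_vecs[0] + 0.01 * rng.normal(size=8)   # pretend "猫" maps near "mèo"

idx, sims = cosine_topk(zh_vec, vi_vecs, k=2)
print([(vi_vocab[i], round(float(s), 3)) for i in idx])
```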

10.
This article describes experiments on speech segmentation using long short-term memory recurrent neural networks. The main part of the paper deals with multi-lingual and cross-lingual segmentation, that is, it is performed on a language different from the one on which the model was trained. The experimental data involves large Czech, English, German, and Russian speech corpora designated for speech synthesis. For optimal multi-lingual modeling, a compact phonetic alphabet was proposed by sharing and clustering phones of particular languages. Many experiments were performed exploring various experimental conditions and data combinations. We proposed a simple procedure that iteratively adapts the inaccurate default model to the new voice/language. The segmentation accuracy was evaluated by comparison with reference segmentation created by a well-tuned hidden Markov model-based framework with additional manual corrections. The resulting segmentation was also employed in a unit selection text-to-speech system. The generated speech quality was compared with the reference segmentation by a preference listening test.

11.
Speech recognition models require large amounts of annotated speech data for training. For Tibetan, a minority language, speech annotation experts are extremely scarce, and manually annotating speech data is very time-consuming and laborious. Active learning, however, can select valuable samples from a large pool of unannotated speech data according to the speech recognition objective and hand them to users for annotation, so that a small amount of high-quality training data can be used to build a recognition model as accurate as one trained on a large dataset. This paper studies active-learning-based selection of Lhasa Tibetan speech data, proposes a near-optimal batch sample selection objective function, and verifies that it is submodular. Experiments show that the method maintains the accuracy of the speech recognition model with less training data, thereby reducing the manual annotation workload.
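The abstract does not reproduce its objective function, but a standard submodular choice (facility location) with greedy selection illustrates the general recipe; the utterance features below are random stand-ins:

```python
import numpy as np

# Greedy batch selection under the facility-location objective
# f(S) = sum_i max_{j in S} sim(i, j), which is submodular for nonnegative
# similarities, so greedy selection gives a (1 - 1/e) approximation.
# This is a generic illustration, not the paper's exact objective.

def greedy_select(sim, budget):
    n = sim.shape[0]
    selected, best_cover = [], np.zeros(n)
    for _ in range(budget):
        gains = []
        for j in range(n):
            if j in selected:
                gains.append(-1.0)
            else:
                gains.append(np.maximum(best_cover, sim[:, j]).sum() - best_cover.sum())
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected

rng = np.random.default_rng(2)
feats = rng.normal(size=(30, 16))                       # utterance features
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
sim = np.clip(feats @ feats.T, 0, None)                 # nonnegative similarity
print(greedy_select(sim, budget=5))
```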

12.
Relying on large-scale parallel corpora, neural machine translation has achieved great success on certain language pairs. Unsupervised neural machine translation (UNMT) in turn alleviates, to some extent, the difficulty of obtaining high-quality parallel corpora. Recent studies show that cross-lingual language model pretraining can significantly improve UNMT performance: it uses large monolingual corpora to model deep contextual information in a cross-lingual setting and achieves remarkable results. This paper further explores UNMT based on cross-lingual pretraining and proposes several methods to improve model training. To address the imbalanced quality of UNMT model parameter initialization after pretraining, two methods are proposed: secondary pretraining of the language model, and using the self-attention layers of the pretrained model to optimize the context-attention layers of the UNMT model. In addition, to address the lack of guidance for the back-translation procedure in UNMT, the Teacher-Student framework is introduced into the UNMT task. Experimental results show that, compared with baseline systems on different language pairs, the proposed methods achieve BLEU improvements of up to 0.8 to 2.08 points.
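The Teacher-Student component can be illustrated at the loss level: the student is pulled towards the teacher's softened output distribution in addition to the reference labels. This generic distillation loss is an illustration under that assumption, not the paper's exact formulation:

```python
import numpy as np

# Generic knowledge-distillation loss: soft cross-entropy against the
# teacher's temperature-softened distribution plus hard cross-entropy
# against reference labels. Toy logits stand in for real model outputs.

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kd = -(p_teacher * np.log(p_student + 1e-9)).sum(axis=-1).mean()    # soft CE
    p_hard = softmax(student_logits)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-9).mean()  # hard CE
    return alpha * kd + (1 - alpha) * ce

rng = np.random.default_rng(4)
s_logits = rng.normal(size=(3, 5))   # student logits over a 5-word toy vocab
t_logits = rng.normal(size=(3, 5))   # teacher logits
labels = np.array([0, 2, 4])         # reference target words
print(round(float(distill_loss(s_logits, t_logits, labels)), 4))
```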

13.
This paper suggests a method for generating the rules of cross-lingual transliteration. The method is based on the analysis of a parallel corpus of proper names. It is language-independent and does not require large training corpora. The results of experiments on different languages are provided.

14.
Speech translation is a technology that helps people communicate across different languages. The most commonly used speech translation model is composed of automatic speech recognition, machine translation and text-to-speech synthesis components, which share information only at the text level. However, spoken communication is different from written communication in that it uses rich acoustic cues such as prosody in order to transmit more information through non-verbal channels. This paper is concerned with speech-to-speech translation that is sensitive to this paralinguistic information. Our long-term goal is to make a system that allows users to speak a foreign language with the same expressiveness as if they were speaking in their own language. Our method works by reconstructing input acoustic features in the target language. From the many different possible paralinguistic features to handle, in this paper we choose duration and power as a first step, proposing a method that can translate these features from input speech to the output speech in continuous space. This is done in a simple and language-independent fashion by training an end-to-end model that maps source-language duration and power information into the target language. Two approaches are investigated: linear regression and neural network models. We evaluate the proposed methods and show that paralinguistic information in the input speech of the source language can be reflected in the output speech of the target language.
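The linear-regression variant reduces to fitting a small matrix that maps source-side duration and power features to target-side ones; the toy data and feature layout below are assumptions for illustration:

```python
import numpy as np

# Ordinary least squares mapping of source [duration, power] features to
# target [duration, power] for aligned words. Synthetic data only.

rng = np.random.default_rng(3)
n = 500                                   # aligned word pairs
X = rng.normal(size=(n, 2))               # source [duration, power], normalised
A_true = np.array([[0.9, 0.1], [0.05, 1.1]])
Y = X @ A_true + 0.01 * rng.normal(size=(n, 2))   # target [duration, power]

A, *_ = np.linalg.lstsq(X, Y, rcond=None)         # learned 2x2 mapping
new_word = np.array([[1.2, -0.3]])                # emphasised source word
print(new_word @ A)                               # predicted target duration/power
```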

15.
Recent research on English word sense subjectivity has shown that the subjective aspect of an entity is a characteristic that is better delineated at the sense level, instead of the traditional word level. In this paper, we seek to explore whether senses aligned across languages exhibit this trait consistently, and if this is the case, we investigate how this property can be leveraged in an automatic fashion. We first conduct a manual annotation study to gauge whether the subjectivity trait of a sense can be robustly transferred across language boundaries. An automatic framework is then introduced that is able to predict subjectivity labeling for unseen senses using either cross-lingual or multilingual training enhanced with bootstrapping. We show that the multilingual model consistently outperforms the cross-lingual one, with an accuracy of over 73% across all iterations.

16.
The development of high-performance statistical machine translation (SMT) systems is contingent on the availability of substantial, in-domain parallel training corpora. The latter, however, are expensive to produce due to the labor-intensive nature of manual translation. We propose to alleviate this problem with a novel, semi-supervised, batch-mode active learning strategy that attempts to maximize in-domain coverage by selecting sentences that represent a balance between domain match, translation difficulty, and batch diversity. Simulation experiments on an English-to-Pashto translation task show that the proposed strategy not only outperforms the random selection baseline, but also traditional active selection techniques based on dissimilarity to existing training data.
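A hedged sketch of this kind of batch scoring, combining domain match, estimated difficulty, and diversity with placeholder component scores (the paper's actual estimators are not reproduced here):

```python
# Greedy batch construction: each candidate gets a weighted combination of
# domain match, estimated translation difficulty, and dissimilarity to the
# sentences already selected. All three component scores are toy stand-ins.

def select_batch(candidates, domain_score, difficulty, similarity, k, w=(1.0, 1.0, 1.0)):
    batch = []
    for _ in range(k):
        def score(c):
            diversity = 1.0 - max((similarity(c, b) for b in batch), default=0.0)
            return w[0] * domain_score(c) + w[1] * difficulty(c) + w[2] * diversity
        best = max((c for c in candidates if c not in batch), key=score)
        batch.append(best)
    return batch

# Toy usage: in-domain vocabulary overlap as domain match, length as difficulty,
# Jaccard token overlap as similarity.
in_domain = {"border", "checkpoint", "convoy"}
cands = ["the convoy reached the border", "i like tea", "open the checkpoint gate"]
jac = lambda a, b: len(set(a.split()) & set(b.split())) / len(set(a.split()) | set(b.split()))
print(select_batch(cands,
                   domain_score=lambda s: len(set(s.split()) & in_domain),
                   difficulty=lambda s: len(s.split()) / 10,
                   similarity=jac, k=2))
```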

17.
The availability of machine-readable bilingual linguistic resources is crucial not only for rule-based machine translation but also for other applications such as cross-lingual information retrieval. However, building such resources (bilingual single-word and multi-word correspondences, translation rules) demands extensive manual work, and, as a consequence, bilingual resources are usually harder to find than “shallow” monolingual resources such as morphological dictionaries or part-of-speech taggers, especially when a less-resourced language is involved. This paper describes a methodology to automatically build both bilingual dictionaries and shallow-transfer rules by extracting knowledge from word-aligned parallel corpora processed with shallow monolingual resources (morphological analysers and part-of-speech taggers). We present experiments for Brazilian Portuguese–Spanish and Brazilian Portuguese–English parallel texts. The results show that the proposed methodology can enable the rapid creation of valuable computational resources (bilingual dictionaries and shallow-transfer rules) for machine translation and other natural language processing tasks.

18.
In conference settings, speech recognition and machine translation are used to translate a speaker's speech into text in another language, which is of great importance for cross-lingual information exchange and has become a current research hotspot. To handle the professional terminology and industry-specific expressions brought by the conference domain, this paper proposes a domain personalization method that incorporates external dictionary knowledge. Specifically, it first adopts an encoding strategy that combines placeholders with concatenation fusion: by introducing external dictionary knowledge, it improves the translation accuracy of entities and technical terms while preserving the fluency of the translation. Second, a classification-based strategy for personalized adaptation of domain-branch parameters is proposed, improving translation quality in conference-related domains while maintaining general-domain translation quality. Finally, based on the above, an automatic domain-personalized training system is designed. Experimental results on Chinese-English sports, business, and medical conference translation tasks show that the system improves BLEU by 9.22 points on average without affecting general translation quality.
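The placeholder part of the encoding strategy can be sketched as masking glossary terms before translation and restoring their target-side entries afterwards; the two-entry glossary and the fake MT output are purely illustrative:

```python
# Mask dictionary terms with indexed placeholder tokens before translation,
# then restore the dictionary's target-side entries in the MT output.
# The glossary and the fake MT output below are assumptions for illustration.

glossary = {"越位": "offside", "点球": "penalty kick"}

def mask_terms(text):
    mapping = {}
    for i, (src, tgt) in enumerate(glossary.items()):
        ph = f"<T{i}>"
        if src in text:
            text = text.replace(src, ph)
            mapping[ph] = tgt
    return text, mapping

def unmask(translation, mapping):
    for ph, tgt in mapping.items():
        translation = translation.replace(ph, tgt)
    return translation

masked, mapping = mask_terms("裁判判罚点球，因为前锋越位在先")
print(masked)                       # placeholders stand in for the two terms
fake_mt_output = "The referee awarded a <T1> because the striker was <T0>"
print(unmask(fake_mt_output, mapping))
```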

19.
Translated or cross-lingual plagiarism is defined as the translation of someone else’s work or words without marking it as such or giving credit to the original author. Cross-lingual plagiarism is not new, but only in recent years, with the rapid development of natural language processing, have the first algorithms appeared that tackle the difficult task of detecting it. Most of these algorithms utilize machine translation to compare texts written in different languages. We propose a different method, which can effectively detect translations between language pairs for which machine translation still produces low-quality results. The new algorithm presented in this paper is based on information retrieval (IR) and a dictionary-based similarity metric. Preprocessing of the candidate documents for IR is computationally intensive but easily parallelizable, and we propose a desktop Grid solution for this task. As the application is time-sensitive and desktop Grid peers are unreliable, a resubmission mechanism ensures that all jobs of a batch finish within a reasonable time without dramatically increasing the load on the whole system.
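One possible form of a dictionary-based similarity metric, sketched with a tiny Hungarian-English lexicon as a placeholder (the paper's actual metric may differ):

```python
# Each word of the suspicious sentence is expanded to its possible translations
# from a bilingual dictionary; the score is the fraction of candidate-document
# words covered by that expansion. The two-sentence example is illustrative.

bilingual = {"macska": {"cat"}, "ül": {"sit", "sits"}, "szőnyeg": {"mat", "rug"}}

def dict_similarity(suspicious_tokens, candidate_tokens):
    expansion = set()
    for tok in suspicious_tokens:
        expansion |= bilingual.get(tok, {tok})
    cand = set(candidate_tokens)
    return len(cand & expansion) / len(cand) if cand else 0.0

print(dict_similarity(["macska", "ül", "szőnyeg"],
                      ["the", "cat", "sits", "on", "the", "mat"]))
# 0.6: "cat", "sits" and "mat" are covered; "the" and "on" are not.
```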

20.
The paper describes our recent developments in the automatic extraction of translation equivalents from parallel corpora. We describe three increasingly complex algorithms: a simple baseline iterative method and two more elaborate non-iterative versions. While the baseline algorithm is described mainly for illustrative purposes, the non-iterative algorithms outline the use of different working hypotheses, which may be motivated by different kinds of applications and, to some extent, by the languages concerned. The first two algorithms rely on cross-lingual POS preservation, while for the third one POS invariance is not an extraction condition. The evaluation of the algorithms was conducted on three different corpora and several pairs of languages.
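A simple iterative extraction loop in the spirit of the baseline method (not the authors' exact algorithm): score co-occurring word pairs with the Dice coefficient, accept the best pair, remove those words, and repeat:

```python
from collections import Counter
from itertools import product

# Generic iterative extraction of translation equivalents from aligned
# sentence pairs using the Dice association measure. Illustration only.

def dice_scores(bitext):
    pair_c, src_c, tgt_c = Counter(), Counter(), Counter()
    for src, tgt in bitext:
        for s, t in product(set(src), set(tgt)):
            pair_c[(s, t)] += 1
        src_c.update(set(src))
        tgt_c.update(set(tgt))
    return {(s, t): 2 * c / (src_c[s] + tgt_c[t]) for (s, t), c in pair_c.items()}

def extract_equivalents(bitext, min_score=0.6):
    pairs = []
    while True:
        scores = dice_scores(bitext)
        if not scores:
            break
        (s, t), best = max(scores.items(), key=lambda kv: kv[1])
        if best < min_score:
            break
        pairs.append((s, t))
        bitext = [([w for w in a if w != s], [w for w in b if w != t])
                  for a, b in bitext]
    return pairs

bitext = [(["la", "casa", "roja"], ["the", "red", "house"]),
          (["la", "mesa"], ["the", "table"]),
          (["casa", "blanca"], ["white", "house"])]
print(sorted(extract_equivalents(bitext)))
```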

