Similar documents
20 similar documents found (search time: 31 ms)
1.
In this paper we present a speech-to-speech (S2S) translation system called the BBN TransTalk that enables two-way communication between speakers of English and speakers who do not understand or speak English. The BBN TransTalk has been configured for several languages including Iraqi Arabic, Pashto, Dari, Farsi, Malay, Indonesian, and Levantine Arabic. We describe the key components of our system: automatic speech recognition (ASR), machine translation (MT), text-to-speech (TTS), dialog manager, and the user interface (UI). In addition, we present novel techniques for overcoming specific challenges in developing high-performing S2S systems. For ASR, we present techniques for dealing with lack of pronunciation and linguistic resources and effective modeling of ambiguity in pronunciations of words in these languages. For MT, we describe techniques for dealing with data sparsity as well as modeling context. We also present and compare different user confirmation techniques for detecting errors that can cause the dialog to drift or stall.

2.
This paper describes the preparation, recording, analysis, and evaluation of a new speech corpus for Modern Standard Arabic (MSA). The speech corpus contains a total of 415 sentences recorded by 40 (20 male and 20 female) Arabic native speakers from 11 different Arab countries representing three major regions (Levant, Gulf, and Africa). Three hundred and sixty-seven sentences are considered phonetically rich and balanced and are used for training Arabic Automatic Speech Recognition (ASR) systems. Rich means that the set contains all phonemes of the Arabic language, whereas balanced means that it preserves the phonetic distribution of Arabic. The remaining 48 sentences are created for testing purposes; they are mostly foreign to the training sentences, with hardly any words in common. To evaluate the speech corpus, Arabic ASR systems were developed using the Carnegie Mellon University (CMU) Sphinx 3 tools at both the training and testing/decoding levels. The speech engine uses 3-emitting-state Hidden Markov Models (HMM) for tri-phone based acoustic models. Based on experimental analysis of about 8 h of training speech data, the best acoustic model uses a continuous observation probability model with 16 Gaussian mixture distributions, with the state distributions tied to 500 senones. The language model contains uni-grams, bi-grams, and tri-grams. For the same speakers with different sentences, the Arabic ASR systems obtained an average Word Error Rate (WER) of 9.70%; for different speakers with the same sentences, an average WER of 4.58%; and for different speakers with different sentences, an average WER of 12.39%.
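The WER figures reported above follow the standard definition: the word-level edit distance (substitutions, deletions, insertions) between hypothesis and reference, divided by the reference length. A minimal sketch of that metric, not the authors' Sphinx-based evaluation code:

```python
# Word Error Rate (WER) via Levenshtein distance over word tokens.
# Illustrative sketch of the standard metric only.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat", "the cat sit"))  # one substitution over 3 words
```

For example, one substitution in a three-word reference yields a WER of 1/3, and one insertion against a two-word reference yields 0.5.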

3.

Question answering is a subfield of information retrieval. It is the task of answering a question posed in natural language. A question answering system (QAS) may be considered a good alternative to search engines, which return a set of related documents rather than a direct answer. A QAS is composed of three main modules: question analysis, passage retrieval, and answer extraction. Over the years, numerous QASs have been presented for use in different languages. However, the development of Arabic QASs has been slowed by linguistic challenges and the lack of resources and tools available to researchers. In this survey, we start with the challenges posed by the language and how they make the development of new Arabic QASs more difficult. Next, we provide a detailed review of several Arabic QASs, followed by an in-depth analysis of the techniques and approaches used in the three modules of a QAS. We present an overview of important and recent tools developed to help researchers in this field, cover the available Arabic and multilingual datasets, and look at the different measures used to assess QASs. Finally, the survey delves into the future direction of Arabic QASs based on the current state-of-the-art techniques developed for question answering in other languages.


4.
This paper presents a historical Arabic corpus named HAC. At this early, embryonic stage of the project, we report on the design, the architecture, and some of the experiments we have conducted on HAC. The corpus, and accordingly the search results, will be represented using a primary XML exchange format. This will serve as an intermediate exchange tool within the project and will allow the user to process the results offline using external tools. HAC is made up of Classical Arabic texts that cover 1600 years of language use: the Quranic text, Modern Standard Arabic texts, as well as a variety of monolingual Arabic dictionaries. This historical corpus helps linguists and Arabic language learners effectively explore, understand, and discover interesting knowledge hidden in millions of instances of language use. We used techniques from the field of natural language processing to process the data and a graph-based representation for the corpus. We provide researchers with an export facility to make further linguistic analysis possible.

5.
As the number of Arabic corpora is constantly increasing, there is an obvious and growing need for concordancing software for corpus search and analysis that supports as many features as possible of the Arabic language, and provides users with a greater number of functions. This paper evaluates six existing corpus search and analysis tools based on eight criteria which seem to be the most essential for searching and analysing Arabic corpora, such as displaying Arabic text in its right-to-left direction, normalising diacritics and Hamza, and providing an Arabic user interface. The results of the evaluation revealed that three tools: Khawas, Sketch Engine, and aConCorde, have met most of the evaluation criteria and achieved the highest benchmark scores. The paper concluded that developers’ conscious consideration of the linguistic features of Arabic when designing these three tools was the most significant factor behind their superiority.
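Diacritic and Hamza normalisation of the kind used as an evaluation criterion here can be sketched in a few lines. The exact character sets each tool folds are tool-specific, so the Unicode ranges below are one common choice, not a specification of Khawas, Sketch Engine, or aConCorde:

```python
import re

# Strip Arabic diacritics (harakat, tanwin, sukun, dagger alif) and
# fold Hamza-carrying alif variants to bare alif. One common choice
# of character sets; real concordancers may differ in detail.

DIACRITICS = re.compile(r"[\u064B-\u0652\u0670]")
HAMZA_ALIF = re.compile(r"[\u0622\u0623\u0625]")  # آ أ إ -> ا

def normalize(text: str) -> str:
    text = DIACRITICS.sub("", text)           # remove short-vowel marks
    return HAMZA_ALIF.sub("\u0627", text)     # unify alif forms

print(normalize("أَكَلَ"))  # -> اكل
```

Searching the normalised text lets a query match a word regardless of whether the source writer vocalised it or which alif-hamza form was typed.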

6.
The task of building an ontology from a textual corpus starts with the conceptualization phase, which extracts ontology concepts. These concepts are linked by semantic relationships. In this paper, we describe an approach to constructing an ontology from an Arabic textual corpus, starting with the collection and preparation of the corpus through normalization, stop-word removal, and stemming. Then, to extract the terms of our ontology, a statistical method for extracting simple and complex terms, called “the repeated segments method”, is applied. To select segments with sufficient weight we apply the term frequency–inverse document frequency (TF–IDF) weighting method, and to link these terms by semantic relationships we apply an automatic method that learns linguistic markers from text. This method requires a dataset of relationship pairs, which are extracted from two external resources: an Arabic dictionary of synonyms and antonyms, and the lexical database Arabic WordNet. Finally, we present the results of our experimentation on our textual corpus. The evaluation of our approach shows encouraging results in terms of recall and precision.
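The TF–IDF weighting step can be sketched as follows. This is the standard formulation (term frequency times log inverse document frequency); the authors' exact variant is not given in the abstract and may differ:

```python
import math
from collections import Counter

# Standard TF-IDF over a list of tokenised documents.
# Returns one {term: weight} dict per document.

def tf_idf(docs):
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(t for doc in docs for t in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["corpus", "arabic"], ["corpus", "ontology"]]
w = tf_idf(docs)
# "corpus" appears in every document, so its weight is 0 in both
```

Terms shared by every document get zero weight, which is exactly why TF–IDF is useful for filtering out repeated segments that carry no discriminative content.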

7.
8.
The Quranic Arabic Corpus (http://corpus.quran.com) is a collaboratively constructed linguistic resource initiated at the University of Leeds, with multiple layers of annotation including part-of-speech tagging, morphological segmentation (Dukes and Habash 2010) and syntactic analysis using dependency grammar (Dukes and Buckwalter 2010). The motivation behind this work is to produce a resource that enables further analysis of the Quran, the 1,400-year-old central religious text of Islam. This project contrasts with other Arabic treebanks by providing a deep linguistic model based on the historical traditional grammar known as i′rāb (إعراب). By adapting this well-known canon of Quranic grammar into a familiar tagset, it is possible to encourage online annotation by Arabic linguists and Quranic experts. This article presents a new approach to linguistic annotation of an Arabic corpus: online supervised collaboration using a multi-stage approach. The stages include automatic rule-based tagging, initial manual verification, and online supervised collaborative proofreading. A popular website attracting thousands of visitors per day, the Quranic Arabic Corpus has approximately 100 unpaid volunteer annotators, each suggesting corrections to the existing linguistic tagging. To ensure a high-quality resource, a small number of expert annotators are promoted to a supervisory role, allowing them to review or veto suggestions made by other collaborators. The Quran also benefits from a large body of existing historical grammatical analysis, which may be leveraged during this review. In this paper we evaluate and report on the effectiveness of the chosen annotation methodology. We also discuss the unique challenges of annotating Quranic Arabic online and describe the custom linguistic software used to aid collaborative annotation.

9.
10.
Few annotated Arabic corpora are available, which leads us to contribute to the enrichment of Arabic corpus resources. In this regard, we decided to start working with correct and carefully selected texts; thus, beginning with the Quranic Arabic text is the best starting point for such an effort. Furthermore, annotated linguistic resources such as a Quranic corpus are important for researchers working in all fields of Arabic natural language processing. To the best of our knowledge, the only available Quranic Arabic corpora are from the University of Leeds, the University of Jordan, and the University of Haifa. Unfortunately, these corpora have several problems and do not contain enough grammatical and syntactic information. To build a new corpus of the Quran, this work used a semi-automatic technique that consists of applying the morphosyntactic analyzer of standard Arabic “AlKhalil Morpho Sys”, followed by manual treatment. As a result of this work, we have built a new Quranic corpus rich in morphosyntactic information.

11.
Speech recognition gives computers the ability to recognise spoken content and is an important research topic in the field of human-computer interaction. With the development of computing technology, speech recognition has matured considerably, but the recognition of Chinese dialects still has much room for development. China is a vast and populous country with a great variety of dialects; among them is the Chongqing dialect, used by more than 30 million people. We collected text files and corresponding audio files for a set of Chongqing-dialect words to build a corpus. Based on the pronunciation characteristics of the Chongqing dialect, we selected its initials and finals as the acoustic modelling units and chose the Hidden Markov Model (HMM) as the acoustic model, designing an HMM-based Chongqing-dialect speech recognition system. During training, the training subset of the corpus was used to train the acoustic models and build the HMM model library; during recognition, the test subset of the corpus was used for evaluation. Experimental results show that the system can recognise Chongqing-dialect speech, with a recognition accuracy of 100%.
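The recognition step of such a system can be illustrated with a toy Viterbi decoder, which finds the most likely state sequence for an observation sequence under an HMM. This is a minimal sketch of HMM decoding in general, not the paper's Chongqing-dialect system; the state names and probability tables in any usage are hypothetical:

```python
# Toy Viterbi decoding for a discrete HMM.
# obs: observation sequence; states: state labels;
# start_p, trans_p, emit_p: probability tables as nested dicts.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # Each column maps state -> (best path probability, best path).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            # Best predecessor for state s given observation o.
            prob, path = max(
                (V[-2][ps][0] * trans_p[ps][s] * emit_p[s][o],
                 V[-2][ps][1] + [s])
                for ps in states)
            V[-1][s] = (prob, path)
    return max(V[-1].values())  # (probability, best state path)
```

In a real recogniser the observations are acoustic feature vectors and the emission model is a Gaussian mixture rather than a lookup table, but the dynamic-programming recursion is the same.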

12.
The various tools dedicated to Arabic natural language processing have undergone significant development in recent years. Among these tools, Arabic morphological analyzers are of great importance because they are often used within more advanced projects such as syntactic parsers, search engines, machine translation systems, etc. Thus, researchers are forced to decide which morphological analyzer to use in their research projects, and this task is very difficult since there are many criteria to take into account. To facilitate this choice, we considered the problem of benchmarking morphological analyzers in previous work by proposing a solution that returns a set of metrics for each analyzer: accuracy, precision, recall, F-measure, and execution time. In this article, we present two major improvements to our solution: the establishment of the first version of our corpus dedicated to the evaluation of morphological analyzers, as well as the introduction of a new metric that combines all result-related metrics with the execution time of the analyzers.
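The result metrics listed above have standard definitions in terms of true/false positive and negative counts. A minimal sketch of those definitions (the paper's new combined metric is not specified in the abstract and is therefore not reproduced):

```python
# Standard precision / recall / F-measure from confusion counts.
# tp: true positives, fp: false positives, fn: false negatives.

def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

def f_measure(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r) if p + r else 0.0

print(f_measure(8, 2, 2))  # 0.8
```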

13.
The success of using Hidden Markov Models (HMMs) for speech recognition has motivated the adoption of these models for handwriting recognition, especially online handwriting, which has a large similarity with the speech signal as a sequential process. Some languages such as Arabic, Farsi, and Urdu include a large number of delayed strokes that are written above or below most letters and are usually written delayed in time. These delayed strokes represent a modeling challenge for the conventional left-right HMM commonly used in Automatic Speech Recognition (ASR) systems. In this paper, we introduce a new approach for handling delayed strokes in Arabic online handwriting recognition using HMMs. We also show that several modeling approaches such as context-based tri-grapheme models, speaker adaptive training, and discriminative training that are currently used in most state-of-the-art ASR systems can provide similar performance improvements for Handwriting Recognition (HWR) systems. Finally, we show that using a multi-pass decoder that uses the computationally less expensive models in the early passes can give an Arabic large-vocabulary HWR system a practical decoding time. We evaluated the performance of our proposed Arabic HWR system using two databases with small and large lexicons. For the small-lexicon data set, our system achieved competitive results compared to the best reported state-of-the-art Arabic HWR systems. For the large lexicon, our system achieved promising results (accuracy and time) for a vocabulary size of 64k words, with the possibility of adapting the models to specific writers to obtain even better results.

14.
The goal of this article is the application of genetic algorithms (GAs) to the automatic speech recognition (ASR) domain at the level of acoustic sequence classification. Speech recognition is cast as a pattern classification problem in which we would like to classify an input acoustic signal into one of all possible phonemes, and supervised classification is formulated as a function optimization problem. Thus, we have attempted to recognize Standard Arabic (SA) phonemes of continuous, naturally spoken speech using GAs, which have several advantages in solving complicated optimization problems. SA has 40 sounds. We analyzed a corpus that contains several sentences composed of all SA phoneme types in initial, medial, and final positions, recorded by several male speakers. Furthermore, acoustic segment classification with GAs was explored. Among a set of classifiers such as Bayesian, likelihood, and distance classifiers, we used the distance classifier, which is based on a classification measure criterion; accordingly, we used the Manhattan distance decision rule as the fitness function for our GA evaluations. The corpus phonemes were extracted and classified successfully with an overall accuracy of 90.20%.
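A minimal sketch of the distance classifier described above: an acoustic feature vector is assigned to the phoneme whose reference vector minimises the Manhattan (L1) distance. The reference vectors below are hypothetical placeholders, and the GA layer that would optimise them is omitted:

```python
# Nearest-reference classification under the Manhattan (L1) distance.

def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def classify(x, references):
    """references: {phoneme_label: reference_feature_vector}"""
    return min(references, key=lambda label: manhattan(x, references[label]))

# Hypothetical two-phoneme reference table for illustration.
refs = {"b": [1.0, 0.0], "t": [0.0, 1.0]}
print(classify([0.9, 0.1], refs))  # -> "b"
```

In a GA setting, the negative (or inverse) of this distance to the correct class can serve as the fitness to be maximised when evolving the reference vectors.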

15.
INTEX is a linguistic development environment that includes large-coverage dictionaries and grammars, and parses texts of several million words in real time. INTEX has tools to create and maintain large-coverage lexical resources as well as morphological and syntactic grammars. Dictionaries and grammars are applied to texts in order to locate morphological, lexical and syntactic patterns, remove ambiguities, and tag simple and compound words. INTEX can build lemmatized concordances and indices of large texts with respect to all types of Finite State patterns. INTEX is used as a corpus processor, to analyze literary, journalistic and technical texts. I describe here the subset of tools used to perform advanced search requests on large texts. This revised version was published online in July 2006 with corrections to the Cover Date.

16.

17.
Many natural language processing areas use semantic roles to improve applications such as information extraction, question answering, and machine translation. In Arabic, work on constructing semantic role labeling systems or annotated corpora is extremely limited compared with the number of Arabic speakers and with the English language. In this paper, we present a supervised method for the semantic role labeling of Arabic sentences. We use the feedback capacity of case-based reasoning to annotate new sentences from already annotated ones, with the Arabic PropBank as the reference for the semantic labels. We test our method on a wide-coverage corpus that contains 2332 attributes and 5291 arguments; accordingly, an Arabic semantic role labeling system is tested, for the first time, on that corpus. As a result, our method shows the ability to annotate new sentences from the labeled ones, enabling the construction of an annotated corpus.

18.
In this paper, we present a pilot Arabic morphological Pattern Net study based on a lexical semantic resource. In this study, a limited number of Arabic morphological patterns were selected in order to analyze the structure and behavior of the verbs in the Arabic PropBank, a semantically annotated corpus of newswire text from the Annahar Journal. Our goal is twofold: (a) to study whether there is a direct relationship between morphological patterns and verbal semantic roles; and (b) to verify that this direct relationship is a pervasive component of Arabic verb morphology. Our approach to building the morphological patterns database is based on linguistic generalization over the semantic roles of the verbal predicates. The results obtained show promise for a future, more comprehensive study.

19.
20.
This paper investigates the feed-forward back-propagation neural network (FFBPNN) and the support vector machine (SVM) for the classification of two Maghrebian dialects: Tunisian and Moroccan. The dialect used by Moroccan speakers is called “La Darijja” and that of Tunisians is called “Darija”. An automatic speech recognition system is implemented to identify ten Arabic digits (from zero to nine). The implementation of the present system consists of two phases: feature extraction, using a variety of popular hybrid techniques, and classification, using the FFBPNN and the SVM separately. The experimental results showed that the recognition rates reached 98.3% with the FFBPNN and 97.5% with the SVM.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号