Similar Documents
20 similar documents found (search time: 15 ms)
1.
The Cambridge University Multimedia Document Retrieval (CU-MDR) Demo System is a web-based application that allows the user to query a database of radio broadcasts that are available on the Internet. The audio from several radio stations is downloaded and transcribed automatically. This gives a collection of text and audio documents that can be searched by a user. The paper describes how speech recognition and information retrieval techniques are combined in the CU-MDR Demo System and shows how the user can interact with it.
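The pipeline sketched above (transcribe broadcasts automatically, then let users search the resulting text) can be illustrated with a minimal inverted-index example. This is only a sketch, assuming a recognizer has already produced the transcripts; the station names and transcript text are invented.

```python
# Minimal sketch of a transcribe-then-index pipeline like the one the abstract
# describes. The transcripts below stand in for the output of any speech
# recognizer; the document ids and texts are invented for illustration.
from collections import defaultdict

def build_inverted_index(transcripts):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in transcripts.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return documents containing every query term (simple AND query)."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

if __name__ == "__main__":
    transcripts = {  # stand-ins for automatically transcribed broadcasts
        "bbc_0800": "the prime minister announced new transport funding",
        "npr_0900": "transport strikes continue in the capital",
    }
    idx = build_inverted_index(transcripts)
    print(search(idx, "transport funding"))   # {'bbc_0800'}
```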

2.
Automatic recognition, annotation, and retrieval of broadcast speech is a comprehensive research topic spanning speech technology, natural language processing, and information retrieval. After reviewing the state of research on automatic annotation and retrieval of broadcast speech and analysing the key technologies involved, this paper proposes a multi-level automatic annotation framework for Mandarin broadcast speech and a retrieval scheme built on that multi-level annotation. Annotation attributes at the document, sentence, and word levels are examined; a recursive annotation method is used to refine the attributes level by level; and issues critical to automatic speech annotation, such as the speech recognition engine and speech stream segmentation, are discussed. Using the proposed method, 10 hours of Mandarin broadcast speech were annotated and retrieved, with satisfactory experimental results.
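One plausible way to represent the document/sentence/word annotation layers described above is a nested data structure that is refined level by level. The attribute names below are assumptions for illustration, not the paper's actual annotation schema.

```python
# A sketch of a three-layer (document / sentence / word) annotation structure
# in the spirit of the abstract. All field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WordAnnotation:
    text: str
    start_time: float     # seconds into the audio
    end_time: float
    confidence: float     # recognizer confidence score

@dataclass
class SentenceAnnotation:
    speaker: str
    words: List[WordAnnotation] = field(default_factory=list)

    @property
    def text(self):
        return "".join(w.text for w in self.words)

@dataclass
class DocumentAnnotation:
    program: str          # e.g. broadcast programme name
    topic: str
    sentences: List[SentenceAnnotation] = field(default_factory=list)

# Recursive refinement: annotate the document first, then fill in sentence-
# and word-level attributes as segmentation and recognition results arrive.
doc = DocumentAnnotation(program="noon news", topic="finance")
sent = SentenceAnnotation(speaker="anchor_1")
sent.words.append(WordAnnotation("股市", 12.3, 12.8, 0.91))
doc.sentences.append(sent)
print(doc.sentences[0].text)
```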

3.
Text Retrieval from Document Images Based on Word Shape Analysis
In this paper, we propose a method of text retrieval from document images using a similarity measure based on word shape analysis. We directly extract image features instead of using optical character recognition. Document images are segmented into word units, and features called vertical bar patterns are then extracted from these word units through local extrema point detection. All vertical bar patterns are used to build document vectors. Lastly, we obtain the pair-wise similarity of document images by means of the scalar product of the document vectors. Four corpora of news articles were used to test the validity of our method. During the test, the image-based similarity produced by this method was compared with results obtained from the ASCII versions of the same documents using an N-gram algorithm for text.
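The vector-building and similarity stage of the method can be sketched as follows; the image-side feature extraction (vertical bar patterns from local extrema) is omitted, and the pattern vocabulary shown is hypothetical.

```python
# Sketch of the document-vector and similarity stage described in the abstract.
# The {pattern: count} inputs stand in for features extracted from word images;
# the real extraction step (local-extrema detection) is not shown.
import numpy as np

def document_vector(pattern_counts, vocabulary):
    """Turn a {pattern: count} dict into a fixed-length vector."""
    return np.array([pattern_counts.get(p, 0) for p in vocabulary], dtype=float)

def similarity(vec_a, vec_b):
    """Pair-wise similarity as the (normalised) scalar product of document vectors."""
    denom = np.linalg.norm(vec_a) * np.linalg.norm(vec_b)
    return float(vec_a @ vec_b / denom) if denom else 0.0

vocabulary = ["|.|", "||.", ".||", "|..|"]          # hypothetical bar patterns
doc1 = document_vector({"|.|": 3, "||.": 1}, vocabulary)
doc2 = document_vector({"|.|": 2, ".||": 2}, vocabulary)
print(round(similarity(doc1, doc2), 3))
```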

4.
This paper proposes an efficient speech data selection technique that can identify those data that will be well recognized. Conventional confidence measure techniques can also identify well-recognized speech data. However, those techniques require a lot of computation time for speech recognition processing to estimate confidence scores. Speech data with low confidence should not go through the time-consuming recognition process since they will yield erroneous spoken documents that will eventually be rejected. The proposed technique can select the speech data that will be acceptable for speech recognition applications. It rapidly selects speech data with high prior confidence based on acoustic likelihood values and using only speech and monophone models. Experiments show that the proposed confidence estimation technique is over 50 times faster than the conventional posterior confidence measure while providing equivalent data selection performance for speech recognition and spoken document retrieval.
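A minimal sketch of the selection idea follows, assuming the prior confidence is an average log-likelihood ratio between a monophone model and a generic speech model; the scorers below are toy stand-ins for trained acoustic models, not the paper's implementation.

```python
# Hedged sketch of likelihood-based data selection: the average per-frame
# log-likelihood ratio (monophone model vs. generic speech model) serves as a
# cheap prior confidence score, and only high-scoring utterances are kept.
import numpy as np

def prior_confidence(frames, score_monophone, score_speech):
    """Average per-frame log-likelihood ratio (monophone vs. generic speech)."""
    ratios = [score_monophone(f) - score_speech(f) for f in frames]
    return float(np.mean(ratios))

def select_utterances(utterances, score_monophone, score_speech, threshold=0.0):
    """Keep only utterances whose prior confidence exceeds the threshold."""
    return [u for u in utterances
            if prior_confidence(u, score_monophone, score_speech) > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    utts = [rng.normal(size=(50, 13)) for _ in range(3)]   # fake MFCC frames
    # Toy scorers standing in for trained acoustic models.
    mono = lambda f: -0.5 * float(np.sum(f ** 2))
    speech = lambda f: -0.6 * float(np.sum(f ** 2))
    print(len(select_utterances(utts, mono, speech)))
```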

5.
6.
7.
The television monitoring operations of the State Administration of Radio, Film and Television (广电总局) have already achieved automated equipment control and digitised, networked satellite signal acquisition, but content-based monitoring of abnormal events and the associated information processing still rely entirely on manual work. Advances in speech processing, speech recognition, and associative retrieval now make it possible to bring intelligence to television monitoring. This paper describes the architecture of an intelligent assistance system for television monitoring that can automatically locate television programmes, convert television news speech into text, raise alerts on sensitive language content, and cluster related information, facilitating subsequent manual processing.
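The alerting step could look roughly like the following sketch, which scans transcribed segments against a watch list and groups hits by channel; the watch list and transcripts are invented examples, not the system's actual configuration.

```python
# Sketch of the alerting stage: transcribed news text is scanned against a
# watch list and flagged segments are grouped by channel. All data invented.
from collections import defaultdict

WATCH_LIST = {"停播", "信号中断", "插播"}    # hypothetical sensitive terms

def flag_segments(segments):
    """segments: list of (channel, text). Return alerts grouped by channel."""
    alerts = defaultdict(list)
    for channel, text in segments:
        hits = [term for term in WATCH_LIST if term in text]
        if hits:
            alerts[channel].append({"text": text, "hits": hits})
    return alerts

segments = [
    ("CCTV-1", "节目信号中断约十秒后恢复正常"),
    ("CCTV-2", "下面播送天气预报"),
]
print(dict(flag_segments(segments)))
```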

8.
A typical spoken content retrieval solution integrates multiple technologies from the areas of automatic speech recognition and information retrieval. Due to the rich set of challenges, many of them language-specific, as well as the field's widespread impact, numerous research sites around the world are actively engaged in this research area. This special issue highlights some of the recent advances in spoken content retrieval.

9.
Speech and speaker recognition systems are rapidly being deployed in real-world applications. In this paper, we discuss the details of a system and its components for indexing and retrieving multimedia content derived from broadcast news sources. The audio analysis component calls for real-time speech recognition for converting the audio to text and concurrent speaker analysis consisting of the segmentation of audio into acoustically homogeneous sections followed by speaker identification. The output of these two simultaneous processes is used to abstract statistics to automatically build indexes for text-based and speaker-based retrieval without user intervention. The real power of multimedia document processing is the possibility of Boolean queries in the form of combined text- and speaker-based user queries. Retrieval for such queries entails combining the results of individual text and speaker based searches. The underlying techniques discussed here can easily be extended to other speech-centric applications and transactions.
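Combining the text-based and speaker-based indexes for a Boolean query ("segments about X spoken by Y") can be sketched as a simple set intersection; the index contents below are invented.

```python
# Sketch of a combined Boolean query over a text index and a speaker index.
# The index contents are invented examples.
text_index = {          # term -> segment ids
    "election": {"seg1", "seg3", "seg7"},
    "budget":   {"seg2", "seg3"},
}
speaker_index = {       # speaker -> segment ids
    "anchor_smith": {"seg3", "seg4"},
    "reporter_lee": {"seg1", "seg7"},
}

def combined_query(terms, speaker):
    """Segments containing all terms AND attributed to the given speaker."""
    result = speaker_index.get(speaker, set()).copy()
    for term in terms:
        result &= text_index.get(term, set())
    return result

print(combined_query(["election"], "anchor_smith"))   # {'seg3'}
```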

10.
11.
This paper investigates speech prosody for automatic story segmentation in Mandarin broadcast news. Prosodic cues effectively used in English story segmentation deserve a re-investigation since the lexical tones of Mandarin may complicate the expressions of pitch declination and reset. Our data-oriented study shows that story boundaries cannot be clearly discriminated from utterance boundaries by speaker-normalized pitch features due to their large variations across different Mandarin syllable tones. We thus propose to use speaker- and tone-normalized pitch features that can provide clear separations between utterance and story boundaries. Our study also shows that speaker-normalized pause duration is quite effective to separate between story and utterance boundaries, while speaker-normalized speech energy and syllable duration are not effective. Experiments using decision trees for story boundary detection reinforce the difference between English and Chinese, i.e., speaker- and tone-normalized pitch features should be favorably adopted in Mandarin story segmentation. We show that the combination of different prosodic cues can achieve a very high F-measure of 93.04% due to the complementarity between pause, pitch and energy. Analysis of the decision tree uncovered five major heuristics that show how speakers jointly utilize pause duration and pitch to separate speech into stories.
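A hedged sketch of the feature treatment and classifier: pitch features are z-score normalised within each (speaker, tone) group, pause duration within each speaker, and a decision tree separates the two boundary types. The data is synthetic and pandas/scikit-learn are assumed to be available; this is not the paper's exact setup.

```python
# Sketch of speaker- and tone-wise normalisation plus a decision-tree boundary
# classifier, using synthetic data in place of real prosodic measurements.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def group_zscore(df, value, keys):
    g = df.groupby(keys)[value]
    return (df[value] - g.transform("mean")) / g.transform("std").replace(0, 1)

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "speaker": rng.choice(["spk1", "spk2"], n),
    "tone": rng.integers(1, 5, n),                 # Mandarin tones 1-4
    "pitch_reset": rng.normal(0, 1, n),
    "pause": rng.exponential(0.4, n),
    "is_story_boundary": rng.integers(0, 2, n),
})
df["pitch_norm"] = group_zscore(df, "pitch_reset", ["speaker", "tone"])
df["pause_norm"] = group_zscore(df, "pause", ["speaker"])

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(df[["pitch_norm", "pause_norm"]], df["is_story_boundary"])
print(clf.score(df[["pitch_norm", "pause_norm"]], df["is_story_boundary"]))
```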

12.
1. Introduction: Faced with an ever-growing volume of information, retrieving content of interest effectively is crucial. Compared with written reports, news video and audio (including television and radio) are more vivid and expressive, but they also involve large data volumes and are difficult to organise, index, and retrieve. This shows up in two main respects: text has explicit auxiliary markers such as titles and paragraphs, whereas video and audio do not; and ordinary browsing tools offer only simple operations such as play, fast forward, rewind, and drag-to-seek. For video and audio databases of tens or hundreds of hours that continue to grow, this falls far short of what is required.

13.
There is a significant lexical difference, in both words and word usage, between spontaneous/colloquial language and the written language. This difference affects the performance of spoken language recognition systems that use statistical language models or context-free grammars, because these models are based on the written language rather than the spoken form. Many filler phrases and colloquial phrases appear solely, or more often, in spontaneous and colloquial speech. Chinese languages perhaps exemplify such a difference, as many colloquial forms of the language, such as Cantonese, exist strictly in spoken form and differ from written standard Chinese, which is based on Mandarin. A conventional way of dealing with this issue is to add colloquial terms manually to the lexicon, but this is time-consuming and expensive; likewise, supervised learning requires manual tagging of large corpora, which is also time-consuming. We propose an unsupervised learning method to find colloquial terms and classify filler and content phrases in spontaneous and colloquial Chinese, including Cantonese. We propose using frequency strength and spread measures of character pairs and groups to automatically extract frequent, out-of-vocabulary colloquial terms to add to a standard Chinese lexicon. An unsegmented and unannotated corpus is segmented with the augmented lexicon. We then propose a Markov classifier to classify Chinese characters into either content or filler phrases in an iterative training method. This method is task-independent and can extract even mixed-language terms. We show the effectiveness of our method on both a natural language query processing task and an adaptive Cantonese language-modeling task. The precision for content phrase extraction and classification is around 80%, with a recall of 99%, and the precision for filler phrase extraction and classification is around 99.5%, with a recall of approximately 89%. The web search precision using these extracted content words is comparable to that of search results with content phrases selected by humans. We adapt a language model trained from written texts with the Hong Kong Newsgroup corpus. It outperforms both the standard Chinese language model and the Cantonese language model, and it also performs better than a language model trained simply by concatenating the standard and colloquial text sets.
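The character-pair statistics can be sketched as follows, interpreting "frequency strength" as a raw count and "spread" as the number of documents a pair occurs in; this interpretation and the thresholds are assumptions, not the paper's exact measures.

```python
# Sketch of character-pair candidate extraction: keep a bigram if it is
# frequent ("strength") and occurs across several documents ("spread").
from collections import Counter, defaultdict

def extract_candidate_terms(docs, min_count=2, min_spread=2):
    counts = Counter()
    doc_spread = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for i in range(len(text) - 1):
            pair = text[i:i + 2]
            counts[pair] += 1
            doc_spread[pair].add(doc_id)
    return [p for p in counts
            if counts[p] >= min_count and len(doc_spread[p]) >= min_spread]

docs = ["佢哋今日好忙", "佢哋话听日再嚟", "今日天气好好"]   # toy Cantonese/Mandarin mix
print(extract_candidate_terms(docs))
```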

14.
Web news provides a quick and convenient means to create large document collections. The creation of a web news corpus has typically required the construction of a set of HTML parsing rules to identify content text. In general, these parsing rules are written manually and treat different web pages differently. We address this issue and propose a news content recognition algorithm that is language and layout independent. Our method first scans a given HTML document and roughly localizes a set of candidate news areas. Next, we apply a designed scoring function to rank the best content. To validate this approach, we evaluate the system's performance using 1092 items of multilingual web news data covering 17 global regions and 11 distinct languages. On these data, we compare our method with nine published content extraction systems using standard settings. The results of this empirical study show that our method outperforms the second-best approach (Boilerpipe) by 6.04% and 10.79% with regard to the relative micro and macro F-measures, respectively. We also apply our system to monitor online RSS news distribution: it collected 0.4 million news articles from 200 RSS channels in 20 days, and a sample quality test shows that our method achieved 93% extraction accuracy for large news streams.
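One plausible scoring function, not necessarily the paper's, ranks candidate blocks by text length penalised by link density; the sketch below assumes BeautifulSoup (bs4) is installed and uses an invented HTML snippet.

```python
# Hedged sketch of block scoring for main-content extraction: candidate blocks
# are ranked by text length discounted by link density.
from bs4 import BeautifulSoup

def score_block(tag):
    text = tag.get_text(" ", strip=True)
    link_text = " ".join(a.get_text(" ", strip=True) for a in tag.find_all("a"))
    link_density = len(link_text) / len(text) if text else 1.0
    return len(text) * (1.0 - link_density)

def extract_main_content(html):
    soup = BeautifulSoup(html, "html.parser")
    blocks = soup.find_all(["div", "p", "article", "td"])
    if not blocks:
        return ""
    best = max(blocks, key=score_block)
    return best.get_text(" ", strip=True)

html = """<html><body>
<div><a href='/a'>Home</a> <a href='/b'>World</a></div>
<div>The storm made landfall early on Tuesday, forcing thousands to evacuate
coastal towns and cutting power to large parts of the region.</div>
</body></html>"""
print(extract_main_content(html))
```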

15.
It is suggested that algorithms capable of estimating and characterizing accent would provide valuable information in the development of more effective speech systems such as speech recognition, speaker identification, audio stream tagging in spoken document retrieval, channel monitoring, or voice conversion. Accent knowledge could be used to select alternative pronunciations in a lexicon, to drive acoustic model adaptation, or to bias a language model in large-vocabulary speech recognition. In this paper, we propose a text-independent automatic accent classification system using phone-based models. Algorithm formulation begins with a series of experiments focused on capturing spectral evolution information as potential accent-sensitive cues. Alternative subspace representations using principal component analysis and linear discriminant analysis with projected trajectories are considered. Finally, an experimental study is performed to compare the spectral trajectory model framework with a traditional hidden Markov model recognition framework using an accent-sensitive word corpus. System evaluation is performed using a corpus representing five English speaker groups: native American English, and English spoken with Mandarin Chinese, French, Thai, and Turkish accents, for both male and female speakers.
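The subspace idea can be sketched as a PCA projection followed by an LDA classifier over per-utterance trajectory features; the features below are synthetic stand-ins and scikit-learn is assumed to be available, so this is an illustration rather than the paper's system.

```python
# Sketch of subspace-based accent classification: synthetic per-utterance
# trajectory features are projected with PCA and classified with LDA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
accents = ["native", "mandarin", "french", "thai", "turkish"]
X = np.vstack([rng.normal(loc=i, scale=2.0, size=(40, 60)) for i in range(5)])
y = np.repeat(accents, 40)

model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
model.fit(X, y)
print(model.score(X, y))        # training accuracy on the toy data
```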

16.
Personal computers hold large numbers of unstructured documents, and extracting useful information from them is a key and difficult part of realising semantic desktop management. Entity recognition and extraction, in turn, are an important prerequisite and key step in information extraction. This paper first proposes a method that uses textual cues and ontology metadata to recognise entities in unstructured documents, then manually builds a document collection and uses it to verify the new method's entity recognition performance within a specific domain.
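A minimal sketch of cue-plus-ontology entity spotting: a string is accepted if it matches an ontology instance or follows a textual cue pattern. The ontology entries and cue patterns are invented examples, not the paper's resources.

```python
# Sketch of entity spotting from textual cues and ontology metadata.
# ONTOLOGY_INSTANCES and CUE_PATTERNS are illustrative assumptions.
import re

ONTOLOGY_INSTANCES = {"清华大学": "University", "张三": "Person"}
CUE_PATTERNS = [(re.compile(r"作者[:：]\s*(\S+)"), "Person"),
                (re.compile(r"单位[:：]\s*(\S+)"), "Organization")]

def extract_entities(text):
    entities = []
    for name, etype in ONTOLOGY_INSTANCES.items():   # ontology lookup
        if name in text:
            entities.append((name, etype))
    for pattern, etype in CUE_PATTERNS:              # textual cue phrases
        for match in pattern.finditer(text):
            entities.append((match.group(1), etype))
    return entities

print(extract_entities("作者: 李四  单位: 清华大学计算机系"))
```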

17.
This paper proposes a new, efficient algorithm for extracting similar sections between two time sequence data sets. The algorithm, called Relay Continuous Dynamic Programming (Relay CDP), realizes fast matching between arbitrary sections in the reference pattern and the input pattern and enables the extraction of similar sections in a frame-synchronous manner. In addition, Relay CDP is extended to two types of applications that handle spoken documents. The first application is the extraction of repeated utterances in a presentation or a news speech, because repeated utterances are assumed to be important parts of the speech; these repeated utterances can be regarded as labels for information retrieval. The second application is flexible spoken document retrieval. A phonetic model is introduced to cope with the speech of different speakers. The new algorithm allows a user to query by natural utterance and searches spoken documents for any partial matches to the query utterance. We present herein a detailed explanation of Relay CDP and the experimental results for the extraction of similar sections, and we report results for the two applications using Relay CDP.

Yoshiaki Itoh has been an associate professor in the Faculty of Software and Information Science at Iwate Prefectural University, Iwate, Japan, since 2001. He received the B.E. degree, M.E. degree, and Dr. Eng. from Tokyo University, Tokyo, in 1987, 1989, and 1999, respectively. From 1989 to 2001 he was a researcher and a staff member of Kawasaki Steel Corporation, Tokyo and Okayama. From 1992 to 1994 he was transferred as a researcher to the Real World Computing Partnership, Tsukuba, Japan. Dr. Itoh's research interests include spoken document processing without recognition, audio and video retrieval, and real-time human communication systems. He is a member of ISCA, the Acoustical Society of Japan, the Institute of Electronics, Information and Communication Engineers, the Information Processing Society of Japan, and the Japan Society of Artificial Intelligence.

Kazuyo Tanaka has been a professor at the University of Tsukuba, Tsukuba, Japan, since 2002. He received the B.E. degree from Yokohama National University, Yokohama, Japan, in 1970, and the Dr. Eng. degree from Tohoku University, Sendai, Japan, in 1984. From 1971 to 2002 he was a research officer at the Electrotechnical Laboratory (ETL), Tsukuba, Japan, and the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan, where he worked on speech analysis, synthesis, recognition, and understanding, and also served as chief of the speech processing section. His current interests include digital signal processing, spoken document processing, and human information processing. He is a member of IEEE, ISCA, the Acoustical Society of Japan, the Institute of Electronics, Information and Communication Engineers, and the Japan Society of Artificial Intelligence.

Shi-Wook Lee received the B.E. degree and M.E. degree from Yeungnam University, Korea, and the Ph.D. degree from the University of Tokyo in 1995, 1997, and 2001, respectively. Since 2001 he has been working in the Research Group of Speech and Auditory Signal Processing, the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan, as a postdoctoral fellow. His research interests include spoken document processing, speech recognition, and understanding.
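Relay CDP itself involves more machinery than can be shown here; the sketch below is a much simplified frame-synchronous continuous DP that only spots where a query sequence best matches inside a longer reference, as a rough illustration of the partial-matching idea rather than the authors' algorithm.

```python
# Simplified continuous-DP spotting sketch (not the full Relay CDP): find the
# section of a long reference sequence that best matches a short query,
# allowing the match to start at any reference frame.
import numpy as np

def cdp_spot(query, reference):
    """Return (end_frame, normalised_cost) of the best partial match."""
    M, N = len(query), len(reference)
    INF = np.inf
    dp = np.full((M, N), INF)
    for j in range(N):
        dp[0, j] = np.linalg.norm(query[0] - reference[j])   # start anywhere
        for i in range(1, M):
            d = np.linalg.norm(query[i] - reference[j])
            prev = min(dp[i - 1, j - 1] if j > 0 else INF,   # diagonal
                       dp[i - 1, j],                         # vertical
                       dp[i, j - 1] if j > 0 else INF)       # horizontal
            dp[i, j] = d + prev
    end = int(np.argmin(dp[M - 1]))
    return end, float(dp[M - 1, end] / M)

rng = np.random.default_rng(0)
ref = rng.normal(size=(100, 13))
query = ref[40:55] + rng.normal(scale=0.05, size=(15, 13))   # noisy section
print(cdp_spot(query, ref))                                  # end near frame 54
```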

18.
19.
20.
When analysing and applying video data, video segmentation is the foundation for analysing, organising, and using that data. Because of the diversity of video data, traditional segmentation methods cannot give satisfactory results and generally require human-computer interaction. This paper combines three relatively mature technologies, text analysis, speech processing, and image processing, so that they complement one another in segmenting the video stream. Text analysis operates on text converted from speech, titles, captions, and the like. Speech processing includes speech recognition and speech signal analysis: speech recognition converts the natural language in the video into text, while speech signal analysis performs basic analysis of the speech components of the video material. Image processing mainly handles the visual portion of the video. The paper describes the segmentation hierarchy of video streams, the text analysis and speech processing algorithms, and the ideas behind the algorithms for detecting abrupt and gradual shot transitions.
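The fusion step could be sketched as a weighted combination of per-frame boundary scores from the three analyses; the scores, weights, and threshold below are illustrative assumptions, not values from the paper.

```python
# Sketch of fusing boundary evidence from text, audio, and image analyses:
# frames whose weighted combined score exceeds a threshold become boundaries.
import numpy as np

def fuse_boundaries(text_score, audio_score, image_score,
                    weights=(0.3, 0.3, 0.4), threshold=0.6):
    scores = np.stack([text_score, audio_score, image_score])
    fused = np.average(scores, axis=0, weights=weights)
    return np.where(fused > threshold)[0]

text_score = np.array([0, 0, .9, 0, 0, 0, .2, 0, 0, 0.])
audio_score = np.array([0, .1, .8, 0, 0, 0, .9, 0, 0, 0.])
image_score = np.array([0, 0, .7, 0, 0, .1, .9, 0, 0, 0.])
print(fuse_boundaries(text_score, audio_score, image_score))   # [2 6]
```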
