Similar Documents
20 similar documents found (search time: 31 ms)
1.
Large vocabulary recognition of on-line handwritten cursive words
This paper presents a writer-independent system for large vocabulary recognition of on-line handwritten cursive words. The system first uses a filtering module, based on simple letter features, to quickly reduce a large reference dictionary (lexicon) to a more manageable size; the reduced lexicon is subsequently fed to a recognition module. The recognition module uses a temporal representation of the input, instead of a static two-dimensional image, thereby preserving the sequential nature of the data and enabling the use of a Time-Delay Neural Network (TDNN); such networks have previously been successful in the continuous speech recognition domain. Explicit segmentation of the input words into characters is avoided by sequentially presenting the input word representation to the neural network-based recognizer. The outputs of the recognition module are collected and converted into a string of characters that is matched against the reduced lexicon using an extended Damerau-Levenshtein function. Trained on 2,443 unconstrained word images (11 k characters) from 55 writers and using a 21 k lexicon, we reached 97.9% and 82.4% top-5 word recognition rates on a writer-dependent and a writer-independent test, respectively.
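The final string-matching step can be sketched as follows. The paper uses an *extended* Damerau-Levenshtein function whose extensions are not given in the abstract, so this minimal Python sketch uses the standard (restricted) Damerau-Levenshtein distance to pick the closest lexicon word; the function names are illustrative, not the authors'.

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + cost)  # transposition
    return d[len(a)][len(b)]

def match_against_lexicon(recognized: str, lexicon: list[str]) -> str:
    """Return the lexicon word closest to the recognizer's output string."""
    return min(lexicon, key=lambda w: damerau_levenshtein(recognized, w))
```

Transpositions matter here because a stroke-order confusion in cursive input often surfaces as two swapped letters, which plain Levenshtein would charge twice for.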

2.
Abstract. This paper describes a method for correcting optically read Devanagari character strings using a Hindi word dictionary. The word dictionary is partitioned to reduce the search space and to prevent forced matching to an incorrect word. The partitioning strategy takes the underlying OCR process into account. At the top level, the dictionary words are divided into two partitions: a short-words partition and a remaining-words partition. The short-words partition is sub-partitioned using the envelope information of the words; the envelope consists of the number of top, lower, and core modifiers along with the number of core characters. (Devanagari characters are written in three strips; most characters, referred to as core characters, occupy the middle strip.) The remaining words are further partitioned using tags, where a tag is a fixed-length string associated with each partition. The correction process uses a distance matrix for assigning a penalty to a mismatch. The distance matrix is based on knowledge of the errors the classification process is known to make and on the confidence figure the classifier associates with its output. An improvement of approximately 20% in recognition performance is obtained. For a short word, 590 words from 14 sub-partitions of the short-words partition are searched on average before an exact match is found; when no exact match is found, the averages increase to 20 partitions and 1,585 words. For tag-based partitions, on average 100 words from 30 partitions are compared when either an exact match or a word within the preset threshold distance is found; otherwise, on average 75 partitions and 450 words are compared.
To the best of our knowledge this is the first work on the use of a Hindi word dictionary for OCR post-processing. Received August 6, 2001 / Accepted August 22, 2001

3.
Computer-assisted Chinese language learning is an important research area that has attracted growing interest in natural language processing. This paper presents a character-based reading-assistance system that provides Chinese learners with on-the-fly translation and learning functions. The system first applies a character-based Chinese lexical analysis method to segment the text of Chinese web pages, and then discovers new words using the structural information of their component characters. For new words not covered by general-purpose dictionaries (e.g., technical terms, proper nouns, and fixed phrases), the system mines idiomatic translations from the Web using semantic prediction and feedback learning. For common words, it displays instant translations via Chinese-English (or Chinese-Japanese) dictionaries, and a word-usage retrieval module lets users find concrete usage examples on the Web. The system's key techniques, character-based Chinese lexical analysis, new word detection based on component-character structure, and new word translation acquisition based on semantic prediction and feedback learning, all follow the character-as-analysis-unit approach that runs through the entire system. Experiments show that the system performs well in all respects.

4.
Stop word location and identification for adaptive text recognition
Abstract. We propose a new adaptive strategy for text recognition that attempts to derive knowledge about the dominant font on a given page. The strategy uses the linguistic observation that over half of all words in a typical English passage are contained in a small set of fewer than 150 stop words. A small dictionary of such words is compiled from the Brown corpus. An arbitrary text page first goes through layout analysis that produces word segmentation. A fast procedure is then applied to locate the most likely candidates for those words, using only the widths of the word images. The identity of each word is determined using a word shape classifier. Using the word images together with their identities, character prototypes can be extracted using a previously proposed method. We describe experiments using simulated and real images. In an experiment using 400 real page images, we show that on average, eight distinct characters can be learned from each page, and the method is successful on 90% of all the pages. These can serve as useful seeds to bootstrap font learning. Received October 8, 1999 / Revised March 29, 2000
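The width-only location step might be sketched as follows, under assumptions the abstract does not spell out: an estimated average character width (`unit_width`) and a relative tolerance `tol` stand in for whatever width model the paper actually uses, and all names are illustrative.

```python
def locate_stop_word_candidates(word_widths, stop_word_lengths, unit_width, tol=0.2):
    """For each segmented word image (known only by its pixel width), list the
    stop words whose expected width (length * unit_width) falls within a
    relative tolerance of the observed width.

    word_widths       -- pixel widths of the word images, in reading order
    stop_word_lengths -- dict mapping stop word -> character count
    unit_width        -- rough average character width on the page (assumed)
    """
    candidates = []
    for idx, w in enumerate(word_widths):
        matches = [word for word, n in stop_word_lengths.items()
                   if abs(n * unit_width - w) <= tol * w]
        if matches:
            candidates.append((idx, matches))
    return candidates
```

A width-equivalent set such as {"the", "and"} is then disambiguated by the word shape classifier mentioned in the abstract; this sketch only performs the cheap pre-filtering.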

5.
This paper presents the online handwriting recognition system NPen++ developed at the University of Karlsruhe and Carnegie Mellon University. The NPen++ recognition engine is based on a multi-state time delay neural network and yields recognition rates of 96% on a 5,000-word dictionary, 93.4% on a 20,000-word dictionary, and 91.2% on a 50,000-word dictionary. The proposed tree search and pruning technique reduces the search space considerably without losing much recognition performance compared to an exhaustive search. This enables the NPen++ recognizer to run in real time with large dictionaries. Initial recognition rates for whole sentences are promising and show that the MS-TDNN architecture is suited to recognizing handwritten data ranging from single characters to whole sentences. Received September 3, 2000 / Revised October 9, 2000

6.
7.
An adaptive Chinese word segmentation system for information retrieval
New word recognition and ambiguity resolution are key factors affecting the accuracy of information retrieval systems. This paper proposes an adaptive, statistics-based Chinese word segmentation algorithm oriented toward information retrieval. Based on this algorithm, a new segmentation system, BUAASEISEG, was designed and implemented. It can recognize new words of all kinds in arbitrary domains, resolve ambiguities, and segment words of any reasonable length. Using an iterative bigram segmentation method, it gathers online word-frequency statistics over the target document, and uses an offline word-frequency dictionary or a search engine's inverted index to filter candidate words and resolve ambiguities. On top of the statistical model, post-processing with a surname list, a measure-word list, and a stop-word list further improves accuracy. A comparative evaluation against the well-known ICTCLAS segmentation system on news and academic texts shows that BUAASEISEG has a clear advantage in new word recognition and ambiguity resolution.
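The abstract describes the statistical model only at a high level. As a rough illustration of frequency-driven segmentation, here is a dynamic-programming sketch that scores candidate splits by word frequency from an offline dictionary; the log-frequency scoring, the unknown-character fallback, and all names are assumptions for illustration, not the paper's model.

```python
import math

def segment(text, freq, max_len=4):
    """Dynamic-programming word segmentation: pick the split maximizing the
    sum of log-frequency scores of its words.  Unknown single characters get
    a small default score so segmentation always succeeds."""
    # best[i] = (best score for text[:i], split point that achieves it)
    best = [(0.0, 0)] + [(-math.inf, 0)] * len(text)
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - max_len), i):
            w = text[j:i]
            if w in freq:
                s = math.log(freq[w])
            elif i - j == 1:
                s = math.log(0.5)  # fallback for an out-of-dictionary character
            else:
                continue
            if best[j][0] + s > best[i][0]:
                best[i] = (best[j][0] + s, j)
    words, i = [], len(text)
    while i > 0:           # backtrack through the stored split points
        j = best[i][1]
        words.append(text[j:i])
        i = j
    return words[::-1]
```

The example below uses Latin letters to stand for Chinese characters; a real system would normalize the frequencies into probabilities rather than use raw counts.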

8.
Pronunciation variation is a major obstacle to improving the performance of Arabic automatic continuous speech recognition systems. This phenomenon alters the pronunciation of words beyond their listed forms in the pronunciation dictionary, leading to a number of out-of-vocabulary word forms. This paper presents a direct data-driven approach to modeling within-word pronunciation variations, in which the pronunciation variants are distilled from the training speech corpus. The proposed method performs phoneme recognition, followed by a sequence alignment between the observation phonemes generated by the phoneme recognizer and the reference phonemes obtained from the pronunciation dictionary. The unique collected variants are then added to the dictionary as well as to the language model. We started with a baseline Arabic speech recognition system based on the Sphinx3 engine, built on a 5.4-hour speech corpus of Modern Standard Arabic broadcast news with a pronunciation dictionary of 14,234 canonical pronunciations. The baseline system achieves a word error rate of 13.39%. Our results show that while the expanded dictionary alone did not add appreciable improvements, the word error rate is significantly reduced, by 2.22%, when the variants are also represented within the language model.
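The variant-distilling step, aligning the phoneme recognizer's output against the reference pronunciation and keeping the differing segments, might look like the following sketch. It uses Python's `difflib.SequenceMatcher` as a stand-in for whatever sequence alignment the authors use, and the function name is illustrative.

```python
import difflib

def collect_variants(reference, observed):
    """Align a reference phoneme sequence against the phoneme recognizer's
    output and return the (reference_segment, observed_segment) pairs that
    differ -- the raw material for pronunciation variants."""
    sm = difflib.SequenceMatcher(a=reference, b=observed)
    variants = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":  # 'replace', 'delete', or 'insert'
            variants.append((tuple(reference[i1:i2]), tuple(observed[j1:j2])))
    return variants
```

In the paper's pipeline, variants collected this way over the whole training corpus are deduplicated and then added to both the dictionary and the language model.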

9.
Hybrid contextual text recognition with string matching
A hybrid contextual algorithm for reading real-life documents printed in varying fonts of any size is presented. Text is recognized progressively in three passes: the first generates character hypotheses, the second generates word hypotheses, and the third verifies the word hypotheses. During the first pass, isolated characters are recognized using a dynamic contour warping classifier. Transient statistical information is collected to accelerate the recognition process and to verify hypotheses in later processing, and a transient dictionary of high-confidence non-dictionary words is constructed. During the second pass, word-level hypotheses are generated using hybrid contextual text processing; non-dictionary words are recognized using a modified Viterbi algorithm, a string matching algorithm utilizing n-grams, special handlers for touching characters, pragmatic handlers for numerals, punctuation, hyphens, and apostrophes, and a prefix/suffix handler. This processing usually generates several word hypotheses. During the third pass, word-level verification occurs.

10.
For on-line handwriting recognition, a hybrid approach that combines the discrimination power of neural networks with the temporal structure of hidden Markov models is presented. Initially, all plausible letter components of an input pattern are detected by using a letter spotting technique based on hidden Markov models. A word hypothesis lattice is generated as a result of the letter spotting. All letter hypotheses in the lattice are evaluated by a neural network character recognizer in order to reinforce letter discrimination power. Then, as a new technique, an island-driven lattice search algorithm is performed to find the optimal path on the word hypothesis lattice which corresponds to the most probable word among the dictionary words. The results of this experiment suggest that the proposed framework works effectively in recognizing English cursive words. In a word recognition test, on average 88.5% word accuracy was obtained.

11.
On the ordering properties of the Tibetan script and methods for sorting it
To solve the problem of sorting Tibetan text, this paper introduces the concepts of structural order and character order for Tibetan, and on that basis proposes a computational scheme for Tibetan dictionary ordering. The paper analyzes and assigns values to the various Tibetan structures and characters, and gives a technical flowchart for computer-based Tibetan sorting.

12.
Slant estimation algorithm for OCR systems
A slant removal algorithm is presented based on the use of the vertical projection profile of word images and the Wigner-Ville distribution. The slant correction does not affect the connectivity of the word, and the resulting words look natural. The algorithm was evaluated by both subjective and objective means, and has been tested on English and Modern Greek samples from more than 500 writers, taken from the IAM-DB and GRUHD databases. The extracted results are natural, and almost always improved with respect to the original image, even in the case of variably slanted writing. The performance of an existing character recognition system increased by up to 9% on the same data, while the training time was significantly reduced. Due to its simplicity, this algorithm can easily be incorporated into any optical character recognition system.
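The abstract does not detail the estimation beyond the vertical projection profile and the Wigner-Ville distribution. A common, simpler variant of profile-based slant estimation shears the ink at candidate angles and keeps the angle with the most peaked profile; the peakedness criterion below (sum of squared column counts) is an assumption standing in for the paper's Wigner-Ville measure, and the names are illustrative.

```python
import math

def estimate_slant(ink_pixels, angles_deg=range(-45, 46, 5)):
    """Estimate the dominant slant of a word from its ink pixels (x, y),
    with y measured downward.  For each candidate angle, shear the pixels
    upright and build the vertical projection profile; the angle whose
    profile is most 'peaked' (largest sum of squared column counts) is
    taken as the slant, since upright strokes pile into few columns."""
    best_angle, best_score = 0, -1.0
    for a in angles_deg:
        t = math.tan(math.radians(a))
        profile = {}
        for x, y in ink_pixels:
            col = round(x - y * t)          # horizontal shear by -a degrees
            profile[col] = profile.get(col, 0) + 1
        score = sum(c * c for c in profile.values())
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle
```

Once the angle is found, applying the inverse shear to the whole word image removes the slant without breaking stroke connectivity, which matches the property the abstract emphasizes.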

13.
To address the limited size of pronunciation dictionaries in Russian speech synthesis and speech recognition systems, this paper proposes a Russian grapheme-to-phoneme transcription algorithm based on a long short-term memory (LSTM) sequence-to-sequence model, and designs and implements a prototype transcription system. First, the SAMPA-based Russian phoneme set is redesigned so that transcriptions reflect word stress position and vowel reduction, and a 20,000-word Russian pronunciation dictionary is built from the new phoneme set. The algorithm is implemented with the TensorFlow framework: an encoder LSTM converts a Russian word into a fixed-dimensional vector, and a decoder LSTM converts the vector into the target phoneme sequence. Finally, a Russian transcription system with interactive word transcription and other functions is designed and implemented. Experimental results show that the algorithm achieves a word-level accuracy of 74.8% and a phoneme-level accuracy of 94.5% on an out-of-vocabulary test set, both higher than the Phonetisaurus method. The system can effectively support the construction of Russian pronunciation dictionaries.

14.
With the growth of social networks, new words appear constantly. New words often signal social hot topics and carry public sentiment, so identifying new words and determining their sentiment polarity offers a new approach to predicting public mood. This work builds a deep conditional random field model for sequence labeling, incorporating features such as part of speech, single-character position, and word-formation ability, combined with third-party dictionaries such as crowdsourced web dictionaries. Traditional sentiment-lexicon methods struggle to judge the sentiment of new words. A neural-network language model represents each word as a K-dimensional sense vector; by finding the known words closest to a new word in the vector space, the new word's polarity is judged from those words' polarities and their distances to the new word. Experiments on new word discovery and sentiment determination over the Peking University corpus validate the proposed model and method: the F-measure for new word detection is 0.991, and sentiment recognition accuracy is 70%.
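The nearest-neighbour polarity judgment can be sketched as follows: the k closest known words in the embedding space vote on the new word's polarity, weighted by cosine similarity. The ±1 polarity encoding, the weighting scheme, and all names are assumptions for illustration; the paper only states that polarity is derived from neighbours' polarities and distances.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def new_word_sentiment(new_vec, known, k=3):
    """Judge a new word's polarity from its k nearest neighbours in the
    embedding space.  `known` maps word -> (vector, polarity), polarity
    encoded as +1 (positive) or -1 (negative)."""
    neighbours = sorted(known.values(),
                        key=lambda wp: cosine(new_vec, wp[0]),
                        reverse=True)[:k]
    score = sum(cosine(new_vec, vec) * pol for vec, pol in neighbours)
    return 1 if score >= 0 else -1
```

With real embeddings the vectors would come from the trained language model; the toy 2-dimensional vectors in the usage test merely place positive and negative words in opposite corners of the space.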

15.
Building an open, authoritative database of handwritten Mongolian is the foundation for researching and developing Mongolian handwriting recognition systems. Building on studies of Mongolian encoding, word formation, and grammar, this paper releases MHW, a large-vocabulary offline handwritten Mongolian database. The training set consists of 5,000 words with 20 samples collected per word, 100,000 samples in total; test set I contains 5,000 samples and test set II contains 14,085 samples. Exploiting the variable-length property of written Mongolian, an automatic error-detection algorithm was developed to improve the reliability of the database. The database was evaluated on three common handwriting recognition models, among which a recurrent-neural-network-based model performed best, achieving a word error rate of 2.20% on test set I and 5.55% on test set II under a dictionary-constrained condition.

16.
17.
In medical named entity recognition, accuracy suffers because of the large number of specialized medical terms and the nonstandard language found in the corpora. To recognize out-of-vocabulary medical terms and cope with nonstandard language, a multi-granularity named entity recognition model is proposed that combines Lattice-LSTM with N-gram-based new word discovery. New words are extracted from medical dialogue corpora with an N-gram algorithm to build a medicine-related dictionary; the Lattice-LSTM model then encodes the input characters together with all words that match the dictionary, with gate structures letting the model select the most relevant characters and words. Lattice-LSTM can use the discovered new word information to recognize out-of-vocabulary medical terms, yielding better recognition results.
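A frequency-only sketch of the N-gram candidate extraction might look like the following; real new-word discovery typically adds mutual-information and boundary-entropy filters, which the abstract does not detail, so treat the frequency threshold and the names as assumptions.

```python
from collections import Counter

def ngram_candidates(corpus_lines, n_range=(2, 4), min_freq=3):
    """Count all character n-grams in the corpus and keep the frequent ones
    as candidate new words for the domain dictionary.

    corpus_lines -- iterable of strings (e.g. medical dialogue utterances)
    n_range      -- inclusive (min, max) n-gram lengths to consider
    min_freq     -- minimum corpus frequency for a candidate
    """
    counts = Counter()
    for line in corpus_lines:
        for n in range(n_range[0], n_range[1] + 1):
            for i in range(len(line) - n + 1):
                counts[line[i:i + n]] += 1
    return [gram for gram, c in counts.items() if c >= min_freq]
```

The surviving candidates would then populate the dictionary that Lattice-LSTM matches against each input sentence's character sequence.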

18.
19.
谢飞, 龚声蓉, 刘纯平, 季怡. 《计算机科学》 2015, 42(11): 293-298
Visual-word-based human action recognition improves accuracy by adding mid-level semantic information to the features. However, interference between foreground and background during visual word extraction weakens the expressive power of the visual words. A visual word generation method combining local and global features is proposed. The method first detects the foreground human region with a saliency map, detects spatio-temporal interest points in the human region using a proposed dynamic threshold matrix that applies different thresholds in different areas, and computes surrounding 3D-SIFT features to describe local information. On this basis, a histogram-of-optical-flow feature describes the global motion of the action. Local and global features are then fused into visual words via spectral clustering. Experiments show that, compared with popular local-feature visual word generation methods, the proposed method improves the recognition rate by 6.4% over the average on the simple-background KTH dataset and by 6.5% over the average on the complex-background UCF dataset.

20.
This paper proposes an automatic Tibetan-syllable proofreading algorithm that combines rules with Tibetan syllable-formation analysis, using neither a Tibetan syllable dictionary nor a large-scale corpus. By studying the grammar of Tibetan syllable formation, the structural features of Tibetan syllables are obtained; the letter combinations of a syllable are then processed in segments, simplifying the complexity of syllable formation, and formation rules are derived for each segment, after which proofreading is performed according to the rules. Experiments show that the system achieves a 100% error-detection rate on modern Tibetan syllables.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号