Similar Documents
20 similar documents retrieved.
1.
2.
Segmenting and recognizing character lines that contain touching and broken characters is one of the main difficulties in many practical OCR applications. For printed digit lines with touching and broken characters, this paper proposes a segmentation and recognition scheme based on the Viterbi algorithm, organized as a hierarchical two-pass structure. In the second pass, candidate cut regions are first segmented along non-straight paths found by a Viterbi search that combines gray-level image and binary contour information, yielding valid cut paths; then, using the confidence output by the classifier, a Viterbi algorithm merges the candidate image blocks obtained earlier to perform dynamic segmentation and recognition. Experiments in a real financial document recognition system show that the proposed method copes well with touching and broken characters in printed digit lines and improves the recognition rate and robustness of the system.
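As an illustration of the merging step, the following is a minimal sketch of a Viterbi-style dynamic program that groups candidate image slices using classifier confidence; the `classify` callable and the log-confidence scoring are assumptions for illustration, not the paper's exact formulation.

```python
import math
import numpy as np

def merge_segments(pieces, classify, max_merge=3):
    """Merge over-segmented pieces with a Viterbi-style dynamic program.

    pieces   : list of 2-D arrays (image slices between consecutive candidate cuts)
    classify : assumed callable, image -> (label, confidence in (0, 1])
    """
    n = len(pieces)
    best = [-math.inf] * (n + 1)   # best cumulative log-confidence up to cut i
    best[0] = 0.0
    back = [None] * (n + 1)        # (previous cut, label) for backtracking
    for i in range(1, n + 1):
        for j in range(max(0, i - max_merge), i):
            merged = np.hstack(pieces[j:i])        # join adjacent slices into one candidate
            label, conf = classify(merged)
            score = best[j] + math.log(max(conf, 1e-9))
            if score > best[i]:
                best[i], back[i] = score, (j, label)
    labels, i = [], n
    while i > 0:                   # backtrack to recover the recognized string
        j, label = back[i]
        labels.append(label)
        i = j
    return "".join(reversed(labels))
```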

3.
The segmentation of touching characters is still a challenging task, posing a bottleneck for offline Chinese handwriting recognition. In this paper, we propose an effective over-segmentation method with learning-based filtering using geometric features for single-touching Chinese handwriting. First, we detect candidate cuts by skeleton and contour analysis to guarantee a high recall rate of character separation. A filter is designed by supervised learning and used to prune implausible cuts to improve the precision. Since the segmentation rules and features are independent of the string length, the proposed method can deal with touching strings with more than two characters. The proposed method is evaluated on both the character segmentation task and the text line recognition task. The results on two large databases demonstrate the superiority of the proposed method in dealing with single-touching Chinese handwriting.
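A rough sketch of what a learned cut filter might look like; the geometric features and the logistic-regression filter below are illustrative stand-ins for the features and classifier actually used in the paper.

```python
# Illustrative sketch of pruning candidate cuts with a learned filter.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cut_features(cut_x, comp_left, comp_right, comp_width, stroke_width):
    """Simple geometric features for one candidate cut inside a connected component."""
    rel_pos = (cut_x - comp_left) / comp_width       # relative horizontal position
    left_w  = (cut_x - comp_left) / stroke_width     # width of the left part, in stroke widths
    right_w = (comp_right - cut_x) / stroke_width    # width of the right part, in stroke widths
    return [rel_pos, left_w, right_w]

# Training: X = features of labeled candidate cuts, y = 1 for true cuts, 0 for false ones.
X_train = np.array([[0.5, 8.0, 8.0], [0.1, 1.5, 14.0], [0.9, 14.0, 1.5]])
y_train = np.array([1, 0, 0])
clf = LogisticRegression().fit(X_train, y_train)

def keep_cut(feat, threshold=0.3):
    """Keep a candidate cut only if the filter deems it plausible enough."""
    return clf.predict_proba([feat])[0, 1] >= threshold
```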

4.
5.
Since some Chinese characters in ancient books and documents tend to touch one another, a multi-step segmentation method for handwritten characters in ancient books is proposed. Following the established idea of combining coarse and fine segmentation, the method first performs coarse segmentation by projection, dividing the handwritten characters into touching and non-touching classes. For touching character strings it then abandons the usual serial scheme: initial cut paths are set directly from the statistics of the coarse segmentation, and, following the shortest-cut-path idea, the paths are refined by a minimum-weight search within a local neighborhood of the initial paths, yielding the optimal weighted cut paths. Experiments show that the method solves the problems of under-segmentation and of strings with multiple touching points, effectively improves segmentation accuracy, and has low time complexity and high efficiency.
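The coarse step can be illustrated with a simple vertical-projection split; the gap threshold and the width test below are assumptions for illustration, not the paper's settings.

```python
# A minimal sketch of projection-based coarse segmentation.
# binary_img: 2-D ndarray with 1 = ink, 0 = background.
import numpy as np

def coarse_segments(binary_img, gap_thresh=0):
    """Split a text line at columns whose vertical projection falls to (near) zero."""
    projection = binary_img.sum(axis=0)          # ink pixels per column
    in_run, start, segments = False, 0, []
    for x, count in enumerate(projection):
        if count > gap_thresh and not in_run:
            in_run, start = True, x
        elif count <= gap_thresh and in_run:
            in_run = False
            segments.append((start, x))          # [start, x) is one coarse block
    if in_run:
        segments.append((start, len(projection)))
    return segments

def is_touching(segment, expected_char_width):
    """Blocks much wider than a typical character are flagged as touching strings."""
    start, end = segment
    return (end - start) > 1.5 * expected_char_width
```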

6.
Correct segmentation of handwritten Chinese characters is crucial to their successful recognition. However, due to the many difficulties involved, little work has been reported in this area. In this paper, a two-stage approach is presented to segment unconstrained handwritten Chinese characters. A handwritten Chinese character string is first coarsely segmented according to the background skeleton and vertical projection after proper image preprocessing. Using several geometric features, all possible segmentation paths are evaluated with fuzzy decision rules learned from examples, and unsuitable segmentation paths are discarded. In the fine segmentation stage that follows, the strokes that may contain segmentation points are first identified. Feature points are then extracted from candidate strokes and taken as segmentation point candidates, through each of which a segmentation path may be formed. Geometric features similar to those of the coarse segmentation stage are used, and corresponding fuzzy decision rules are generated to evaluate fine segmentation paths. Experimental results on 1000 Chinese character strings from postal mail show that our approach achieves a reasonably good overall accuracy in segmenting unconstrained handwritten Chinese characters.
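A toy illustration of evaluating a segmentation path with fuzzy rules; the membership functions and the single rule are invented for illustration and are not the rules learned in the paper.

```python
# Hypothetical fuzzy evaluation of a segmentation path.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def path_goodness(gap_width, ink_on_path, char_width):
    """Rule: a path is good if the local gap is wide AND it crosses little ink."""
    wide_gap   = tri(gap_width / char_width, 0.0, 0.3, 1.0)
    little_ink = tri(ink_on_path / char_width, -0.5, 0.0, 0.5)
    return min(wide_gap, little_ink)     # fuzzy AND via the minimum operator

# Paths whose goodness falls below a threshold would be discarded.
print(path_goodness(gap_width=8, ink_on_path=1, char_width=30))
```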

7.
刘阳兴 《计算机应用研究》2011,28(10):3998-4000
To address the shortcomings of existing algorithms for segmenting touching and overlapping characters, a character segmentation algorithm based on polyline cut paths is proposed. The algorithm uses projection to separate touching and overlapping characters from the rest; then, exploiting shape features specific to touching and overlapping characters, it obtains the polyline cut paths between them quickly and accurately with a path-search algorithm that introduces penalty weights. To avoid some characters being wrongly cut into fragments during this process, recognition feedback is used to merge certain character sub-images. Experimental results show that the algorithm adapts well to segmenting mixed printed Japanese and English text and achieves satisfactory segmentation results.
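The penalty-weighted polyline search can be sketched as a row-by-row dynamic program that drops a cut path from the top of the image to the bottom; the penalty values and the move set below are assumptions rather than the paper's exact algorithm.

```python
# Illustrative penalty-weighted cut-path search over a binary image (1 = ink).
import numpy as np

def cut_path(binary_img, x_start, ink_penalty=10.0, bend_penalty=1.0):
    """Return one column index per row, forming a polyline cut starting near x_start."""
    h, w = binary_img.shape
    INF = float("inf")
    cost = np.full((h, w), INF)
    back = np.zeros((h, w), dtype=int)
    cost[0, x_start] = 0.0
    for y in range(1, h):
        for x in range(w):
            for dx in (-1, 0, 1):                       # move straight down or diagonally
                px = x - dx
                if 0 <= px < w and cost[y - 1, px] < INF:
                    c = (cost[y - 1, px]
                         + ink_penalty * binary_img[y, x]   # penalize cutting through ink
                         + bend_penalty * abs(dx))          # penalize bends in the polyline
                    if c < cost[y, x]:
                        cost[y, x], back[y, x] = c, px
    # Backtrack from the cheapest cell in the last row.
    path = [int(np.argmin(cost[h - 1]))]
    for y in range(h - 1, 0, -1):
        path.append(back[y, path[-1]])
    return list(reversed(path))
```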

8.
An off-line handwritten word recognition system is described. Images of handwritten words are matched to lexicons of candidate strings. A word image is segmented into primitives. The best match between sequences of unions of primitives and a lexicon string is found using dynamic programming. Neural networks assign match scores between characters and segments. Two distinctive features are that neural networks assign a confidence that pairs of segments are compatible with the character confidence assignments, and that this confidence is integrated into the dynamic programming. Experimental results are provided on data from the U.S. Postal Service.
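A compact sketch of the dynamic-programming match between a lexicon string and unions of primitives; `char_score` here stands in for the neural-network scorer described in the abstract, and the scoring scheme is an assumption.

```python
import math

def match_word(segments, word, char_score, max_union=3):
    """Best total score of aligning `word` to unions of consecutive `segments`.

    char_score : assumed callable, (list of segments, character) -> score (higher is better)
    """
    n, m = len(segments), len(word)
    NEG = -math.inf
    dp = [[NEG] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for k in range(1, m + 1):
            for j in range(max(0, i - max_union), i):      # union of segments j..i-1
                if dp[j][k - 1] == NEG:
                    continue
                s = char_score(segments[j:i], word[k - 1])
                dp[i][k] = max(dp[i][k], dp[j][k - 1] + s)
    return dp[n][m]

# The lexicon entry with the highest score would be returned as the recognized word:
# best = max(lexicon, key=lambda w: match_word(segments, w, char_score))
```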

9.
This paper presents a novel framework for recognition of Ethiopic characters using structural and syntactic techniques. Graphically complex characters are represented by the spatial relationships of less complex primitives, which form a unique set of patterns for each character. The spatial relationship is represented by a special tree structure, which is also used to generate string patterns of primitives. Recognition is then achieved by matching the generated string pattern against each pattern in the alphabet knowledge base built for this purpose. The recognition system tolerates variations in character parameters such as font type, size and style. The direction field tensor is used as a tool to extract structural features.

10.
Segmentation and recognition of unconstrained handwritten Chinese character strings is a difficult problem in character recognition. Exploiting the characteristics of handwritten dates, a combined recognition method is proposed that integrates holistic (whole-word) recognition with the segmentation and recognition of fixed-length character strings. Holistic recognition treats the string as a whole and requires no complex string segmentation. In the fixed-length segmentation process, the length of the character string is first predicted by recognition, candidate cut lines are then determined by projection and contour analysis, and the optimal cut path is finally selected by recognition. The two schemes are combined by rules, which greatly improves system performance. Experiments on real bill images demonstrate the effectiveness of the method, with a combined segmentation and recognition accuracy of 93.3%.

11.
Word searching in non-structural layouts such as graphical documents is a difficult task due to the arbitrary orientations of text words and the presence of graphical symbols. This paper presents an efficient indexing and retrieval approach for word searching in documents with non-structural layouts. The proposed indexing scheme stores the spatial information of a document's text characters in a character spatial feature table (CSFT). The spatial feature of a text component is derived from information about its neighboring components, and the labeling of multi-scaled, multi-oriented character components is performed with support vector machines. For searching, positional information is obtained from the query string by splitting it into possible combinations of character pairs; each pair is used to locate the corresponding text in the document with the help of the CSFT. The retrieved text components are then joined and ordered into a sequence by spatial information matching, and a string matching algorithm matches the query word against the character-pair sequences in documents. Experimental results are presented on two datasets of graphical documents, a maps dataset and a seal/logo image dataset, and show that the method efficiently searches for query words in unconstrained document layouts of arbitrary orientation.

12.
The board game Fragmind™ poses the following problem: The player has to reconstruct an (unknown) string s over the alphabet $\Sigma$. To this end, the game reports the following information to the player, for every character $x \in \Sigma$: First, the string s is cleaved wherever the character x is found in s. Second, every resulting fragment y is scrambled by a random permutation so that the only information left is how many times y contains each character $\sigma \in \Sigma$. These scrambled fragments are then reported to the player. Clearly, distinct strings can show identical cleavage patterns for all cleavage characters. In fact, even short strings of length 30+ usually have non-unique cleavage patterns. To address this, we introduce a generalization of the game setup called Sequencing from Compomers: we also generate those fragments of s that contain up to k uncleaved characters x, for some small and fixed threshold k. This modification dramatically increases the length of strings that can be uniquely reconstructed. We show that it is NP-hard to decide whether there exists some string compatible with the input data, but we also present a branch-and-bound runtime heuristic to find all such strings: the input data are transformed into subgraphs of the de Bruijn graph, and we search for walks in these subgraphs simultaneously. The above problem stems from the analysis of mass spectrometry data from base-specific cleavage of DNA sequences, and gives rise to a completely new approach to DNA de-novo sequencing.
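The compomer view of a cleaved string can be illustrated in a few lines; this sketch only reproduces the cleavage-and-scramble step described above, not the reconstruction algorithm.

```python
# Cleave the string at a chosen character and report only the multiset of
# characters in every fragment (the "compomers").
from collections import Counter

def compomers(s, cleave_char):
    """Cleave s wherever cleave_char occurs; return the scrambled fragments
    as character-count dictionaries (order within a fragment is lost)."""
    fragments = s.split(cleave_char)
    return [Counter(frag) for frag in fragments if frag]

print(compomers("ACGTACGGT", "G"))
# [Counter({'A': 1, 'C': 1}), Counter({'T': 1, 'A': 1, 'C': 1}), Counter({'T': 1})]
```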

13.
Scene text detection plays a significant role in various applications, such as object recognition, document management, and visual navigation. Instance segmentation based methods have been mostly used in existing research due to their advantages in dealing with multi-oriented texts. However, a large number of non-text pixels exist in the labels during model training, leading to text mis-segmentation. In this paper, we propose a novel multi-oriented scene text detection framework, which includes two main modules: character instance segmentation (one instance corresponds to one character), and character flow construction (one character flow corresponds to one word). We use a feature pyramid network (FPN) to predict character and non-character instances with arbitrary directions. A joint network of FPN and bidirectional long short-term memory (BLSTM) is developed to explore the context information among isolated characters, which are finally grouped into character flows. Extensive experiments are conducted on the ICDAR2013, ICDAR2015, MSRA-TD500 and MLT datasets to demonstrate the effectiveness of our approach. The F-measures are 92.62%, 88.02%, 83.69% and 77.81%, respectively.

14.
Information retrieval in document image databases
With the rising popularity and importance of document images as an information source, information retrieval in document image databases has become a growing and challenging problem. In this paper, we propose an approach with the capability of matching partial word images to address two issues in document image retrieval: word spotting and similarity measurement between documents. First, each word image is represented by a primitive string. Then, an inexact string matching technique is utilized to measure the similarity between the two primitive strings generated from two word images. Based on the similarity, we can estimate how relevant one word image is to the other and, thereby, decide whether one is a portion of the other. To deal with various character fonts, we use a primitive string that is tolerant to serif and font differences to represent a word image. Using this technique of inexact string matching, our method is able to successfully handle the problem of heavily touching characters. Experimental results on a variety of document image databases confirm the feasibility, validity, and efficiency of our proposed approach in document image retrieval.
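A minimal sketch of inexact matching between two primitive strings via weighted edit distance; the cost scheme and the normalization in the comment are illustrative assumptions, not the paper's exact measure.

```python
def inexact_match(p, q, sub_cost=None, gap_cost=1.0):
    """Weighted edit distance between primitive strings p and q (lower = more similar)."""
    if sub_cost is None:
        sub_cost = lambda a, b: 0.0 if a == b else 1.0
    n, m = len(p), len(q)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * gap_cost
    for j in range(1, m + 1):
        d[0][j] = j * gap_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + gap_cost,            # delete a primitive
                          d[i][j - 1] + gap_cost,            # insert a primitive
                          d[i - 1][j - 1] + sub_cost(p[i - 1], q[j - 1]))  # substitute
    return d[n][m]

# Similarity can be normalized by length so partial word images remain comparable:
# sim = 1 - inexact_match(p, q) / max(len(p), len(q))
```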

15.
16.
High-accuracy extraction of airline-ticket characters using a hybrid neural network
This paper proposes a new high-accuracy algorithm for separating characters from the complex backgrounds of airline tickets. The method uses a hybrid neural network based on Principal Components Analysis (PCA) and Learning Vector Quantization (LVQ) as an efficient character extractor. Practical use shows that the character extraction algorithm is highly accurate and provides good input for precise character localization and OCR.
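A rough sketch of the PCA-plus-LVQ idea: project pixel blocks into a low-dimensional space with PCA, then classify them as text or background with LVQ prototypes. The textbook LVQ1 update and the choice of 16 principal components are assumptions, not the paper's hybrid network.

```python
import numpy as np
from sklearn.decomposition import PCA

def train_lvq(X, y, n_proto_per_class=2, lr=0.05, epochs=20, seed=0):
    """Train LVQ1 prototypes on projected feature vectors X with labels y."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_proto_per_class, replace=False)
        protos.append(X[idx].copy())
        labels.extend([c] * n_proto_per_class)
    protos, labels = np.vstack(protos), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            k = np.argmin(((protos - X[i]) ** 2).sum(axis=1))   # nearest prototype
            sign = 1.0 if labels[k] == y[i] else -1.0           # attract or repel (LVQ1 rule)
            protos[k] += sign * lr * (X[i] - protos[k])
    return protos, labels

def predict(protos, labels, X):
    """Label each row of X by its nearest prototype."""
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return labels[d.argmin(axis=1)]

# Usage sketch (hypothetical data): blocks is an (n_samples, n_pixels) array of
# image patches, is_text the 0/1 labels.
# Z = PCA(n_components=16).fit_transform(blocks)
# protos, labels = train_lvq(Z, is_text)
```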

17.
A segmentation method for unconstrained handwritten digit strings
For touching (connected-stroke) characters in unconstrained handwritten digit strings, this paper proposes a digit-string segmentation method that is primarily recognition-based and combines several segmentation strategies, including dissection and holistic recognition. The method targets digit-string segmentation directly but can also be applied to segmenting non-digit strings, and its underlying ideas offer some guidance for segmenting touching handwritten Chinese characters.

18.
Segmentation of handwritten digit strings is an indispensable part of handwritten-digit OCR systems. In practice, boxes are usually used to constrain where the digits may be written, which makes segmentation relatively easy; without such constraints, segmenting handwritten digit strings becomes a hard problem. Addressing this difficulty, a new method for segmenting touching digit strings is proposed. The method first uses principal curves to extract the strokes of character templates, then processes the strokes according to their fuzzy features, and finally completes segmentation based on the confidence scores provided by a character recognizer. To verify the new method, segmentation experiments were carried out on 3,000 real cheques collected from banks, 363 of which contained touching digits, and a segmentation accuracy of 89.68% was obtained. The results show that the algorithm can effectively segment handwritten digit strings with multiple touching characters.

19.
A brief analysis of character strings and string processing is given. Text processing is defined as the processing of character strings to control not only the sequential relations among the characters, but also spatial and form relations among the symbols used to produce a physical display of the character string. Requirements are given for a data structure by which character strings may be represented to facilitate text processing, including independent manipulation of sequential, spatial, and form relations. A Text Processing Code (TPC) meeting these requirements is presented in detail. Several other coding schemes are examined and shown to be inadequate for text processing as defined here.

20.
Approximate string matching has wide applications in network security. Starting from the similarity of Chinese character strings, this paper proposes improving character-level similarity by subdividing individual Chinese characters, analyzes the "clustering" property of Chinese characters, and introduces a Key representation for them; the mapping between characters and Keys is expressed as rules, and methods for acquiring the rules are discussed. A framework for rule-based approximate matching of Chinese strings is designed, a new similarity model is proposed, and the whole pipeline is validated by experiments, demonstrating the advantages of rule-based approximate matching of Chinese strings.
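A hypothetical sketch of the rule-based similarity idea: map each character to a Key via a rule table, then score string similarity character by character. The rule table and the Dice-style overlap score are invented for illustration and are not the paper's model.

```python
RULES = {           # assumed character -> Key mapping (e.g., from component decomposition)
    "木": "MU", "林": "MU-MU", "森": "MU-MU-MU",
}

def char_similarity(a, b):
    """Similarity of two characters as the overlap of their Key components."""
    ka = set(RULES.get(a, a).split("-"))
    kb = set(RULES.get(b, b).split("-"))
    return 2 * len(ka & kb) / (len(ka) + len(kb))

def string_similarity(s, t):
    """Average per-position character similarity, normalized by the longer string."""
    n = max(len(s), len(t))
    if n == 0:
        return 1.0
    total = sum(char_similarity(a, b) for a, b in zip(s, t))
    return total / n

print(string_similarity("林木", "森林"))
```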
