20 similar documents found.
1.
2.
G. Louloudis, B. Gatos, I. Pratikakis, C. Halatsis 《Pattern Recognition》2009,42(12):3169-3183
In this paper, we present a methodology for segmenting handwritten documents into their distinct entities, namely text lines and words. Text line segmentation is achieved by applying the Hough transform to a subset of the document image's connected components. A post-processing step corrects possible false alarms, detects text lines that the Hough transform failed to create, and efficiently separates vertically connected characters using a novel method based on skeletonization. Word segmentation is addressed as a two-class problem: the distances between adjacent overlapped components in a text line are calculated by combining two distance metrics, and each distance is categorized as either an inter-word or an intra-word distance within a Gaussian mixture modeling framework. The proposed methodology is assessed with a consistent and concrete evaluation protocol that uses suitable performance measures to compare the text line and word segmentation results against the corresponding ground truth annotation. Its efficiency is demonstrated by experiments on two different datasets: (a) the test set of the ICDAR2007 handwriting segmentation competition and (b) a set of historical handwritten documents.
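As a loose illustration of the word-gap classification step (not the authors' implementation), the sketch below fits a two-component Gaussian mixture to the gaps between adjacent components of a text line and treats the component with the larger mean as the inter-word class; the toy gap values and the use of a single distance metric are assumptions.

```python
# Minimal sketch: inter- vs. intra-word gap classification with a 2-component GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

def split_words(gaps):
    """gaps: 1-D sequence of distances between adjacent components in a text line."""
    X = np.asarray(gaps, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    labels = gmm.predict(X)
    # The mixture component with the larger mean is taken as "inter-word".
    inter_word = int(np.argmax(gmm.means_.ravel()))
    return labels == inter_word  # True where a word boundary is hypothesized

if __name__ == "__main__":
    gaps = [3, 4, 2, 15, 3, 18, 2, 4, 16]   # toy distances in pixels
    print(split_words(gaps))
```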
3.
4.
Nazih Ouwayed, Abdel Belaïd 《International Journal on Document Analysis and Recognition》2012,15(4):297-314
Multi-orientation occurs frequently in ancient handwritten documents, where writers updated a document by adding annotations in the margins. Because the margins are narrow, this gives rise to lines written in different directions and orientations, and document recognition must find the lines wherever they appear, whatever their orientation. We therefore propose in this paper a new approach for extracting multi-oriented lines from scanned documents. Because the lines are multi-oriented and dispersed over the page, we use an image meshing that allows us to determine the lines progressively and locally. Once the meshing is established, the orientation is determined by applying the Wigner–Ville distribution to the projection histogram profile. This local orientation is then extended to constrain the orientation in the neighborhood. Afterward, the text lines are extracted locally in each zone by following the orientation lines and exploiting the proximity of connected components. Finally, connected components that overlap or touch across adjacent lines are separated, using a morphological analysis of the terminal letters of Arabic words. The proposed approach has been evaluated on 100 documents, reaching an accuracy of about 98.6%.
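For illustration only, the following sketch estimates a dominant orientation inside one mesh zone by rotating the zone and keeping the angle whose horizontal projection profile is most peaked (maximum variance); this is a simpler stand-in for the Wigner–Ville analysis used in the paper, and the angle range and step are assumptions.

```python
# Minimal sketch: local orientation of one mesh zone via projection-profile variance.
import numpy as np
from scipy.ndimage import rotate

def zone_orientation(zone, angles=np.arange(-45, 46, 1)):
    """zone: 2-D binary array (text pixels = 1) for one mesh cell."""
    zone = np.asarray(zone, dtype=float)
    best_angle, best_score = 0.0, -np.inf
    for a in angles:
        rotated = rotate(zone, a, reshape=False, order=0)
        profile = rotated.sum(axis=1)   # horizontal projection histogram
        score = profile.var()           # a peaked profile means aligned text lines
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle
```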
5.
This paper presents a new Bayesian method for unconstrained offline handwritten Chinese text line recognition. In this method, each sample of a real character or non-character in realistic handwritten text lines is jointly recognized by a traditional isolated character recognizer and a character verifier, which requires only a moderate number of handwritten text lines for training. To improve its ability to distinguish real characters from non-characters, the isolated character recognizer is negatively trained using a linear discriminant analysis (LDA)-based strategy, which employs the outputs of a traditional MQDF classifier and the LDA transform to re-compute the posterior probability of isolated character recognition. In tests on 383 text lines from the HIT-MW database, the proposed method achieved character-level recognition rates of 71.37% without any language model and 80.15% with a bi-gram language model. These promising results show the effectiveness of the proposed method for unconstrained offline handwritten Chinese text line recognition.
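Purely as an illustration of the joint decision (and a simplification of the paper's LDA-based posterior re-computation), the sketch below scales each class posterior from an isolated character recognizer by a verifier's probability that the segment is a real character; all probabilities are toy values.

```python
# Minimal sketch: combining recognizer posteriors with a character/non-character verifier.
def joint_scores(class_posteriors, p_is_character):
    """class_posteriors: dict {char: p(char | x, segment is a real character)}."""
    return {c: p * p_is_character for c, p in class_posteriors.items()}

if __name__ == "__main__":
    scores = joint_scores({"手": 0.6, "毛": 0.3, "千": 0.1}, p_is_character=0.85)
    print(max(scores, key=scores.get))   # best character hypothesis
```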
6.
7.
Laurence Likforman-Sulem, Abderrazak Zahour, Bruno Taconet 《International Journal on Document Analysis and Recognition》2007,9(2-4):123-138
There is a huge amount of historical documents in libraries and in various National Archives that have not been exploited electronically. Although automatic reading of complete pages remains, in most cases, a long-term objective, tasks such as word spotting, text/image alignment, authentication and extraction of specific fields are in use today. For all these tasks, a major step is document segmentation into text lines. Because of the low quality and the complexity of these documents (background noise, artifacts due to aging, interfering lines), automatic text line segmentation remains an open research field. The objective of this paper is to present a survey of existing methods, developed during the last decade and dedicated to documents of historical interest.
8.
Xujun Peng, Srirangaraj Setlur, Venu Govindaraju, Sitaram Ramachandrula 《Pattern Recognition Letters》2012,33(7):943-950
A boosted tree classifier is proposed to segment machine-printed, handwritten and overlapping text in documents with handwritten annotations. Each node of the tree-structured classifier is a binary weak learner. Unlike a standard decision tree (DT), which considers only a subset of the training data at each node and is susceptible to over-fitting, we boost the tree using all available training data at each node, with different weights. The proposed method is evaluated on a set of machine-printed documents that were annotated by multiple writers in an office/collaborative environment. The experimental results show that the proposed algorithm outperforms other methods on an imbalanced data set.
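As a generic stand-in (not the authors' node-wise boosted tree), the sketch below trains a standard AdaBoost ensemble of depth-1 decision trees, which likewise reweights and reuses all training samples at every round; the features and labels are random toy data.

```python
# Minimal sketch: boosting weak tree learners for printed/handwritten text classification.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 8))          # toy features for text patches
y = rng.integers(0, 2, 200)       # 0 = machine-printed, 1 = handwritten

# AdaBoost's default base learner is a depth-1 decision tree (a stump);
# every round refits it on all samples with updated weights.
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```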
9.
Kaur Rupinder Pal, Jindal M. K., Kumar Munish, Jindal Simpel Rani, Tuteja Shikha 《Pattern Analysis & Applications》2022,25(1):189-208
Segmentation is a significant stage in the recognition of old newspapers. Text-line extraction in documents such as newspaper pages, which have very complex...
10.
Objective: Handwritten text line extraction is an important fundamental step in document image processing. In unconstrained handwritten text images, text lines exhibit varying degrees of skew, curvature, crossing and touching, and traditional geometric segmentation or clustering methods often cannot guarantee accurate segmentation of text line boundaries. To address these problems, a handwritten text line extraction method based on a joint text-line regression-clustering framework is proposed. Method: First, a bank of anisotropic Gaussian filters analyzes the image at multiple scales and orientations; ridge structures are detected via the smearing (trailing) effect to extract the main body regions of the text lines, which are then skeletonized to obtain a text line regression model. Next, a superpixel representation is built with connected components as the basic image units. To cluster the superpixels, a pixel-superpixel-text line hierarchical random field model is constructed, and energy function optimization clusters the superpixels and labels the text line each belongs to. On this basis, all character blocks touching across adjacent lines are detected, and a regression-line-based k-means clustering algorithm, guided by the regression model, clusters the pixels of the touching characters, separating them and labeling their text lines. Finally, text line label switches enable controlled display and targeted extraction of text line pixels, so that geometric segmentation is no longer required. Results: Text line extraction tests on the HIT-MW offline handwritten Chinese document dataset give a detection rate (DR) of 99.83% and a recognition accuracy (RA) of 99.92%. Conclusion: Experiments show that, compared with traditional piecewise projection analysis, minimum spanning tree clustering and seam carving, the proposed joint regression-clustering framework improves the controllability and segmentation accuracy of text line boundaries. While extracting handwritten text lines efficiently, it largely avoids interference from adjacent text lines and offers high accuracy and robustness.
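As a simplified stand-in for the regression-line-guided k-means step (not the paper's implementation), the sketch below assigns each pixel of a touching-character block to the text line whose regression curve passes closest to it; the quadratic fits and pixel coordinates are invented for illustration.

```python
# Minimal sketch: assigning touching-character pixels to text lines by distance
# to each line's regression curve.
import numpy as np

def assign_to_lines(pixels, line_fits):
    """pixels: (N, 2) array of (x, y); line_fits: one np.poly1d per text line."""
    x, y = pixels[:, 0], pixels[:, 1]
    # distance of each pixel to each regression curve, evaluated at the pixel's x
    d = np.stack([np.abs(y - f(x)) for f in line_fits], axis=1)
    return d.argmin(axis=1)   # index of the nearest text line

if __name__ == "__main__":
    fits = [np.poly1d([0.001, -0.2, 40.0]), np.poly1d([0.0, 0.1, 95.0])]
    pts = np.array([[10, 42], [12, 96], [30, 37]])
    print(assign_to_lines(pts, fits))
```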
11.
12.
Moftah Elzobi, Ayoub Al-Hamadi, Zaher Al Aghbari, Laslo Dings 《International Journal on Document Analysis and Recognition》2013,16(3):295-308
Even though a great deal of research has been conducted to solve the problem of unconstrained handwriting recognition, an effective solution is still a serious challenge. In this article, we address two issues related to Arabic handwriting recognition. First, we present IESK-arDB, a new multi-purpose offline Arabic handwritten database. It is publicly available and contains more than 4,000 word images, each provided with a binary version, a thinned version and ground truth information stored in a separate XML file. Additionally, it contains around 6,000 character images segmented from the database. A letter frequency analysis showed that the database exhibits letter frequencies similar to those of large corpora of digital text, which demonstrates its usefulness. Second, we propose a multi-phase segmentation approach that starts by detecting and resolving sub-word overlaps, then hypothesizes a large number of segmentation points that are later reduced by a set of heuristic rules. The proposed approach has been successfully tested on IESK-arDB, with very promising results that indicate its efficiency.
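For illustration only (not the paper's exact rules), the sketch below over-segments a word image by placing candidate cut points at low values of the vertical projection profile and then prunes them with a simple minimum-spacing heuristic; the threshold ratio and minimum width are assumptions.

```python
# Minimal sketch: over-segmentation hypotheses pruned by a heuristic spacing rule.
import numpy as np

def candidate_cuts(binary, low_ratio=0.2, min_width=6):
    """binary: 2-D array with text pixels = 1; returns kept x positions of cuts."""
    profile = binary.sum(axis=0)                       # vertical projection profile
    threshold = low_ratio * profile.max()
    hypotheses = [x for x, v in enumerate(profile) if v <= threshold]
    kept, last = [], -min_width
    for x in hypotheses:                               # heuristic: enforce minimum spacing
        if x - last >= min_width:
            kept.append(x)
            last = x
    return kept
```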
13.
Edith C. Herrera-Luna, Edgardo M. Felipe-Riveron, Salvador Godoy-Calderon 《Pattern Recognition Letters》2011,32(8):1139-1144
In this paper, a new approach is presented for identifying the author of a handwritten text. The problem is solved with a simple yet powerful modification of the so-called ALVOT family of supervised classification algorithms, using a novel differentiated-weighting scheme. Compared to previously published approaches, the proposed method significantly reduces the number and complexity of the features to be extracted from the text. In addition, the specific combination of line-level and word-level features introduces an eclectic paradigm between texture-related and structure-related approaches.
14.
Javad Sadri, Mohammad J. Jalili, Younes Akbari, Atefeh Foroozandeh 《Pattern Analysis & Applications》2014,17(4):849-862
Millions of handwritten bank cheques are processed manually every day in banks and other financial institutions all over the world. Replacing manual cheque processing with an automatic cheque reader system saves both time and processing costs. In recent years, systems such as A2iA have been developed to automate the processing of Latin-script cheques. Normally, these systems rely on standard cheque structures such as Check 21 in the USA or Check 006 in Canada. Traditional (currently used) Persian bank cheques have major shortcomings that lead to low accuracy and high computational cost in automatic processing. In this paper, to address these problems, a novel structure for Persian handwritten bank cheques is presented. The importance and advantages of this new structure are shown through several experiments on a database of cheques created according to it, comprising 500 handwritten bank cheques. Experimental results verify the usefulness of the new structure for the automatic processing of Persian handwritten bank cheques and provide a standard guideline comparable to Check 21 or Check 006.
15.
16.
Zi-Rui Wang, Jun Du, Wen-Chao Wang, Jian-Fang Zhai, Jin-Shui Hu 《International Journal on Document Analysis and Recognition》2018,21(4):241-251
This paper proposes an effective segmentation-free approach using a hybrid neural network hidden Markov model (NN-HMM) for offline handwritten Chinese text recognition (HCTR). In a general Bayesian framework, the handwritten Chinese text line is modeled sequentially by HMMs, each representing one character class, while an NN-based classifier calculates the posterior probabilities of all HMM states. The key issues of feature extraction, character modeling and language modeling are comprehensively investigated to show the effectiveness of the NN-HMM framework for offline HCTR. First, a conventional deep neural network (DNN) architecture is studied with a well-designed feature extractor; for training, label refinement using forced alignment and sequence training yield significant gains on top of the frame-level cross-entropy criterion. Second, a deep convolutional neural network (DCNN) with automatically learned discriminative features demonstrates its superiority over the DNN in the HMM framework. Moreover, to distinguish the highly confusable classes arising from the large vocabulary of Chinese characters, the NN-based classifier outputs 19,900 HMM states as classification units via high-resolution modeling within each character. On the ICDAR 2013 competition task of the CASIA-HWDB database, DNN-HMM yields a promising character error rate (CER) of 5.24%, striking a good trade-off between computational complexity and recognition accuracy, while DCNN-HMM achieves, to the best of our knowledge, the best published CER of 3.53%.
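As a generic illustration of the hybrid decoding step (not the authors' full 19,900-state system), the sketch below converts frame-level state posteriors from a network into scaled log-likelihoods by subtracting log state priors, the usual preprocessing before HMM Viterbi decoding; all values are toy data.

```python
# Minimal sketch: hybrid NN-HMM scaled likelihoods, p(x|s) proportional to p(s|x) / p(s).
import numpy as np

def scaled_log_likelihoods(log_posteriors, state_priors, floor=1e-8):
    """log_posteriors: (T, S) NN outputs per frame; state_priors: (S,) from training counts."""
    return log_posteriors - np.log(np.maximum(state_priors, floor))

if __name__ == "__main__":
    T, S = 4, 5
    post = np.log(np.random.dirichlet(np.ones(S), size=T))   # toy frame posteriors
    priors = np.random.dirichlet(np.ones(S))                  # toy state priors
    print(scaled_log_likelihoods(post, priors).shape)         # (4, 5)
```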
17.
Khaoula Elagouni, Christophe Garcia, Franck Mamalet, Pascale Sébillot 《International Journal on Document Analysis and Recognition》2014,17(1):19-31
Text embedded in multimedia documents carries important semantic information that helps to access the content automatically. This paper proposes two neural-based optical character recognition (OCR) systems that handle the text recognition problem in different ways. The first approach segments a text image into individual characters before recognizing them, while the second avoids the segmentation step by integrating a multi-scale scanning scheme that jointly localizes and recognizes characters at each position and scale. Linguistic knowledge is also incorporated into both schemes to remove errors due to recognition confusions. Both OCR systems are applied to caption texts embedded in videos and in natural scene images and provide outstanding results, showing that the proposed approaches outperform state-of-the-art methods.
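A minimal sketch of what a multi-scale scanning scheme enumerates, assuming square windows, a fixed stride and three scales (all invented for illustration); in a segmentation-free pipeline each window would be passed to a character classifier.

```python
# Minimal sketch: enumerating (position, scale) windows for multi-scale scanning.
def scan_windows(image_width, image_height, scales=(16, 24, 32), stride=4):
    for size in scales:
        for x in range(0, image_width - size + 1, stride):
            for y in range(0, image_height - size + 1, stride):
                yield x, y, size   # top-left corner and window side length

if __name__ == "__main__":
    print(sum(1 for _ in scan_windows(128, 32)))   # number of candidate windows
```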
18.
Malakar Samir, Sarkar Ram, Basu Subhadip, Kundu Mahantapas, Nasipuri Mita 《Neural Computing & Applications》2021,33(1):449-468
Recognition of unconstrained handwritten word images is an interesting research problem which becomes more challenging when lexicon-free words are considered....
19.
Managing complex documents over the WWW: a case study for XML
Ciancarini P., Vitali F., Mascolo C. 《IEEE Transactions on Knowledge and Data Engineering》1999,11(4):629-638
The use of the World Wide Web as a communication medium for knowledge engineers and software designers is limited by the lack of tools for writing, sharing and verifying documents written with design notations. For instance, the Z language has a rich set of mathematical characters and requires graphic-rich boxes and schemas for structuring a specification document. It is difficult to integrate Z specifications and text on WWW pages written with HTML, and traditional tools are not suited to the task. On the other hand, a newly proposed standard for markup languages, namely XML, allows one to define any set of markup elements; hence, it is suitable for describing any kind of notation. Unfortunately, the proposed standard for rendering XML documents, namely XSL, provides only text-based (although sophisticated) rendering of XML documents, and thus cannot be used for more complex notations. We present a Java-based tool for applying any notation to elements of XML documents, so that these documents can be shown on current-generation WWW browsers with Java capabilities. A complete package for displaying Z specifications has been implemented and integrated with standard text parts. Being a complete rendering engine, it allows text parts and Z specifications to be freely intermixed, and all the standard features of XML (including HTML links and form elements) are available both outside and inside Z specifications. Furthermore, the extensibility of our engine allows additional notations to be supported and integrated with the ones we describe.
20.
In the computer processing of engineering drawings, separating characters from graphics is a very important step, but characters that touch graphics are difficult to handle. After analyzing the principles and implementations of several character/graphics segmentation techniques, this paper proposes a method for segmenting characters that touch graphics: primitive masking. Experimental results show that this method is very effective for segmenting characters touching line segments in orthogonal directions, and it is particularly suitable for cases where all or several characters in a string touch the graphics.
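As a generic recipe in the spirit of primitive masking (not the paper's exact implementation), the sketch below removes long horizontal and vertical strokes from a binary drawing with morphological opening, leaving character pixels behind; the kernel length is an assumption.

```python
# Minimal sketch: masking long orthogonal line primitives so touching characters remain.
import cv2
import numpy as np

def mask_orthogonal_lines(binary, min_len=40):
    """binary: uint8 image with foreground = 255."""
    h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (min_len, 1))
    v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, min_len))
    lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel) | \
            cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel)
    return cv2.subtract(binary, lines)   # character strokes left after masking lines

if __name__ == "__main__":
    img = np.zeros((60, 120), np.uint8)
    cv2.line(img, (0, 30), (119, 30), 255, 2)                             # graphic line
    cv2.putText(img, "A7", (40, 36), cv2.FONT_HERSHEY_SIMPLEX, 0.8, 255, 2)  # touching text
    print(int(mask_orthogonal_lines(img).sum() > 0))                      # character pixels remain
```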