Similar Documents
20 similar documents found.
1.
Word spotting has become a field of strong research interest in document image analysis in recent years. Recently, AttributeSVMs were proposed, which predict a binary attribute representation (Almazán et al. in IEEE Trans Pattern Anal Mach Intell 36(12):2552–2566, 2014). At the time, this influential method defined the state of the art in segmentation-based word spotting. In this work, we present an approach for learning attribute representations with convolutional neural networks (CNNs). By taking a probabilistic perspective on training CNNs, we derive two different loss functions for binary and real-valued word string embeddings. In addition, we propose two different CNN architectures, specifically designed for word spotting, which can be trained in an end-to-end fashion. In a number of experiments, we investigate the influence of different word string embeddings and optimization strategies. We show that our attribute CNNs achieve state-of-the-art results for segmentation-based word spotting on a large variety of data sets.
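The binary attribute representation referenced above is, in Almazán et al.'s formulation, a PHOC (pyramidal histogram of characters) vector. The sketch below is a minimal illustration; the alphabet, the level set, and the "half-overlap" occupancy convention are assumptions chosen for simplicity, not taken from the paper.

```python
# Sketch of a PHOC-style binary word string embedding: at each pyramid
# level the word is split into equal regions, and a bit is set for
# every character whose position falls inside a region.
# Alphabet, levels, and overlap rule are illustrative assumptions.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def phoc(word, levels=(1, 2, 3)):
    word = word.lower()
    n = len(word)
    vec = []
    for L in levels:
        for region in range(L):
            lo, hi = region / L, (region + 1) / L
            bits = [0] * len(ALPHABET)
            for i, ch in enumerate(word):
                # character occupancy interval, normalized to [0, 1]
                c_lo, c_hi = i / n, (i + 1) / n
                # assign the character to the region if the overlap
                # covers at least half the character's own extent
                overlap = min(hi, c_hi) - max(lo, c_lo)
                if ch in ALPHABET and overlap >= 0.5 * (c_hi - c_lo):
                    bits[ALPHABET.index(ch)] = 1
            vec.extend(bits)
    return vec
```

An attribute CNN would then be trained with a sigmoid output layer and a binary cross-entropy loss against such target vectors.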

2.
In this paper, we present a methodology for segmenting handwritten documents into their distinct entities, namely text lines and words. Text line segmentation is achieved by applying a Hough transform to a subset of the document image's connected components. A post-processing step corrects possible false alarms, detects text lines that the Hough transform failed to create and, finally, separates vertically connected characters using a novel method based on skeletonization. Word segmentation is addressed as a two-class problem. The distances between adjacent overlapped components in a text line are calculated by combining two distance metrics, and each distance is categorized as either an inter-word or an intra-word distance in a Gaussian mixture modeling framework. The proposed methodology is evaluated with a consistent and concrete protocol that uses suitable performance measures to compare the text line and word segmentation results against the corresponding ground truth annotation. Its efficiency is demonstrated by experiments on two different datasets: (a) the test set of the ICDAR2007 handwriting segmentation competition and (b) a set of historical handwritten documents.
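The inter-/intra-word gap classification described above can be sketched as a 1-D two-component Gaussian mixture fitted by EM: one component models small intra-word gaps, the other large inter-word gaps. The median-split initialization and the plain EM loop below are simplified assumptions, not the paper's exact setup.

```python
import math

# Fit a two-component 1-D Gaussian mixture to gap distances via EM,
# then classify a gap as inter-word if the wide-gap component wins.

def fit_two_gaussians(xs, iters=50):
    xs = sorted(xs)
    m = len(xs) // 2  # init: split at the median
    mu = [sum(xs[:m]) / m, sum(xs[m:]) / (len(xs) - m)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate means, variances, mixing weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
            pi[k] = nk / len(xs)
    return mu, var, pi

def is_inter_word(x, mu, var, pi):
    k_hi = 0 if mu[0] > mu[1] else 1  # component with the larger mean
    p = [pi[k] / math.sqrt(2 * math.pi * var[k])
         * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
    return p[k_hi] > p[1 - k_hi]
```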

3.
4.
The retrieval of information from scanned handwritten documents is becoming vital with the rapid increase of digitized documents, and word spotting systems have been developed to search for words within documents. These systems are based on either template matching or learning. This paper presents a coherent learning-based Arabic handwritten word spotting system that adapts to the nature of Arabic handwriting, in which there may be no clear boundaries between words. Consequently, the system recognizes Pieces of Arabic Words (PAWs), then reconstructs and spots words using language models. The proposed system produced promising results for Arabic handwritten word spotting when tested on the CENPARMI Arabic documents database.

5.
We propose a method to perform text searches on handwritten word image databases when no ground-truth data is available to learn models or select example queries. The approach proceeds by synthesizing multiple images of the query string using different computer fonts. While this idea has been successfully applied to printed documents in the past, its application to the handwritten domain is not straightforward. Indeed, the domain mismatch between queries (synthetic) and database images (handwritten) leads to poor accuracy. Our solution is to represent the queries with robust features and use a model that explicitly accounts for the domain mismatch. While the model is trained using synthetic images, its generative process produces samples according to the distribution of handwritten features. Furthermore, we propose an unsupervised method to perform font selection, which has a significant impact on accuracy. Font selection is formulated as finding an optimal weighted mixture of fonts that best approximates the distribution of handwritten low-level features. Experiments demonstrate that the proposed method is an effective way to perform queries without using any human-annotated example in any part of the process.
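The font-selection step described above — an optimal weighted mixture of fonts approximating the handwritten feature distribution — can be sketched as constrained least squares over feature histograms. The projected-gradient optimizer and the histogram representation below are illustrative assumptions standing in for whatever the paper actually uses.

```python
import numpy as np

# Find non-negative font weights summing to 1 that make the mixture
# of font feature histograms F @ w best approximate the handwritten
# histogram h, via projected gradient descent onto the simplex.

def project_to_simplex(w):
    # Euclidean projection onto {w : w >= 0, sum(w) = 1}
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(w) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(w - theta, 0)

def fit_font_weights(F, h, steps=500, lr=0.1):
    # F: (n_bins, n_fonts) font histograms; h: (n_bins,) target
    w = np.full(F.shape[1], 1.0 / F.shape[1])
    for _ in range(steps):
        grad = 2 * F.T @ (F @ w - h)   # gradient of ||F w - h||^2
        w = project_to_simplex(w - lr * grad)
    return w
```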

6.
A fast method of handwritten word recognition suitable for real-time applications is presented in this paper. Preprocessing, segmentation and feature extraction are implemented using a chain code representation of the word contour. Dynamic matching between characters of a lexicon entry and segments of the input word image is used to rank the lexicon entries in order of best match. A variable duration is defined for each character and used during the matching. Experimental results show that our approach using variable duration outperforms a fixed-duration method in terms of both accuracy and speed. The entire recognition process takes about 200 ms on a single SPARC-10 platform, and a recognition accuracy of 96.8 percent is achieved for a lexicon size of 10 on a database of postal words captured at 212 dpi.
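The variable-duration dynamic matching described above can be sketched as a dynamic program in which each lexicon character consumes between 1 and max_dur consecutive segments of the input word. The segment cost function and the toy labels are assumptions for illustration.

```python
# DP for variable-duration character/segment matching: dp[k][i] is the
# best cost of matching the first k characters of the word to the
# first i input segments. seg_cost is an assumed dissimilarity input.

def match_cost(word, n_segments, seg_cost, max_dur=3):
    INF = float("inf")
    dp = [[INF] * (n_segments + 1) for _ in range(len(word) + 1)]
    dp[0][0] = 0.0
    for k, ch in enumerate(word, start=1):
        for i in range(1, n_segments + 1):
            for d in range(1, min(max_dur, i) + 1):
                prev = dp[k - 1][i - d]
                if prev < INF:
                    dp[k][i] = min(dp[k][i], prev + seg_cost(ch, i - d, i))
    return dp[len(word)][n_segments]

# toy usage: each input segment carries a dominant character label,
# and the cost counts label mismatches within the matched span
segs = list("catt")
def toy_cost(ch, i, j):
    return float(sum(1 for s in range(i, j) if segs[s] != ch))
```

Ranking a lexicon then amounts to sorting entries by their `match_cost` against the same segment sequence.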

7.
In this paper, we present an effective approach for grouping text lines in online handwritten Japanese documents by combining temporal and spatial information. With decision functions optimized by supervised learning, the approach has few artificial parameters and utilizes little prior knowledge. First, the strokes in the document are grouped into text line strings according to off-stroke distances. Each text line string, which may contain multiple lines, is segmented by optimizing a cost function trained by the minimum classification error (MCE) method. At the temporal merge stage, over-segmented text lines (caused by stroke classification errors) are merged using a support vector machine (SVM) classifier for making merge/non-merge decisions. Finally, a spatial merge module corrects the segmentation errors caused by delayed strokes. Misclassified text/non-text strokes (stroke type classification precedes text line grouping) can be corrected at the temporal merge stage. To evaluate the performance of text line grouping, we provide a set of performance metrics for evaluation from multiple aspects. In experiments on a large number of free-form documents in the Tokyo University of Agriculture and Technology (TUAT) Kondate database, the proposed approach achieves an entity detection metric (EDM) rate of 0.8992 and an edit-distance rate (EDR) of 0.1114. For grouping of pure text strokes, the performance reaches an EDM of 0.9591 and an EDR of 0.0669.

8.
International Journal on Document Analysis and Recognition (IJDAR) - In this article, we propose a new approach to segmentation-free word spotting that is based on the combination of three...

9.
An adaptive handwritten word recognition method is presented. A recursive architecture based on the interaction between flexible character classification and deductive decision making is developed. The recognition process starts at an initial coarse level using a minimum number of features, then increases the discrimination power by adding other features adaptively and recursively until the result is accepted by the decision maker. To make the computation feasible, a unified decision metric, recognition confidence, is derived from two measurements: pattern confidence, an absolute measure based on shape features, and lexical confidence, a relative measure of string dissimilarity within the lexicon. A practical implementation and experimental results on reading the handwritten address components of US mail pieces are provided. Up to a 4 percent improvement in recognition performance is achieved compared to a non-adaptive method. The experimental results show that the proposed method has advantages in producing valid answers while using the same number of features as conventional methods.

10.
11.
Text line segmentation in handwritten documents is an important task in the recognition of historical documents. Handwritten document images contain text lines with multiple orientations, touching and overlapping characters between consecutive text lines and different document structures, making line segmentation a difficult task. In this paper, we present a new approach for handwritten text line segmentation solving the problems of touching components, curvilinear text lines and horizontally overlapping components. The proposed algorithm formulates line segmentation as finding the central path in the area between two consecutive lines. This is solved as a graph traversal problem. A graph is constructed using the skeleton of the image. Then, a path-finding algorithm is used to find the optimum path between text lines. The proposed algorithm has been evaluated on a comprehensive dataset consisting of five databases: ICDAR2009, ICDAR2013, UMD, the George Washington and the Barcelona Marriages Database. The proposed method outperforms the state-of-the-art considering the different types and difficulties of the benchmarking data.
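The central-path formulation above can be illustrated as a minimum-cost left-to-right path through the gap between two lines. The sketch below runs Dijkstra on a plain pixel grid with a heavy penalty for crossing ink, whereas the paper builds the graph from the image skeleton; the grid, moves, and penalty are simplified assumptions.

```python
import heapq

# Find the cheapest left-to-right path through a binary image
# (1 = ink, 0 = background), penalizing ink crossings heavily.
# The path may start in any row of the leftmost column.

def separating_path(img, ink_penalty=1000):
    h, w = len(img), len(img[0])
    settled = set()
    pq = [(ink_penalty * img[r][0], r, 0) for r in range(h)]
    heapq.heapify(pq)
    while pq:
        d, r, c = heapq.heappop(pq)
        if (r, c) in settled:
            continue
        settled.add((r, c))
        if c == w - 1:           # first settled node in the last column
            return d, (r, c)     # is the global minimum-cost endpoint
        for dr in (-1, 0, 1):    # step right, optionally up or down
            nr, nc = r + dr, c + 1
            if 0 <= nr < h:
                step = 1 + ink_penalty * img[nr][nc]
                heapq.heappush(pq, (d + step, nr, nc))
    return None
```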

12.
Multiple orientations occur frequently in ancient handwritten documents, where writers updated a document by adding annotations in the margins. Because the margins are narrow, this gives rise to lines in different directions and orientations. Document recognition needs to find the lines wherever they are written, whatever their orientation. We therefore propose in this paper a new approach for extracting multi-oriented lines from scanned documents. Because of the multi-orientation of lines and their dispersion across the page, we use an image meshing that allows the lines to be determined progressively and locally. Once the meshing is established, the orientation is determined by applying the Wigner–Ville distribution to the projection histogram profile. This local orientation is then extended to constrain the orientation in the neighborhood. Afterward, the text lines are extracted locally in each zone by following the orientation and the proximity of connected components. Finally, connected components that overlap or touch across adjacent lines are separated, using a morphological analysis of the terminal letters of Arabic words. The proposed approach has been evaluated on 100 documents, reaching an accuracy of about 98.6%.
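The local orientation estimation above relies on projection histogram profiles. As a simplified stand-in for the Wigner–Ville analysis, the sketch below picks the angle whose projection histogram is peakiest (highest variance), which captures the same intuition: text projected perpendicular to its true orientation produces sharp peaks. The angle range and bin count are assumptions.

```python
import math

# Estimate the dominant text orientation of a point cloud by testing
# candidate angles and scoring the variance of the projection histogram
# (peaky profile = points collapse onto few bins = correct angle).

def estimate_orientation(points, angles=None, n_bins=32):
    if angles is None:
        angles = [math.radians(a) for a in range(-45, 46)]
    best_angle, best_score = 0.0, -1.0
    for a in angles:
        # project each point onto the axis perpendicular to angle a
        proj = [-x * math.sin(a) + y * math.cos(a) for x, y in points]
        lo, hi = min(proj), max(proj)
        span = (hi - lo) or 1.0
        hist = [0] * n_bins
        for p in proj:
            hist[min(n_bins - 1, int((p - lo) / span * n_bins))] += 1
        mean = sum(hist) / n_bins
        var = sum((h - mean) ** 2 for h in hist) / n_bins
        if var > best_score:
            best_score, best_angle = var, a
    return math.degrees(best_angle)
```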

13.
International Journal on Document Analysis and Recognition (IJDAR) - We present a framework for learning an efficient holistic representation for handwritten word images. The proposed method uses a...

14.
This paper presents a complete system able to categorize handwritten documents, i.e. to classify documents according to their topic. The categorization approach is based on the detection of discriminative keywords prior to the use of the well-known tf-idf representation for document categorization. Two keyword extraction strategies are explored. The first performs recognition of the whole document; however, its performance decreases sharply as the lexicon size increases. The second extracts only the discriminative keywords in the handwritten documents. This information extraction strategy relies on the integration of a rejection model (or anti-lexicon model) into the recognition system. Experiments have been carried out on an unconstrained handwritten document database coming from an industrial application concerning the processing of incoming mail. Results show that the discriminative keyword extraction system leads to better recall/precision tradeoffs than the full recognition strategy, and it also outperforms the full recognition strategy on the categorization task.
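The tf-idf back end mentioned above can be sketched in a few lines. Keyword extraction itself (the hard, recognition-dependent part) is assumed to have happened upstream, and the cosine-similarity comparison is a generic choice rather than the paper's.

```python
import math
from collections import Counter

# Turn keyword lists (one per document) into tf-idf vectors over a
# shared vocabulary, then compare documents by cosine similarity.

def tfidf_vectors(docs):
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))  # document frequency
    vocab = sorted(df)
    idf = {w: math.log(n / df[w]) for w in vocab}
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append([tf[w] / len(d) * idf[w] if d else 0.0
                     for w in vocab])
    return vocab, vecs

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```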

15.
16.
Computer-based forensic handwriting analysis requires sophisticated methods for the pre-processing of digitized paper documents, in order to provide high-quality digitized handwriting that represents the original handwritten product as accurately as possible. Because a huge variety of document types must be processed, neither a standardized sequence of processing stages, nor fixed parameter sets, nor fixed image operations are suitable for such pre-processing. Thus, we present an open layered framework that covers adaptation abilities at the parameter, operator, and algorithm levels. Moreover, an embedded module that uses genetic programming can generate specific filters for background removal on the fly. The framework is intended as an assistance system for forensic handwriting experts and has been in use by the Bundeskriminalamt, the federal police bureau of Germany, for two years. In the following, the layered framework is presented, fundamental document-independent filters for the removal of textured and homogeneous backgrounds and for foreground removal are described, as well as aspects of the implementation. Results of applying the framework are also given. Received July 12, 2000 / Revised October 13, 2000

17.
Neural Computing and Applications - Due to the cursive nature, segmentation of handwritten Bangla words into characters and also recognition of the same sometimes become a very challenging problem...

18.
Technical documents, which often have complicated structures, are frequently produced during Architecture/Engineering/Construction (A/E/C) projects and research. Applying information retrieval (IR) techniques directly to long or multi-topic documents often does not lead to satisfactory results. One way to address the problem is to partition each document into several "passages" and treat each passage as an independent document. In this research, a novel passage partitioning approach is designed that generates passages according to domain knowledge, represented by a base domain ontology. Such a passage is herein defined as an OntoPassage. To demonstrate the advantages of the OntoPassage partitioning approach, this research implements a concept-based IR system to illustrate its application. The research also compares the OntoPassage partitioning approach with several conventional passage partitioning approaches to verify its IR effectiveness. It is shown that, with the proposed OntoPassage approach, IR effectiveness on domain-specific technical reports is as good as with conventional passage partitioning approaches. In addition, the OntoPassage approach makes it possible to display the concepts in each passage, so that concept-based IR may be implemented.

19.
In this paper, we present a new text line detection method for handwritten documents. The proposed technique is based on a strategy that consists of three distinct steps. The first step includes image binarization and enhancement, connected component extraction, partitioning of the connected component domain into three spatial sub-domains and average character height estimation. In the second step, a block-based Hough transform is used for the detection of potential text lines while a third step is used to correct possible splitting, to detect text lines that the previous step did not reveal and, finally, to separate vertically connected characters and assign them to text lines. The performance evaluation of the proposed approach is based on a consistent and concrete evaluation methodology.
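The Hough-transform step above can be illustrated with a bare-bones accumulator over connected-component centroids: each point votes for every (rho, theta) line through it, and accumulator peaks correspond to text lines. The bin resolutions and single-peak extraction are simplified assumptions.

```python
import math

# Minimal Hough transform over a point set, using the normal-form
# parameterization rho = x*cos(theta) + y*sin(theta). Returns the
# strongest line as (rho, theta, votes).

def hough_peak(points, n_theta=180, rho_res=1.0):
    max_rho = max(math.hypot(x, y) for x, y in points)
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + max_rho) / rho_res))  # shift to >= 0
            acc[(r, t)] = acc.get((r, t), 0) + 1
    (r, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return (r * rho_res - max_rho, math.pi * t / n_theta, votes)
```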

20.
With the ever-increasing growth of the World Wide Web, there is an urgent need for efficient information retrieval systems that can search and retrieve handwritten documents when presented with user queries. However, unconstrained handwriting recognition remains a challenging task, and its inadequate performance is a major hurdle to providing a robust search experience over handwritten documents. In this paper, we describe our recent research on information retrieval from noisy text derived from imperfect handwriting recognizers. First, we describe a novel term frequency estimation technique that incorporates word segmentation information into the retrieval framework to improve overall system performance. Second, we outline a taxonomy of techniques for addressing the noisy text retrieval task. The first method uses a novel bootstrapping mechanism to refine the OCR'ed text and uses the cleaned text for retrieval. The second method uses the uncorrected, raw OCR'ed text but modifies the standard vector space model to handle noisy text issues. The third method employs robust image features to index the documents instead of noisy OCR'ed text. We describe these techniques in detail and discuss their performance using standard IR evaluation metrics.
