Similar Documents
20 similar documents found.
1.
Searching and indexing historical handwritten collections is a very challenging problem. We describe an approach called word spotting, which groups word images into clusters of similar words, using image matching to compute similarity. By annotating "interesting" clusters, an index that links words to the locations where they occur can be built automatically. Image similarities computed using a number of different techniques, including dynamic time warping, are compared. The word similarities are then used for clustering with both K-means and agglomerative clustering techniques. On a subset of the George Washington collection, it is shown that such a word spotting technique can outperform a Hidden Markov Model word-based recognition technique in terms of word error rate. An erratum to this article is available.
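
As a hedged illustration of the matching-plus-clustering pipeline described above, the sketch below computes DTW distances between 1-D word-image feature sequences (column projection profiles are one common choice) and feeds the resulting distance matrix to agglomerative clustering. Function names, the feature choice, and the distance threshold are illustrative, not taken from the paper.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def dtw_distance(a, b):
        """Dynamic time warping distance between two 1-D feature sequences
        (e.g., per-column projection profiles of normalized word images)."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m] / (n + m)          # length-normalized

    def cluster_words(profiles, threshold=0.25):
        """Group word images into clusters of visually similar words."""
        n = len(profiles)
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                dist[i, j] = dist[j, i] = dtw_distance(profiles[i], profiles[j])
        Z = linkage(squareform(dist), method="average")   # agglomerative
        return fcluster(Z, t=threshold, criterion="distance")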

2.
To address the problem of word segmentation in offline handwritten Uyghur text-line images, a clustering algorithm combining fuzzy C-means (FCM) with K-means is proposed. The algorithm classifies gaps into two classes, intra-word distances and inter-word distances. Based on the clustering result, text regions are merged to obtain segmentation points; the connected components within each segment are then labeled and colored. Experiments were conducted on 50 offline handwritten Uyghur text images written by different people, containing 536 lines and 4,002 words in total; the correct segmentation rate reached 80.68%. The experimental results show that the method overcomes the segmentation difficulties caused by irregular inter-word spacing and by overlap between some words in handwritten Uyghur, and that it enables whole-page processing of large handwritten text images.
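
A minimal sketch of the gap-clustering idea, assuming gap widths have already been measured in pixels. A tiny fuzzy C-means (rather than the paper's FCM/K-means fusion) splits the gaps into intra-word and inter-word classes; the toy data, names, and 0.5 membership cutoff are illustrative.

    import numpy as np

    def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100):
        """Tiny fuzzy C-means on 1-D data (here: gap widths in pixels)."""
        x = np.asarray(x, dtype=float)
        centers = np.percentile(x, np.linspace(10, 90, c))
        for _ in range(iters):
            d = np.abs(x[:, None] - centers[None, :]) + 1e-9
            u = 1.0 / (d ** (2 / (m - 1)))        # membership, up to normalization
            u /= u.sum(axis=1, keepdims=True)
            centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
        return u, centers

    gaps = np.array([3, 4, 2, 15, 3, 18, 5, 16, 2])   # toy pixel gaps
    u, centers = fuzzy_cmeans_1d(gaps)
    inter = int(np.argmax(centers))                   # wider cluster = inter-word
    split_points = np.where(u[:, inter] > 0.5)[0]
    print(split_points)   # indices of gaps treated as word boundaries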

3.
Text line segmentation in handwritten documents is an important task in the recognition of historical documents. Handwritten document images contain text lines with multiple orientations, touching and overlapping characters between consecutive text lines, and varied document structures, making line segmentation a difficult task. In this paper, we present a new approach to handwritten text line segmentation that solves the problems of touching components, curvilinear text lines, and horizontally overlapping components. The proposed algorithm formulates line segmentation as finding the central path in the area between two consecutive lines. This is solved as a graph traversal problem: a graph is constructed from the skeleton of the image, and a path-finding algorithm is used to find the optimum path between text lines. The proposed algorithm has been evaluated on a comprehensive dataset consisting of five databases: ICDAR2009, ICDAR2013, UMD, the George Washington and the Barcelona Marriages databases. The proposed method outperforms the state of the art across the different types and difficulty levels of the benchmarking data.
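
The paper searches a skeleton-derived graph; the sketch below shows the same "cheapest separating path" idea in a simplified dynamic-programming form over the pixel strip between two detected lines. The ink-count cost and the one-row step set are assumptions for illustration, not the paper's formulation.

    import numpy as np

    def separating_path(strip):
        """Left-to-right minimum-ink path through the strip between two
        consecutive text lines (a simplified stand-in for the paper's
        skeleton-graph traversal). strip: 2-D array, ink=1, background=0."""
        h, w = strip.shape
        cost = strip.astype(float)
        acc = np.full((h, w), np.inf)
        acc[:, 0] = cost[:, 0]
        back = np.zeros((h, w), dtype=int)
        for x in range(1, w):
            for y in range(h):
                lo, hi = max(0, y - 1), min(h, y + 2)
                prev = acc[lo:hi, x - 1]
                k = int(np.argmin(prev))
                acc[y, x] = cost[y, x] + prev[k]
                back[y, x] = lo + k
        # Backtrack from the cheapest rightmost cell.
        y = int(np.argmin(acc[:, -1]))
        path = [y]
        for x in range(w - 1, 0, -1):
            y = back[y, x]
            path.append(y)
        return path[::-1]     # one y-coordinate per column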

4.
Information spotting in scanned historical document images is a very challenging task. The joint use of the mechanical press and of human-controlled inking introduced great variability in ink level within a book or even within a page. Consequently, characters are often broken or merged together and thus become difficult to segment and recognize. The limitations of commercial OCR engines for information retrieval in historical document images have inspired alternative means of identifying given words in such documents. We present a word spotting method for scanned documents that finds the word images similar to a query word without assuming a correct segmentation of the words into characters. The connected components are first processed to transform a word pattern into a sequence of sub-patterns, each represented by a sequence of feature vectors. A modified edit distance is proposed to perform segmentation-driven string matching and to compute the Segmentation Driven Edit (SDE) distance between the words to be compared. The set of SDE operations is defined so as to obtain the word segmentations that are most appropriate for evaluating similarity; these operations cope effectively with broken and touching characters in words. The distortion of character shapes is handled by coupling the string matching process with local shape comparisons performed by Dynamic Time Warping (DTW); the costs of the SDE operations are given by the DTW distances. A sub-optimal version of the SDE string matching is also proposed to reduce computation time; it does not lead to a large drop in performance. Queries can be given by example or entered as text from the keyboard; textual queries can be used to spot a word directly, without synthesizing its image, as long as character prototype images are available. Results are presented for different documents and compared with other methods, showing the efficiency of our method.
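
A sketch of the core recurrence, assuming the sub-pattern sequences and a shape-comparison function (e.g., a DTW distance between feature sequences) are given. The paper's full SDE operation set also merges and splits sub-patterns to handle broken and touching characters; the simplified version below keeps only insertion, deletion, and DTW-costed substitution.

    import numpy as np

    def sde_distance(seq_a, seq_b, shape_cost, gap_cost=1.0):
        """Simplified segmentation-driven string matching: an edit distance
        over sequences of sub-patterns where the substitution cost is a
        local shape comparison (e.g., a DTW distance)."""
        n, m = len(seq_a), len(seq_b)
        D = np.zeros((n + 1, m + 1))
        D[:, 0] = np.arange(n + 1) * gap_cost
        D[0, :] = np.arange(m + 1) * gap_cost
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i, j] = min(
                    D[i - 1, j] + gap_cost,       # delete a sub-pattern
                    D[i, j - 1] + gap_cost,       # insert a sub-pattern
                    D[i - 1, j - 1] + shape_cost(seq_a[i - 1], seq_b[j - 1]),
                )
        return D[n, m]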

5.
In this paper, we present a mathematical analysis of the transition proportion for the normal threshold (NorT) based on the transition method. The transition proportion is a parameter of NorT that plays an important role in its theoretical development. We study the mathematical forms of the quadratic equation from which NorT is computed, and through this analysis we describe how the transition proportion affects NorT. We then prove that NorT is robust to inaccurate estimates of the transition proportion. Furthermore, our analysis extends to thresholding methods that rely on Bayes' rule, and it also provides the mathematical basis for potential applications of the transition proportion as a feature for estimating stroke width and detecting regions of interest. In the majority of our experiments, we used a database composed of small images extracted from the DIBCO 2009 and H-DIBCO 2010 benchmarks; however, we also report evaluations on the original (H-)DIBCO benchmarks.
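
As background for the Bayes-rule connection mentioned above (the paper's NorT-specific quadratic, which involves the transition proportion, is not reproduced here): any threshold $t$ obtained from Bayes' rule with normal class-conditional densities solves a generic quadratic, found by equating the weighted foreground and background densities $P_f\,\mathcal{N}(t;\mu_f,\sigma_f)$ and $P_b\,\mathcal{N}(t;\mu_b,\sigma_b)$ and taking logarithms:

$$ \left(\frac{1}{\sigma_b^2}-\frac{1}{\sigma_f^2}\right)t^2 - 2\left(\frac{\mu_b}{\sigma_b^2}-\frac{\mu_f}{\sigma_f^2}\right)t + \left(\frac{\mu_b^2}{\sigma_b^2}-\frac{\mu_f^2}{\sigma_f^2}\right) - 2\ln\frac{P_b\,\sigma_f}{P_f\,\sigma_b} = 0 $$

When $\sigma_f=\sigma_b$ the quadratic term vanishes and the threshold reduces to the midpoint of the two means shifted by the log-prior ratio.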

6.
7.
8.
In this article, our goal is to describe mathematically and experimentally the gray-intensity distributions of the foreground and background of handwritten historical documents. We propose a local pixel model to explain the observed asymmetrical gray-intensity histograms of the foreground and background. Our pixel model states that, locally, the gray-intensity histogram is a mixture of the gray-intensity distributions of three pixel classes. Following our model, we empirically describe the smoothness of the background for different types of images. We show that our model has potential application in binarization: assuming that the parameters of the gray-intensity distributions are correctly estimated, thresholding methods based on mixtures of lognormal distributions outperform thresholding methods based on mixtures of normal distributions. Our model is supported by experiments conducted on images extracted from the DIBCO 2009 and H-DIBCO 2010 benchmarks; we also report results for all four DIBCO benchmarks.
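
A minimal sketch of the lognormal-mixture thresholding idea, under the assumption that fitting a two-component Gaussian mixture to log-intensities is an acceptable stand-in for the paper's estimator (a lognormal in intensity is a normal in log-intensity). The numeric crossover scan and all names are illustrative.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def lognormal_mixture_threshold(gray):
        """Fit a two-component lognormal mixture to gray intensities by
        fitting a Gaussian mixture to log-intensities, then pick the point
        where the darker component stops dominating the posterior."""
        x = np.log(gray.reshape(-1, 1).astype(float) + 1.0)
        gm = GaussianMixture(n_components=2, random_state=0).fit(x)
        t = np.linspace(x.min(), x.max(), 512).reshape(-1, 1)
        resp = gm.predict_proba(t)
        dark = int(np.argmin(gm.means_.ravel()))      # foreground component
        cross = np.argmax(resp[:, dark] < 0.5)        # first crossover index
        return np.exp(t[cross, 0]) - 1.0              # back to intensity scale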

9.
This paper presents a complete system for categorizing handwritten documents, i.e., classifying documents according to their topic. The categorization approach is based on detecting discriminative keywords prior to using the well-known tf-idf representation for document categorization. Two keyword extraction strategies are explored. The first performs full recognition of the whole document; however, its performance decreases sharply as the lexicon size increases. The second extracts only the discriminative keywords in the handwritten documents; this information extraction strategy relies on integrating a rejection model (or anti-lexicon model) into the recognition system. Experiments have been carried out on an unconstrained handwritten document database from an industrial application concerning the processing of incoming mail. Results show that the discriminative keyword extraction system leads to better recall/precision tradeoffs than the full recognition strategy, and that it also outperforms the full recognition strategy on the categorization task.
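
As a hedged illustration of the downstream categorization step, the sketch below builds a tf-idf representation restricted to a spotted-keyword vocabulary and trains a simple classifier on it. The keyword list, transcripts, and topic labels are invented toy data, not from the paper.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical keyword lexicon and transcripts produced by keyword spotting.
    keywords = ["invoice", "refund", "contract", "complaint", "delivery"]
    docs = ["refund refund invoice", "contract delivery", "complaint delivery"]
    topics = ["billing", "logistics", "support"]

    clf = make_pipeline(
        TfidfVectorizer(vocabulary=keywords),   # tf-idf over spotted keywords only
        LogisticRegression(),
    )
    clf.fit(docs, topics)
    print(clf.predict(["delivery contract contract"]))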

10.
This paper proposes an automatic text-independent writer identification framework that integrates an industrial handwriting recognition system, which is used to segment an online handwritten document automatically at the character level. Subsequently, a fuzzy c-means approach is adopted to estimate statistical distributions of character prototypes on a per-alphabet basis; these distributions model the unique handwriting styles of the writers. The proposed system attained an accuracy of 99.2% when identifying writers from a database of 120 writers. The only limitation is that a minimum amount of text must be present in the document for sufficient accuracy to be achieved; we found this minimum to be about 160 characters, roughly equivalent to 3 lines of text. In addition, the discriminative power of different alphabets with respect to accuracy is also reported.
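
A loose sketch of how prototype-membership distributions could serve as writer signatures, assuming the fuzzy c-means memberships of each character have already been computed. The averaging and cosine matching shown here are illustrative choices, not the paper's exact procedure.

    import numpy as np

    def writer_signature(memberships):
        """memberships: (n_chars, n_prototypes) fuzzy c-means memberships of
        a document's characters to character prototypes. The signature is
        the average membership distribution over the document."""
        return memberships.mean(axis=0)

    def identify(query_sig, enrolled):
        """Nearest enrolled writer by cosine similarity of signatures.
        enrolled: dict mapping writer name -> signature vector."""
        names, sigs = zip(*enrolled.items())
        S = np.stack(sigs)
        sims = S @ query_sig / (np.linalg.norm(S, axis=1)
                                * np.linalg.norm(query_sig))
        return names[int(np.argmax(sims))]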

11.
Word spotting has become a field of strong research interest in document image analysis over the last years. Recently, AttributeSVMs were proposed, which predict a binary attribute representation (Almazán et al. in IEEE Trans Pattern Anal Mach Intell 36(12):2552–2566, 2014); at the time, this influential method defined the state of the art in segmentation-based word spotting. In this work, we present an approach for learning attribute representations with convolutional neural networks (CNNs). By taking a probabilistic perspective on training CNNs, we derive two different loss functions for binary and real-valued word string embeddings. In addition, we propose two different CNN architectures specifically designed for word spotting; both can be trained end to end. In a number of experiments, we investigate the influence of different word string embeddings and optimization strategies. We show that our attribute CNNs achieve state-of-the-art results for segmentation-based word spotting on a large variety of data sets.
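
A hedged sketch of the two loss flavors the abstract mentions, written with PyTorch. The attribute dimensionality of 604 is the PHOC size commonly cited for Almazán et al.'s setup; the tensors here are random placeholders rather than real network outputs, and the cosine loss is one reasonable choice for the real-valued case, not necessarily the paper's.

    import torch
    import torch.nn as nn

    batch, n_attr, emb_dim = 8, 604, 300   # 604: common PHOC size

    # Binary attribute embedding: one sigmoid per attribute, trained with
    # binary cross-entropy -- the probabilistic reading of attribute CNNs.
    logits = torch.randn(batch, n_attr)                   # placeholder CNN outputs
    targets = torch.randint(0, 2, (batch, n_attr)).float()
    loss_binary = nn.BCEWithLogitsLoss()(logits, targets)

    # Real-valued word-string embedding: pull the prediction toward the
    # target embedding, here with a cosine-based regression loss.
    pred = torch.randn(batch, emb_dim)
    target_emb = torch.randn(batch, emb_dim)
    loss_real = (1.0 - nn.functional.cosine_similarity(pred, target_emb)).mean()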

12.
Recent work on extracting features of gaps in handwritten text allows these gaps to be classified into inter-word and intra-word classes using suitable classification techniques. In this paper, we first analyse the gap features using mutual information. We then investigate the underlying data distribution using visualisation methods, which suggest a complicated structure that makes the two classes difficult to separate. We apply five different supervised classification algorithms from machine learning to both the original dataset and a dataset with the best features selected by mutual information. Moreover, we improve the classification result with the aid of feature variables of the strokes preceding and following each gap. The classifiers are compared using McNemar's test. We find that SVMs and MLPs outperform the other classifiers and that feature-selection preprocessing works well. The best classification result attained suggests that the technique we employ is particularly suitable for digital ink manipulation at the level of words.
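
A compact sketch of the feature-selection-plus-classifier pipeline described above, using scikit-learn's mutual-information scorer and an RBF SVM. The toy gap features, the value of k, and the synthetic labels are placeholders, not the paper's data.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))                   # toy gap/stroke features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # inter- vs intra-word

    clf = make_pipeline(
        SelectKBest(mutual_info_classif, k=5),   # keep most informative features
        StandardScaler(),
        SVC(kernel="rbf"),
    )
    print(clf.fit(X[:150], y[:150]).score(X[150:], y[150:]))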

13.
The retrieval of information from scanned handwritten documents is becoming vital with the rapid increase of digitized documents, and word spotting systems have been developed to search for words within documents. These systems are either template matching algorithms or learning based. This paper presents a coherent learning-based Arabic handwritten word spotting system that adapts to the nature of Arabic handwriting, in which there may be no clear boundaries between words. Consequently, the system recognizes Pieces of Arabic Words (PAWs), then reconstructs and spots words using language models. The proposed system produced promising results for Arabic handwritten word spotting when tested on the CENPARMI Arabic documents database.
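
To make the PAW-reconstruction idea concrete, here is a toy sketch that greedily re-assembles recognized PAW sequences into lexicon words. The paper uses language models for this step; the greedy matcher, the transliterated PAW strings, and the lexicon below are all invented for illustration.

    # Hypothetical lexicon mapping PAW decompositions to whole words.
    LEXICON = {
        ("kt", "ab"): "kitab",       # illustrative transliterated PAWs
        ("md", "rsa"): "madrasa",
        ("qlm",): "qalam",
    }

    def spot_words(paws):
        """Greedily re-assemble recognized PAWs into lexicon words, longest
        decomposition first, reporting spotted words with their positions."""
        hits, i = [], 0
        max_len = max(len(k) for k in LEXICON)
        while i < len(paws):
            for n in range(min(max_len, len(paws) - i), 0, -1):
                key = tuple(paws[i:i + n])
                if key in LEXICON:
                    hits.append((i, LEXICON[key]))
                    i += n
                    break
            else:
                i += 1     # no word starts here; skip this PAW
        return hits

    print(spot_words(["kt", "ab", "qlm", "md", "rsa"]))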

14.
15.
16.
International Journal on Document Analysis and Recognition (IJDAR) - In this article, we propose a new approach to segmentation-free word spotting that is based on the combination of three...

17.
The dictionary is the foundation of automatic word segmentation, and the quality of the segmentation-dictionary mechanism directly affects the speed and efficiency of Chinese word segmentation. Building on a study of Chinese character encoding schemes and Chinese word segmentation algorithms, this paper designs a new segmentation dictionary, together with a multi-pass hash-based automatic segmentation algorithm built on that dictionary structure. Analysis shows that the algorithm's time complexity is substantially improved.
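
The paper's multi-pass hash structure is more elaborate than what fits here, but the sketch below shows the basic idea it builds on: hash the dictionary on the first character of each word, then segment by forward maximum matching within each bucket. Class and variable names are illustrative.

    from collections import defaultdict

    class HashDict:
        """Segmentation dictionary hashed on the first character; each
        bucket keeps the words sharing that character, longest first."""
        def __init__(self, words):
            self.buckets = defaultdict(list)
            for w in words:
                self.buckets[w[0]].append(w)
            for b in self.buckets.values():
                b.sort(key=len, reverse=True)

        def segment(self, text):
            """Forward maximum matching using the hash buckets."""
            out, i = [], 0
            while i < len(text):
                for w in self.buckets.get(text[i], []):
                    if text.startswith(w, i):
                        out.append(w)
                        i += len(w)
                        break
                else:
                    out.append(text[i])   # unknown single character
                    i += 1
            return out

    d = HashDict(["中文", "分词", "中文分词", "词典"])
    print(d.segment("中文分词词典"))   # ['中文分词', '词典']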

18.
This paper describes an iterative technique for gradually deforming a mask binary image with successive local affine transformation (LAT) operations so as to yield the best match to an input binary image; this is a new and promising approach toward robust handwritten character recognition. The method uses local shapes in the sense that the LAT of each point at one location is optimized using the locations of other points, by means of least-squares data fitting with Gaussian window functions. It also uses a multiscale refinement technique that decreases the spread of the window functions with each iteration. In handwritten character recognition especially, structural information is indispensable for robust shape matching and discrimination; the method is therefore enhanced to incorporate structure explicitly by weighting the above least-squares criterion with similarity measures of both topological and geometric features of the mask and input images. Moreover, deformation constraints are imposed at each iteration, not only to promote and stabilize matching convergence but also to suppress excessive matching. Shape matching experiments have been successfully carried out using skeletons of totally unconstrained handwritten numerals.
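
A minimal sketch of one Gaussian-windowed least-squares LAT step, assuming point correspondences between mask and input skeletons are given. The multiscale shrinking of sigma, the structural feature weighting, and the deformation constraints from the paper are omitted.

    import numpy as np

    def local_affine(src, dst, center, sigma):
        """Gaussian-windowed least-squares fit of a 2-D affine map (A, b)
        so that A @ src[i] + b ~ dst[i], weighting points near `center`
        most heavily. src, dst: (n, 2) arrays of corresponding points."""
        w = np.exp(-np.sum((src - center) ** 2, axis=1) / (2 * sigma ** 2))
        X = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
        W = np.sqrt(w)[:, None]                        # weighted least squares
        sol, *_ = np.linalg.lstsq(W * X, W * dst, rcond=None)
        A, b = sol[:2].T, sol[2]
        return A, b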

19.
A recognition system for general isolated off-line handwritten words using an approximate segment-string matching algorithm is described. The fundamental paradigm is a character-based segment-then-recognize/match strategy. Additional user-supplied contextual information, in the form of a lexicon, guides a graph search to estimate the most likely identity of the word image. The system is designed to operate robustly in the presence of document noise, poor handwriting, and lexicon errors. A pre-processing step is first applied to the image to remove noise artifacts and normalize the handwriting. An oversegmentation approach is used to improve the likelihood of capturing the individual characters embedded in the word. A directed graph is then constructed that contains many possible interpretations of the word image, many of them implausible. The most likely graph path and its associated confidence are computed for each lexicon word to produce a final lexicon ranking. Experiments highlighting the characteristics of this algorithm are given.
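
The graph search over an oversegmented word can be phrased as a small dynamic program; the sketch below scores one lexicon word against n_segments primitive segments, with char_scores standing in for a character recognizer. Repeating it over the lexicon and sorting by score yields the ranking the abstract describes. All names and the max_group limit are assumptions.

    import numpy as np

    def word_score(char_scores, word, n_segments, max_group=3):
        """Dynamic-programming match of a lexicon word against an
        oversegmented image. char_scores(i, j, c) returns the confidence
        that segments i..j-1 together form character c. Returns the best
        total confidence for the whole word."""
        n_chars = len(word)
        best = np.full((n_segments + 1, n_chars + 1), -np.inf)
        best[0, 0] = 0.0
        for i in range(n_segments):
            for k in range(n_chars):
                if best[i, k] == -np.inf:
                    continue
                # Try grouping 1..max_group segments into the next character.
                for j in range(i + 1, min(i + max_group, n_segments) + 1):
                    s = best[i, k] + char_scores(i, j, word[k])
                    best[j, k + 1] = max(best[j, k + 1], s)
        return best[n_segments, n_chars]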

20.