Similar Documents
20 similar documents found (search time: 687 ms)
1.
2.
Document representation and its application to page decomposition
Transforming a paper document to its electronic version in a form suitable for efficient storage, retrieval, and interpretation continues to be a challenging problem. An efficient representation scheme for document images is necessary to solve this problem. Document representation involves techniques of thresholding, skew detection, geometric layout analysis, and logical layout analysis. The derived representation can then be used in document storage and retrieval. Page segmentation is an important stage in representing document images obtained by scanning journal pages. The performance of a document understanding system greatly depends on the correctness of page segmentation and labeling of different regions such as text, tables, images, drawings, and rulers. We use the traditional bottom-up approach based on connected component extraction to efficiently implement page segmentation and region identification. A new document model that preserves top-down generation information is proposed, based on which a document is logically represented for interactive editing, storage, retrieval, transfer, and logical analysis. Our algorithm has high accuracy and takes approximately 1.4 seconds on an SGI Indy workstation for model creation, including orientation estimation, segmentation, and labeling (text, table, image, drawing, and ruler), for a 2550×3300 image of a typical journal page scanned at 300 dpi. The method is applicable to documents from various technical journals and can accommodate moderate amounts of skew and noise.
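The bottom-up approach this abstract describes starts from connected components. A minimal sketch of the idea (our own illustration, not the authors' implementation): label 4-connected foreground pixels with a flood fill and report each component's bounding box as a candidate region.

```python
from collections import deque

def connected_components(bitmap):
    """Label 4-connected foreground pixels (1s); return bounding boxes
    (row_min, col_min, row_max, col_max), one per component."""
    rows, cols = len(bitmap), len(bitmap[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if bitmap[r][c] == 1 and not seen[r][c]:
                # breadth-first flood fill from this seed pixel
                queue = deque([(r, c)])
                seen[r][c] = True
                rmin = rmax = r
                cmin = cmax = c
                while queue:
                    y, x = queue.popleft()
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and bitmap[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((rmin, cmin, rmax, cmax))
    return boxes

# Two separate blobs on a tiny "page"
page = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
]
print(connected_components(page))  # → [(0, 0, 1, 1), (0, 4, 1, 4)]
```

In a real system the boxes would then be grouped and labeled (text, table, image, drawing, ruler) by their size and arrangement.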

3.
Geometric structure analysis of document images: a knowledge-based approach
This paper presents a knowledge-based method for sophisticated geometric structure analysis of technical journal pages. The proposed knowledge base encodes geometric characteristics that are not only common in technical journals but also publication-specific, in the form of rules. The method takes a hybrid of top-down and bottom-up techniques and consists of two phases: region segmentation and identification. Generally, the result of the segmentation process does not have a one-to-one matching with composite layout components. Therefore, the proposed method identifies non-text objects, such as images, drawings, and tables, as well as text objects, by splitting or grouping segmented regions into composite layout components. Experimental results with 372 images scanned from the IEEE Transactions on Pattern Analysis and Machine Intelligence show that the proposed method performed geometric structure analysis successfully on more than 99 percent of the test images.

4.
One of the difficulties in the understanding of document images is document layout analysis, which is the first step in document image modeling. In this paper, a robust system that applies a multilevel-homogeneity structure within a hybrid methodology is proposed to deal with this problem. Our system consists of three main stages: classification, segmentation, and refinement and labeling. Unlike other page segmentation methods, the proposed system includes an efficient algorithm to detect table regions in document images. In addition, to make the application broadly usable, the proposed system is designed to work with documents in a variety of languages. The proposed method was tested on the ICDAR2015 competition dataset (RDCL-2015) and three other published datasets in different languages. The results of these tests show that the accuracy of the proposed system is superior to that of previous methods.

5.
An optimized document image segmentation method
Document images are widely used in digital libraries, e-commerce, e-government, and similar applications, and how to convert, store, and transmit them effectively has become a focus of research. An effective solution is to segment a document image into regions and process each region according to its characteristics. Building on the traditional block-segmentation and layer-segmentation methods, this paper proposes an optimized approach to document image segmentation that combines the two methods in a principled way and achieves better results.

6.
Automatic document processing: A survey

7.
This paper describes an intelligent forms processing system (IFPS) which provides capabilities for automatically indexing form documents for storage/retrieval to/from a document library and for capturing information from scanned form images using intelligent character recognition (ICR). The system also provides capabilities for efficiently storing form images. IFPS consists of five major processing components: (1) An interactive document analysis stage that analyzes a blank form in order to define a model of each type of form to be accepted by the system; the parameters of each model are stored in a form library. (2) A form recognition module that collects features of an input form in order to match it against one represented in the form library; the primary features used in this step are the pattern of lines defining data areas on the form. (3) A data extraction component that registers the selected model to the input form, locates data added to the form in fields of interest, and removes the data image to a separate image area. A simple mask defining the center of the data region suffices to initiate the extraction process; search routines are invoked to track data that extends beyond the masks. Other special processing is called on to detect lines that intersect the data image and to delete the lines with minimum distortion to the rest of the image. (4) An ICR unit that converts the extracted image data to symbol code for input to data base or other conventional processing systems. Three types of ICR logic have been implemented in order to accommodate monospace typing, proportionally spaced machine text, and handprinted alphanumerics. (5) A forms dropout module that removes the fixed part of a form and retains only the data filled in for storage. The stored data can be later combined with the fixed form to reconstruct the original form. 
This provides for extremely efficient storage of form images, making it possible to store a very large number of forms in the system. IFPS is implemented as part of a larger image management system called the Image and Records Management (IRM) system. It is being applied to forms data management in several state government applications.
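The form recognition step above matches an input form against the library using its pattern of lines. A minimal sketch of one way such ruling lines can be found (our own illustration; the coverage threshold is an assumption): keep rows whose ink pixels cover most of the page width.

```python
def detect_horizontal_lines(bitmap, coverage=0.8):
    """Return row indices whose ink pixels cover at least `coverage`
    of the page width -- candidates for horizontal ruling lines."""
    width = len(bitmap[0])
    return [r for r, row in enumerate(bitmap) if sum(row) >= coverage * width]

form = [
    [0] * 10,                          # blank row
    [1] * 10,                          # solid ruling line
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],    # sparse filled-in data
    [1] * 9 + [0],                     # slightly broken ruling line
]
print(detect_horizontal_lines(form))  # → [1, 3]
```

The detected line pattern can then serve as the feature vector matched against each model in the form library.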

8.
Transforming paper documents into XML format with WISDOM++
The transformation of scanned paper documents to a form suitable for an Internet browser is a complex process that requires solutions to several problems. The application of an OCR to some parts of the document image is only one of the problems. In fact, the generation of documents in HTML format is easier when the layout structure of a page has been extracted by means of a document analysis process. The adoption of an XML format is even better, since it can facilitate the retrieval of documents in the Web. Nevertheless, an effective transformation of paper documents into this format requires further processing steps, namely document image classification and understanding. WISDOM++ is a document processing system that operates in five steps: document analysis, document classification, document understanding, text recognition with an OCR, and transformation into HTML/XML format. The innovative aspects described in the paper are: the preprocessing algorithm, the adaptive page segmentation, the acquisition of block classification rules using techniques from machine learning, the layout analysis based on general layout principles, and a method that uses document layout information for conversion to HTML/XML formats. A benchmarking of the system components implementing these innovative aspects is reported. Received June 15, 2000 / Revised November 7, 2000

9.
This paper proposes a method to compare document images in a multilingual corpus, composed of character segmentation, feature extraction, and similarity measurement. In character segmentation, a top-down strategy is used: we apply projection and a self-adaptive threshold to analyze the layout and then segment text lines by horizontal projection. English, Chinese, and Japanese are then distinguished by different methods based on the distribution and ratios of the text lines, and character segmentation is performed with a different strategy for each language. In feature extraction and similarity measurement, four features are used for coarse matching, and a template is then set up. Based on the templates, a fast template matching method using a coarse-to-fine strategy and bit memory is presented for precise matching. The experimental results demonstrate that our method can handle multilingual document images of different resolutions and font sizes with high precision and speed.
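The text-line segmentation by horizontal projection mentioned above can be sketched as follows (a hypothetical illustration, not the authors' code): sum the ink in each row and cut wherever the profile drops to zero.

```python
def text_lines(bitmap):
    """Split rows into (top, bottom) bands wherever the horizontal
    projection profile is non-zero -- one band per text line."""
    profile = [sum(row) for row in bitmap]
    bands, start = [], None
    for r, ink in enumerate(profile):
        if ink and start is None:
            start = r                       # band begins
        elif not ink and start is not None:
            bands.append((start, r - 1))    # band ends on a blank row
            start = None
    if start is not None:                   # band runs to the last row
        bands.append((start, len(profile) - 1))
    return bands

page = [
    [0, 0, 0],
    [1, 1, 0],   # text line 1
    [0, 1, 1],
    [0, 0, 0],   # blank gap
    [1, 0, 1],   # text line 2
]
print(text_lines(page))  # → [(1, 2), (4, 4)]
```

A self-adaptive threshold, as in the paper, would replace the implicit "zero ink" cut-off with one estimated from the profile itself.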

10.
Detection and recognition of textual information in an image or video sequence is important for many applications. The increased resolution and capabilities of digital cameras and faster mobile processing allow for the development of interesting systems. We present an application based on the capture of information presented at a slide-show presentation or at a poster session. We describe the development of a system to process the textual and graphical information in such presentations. The application integrates video and image processing, document layout understanding, optical character recognition (OCR), and pattern recognition. The digital imaging device captures slide/poster images, and the computing module preprocesses and annotates the content. Various problems related to metric rectification, key-frame extraction, text detection, enhancement, and system integration are addressed. The results are promising for applications such as a mobile text reader for the visually impaired. By using powerful text-processing algorithms, we can extend this framework to other applications, e.g., document and conference archiving, camera-based semantics extraction, and ontology creation. Received: 18 December 2003, Revised: 1 November 2004, Published online: 2 February 2005

11.
A method for extracting the logical structure of table-form document images
In recent years, many methods have been proposed, both in China and abroad, for analyzing table-form document images, but few of them address the extraction of a table's logical structure. This paper therefore presents a method for extracting the logical structure of table-form documents. The method consists of three steps: a global partitioning of the whole table, a local logical-structure analysis, and a second global partitioning of the whole table. It emphasizes a combined analysis of the global and local layout structure of the document; compared with earlier methods that determine a table's logical structure only locally, it achieves higher recognition accuracy and can handle table-form documents with more complex structures.

12.
A methodology for automatic identification and segmentation of white matter hyper-intensities appearing in magnetic resonance images of axial brain slices is presented. To this end, a sequence of image processing techniques is employed to form an image in which the white matter hyper-intensities differ markedly from the rest of the objects. This pre-processing stage facilitates the subsequent identification and segmentation of the hyper-intensity volumes. The proposed methodology was tested on 55 magnetic resonance images from six patients. These images were analysed by the proposed system, and the resulting hyper-intensity images were compared with images manually segmented by experts. The experimental results show a mean true-positive rate of 0.9, a mean false-positive rate of 0.7, and a similarity index of 0.7; it is worth noting that the false positives are found mostly within the grey matter and do not cause problems in early diagnosis. The proposed methodology for magnetic resonance image processing and analysis may be useful in the early detection of white matter lesions.
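The evaluation scores quoted above are standard overlap measures between the automatic and manual binary masks; a minimal sketch on flattened masks (our own illustration, with the similarity index taken to be the Dice coefficient):

```python
def tp_rate(auto, manual):
    """Fraction of manually marked voxels also marked automatically."""
    hits = sum(a and m for a, m in zip(auto, manual))
    return hits / sum(manual)

def similarity_index(auto, manual):
    """Dice coefficient: 2|A ∩ M| / (|A| + |M|)."""
    inter = sum(a and m for a, m in zip(auto, manual))
    return 2 * inter / (sum(auto) + sum(manual))

auto   = [1, 1, 0, 0]   # automatic segmentation (flattened mask)
manual = [1, 0, 1, 0]   # expert segmentation
print(tp_rate(auto, manual))           # → 0.5
print(similarity_index(auto, manual))  # → 0.5
```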

13.
Parameter-free geometric document layout analysis
Automatic transformation of paper documents into electronic documents requires geometric document layout analysis at the first stage. However, variations in character font sizes, text line spacing, and document layout structures have long made it difficult to design a general-purpose document layout analysis algorithm, so the use of some parameters has been unavoidable in previous methods. The authors propose a parameter-free method for segmenting document images into maximal homogeneous regions and identifying them as texts, images, tables, and ruling lines. A pyramidal quadtree structure is constructed for multiscale analysis, and a periodicity measure is suggested to find a periodical attribute of text regions for page segmentation. To obtain robust page segmentation results, a confirmation procedure using texture analysis is applied only to ambiguous regions. Based on the proposed periodicity measure, multiscale analysis, and confirmation procedure, we developed a robust method for geometric document layout analysis that is independent of character font sizes, text line spacing, and document layout structures. The proposed method was tested on the document database from the University of Washington and the MediaTeam Document Database. The results of these tests show that the proposed method provides more accurate results than previous ones.

14.
For fax images, to improve page segmentation and classification accuracy and processing speed, connected regions are taken as the processing elements: with suitably chosen thresholds, horizontally and vertically adjacent connected regions are merged, segmenting the image quickly and accurately. The segmentation step is combined with classification: a matrix is built according to the sizes of the connected regions, an eight-dimensional feature vector characterizing each region is extracted, and a BP neural network then classifies page regions into text and non-text. In experiments, the page segmentation accuracy was 89.2% and the classification accuracy 94.22%. The results show that the algorithm segments and classifies fax images quickly and accurately and has strong practical value.
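The threshold-based merging of adjacent connected regions can be pictured in one dimension (a simplified hypothetical sketch; the gap threshold is an assumption): merge boxes on a line whenever the gap between them is small enough.

```python
def merge_adjacent(boxes, gap=3):
    """Merge 1-D intervals (x0, x1) on a line whenever the horizontal
    gap between consecutive boxes is at most `gap` pixels."""
    merged = []
    for x0, x1 in sorted(boxes):
        if merged and x0 - merged[-1][1] <= gap:
            # close enough: extend the previous box
            merged[-1] = (merged[-1][0], max(merged[-1][1], x1))
        else:
            merged.append((x0, x1))
    return merged

# Two nearby character boxes merge into a word; the third stays separate.
print(merge_adjacent([(0, 4), (20, 25), (6, 9)]))  # → [(0, 9), (20, 25)]
```

The full algorithm applies the same idea in both the horizontal and vertical directions on 2-D bounding boxes.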

15.

Datasets of documents in Arabic are urgently needed to promote computer vision and natural language processing research that addresses the specifics of the language. Unfortunately, publicly available Arabic datasets are limited in size and restricted to certain document domains. This paper presents the release of BE-Arabic-9K, a dataset of more than 9000 high-quality scanned images from over 700 Arabic books. Among these, 1500 images have been manually segmented into regions and labeled by their functionality. BE-Arabic-9K includes book pages with a wide variety of complex layouts and page contents, making it suitable for various document layout analysis and text recognition research tasks. The paper also presents a page layout segmentation and text extraction baseline model based on a fine-tuned Faster R-CNN architecture (FFRA). This baseline model yields cross-validation results with an average accuracy of 99.4% and an F1 score of 99.1% for text versus non-text block classification on the 1500 annotated images of BE-Arabic-9K. These results are remarkably better than those of the state-of-the-art Arabic book page segmentation system ECDP. FFRA also outperforms three other prior systems when tested on a competition benchmark dataset, making it a strong baseline for future systems to challenge.


16.
17.
A fingerprint image segmentation algorithm based on gray-level equalization
Aiming at the characteristics of fingerprint images captured with the MBF200 sensor, a new fingerprint image segmentation method is proposed. The method is simple and practical; it segments fingerprint images quickly and effectively and meets the real-time requirements of a fingerprint recognition system. The fingerprint image is first gray-level equalized; the image is then segmented block by block according to its gray-level features; finally, mathematical morphology is applied to repair the foreground edges of the fingerprint image. The method was tested extensively on fingerprint images captured with the MBF200 semiconductor sensor designed in our laboratory, and the experimental results show that it is effective for this type of fingerprint image.
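The block-by-block segmentation by gray-level features can be sketched as follows (a hypothetical simplification; the block size and variance threshold are assumptions, and the morphology step is omitted): blocks with high gray-level variance show ridge/valley contrast and are kept as foreground.

```python
def variance(vals):
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def block_segment(gray, block=2, thresh=50.0):
    """Mark each block as foreground (1) when its gray-level variance is
    high (ridge/valley contrast), else background (0)."""
    rows, cols = len(gray), len(gray[0])
    mask = []
    for r in range(0, rows, block):
        mask_row = []
        for c in range(0, cols, block):
            vals = [gray[y][x]
                    for y in range(r, min(r + block, rows))
                    for x in range(c, min(c + block, cols))]
            mask_row.append(1 if variance(vals) > thresh else 0)
        mask.append(mask_row)
    return mask

gray = [
    [0, 255, 200, 200],   # left block: strong contrast (ridges)
    [255, 0, 200, 200],   # right block: flat background
]
print(block_segment(gray))  # → [[1, 0]]
```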

18.
Effective compound image compression algorithms require compound images to be first segmented into regions such as text, pictures, and background to minimize the loss of visual quality of text during compression. In this paper, a new compound image segmentation algorithm based on the multilayer Mixed Raster Content (MRC) model (foreground/mask/background) is proposed. This algorithm first segments a compound image into different classes. Each class is then transformed to the three-layer MRC model differently, according to the properties of that class. Finally, the foreground and background layers are compressed using JPEG 2000, and the mask layer is compressed using JBIG2. The proposed morphology-based segmentation algorithm designs a binary segmentation mask that accurately partitions a compound image into layers, such as the background layer and the foreground layer. Experimental results show that it is more robust with respect to the font size, style, colour, orientation, and alignment of text on an uneven background. At similar bit rates, our MRC compression with the morphology-based segmentation achieves much higher subjective quality and coding efficiency than state-of-the-art compression algorithms such as JPEG, JPEG 2000, and H.264/AVC-I.

19.
This paper presents a method of page segmentation based on the approximated area Voronoi diagram. The characteristics of the proposed method are as follows: (1) the Voronoi diagram enables us to obtain candidate boundaries of document components from page images with non-Manhattan layout and skew; (2) the candidates are utilized to estimate the inter-character and inter-line gaps, without the use of domain-specific parameters, to select the boundaries. From the experimental results for 128 images with non-Manhattan layout and skew of 0°∼45°, as well as 98 images with Manhattan layout, we have confirmed that the method is effective for the extraction of body text regions and is as efficient as other methods based on connected component analysis.

20.
康厚良  杨玉婷 《图学学报》2022,43(5):865-874
Deep learning techniques, represented by convolutional neural networks (CNNs), have shown excellent performance in image classification and recognition. However, there is no standard, public dataset of Dongba pictographs, so existing deep learning algorithms cannot be borrowed or applied directly. To build an authoritative and effective Dongba character database quickly, the first tasks are to analyze the page layout of published Dongba documents and to extract text lines and Dongba characters from them. Drawing on the structural characteristics of Dongba pictograph document images, this paper presents an automatic text-line segmentation algorithm for Dongba document images. First, a density- and distance-based k-means clustering algorithm determines the number of text-line classes and the classification criterion; then, a second pass over the character blocks corrects segmentation errors and improves the accuracy of the algorithm. The approach makes full use of the structural features of Dongba documents while retaining the objectivity of a machine learning model, free from the influence of subjective experience. Experiments show that the algorithm can segment text lines in Dongba document images, offline handwritten Chinese characters, and Dongba scriptures, and can segment Dongba and Chinese characters within text lines. It is simple to implement, accurate, and adaptable, laying a foundation for building a Dongba character database.
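The k-means clustering step for grouping text lines can be illustrated in one dimension (a hypothetical sketch, e.g. clustering line heights; two clusters and min/max seeding are our assumptions, not the paper's density- and distance-based variant):

```python
def kmeans_1d(values, iters=20):
    """Lloyd's algorithm for two clusters in one dimension,
    seeded with the minimum and maximum values."""
    centroids = [min(values), max(values)]
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            # assign each value to its nearest centroid
            i = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[i].append(v)
        # recompute centroids; keep the old one if a cluster is empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# e.g. text-line heights: short annotation lines vs. tall Dongba lines
heights = [10, 11, 12, 30, 31]
print(kmeans_1d(heights))  # → [11.0, 30.5]
```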


Copyright©北京勤云科技发展有限公司  京ICP备09084417号