Similar Documents
20 similar documents retrieved.
1.
Cheap, ubiquitous, high-resolution digital cameras have led to opportunities that demand camera-based text understanding, such as wearable computing or assistive technology. Perspective distortion is one of the main challenges for text recognition in camera-captured images, since the camera often does not have a fronto-parallel view of the text. We present a method for perspective recovery of text in natural scenes, where text can appear as isolated words, short sentences or small paragraphs (as found on posters, billboards, shop and street signs, etc.). It relies on the geometry of the characters themselves to estimate a rectifying homography for every line of text, irrespective of the view of the text over a large range of orientations. The horizontal perspective foreshortening is corrected by fitting two lines to the top and bottom of the text, while the vertical perspective foreshortening and shearing are estimated by performing a linear regression on the shear variation of the individual characters within the text line. The proposed method is efficient and fast. We present comparative results showing improved recognition accuracy over the current state of the art.
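A minimal sketch of the rectification step this abstract describes: once the top and bottom lines of a text line have been fitted, the quadrilateral they bound can be mapped to an axis-aligned rectangle by a homography. The line-fitting and shear-regression stages are assumed to have run already; function and parameter names are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def rectify_text_line(image, top_left, top_right, bottom_right, bottom_left,
                      out_height=32):
    """Warp the quadrilateral bounded by the fitted top/bottom lines
    to a fronto-parallel rectangle of fixed height."""
    src = np.float32([top_left, top_right, bottom_right, bottom_left])
    # Preserve the mean width of the quadrilateral so characters keep
    # a plausible aspect ratio after rectification.
    width = int(0.5 * (np.linalg.norm(src[1] - src[0]) +
                       np.linalg.norm(src[2] - src[3])))
    dst = np.float32([[0, 0], [width, 0],
                      [width, out_height], [0, out_height]])
    H = cv2.getPerspectiveTransform(src, dst)  # rectifying homography
    return cv2.warpPerspective(image, H, (width, out_height))
```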

2.
Automatic text segmentation and text recognition for video indexing
Efficient indexing and retrieval of digital video is an important function of video databases. One powerful index for retrieval is the text appearing in them, which enables content-based browsing. We present our new methods for automatic segmentation of text in digital videos. The algorithms we propose make use of typical characteristics of text in videos in order to enable and enhance segmentation performance. The unique features of our approach are the tracking of characters and words over their complete duration of occurrence in a video and the integration of the multiple bitmaps of a character over time into a single bitmap. The output of the text segmentation step is then directly passed to a standard OCR software package in order to translate the segmented text into ASCII. A straightforward indexing and retrieval scheme is also introduced. It is used in the experiments to demonstrate that the proposed text segmentation algorithms, together with existing text recognition algorithms, are suitable for indexing and retrieval of relevant video sequences from a video database. Our experimental results are very encouraging and suggest that these algorithms can be used in video retrieval applications as well as to recognize higher-level semantics in videos.
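The temporal-integration idea can be illustrated with a short sketch (not the authors' code): the bitmaps of a character tracked across frames are combined by pixel-wise voting, which suppresses background noise that varies from frame to frame. The voting threshold is an assumption.

```python
import numpy as np

def integrate_bitmaps(bitmaps, vote=0.5):
    """bitmaps: binary HxW arrays of the same tracked character over time.
    A pixel is kept as text if it is 'on' in at least `vote` of the frames."""
    stack = np.stack([b.astype(np.float32) for b in bitmaps])
    fraction_on = stack.mean(axis=0)  # per-pixel fraction of frames marked text
    return (fraction_on >= vote).astype(np.uint8)
```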

3.
4.
Video text detection and segmentation for optical character recognition
In this paper, we present approaches to detecting and segmenting text in videos. The proposed video-text-detection technique is capable of adaptively applying appropriate operators to video frames of different modalities by classifying their background complexities. Effective operators such as repeated shifting operations are applied for noise removal in images with high edge density, while a text-enhancement technique is used to highlight the text regions of low-contrast images. A coarse-to-fine projection technique is then employed to extract text lines from video frames. Experimental results indicate that the proposed text-detection approach is superior to machine-learning-based (e.g., SVM and neural network), multiresolution-based, and DCT-based approaches in terms of detection and false-alarm rates. Besides text detection, a technique for text segmentation based on adaptive thresholding is also proposed. A commercial OCR package is then used to recognize the segmented foreground text. A satisfactory character-recognition rate is reported in our experiments.
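A simplified sketch of a projection-based line-extraction step of the kind used in the coarse stage above: row sums of a binary text map locate candidate text bands, and each band could then be refined with a column projection. The fill threshold is an illustrative assumption.

```python
import numpy as np

def extract_text_lines(binary_map, min_fill=0.05):
    """binary_map: HxW array, 1 on text pixels. Returns (top, bottom) row
    pairs for each horizontal band whose fill ratio exceeds min_fill."""
    profile = binary_map.sum(axis=1) / binary_map.shape[1]  # row projection
    active = profile > min_fill
    lines, start = [], None
    for y, on in enumerate(active):
        if on and start is None:
            start = y                     # a text band begins
        elif not on and start is not None:
            lines.append((start, y - 1))  # the band ends
            start = None
    if start is not None:
        lines.append((start, len(active) - 1))
    return lines
```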

5.
Many natural scene images contain rich text, which plays an important role in scene understanding. With the rapid development of mobile Internet technology, many new applications need to exploit this textual information, such as signboard recognition and autonomous driving. Consequently, the analysis and processing of natural scene text has become one of the research hotspots in computer vision; the task mainly comprises text detection and text recognition. Traditional detection and recognition methods rely on hand-crafted features and rules, and suffer from complex model design, low efficiency and poor generalization. With the development of deep learning, breakthrough progress has been made in natural scene text detection, natural scene text recognition, and end-to-end detection and recognition, with significant gains in both performance and efficiency. This paper introduces the research background of the field; classifies, organizes and summarizes deep-learning-based methods for natural scene text detection, recognition, and end-to-end detection and recognition; and explains the basic ideas, advantages and disadvantages of each class of methods. For the methods within each category, the algorithmic pipelines, applicable scenarios and lines of technical development of the main models are further discussed and analysed. In addition, a number of mainstream public datasets are listed, and the performance of the various methods on representative datasets is compared. Finally, the limitations of current natural scene text detection, recognition and end-to-end algorithms under different scenario data are summarized, along with future challenges and development trends.

6.
7.
Detecting and recognizing text in natural images is quite challenging and has received much attention from the computer vision community in recent years. In this paper, we propose a robust end-to-end scene text recognition method that utilizes tree-structured character models and normalized pictorial structure word models. For each category of characters, we build a part-based tree-structured model (TSM) so as to make use of the character-specific structure information as well as the local appearance information. The TSM can detect each part of the character and recognize its unique structure, seamlessly combining character detection and recognition. As the TSMs can accurately detect characters against complex backgrounds, for text localization we apply TSMs for all the characters on the coarse text detection regions to eliminate false positives and to search for possibly missed characters. For word recognition, we propose a normalized pictorial structure (PS) framework to deal with the bias caused by words of different lengths. Experimental results on a range of challenging public datasets (ICDAR 2003, ICDAR 2011, SVT) demonstrate that the proposed method outperforms state-of-the-art methods for both text localization and word recognition.
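As a toy illustration of the length normalisation in the word-recognition stage: a pictorial-structure word score can sum per-character detection scores minus deformation costs and divide by word length, removing the bias toward words of a particular length. The exact scoring terms here are assumptions, not the paper's formulation.

```python
def normalized_ps_score(char_scores, deform_costs):
    """char_scores: per-character TSM detection scores along a word hypothesis;
    deform_costs: spring costs between adjacent characters."""
    raw = sum(char_scores) - sum(deform_costs)
    return raw / len(char_scores)  # length-normalised word score
```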

8.
Extraction and recognition of artificial text in multimedia documents
The systems currently available for content-based image and video retrieval work without semantic knowledge, i.e. they use image processing methods to extract low-level features of the data. The similarity obtained by these approaches does not always correspond to the similarity a human user would expect. A way to include more semantic knowledge in the indexing process is to use the text included in the images and video sequences. It is rich in information yet easy to use, e.g. through keyword-based queries. In this paper we present an algorithm to localise artificial text in images and videos using a measure of accumulated gradients and morphological processing. The quality of the localised text is improved by robust multiple-frame integration. A new technique for the binarisation of the text boxes, based on a criterion maximising local contrast, is proposed. Finally, detection and OCR results for a commercial OCR package are presented, justifying the choice of the binarisation technique.
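An illustrative re-implementation sketch of the two stages named in the abstract: a gradient-accumulation map for text localisation, and a local-contrast binarisation of detected text boxes. The kernel sizes and the Sauvola-style threshold are assumptions, not the paper's exact criterion.

```python
import cv2
import numpy as np

def gradient_accumulation(gray, window=(15, 1)):
    """Accumulate |horizontal gradient| over a wide window: text rows produce
    dense, sustained gradient responses that survive thresholding."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    acc = cv2.boxFilter(np.abs(gx), -1, window)  # accumulate along rows
    acc8 = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(acc8, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Closing merges per-character responses into text-line blobs.
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((3, 15), np.uint8))

def binarise_text_box(gray_box, window=25, k=0.2):
    """Sauvola-style local-contrast binarisation of a cropped text box."""
    g = gray_box.astype(np.float32)
    mean = cv2.boxFilter(g, -1, (window, window))
    sq = cv2.boxFilter(g * g, -1, (window, window))
    std = np.sqrt(np.maximum(sq - mean * mean, 0))
    thresh = mean * (1 + k * (std / 128.0 - 1))
    return (g > thresh).astype(np.uint8) * 255  # ink -> 0, paper -> 255
```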

9.
We present two different approaches to the location and recovery of text in images of real scenes. The techniques we describe are invariant to the scale and 3D orientation of the text, and allow recovery of text in cluttered scenes. The first approach uses page edges and other rectangular boundaries around text to locate a surface containing text, and to recover a fronto-parallel view. This is performed using line detection, perceptual grouping, and comparison of potential text regions using a confidence measure. The second approach uses low-level texture measures with a neural network classifier to locate regions of text in an image. Then we recover a fronto-parallel view of each located paragraph of text by separating the individual lines of text and determining the vanishing points of the text plane. We illustrate our results using a number of images.

10.
To increase the range of sizes of video scene text recognizable by optical character recognition (OCR), we developed a Bayesian super-resolution algorithm that uses a text-specific bimodal prior. We evaluated the effectiveness of the bimodal prior, both on its own and in conjunction with a piecewise smoothness prior, visually and by measuring the accuracy of the OCR results on the variously super-resolved images. The bimodal prior improved the readability of 4- to 7-pixel-high scene text significantly better than bicubic interpolation, and increased the accuracy of OCR results more than the piecewise smoothness prior.
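One plausible form for such a text-specific bimodal prior (an assumption about its shape, not the paper's exact definition) is a double-well energy that is low near the two intensity modes of text, ink and paper, and high in between, pushing super-resolved pixels toward either mode:

```python
import numpy as np

def bimodal_prior_energy(x, ink=0.1, paper=0.9):
    """Double-well energy over pixel intensities in [0, 1]: minimal at the
    ink and paper modes, maximal between them. Would enter a MAP objective
    as a regularisation term alongside the data-fidelity term."""
    return np.sum((x - ink) ** 2 * (x - paper) ** 2)
```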

11.
The state attaches great importance to the protection and transmission of the ancient books of ethnic minorities, and has stressed the importance of thoroughly digitizing these non-renewable cultural resources. With continuing progress in document image analysis and recognition, research on text analysis and recognition for minority scripts has received wide attention and achieved notable results, becoming a hot topic in applied artificial intelligence research. However, owing to the large variety of minority scripts, the diversity of application scenarios and the scarcity of datasets, this research area still faces many challenges. This paper aims to summarize previous work and support future research, focusing on tasks such as printed text recognition, online handwriting recognition, historical document recognition and scene text recognition, and surveying domestic and international developments and the latest achievements in minority-script recognition. It first explains the importance and value of text analysis and recognition for minority scripts, and introduces the characteristics of particular minority scripts and their historical documents. It then reviews the history and current state of the field, analysing and summarizing representative results of traditional methods and their applications, and discusses in detail the field's wholesale shift toward deep neural network models and deep learning methods, a shift that has markedly improved recognition performance across scripts. Finally, based on this analysis, the paper points out shortcomings in accuracy and generalization in document analysis and recognition across different scripts, as well as differences from Chinese text analysis and recognition; in view of the main difficulties and challenges in minority-script text recognition, it looks ahead to future research trends and technical goals.

12.
Document recognition is a lively research area with much effort concentrated on optical character recognition. Less attention is paid to locating and extracting text from the general (non-desktop, non-scanner) environment. Such contact-free extraction of text from a general scene has applications in wearable computing, robotic vision, point-and-click document capture, and as an aid for visually handicapped people. Here, a novel automatic text reading system is introduced using an active camera focused on text regions already located in the scene (using our recent work). Initially, a located region of text is analysed to determine the optimal zoom that would foveate onto it. Then a number of images are captured over the text region to construct a high-resolution mosaic composite of the whole region. This magnified image of the text is suitable for reading by humans, for recognition by OCR, or even for text-to-speech synthesis. Although we employed a low-resolution camera, we still obtained very good results.
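A heavily reduced sketch (not the paper's active-camera system) of compositing overlapping captures into one high-resolution mosaic: features are matched between the new frame and the current mosaic, a homography is estimated with RANSAC, and the warped frame is composited in. The use of OpenCV's ORB and brute-force matcher is an assumption about tooling; real systems also blend seams and grow the canvas.

```python
import cv2
import numpy as np

def add_to_mosaic(mosaic, frame):
    """Register `frame` (grayscale uint8) against the current mosaic
    and composite it in with a naive overwrite."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(mosaic, None)
    k2, d2 = orb.detectAndCompute(frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:100]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(frame, H, mosaic.shape[1::-1])
    return np.where(warped > 0, warped, mosaic)  # naive overwrite composite
```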

13.
To address the shortcomings of current adult-image detection algorithms that rely solely on analysing image content or text keywords, an adult-image detection algorithm is proposed that fuses the text features associated with a web image with semantic features of the image content. The distinguishing information of an adult image may reside both in its visual content and in its associated text, such as the image file name and the hosting web page. On the basis of a bag-of-visual-words model, text features obtained from text analysis are fused at the low level with visual features such as texture and local shape, and a support vector machine classifier performs the final classification. Experimental results show that the algorithm achieves good classification performance.
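A hedged sketch of the fusion step described above: bag-of-visual-words histograms are concatenated with text-derived features (e.g., keyword indicators from the file name and page text) and fed to an SVM. scikit-learn is assumed, and feature extraction is stubbed out.

```python
import numpy as np
from sklearn.svm import SVC

def train_fused_classifier(X_visual, X_text, y):
    """X_visual: (n, v) bag-of-visual-words histograms;
    X_text: (n, t) features from the file name / page text; y: 0/1 labels."""
    X = np.hstack([X_visual, X_text])  # low-level (early) feature fusion
    return SVC(kernel="rbf", C=1.0).fit(X, y)
```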

14.
This paper presents a new method for detecting and recognizing text in complex images and video frames. Text detection is performed in a two-step approach that combines the speed of a text localization step, enabling text size normalization, with the strength of a machine-learning text verification step applied on background-independent features. Text recognition, applied on the detected text lines, is addressed by a text segmentation step followed by a traditional OCR algorithm within a multi-hypothesis framework relying on multiple segments, language modeling and OCR statistics. Experiments conducted on large databases of real broadcast documents demonstrate the validity of our approach.
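The multi-hypothesis decision can be pictured as a simple log-linear rescoring over candidate segmentations, combining OCR confidence with a language-model score; the weights and interface here are illustrative assumptions, not the paper's exact framework.

```python
def best_transcript(hypotheses, lm_logprob, ocr_weight=1.0, lm_weight=0.5):
    """hypotheses: list of (text, ocr_logprob) pairs, one per alternative
    segmentation; lm_logprob: function text -> language-model log-probability.
    Returns the transcript with the best combined score."""
    return max(hypotheses,
               key=lambda h: ocr_weight * h[1] + lm_weight * lm_logprob(h[0]))[0]
```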

15.
Documents may be captured at any orientation when viewed with a hand-held camera. Here, a method of recovering fronto-parallel views of perspectively skewed text documents in single images is presented, useful for 'point-and-click' scanning or when generally seeking regions of text in a scene. We introduce a novel extension to the 2D projection profiles commonly used in document recognition to locate the horizontal vanishing point of the text plane. Following further analysis, we segment the lines of text to determine the style of justification of the paragraphs. The change in line spacing due to perspective is then used to locate the document's vertical vanishing point. No knowledge of the camera focal length is assumed. Using the vanishing points, a fronto-parallel view is recovered which is then suitable for OCR or other high-level recognition. We provide results demonstrating the algorithm's performance on documents over a wide range of orientations.
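An illustrative cousin of the extended projection-profile idea: candidate text orientations are scored by the sharpness (variance) of the row-projection profile, which peaks when the projection direction aligns with the text lines. A simple rotation sweep stands in here for the full projective search over vanishing points.

```python
import numpy as np
from scipy.ndimage import rotate

def best_text_orientation(binary, angles=np.linspace(-30, 30, 61)):
    """binary: HxW array with 1 on text pixels. Returns the angle (degrees)
    whose row-projection profile is sharpest, i.e. best aligned with text."""
    best_score, best_angle = -np.inf, 0.0
    for a in angles:
        prof = rotate(binary, a, reshape=False, order=0).sum(axis=1)
        score = prof.var()  # sharp peaks and valleys => aligned text lines
        if score > best_score:
            best_score, best_angle = score, a
    return best_angle
```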

16.
In named entity recognition on biomedical clinical-record text, traditional solutions delimit entity boundaries imprecisely, which hurts the recognition of some compound entities. By studying the properties of compound entities, this paper proposes a model that combines an ensemble convolutional neural network (E-CNN) with a bidirectional long short-term memory network (BLSTM) and a conditional random field (CRF). By setting convolution windows of different sizes in the CNN's convolutional layers, richer boundary-feature information spanning multiple words is captured. The ensembled features are then passed to the BLSTM model for training, and the CRF model produces the final sequence labelling. Experimental results show that the method performs well on compound entity recognition in clinical-record text.
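A minimal PyTorch sketch of the E-CNN + BLSTM front end described above: several convolution branches with different window sizes capture boundary features across multiple word spans, their outputs are concatenated per token and fed to a bidirectional LSTM. The CRF decoding layer is omitted; the module emits per-token tag scores that a CRF would decode. All sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ECNNBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=32,
                 windows=(2, 3, 4), hidden=128, n_tags=9):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # One Conv1d branch per window size; padding w // 2 keeps the output
        # at least as long as the input, and the forward pass trims it back.
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, w, padding=w // 2) for w in windows)
        self.lstm = nn.LSTM(n_filters * len(windows), hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, tokens):  # tokens: (batch, seq) of word indices
        seq_len = tokens.size(1)
        x = self.emb(tokens).transpose(1, 2)          # (batch, emb, seq)
        feats = [torch.relu(conv(x))[..., :seq_len] for conv in self.convs]
        x = torch.cat(feats, dim=1).transpose(1, 2)   # (batch, seq, filters*k)
        h, _ = self.lstm(x)
        return self.out(h)  # per-token emission scores for a CRF to decode
```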

17.
Stop word location and identification for adaptive text recognition
We propose a new adaptive strategy for text recognition that attempts to derive knowledge about the dominant font on a given page. The strategy uses the linguistic observation that over half of all words in a typical English passage are contained in a small set of fewer than 150 stop words. A small dictionary of such words is compiled from the Brown corpus. An arbitrary text page first goes through layout analysis that produces a word segmentation. A fast procedure is then applied to locate the most likely candidates for those words, using only the widths of the word images. The identity of each word is determined using a word shape classifier. Using the word images together with their identities, character prototypes can be extracted using a previously proposed method. We describe experiments using simulated and real images. In an experiment on 400 real page images, we show that on average eight distinct characters can be learned from each page, and that the method is successful on 90% of all pages. These can serve as useful seeds to bootstrap font learning.
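A toy version of the width-based candidate search: word images whose widths, normalised by an estimated character width, are close to the expected relative width of a stop word become candidates for the word shape classifier. The stop-word list and tolerance here are illustrative.

```python
# Ten of the most frequent English stop words; the paper's dictionary
# (compiled from the Brown corpus) is larger.
STOP_WORDS = ["the", "of", "and", "to", "in", "that", "is", "was", "for", "with"]

def stop_word_candidates(word_widths, unit_width, tol=0.25):
    """word_widths: pixel widths of the segmented word images.
    unit_width: estimated average character width on the page."""
    candidates = []
    for i, w in enumerate(word_widths):
        rel = w / unit_width  # width expressed in 'character units'
        for s in STOP_WORDS:
            if abs(rel - len(s)) <= tol * len(s):
                candidates.append((i, s))  # word i might be stop word s
    return candidates
```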

18.
Text in natural scene images carries important semantic information and is a key source of image content. To address the generally low extraction accuracy of current algorithms, a text extraction algorithm with accurate text localization is proposed. The original image is first decomposed into a pyramid; colour-image edge extraction and binarization are then performed, followed by morphological text localization and, finally, character extraction from the text regions. Tests on images from the ICDAR database show that the method is robust to text colour, font size and arrangement direction, while also achieving high precision and extraction rates.
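A schematic OpenCV pipeline following the stages listed above (pyramid decomposition, edge extraction and binarisation, morphological text localisation); grayscale Canny stands in for the colour edge extraction, and all parameter values are assumptions for illustration.

```python
import cv2
import numpy as np

def locate_text_regions(bgr, levels=3):
    """Return candidate text boxes (x, y, w, h) at full resolution."""
    pyramid = [bgr]
    for _ in range(levels - 1):  # pyramid decomposition
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    boxes = []
    for level, img in enumerate(pyramid):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)  # binary edge map
        # A wide closing kernel joins character edges into line-shaped blobs.
        blob = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                                np.ones((3, 17), np.uint8))
        contours, _ = cv2.findContours(blob, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        scale = 2 ** level  # map boxes back to full resolution
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w > 2 * h:    # text lines tend to be wide and short
                boxes.append((x * scale, y * scale, w * scale, h * scale))
    return boxes
```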

19.
Even though visual attention models using bottom-up saliency can speed up object recognition by predicting object locations, in the presence of multiple salient objects, saliency alone cannot discern target objects from the clutter in a scene. Using a metric named familiarity, we propose a top-down method for guiding attention towards target objects, in addition to bottom-up saliency. To demonstrate the effectiveness of familiarity, the unified visual attention model (UVAM), which combines top-down familiarity and bottom-up saliency, is applied to SIFT-based object recognition. The UVAM is tested on 3600 artificially generated images containing COIL-100 objects with varying amounts of clutter, and on 126 images of real scenes. The recognition times are reduced by 2.7× and 2×, respectively, with no reduction in recognition accuracy, demonstrating the effectiveness and robustness of the familiarity-based UVAM.
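The fusion of the two cues can be pictured with a small sketch; the weighted geometric blend used here is an assumption for illustration, not the UVAM's actual combination rule.

```python
import numpy as np

def attention_map(saliency, familiarity, alpha=0.5):
    """saliency, familiarity: HxW maps, higher = more interesting.
    alpha weights the top-down (familiarity) cue against the bottom-up one."""
    s = saliency / (saliency.max() + 1e-8)
    f = familiarity / (familiarity.max() + 1e-8)
    return (s ** (1 - alpha)) * (f ** alpha)  # geometric blend of the cues
```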

20.

Advances in the Natural Language Processing (NLP) and machine learning fields have led to the development of automated methods for the recognition of personality traits from text available from social media and similar sources. Systems of this kind exploit the close relation between lexical knowledge and personality models – such as the well-known Big Five model – to provide information about the author of an input text in a non-intrusive fashion, and at a low cost. Although now a well-established research topic in the field, the computational recognition of personality traits from text still leaves a number of research questions worth further exploration. In particular, this paper attempts to shed light on three main issues: (i) whether we may develop psycholinguistics-motivated models of personality recognition when such knowledge sources are not available for the target language under consideration; (ii) whether the use of psycholinguistic knowledge may be still superior to contemporary word vector representations; and (iii) whether we may infer certain personality facets from a corpus that does not explicitly convey this information. In this paper these issues are dealt with in a series of individual experiments of personality recognition from Facebook text, whose initial results should aid the future development of more robust systems of this kind.
