Similar Documents
20 similar documents retrieved.
1.
This paper proposes a two-stage system for text detection in video images. In the first stage, text lines are detected based on the edge map of the image, yielding a high recall rate at low computational cost. In the second stage, the result is refined using a sliding window and an SVM classifier trained on features obtained by a new Local Binary Pattern-based operator (eLBP) that describes the local edge distribution. The whole algorithm is applied in a multiresolution fashion, enabling detection of characters over a broad size range. Experimental results, based on a new evaluation methodology, show the promising overall performance of the system on a challenging corpus and prove the superior discriminating ability of the proposed feature set against the best features reported in the literature.
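A minimal sketch of the second-stage verification described above, assuming a pre-trained SVM; since the abstract does not define eLBP, the feature below substitutes a standard uniform LBP histogram from scikit-image, and the window size, step, and voting rule are illustrative choices:

```python
# Sketch of a sliding-window text/non-text verifier, assuming a pre-trained SVM.
# The eLBP feature from the paper is approximated by a standard LBP histogram.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(patch, points=8, radius=1):
    """Histogram of uniform LBP codes over a grayscale patch (stand-in for eLBP)."""
    codes = local_binary_pattern(patch, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def refine_candidates(gray, boxes, clf, win=32, step=16):
    """Keep only candidate text boxes whose windows are mostly classified as text."""
    kept = []
    for (x, y, w, h) in boxes:
        votes = []
        for yy in range(y, max(y + h - win, y) + 1, step):
            for xx in range(x, max(x + w - win, x) + 1, step):
                patch = gray[yy:yy + win, xx:xx + win]
                if patch.shape == (win, win):
                    votes.append(clf.predict([lbp_histogram(patch)])[0])
        if votes and np.mean(votes) > 0.5:   # majority of windows look like text
            kept.append((x, y, w, h))
    return kept

# clf would be an SVC(kernel="rbf") trained on labelled text / non-text patches.
```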

2.
Text data present in images and video contain useful information for automatic annotation, indexing, and structuring of images. Extraction of this information involves detection, localization, tracking, extraction, enhancement, and recognition of the text from a given image. However, variations of text due to differences in size, style, orientation, and alignment, as well as low image contrast and complex background make the problem of automatic text extraction extremely challenging. While comprehensive surveys of related problems such as face detection, document analysis, and image & video indexing can be found, the problem of text information extraction is not well surveyed. A large number of techniques have been proposed to address this problem, and the purpose of this paper is to classify and review these algorithms, discuss benchmark data and performance evaluation, and to point out promising directions for future research.

3.
Video text detection and segmentation for optical character recognition
In this paper, we present approaches to detecting and segmenting text in videos. The proposed video-text-detection technique is capable of adaptively applying appropriate operators for video frames of different modalities by classifying the background complexities. Effective operators such as the repeated shifting operations are applied for the noise removal of images with high edge density. Meanwhile, a text-enhancement technique is used to highlight the text regions of low-contrast images. A coarse-to-fine projection technique is then employed to extract text lines from video frames. Experimental results indicate that the proposed text-detection approach is superior to the machine-learning-based (such as SVM and neural network), multiresolution-based, and DCT-based approaches in terms of detection and false-alarm rates. Besides text detection, a technique for text segmentation is also proposed based on adaptive thresholding. A commercial OCR package is then used to recognize the segmented foreground text. A satisfactory character-recognition rate is reported in our experiments.
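The coarse-to-fine projection step can be illustrated with a simple projection-profile sketch; the Canny parameters and density thresholds below are assumptions, not values from the paper:

```python
# Projection-profile text line extraction sketch: rows with dense edges form text lines,
# then a vertical projection inside each line trims its horizontal extent.
import cv2
import numpy as np

def project_text_lines(gray, row_thresh=0.15, col_thresh=0.05):
    edges = cv2.Canny(gray, 100, 200) / 255.0
    row_profile = edges.mean(axis=1)                 # coarse: horizontal projection
    busy_rows = row_profile > row_thresh
    lines = []
    start = None
    for i, busy in enumerate(list(busy_rows) + [False]):
        if busy and start is None:
            start = i
        elif not busy and start is not None:
            band = edges[start:i]
            col_profile = band.mean(axis=0)          # fine: vertical projection in the band
            cols = np.where(col_profile > col_thresh)[0]
            if len(cols) > 0:
                lines.append((cols[0], start, cols[-1], i))  # (x0, y0, x1, y1)
            start = None
    return lines
```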

4.
Automatic text segmentation and text recognition for video indexing
Efficient indexing and retrieval of digital video is an important function of video databases. One powerful index for retrieval is the text appearing in them. It enables content-based browsing. We present our new methods for automatic segmentation of text in digital videos. The algorithms we propose make use of typical characteristics of text in videos in order to enable and enhance segmentation performance. The unique features of our approach are the tracking of characters and words over their complete duration of occurrence in a video and the integration of the multiple bitmaps of a character over time into a single bitmap. The output of the text segmentation step is then directly passed to a standard OCR software package in order to translate the segmented text into ASCII. Also, a straightforward indexing and retrieval scheme is introduced. It is used in the experiments to demonstrate that the proposed text segmentation algorithms together with existing text recognition algorithms are suitable for indexing and retrieval of relevant video sequences in and from a video database. Our experimental results are very encouraging and suggest that these algorithms can be used in video retrieval applications as well as to recognize higher level semantics in videos.
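The idea of integrating the multiple bitmaps of a character over time can be sketched as follows; it assumes the text box has already been tracked and aligned, and the pixelwise average/min rules are generic stand-ins for the paper's exact combination rule:

```python
# Multi-frame integration sketch: combine the same tracked text box over time,
# so that a moving complex background washes out while static caption pixels stay sharp.
import numpy as np

def integrate_text_box(frames, box):
    """frames: list of grayscale frames (np.uint8); box: (x, y, w, h) of the tracked text."""
    x, y, w, h = box
    stack = np.stack([f[y:y + h, x:x + w].astype(np.float32) for f in frames])
    integrated = stack.mean(axis=0)        # background averages toward gray
    stable = stack.min(axis=0)             # alternative: keep the darkest value per pixel
    return integrated.astype(np.uint8), stable.astype(np.uint8)
```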

5.
A survey of text extraction methods for video and images
Text extraction from video and images has important application value. In recent years, the era of big data has created an urgent need for large-scale information retrieval, and a large number of methods for extracting text from video and images have emerged. This paper reviews text extraction algorithms for video and images and, following the text extraction pipeline, divides them into two main steps: text region detection and localization, and text segmentation. For each step, the applicable scope and relative strengths and weaknesses of existing algorithms are analyzed and compared; public image datasets are discussed; important recent applications of text extraction from images are listed; open problems in current research are identified; and trends in the development of text extraction methods for video and scene images are outlined.

6.
A stroke-filter-based text segmentation algorithm
To address the problem of accurate text segmentation against complex backgrounds, a new text segmentation method based on a stroke filter is proposed. First, a suitable stroke filter is designed and applied to determine the color polarity of the text, yielding an initial binary segmentation map. Text segmentation based on region growing is then performed. Finally, an OCR (optical character recognition) module is applied to improve the overall segmentation performance. The proposed algorithm is compared with other algorithms, and the results show that it is more effective.
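A generic stroke filter compares a central band with the two flanking bands at a given stroke width; the sketch below illustrates that idea (the width, the horizontal-only orientation, and the response rule are assumptions, since the abstract does not give the filter design):

```python
# Generic stroke-filter sketch: a horizontal stroke of width `width` is bright (or dark)
# relative to the bands immediately above and below it.
import numpy as np

def stroke_response(gray, width=5):
    g = gray.astype(np.float32)
    h, _ = g.shape
    resp = np.zeros_like(g)
    for y in range(width, h - 2 * width):
        center = g[y:y + width].mean(axis=0)
        above = g[y - width:y].mean(axis=0)
        below = g[y + width:y + 2 * width].mean(axis=0)
        # strong response only when the center band differs from both neighbours
        # in the same direction (the sign also indicates the text color polarity)
        same_polarity = (np.sign(center - above) == np.sign(center - below))
        resp[y] = np.minimum(np.abs(center - above), np.abs(center - below)) * same_polarity
    return resp
```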

7.
朱成军  李超  熊璋 《计算机工程》2007,33(10):218-219
Text in video provides useful information for describing video content and plays an important role in building multimedia retrieval systems based on high-level semantics. Starting from the characteristics of video text, this paper analyzes the various techniques for video text detection and recognition, together with their strengths and weaknesses, surveys the state of research in this field at home and abroad, and points out key directions for future work.

8.
In this paper, we present a real-time image processing technique for the detection of steam in video images. The assumption made is that the presence of steam acts as a blurring process, which changes the local texture pattern of an image while reducing the amount of details. The problem of detecting steam is treated as a supervised pattern recognition problem. A statistical hidden Markov tree (HMT) model derived from the coefficients of the dual-tree complex wavelet transform (DT-CWT) in small 48×48 local regions of the image frames is used to characterize the steam texture pattern. The parameters of the HMT model are used as an input feature vector to a support vector machine (SVM) technique, specially tailored for this purpose. By detecting and determining the total area covered by steam in a video frame, a computerized image processing system can automatically decide if the frame can be used for further analysis. The proposed method was quantitatively evaluated by using a labelled image data set with video frames sampled from a real oil sand video stream. The classification results were 90% correct when compared to human labelled image frames. The technique is useful as a pre-processing step in automated image processing systems.
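The overall pipeline (texture features per 48×48 patch, then an SVM) can be sketched as below; note that the paper's DT-CWT/HMT model is replaced here by plain wavelet-subband statistics computed with PyWavelets, purely to keep the example short:

```python
# Patch-wise steam detector sketch. The paper models DT-CWT coefficients with a hidden
# Markov tree; here each 48x48 patch is summarized by simple wavelet-subband statistics
# and fed to an SVM, which keeps the overall shape of the pipeline (features -> SVM).
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_stats(patch, wavelet="db2", levels=3):
    coeffs = pywt.wavedec2(patch.astype(np.float32), wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:                     # (cH, cV, cD) per level
        for band in detail:
            feats += [np.log1p(np.abs(band).mean()), band.std()]
    return np.array(feats)

def steam_fraction(frame, clf, size=48):
    """Fraction of 48x48 tiles classified as steam by a pre-trained SVC."""
    votes = []
    for y in range(0, frame.shape[0] - size + 1, size):
        for x in range(0, frame.shape[1] - size + 1, size):
            votes.append(clf.predict([wavelet_stats(frame[y:y + size, x:x + size])])[0])
    return float(np.mean(votes)) if votes else 0.0
```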

9.
To extract caption information from video images in real time, a simple and effective method is proposed. Text event detection is performed first, followed by edge detection, threshold computation, and edge size constraints; non-text regions are then further filtered out according to the expected range of text pixel density. Superimposing the horizontal and vertical edges strengthens the edges of the detected text, and the size constraint on edges removes edges that do not match text dimensions. A projection method is then used to determine the region containing the video captions. Finally, OCR is applied to recognize the extracted text regions, completing text extraction from the video. The combination of these steps ensures the accuracy and robustness of the proposed algorithm.
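The edge-fusion and density-filtering steps can be sketched as follows; the Sobel-based edges, the threshold, and the density range are illustrative assumptions:

```python
# Sketch of the edge-fusion + density filter for caption localization:
# Sobel edges in both directions are superimposed, thresholded, and candidate rows
# are kept only where the text-pixel density falls in a plausible range.
import cv2
import numpy as np

def caption_mask(gray, density_lo=0.1, density_hi=0.6):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = np.abs(gx) + np.abs(gy)                    # superimpose both edge directions
    binary = (edges > edges.mean() + 2 * edges.std()).astype(np.uint8)
    row_density = binary.mean(axis=1)
    keep = (row_density > density_lo) & (row_density < density_hi)
    return binary * keep[:, None]                      # zero out rows with implausible density
```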

10.
Recognizing text in images with C#
To recognize text in images quickly and accurately, this paper discusses methods and techniques for developing an OCR program using C# and the MODI component bundled with Microsoft Office. Experiments show that the resulting program achieves satisfactory processing speed and recognition rates.

11.
Extraction and recognition of artificial text in multimedia documents
The systems currently available for content-based image and video retrieval work without semantic knowledge, i.e. they use image processing methods to extract low-level features of the data. The similarity obtained by these approaches does not always correspond to the similarity a human user would expect. A way to include more semantic knowledge into the indexing process is to use the text included in the images and video sequences. It is rich in information but easy to use, e.g. by keyword-based queries. In this paper we present an algorithm to localise artificial text in images and videos using a measure of accumulated gradients and morphological processing. The quality of the localised text is improved by robust multiple frame integration. A new technique for the binarisation of the text boxes, based on a criterion maximizing local contrast, is proposed. Finally, detection and OCR results for a commercial OCR engine are presented, justifying the choice of the binarisation technique.
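A rough sketch of the accumulated-gradient localization idea, with an illustrative window size, threshold, and structuring element (not the paper's actual parameters):

```python
# Accumulated-gradient sketch: artificial text produces many strong horizontal gradients,
# so summing |dI/dx| over a local window highlights text regions, which are then cleaned
# with morphological closing.
import cv2
import numpy as np

def accumulated_gradient_map(gray, win=(21, 7)):
    gx = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
    accumulated = cv2.boxFilter(gx, -1, win, normalize=False)   # sum of gradients in window
    mask = (accumulated > accumulated.mean() + accumulated.std()).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)      # connect characters into lines
```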

12.
An integrated character segmentation and recognition algorithm for video
杨武夷  张树武 《自动化学报》2010,36(10):1468-1476
The technical difficulty of recognizing video text-line images stems mainly from two problems: 1) segmenting and recognizing touching characters, and 2) segmenting and recognizing characters against complex backgrounds. To segment and recognize characters in both situations simultaneously, an integrated character segmentation and recognition algorithm is proposed. The integrated algorithm first binarizes the text-line image and estimates the text-line height from the horizontal projection of the binarized image. Next, wide connected components in the binary image are split, using image analysis or character recognition depending on the degree to which strokes touch. Connected components are then combined based on character recognition to produce candidate recognition results, from which a word graph is constructed; the final recognition result is selected from the word graph using a language model. Experiments show that the integrated algorithm greatly reduces the recognition error rates for touching characters and for characters in complex backgrounds.
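The final word-graph decoding step can be illustrated with a small dynamic-programming sketch over per-position recognition candidates and a bigram language model; the score combination and the candidate/bigram inputs are simplified assumptions:

```python
# Simplified "word graph" decoding sketch: each segmentation position carries a few
# recognition candidates with scores; dynamic programming picks the sequence that
# maximizes recognition score plus a weighted bigram language-model score.
import math

def decode_lattice(columns, bigram, lam=1.0, floor=1e-6):
    """columns: list of {char: recognition_score}; bigram: {(prev, cur): probability}."""
    prev_scores = dict(columns[0])
    backptrs = []
    for col in columns[1:]:
        cur_scores, cur_back = {}, {}
        for c, s in col.items():
            best_p = max(prev_scores,
                         key=lambda p: prev_scores[p] + lam * math.log(bigram.get((p, c), floor)))
            cur_scores[c] = prev_scores[best_p] + s + lam * math.log(bigram.get((best_p, c), floor))
            cur_back[c] = best_p
        backptrs.append(cur_back)
        prev_scores = cur_scores
    seq = [max(prev_scores, key=prev_scores.get)]
    for back in reversed(backptrs):
        seq.append(back[seq[-1]])
    return "".join(reversed(seq))
```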

13.
A two-stage video text localization algorithm based on strokes and Adaboost
程豪  黄磊  刘昌平  谭怒涛 《自动化学报》2008,34(10):1312-1318
A new video text localization algorithm is proposed within a two-stage localization-and-verification framework. In the localization module, the stroke properties of characters are fully exploited by introducing a stroke operator that responds strongly to character regions; candidate text lines are obtained through stroke extraction, density filtering, and region decomposition. In the verification module, edge orientation histogram features with strong discriminative power for text are extracted, and a classifier trained with the Adaboost algorithm is used to filter the candidate text lines. Experimental results show that the algorithm is robust and produces good localization results on different types of video frames.
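The verification module can be sketched with an edge orientation histogram feature and scikit-learn's AdaBoost classifier; the bin count and the training setup are illustrative, not taken from the paper:

```python
# Verification-stage sketch: candidate text lines are described by an edge orientation
# histogram and filtered by an AdaBoost classifier trained on labelled examples.
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def edge_orientation_histogram(gray, bins=8):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi                      # orientation folded into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

# Training on labelled candidate lines (X: histograms, y: 1 = text, 0 = non-text):
# clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
# kept = [box for box, f in zip(boxes, feats) if clf.predict([f])[0] == 1]
```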

14.
Text detection is important in the retrieval of texts from digital pictures, video databases and webpages. However, it can be very challenging since the text is often embedded in a complex background. In this paper, we propose a classification-based algorithm for text detection using a sparse representation with discriminative dictionaries. First, the edges are detected by the wavelet transform and scanned into patches by a sliding window. Then, candidate text areas are obtained by applying a simple classification procedure using two learned discriminative dictionaries. Finally, the adaptive run-length smoothing algorithm and projection profile analysis are used to further refine the candidate text areas. The proposed method is evaluated on the Microsoft common test set, the ICDAR 2003 text locating set, and an image set collected from the web. Extensive experiments show that the proposed method can effectively detect texts of various sizes, fonts and colors from images and videos.
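The "simple classification procedure using two learned discriminative dictionaries" can be sketched as a reconstruction-error comparison; the sparsity level is an assumption, and the dictionaries are assumed to have been learned beforehand (e.g. with MiniBatchDictionaryLearning):

```python
# Two-dictionary classification sketch: a patch is sparse-coded with a "text" dictionary
# and a "background" dictionary; the dictionary giving the smaller reconstruction error wins.
import numpy as np
from sklearn.decomposition import SparseCoder

def classify_patch(patch_vec, dict_text, dict_bg, n_nonzero=5):
    """patch_vec: flattened patch; dict_*: arrays of shape (n_atoms, patch_dim)."""
    errors = []
    for D in (dict_text, dict_bg):
        coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                            transform_n_nonzero_coefs=n_nonzero)
        code = coder.transform(patch_vec.reshape(1, -1))
        recon = code @ D
        errors.append(np.linalg.norm(patch_vec - recon.ravel()))
    return "text" if errors[0] < errors[1] else "background"

# dict_text / dict_bg would come from e.g. MiniBatchDictionaryLearning fitted on
# labelled text and background patches respectively.
```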

15.
The need for content-based access to image and video information from media archives has captured the attention of researchers in recent years. Research efforts have led to the development of methods that provide access to image and video data. These methods, which have their roots in pattern recognition, determine the similarity in the visual information content extracted from low-level features; the features are then clustered to generate database indices. This paper presents a comprehensive survey of the pattern recognition methods that enable image and video retrieval by content.

16.
We report on the development and implementation of a robust algorithm for extracting text in digitized color video. The algorithm first computes maximum gradient difference to detect potential text line segments from horizontal scan lines of the video. Potential text line segments are then expanded or combined with potential text line segments from adjacent scan lines to form text blocks, which are then subject to filtering and refinement. Color information is then used to more precisely locate text pixels within the detected text blocks. The robustness of the algorithm is demonstrated by using a variety of color images digitized from broadcast television for testing. The algorithm also performs well on images after JPEG compression and decompression, and on images corrupted with different types of noise.
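The maximum gradient difference (MGD) measure on a scan line can be sketched as follows, with an assumed window size:

```python
# Maximum gradient difference (MGD) sketch: within a sliding window on each scan line,
# text produces both strong positive and strong negative horizontal gradients, so the
# difference between the window's max and min gradient is large over text.
import numpy as np

def mgd_profile(scan_line, win=21):
    grad = np.diff(scan_line.astype(np.float32))
    half = win // 2
    mgd = np.zeros_like(grad)
    for i in range(len(grad)):
        w = grad[max(0, i - half):i + half + 1]
        mgd[i] = w.max() - w.min()
    return mgd   # threshold this profile to get potential text line segments
```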

17.
This paper describes the winning algorithm we submitted to the recent NICE.I iris recognition contest. Efficient and robust segmentation of noisy iris images is one of the bottlenecks for non-cooperative iris recognition. To address this problem, a novel iris segmentation algorithm is proposed in this paper. After reflection removal, a clustering based coarse iris localization scheme is first performed to extract a rough position of the iris, as well as to identify non-iris regions such as eyelashes and eyebrows. A novel integrodifferential constellation is then constructed for the localization of pupillary and limbic boundaries, which not only accelerates the traditional integrodifferential operator but also enhances its global convergence. After that, a curvature model and a prediction model are learned to deal with eyelids and eyelashes, respectively. Extensive experiments on the challenging UBIRIS iris image databases demonstrate that encouraging accuracy is achieved by the proposed algorithm which is ranked the best performing algorithm in the recent open contest on iris recognition (the Noisy Iris Challenge Evaluation, NICE.I).
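Although this entry concerns iris rather than text, the integrodifferential search it builds on can be sketched simply: average the intensity along circles of growing radius and pick the radius with the sharpest change (the centre search, smoothing, and radius range below are simplified assumptions):

```python
# Integrodifferential operator sketch (the classical search that the paper's
# "constellation" accelerates): for a candidate centre, average intensity along circles
# of increasing radius and look for the radius with the sharpest jump.
import numpy as np

def circular_mean(gray, cx, cy, r, samples=64):
    theta = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, gray.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, gray.shape[0] - 1)
    return gray[ys, xs].mean()

def best_radius(gray, cx, cy, r_min=20, r_max=120):
    means = np.array([circular_mean(gray, cx, cy, r) for r in range(r_min, r_max)])
    deriv = np.abs(np.diff(means))
    deriv = np.convolve(deriv, np.ones(5) / 5, mode="same")   # simple smoothing of the radial derivative
    return r_min + int(np.argmax(deriv))
```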

18.
19.
Real-world text on street signs, nameplates, etc. often lies in an oblique plane and hence cannot be recognized by traditional OCR systems due to perspective distortion. Furthermore, such text often comprises only one or two lines, preventing the use of existing perspective rectification methods that were primarily designed for images of document pages. We propose an approach that reliably rectifies and subsequently recognizes individual lines of text. Our system, which includes novel algorithms for extraction of text from real-world scenery, perspective rectification, and binarization, has been rigorously tested on still imagery as well as on MPEG-2 video clips in real time.
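The rectification step can be illustrated with a plain homography once the four corners of the oblique text line have been estimated; corner estimation itself is assumed to have been done already:

```python
# Perspective rectification sketch: once the four corners of an oblique text line are
# estimated (e.g. from its top/bottom boundary lines), a homography maps it to a
# fronto-parallel rectangle that a standard OCR engine can read.
import cv2
import numpy as np

def rectify_text_line(image, corners, out_h=64):
    """corners: 4x2 array ordered top-left, top-right, bottom-right, bottom-left."""
    src = np.float32(corners)
    width = int(max(np.linalg.norm(src[1] - src[0]), np.linalg.norm(src[2] - src[3])))
    dst = np.float32([[0, 0], [width, 0], [width, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (width, out_h))
```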

20.
Reading text in natural images has again attracted the attention of many researchers during the last few years due to the increasing availability of cheap image-capturing devices in low-cost products like mobile phones. Since text can be found in almost any environment, the applicability of text-reading systems is very broad. For this purpose, we present in this paper a robust method to read text in natural images. It is composed of two main separate stages. First, text is located in the image using a set of simple and fast-to-compute features that are highly discriminative between character and non-character objects; they are based on geometric and gradient properties. The second part of the system carries out the recognition of the previously detected text. It uses gradient features to recognize single characters and Dynamic Programming (DP) to correct misspelled words. Experimental results obtained with different challenging datasets show that the proposed system exceeds state-of-the-art performance, both in terms of localization and recognition.
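The DP-based correction of misspelled words can be illustrated with a standard edit-distance match against a lexicon; this is a generic stand-in rather than the paper's exact DP formulation:

```python
# DP spelling-correction sketch: the recognized string is matched against a lexicon with
# edit distance, and the closest word is returned.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct_word(word, lexicon):
    return min(lexicon, key=lambda w: edit_distance(word, w))
```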
