Similar Documents
20 similar documents found; search took 46 ms.
1.
Text Extraction Based on a Maximum-Minimum Similarity Learning Method. Cited by: 1 (self-citations: 0, others: 1)
付慧  刘峡壁  贾云得 《软件学报》2008,19(3):621-629
The maximum-minimum similarity (MMS) learning method is applied to optimize the parameters of a text region extraction method based on Gaussian mixture models. MMS learning achieves optimal classification ability by maximizing the similarity of positive samples and minimizing the similarity of negative samples. Following this discriminative learning idea, a corresponding objective function is constructed, and steepest gradient descent is used to find its minimum, yielding the optimal parameter set for the text region extraction method. Text region extraction experiments show that, after maximum-likelihood estimates of the parameters are obtained with the expectation-maximization (EM) algorithm, MMS learning clearly improves overall text extraction performance: recall and precision in open tests reach 98.55% and 93.56%, respectively. In the experiments, MMS learning also outperformed a commonly used discriminative learning method, minimum classification error (MCE) learning.
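The core idea of MMS learning described above can be sketched in a few lines: tune a model parameter by steepest gradient descent so that positive samples score high under a similarity function and negative samples score low. The 1-D Gaussian similarity, the toy feature values, and the exact objective below are illustrative assumptions, not the paper's actual parameterization.

```python
import math

def similarity(x, mu, sigma=1.0):
    # Gaussian similarity of a feature value x to the model center mu
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def mms_objective(mu, pos, neg):
    # minimize: negative-sample similarity minus positive-sample similarity
    return sum(similarity(x, mu) for x in neg) - sum(similarity(x, mu) for x in pos)

def train(pos, neg, mu=0.0, lr=0.1, steps=200):
    # steepest gradient descent on the MMS objective (numerical gradient)
    for _ in range(steps):
        eps = 1e-5
        g = (mms_objective(mu + eps, pos, neg) -
             mms_objective(mu - eps, pos, neg)) / (2 * eps)
        mu -= lr * g
    return mu

pos = [2.0, 2.2, 1.8, 2.1]   # toy feature values for text regions
neg = [-1.0, -0.8, -1.2]     # toy feature values for non-text regions
mu = train(pos, neg)
```

After training, the model center sits near the positive samples, so positives score systematically higher than negatives, which is the classification behavior the objective encodes.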

2.
A Gaussian mixture model of adjacent character regions is built to distinguish characters from non-characters, and on this basis a method for extracting multilingual text from images is proposed. The input image is first binarized and a morphological closing is applied, so that each character in the binary image becomes a single connected component. Adjacency relations between connected components are then formed from the Voronoi regions of the components' centroids. Finally, within a Bayesian framework, pseudo-probabilities are computed from the Gaussian mixture model of adjacent character regions, and each connected component is labeled as character or non-character accordingly. Experiments on extracting complex Chinese and English text with the proposed method achieved precision above 97% and recall above 80%, confirming its effectiveness.

3.
Because color printed images have rich background colors and Chinese characters consist of multiple connected components, connected-component text segmentation algorithms cannot extract such text precisely. A layout segmentation method for color printed images based on Chinese-character connected components is therefore proposed. The image is preprocessed with a pyramid-transform inverse halftoning algorithm; image colors are segmented by color sampling and mean shift; text connected components are labeled; Chinese-character connected components are reconstructed according to character structure and connected-component properties; and the connection relations among text connected components are analyzed to determine the text orientation and complete the segmentation. Experimental results show that the method effectively reconstructs Chinese-character connected components and segments text of different fonts, sizes, and colors in color printed images.

4.
In this paper, we present a new text line detection method for handwritten documents. The proposed technique is based on a strategy that consists of three distinct steps. The first step includes image binarization and enhancement, connected component extraction, partitioning of the connected component domain into three spatial sub-domains and average character height estimation. In the second step, a block-based Hough transform is used for the detection of potential text lines while a third step is used to correct possible splitting, to detect text lines that the previous step did not reveal and, finally, to separate vertically connected characters and assign them to text lines. The performance evaluation of the proposed approach is based on a consistent and concrete evaluation methodology.
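The Hough-transform step above can be illustrated minimally: connected-component centroids vote in (theta, rho) space, and the accumulator peak gives the orientation of the dominant text line. The block partitioning and post-processing stages of the paper are omitted, and the bin sizes here are illustrative assumptions.

```python
import math
from collections import Counter

def hough_peak(centroids, n_theta=36, rho_step=2.0):
    # accumulate votes over quantized (theta, rho) line parameters
    votes = Counter()
    for x, y in centroids:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(t, round(rho / rho_step))] += 1
    (t, _), _ = votes.most_common(1)[0]
    return math.pi * t / n_theta  # orientation angle of the strongest line

# centroids of characters lying on one horizontal text line (y = 50)
line = [(10 * i, 50) for i in range(10)]
theta = hough_peak(line)
```

For a horizontal line of centroids, all points share the same rho at theta = pi/2, so the peak lands there; skewed lines peak at the corresponding angle, which is how the method tolerates document skew.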

5.
This paper proposes an effective segmentation-free approach using a hybrid neural network hidden Markov model (NN-HMM) for offline handwritten Chinese text recognition (HCTR). In the general Bayesian framework, the handwritten Chinese text line is sequentially modeled by HMMs, each representing one character class, while an NN-based classifier is adopted to calculate the posterior probability of all HMM states. The key issues in feature extraction, character modeling, and language modeling are comprehensively investigated to show the effectiveness of the NN-HMM framework for offline HCTR. First, a conventional deep neural network (DNN) architecture is studied with a well-designed feature extractor. As for the training procedure, label refinement using forced alignment and sequence training yield significant gains on top of the frame-level cross-entropy criterion. Second, a deep convolutional neural network (DCNN) with automatically learned discriminative features demonstrates its superiority to the DNN in the HMM framework. Moreover, to solve the challenging problem of distinguishing highly confusable classes arising from the large vocabulary of Chinese characters, the NN-based classifier outputs 19,900 HMM states as classification units via high-resolution modeling within each character. On the ICDAR 2013 competition task of the CASIA-HWDB database, DNN-HMM yields a promising character error rate (CER) of 5.24% by making a good trade-off between computational complexity and recognition accuracy. To the best of our knowledge, DCNN-HMM achieves the best published CER of 3.53%.

6.
This paper presents a novel technique for recognizing broken characters found in degraded text documents by modeling the reconstruction as a set-partitioning problem (SPP). The proposed technique searches for the optimal set-partition of the connected components in which each subset yields a reconstructed character. Given the non-linear nature of the objective function needed for optimal set-partitioning, we design an algorithm that we call Heuristic Incremental Integer Programming (HIIP). The algorithm employs integer programming (IP) with an incremental approach, using heuristics to hasten convergence. The objective function is formulated as probability functions that reflect common OCR measurements: pattern resemblance, sizing conformity, and distance between connected components. We applied the HIIP technique to Thai and English degraded text documents and achieved accuracy rates over 90%. We also compared HIIP against three competing algorithms and achieved higher accuracy in each case.
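The set-partitioning formulation above can be shown on a toy instance: exhaustively enumerate partitions of a few connected components and score each subset by how close its merged bounding box is to an expected character size. The scoring function is an illustrative stand-in for the paper's probability-based objective, and HIIP's incremental integer programming is replaced by brute force for clarity.

```python
EXPECTED_W = 10  # assumed nominal character width

def partitions(items):
    # enumerate all set-partitions of a list (exponential; fine for toy sizes)
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i, subset in enumerate(part):
            yield part[:i] + [subset + [first]] + part[i + 1:]
        yield [[first]] + part

def score(part, boxes):
    # lower is better: each subset's merged width should match EXPECTED_W
    total = 0.0
    for subset in part:
        xs = [x for f in subset for x in (boxes[f][0], boxes[f][1])]
        total += abs((max(xs) - min(xs)) - EXPECTED_W)  # sizing conformity
    return total

# fragments as (x_min, x_max): 0 and 1 are halves of one broken character
boxes = {0: (0, 4), 1: (6, 10), 2: (14, 24)}
best = min(partitions(list(boxes)), key=lambda p: score(p, boxes))
```

The optimal partition merges the two halves of the broken character and keeps the intact one alone, because that grouping is the only one whose sizing cost is zero.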

7.
Addressing the characteristics of ancient Chinese documents, column and character segmentation methods suited to such documents are proposed. The column segmentation method analyzes the stroke projections of the document directly, using a recursive segmentation algorithm based on hierarchical projection filtering and variable-length gap thresholds. The algorithm extracts columns accurately even when column gaps are small, columns touch ruling lines, or the document is somewhat skewed, and it performs especially well on short columns. The character segmentation method proceeds in two steps: a coarse segmentation determines approximate cut positions, and a fine segmentation refines them using connected-component analysis and touching-point detection. The algorithm segments complete characters well even in columns containing many touching and overlapping characters. Experimental results show that the proposed methods achieve good results for segmenting ancient Chinese documents.

8.
Character Extraction from Complex Color Text Images. Cited by: 4 (self-citations: 1, others: 4)
Extracting and recognizing characters from complex color text images has become a difficult and interesting problem. This paper presents a novel and practical region-growing algorithm for color image segmentation: the color run-length adjacency graph (CRAG) algorithm. Applied to a color text image, the algorithm first obtains the color connected components of the image; the average colors of these components are then clustered to obtain several cluster centers; the image is divided into color layers according to the different color centers; and finally the text layer is determined by connected-component analysis. The growing algorithm modifies and extends the traditional BAG algorithm and applies it to color printed text images, making full use of the color and position information in the image. Experimental results show that the new method extracts many common decorative typefaces from color printed images well and at high speed, while preserving the original colors of the text and background for future image restoration.
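The color-layer step of the pipeline above can be sketched with a tiny clustering routine: the average colors of connected components are clustered, and each component is assigned to the layer of its nearest cluster center. A minimal k-means stands in for the paper's (unspecified) color clustering; the value of k and the toy RGB values are assumptions.

```python
def kmeans(points, k, iters=20):
    # naive k-means on RGB tuples; initialization uses the first k points
    centers = points[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[j].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

# average RGB colors of connected components: red-ish text, white-ish background
colors = [(250, 10, 10), (245, 20, 15), (255, 5, 5),
          (250, 250, 250), (240, 245, 248)]
centers = kmeans(colors, 2)
```

With k = 2 the centers converge to one red-ish and one white-ish color, so components assigned to the red center form the text layer while the rest form the background layer.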

9.
To improve the accuracy of classical object detection algorithms for localizing text in natural scenes, and to overcome the mis-segmentation of Chinese characters caused by the disconnected strokes that trip up traditional character detection models, a direct and efficient approximate localization method for Chinese characters in natural scenes is proposed. The classical EAST algorithm detects text in the scene image. The initially detected text boxes are then adjusted to enclose the text more tightly and completely, mainly through three parts: extraction of connected stroke components, Chinese character segmentation, and text shape approximation. Finally, text regions are rectified and their content recognized. Experimental results show that the proposed algorithm maintains an average frame rate of 3.1 frames/s while achieving F-scores of 83.5%, 72.8%, and 81.1% on the text localization tasks of the multi-oriented datasets ICDAR2015, ICDAR2017-MLT, and MSRA-TD500, respectively; ablation studies verify the effectiveness of each module. Performance on the combined detection and recognition evaluation on ICDAR2015 also shows that the method outperforms some state-of-the-art methods.

10.
This paper presents a text block extraction algorithm that takes as its input a set of text lines of a given document, and partitions the text lines into a set of text blocks, where each text block is associated with a set of homogeneous formatting attributes, e.g. text-alignment, indentation. The text block extraction algorithm described in this paper is probability based. We adopt an engineering approach to systematically characterizing the text block structures based on a large document image database, and develop statistical methods to extract the text block structures from the image. All the probabilities are estimated from an extensive training set of various kinds of measurements among the text lines, and among the text blocks in the training data set. The off-line probabilities estimated in training then drive all decisions in the on-line text block extraction. An iterative, relaxation-like method is used to find the partitioning solution that maximizes the joint probability. To evaluate the performance of our text block extraction algorithm, we used a three-fold validation method and developed a quantitative performance measure. The algorithm was evaluated on the UW-III database of some 1600 scanned document image pages. The text block extraction algorithm identifies and segments 91% of text blocks correctly.

11.
Continuous data acquisition causes some characters to contain ligature (connecting) strokes, which lowers the character recognition rate. A spatial handwritten character recognition method based on ligature elimination is therefore proposed. The spatial handwritten character is flattened onto a plane, and corner points and stroke direction features are extracted. To avoid erroneous stroke removal, a support vector machine classifies unknown characters as ligatured or non-ligatured; ligatures are then removed according to their writing characteristics, the spatial character trajectory is converted into a planar one, and an existing planar character classifier performs recognition directly. Experimental results show that the method removes ligatures effectively and achieves a high recognition rate using existing character databases.

12.
赵栋材 《微处理机》2012,33(5):35-38,43
Characters in woodblock-printed Tibetan scriptures frequently touch, break, or occlude one another, which makes recognition extremely difficult. On top of standard recognition techniques such as character segmentation and feature extraction, a BP neural network training stage is added; training on a large number of woodblock Tibetan scripture characters corrects the data and makes the recognition results converge. Experimental results show that this method helps improve the character recognition accuracy for woodblock-printed Tibetan scriptures.

13.
To address the difficulty of segmenting mixed Korean and Chinese characters in the digitization of old Korean books, a character segmentation algorithm for old Korean book images is proposed. For problems such as discontinuous separator lines between columns, skew, and touching, a column segmentation method based on connected-component projection is proposed. Characters are segmented using connected-component deletion, merging, and splitting operations. A multi-step segmentation method handles images with characters of varying sizes and mixed horizontal and vertical layouts. Touching characters are segmented effectively with an improved drop-falling algorithm. Experimental results show that the proposed algorithm handles old Korean book images with mixed Korean and Chinese scripts, varying character sizes, and complex layouts well, achieving a column segmentation accuracy of 97.69% and a character segmentation accuracy of 87.79%.

14.
This work proposes a novel adaptive approach for character segmentation and feature vector extraction from seriously degraded images. A histogram-based algorithm automatically detects fragments and merges them before segmenting the fragmented characters. A morphological thickening algorithm automatically locates reference lines for separating overlapped characters, and a morphological thinning algorithm together with a segmentation cost calculation automatically determines the baseline for segmenting connected characters. The approach thus detects fragmented, overlapped, or connected characters and adaptively applies one of the three algorithms without manual fine-tuning. Seriously degraded images, such as license plate images taken from the real world, are used in the experiments to evaluate the robustness, flexibility, and effectiveness of the approach. The feature vectors output by the system preserve useful information accurately for use as input to an automatic pattern recognition system.

15.
16.
An Integrated Character Segmentation and Recognition Algorithm for Video. Cited by: 3 (self-citations: 0, others: 3)
杨武夷  张树武 《自动化学报》2010,36(10):1468-1476
The technical difficulty of recognizing text-line images in video comes mainly from two sources: 1) segmenting and recognizing touching characters; 2) segmenting and recognizing characters against complex backgrounds. To segment and recognize characters in both situations simultaneously, an integrated character segmentation and recognition algorithm is proposed. The integrated algorithm first binarizes the text-line image and estimates the text-line height from the horizontal projection of the binarized image. Then, according to the degree to which character strokes touch, wide connected components in the binary image are split based on image analysis or character recognition. Candidate recognition results are obtained by combining connected components through character recognition; finally, a word graph is built from the candidate results, and the character recognition result is selected from the word graph using a language model. Experiments show that the integrated algorithm greatly reduces the recognition error rate for touching characters and for characters in complex backgrounds.
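The final stage described above, selecting a result from the word graph with a language model, can be sketched as a Viterbi search over a small lattice. The lattice edges, recognition scores, and bigram probabilities below are toy assumptions standing in for real recognizer and language model outputs.

```python
import math

# word graph: edges (start_node, end_node, char, recognition_score)
edges = [
    (0, 1, "中", 0.9), (0, 1, "申", 0.8),
    (1, 2, "国", 0.9), (1, 2, "固", 0.85),
]
bigram = {("中", "国"): 0.5, ("申", "国"): 0.01,
          ("中", "固"): 0.01, ("申", "固"): 0.01}

def best_path(edges, bigram, n_nodes=3):
    # Viterbi over the word graph; states are keyed by (node, last char)
    # so the bigram language model is applied exactly
    states = {(0, None): (0.0, "")}
    for node in range(1, n_nodes):
        for s, e, ch, p in edges:
            if e != node:
                continue
            for (sn, last), (lp, seq) in list(states.items()):
                if sn != s:
                    continue
                lm = 1.0 if last is None else bigram.get((last, ch), 1e-4)
                cand = (lp + math.log(p) + math.log(lm), seq + ch)
                key = (node, ch)
                if key not in states or cand[0] > states[key][0]:
                    states[key] = cand
    return max(v for (n, _), v in states.items() if n == n_nodes - 1)[1]
```

Even though "申" is a plausible single-character reading, the language model score of the bigram "中国" pulls the best path toward the correct string, which is exactly how the word graph disambiguates confusable segmentations.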

17.
Formation Rules of Chinese Character Stroke Segments and a Method for Their Extraction. Cited by: 8 (self-citations: 0, others: 8)
Starting from runs of connected pixels in the rows (columns) of bitmap images, this paper studies the stroke-segment composition of Chinese character images and finds that such images consist of only two types of stroke segments, stepped segments and long parallel segments, and summarizes the formation rules of both. Based on these rules, a stroke-segment extraction method is proposed that converts a pixel-level character image into an image whose units are stroke segments, which benefits Chinese character recognition, thinning, and automatic font generation. Finally, experimental results of stroke-segment extraction for printed and handwritten Chinese characters are given.

18.
Research and Development of a Multi-Font Printed Uyghur Character Recognition System. Cited by: 2 (self-citations: 0, others: 2)
This paper introduces the characteristics of the Uyghur script and a Uyghur character recognition system. Focusing on the cursive, connected structure of Uyghur, it discusses the main technical difficulties of the recognition process: projection is used to separate the letters within a connected segment, and a segment-and-recognize-simultaneously strategy performs segmentation and classification of the text image; outline features are extracted, and after training on sample pages, satisfactory recognition results for Uyghur characters are obtained.

19.
We consider an online string matching problem in which we find all the occurrences of a pattern of m characters in a text of n characters, where all the characters of the pattern are available before processing, while the characters of the text are input one after the other. We propose a space-time optimal parallel algorithm for this problem using a neural network approach. This algorithm uses m McCulloch-Pitts neurons connected as a linear array. It processes every input character of the text in one step and hence requires at most n iteration steps.
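The linear-array construction above can be simulated sequentially: the m units act as a shift register in which unit i fires iff unit i-1 fired on the previous step and the new text character equals pattern[i], and a firing last unit signals an occurrence. This mirrors the McCulloch-Pitts construction only in spirit; the weights and thresholds are folded into boolean logic here.

```python
def match_online(pattern, text):
    # online matcher: one text character consumed per step, as in the paper
    m = len(pattern)
    state = [False] * m               # firing state of the m units
    hits = []
    for step, ch in enumerate(text):
        new = [False] * m
        new[0] = (ch == pattern[0])   # first unit watches pattern[0]
        for i in range(1, m):
            # unit i fires iff its predecessor fired and ch matches pattern[i]
            new[i] = state[i - 1] and (ch == pattern[i])
        state = new
        if state[m - 1]:
            hits.append(step - m + 1)  # start index of the occurrence
    return hits
```

Each text character triggers exactly one parallel update of the array, so n characters take n steps, matching the stated time bound; overlapping occurrences are reported naturally because the register keeps all partial matches alive.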

20.
This paper proposes a method for computing the visual center of gravity of Chinese character glyphs. First, image samples of commonly used Chinese characters are collected and, after image preprocessing, the visual balance centers of the sample characters' connected regions are extracted. Human subjects are then recruited to annotate the visual centers of gravity of the sample characters. Finally, statistical modeling is used to build a model relating the connected-region balance centers to the overall visual center of gravity of a character. Compared with related methods, the proposed method accounts for the fact that the visual center of gravity of a Chinese character depends on human subjective perception. The method can be widely applied in Chinese character feature extraction and in the design and optimization of character structure.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号