Similar Documents
1.
This paper presents a novel local thresholding algorithm for the binarization of document images. The stroke width of handwritten and printed characters is used as a shape feature, so that, in addition to intensity analysis, the proposed algorithm introduces stroke width as shape information into local thresholding. Experimental results on both synthetic and real document images show that the proposed algorithm yields better segmentation quality than threshold approaches that use intensity information alone.
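The stroke-width cue can be sketched simply; the following is an illustrative estimator (the mode of horizontal foreground run lengths over a binary image), not the authors' exact feature, and the function name and input layout are assumptions:

```python
from collections import Counter

def stroke_width(binary_rows):
    """Estimate the dominant stroke width as the most common horizontal
    run length of foreground (1) pixels across all rows."""
    runs = Counter()
    for row in binary_rows:
        run = 0
        for p in list(row) + [0]:  # sentinel 0 closes any trailing run
            if p:
                run += 1
            elif run:
                runs[run] += 1
                run = 0
    return runs.most_common(1)[0][0] if runs else 0
```

A local threshold could then favor foreground components whose run lengths match this estimate.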

2.
Binary image representation is an essential format for document analysis. In general, different binarization techniques are applied to different types of binarization problems. Most binarization techniques are complex, compounded from filters and existing operations, while the few simple thresholding methods available cannot be applied to many binarization problems. In this paper, we propose a local binarization method based on a simple, novel thresholding scheme with dynamic and flexible windows. The proposed method is tested on the DIBCO 2009 benchmark dataset using evaluation techniques specialized for binarization. To evaluate its performance, we compared it with the Niblack, Sauvola and NICK methods. The experiments show that the proposed method adapts well to all types of binarization challenges, can handle a larger number of binarization problems and boosts overall binarization performance.
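Sauvola's method, one of the baselines compared above, computes a per-window threshold T = m·(1 + k·(s/R − 1)) from the window mean m and standard deviation s. A minimal sketch (the values of k and R are the conventional defaults, not taken from this paper):

```python
import statistics

def sauvola_threshold(window, k=0.2, R=128):
    """Sauvola local threshold for one window of grayscale pixels.

    T = m * (1 + k * (s / R - 1)), where m is the window mean,
    s its standard deviation, and R the dynamic range of s.
    Pixels below T are classified as text (foreground).
    """
    m = statistics.fmean(window)
    s = statistics.pstdev(window)
    return m * (1 + k * (s / R - 1))
```

On a flat background window the low standard deviation pulls T well below the mean, so faint background variation is not misread as text.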

3.
4.
Marginal noise is a common phenomenon in document analysis, resulting from the scanning of thick or skewed documents. It usually appears as a large dark region around the margin of a document image and may cover meaningful objects such as text, graphics and forms. The overlap of marginal noise with meaningful objects makes segmentation and recognition of document objects difficult. This paper proposes a novel approach to removing marginal noise, consisting of two steps: marginal noise detection and marginal noise deletion. Detection first reduces the original document image to a smaller image, then finds marginal noise regions according to the shape, length and location of the split blocks. After detection, different removal methods are applied: a local thresholding method for gray-scale document images and a region-growing method for binary document images. Experiments with a wide variety of test samples demonstrate the feasibility and effectiveness of the proposed approach in removing marginal noise.
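A toy version of the detection idea, scanning inward from the page edges past predominantly dark columns, might look like this (the function name and the `dark_ratio` parameter are illustrative, not the paper's):

```python
def content_span(binary_img, dark_ratio=0.9):
    """Scan inward from the left and right edges and skip columns whose
    foreground (dark, 1) ratio exceeds dark_ratio; returns the [left, right)
    column range that survives as page content."""
    h, w = len(binary_img), len(binary_img[0])

    def is_dark(c):
        return sum(row[c] for row in binary_img) / h >= dark_ratio

    left = 0
    while left < w and is_dark(left):
        left += 1
    right = w
    while right > left and is_dark(right - 1):
        right -= 1
    return left, right
```

The columns outside the returned span would then be handed to the thresholding or region-growing removal step.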

5.
In this work, a multi-scale binarization framework is introduced that can be used with any adaptive threshold-based binarization method. The framework improves binarization results and restores weak connections and strokes, especially in degraded historical documents, thanks to its localized nature in the spatial domain. It requires several binarizations at different scales, which is addressed by the introduction of fast grid-based models; this lets us explore high scales that are usually unreachable for traditional approaches. To expand the set of adaptive methods, an adaptive modification of Otsu's method, called AdOtsu, is introduced. In addition, to restore document images suffering from bleed-through degradation, the framework is combined with recursive adaptive methods. The framework shows promising performance in subjective and objective evaluations on available datasets.
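For reference, the classical Otsu method that AdOtsu adapts picks the threshold maximizing the between-class variance of the gray-level histogram. A plain histogram-based sketch (the global method, not the adaptive AdOtsu variant itself):

```python
def otsu_threshold(pixels, levels=256):
    """Global Otsu threshold: maximize between-class variance
    w0 * w1 * (m0 - m1)^2 over all candidate thresholds t."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]          # pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0        # pixels above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

An adaptive variant would run this per window or grid cell rather than globally.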

6.
A segmentation algorithm using a water flow model [Kim et al., Pattern Recognition 35 (2002) 265–277] has previously been presented, in which a document image is efficiently divided into two regions, characters and background, owing to the property of locally adaptive thresholding. However, that method provides no criterion for stopping the iterative process and requires a long processing time; moreover, characters on poor-contrast backgrounds often fail to be separated successfully. To overcome these drawbacks, the current paper presents an improved approach that includes extraction of regions of interest (ROIs), an automatic stopping criterion and hierarchical thresholding. Experimental results show that the proposed method achieves satisfactory binarization quality, especially for document images with a poor-contrast background, and is significantly faster than the existing method.

7.
Large numbers of duplicate images in a database not only degrade learner performance but also waste storage space. For mass image deduplication, a duplicate-image detection algorithm based on pHash with block-wise local probing is proposed. First, pHash values are generated for all images. Second, each pHash value is divided into several equal-length parts; if two images agree on any one pHash part, they are possibly duplicates. Finally, the transitivity of image duplication is discussed, and the algorithm is implemented for both the transitive and non-transitive cases. Experimental results show that the proposed algorithm is highly efficient on massive image sets: with the similarity threshold set to 13, the transitive variant deduplicates nearly 300,000 images in only 2 minutes, with an accuracy of 53%.
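The block-matching idea, where two images become duplicate candidates if any one segment of their pHash values agrees and transitivity is resolved by union-find, can be sketched as follows (splitting into four parts and the union-find grouping are illustrative choices, not the paper's exact implementation):

```python
def split_hash(h, parts=4):
    """Split a hex hash string into `parts` equal-length segments."""
    step = len(h) // parts
    return [h[i * step:(i + 1) * step] for i in range(parts)]

def group_duplicates(hashes):
    """Union-find over images: any shared segment at the same position
    links two images (the 'transitive' variant of the abstract)."""
    parent = list(range(len(hashes)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    seen = {}  # (part_index, segment) -> first image holding it
    for i, h in enumerate(hashes):
        for j, seg in enumerate(split_hash(h)):
            key = (j, seg)
            if key in seen:
                parent[find(i)] = find(seen[key])
            else:
                seen[key] = i
    groups = {}
    for i in range(len(hashes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

The non-transitive variant would instead compare each candidate pair directly against the full-hash similarity threshold.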

8.
9.
Fast lung parenchyma segmentation algorithm for CT images based on automatic thresholding
A basic framework for lung parenchyma segmentation is first presented: combining the optimal thresholding method with mathematical morphology, a coarse segmentation is performed. Then, for cases where the left and right lungs are not fully separated, a fast adaptive refined segmentation method is proposed. Experiments on clinical chest CT images show that the segmentation results are close to manual segmentation by experts, with particularly good separation of the left and right lungs; the success rate reaches 94.8%.

10.
陈霞, 王希常, 张华英, 刘江. 《计算机应用》 (Journal of Computer Applications), 2011, 31(9): 2378-2381
To address copyright protection of binary document images, a zero-watermarking algorithm for binary document images is proposed. The algorithm uses the local binary pattern (LBP) of the document image to construct a texture-spectrum image, and then uses the histogram of that texture-spectrum image to construct the zero-watermark information. Compared with other document-image watermarking algorithms, this one offers better imperceptibility and does not alter the original image. Watermark attacks including image cropping, noise addition and rotation were tested; experimental analysis shows these attacks have little effect on the zero-watermark information, with a minimum normalized correlation coefficient above 0.85, demonstrating good robustness.
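The LBP operator on which the texture spectrum is built assigns each pixel an 8-bit code by comparing it with its 3×3 neighbours; a minimal sketch (the clockwise bit ordering is an arbitrary convention here):

```python
def lbp_code(patch):
    """8-bit LBP code for the centre of a 3x3 grayscale patch:
    each neighbour >= centre contributes one bit, clockwise from
    the top-left corner."""
    c = patch[1][1]
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << bit
    return code
```

Histogramming these codes over the whole image gives the texture spectrum from which the zero-watermark is derived.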

11.
Among various thresholding methods, minimum cross entropy is widely implemented for its effectiveness and simplicity. Although it is efficient and gives excellent results for bi-level thresholding, its evaluation becomes computationally costly when extended to multilevel thresholding, owing to the exhaustive search performed for the optimal threshold values. In this paper, an efficient multilevel thresholding technique based on the cuckoo search algorithm is therefore adopted to make multilevel minimum cross entropy more practical and to reduce its complexity. Experiments were conducted on different color images, including natural and satellite images exhibiting low resolution, complex backgrounds and poor illumination. The feasibility and efficiency of the proposed approach are investigated through an extensive comparison with multilevel minimum-cross-entropy-based methods optimized using artificial bee colony, bacterial foraging optimization, differential evolution and wind driven optimization. In addition, the proposed approach is compared with thresholding techniques based on the between-class variance (Otsu) method and the Tsallis entropy function. Qualitative results and various fidelity parameters show that the proposed approach selects optimal threshold values more efficiently and accurately than the compared techniques and produces segmented images of high quality.
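For the bi-level case, the minimum-cross-entropy objective and the exhaustive search that makes the multilevel extension costly can be sketched as follows (a direct per-pixel formulation chosen for clarity, not the usual histogram-based implementation; pixel values must be positive for the logarithms):

```python
import math

def cross_entropy(pixels, t):
    """Cross entropy of splitting positive gray levels at threshold t,
    with each class represented by its mean."""
    below = [p for p in pixels if p < t]
    above = [p for p in pixels if p >= t]
    if not below or not above:
        return float("inf")
    mu1 = sum(below) / len(below)
    mu2 = sum(above) / len(above)
    return (sum(p * math.log(p / mu1) for p in below) +
            sum(p * math.log(p / mu2) for p in above))

def mce_threshold(pixels):
    """Exhaustive search for the single threshold minimizing cross entropy;
    for k thresholds this brute force grows combinatorially, which is what
    metaheuristics such as cuckoo search are brought in to avoid."""
    return min(range(1, 256), key=lambda t: cross_entropy(pixels, t))
```

A cuckoo-search version would evaluate the same objective on candidate threshold vectors instead of enumerating every combination.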

12.
Double-threshold image segmentation based on maximum fuzzy entropy and particle swarm optimization
Based on the maximum fuzzy entropy criterion and the particle swarm optimization (PSO) algorithm, a new double-threshold image segmentation method is proposed. By defining three fuzzy membership functions, the method fuzzily partitions an image into three regions: dark, gray and bright. PSO is used to search for the optimal combination of fuzzy parameters under the maximum fuzzy entropy criterion, thereby determining the two optimal segmentation thresholds. Simulation results show that the algorithm achieves good segmentation quality and strong real-time processing capability.

13.
To reduce the complexity of gradient-based edge detection algorithms, two gradient approximations are commonly used. However, these approximations are strongly affected by edge direction, which degrades edge detection performance. This paper presents a mathematical model of generic gradient approximation together with two optimization criteria, from which two optimized gradient approximation algorithms are derived. Analysis shows that, compared with the common algorithms, the optimized algorithms improve isotropy by a factor of 4.4 and the accuracy of the gradient-magnitude approximation by a factor of 57. A simple and fast implementation of the optimized algorithms is also given.
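The paper's optimized coefficients are not reproduced here; as an illustration of the same idea, the classic alpha-max-plus-beta-min approximation greatly reduces the direction dependence of the common |Gx|+|Gy| estimate (the coefficient values below are the well-known minimax constants, an assumption in this context):

```python
import math

def grad_exact(gx, gy):
    """Exact gradient magnitude sqrt(Gx^2 + Gy^2)."""
    return math.hypot(gx, gy)

def grad_sum(gx, gy):
    """Common approximation |Gx| + |Gy|: exact on the axes,
    about 41% too high on diagonals."""
    return abs(gx) + abs(gy)

def grad_amb(gx, gy, alpha=0.96043387, beta=0.39782473):
    """Alpha-max-plus-beta-min approximation: minimax coefficients
    keep the relative error below roughly 4% at any edge angle."""
    gx, gy = abs(gx), abs(gy)
    return alpha * max(gx, gy) + beta * min(gx, gy)

def worst_relative_error(approx, steps=360):
    """Worst-case error of an approximation over unit gradients
    sampled at all directions."""
    worst = 0.0
    for k in range(steps):
        a = math.pi * k / steps
        worst = max(worst, abs(approx(math.cos(a), math.sin(a)) - 1.0))
    return worst
```

Sweeping the direction like this is exactly how the anisotropy of an approximation is measured.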

14.
谢凤英, 姜志国, 汪雷. 《计算机应用》 (Journal of Computer Applications), 2006, 26(7): 1587-1589
For complex document images with unknown scanning backgrounds that contain figures and tables, an effective skew detection method is proposed. Through statistical analysis of the gradient image, the method first adaptively selects a characteristic subregion containing text. Within this subregion, the blank strip between text lines is treated as an implicit line, and optimization theory is used to compute the skew angle of the blank strip, which is the skew angle of the document. Experimental results show that the method is unaffected by scanning background, border size, text layout or line spacing, and is fast, accurate and robust.

15.
Insufficient spatial resolution of hyperspectral images easily leads to excessive false-alarm rates in anomaly detection; to address this, a new anomaly detection algorithm is proposed. The algorithm first applies principal component analysis (PCA) to extract the principal components of the low-resolution hyperspectral image, then applies the IHS transform to both the extracted principal components and the high-resolution image to obtain their respective intensity components. Exploiting the invertibility of the IHS transform, the new intensity component of the hyperspectral data is combined with the original hue (H) and saturation (S) components through an inverse IHS transform, yielding hyperspectral image data with enhanced spatial information. Finally, an improved KwRX algorithm performs anomaly detection on the enhanced data. Simulation experiments show that, compared with the KRX and PCA-KRX algorithms, the proposed algorithm achieves considerable improvements in both the number of detected target pixels and the number of false alarms, demonstrating its effectiveness and feasibility.
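In a simple additive IHS model with I = (R+G+B)/3, substituting a new intensity while preserving hue and saturation amounts to shifting all three channels equally. A toy sketch of that substitution step only, not the full PCA/KwRX pipeline (the function name is illustrative):

```python
def ihs_fuse(rgb_pixel, new_intensity):
    """Intensity substitution in the additive IHS model: I = (R+G+B)/3;
    replacing I by new_intensity shifts each channel by the same amount,
    leaving the channel differences (hue/saturation cues) unchanged."""
    r, g, b = rgb_pixel
    i = (r + g + b) / 3
    d = new_intensity - i
    return (r + d, g + d, b + d)
```

Applying this per pixel with the high-resolution intensity is what injects spatial detail into the low-resolution data before detection.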

16.
A fusion algorithm for multi-temporal images based on change detection
李小春, 陈鲸. 《计算机应用》 (Journal of Computer Applications), 2005, 25(6): 1310-1312
Based on an analysis of the characteristics of multi-temporal images, a change-detection-based fusion algorithm for multi-temporal images is proposed. The algorithm combines wavelet-transform feature extraction with change detection by ICA subspace mapping to determine the strength of change in each region of the multi-temporal images. Using the proposed co-occurrence region-growing algorithm together with the change detection results, a fusion scheme for multi-temporal images is developed to extract target feature templates. Simulation results show that the algorithm is effective.

17.
Detecting change areas among two or more remote sensing images is a key technique in remote sensing. It usually consists of generating and analyzing a difference image to produce a change map. Analyzing the difference image to obtain the change map is essentially a binary classification problem and can be solved by optimization algorithms. This paper proposes an accelerated genetic algorithm based on search-space decomposition (SD-aGA) for change detection in remote sensing images. First, the BM3D algorithm is used to preprocess the remote sensing images to enhance useful information and suppress noise; the difference image is then obtained using the logarithmic ratio method. Second, after saliency detection, the fuzzy c-means algorithm is applied to the salient region of the difference image to identify changed, unchanged and undetermined pixels. Only the undetermined pixels are considered by the optimization algorithm, which reduces the search space significantly. Inspired by the divide-and-conquer strategy, the difference image is decomposed into sub-blocks with a method similar to down-sampling, and the undetermined pixels in each sub-block are analyzed and optimized by SD-aGA in parallel, their category labels optimized according to an improved objective function with neighborhood information. Finally, the decided category labels of all pixels in the sub-blocks are remapped to their original positions in the difference image and merged globally; decision fusion is conducted on each pixel based on the decision results in its local neighborhood to produce the final change map. The proposed method is tested on six diverse remote sensing image benchmark datasets and compared against six state-of-the-art methods; segmentations of a synthetic image and a natural image corrupted by different types of noise are also carried out for comparison. The results demonstrate the excellent performance of the proposed SD-aGA in handling noise and detecting changed areas accurately. In particular, compared with the traditional genetic algorithm, SD-aGA achieves much higher detection accuracy with much less computational time.
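The logarithmic-ratio difference image used in the first step can be sketched per pixel (the `eps` guard against a log of zero is an illustrative detail, not from the paper):

```python
import math

def log_ratio(img1, img2, eps=1e-6):
    """Per-pixel absolute log-ratio difference image: near 0 where
    nothing changed, large where intensities differ strongly. The log
    makes the measure robust to multiplicative (speckle-like) noise."""
    return [[abs(math.log((b + eps) / (a + eps)))
             for a, b in zip(r1, r2)] for r1, r2 in zip(img1, img2)]
```

Thresholding or clustering this map is what yields the changed/unchanged/undetermined partition described above.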

18.
Using fMRI images based on the BOLD (Blood Oxygenation Level Dependent) effect and a two-compartment model, the cerebral oxygen extraction fraction (OEF) can be computed quantitatively, which has considerable clinical value for the prediction and diagnosis of cerebrovascular disease. However, because BOLD fMRI images have a low signal-to-noise ratio, designing an effective denoising algorithm for them, and thereby improving the accuracy of the OEF computation, is an urgent problem. Accordingly, an adaptive-threshold wavelet denoising method based on Bayesian estimation was designed to analyze and denoise BOLD fMRI images, and the resulting images were used to compute OEF values. Experimental results show that the method effectively improves the accuracy of the OEF computation.
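Wavelet soft thresholding with a BayesShrink-style threshold T = σn²/σx belongs to the same family as the designed method; a minimal sketch of the standard formulation (not the paper's exact adaptive rule):

```python
import math

def soft_threshold(coeffs, t):
    """Shrink each wavelet coefficient toward zero by t (soft thresholding)."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def bayes_shrink_threshold(coeffs, sigma_noise):
    """BayesShrink-style subband threshold T = sigma_n^2 / sigma_x, where
    sigma_x^2 is the signal variance estimated by subtracting the noise
    variance from the observed coefficient variance."""
    var_y = sum(c * c for c in coeffs) / len(coeffs)
    var_x = max(var_y - sigma_noise ** 2, 1e-12)
    return sigma_noise ** 2 / math.sqrt(var_x)
```

In a full pipeline the threshold is computed per subband of the wavelet decomposition, the detail coefficients are shrunk, and the image is reconstructed before the OEF fit.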

19.
Sample adaptive offset (SAO) is one of the more time-consuming parts of in-loop filtering in the second-generation Audio Video coding Standard (AVS2) and in High Efficiency Video Coding (HEVC). To address the heavy computation and high complexity of existing SAO algorithms, an improved fast rate-distortion algorithm is proposed. By analyzing the relationship between the different offset values under each edge mode and the corresponding rate-distortion changes, the algorithm modifies the originally defined mapping table between offset values and the binary symbol strings written to the bitstream, and sets an early-termination condition so that the optimal offset for the current SAO unit can be found quickly without computing the rate-distortion cost of every offset value. Experimental results show that, compared with the reference computation under AVS2, the improved algorithm keeps the rate-distortion performance essentially unchanged while reducing the computation needed to find the optimal offset, cutting loop iterations by 75% and in-loop filtering run time by 33%, thereby lowering computational complexity.

20.

To address the lack of color information in fused grayscale visible and infrared images and the insufficient statistical independence of higher-order image information in the transform domain, a fusion method based on independent component analysis (ICA) and the IHS (intensity-hue-saturation) transform domain is proposed. Exploiting the ability of the IHS transform domain to separate the intensity component from the color information, a color-transfer model is built for the grayscale visible image. Image fusion based on ICA and the IHS transform domain is then performed using the independence of the components, yielding the final color fused image, which better matches human visual requirements. Simulation experiments verify the effectiveness of the proposed algorithm.

