Similar Literature
20 similar articles found.
1.
Adaptive multilevel rough entropy evolutionary thresholding   (Total citations: 1; self-citations: 0; by others: 1)
In this study, comprehensive research into rough set entropy-based thresholding techniques for image segmentation has been performed, producing new and robust algorithmic schemes. Segmentation is the low-level image transformation routine that partitions an input image into distinct, disjoint, and homogeneous regions. Thresholding algorithms are most often applied in practice when there is a pressing need for implementation simplicity, high segmentation quality, and robustness. Combining entropy-based thresholding with rough sets yields the rough entropy thresholding (RET) algorithm. The authors propose a new algorithm based on granular multilevel rough entropy evolutionary thresholding (MRET) that operates on a multilevel domain. The performance of the MRET algorithm has been compared with the iterative RET algorithm and standard k-means clustering, using the β-index as a representative validation measure. The experimental assessment suggests that MRET segmentations are of high quality, comparable with and often better than k-means clustering-based segmentations. In this context, the MRET algorithm is suitable for segmentation tasks that seek solutions incorporating spatial data features with particular characteristics.
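As a rough illustration of the granular idea behind RET (not the authors' multilevel evolutionary MRET), the sketch below searches for the gray level that maximizes rough entropy over fixed-size granules; the granule size, the 8-bit gray range, and the e/2 factor follow the common single-level RET formulation and are assumptions here.

```python
import numpy as np

def rough_entropy_threshold(img, granule=8):
    """Pick the gray level that maximizes rough entropy.

    The image is split into granule x granule blocks. For a candidate
    threshold T, a block is in the object's lower approximation if every
    pixel <= T, and in its upper approximation if any pixel <= T
    (symmetrically for the background)."""
    h, w = img.shape
    blocks = [img[r:r + granule, c:c + granule]
              for r in range(0, h, granule)
              for c in range(0, w, granule)]
    mins = np.array([b.min() for b in blocks])
    maxs = np.array([b.max() for b in blocks])

    best_t, best_re = 0, -np.inf
    for t in range(1, 255):
        obj_lower = np.sum(maxs <= t)        # blocks entirely below T
        obj_upper = np.sum(mins <= t)        # blocks at least partly below T
        bg_lower = np.sum(mins > t)
        bg_upper = np.sum(maxs > t)
        r_obj = 1.0 - obj_lower / max(obj_upper, 1)   # object roughness
        r_bg = 1.0 - bg_lower / max(bg_upper, 1)      # background roughness
        re = -(np.e / 2.0) * (r_obj * np.log(max(r_obj, 1e-12)) +
                              r_bg * np.log(max(r_bg, 1e-12)))
        if re > best_re:
            best_t, best_re = t, re
    return best_t
```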

2.
An adaptive thresholding algorithm for low-contrast images   (Total citations: 2; self-citations: 0; by others: 2)
Commonly used threshold segmentation methods do not produce good results when segmenting images with uneven illumination. This paper proposes applying wavelet multiresolution filtering and using the resulting low-pass image as an adaptive, per-pixel threshold for binarization, which yields fairly good results. The algorithm has been deployed in an online OCR system for firearm identification.
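A minimal sketch of this idea using PyWavelets: decompose the image, discard the detail bands, and use the reconstructed low-pass image as a per-pixel threshold. The wavelet, decomposition level, and offset are assumptions, not values from the paper.

```python
import numpy as np
import pywt

def wavelet_adaptive_binarize(img, wavelet="db4", level=3, offset=0.0):
    """Binarize an image against a low-pass version of itself."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    # Zero every detail band; keep only the coarse approximation.
    coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in detail)
                            for detail in coeffs[1:]]
    lowpass = pywt.waverec2(coeffs, wavelet)[:img.shape[0], :img.shape[1]]
    return (img.astype(float) > lowpass + offset).astype(np.uint8)
```

Because the low-pass image tracks the slowly varying illumination, the comparison adapts automatically to unevenly lit regions.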

3.
4.
Segmentation is an important step for obtaining quantitative information from tomographic data sets. However, it is usually not possible to obtain an accurate segmentation with a single, global threshold. Instead, local thresholding schemes can be applied that use a varying threshold. Selecting the best local thresholds is not straightforward, as local image features often do not provide sufficient information for choosing a proper threshold. Recently, the concept of projection distance was proposed by the authors as a new criterion for evaluating the quality of a tomogram segmentation [K.J. Batenburg, J. Sijbers, Automatic threshold selection for tomogram segmentation by reprojection of the reconstructed image, in: Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, vol. 4673, Springer, Berlin/Heidelberg, 2007, pp. 563-570]. In this paper, we describe how projection distance minimization (PDM) can be used to select local thresholds, based on the projection data from which the tomogram was initially computed. The results of several experiments are presented in which our local thresholding approach is compared with alternative thresholding methods. These results demonstrate that the local thresholding approach yields segmentations that are significantly more accurate than previously published methods, in particular when the initial reconstruction contains artifacts.
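A rough illustration of the PDM criterion (not the authors' local algorithm): score a candidate threshold by re-projecting the segmented image and measuring the distance to the measured projections. Real tomography uses a Radon transform over many angles; here simple row and column sums stand in for the projection geometry, and the 128-level threshold grid is an assumption.

```python
import numpy as np

def projection_distance(img, t, proj_rows, proj_cols):
    """Distance between projections of the thresholded image and the data."""
    seg = img > t
    # Assign each class its mean gray value before re-projecting.
    lo = img[~seg].mean() if (~seg).any() else 0.0
    hi = img[seg].mean() if seg.any() else 0.0
    model = np.where(seg, hi, lo)
    return (np.linalg.norm(model.sum(axis=1) - proj_rows) +
            np.linalg.norm(model.sum(axis=0) - proj_cols))

def pdm_threshold(img, proj_rows, proj_cols):
    """Grid-search the threshold minimizing the projection distance."""
    ts = np.linspace(img.min(), img.max(), 128)[1:-1]
    return min(ts, key=lambda t: projection_distance(img, t,
                                                     proj_rows, proj_cols))
```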

5.
Local entropy-based transition region extraction and thresholding   (Total citations: 16; self-citations: 0; by others: 16)
Transition region based thresholding is an approach to image segmentation developed in recent years. Gradient-based transition region extraction methods (G-TREM) are strongly affected by noise. Local entropy, in the information-theoretic sense, reflects the variability of a local region and captures the natural properties of transition regions. In this paper, we present a novel local entropy-based transition region extraction method (LE-TREM), which effectively reduces the effects of noise. Experimental results demonstrate that LE-TREM significantly outperforms the conventional G-TREM.
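A slow but transparent sketch of the LE-TREM idea under stated assumptions: compute the gray-level histogram entropy in a sliding window, call the high-entropy pixels the transition region, and take their mean gray level as the segmentation threshold. The window size, the entropy fraction, and the 8-bit input are assumptions, not the paper's settings.

```python
import numpy as np

def local_entropy_map(img, win=5):
    """Shannon entropy of the gray-level histogram in each window
    (img is assumed uint8; O(N * win^2), written for clarity not speed)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    ent = np.zeros(img.shape)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            patch = padded[r:r + win, c:c + win]
            hist = np.bincount(patch.ravel(), minlength=256) / patch.size
            p = hist[hist > 0]
            ent[r, c] = -np.sum(p * np.log2(p))
    return ent

def le_trem_threshold(img, win=5, frac=0.7):
    """Threshold = mean gray level of the high-entropy (transition) pixels."""
    ent = local_entropy_map(img, win)
    transition = ent >= frac * ent.max()
    return img[transition].mean()
```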

6.
Thresholding based on transition regions is an approach to image segmentation developed in recent years. In this paper, a novel transition region extraction and thresholding method based on gray level difference is proposed by analyzing the properties of transition regions. The gray level difference effectively captures the essence of a transition region. Hence, the proposed algorithm can accurately extract the transition region of an image and produce an ideal segmentation result. The proposed algorithm was compared with two classic transition region-based methods on a variety of synthetic and real-world images, and the experimental results show the effectiveness and efficiency of the algorithm.
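A compact sketch of a gray-level-difference criterion of this kind (window size and the selection fraction are assumptions): the local max-min difference is large exactly where gray levels change, i.e., in transition regions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def gld_threshold(img, win=3, frac=0.5):
    """Transition region = pixels whose local max-min gray difference is
    large; the segmentation threshold is their mean gray level."""
    diff = (maximum_filter(img, size=win).astype(int) -
            minimum_filter(img, size=win).astype(int))  # avoid uint8 wrap
    transition = diff >= frac * diff.max()
    return img[transition].mean()
```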

7.
Li Yuxin, Pan Zhibin, Du Dong, Li Rui. Multimedia Tools and Applications, 2020, 79(27-28): 19575-19593
Image denoising is a widely used technique in image processing that restores images more accurately. In particular, higher-order singular value...

8.
We utilize linear systems theory to establish a theoretical model of the transition region. With this model, we reveal an important property of transition regions, namely the symmetry of their gray level distribution. Using this property, we propose a new thresholding framework based on a stable transition region set, whose elements are equal or close to each other in average gray level. As an example of the proposed framework, we show that feature transformation based on the multiscale gradient multiplication technique is an effective means of estimating the threshold. We have performed subjective and objective comparisons on both synthetic and real images. The experimental results show that the segmentation quality of the proposed approach is superior to three conventional transition region-based thresholding methods.
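A minimal sketch of the multiscale gradient multiplication step only (the scales and selection fraction are assumptions, and the full stable-transition-region machinery of the paper is not reproduced): multiplying gradient magnitudes across scales reinforces persistent edges while suppressing single-scale noise.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def multiscale_gradient_threshold(img, sigmas=(1.0, 2.0, 4.0), frac=0.5):
    """Multiply gradient magnitudes across scales; real edges persist,
    noise does not. The threshold estimate is the mean gray level of the
    pixels where the product response is strong."""
    img = img.astype(float)
    prod = np.ones_like(img)
    for s in sigmas:
        prod *= gaussian_gradient_magnitude(img, sigma=s)
    strong = prod >= frac * prod.max()
    return img[strong].mean()
```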

9.
In this paper, we present an adaptive two-step contourlet-wavelet iterative shrinkage/thresholding (TcwIST) algorithm for remote sensing image restoration. This algorithm can be used to deal with various linear inverse problems (LIPs), including image deconvolution and reconstruction, and is a new version of the well-known two-step iterative shrinkage/thresholding (TwIST) algorithm. First, we use the split Bregman Rudin-Osher-Fatemi (ROF) model, based on a sparse dictionary, to decompose the image into cartoon and texture parts, represented by wavelets and contourlets, respectively. Second, we use an adaptive method to estimate the regularization parameter and the shrinkage threshold. Finally, we use a line search method to find a step length and a fast method to accelerate convergence. Results show that our method achieves an improvement in signal-to-noise ratio (ISNR) for image restoration and high convergence speed.
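For orientation, here is a minimal numpy sketch of the underlying TwIST iteration only, not the paper's adaptive contourlet-wavelet scheme: lam, alpha, and beta are fixed assumptions here, whereas the paper estimates the regularization parameter and threshold adaptively. A and At are caller-supplied function handles for the forward operator and its adjoint.

```python
import numpy as np

def soft(x, lam):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def twist_like(y, A, At, lam=0.05, alpha=1.9, beta=1.0, iters=100):
    """Two-step IST update (Bioucas-Dias & Figueiredo style):
    x_{k+1} = (1-alpha) x_{k-1} + (alpha-beta) x_k
              + beta * soft(x_k + At(y - A(x_k)), lam)"""
    x_prev = At(y)                                  # crude initialization
    x = soft(x_prev + At(y - A(x_prev)), lam)
    for _ in range(iters):
        x_new = ((1 - alpha) * x_prev + (alpha - beta) * x +
                 beta * soft(x + At(y - A(x)), lam))
        x_prev, x = x, x_new
    return x
```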

10.
Due to the lack of color in manga (Japanese comics), black-and-white textures are often used to enrich the visual experience. With the rising need to digitize manga,...

11.
This paper introduces an adaptive, category-dependent normalization method that normalizes an input pattern against each reference pattern using global/local affine transformation (GAT/LAT) in a hierarchical manner as a general deformation model. The normalization criterion is clearly defined as minimization of the mean nearest-neighbor interpoint distance between each reference pattern and the normalized input pattern. The optimal GAT/LAT is determined by iterative application of weighted least-squares fitting. Experiments using input patterns from 3,171 character categories, including Kanji, Kana, and alphanumerics, written by 36 people in cursive style and matched against square-style reference patterns, show that the proposed method not only absorbs a fairly large amount of handwriting fluctuation within the same category, but also greatly improves discrimination ability by suppressing excessive normalization between similarly shaped but different categories. Furthermore, comparative results obtained with a conventional shape normalization method for preprocessing are presented.
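A simplified sketch of the global step only, under stated assumptions: a single affine transform is fit with uniform (unweighted) least squares by alternating nearest-neighbor matching and refitting, whereas the paper's method is hierarchical (GAT followed by LAT) and uses weighted least squares.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_global_affine(src, ref, iters=10):
    """Iteratively align 2-D point set `src` to `ref`: match each src
    point to its nearest ref neighbor, then solve least squares for the
    affine transform minimizing the mean matched distance."""
    tree = cKDTree(ref)
    pts = src.copy()
    A, b = np.eye(2), np.zeros(2)
    for _ in range(iters):
        _, idx = tree.query(pts)              # nearest-neighbor matching
        target = ref[idx]
        # Design matrix rows [x, y, 1]; solve for both output coordinates.
        X = np.hstack([src, np.ones((len(src), 1))])
        sol, *_ = np.linalg.lstsq(X, target, rcond=None)
        A, b = sol[:2].T, sol[2]
        pts = src @ A.T + b                   # re-transform and re-match
    return A, b
```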

12.
In this paper, a color image segmentation approach based on homogram thresholding and region merging is presented. The homogram considers both the occurrence of gray levels and the neighboring homogeneity value among pixels; it therefore employs both local and global information. In the first stage, fuzzy entropy is used as a tool for homogram analysis to find all major homogeneous regions. A region merging process is then carried out based on color similarity among these regions to avoid over-segmentation. The proposed homogram-based approach (HOB) is compared with a histogram-based approach (HIB). The experimental results demonstrate that HOB finds homogeneous regions more effectively than HIB and can, to some extent, solve the problem of discriminating shading in color images.

13.
A texture recognition algorithm based on spatial-domain structure is proposed. The distribution of texture primitives is compared with the parameters of a texture distribution dictionary using correlation and rotation correlation, yielding texture classification results and the best attainable precision of texture similarity; the texture distribution dictionary is then adjusted using the recognized texture parameters, making it adaptive. Experimental results show that textures recognized by this algorithm agree closely with visual recognition results.

14.
Color segmentation is a very popular technique for real-time object tracking. However, even with adaptive color segmentation schemes, tracking tends to be unreliable under varying environmental conditions in video sequences. To overcome this problem, many multiple-cue fusion techniques have been suggested. One cue that complements color nicely is texture; however, texture segmentation has rarely been used for object tracking, mainly because of its computational complexity. This paper presents a formulation for fusing texture and color in a manner that makes the segmentation reliable while keeping the computational cost low, with the goal of real-time target tracking. An autobinomial Gibbs Markov random field is used to model the texture and a 2D Gaussian distribution to model the color. This allows a probabilistic fusion of the texture and color cues and lets both be adapted over time for target tracking. Experiments with both static images and dynamic image sequences establish the feasibility of the proposed approach.
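As an illustration of probabilistic cue fusion only (not the paper's autobinomial MRF model), the sketch below combines a 2-D Gaussian color likelihood with caller-supplied texture likelihood maps via pixel-wise Bayes' rule, assuming the cues are independent; all function names and the prior are assumptions.

```python
import numpy as np

def gaussian2_pdf(x, mean, cov):
    """Density of a 2-D Gaussian, e.g. over normalized (r, g) color."""
    d = x - mean
    inv = np.linalg.inv(cov)
    mah = np.einsum("...i,ij,...j->...", d, inv, d)
    return np.exp(-0.5 * mah) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))

def fuse_cues(color_pix, tex_lik_fg, tex_lik_bg, fg, bg, prior_fg=0.5):
    """Pixel-wise Bayes fusion of independent color and texture cues.
    fg/bg are (mean, cov) color models; tex_lik_* are texture likelihood
    maps under the target and background hypotheses."""
    num = prior_fg * gaussian2_pdf(color_pix, *fg) * tex_lik_fg
    den = num + (1 - prior_fg) * gaussian2_pdf(color_pix, *bg) * tex_lik_bg
    return num / (den + 1e-12)   # posterior probability of target
```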

15.
This paper presents a new perspective on text area segmentation from document images, using a novel adaptive thresholding for image enhancement. Using sliding...

16.
In this paper, we propose a general framework for adaptive local thresholding based on a verification-based multithreshold probing scheme. Object hypotheses are generated by binarization using hypothetic thresholds and accepted or rejected by a verification procedure. The application-dependent verification procedure can be designed to fully utilize all relevant information about the objects of interest. In this sense, our approach can be regarded as knowledge-guided adaptive thresholding, in contrast to most algorithms known from the literature. We apply the general framework to detect vessels in retinal images. An experimental evaluation demonstrates superior performance over global thresholding and over a vessel detection method recently reported in the literature. Due to its simplicity and general nature, our novel approach is expected to be applicable to a variety of other applications.
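A minimal sketch of the probing scheme, assuming a caller-supplied `verify` predicate (for vessels, it might test elongation or contrast of a component, which is the application-dependent knowledge the abstract refers to):

```python
import numpy as np
from scipy.ndimage import label

def multithreshold_probe(img, thresholds, verify):
    """Probe several hypothetic thresholds; keep every connected
    component that the application-dependent `verify` predicate accepts."""
    accepted = np.zeros(img.shape, dtype=bool)
    for t in thresholds:
        labeled, n = label(img > t)           # hypotheses at this threshold
        for k in range(1, n + 1):
            component = labeled == k
            if verify(component, img):        # accept or reject hypothesis
                accepted |= component
    return accepted
```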

17.
The problem of coherent detection of a distributed target in compound-Gaussian clutter with inverse gamma texture is studied, and three detectors, namely the one-step generalized likelihood ratio test (GLRT), the maximum a posteriori GLRT, and the two-step GLRT, are proposed within a Bayesian architecture. These detectors turn out to have similar detection structures, with their test statistics modulated by the shape and scale parameters of the texture. Alternatively, they can be reformulated so that the test statistics depend on the scale parameter and the detection thresholds on the shape parameter. This detection structure can be seen as a matched filter with a shape-parameter-dependent threshold, like the detectors for a point target. The proposed detectors are then compared with a two-step GLRT for compound-Gaussian clutter that does not consider the texture model; their detection performance is evaluated and their robustness analyzed via Monte Carlo simulations. The results show that: (1) the three Bayesian detectors have nearly identical detection performance; (2) the detection performance fluctuates more strongly when the shape parameter or the scale parameter is small; (3) the shape parameter influences the detection performance more than the scale parameter, as it indicates the impulsiveness of the clutter.

18.
Document image binarization converts gray level images into binary images, a capability that has significantly impacted many portable devices in recent years, including PDAs and mobile camera phones. Given the limited memory and computational power of portable devices, reducing the computational complexity of an embedded system is a priority. This work presents an efficient document image binarization algorithm with low computational complexity and high performance. Integrating the advantages of global and local methods, the proposed algorithm divides the document image into several regions. A threshold surface is then constructed based on the diversity and intensity of each region to derive the binary image. Experimental results demonstrate the effectiveness of the proposed method in providing a promising binarization outcome at low computational cost.
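A hedged sketch of the region-wise idea: use a local Otsu threshold where a block has enough gray-level diversity and fall back to the global threshold elsewhere. The block size and diversity cutoff are assumptions, and the piecewise-constant surface here is a simplification of the paper's threshold surface.

```python
import numpy as np

def otsu(block):
    """Plain Otsu threshold of one uint8 block."""
    hist = np.bincount(block.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w = np.cumsum(p)                          # class-0 probability
    mu = np.cumsum(p * np.arange(256))        # class-0 cumulative mean
    between = (mu[-1] * w - mu) ** 2 / (w * (1 - w) + 1e-12)
    return int(np.argmax(between))

def binarize_with_surface(img, block=32, diversity=15):
    """Per-block threshold: local Otsu where the block has enough
    gray-level spread, otherwise the global Otsu value."""
    global_t = otsu(img)
    h, w = img.shape
    out = np.zeros_like(img)
    for r in range(0, h, block):
        for c in range(0, w, block):
            blk = img[r:r + block, c:c + block]
            t = otsu(blk) if blk.max() - blk.min() > diversity else global_t
            out[r:r + block, c:c + block] = (blk > t) * 255
    return out
```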

19.
We consider the problem of estimating an infinite-dimensional vector θ observed in Gaussian white noise. Under the condition that components of the vector have a Gaussian prior distribution that depends on an unknown parameter β, we construct an adaptive estimator with respect to β. The proposed method of estimation is based on the empirical Bayes approach.
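The empirical Bayes idea in its simplest finite, scalar-variance form (a special case, not the paper's infinite-dimensional construction): estimate the prior variance β from the marginal distribution of the observations, then plug it into the posterior-mean shrinkage rule.

```python
import numpy as np

def empirical_bayes_estimate(x, sigma2=1.0):
    """Gaussian sequence model: x_i = theta_i + eps_i, eps_i ~ N(0, sigma2),
    with prior theta_i ~ N(0, beta). Marginally Var(x_i) = beta + sigma2,
    so beta is estimated by method of moments and plugged into the
    posterior mean E[theta_i | x_i] = beta/(beta + sigma2) * x_i."""
    beta_hat = max(np.mean(x ** 2) - sigma2, 0.0)
    return beta_hat / (beta_hat + sigma2) * x
```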

20.
Black & white (B&W) air photographs can be used to produce historic land cover maps through image classification with additional texture features. In this paper we evaluate the importance of a number of parameters in the texture-based image classification process, such as the window size, angle, and distance used to produce the texture features, the number of features used, the image quantization level, and the spatial resolution. The evaluation was performed using five photographs from the 1950s. The influence of the classification method, the number of classes searched for in the images, and the post-processing tasks were also investigated. The effect of each of these parameters on classification accuracy was evaluated by cross-validation. The best parameters were selected based on the validation results, on the computational load involved in each case, and on end-user requirements. The final classification results were good (average accuracy of 85.7%, k = 0.809), and the method has proven useful for producing historic land cover maps from B&W air photographs.
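Texture features of this kind (window, distance, angle, quantization level) are typically gray-level co-occurrence statistics; a sketch using scikit-image, with the distances, angles, quantization level, and property set chosen here as assumptions:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, distances=[1, 2], angles=[0, np.pi / 2], levels=32):
    """Haralick-style co-occurrence features for one uint8 image window."""
    # Quantize 0..255 down to `levels` gray levels (controls matrix size).
    q = (window.astype(float) / 256 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=distances, angles=angles,
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```

Sliding such a window over a photograph yields a per-pixel feature vector that a standard classifier can use, which is where parameters like window size and quantization level enter the accuracy trade-off the paper evaluates.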
