Similar Literature
20 similar documents found (search time: 0 ms)
1.
In this brief, the case where the watermark is detected in a noisy interpolated version of the originally watermarked image is investigated. Polyphase decomposition is utilized at the detection side in order to enable the flexible formation of a fused image, which is appropriate for watermark detection. The optimal fused correlator, obtained by combining information from different image components, is derived through a statistical analysis of the correlation detector properties, followed by Lagrange optimization. It is shown that it is preferable to perform detection in a fused image rather than in the original image.
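The fused-correlator idea can be illustrated with a minimal numerical sketch: decompose a noisy 2x-interpolated image into its four polyphase components, correlate each with the watermark, and fuse the statistics with inverse-variance weights (a stand-in for the Lagrange-optimal weights derived in the paper; all sizes, embedding strengths, and noise levels here are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 64x64 spread-spectrum watermarked image, upsampled
# 2x by zero-order hold and corrupted by additive noise at the detector.
N = 64
watermark = rng.choice([-1.0, 1.0], size=(N, N))
host = rng.normal(0.0, 10.0, size=(N, N))
marked = host + 2.0 * watermark

big = np.kron(marked, np.ones((2, 2)))            # 2x interpolation
noisy = big + rng.normal(0.0, 5.0, size=big.shape)

# Polyphase decomposition: the four sub-lattices of the interpolated image.
comps = [noisy[i::2, j::2] for i in (0, 1) for j in (0, 1)]

# Per-component correlation detector.
corrs = np.array([np.mean(c * watermark) for c in comps])

# Fused statistic with inverse-variance weights (components here are
# identically distributed, so the weights come out nearly equal).
variances = np.array([np.var(c) / watermark.size for c in comps])
w = (1.0 / variances) / np.sum(1.0 / variances)
fused = np.sum(w * corrs)

print(f"per-component correlations: {np.round(corrs, 3)}")
print(f"fused detection statistic:  {fused:.3f}")
```

For a watermark embedded with strength 2, the fused statistic concentrates near 2, while an unwatermarked image would give a statistic near 0.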

2.
Edge detection in noisy images by neuro-fuzzy processing
A novel neuro-fuzzy (NF) operator for edge detection in digital images corrupted by impulse noise is presented. The proposed operator is constructed by combining a desired number of NF subdetectors with a postprocessor. Each NF subdetector in the structure evaluates a different pixel neighborhood relation, so the number of NF subdetectors may be varied to obtain the desired edge detection performance. Internal parameters of the NF subdetectors are adaptively optimized by training on simple artificial images. The performance of the proposed edge detector is evaluated on different test images and compared with popular edge detectors from the literature. Simulation results indicate that the proposed NF operator outperforms the competing edge detectors on digital images corrupted by impulse noise.

3.
Contrast-independent curvilinear structure detection in biomedical images
Many biomedical applications require detection of curvilinear structures in images and would benefit from automatic or semiautomatic segmentation to allow high-throughput measurements. Here, we propose a contrast-independent approach to identify curvilinear structures based on oriented phase congruency, i.e., the phase congruency tensor (PCT). We show that the proposed method is largely insensitive to intensity variations along the curve and provides successful detection within noisy regions. The performance of the PCT is evaluated by comparing it with state-of-the-art intensity-based approaches on both synthetic and real biological images.

4.
Human skin detection in images is desirable in many practical applications, e.g., human–computer interaction and adult-content filtering. However, existing methods mainly suffer from confusing backgrounds in real-world images. In this paper, we address this issue by exploring and combining several human skin properties, i.e., color, texture, and region. First, images are divided into superpixels, and robust skin seeds and background seeds are acquired through the color and texture properties of skin. Then we combine the color, region, and texture properties of skin in a novel skin color and texture based graph cuts (SCTGC) formulation to acquire the final skin detection results. Comprehensive and comparative experiments show that the proposed method achieves promising performance and outperforms many state-of-the-art methods on publicly available challenging datasets containing a large proportion of hard images.

5.
The aim of this paper is to provide a theoretical setup and a mathematical model for the problem of image reconstruction. The original image belongs to a family of two-dimensional (2-D), possibly discontinuous functions, but is blurred by a Gaussian point spread function introduced by the measurement device. In addition, the blurred image is corrupted by additive noise. We propose a preprocessing of the data which enhances the contribution of the discontinuous component of the signal over that of the regular part, while damping down the effect of noise. In particular, we suggest convolving the data with a kernel defined as the second-order derivative of a Gaussian spread function. Finally, the image reconstruction is embedded in an optimization framework, in which convexity and compactness properties of the admissible set play a fundamental role. We provide an instance of a class of admissible sets which is relevant from an application point of view while featuring the desired properties.
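The proposed preprocessing can be sketched in 1-D (the paper works in 2-D; all parameter values here are illustrative): a step signal is blurred by a Gaussian PSF, corrupted by noise, and then convolved with the second-order derivative of a Gaussian. The response crosses zero steeply at the discontinuity while flat regions stay near zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_2nd_deriv(sigma, radius):
    # Second derivative of exp(-t^2 / (2 sigma^2)).
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))
    return (t**2 / sigma**4 - 1.0 / sigma**2) * g

n = 200
signal = np.where(np.arange(n) < 100, 0.0, 1.0)          # jump at index 100
psf = np.exp(-np.arange(-10, 11) ** 2 / (2 * 3.0**2))
psf /= psf.sum()                                          # Gaussian PSF
noisy = np.convolve(signal, psf, mode="same") + rng.normal(0.0, 0.02, n)

response = np.convolve(noisy, gauss_2nd_deriv(2.0, 8), mode="same")

# Steepest zero-crossing of the response, away from the zero-padded borders
# of np.convolve where spurious steps appear.
cross = np.where(np.diff(np.sign(response)) != 0)[0]
cross = cross[(cross > 20) & (cross < n - 21)]
edge = cross[np.argmax(np.abs(np.diff(response))[cross])]
print(f"estimated edge location: {edge}")
```

The zero-crossing of the response localizes the discontinuity even though the signal itself was both blurred and noisy.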

6.
Robust detection of skew in document images

7.
To resolve the tension between detection accuracy and noise suppression in edge detection, a new image edge detection algorithm is proposed. The algorithm splits the detection window into two subregions along each of four directions (0°, 45°, 90°, and 135°). It first counts the impulse-noise pixels in each 3×3 detection window; if there are more than three, the window is enlarged to 5×5. For the two subregions of each direction, the average gray value of the non-noise pixels is computed; the absolute difference of the two averages serves as the directional gradient of the window, from which the gradient of the center pixel is obtained. The gradient image is then thinned with an improved non-maximum suppression method and the edges are extracted. Experimental results show that the detected edges are strongly directional and thin, and that the algorithm not only suppresses impulse noise of varying severity but also attenuates Gaussian noise to some degree, giving it strong adaptability.
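The procedure above can be sketched as follows. This is a simplified version: fixed 3×3 windows only (the adaptive 5×5 enlargement and the improved non-maximum suppression step are omitted), impulse pixels are taken to be exact 0/255 values, and the toy image and threshold are illustrative.

```python
import numpy as np

def directional_gradient(img):
    h, w = img.shape
    grad = np.zeros((h, w))
    # For each direction, the two subregions of the 3x3 window: the pixel
    # triples on either side of the line through the center.
    splits = {
        0:   ([(-1, -1), (-1, 0), (-1, 1)], [(1, -1), (1, 0), (1, 1)]),
        45:  ([(-1, 0), (-1, 1), (0, 1)],   [(0, -1), (1, -1), (1, 0)]),
        90:  ([(-1, -1), (0, -1), (1, -1)], [(-1, 1), (0, 1), (1, 1)]),
        135: ([(-1, -1), (-1, 0), (0, -1)], [(0, 1), (1, 0), (1, 1)]),
    }
    noise = (img == 0) | (img == 255)          # impulse-noise mask
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best = 0.0
            for side_a, side_b in splits.values():
                means = []
                for side in (side_a, side_b):
                    vals = [img[y + dy, x + dx] for dy, dx in side
                            if not noise[y + dy, x + dx]]
                    means.append(np.mean(vals) if vals else 0.0)
                # Directional gradient: absolute difference of the two
                # non-noise subregion means; keep the largest direction.
                best = max(best, abs(means[0] - means[1]))
            grad[y, x] = best
    return grad

# Toy image: dark left half, bright right half, plus two impulse pixels.
img = np.full((16, 16), 50.0)
img[:, 8:] = 200.0
img[3, 3] = 255.0
img[10, 12] = 0.0

g = directional_gradient(img)
edges = g > 75.0
print("edge columns:", sorted(set(int(c) for c in np.nonzero(edges)[1])))
```

The impulse pixels are excluded from the subregion means, so they produce no spurious edge response, while the true vertical edge between the two halves is detected.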

8.
Robust simultaneous detection of coronary borders in complex images
Visual estimation of coronary obstruction severity from angiograms suffers from poor inter- and intraobserver reproducibility and is often inaccurate. In spite of the widely recognized limitations of visual analysis, automated methods have not found widespread clinical use, in part because they too frequently fail to accurately identify vessel borders. The authors have developed a robust method for simultaneous detection of left and right coronary borders that is suitable for analysis of complex images with poor contrast, nearby or overlapping structures, or branching vessels. The reliability of the simultaneous border detection method and that of the authors' previously reported conventional border detection method were tested on 130 complex images, selected because conventional automated border detection might be expected to fail. Conventional analysis failed to yield acceptable borders in 65/130 (50%) of images. Simultaneous border detection was much more robust (p < 0.001) and failed in only 15/130 (12%) of complex images. Simultaneous border detection identified stenosis diameters that correlated significantly better with observer-derived stenosis diameters than did diameters obtained with conventional border detection (p < 0.001). Simultaneous detection of left and right coronary borders is highly robust and has substantial promise for enhancing the utility of quantitative coronary angiography in the clinical setting.

9.
Noise degrades the performance of any image compression algorithm. This paper studies the effect of noise on lossy image compression, considering Gaussian, Poisson, and film-grain noise. To reduce the effect of noise on compression, the distortion is measured with respect to the original image rather than the input of the coder. Results from noisy source coding are then used to design the optimal coder. In the minimum-mean-square-error (MMSE) sense, this is equivalent to an MMSE estimator followed by an MMSE coder. The coders for the Poisson-noise and film-grain-noise cases are derived and their performance is studied. The effect of this preprocessing step is also studied using standard coders such as JPEG. As demonstrated, higher quality is achieved at lower bit rates.
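The estimate-then-code factorization can be demonstrated for the simple Gaussian case, a minimal sketch assuming an i.i.d. Gaussian source and noise: the MMSE (Wiener) estimator is applied before a plain uniform quantizer, and distortion is measured against the clean source rather than the coder input.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100_000
sig_x, sig_n = 1.0, 0.5
x = rng.normal(0.0, sig_x, n)            # clean source
y = x + rng.normal(0.0, sig_n, n)        # noisy observation

def quantize(v, step=0.25):
    # Stand-in "coder": a uniform scalar quantizer.
    return step * np.round(v / step)

# MMSE estimator: Wiener gain for i.i.d. Gaussian signal plus noise.
gain = sig_x**2 / (sig_x**2 + sig_n**2)
x_hat = gain * y

mse_direct = np.mean((x - quantize(y)) ** 2)      # code the noisy data
mse_mmse = np.mean((x - quantize(x_hat)) ** 2)    # estimate, then code

print(f"MSE, quantizing noisy input : {mse_direct:.4f}")
print(f"MSE, MMSE estimate then code: {mse_mmse:.4f}")
```

Quantizing the raw noisy data leaves the full noise variance in the reconstruction, while the estimate-then-code chain approaches the MMSE floor of roughly sig_x²·sig_n²/(sig_x²+sig_n²) plus the quantization error.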

10.
Noise degrades the performance of any image compression algorithm. However, at very low bit rates, image coders effectively filter noise that may be present in the image, thus enabling the coder to operate closer to the noise-free case. Unfortunately, at these low bit rates the quality of the compressed image is reduced and very distinctive coding artifacts occur. This paper proposes a combined restoration of the compressed image from both the artifacts introduced by the coder and the additive noise. The proposed approach is applied to images corrupted by data-dependent Poisson noise and to images corrupted by film-grain noise when compressed using a block transform coder such as JPEG. The approach has proved effective in terms of visual quality and peak signal-to-noise ratio (PSNR) when tested on simulated and real images.

11.
A robust quantiser design for image coding is presented. The proposed quantiser can be viewed as the combination of a quantiser, a variable-length code (VLC) coder, and a channel coder. Simulation results show that the proposed scheme exhibits graceful distortion behaviour within the designed noise range.

12.
We address the problem of robust coding in which the signal information should be preserved in spite of intrinsic noise in the representation. We present a theoretical analysis for 1- and 2-D cases and characterize the optimal linear encoder and decoder in the mean-squared error sense. Our analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions in order to achieve robustness. We also present numerical solutions of robust coding for high-dimensional image data, demonstrating that these codes are substantially more robust than other linear image coding methods such as PCA, ICA, and wavelets.
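A small numerical sketch of the robust linear coding setting (dimensions, covariance, and noise level are illustrative, and a random under-complete encoder stands in for the paper's optimized one): data are encoded linearly, noise is added in the representation, and the MSE-optimal (Wiener) linear decoder is compared with a naive pseudoinverse decoder that ignores the noise.

```python
import numpy as np

rng = np.random.default_rng(4)

d, k, n = 8, 4, 50_000                   # data dim, coding units, samples
A = rng.normal(size=(d, d))
C = A @ A.T / d                          # data covariance
X = rng.multivariate_normal(np.zeros(d), C, size=n).T   # d x n samples

W = rng.normal(size=(k, d)) / np.sqrt(d)  # under-complete linear encoder
sigma = 0.5                               # intrinsic representation noise
R = W @ X + sigma * rng.normal(size=(k, n))

# Wiener (MSE-optimal) linear decoder for this encoder and noise level.
D_opt = C @ W.T @ np.linalg.inv(W @ C @ W.T + sigma**2 * np.eye(k))
# Naive decoder: pseudoinverse of W, ignoring the representation noise.
D_naive = np.linalg.pinv(W)

mse_opt = np.mean((X - D_opt @ R) ** 2)
mse_naive = np.mean((X - D_naive @ R) ** 2)
print(f"Wiener decoder MSE: {mse_opt:.4f}")
print(f"pinv decoder MSE  : {mse_naive:.4f}")
```

The pseudoinverse decoder amplifies the representation noise, while the noise-aware decoder trades a little bias for much lower overall error; the paper's full analysis optimizes the encoder jointly as well.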

13.
Fuzzy operator for sharpening of noisy images
Russo, F.; Ramponi, G. Electronics Letters, 1992, 28(18), 1715-1717
An operator is presented which is able to sharpen the details of an image by applying fuzzy reasoning rules to the input luminance values. The particular processing rules used allow the operator to be insensitive to noise.

14.
An edge extraction technique for noisy images
We present an algorithm for extracting edges from noisy images. Our method uses an unsupervised learning approach for local threshold computation by means of Pearson's method for mixture density identification. We tested the technique on computer-generated images corrupted with artificial noise and on an actual Thallium-201 heart image, and the results show that it has potential for use on noisy images.
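The threshold-from-mixture idea can be sketched as follows. The paper fits the mixture with Pearson's method of moments; for brevity this sketch substitutes a few EM iterations for a 1-D two-Gaussian mixture of "background" and "edge" gradient magnitudes (the data and component parameters are synthetic), then places the threshold where the two component posteriors are equal.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic gradient magnitudes: a background mode and an edge mode.
g = np.concatenate([np.abs(rng.normal(0.0, 1.0, 5000)),   # background
                    rng.normal(8.0, 1.5, 500)])           # edge responses

# EM for a 1-D two-Gaussian mixture (stand-in for Pearson's method).
mu = np.array([g.min(), g.max()])
var = np.array([g.var(), g.var()])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: per-sample component responsibilities.
    p = pi / np.sqrt(2 * np.pi * var) * np.exp(-(g[:, None] - mu) ** 2 / (2 * var))
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: update weights, means, variances.
    nk = r.sum(axis=0)
    pi = nk / len(g)
    mu = (r * g[:, None]).sum(axis=0) / nk
    var = (r * (g[:, None] - mu) ** 2).sum(axis=0) / nk

# Threshold: the point between the means with equal component posteriors,
# located by a dense scan rather than solving the quadratic analytically.
ts = np.linspace(mu.min(), mu.max(), 1000)
post = pi / np.sqrt(2 * np.pi * var) * np.exp(-(ts[:, None] - mu) ** 2 / (2 * var))
thresh = ts[np.argmin(np.abs(post[:, 0] - post[:, 1]))]
print(f"means: {np.round(np.sort(mu), 2)}, threshold: {thresh:.2f}")
```

Pixels with gradient magnitude above the learned threshold are labeled edges; computing the fit over local windows instead of globally gives the local thresholds described in the abstract.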

15.
The paper describes a new approach to pattern recognition in synthetic aperture radar (SAR) images. A visual analysis of the images provided by NASA's Magellan mission to Venus has revealed a number of zones showing polygonal-shaped faults on the surface of the planet. The goal of the paper is to provide a method to automate the identification of such zones. The high level of noise in SAR images and its multiplicative nature make automated image analysis difficult and conventional edge detectors, like those based on gradient images, inefficient. We present a scheme based on an improved watershed algorithm and a two-scale analysis. The method extracts potential edges in the SAR image, analyzes the patterns obtained, and decides whether or not the image contains a "polygon area". This scheme can also be applied to other SAR or visual images, for instance in observations of Mars and of Jupiter's satellite Europa.

16.
This article proposes a method to design a vector quantizer (VQ) for robust performance under noisy channel conditions. By re-optimizing the quantizer at progressively lower levels of assumed channel noise, the design is less susceptible to poor local optima. The method is applied to (1) channel-optimized VQ design and (2) index assignment for a source-optimized VQ. For both problems, we demonstrate substantial performance improvements over commonly used techniques.
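A sketch of the annealed design idea for a channel-optimized scalar quantizer over a binary symmetric channel: starting from a noise-blind (Lloyd) codebook, the quantizer is re-optimized while the assumed crossover probability is lowered step by step toward the design value. The source distribution, codebook size, and annealing schedule here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

x = np.concatenate([rng.normal(-4, 0.5, 4000), rng.normal(4, 0.5, 4000)])
K, bits = 4, 2

def channel(eps):
    # P[i, j] = P(receive index j | send index i), independent bit flips.
    h = np.array([[bin(i ^ j).count("1") for j in range(K)] for i in range(K)])
    return (eps ** h) * ((1.0 - eps) ** (bits - h))

def encode(x, c, P):
    # Pick the index minimizing the channel-expected squared error.
    return np.argmin(P @ ((x[:, None] - c) ** 2).T, axis=0)

def update(x, idx, P):
    # Centroid rule weighted by the channel transition probabilities.
    counts = np.bincount(idx, minlength=K).astype(float)
    sums = np.bincount(idx, weights=x, minlength=K)
    return (P.T @ sums) / (P.T @ counts)

def distortion(x, c, idx, P):
    return np.mean(np.sum(P[idx] * (x[:, None] - c) ** 2, axis=1))

# Noise-blind Lloyd design (identity channel).
c = np.array([-6.0, -2.0, 2.0, 6.0])
for _ in range(30):
    c = update(x, encode(x, c, np.eye(K)), np.eye(K))

eps_design = 0.1
P_d = channel(eps_design)
d_blind = distortion(x, c, encode(x, c, np.eye(K)), P_d)

# Annealed channel-optimized refinement of the same codebook.
c_r = c.copy()
for eps in (0.2, 0.15, eps_design):
    P = channel(eps)
    for _ in range(20):
        c_r = update(x, encode(x, c_r, P), P)

d_robust = distortion(x, c_r, encode(x, c_r, P_d), P_d)
print(f"noise-blind design distortion: {d_blind:.3f}")
print(f"annealed robust design       : {d_robust:.3f}")
```

The noise-blind codebook keeps distinct far-apart codewords whose indices are easily confused by the channel, while the annealed channel-optimized design pulls its codewords toward configurations whose likely index errors are cheap.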

17.
Automatic image processing methods are a prerequisite to efficiently analyze the large amount of image data produced by computed tomography (CT) scanners during cardiac exams. This paper introduces a model-based approach for the fully automatic segmentation of the whole heart (four chambers, myocardium, and great vessels) from 3-D CT images. Model adaptation is done by progressively increasing the degrees of freedom of the allowed deformations, which improves convergence as well as segmentation accuracy. The heart is first localized in the image using a 3-D implementation of the generalized Hough transform. Pose misalignment is corrected by matching the model to the image using a global similarity transformation. The complex initialization of the multicompartment mesh is then addressed by assigning an affine transformation to each anatomical region of the model. Finally, a deformable adaptation is performed to accurately match the boundaries of the patient's anatomy. A mean surface-to-surface error of 0.82 mm was measured in a leave-one-out quantitative validation carried out on 28 images. Moreover, the piecewise affine transformation introduced for mesh initialization and adaptation characterizes interphase and interpatient shape variability better than the commonly used principal component analysis.

18.
A system for scene-oriented hierarchical classification of blurry and noisy images is proposed. It attempts to simulate important features of human visual perception. The underlying approach is based on three strategies: extraction of essential signatures captured from a global context, simulating the global pathway; highlight detection based on local conspicuous features of the reconstructed image, simulating the local pathway; and hierarchical classification of extracted features using probabilistic techniques. The hierarchical classification uses input from both the local and global pathways. Visual context is exploited by a combination of Gabor filtering with principal component analysis. In parallel, a pseudo-restoration process is applied together with an affine invariant approach to improve the accuracy of the detection of local conspicuous features. Subsequently, the local conspicuous features and the global essential signature are combined and clustered by a Monte Carlo approach. Finally, the clustered features are fed to a self-organizing tree algorithm to generate the final hierarchical classification results. Selected representative results of a comprehensive experimental evaluation validate the proposed system.

19.
We present a regularization method that employs a cross-entropy functional and addresses three issues: complex-valued data, prior information, and noise mitigation. The basic model of this approach is similar to that used in conventional maximum a posteriori analysis, but allows a more general relationship between the image and its configuration entropy than is usually employed.
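A generic functional of this type, written here for the real nonnegative case in assumed notation (data $g$, blur operator $H$, image $f$, prior model image $m$; the paper's formulation further generalizes this to complex-valued data), minimizes a data-fit term plus the cross-entropy of $f$ relative to $m$:

```latex
\min_{f \ge 0} \; J(f) \;=\; \frac{1}{2\sigma^2}\,\lVert g - Hf \rVert^2
\;+\; \lambda \sum_k \Bigl[\, f_k \ln \frac{f_k}{m_k} \;-\; f_k \;+\; m_k \Bigr]
```

The first term enforces consistency with the noisy measurements, while the cross-entropy term pulls the reconstruction toward the prior model $m$; the regularization weight $\lambda$ balances noise mitigation against fidelity.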

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号