20 similar documents found; search took 31 ms
1.
An Adaptive Bilateral Filtering Method for Images    Total citations: 15 (self-citations: 0, others: 15)
An image-smoothing method based on bilateral filtering is proposed: an adaptive filtering process that removes high-frequency noise while preserving the high-frequency edge information of the image according to its brightness variation. The method refines the weights of a conventional Gaussian filter into the product of the Gaussian function and the image's brightness information, and the refined weights are then convolved with the image. The filter thus takes brightness into account, suppressing noise while preserving edges as far as possible. Because bilateral filtering lets the filter weights change with image brightness, the filtering process is adaptive.
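A minimal sketch of the bilateral weighting described in the abstract above: a spatial Gaussian multiplied by a Gaussian on intensity differences, then normalized convolution. Grayscale input and all parameter names are my own assumptions, not from the paper.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Edge-preserving smoothing: Gaussian spatial weights are
    multiplied by Gaussian weights on intensity differences, so
    pixels across an edge contribute little to the average."""
    h, w = img.shape
    pad = np.pad(np.asarray(img, dtype=float), radius, mode="reflect")
    out = np.empty((h, w))
    # Precompute the spatial Gaussian kernel (fixed per window).
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            center = pad[i + radius, j + radius]
            # Range weight: penalize intensity differences (edges).
            rng = np.exp(-(win - center)**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out

# A flat region is left essentially unchanged, while a sharp step
# edge survives far better than under plain Gaussian smoothing.
flat = np.full((8, 8), 100.0)
step = np.hstack([np.zeros((8, 4)), np.full((8, 4), 200.0)])
```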
2.
3.
Positron emission tomography (PET) is one of the key molecular imaging modalities in medicine and biology. Penalized iterative image reconstruction algorithms frequently used in PET are based on maximum-likelihood (ML) and maximum a posteriori (MAP) estimation techniques. The ML algorithm produces noisy artifacts, whereas the MAP algorithm eliminates them by using available prior information in the reconstruction process. MAP-based algorithms, however, fail to determine the density class in the reconstructed image and hence penalize pixels irrespective of the density class and of the strength of interaction between nearest neighbors. A Hebbian neural learning scheme is proposed to model the nature of interpixel interaction and produce artifact-free, edge-preserving reconstructions. A key motivation of the proposed approach is to avoid the oversmoothing across edges that is often seen with MAP algorithms. It is assumed that local correlation plays a significant role in PET image reconstruction, and that proper modeling of the correlation weight (which defines the strength of interpixel interaction) is essential to generate artifact-free reconstructions. The Hebbian learning-based approach modifies the interaction weight by adding a small correction proportional to the product of the input signal (neighborhood pixels) and the output signal. Quantitative analysis shows that the Hebbian adaptive weight-adjustment approach produces better reconstructed images than conventional ML and MAP-based algorithms in PET image reconstruction.
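The Hebbian correction rule described in the abstract above, a small increment proportional to the product of input and output signals, can be sketched as follows. This is only the generic update; the learning rate, the neighborhood layout, and all names are assumptions of mine, not the paper's.

```python
import numpy as np

def hebbian_update(weights, neighbors, center, eta=1e-3):
    """One Hebbian correction step: each interaction weight gets a
    small increment proportional to the product of the input signal
    (a neighborhood pixel) and the output signal (the center pixel),
    so strongly co-active pixel pairs become more strongly coupled."""
    return weights + eta * neighbors * center

w = np.full(4, 0.25)                    # 4-neighbor interaction weights
neigh = np.array([1.0, 1.0, 0.1, 0.1])  # two neighbors strongly co-active
w2 = hebbian_update(w, neigh, center=1.0)
```

After the update, the weights toward the two co-active neighbors have grown faster than the others, which is the adaptive interpixel coupling the reconstruction relies on.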
4.
Median-Nonlinear Diffusion Filtering of Flash CCD Images    Total citations: 3 (self-citations: 0, others: 3)
Based on the characteristics of flash CCD images, a median-nonlinear diffusion filtering (MNDF) method is proposed. The method uses a median pre-filter to estimate the true edges of the image and then performs nonlinear diffusion filtering by solving a partial differential equation (PDE), combining the strengths of median filtering and nonlinear diffusion to better remove noise while protecting edges. Experimental results show that when Gaussian and impulse noise are both present, MNDF outperforms the P-M and Catte schemes: the SNR improvement factor is 3-5 times higher and the mean squared error is 1.3-2.7 times lower. The method achieved very good denoising results on flash-radiography CCD images while preserving edge information.
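The idea above, estimate edges from a median-prefiltered image, then diffuse the noisy image with edge-stopping coefficients taken from that estimate, can be sketched as one explicit Perona-Malik-style step. The step size, edge-stopping function, and periodic boundaries are my own simplifications, not the paper's exact scheme.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (periodic boundaries; fine for a sketch)."""
    shifts = [np.roll(np.roll(img, di, 0), dj, 1)
              for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    return np.median(np.stack(shifts), axis=0)

def mndf_step(img, k=20.0, dt=0.2):
    """One explicit nonlinear-diffusion step in the spirit of
    Perona-Malik, with the edge-stopping coefficients computed from a
    median-prefiltered guide image so that impulse noise is not
    mistaken for an edge (the core idea of MNDF)."""
    guide = median3x3(img)

    def g(d):                              # edge-stopping function
        return np.exp(-(d / k) ** 2)

    total = np.zeros_like(img)
    for axis, shift in ((0, -1), (0, 1), (1, -1), (1, 1)):
        d_img = np.roll(img, shift, axis) - img      # neighbor difference
        d_gui = np.roll(guide, shift, axis) - guide  # edge estimate
        total += g(d_gui) * d_img
    return img + dt * total

rng = np.random.default_rng(0)
noisy = rng.normal(100.0, 5.0, (32, 32))
noisy[10, 10] = 255.0    # one impulse on top of the Gaussian noise
out = mndf_step(noisy)   # the impulse shrinks: the median guide hides it
```

Because the guide image has already removed the impulse, the diffusion coefficients around it stay close to 1 and the spike is averaged away rather than protected as a false edge.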
5.
Chung-Bin Wu Bin-Da Liu Jar-Ferr Yang 《IEEE transactions on instrumentation and measurement》2003,52(3):780-784
In this paper, a simple fuzzy-based algorithm to remove impulse noise from images is proposed. To support real-time applications, a filter architecture that combines fuzzy noise detection and noise filtering is also designed. Simulation results show that, with low computational complexity, the proposed filters effectively remove impulse noise.
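The detect-then-filter structure described above can be sketched with a crude stand-in for the fuzzy detector: flag a pixel as impulse noise when it deviates strongly from its local median, and replace only flagged pixels. The threshold rule is my substitution for the paper's fuzzy rules, and all names are assumptions.

```python
import numpy as np

def detect_and_filter(img, thresh=60.0):
    """Detect-then-filter sketch: a pixel is flagged as impulse noise
    when it deviates strongly from its 3x3 median, and only flagged
    pixels are replaced, leaving noise-free pixels untouched."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    med = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            med[i, j] = np.median(pad[i:i + 3, j:j + 3])
    noisy_mask = np.abs(img - med) > thresh   # detection stage
    out = img.copy()
    out[noisy_mask] = med[noisy_mask]         # filtering stage
    return out, noisy_mask

img = np.full((8, 8), 128.0)
img[2, 2], img[5, 5] = 255.0, 0.0   # one salt and one pepper pixel
out, mask = detect_and_filter(img)  # exactly those two pixels are fixed
```

Separating detection from filtering is what keeps clean pixels untouched, which is the main advantage over applying a median filter everywhere.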
6.
7.
R. Pugalenthi A. Sheryl Oliver M. Anuradha 《International journal of imaging systems and technology》2020,30(4):1119-1131
Noise filtering performance in medical images is improved using a neuro-fuzzy network built from a postprocessor and two neuro-fuzzy (NF) filters. The Sugeno-type system is known to be less accurate during impulse-noise reduction. In this paper, we propose an improved firefly-algorithm-based hybrid neuro-fuzzy filter, applied in both NF filters, to improve noise-reduction performance. The proposed noise-reduction system combines the advantages of neural, fuzzy, and firefly algorithms. In addition, an improved version of the firefly algorithm, called the searching-diversity-based particle swarm firefly algorithm, is used to reduce local trapping and to determine the optimal shape of the membership functions in the fuzzy system. Experimental results show that the proposed filter effectively reduces impulse noise in medical images across different impulse-noise density levels.
8.
Image denoising is an essential image processing problem that remains difficult to address. In this study, two image denoising algorithms based on fractional calculus operators are proposed. The first uses the convolution of the covariance of fractional Gaussian fields with the fractional sinc of order α (FS); the second uses its convolution with the fractional differential Heaviside function (FDHS), which is the limit of FS. In the proposed algorithms, the noisy image is processed blockwise: each pixel is convolved with mask windows in four directions, and the final filtered image, based on either FS or FDHS, is obtained by averaging the magnitudes of the four convolution results for each filter mask window. The outcomes are evaluated by visual perception and peak signal-to-noise ratio (PSNR). Experiments prove the effectiveness of the proposed algorithms in removing Gaussian and speckle noise: FS and FDHS achieved average PSNRs of 28.88 and 28.26 dB, respectively, for Gaussian noise, outperforming the Gaussian and Wiener filters.
9.
Objective: To address the difficulty of removing the many outliers in images corrupted by mixed Gaussian-impulse noise, a blind source separation method based on convex-hull optimization is proposed. Method: Both the mixed noise and the original image are treated as unknown source signals. A blind source separation model is built from the additive relationship between the mixed noise and the image content in the noisy image, and the affine hull of the source signals (the extreme points of the convex hull) is constructed by convex-hull optimization. By minimizing the projection error from the affine hull onto the convex hull (the noisy image), the two source signals, mixed noise and original image, are recovered, thereby removing the mixed noise and restoring the image. Results: Experiments show that regardless of the strength of the mixed Gaussian-impulse noise, the peak signal-to-noise ratio and mean structural similarity after denoising are above 39.9129 dB and 0.9, respectively. Conclusion: The experimental data confirm that the method effectively removes mixed Gaussian-impulse noise and restores the original image from a blind-source-separation perspective.
10.
When Gaussian noise and salt-and-pepper noise are both present in an image, mean filtering or median filtering alone can hardly achieve the best result. After analyzing the characteristics of the noise and the strengths of the various filters, a neural-network-based hybrid filtering and fusion algorithm is proposed: first, a probabilistic neural network is built to detect salt-and-pepper and Gaussian noise points, which are removed by median and mean filtering respectively; then a radial-basis-function (RBF) neural network is built, and the trained RBF network fuses the two differently filtered images to output the final fused image. Matlab simulation results show that the algorithm effectively removes the mixed noise while preserving image edges and details well, making it an effective method.
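The classify-then-filter pipeline above can be sketched with a rule-based stand-in: treat extreme values (0 or 255) as salt-and-pepper and replace them with the local median, and treat everything else as Gaussian-corrupted and replace it with the local mean. The paper uses a probabilistic NN for the classification and an RBF network for the fusion; this simple rule is only an illustration, and all names are mine.

```python
import numpy as np

def hybrid_filter(img):
    """Per-pixel branch: extreme values (0 or 255) go through the
    median (impulse branch); the rest go through the mean (Gaussian
    branch). A rule-based sketch of the PNN/RBF pipeline."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            if img[i, j] in (0.0, 255.0):   # impulse branch
                out[i, j] = np.median(win)
            else:                           # Gaussian branch
                out[i, j] = win.mean()
    return out

img = np.full((8, 8), 100.0)
img[2, 2] = 255.0          # a salt pixel in an otherwise clean region
out = hybrid_filter(img)   # the impulse is replaced by the local median
```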
11.
An Image-Denoising Algorithm Combining Wavelet Transform and Wiener Filtering    Total citations: 2 (self-citations: 1, others: 1)
Objective: To effectively remove salt-and-pepper noise, Gaussian noise, and even mixed noise from images, a new denoising algorithm is proposed that combines the strengths of Wiener filtering with the characteristics of the wavelet sub-bands. Method: The noisy image is first wavelet-transformed into one low-frequency component and three mid/high-frequency components; adaptive Wiener filtering is applied to the low-frequency component, the Canny operator extracts edges from the three mid/high-frequency components, and the four processed components are reconstructed into the denoised image. Results: Simulations show good denoising performance on common scanner-introduced noise, with PSNR values above 20 dB in all cases. For Gaussian and mixed noise in particular, the new algorithm's PSNR is 1-8 dB higher than Wiener filtering, soft-threshold wavelet filtering, and the algorithm of reference [9]. Conclusion: The algorithm combining wavelet transform and Wiener filtering removes several types of image noise well and is an excellent denoising method.
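The decompose / filter-per-band / reconstruct pipeline above can be sketched with a hand-rolled one-level Haar transform: Wiener-shrink the low band, hard-threshold the detail bands, then invert. Global Wiener shrinkage and a 3-sigma threshold stand in for the paper's adaptive Wiener filter and Canny edge pass; all names are my own.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: approximation LL plus the three
    detail bands LH, HL, HH (image sides must be even)."""
    lo = (img[0::2, :] + img[1::2, :]) / 2
    hi = (img[0::2, :] - img[1::2, :]) / 2
    return ((lo[:, 0::2] + lo[:, 1::2]) / 2,   # LL
            (lo[:, 0::2] - lo[:, 1::2]) / 2,   # LH
            (hi[:, 0::2] + hi[:, 1::2]) / 2,   # HL
            (hi[:, 0::2] - hi[:, 1::2]) / 2)   # HH

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2 (perfect reconstruction)."""
    lo = np.empty((ll.shape[0], ll.shape[1] * 2))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    out = np.empty((lo.shape[0] * 2, lo.shape[1]))
    out[0::2, :], out[1::2, :] = lo + hi, lo - hi
    return out

def denoise(img, sigma):
    """Wiener-shrink the low band, hard-threshold the detail bands."""
    ll, lh, hl, hh = haar_dwt2(img)
    m, v = ll.mean(), ll.var()
    noise_var = sigma ** 2 / 4          # Haar LL averages 4 pixels
    gain = max(v - noise_var, 0.0) / v if v > 0 else 0.0
    ll = m + gain * (ll - m)            # pointwise Wiener shrinkage
    t = 3 * sigma / 2                   # detail-band noise std is sigma/2
    bands = [np.where(np.abs(b) > t, b, 0.0) for b in (lh, hl, hh)]
    return haar_idwt2(ll, *bands)
```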
12.
Graph filtering, founded on the theory of graph signal processing, has proven a useful tool for image denoising. Most graph filtering methods focus on learning an ideal lowpass filter to remove noise: clean images are restored from noisy ones by retaining the image components in the low graph-frequency bands. However, this lowpass filter has limited ability to separate low-frequency noise from the clean image, which makes the denoising procedure less effective. To address this issue, we propose an adaptive weighted graph filtering (AWGF) method to replace the traditional ideal lowpass filter design. Specifically, we reassess the existing low-rank denoising method with adaptive regularizer learning (ARLLR) from the viewpoint of graph filtering. A shrinkage approach is then presented in the graph frequency domain, where the components of the noisy image are adaptively attenuated in each band according to their component significance. As a result, the proposed graph filtering is more explainable and better suited to denoising. Meanwhile, we show that a graph filter under a subspace-representation constraint is employed in the ARLLR method, so ARLLR can be treated as a special form of graph filtering. This not only enriches the theory of graph filtering but also builds a bridge from low-rank methods to graph filtering methods. In the experiments, we instantiate AWGF with a graph filter generated by the classical graph Laplacian matrix. The results show that our method achieves denoising performance comparable to several state-of-the-art denoising methods.
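The ideal lowpass graph filter that the abstract above contrasts against can be sketched directly: project a graph signal onto the eigenvectors of the graph Laplacian with the smallest eigenvalues. This illustrates the baseline, not AWGF itself, whose band weights are learned adaptively; the toy graph and names are my own.

```python
import numpy as np

def graph_lowpass(signal, adjacency, keep=2):
    """Project a graph signal onto its `keep` lowest graph-frequency
    components: eigenvectors of the combinatorial Laplacian L = D - W
    with the smallest eigenvalues. This is the ideal lowpass filter;
    AWGF instead weights every band rather than keeping or discarding
    bands outright."""
    deg = np.diag(adjacency.sum(axis=1))
    lap = deg - adjacency
    evals, evecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    coeffs = evecs.T @ signal            # graph Fourier transform
    coeffs[keep:] = 0.0                  # drop high graph frequencies
    return evecs @ coeffs                # inverse transform

# Path graph on 4 nodes: a constant signal lies entirely in the
# lowest band (eigenvalue 0), so the filter leaves it intact, while
# any varying signal collapses to its mean when only that band is kept.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
```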
13.
14.
15.
The presence of the zero-order diffraction and a conjugate image in digital holography substantially degrades the quality of the reconstructed image. In this paper, a novel numerical method to eliminate the zero-order diffraction and the conjugate image is presented. The whole process needs only one hologram and a complex finite impulse response (FIR) digital filter. The numerical elimination is simple: it filters the hologram directly in the spatial domain instead of the frequency domain. The design of the complex FIR filter is described in detail. Experimental results demonstrate that the operation completely eliminates the zero-order diffraction and conjugate image and significantly enhances the quality of the reconstructed image.
16.
An improved speech-enhancement algorithm based on log-spectral estimation is proposed. Unlike traditional speech-enhancement algorithms, under speech-presence uncertainty it uses a soft-decision gain-factor correction to adjust the log-spectral amplitude of the noisy speech and suppress the background noise. The improved a priori SNR estimator and a priori speech-absence probability estimator effectively estimate the speech-presence probability, from which the spectral gain function for speech presence is derived; this is combined, by weighting, with the gain factor set for speech absence to form the overall spectral gain function. Computer simulations show that even at low SNR, with additive background noise such as white Gaussian noise or pink noise, the algorithm suppresses noise markedly while effectively avoiding the "musical noise" and speech distortion introduced by traditional algorithms.
17.
18.
In the summation convolution backprojection method of image reconstruction in computed tomography, the final image accuracy depends on the convolution filter used. Filters are designed to attenuate high spatial frequencies when noisy projection data are used. This paper explores the differences between images reconstructed using a range of filters and compares the results with the case of the ramp filter, which provides the "best" image for ideal, noise-free projection data. It is shown that systematic errors exist between these images and the best image, and that these errors are related to the second differential of the reconstruction filter with respect to spatial frequency. This error determination may be used to correct computed tomography images that have been reconstructed using inappropriate filters, and the theory is tested using noise-free projection data from two computer-simulated images. The corrected images are shown to be far closer to the original images.
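The ramp filter that serves as the "best" reference above can be sketched as a frequency-domain multiplication by |f| applied to one projection; noisy data would instead use a windowed ramp, which is exactly the filter choice whose systematic error the paper analyzes. The function name and discretization are my own assumptions.

```python
import numpy as np

def ramp_filter(projection):
    """Apply the ideal ramp filter |f| to one projection in the
    frequency domain, as done before backprojection. Note the ramp
    zeroes the DC component, so a filtered projection is zero-mean."""
    freqs = np.fft.fftfreq(projection.size)
    spectrum = np.fft.fft(projection) * np.abs(freqs)
    return np.real(np.fft.ifft(spectrum))
```

A windowed variant (e.g. multiplying |f| by a Hamming window) trades resolution for noise suppression, and the difference between the two filters is what drives the second-differential error term studied in the paper.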
19.
20.
Hong Shangguan Yi Liu Xueying Cui Yunjiao Bai Quan Zhang Zhiguo Gui 《International journal of imaging systems and technology》2016,26(1):3-14
It is a significant challenge to accurately reconstruct medical computed tomography (CT) images with important details and features. Reconstructed images always suffer from noise and artifact pollution because the acquired projection data may be insufficient or undersampled. In practice, some "isolated noise points" (similar to impulse noise) always exist in low-dose CT projection measurements. Statistical iterative reconstruction (SIR) methods have shown greater potential than the conventional filtered back-projection (FBP) algorithm to significantly reduce quantum noise while maintaining reconstruction quality. Although typical total variation-based SIR algorithms can obtain reconstructed images of relatively good quality, noticeable patchy artifacts remain unavoidable. To address impulse-noise and patchy-artifact pollution, this work, for the first time, proposes a joint regularization constrained SIR algorithm for sparse-view CT image reconstruction, named "SIR-JR" for simplicity. The new joint regularization consists of two components: total generalized variation, which can process images with many directional features and yields high-order smoothness, and the neighborhood median prior, a powerful filtering tool for impulse noise. A new alternating iterative algorithm is then used to solve the objective function. Experiments on different head phantoms show that the reconstructed images are of superior quality and that the presented method is feasible and effective.