Similar Documents
 20 similar documents found
1.
A new wavelet-based fuzzy single- and multi-channel image denoising method   Cited by: 1 (self-citations: 0, other citations: 1)
In this paper, we propose a new wavelet shrinkage algorithm based on fuzzy logic. In particular, the intra-scale dependency within wavelet coefficients is modeled using a fuzzy feature. This feature space distinguishes between important coefficients, which belong to image discontinuities, and noisy coefficients. We use this fuzzy feature to enhance the information carried by the wavelet coefficients in the shrinkage step, where a fuzzy membership function shrinks the coefficients based on the feature. In addition, we extend the noise-reduction algorithm to multi-channel images, using the inter-relation between channels as a fuzzy feature to improve denoising performance over processing each channel separately. We evaluate the algorithm in the dual-tree discrete wavelet transform, a shiftable, modified version of the discrete wavelet transform. Extensive comparisons with state-of-the-art image denoising algorithms indicate that our algorithm performs better in both noise suppression and edge preservation.
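The shrinkage step described above can be sketched as follows. The membership function below is a hypothetical stand-in for the paper's fuzzy feature (the exact form is not given in the abstract): it maps coefficient magnitude, relative to the noise level, into [0, 1), so weak (noisy) coefficients are suppressed while strong (edge) coefficients pass nearly unchanged.

```python
import numpy as np

def fuzzy_shrink(coeffs, sigma, k=2.0):
    """Shrink wavelet coefficients with a fuzzy membership weight.

    The membership form is an illustrative assumption, not the paper's
    exact fuzzy feature: magnitude relative to the noise level sigma is
    mapped to [0, 1), so small (noisy) coefficients are suppressed and
    large (edge) coefficients are kept almost unchanged.
    """
    feature = np.abs(coeffs) / (k * sigma)          # normalized magnitude
    membership = feature**2 / (1.0 + feature**2)    # soft degree of "signal"
    return membership * coeffs
```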

2.
Guided image denoising recovers clean target images by fusing guidance images and noisy target images. Several deep neural networks have been designed for this task, but they are black-box methods lacking interpretability. To overcome this issue, this paper builds a more interpretable network. To start with, an observation model is proposed to account for the modality gap between target and guidance images. This paper then formulates a deep-prior-regularized optimization problem and solves it with the alternating direction method of multipliers (ADMM); the update rules are generalized to design the network architecture. Extensive experiments conducted on the FAIP and RNS datasets show that the proposed network outperforms several state-of-the-art and benchmark methods in both evaluation metrics and visual inspection.

3.
This paper presents a novel data fusion paradigm based on fuzzy evidential reasoning. A new fuzzy evidence structure model is first introduced to formulate probabilistic evidence and fuzzy evidence in a unified framework. A generalized Dempster's rule is then utilized to combine fuzzy evidence structures associated with multiple information sources. Finally, an effective decision rule is developed that accounts for the uncertainty of probabilistic and fuzzy evidence, quantified by Shannon entropy and fuzzy entropy, handles conflict, and achieves robust decisions. To demonstrate the effectiveness of the proposed paradigm, we apply it to classifying synthetic images and segmenting multi-modality human brain MR images. We conclude that the proposed paradigm outperforms both the traditional Dempster–Shafer evidence-theory approach and the fuzzy-reasoning approach.

4.
To address the incomplete noise removal and heavy texture-detail distortion of existing methods, an adaptive weighted vector filtering method is proposed. The method divides the noisy image into processing blocks; by scanning, the pixels that differ from the central pixel are gathered into a row (or column) vector, from which the maximum, minimum, and median are extracted and compared with each pixel under test. Depending on the outcome, variables are defined and the original pixel value, the extremes, and the median are weighted to reconstruct each pixel and synthesize the new image. Experiments show that the method improves denoising performance and texture preservation across different types of images and is strongly adaptive.
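A much-reduced sketch of the block-scanning idea: a pixel that coincides with the local extreme (a suspected impulse) is replaced by the local median, while all other pixels are kept, preserving texture. The decision rule and window size here are illustrative assumptions, not the paper's exact weighting scheme.

```python
import numpy as np

def adaptive_extreme_filter(img, win=3):
    """Replace pixels equal to the local min/max (suspected impulses)
    with the local median; leave all other pixels untouched."""
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            block = padded[i:i + win, j:j + win]
            lo, hi, med = block.min(), block.max(), np.median(block)
            if img[i, j] == lo or img[i, j] == hi:
                out[i, j] = med
    return out
```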

5.
This note presents a new denoising method based on thin-plate splines (TPS). The proposed approach builds on the general TPS denoising of Berman [2], but its unwanted smoothing of edges and details is suppressed by a locally applied weighting scheme. The performance of the method is demonstrated and compared with the original TPS denoising.

6.
张静, 孙俊 (Zhang Jing, Sun Jun). 《微计算机信息》 (Microcomputer Information), 2007, 23(3): 284-285
Thresholding the wavelet transform coefficients of an image effectively reduces noise, but some noise remains. Wiener filtering is a linear filtering method; combining wavelet thresholding with Wiener filtering can reduce image noise further. Experimental results show that wavelet-threshold Wiener filtering is an effective image denoising method, outperforming wavelet thresholding alone both in image restoration and in perceived visual quality.
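The two-stage idea, threshold first and then Wiener-filter, can be illustrated with an empirical Wiener shrinkage on wavelet coefficients. The combination rule below is a common textbook form and not necessarily the authors' exact pipeline: the thresholded coefficients serve as a pilot estimate of signal energy, which drives a coefficient-wise Wiener gain applied to the original coefficients.

```python
import numpy as np

def soft_threshold(c, t):
    """Standard soft thresholding of wavelet coefficients."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def empirical_wiener(c, c_thresh, sigma):
    """Empirical Wiener shrinkage: c_thresh is a pilot estimate of the
    signal, sigma the noise standard deviation."""
    gain = c_thresh**2 / (c_thresh**2 + sigma**2)
    return gain * c
```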

7.
Building on the wavelet semi-soft threshold image denoising method, a new method based on adaptive local correlation coefficients is proposed. The method strikes a good compromise between soft and hard thresholding; by incorporating a local correlation coefficient, it strengthens the intra-subband correlation of wavelet coefficients under various wavelet transforms. For threshold selection, it adopts an adaptive threshold based on Bayes risk estimation together with a statistically motivated threshold, yielding optimal estimates for each subband and orientation of the wavelet coefficients. Experimental results show that the method denoises effectively while alleviating the visual distortion and edge-ringing artifacts caused by the wavelet transform, better preserving image edges and fine textures. The degree and effect of denoising can be controlled by tuning the local correlation coefficient, meeting different needs and making the method highly practical.
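The semi-soft (firm) threshold that the method builds on is standard and can be written directly; the adaptive local-correlation weighting itself is not reproduced here. Coefficients below the lower threshold are killed, those above the upper threshold are kept, and the range in between is interpolated linearly, giving the compromise between hard and soft thresholding described in the abstract.

```python
import numpy as np

def semi_soft_threshold(c, t1, t2):
    """Semi-soft (firm) thresholding with lower threshold t1 and upper
    threshold t2 (t1 < t2)."""
    a = np.abs(c)
    return np.where(a <= t1, 0.0,
           np.where(a > t2, c,
                    np.sign(c) * t2 * (a - t1) / (t2 - t1)))
```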

8.
A moment-based nonlocal-means algorithm for image denoising   Cited by: 3 (self-citations: 0, other citations: 3)
Image denoising is a crucial step for increasing image quality and improving the performance of all tasks involved in quantitative imaging analysis. The nonlocal (NL) means filter is a very successful technique for denoising textured images. However, the algorithm is defined only up to translation, without considering the orientation and scale of each image patch. In this paper, we introduce Zernike moments into the NL-means filter; these are the magnitudes of a set of orthogonal complex moments of the image. The Zernike moments of a small local window around each pixel are computed to capture the local structure of each patch, and similarities are then computed from this information instead of from pixel intensities. Owing to the rotation invariance of the Zernike moments, many more pixels and patches attain a high similarity measure, making the patch similarity both translation- and rotation-invariant. The proposed algorithm is demonstrated on real images corrupted by white Gaussian noise (WGN). Comparative experimental results show that the improved NL-means filter achieves better denoising performance.
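A minimal sketch of moment-based NL-means, with patch mean and standard deviation standing in for the Zernike-moment magnitudes. This is a deliberate simplification: both are rotation-invariant patch descriptors, but they are not the paper's actual features, and the dense weight matrix limits this sketch to small images.

```python
import numpy as np

def moment_nlmeans(img, win=3, h=10.0):
    """NL-means where patch similarity is measured on simple
    rotation-invariant descriptors (patch mean and std)."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    H, W = img.shape
    desc = np.empty((H, W, 2))
    for i in range(H):
        for j in range(W):
            p = padded[i:i + win, j:j + win]
            desc[i, j] = (p.mean(), p.std())       # invariant descriptors
    flat_desc = desc.reshape(-1, 2)
    flat_img = img.astype(float).reshape(-1)
    d2 = ((flat_desc[:, None, :] - flat_desc[None, :, :])**2).sum(-1)
    w = np.exp(-d2 / h**2)                         # similarity weights
    return (w @ flat_img / w.sum(1)).reshape(H, W)
```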

9.
Convolutional neural networks are now the mainstream machine-learning approach to visual object recognition. Studies show that deeper networks extract deep features with stronger representational power. However, when the dataset is too small, overly deep networks tend to overfit, and the classification performance of deep features is limited. This paper therefore proposes a new CNN classification algorithm: the parallel fusion network FD-Net, which improves feature expressiveness through network fusion. FD-Net first runs two identical subnetworks in parallel to extract image features, then uses a carefully designed feature fuser to merge the subnetwork features at multiple scales, producing richer and more accurate fused features for classification. In addition, dropout and batch normalization are employed to help the feature fuser remove redundant features, and a corresponding training strategy is proposed to control computational cost. Finally, with the classic ResNet, InceptionV3, DenseNet, and MobileNetV2 as base models, in-depth studies and evaluations are conducted on datasets including UECFOOD-100 and Caltech101. The experimental results show that the parallel fusion network trains classifiers with stronger recognition ability from limited training samples, effectively improving image classification accuracy.

10.
A new fusion algorithm for infrared and visible images is proposed. The images are first decomposed by the nonsubsampled contourlet transform (NSCT) into multiple scales and directions. The low-frequency subbands are fused with an improved energy-weighted rule, while the bandpass subbands are fused by combining the maximum-coefficient rule with region-variance weighting. The two sets of fused subband coefficients are then inverse NSCT-transformed to obtain the fused image. Fusion results of different algorithms are compared, and by both subjective and objective evaluation the proposed algorithm fuses well.
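The low-frequency energy-weighted rule can be sketched as a convex combination driven by local energy; the window size and the exact "improved" weighting are assumptions, since the abstract does not spell them out. The bandpass rule (maximum coefficient combined with region variance) is analogous.

```python
import numpy as np

def local_energy(band, win=3):
    """Sum of squared coefficients in a sliding window."""
    pad = win // 2
    p = np.pad(band.astype(float)**2, pad, mode='reflect')
    H, W = band.shape
    return np.array([[p[i:i + win, j:j + win].sum() for j in range(W)]
                     for i in range(H)])

def fuse_lowpass(a, b, win=3):
    """Energy-weighted fusion of two low-frequency subbands: each output
    coefficient is a convex combination weighted by local energy."""
    ea, eb = local_energy(a, win), local_energy(b, win)
    wa = ea / (ea + eb + 1e-12)
    return wa * a + (1 - wa) * b
```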

11.
Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The non-local means method proposed by Buades, Morel and Coll computes the denoised image as a weighted average of pixels across the whole image, with the weight between pixels based on the similarity of the neighborhoods around them. This method has attracted the attention of other researchers, who have proposed improvements and modifications. In this work we analyze those methods, seeking to understand their properties by connecting them to segmentation based on spectral properties of the graph that represents the similarity of image neighborhoods. We also propose a method to automatically estimate the parameters that produce optimal results in terms of mean square error and perceptual quality.
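The graph view mentioned above can be made concrete: the NL-means weights form an affinity matrix which, once row-normalized, is the transition matrix of a random walk on the image, the object whose spectrum spectral segmentation methods study. Patch size and kernel bandwidth below are illustrative choices.

```python
import numpy as np

def patch_affinity(img, win=3, h=10.0):
    """Build the NL-means weight matrix over all pixel patches and
    row-normalize it into a random-walk (row-stochastic) matrix."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    H, W = img.shape
    patches = np.array([p[i:i + win, j:j + win].ravel()
                        for i in range(H) for j in range(W)])
    d2 = ((patches[:, None, :] - patches[None, :, :])**2).mean(-1)
    A = np.exp(-d2 / h**2)              # affinity between neighborhoods
    return A / A.sum(1, keepdims=True)  # row-stochastic: one diffusion step
```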

12.
Stitching different regions of multiple images involves image segmentation, registration, and fusion. For this setting, a "dilation-based gradual-in gradual-out" image fusion algorithm is proposed, which incorporates morphological operations into the fusion step and substantially improves the quality of the stitched image. Several experiments verify the effectiveness of the proposed algorithm.
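One way to realize a dilation-based gradual-in/gradual-out blend is to grow the region mask by repeated morphological dilation, letting each step raise the blend weight slightly so the transition across the seam is smooth rather than abrupt. This is an illustrative reduction, not the paper's exact algorithm; the fused image would then be `w * A + (1 - w) * B`.

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation via the maximum over shifted copies."""
    p = np.pad(mask, 1)
    return np.max([p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
                   for di in range(3) for dj in range(3)], axis=0)

def feather_weights(mask, steps=4):
    """Weight map ramping from 1 inside the mask to 0 far outside it,
    built by accumulating repeated dilations."""
    w = mask.astype(float)
    m = mask.copy()
    for _ in range(steps):
        m = dilate(m)
        w += m.astype(float)
    return w / (steps + 1)
```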

13.
Multi-exposure image fusion (MEF) is an important area in computer vision and has attracted increasing interest in recent years. Apart from conventional algorithms, deep learning techniques have also been applied to MEF. However, although many efforts have been made on developing MEF algorithms, the lack of benchmarking studies makes it difficult to perform fair and comprehensive performance comparisons among them, significantly hindering the development of the field. In this paper, we fill this gap by proposing a multi-exposure image fusion benchmark (MEFB), which consists of a test set of 100 image pairs, a code library of 21 algorithms, 20 evaluation metrics, 2100 fused images, and a software toolkit. To the best of our knowledge, this is the first benchmarking study in the field of MEF. This paper also gives a literature review of MEF methods, with a focus on deep learning-based algorithms. Extensive experiments have been conducted using MEFB for comprehensive performance evaluation and for identifying effective algorithms. We expect that MEFB will serve as an effective platform for researchers to compare the performance of MEF algorithms.

14.
The estimation of the signal variance is a critical challenge in wavelet-domain minimum mean square error (MMSE) based image denoising. In contrast to conventional approaches, which treat the neighboring wavelet coefficients equally when estimating the signal variance at each coefficient position, an adaptive approach is proposed here that uses a bilateral statistical scheme to adaptively adjust the contributions of neighboring wavelet coefficients, providing an accurate estimate of the signal variance. Experimental results are presented to demonstrate the superior performance of the proposed approach.
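A sketch of bilaterally weighted variance estimation followed by MMSE shrinkage. The specific Gaussian spatial and range weights are assumptions standing in for the paper's bilateral statistical scheme; the point is that dissimilar neighbors contribute less to the variance estimate.

```python
import numpy as np

def bilateral_variance_mmse(coeffs, sigma, win=3, sd=1.0, sr=5.0):
    """MMSE shrinkage of wavelet coefficients, with the signal variance
    at each position estimated from bilaterally weighted neighbors
    (spatial Gaussian x range Gaussian on coefficient difference)."""
    pad = win // 2
    p = np.pad(coeffs.astype(float), pad, mode='reflect')
    H, W = coeffs.shape
    out = np.empty((H, W))
    ii, jj = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    w_spatial = np.exp(-(ii**2 + jj**2) / (2 * sd**2))
    for i in range(H):
        for j in range(W):
            nb = p[i:i + win, j:j + win]
            w = w_spatial * np.exp(-(nb - coeffs[i, j])**2 / (2 * sr**2))
            var_total = (w * nb**2).sum() / w.sum()       # E[y^2] estimate
            var_signal = max(var_total - sigma**2, 0.0)   # subtract noise
            gain = var_signal / (var_signal + sigma**2)   # Wiener gain
            out[i, j] = gain * coeffs[i, j]
    return out
```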

15.
The advantages of the bricks-and-clicks retail format in the battle for the online customer have been widely discussed, but empirical research on them has been limited. We applied a multi-channel store image perspective to assess its influence on online purchase intentions. Drawing on a sample of 630 customers of a large music retail store in the Netherlands, the results demonstrated that offline and online store perceptions directly influenced online purchase intention. In addition, our findings confirmed that offline store impressions were used as references for their online store counterparts. Synergy and reference effects are discussed.

16.
The objective of image fusion is to combine information from multiple images of the same scene. The result is a new image that is more suitable for human and machine perception or for further image-processing tasks such as segmentation, feature extraction, and object recognition. Different fusion methods have been proposed in the literature, including multiresolution analysis. This paper is an image fusion tutorial based on wavelet decomposition, i.e. a multiresolution image fusion approach. Images with the same or different resolution levels can be fused, e.g. from range sensing, visual CCD, infrared, thermal, or medical sensors. The tutorial performs a synthesis between the multiscale-decomposition-based image fusion approach (Proc. IEEE 87 (8) (1999) 1315), the ARSIS concept (Photogramm. Eng. Remote Sensing 66 (1) (2000) 49), and a multisensor scheme (Graphical Models Image Process. 57 (3) (1995) 235). Some image fusion examples illustrate the proposed approach, and a comparative analysis is carried out against classical existing strategies, including multiresolution ones.
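The basic multiresolution fusion rule the tutorial builds on, averaging the approximation band and picking the larger-magnitude detail coefficient, can be shown with a one-level Haar transform. The Haar decomposition is implemented directly so the sketch stays self-contained.

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform, returning (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2.0   # row averages
    d = (x[0::2] - x[1::2]) / 2.0   # row differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,  # LL
            (a[:, 0::2] - a[:, 1::2]) / 2.0,  # LH
            (d[:, 0::2] + d[:, 1::2]) / 2.0,  # HL
            (d[:, 0::2] - d[:, 1::2]) / 2.0)  # HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d."""
    H, W = LL.shape
    a = np.empty((H, 2 * W)); d = np.empty((H, 2 * W))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((2 * H, 2 * W))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(imgA, imgB):
    """Average the approximation (LL) bands; take the larger-magnitude
    coefficient in each detail band - the classic multiresolution rule."""
    A, B = haar2d(imgA), haar2d(imgB)
    LL = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(A[1:], B[1:])]
    return ihaar2d(LL, *details)
```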

17.
This paper presents a homogeneity-similarity-based method, a new patch-based image denoising method. In traditional patch-based methods, such as the NL-means method, block matching mainly depends on structural similarity. Homogeneity similarity is defined over adaptively weighted neighborhoods and can find more similar points than structural similarity, making it more effective, especially for points with few repetitive patterns such as corner and end points. Comparative results on synthetic and real image denoising indicate that our method effectively removes noise and preserves useful information such as edges and contrast while avoiding artifacts. Its application to medical image denoising also demonstrates that the method is practical.

18.
The theory of the ridgelet transform is reviewed, and an image denoising algorithm based on scale factors and the ridgelet transform is proposed; ridgelet denoising is applied to images and compared with wavelet denoising. Experimental results show that the algorithm works well on images corrupted by white Gaussian noise: it not only raises the signal-to-noise ratio of the processed image but also visibly improves its appearance.

19.
An improved LIP partial differential equation image denoising method   Cited by: 1 (self-citations: 0, other citations: 1)
To remedy the shortcomings of the logarithmic image processing total variation (LIP_TV) denoising model, an improved LIP partial differential equation denoising method is proposed. First, within the LIP mathematical framework, four-directional derivative information is introduced into the LIP gradient operator, giving an improved operator that measures image information more comprehensively and objectively and better controls the diffusion process. Then, exploiting the structural sensitivity of the human visual system, a noise visibility function is used to construct a new fidelity-term coefficient, which further preserves edge details and avoids manual estimation of the noise level. Theoretical analysis and experimental results show that the improved method removes noise and preserves edges and details better, clearly outperforming LIP_TV in both visual quality and objective metrics.
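The four-directional derivative idea can be sketched with central differences along the horizontal, vertical, and two diagonal directions; the LIP algebra, which replaces ordinary subtraction in the actual model, is omitted here for brevity.

```python
import numpy as np

def four_direction_gradient(u):
    """Gradient magnitude from four directional central differences:
    horizontal, vertical, and both diagonals (diagonal step length
    sqrt(2) accounted for). Boundaries wrap via np.roll."""
    dx = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / 2.0
    dy = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / 2.0
    dd1 = (np.roll(np.roll(u, -1, 0), -1, 1) -
           np.roll(np.roll(u, 1, 0), 1, 1)) / (2.0 * np.sqrt(2))
    dd2 = (np.roll(np.roll(u, -1, 0), 1, 1) -
           np.roll(np.roll(u, 1, 0), -1, 1)) / (2.0 * np.sqrt(2))
    return np.sqrt(dx**2 + dy**2 + dd1**2 + dd2**2)
```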

20.
Medical image fusion with weighted Contourlet-transform coefficients   Cited by: 2 (self-citations: 0, other citations: 2)
Objective: Because different medical imaging modalities use different principles and devices, their images differ markedly in quality and in spatial and temporal characteristics, and they provide complementary rather than overlapping information, so clinicians typically analyze several images jointly. Method: To improve the interpretation of fused multi-source information, the Contourlet transform, with its strengths in multiscale and multidirectional analysis, is applied to medical image fusion. The source images are first decomposed by the Contourlet transform into coefficients at multiple scales and directions. Fusion rules are then determined by analyzing the transformed coefficients, chiefly in how the low-frequency and high-frequency subband coefficients are handled: since the low-frequency subbands carry the bulk of the image content, a region-variance weighted fusion rule is applied to their coefficients; since the high-frequency subbands contain the useful edge details, a conditional weighting rule based on the primary image is applied to theirs. The final fused image is obtained by the inverse Contourlet transform. Results: Comparative experiments were conducted both across different fusion rules under the Contourlet transform and across different fusion methods. Judged by subjective visual quality and objective metrics, and compared with traditional fusion algorithms, the algorithm overcomes the blurring of edges and contours in fused images and effectively fuses multi-source medical information. Conclusion: A Contourlet-based fusion algorithm with region-variance weighting and conditional weighting is proposed. Simulations on CT and MRI brain images show that it increases the complementary information of multimodal medical images and improves the clarity of the fused result.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号