Similar Articles
20 similar articles found
1.
This paper presents an augmented Lagrangian (AL) based method for designing overcomplete dictionaries for sparse representation with a general lq data-fidelity term (q ≤ 2). In the proposed method, the dictionary is updated via a simple gradient-descent step after each inner minimization step of the AL scheme. In addition, a modified iterated shrinkage/thresholding algorithm is employed to accelerate the sparse-coding stage. The dictionary-update strategy differs from that of most existing methods in that the learned dictionary is refined gradually, growing more expressive over the iterations. An advantage of this iterated-refinement methodology is that it makes the method less dependent on the initial dictionary. Experimental results on real images for Gaussian noise removal (q = 2) and impulse noise removal (q = 1) consistently demonstrate that the proposed approach removes noise efficiently while maintaining high image quality.

2.
To achieve a better image-denoising effect, an improved denoising algorithm based on K-SVD (Singular Value Decomposition) dictionary learning is proposed. First, the noisy input is decomposed with K-means clustering, and the resulting image patches undergo sparse Bayesian learning and noise updating. After a set number of iterations, the Orthogonal Matching Pursuit (OMP) algorithm is used to sparsely code the patches; on top of the sparse coding, the dictionary is then updated column by column via singular value decomposition, iterating until an overcomplete dictionary suitable for sparse representation is obtained. Finally, the processed image is reconstructed to yield the denoised result. Experimental results show that, compared with traditional K-SVD dictionary denoising, the improved algorithm removes image noise more effectively while preserving edge and detail information, giving better visual quality.
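The OMP sparse-coding stage mentioned above can be illustrated with a minimal sketch. This is a generic textbook OMP in Python/NumPy, not the paper's implementation; the function name and the test dictionary are hypothetical.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms (columns) of the
    dictionary D to approximate the signal y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # re-fit the coefficients on the chosen support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

With an orthonormal dictionary, OMP simply picks the k largest-magnitude coefficients, which makes the behaviour easy to check.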

3.
An envelope filtering algorithm and its application to image denoising   Cited: 2 (self-citations: 0, cited by others: 2)
To address the image-denoising problem in image processing, an envelope-based filtering method is studied and proposed. By correcting the envelope curves and averaging them, an approximation of the original signal is obtained. This approximation effectively preserves the main waveform components and edges of the signal while suppressing low-amplitude noise disturbances, thereby removing noise while keeping edges. Applied to image denoising, the algorithm removes low-amplitude noise well and preserves image contours and edges, with results substantially better than weighted averaging, median filtering, and wavelet denoising.

4.
A non-negative dictionary learning algorithm for image inpainting   Cited: 3 (self-citations: 2, cited by others: 1)
An image-inpainting algorithm based on non-negative sparse dictionary learning is proposed. A sparsity constraint is added to the objective function of non-negative matrix factorization (NMF), and a non-negative dictionary is learned from training samples by alternating two steps: sparse coding, which uses the non-negative orthogonal matching pursuit (OMP) algorithm, and dictionary updating, which resembles the classical K-SVD algorithm. Finally, the sparse coefficients of the image to be restored are obtained from the dictionary with the smoothed L0-norm algorithm, and the image is inpainted accordingly. Inpainting experiments show that the algorithm restores images with different types of missing regions well, and both the visual quality and the objective metrics of the results outperform current mainstream algorithms.

5.
6.
To effectively restore images corrupted by white Gaussian noise, a denoising algorithm combining the dual-tree complex wavelet transform with adaptive Wiener filtering is proposed. Simulation results show that images restored with this algorithm have better subjective quality and peak signal-to-noise ratio than those produced by orthogonal-wavelet thresholding and by Wiener filtering alone.
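The adaptive Wiener filtering component can be sketched as a locally adaptive (Lee-style) Wiener filter that shrinks each pixel toward its local mean according to the estimated local signal variance. The dual-tree complex wavelet part is omitted here; the function below is a generic illustration under that assumption, not the paper's implementation.

```python
import numpy as np

def adaptive_wiener(img, noise_var, win=3):
    """Locally adaptive Wiener (Lee) filter on a 2-D image.
    Each pixel is shrunk toward its local window mean with a gain set by
    the ratio of estimated signal variance to total variance."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            block = padded[i:i + win, j:j + win]
            mu = block.mean()
            var = block.var()
            signal_var = max(var - noise_var, 0.0)  # estimated clean variance
            gain = signal_var / (signal_var + noise_var + 1e-12)
            out[i, j] = mu + gain * (img[i, j] - mu)
    return out
```

On a flat region, the local variance is close to the noise variance, so the gain is near zero and the filter averages aggressively; near edges the gain rises and detail is preserved.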

7.
Most subsampled filter banks lack translation invariance, an important property in denoising applications. In this paper, we study and develop new methods to convert a general multichannel, multidimensional filter bank to a corresponding translation-invariant (TI) framework. In particular, we propose a generalized algorithme à trous, an extension of the algorithme à trous introduced for 1-D wavelet transforms. Using the proposed algorithm, together with modified versions of directional filter banks, we construct the TI contourlet transform (TICT). To reduce the high redundancy and complexity of the TICT, we also introduce the semi-translation-invariant contourlet transform (STICT). We then apply an adapted bivariate shrinkage scheme to the STICT to obtain an efficient image-denoising approach. Our experimental results demonstrate the benefits and potential of the proposed approach. Complexity analysis and efficient realizations of the proposed TI schemes are also presented.

8.
To address the problem that the regularized orthogonal matching pursuit (ROMP) algorithm requires the sparsity level to be estimated in advance for compressed sensing (CS) reconstruction, which makes reconstruction accuracy unstable, an improved ROMP algorithm is proposed. Since the measurement signal inherits the features of the original signal, an adaptive weak-selection criterion is introduced when choosing candidate atoms; the criterion is set according to the information content of the measurement, so the sparsity level adapts automatically. The improved ROMP algorithm is applied to medical image fusion in the CS framework, with a fusion rule based on the structural similarity of the measurements: when the structural similarity between the measurements to be fused is high, the underlying source signals are likewise similar, and the fused measurement is an information-weighted combination of the two; when the structural similarity is low, the measurement carrying more information is taken as the fused measurement. Experimental results show that the reconstruction quality of the improved ROMP algorithm surpasses that of OMP, ROMP, and SAMP, with a peak signal-to-noise ratio (PSNR) gain of 2-3 dB. Applied to medical image fusion, it yields fused images with good visual quality that retain most of the feature information of the source images, and produces high-quality fusion results in a short time.
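The weak-selection idea (keep every atom whose correlation with the residual reaches a fraction of the strongest correlation, so the candidate-set size adapts to the signal instead of a preset sparsity level) can be sketched as follows. The threshold parameter `t` and the function name are hypothetical, not taken from the paper.

```python
import numpy as np

def weak_select(Phi, residual, t=0.7):
    """Adaptive weak selection: return the indices of all atoms of the
    sensing matrix Phi whose absolute correlation with the residual is at
    least t times the maximum correlation. The number of selected atoms
    therefore varies with the residual rather than being fixed."""
    corr = np.abs(Phi.T @ residual)
    return np.flatnonzero(corr >= t * corr.max())
```

The selected set would then be merged into the support and the coefficients re-fit by least squares, as in standard (R)OMP iterations.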

9.
Sparse representation (SR) has been widely used in image fusion in recent years. However, conventional SR methods segment the source image into vectors, which weakens the correlation and structural information of textures, and extracting texture with a sliding window is prone to spatial inconsistency in the flat regions of multi-modality medical fusion images. To solve these problems, a novel fusion method is proposed that combines separable dictionary optimization with Gabor filtering in the non-subsampled contourlet transform (NSCT) domain. First, the source images are decomposed into high-frequency (HF) and low-frequency (LF) components by the NSCT. The HF components are then reconstructed sparsely by separable dictionaries with iteratively updated sparse coding and dictionary training; in this process, the sparse coefficients and the separable dictionaries are updated by orthogonal matching pursuit (OMP) and a manifold-based conjugate gradient method, respectively. Meanwhile, the Gabor energy is used as a weighting factor to guide the fusion of the LF components, which further improves the fusion of low-significance features in flat regions. Finally, the fused components are inverse-transformed by the NSCT to obtain the fusion image. Experimental results demonstrate that the proposed method is competitive, achieving state-of-the-art performance in both visual quality and objective assessment.

10.
An improved EMD image-signal denoising algorithm   Cited: 1 (self-citations: 0, cited by others: 1)
《现代电子技术》2016,(16):91-93
Current threshold-based algorithms tend to filter out useful components of the image along with the noise, damaging image integrity and leaving the processed image blurred. To address this, the EMD-SG algorithm is proposed for image denoising. The image is decomposed by the empirical mode decomposition (EMD) algorithm, and a Savitzky-Golay (SG) filter is applied to the neighborhood of each sample point; a least-squares fit gives the optimal value within each neighborhood, and the image is reconstructed from the intrinsic mode functions (IMFs). The algorithm lets the processing balance noise removal against the integrity of the image signal. Experimental results show that it has better denoising ability than competing algorithms.
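The SG smoothing step (a least-squares polynomial fit in each sliding window, evaluated at the window centre) can be sketched in 1-D. This is a generic Savitzky-Golay filter, not the paper's EMD-SG pipeline; window length, polynomial order, and edge handling are illustrative choices.

```python
import numpy as np

def savitzky_golay(y, win=5, order=2):
    """Smooth a 1-D signal by least-squares fitting a polynomial of the
    given order in each sliding window and taking its value at the centre.
    The fit reduces to a fixed convolution kernel."""
    half = win // 2
    x = np.arange(-half, half + 1)
    # design matrix of the window polynomial; row 0 of its pseudo-inverse
    # gives the fitted value at x = 0 (the window centre)
    A = np.vander(x, order + 1, increasing=True)
    kernel = np.linalg.pinv(A)[0]
    padded = np.pad(y, half, mode='edge')
    # convolve flips the kernel, so pass it reversed to get a dot product
    return np.convolve(padded, kernel[::-1], mode='valid')
```

A useful property for checking correctness: at interior points, an order-p SG filter reproduces any polynomial of degree ≤ p exactly.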

11.
A need for an entirely new medical workstation design was identified to increase the deployment of 3D medical imaging and multimedia communication. Recent wide acceptance of the World Wide Web (WWW) as a general communication service within the global network has shown how big the impact of standards and open systems can be. Information is shared among heterogeneous systems and diverse applications on various hardware platforms only by agreeing on a common format for information distribution. For medical image communications, the Digital Imaging and Communication in Medicine (DICOM) standard is possibly anticipating such a role. Logically, the next step is open software: platform-independent tools, which can as easily be transferred and used on multiple platforms. Application of the platform-independent programming language Java enables the creation of plug-in tools, which can easily extend the basic system. Performance problems inherent to all interpreter systems can be circumvented by using a hybrid approach. Computationally intensive functions like image processing functions can be integrated into a natively implemented optimized image processing kernel. Plug-in tools implemented in Java can utilize the kernel functions via a Java-wrapper library. This approach is comparable to the implementation of computationally intensive operations in hardware.

12.
This work addresses the design of a novel complex steerable wavelet construction, the generation of transform-space feature measurements associated with corner and edge presence and orientation properties, and the application of these measurements directly to image denoising. The decomposition uses pairs of bandpass filters that display symmetry and antisymmetry about a steerable axis of orientation. While the angular characterization of the bandpass filters is similar to those previously described, the radial characteristic is new, as is the manner of constructing the interpolation functions for steering. The complex filters have been engineered into a multirate system, providing a synthesis and analysis subband filtering system with good reconstruction properties. Although the performance of our proposed denoising strategy is currently below that of recently reported state-of-the-art techniques in denoising, it does compare favorably with wavelet coring approaches employing global thresholds and with an "Oracle" shrinkage technique, and presents a very promising avenue for exploring structure-based denoising in the wavelet domain.

13.
《信息技术》2019,(9):163-167
The powerful feature-representation capability of deep learning provides a highly effective tool for object detection in remote-sensing images. However, mainstream deep learning models have huge numbers of parameters and high storage and computation costs, which limits their wider deployment. This paper proposes a lightweight deep learning model for object detection in remote-sensing images. Experimental results show that the proposed method matches the detection accuracy of Tiny_YOLOv3 while the model is only 44.7% of Tiny_YOLOv3's size, achieving a better balance among detection accuracy, model size, and computational cost.

14.
A two-stage 3-D filtering denoising algorithm for infrared images   Cited: 3 (self-citations: 0, cited by others: 3)
To remove the noise present in acquired infrared images, a two-stage 3-D filtering denoising algorithm is proposed that exploits the self-similarity statistics of the image. The algorithm produces an initial estimate and a final estimate of the image through three steps: block-matching and grouping, two-stage collaborative filtering, and aggregation. The initial estimate is obtained by linear hard-threshold filtering; the final estimate is obtained by nonlinear Wiener filtering. Experimental results show that, compared with PDE and BLS-GSM, the algorithm has advantages in visual quality, output PSNR, and execution time, making it suitable for real-time applications.

15.
《现代电子技术》2016,(20):159-162
Noisy images, especially those with high-density noise, lose much of their detail (high-frequency content) after denoising. To address this problem, a method based on dictionary learning and high-frequency enhancement is proposed. The noisy image is first processed by a denoising algorithm; sample images are then put through simulated noise addition and denoising in turn to obtain denoised sample images, and subtracting the denoised samples from the original samples yields difference images. Finally, the difference images and the denoised sample images are trained separately to obtain a pair of high- and low-resolution dictionaries, which are used to reconstruct the high-frequency content lost during denoising. Experimental results show that the proposed algorithm outperforms classical image-denoising algorithms in both subjective visual quality and objective evaluation.

16.
To overcome the shortcoming that general-purpose compression algorithms do not exploit the characteristics of synthetic aperture radar (SAR) imagery, an adaptive compression algorithm for ocean SAR images based on their probability distribution is proposed. Using the probability distribution of ocean SAR images, a quantization scheme is designed according to the distribution of targets, so that targets and background are preserved to different degrees. Exploiting the sparsity of the scene, pixels above a threshold are mapped to triplets whose gray-level and position information are entropy-coded separately; the remaining background layer, whose gray-level deviation is small, is bit-plane coded. Experimental results show that the algorithm compresses images effectively, achieving a peak signal-to-noise ratio (PSNR) 5-10 dB higher than JPEG2000 at the same bit rate. The algorithm has low complexity, preserves contrast well, and is suitable for compressing ship-at-sea SAR images under varying requirements.
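PSNR, the metric quoted above, is straightforward to compute; a minimal sketch (the `peak` default assumes 8-bit imagery):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstructed image: 10*log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```

A 5-10 dB PSNR gap at equal bit rate, as reported above, corresponds to roughly a 3x-10x reduction in mean squared error.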

17.
An effective automatic mosaicking method for image sequences   Cited: 2 (self-citations: 2, cited by others: 0)
An automatic mosaicking algorithm for image sequences is proposed that combines the phase correlation method with Speeded-Up Robust Features (SURF) keypoint matching. First, the phase correlation method is used to compute the normalized phase correlation; the sequence is ordered automatically by taking the intersection of maximum correlations, and the translation parameters are computed. Guided by the translation parameters, a rough estimate of the feature-detection region of interest (ROI: Region of ...

18.
To improve image denoising, an improved adaptive denoising algorithm based on fractional-order integration and median filtering is proposed. First, the noise-detection criterion of the rank-order based adaptive median filter (RAMF) is used to detect noise pixels; a "noise edge" discriminant function then re-examines the suspicious pixels. An adaptive fractional order is constructed from the local statistics and structural features of the image, and the detected noise pixels are finally denoised with adaptive fractional-order integral filtering. Compared with the traditional fractional-order integral denoising algorithm, the adaptive algorithm effectively retains edge pixels that would otherwise be wrongly removed and makes the fractional order adaptive, removing noise while preserving edge and texture details well.
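The RAMF-style noise-detection step can be sketched as follows: grow the window until the local median is not an extreme, then flag the pixel if its value sits at a local extreme. This is a simplified illustration of the general idea, not the paper's exact criterion; the function name and window limit are hypothetical.

```python
import numpy as np

def detect_impulse_noise(img, max_win=7):
    """Simplified RAMF-style impulse detector. For each pixel, enlarge the
    window until the local median lies strictly between the local min and
    max; the pixel is flagged as an impulse candidate if its value is not
    strictly between those extremes."""
    h, w = img.shape
    pad = max_win // 2
    padded = np.pad(img, pad, mode='reflect')
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            for win in range(3, max_win + 1, 2):
                k = win // 2
                block = padded[i + pad - k:i + pad + k + 1,
                               j + pad - k:j + pad + k + 1]
                lo, med, hi = block.min(), np.median(block), block.max()
                if lo < med < hi:          # median is trustworthy here
                    mask[i, j] = not (lo < img[i, j] < hi)
                    break
            else:
                mask[i, j] = True          # window exhausted: suspect pixel
    return mask
```

Only the flagged pixels would then be passed to the fractional-order integral filter, leaving clean pixels untouched.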

19.
An adaptive spatial fuzzy clustering algorithm for 3-D MR image segmentation   Cited: 22 (self-citations: 0, cited by others: 22)
An adaptive spatial fuzzy c-means clustering algorithm is presented in this paper for the segmentation of three-dimensional (3-D) magnetic resonance (MR) images. The input images may be corrupted by noise and intensity nonuniformity (INU) artifact. The proposed algorithm takes into account the spatial continuity constraints by using a dissimilarity index that allows spatial interactions between image voxels. The local spatial continuity constraint reduces the noise effect and the classification ambiguity. The INU artifact is formulated as a multiplicative bias field affecting the true MR imaging signal. By modeling the log bias field as a stack of smoothing B-spline surfaces, with continuity enforced across slices, the computation of the 3-D bias field reduces to that of finding the B-spline coefficients, which can be obtained using a computationally efficient two-stage algorithm. The efficacy of the proposed algorithm is demonstrated by extensive segmentation experiments using both simulated and real MR images and by comparison with other published algorithms.
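The plain fuzzy c-means updates that the adaptive algorithm builds on (without the spatial dissimilarity index or the B-spline bias field) can be sketched as follows. Initializing the centroids from the first c samples is an arbitrary choice made for illustration.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100):
    """Standard fuzzy c-means on X (n_samples, n_features).
    Alternates the membership update u_ik ∝ d_ik^(-2/(m-1)) with the
    fuzzily weighted centroid update. Returns (memberships U, centroids)."""
    centroids = X[:c].astype(float).copy()   # illustrative initialization
    for _ in range(iters):
        # distances of every sample to every centroid (with a small floor)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)    # memberships sum to 1 per sample
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, centroids
```

The spatial variant in the paper would add a neighborhood term to the dissimilarity `d` so that a voxel's membership is pulled toward those of its neighbors.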

20.
A new approach to the design of optimised codebooks using vector quantisation (VQ) is presented. A strategy of reinforced learning (RL) is proposed which exploits the advantages offered by fuzzy clustering algorithms, competitive learning, and knowledge of training vector and codevector configurations. Results are compared with the performance of the generalised Lloyd algorithm (GLA) and the fuzzy K-means (FKM) algorithm. It has been found that the proposed algorithm, fuzzy reinforced learning vector quantisation (FRLVQ), yields an improved quality of codebook design in an image compression application when FRLVQ is used as a pre-process. The investigations have also indicated that RL is insensitive to the selection of both the initial codebook and a learning rate control parameter, which is the only additional parameter introduced by RL beyond the standard FKM.
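The generalised Lloyd algorithm (GLA) used as the baseline above can be sketched as a k-means-style codebook design: alternately assign each training vector to its nearest codevector, then replace each codevector with the centroid of its assigned vectors. A minimal NumPy sketch, with a user-supplied initial codebook:

```python
import numpy as np

def lloyd_vq(train, codebook, iters=20):
    """Generalised Lloyd algorithm for VQ codebook design.
    train: (n, d) training vectors; codebook: (K, d) initial codevectors."""
    codebook = codebook.astype(float).copy()
    for _ in range(iters):
        # nearest-codevector assignment (squared Euclidean distance)
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        # centroid update; empty cells are left unchanged
        for k in range(len(codebook)):
            members = train[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook
```

The abstract's point is precisely that GLA is sensitive to this initial codebook, which the proposed FRLVQ reinforcement strategy avoids.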


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号