Similar Literature
19 similar documents found
1.
宋长新 《激光与红外》2012,42(11):1306-1310
Clustering, an important approach to image segmentation, has been studied extensively. This paper proposes a new clustering segmentation algorithm for infrared images that incorporates sparse coding, extending the traditional K-means-based segmentation method. Clustering combined with sparse coding can effectively fuse local image information and readily exploit the intrinsic correlations between pixels, but it suffers from over-segmentation and from pixels that are hard to assign to a class. To address this, atom clustering is introduced into the dictionary learning process, which reduces the number of classes the dictionary atoms belong to and prevents over-segmentation; at the same time, the sparse coding coefficients are combined with each atom's degree of membership to the cluster centers to decide which class a pixel belongs to. This treatment makes better use of the intrinsic correlations between pixels for clustering segmentation and naturally incorporates local spatial information, separating target regions from background regions more cleanly. Experimental results show that the K-means clustering segmentation algorithm combined with sparse coding more accurately segments and extracts the important regions of infrared images against complex backgrounds.
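A minimal sketch of this kind of pipeline, assuming a grayscale infrared image `img` as a 2-D float array; the patch size, dictionary size, sparsity level, and cluster count are illustrative, and plain K-means on the sparse codes stands in for the paper's atom-clustering and membership steps:

```python
# Sketch: segment an IR image by clustering sparse codes of its patches.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.cluster import KMeans

def segment(img, patch=5, n_atoms=64, n_classes=2):
    patches = extract_patches_2d(img, (patch, patch)).reshape(-1, patch * patch)
    patches = patches - patches.mean(axis=1, keepdims=True)   # remove DC
    # Learn an overcomplete dictionary and sparse-code every patch with OMP.
    dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                     transform_algorithm='omp',
                                     transform_n_nonzero_coefs=5)
    codes = dl.fit(patches).transform(patches)
    # Cluster sparse codes instead of raw intensities; each patch's label
    # is assigned to the pixel grid position of its top-left corner.
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(codes)
    h, w = img.shape
    return labels.reshape(h - patch + 1, w - patch + 1)
```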

2.
A Nonlocal Sparse Representation Method for Color Image Demosaicking
黄丽丽  肖亮  韦志辉 《电子学报》2014,42(2):272-279
At present, most color demosaicking (CDM) algorithms exploit only local spatial and spectral correlations, which easily blurs edges and loses fine structures in the restored image. When an image contains periodic fine structures, such local methods are prone to artifacts such as zippering and grid patterns. To address these problems, we unify dictionary learning and sparse coding in a single variational framework and propose a nonlocal adaptive sparse representation model. The dictionary is learned online adaptively by clustering nonlocally similar patches. Local and nonlocal redundant information constrains the sparse codes, forcing each code toward its nonlocal mean to reduce coding error. To effectively suppress CDM errors, which follow a heavy-tailed distribution, a data-fidelity term based on the l1 norm is designed. Finally, the model is solved efficiently by combining alternating minimization with operator-splitting techniques. Experimental results verify the effectiveness of the proposed model and its numerical algorithm.

3.
Clustering-Based Sparse Image Denoising
Among image denoising methods, the nonlocal means algorithm and sparse denoising have attracted wide attention in recent years. Nonlocal means takes a weighted average of pixels whose neighborhoods are similar, while sparse denoising represents the noise-free part of the image over an overcomplete dictionary. Building on both ideas, this paper proposes a clustering-based sparse denoising method that combines the strengths of the two: similar image patches are clustered and, under an l1/l2-norm regularization constraint, the patches within a cluster are given sparse representations with a common structure over the overcomplete dictionary, which accomplishes the denoising. For the dictionary, a DCT dictionary and a biorthogonal wavelet dictionary are used, preserving both the smooth and the detail components of the original image. Experimental results show that the method denoises better than traditional sparse denoising methods.
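A minimal sketch of the shared-structure coding step, assuming the patches of one cluster form the columns of `Y`; the overcomplete DCT dictionary follows the standard K-SVD-style construction, and Simultaneous OMP stands in for the paper's l1/l2-regularized coder:

```python
# Sketch: joint sparse coding of a patch cluster over an overcomplete DCT dictionary.
import numpy as np

def odct_dictionary(n=8, K=11):
    # Overcomplete 2-D DCT dictionary (K*K atoms of size n*n), K-SVD style.
    D1 = np.cos(np.outer(np.arange(n), np.arange(K)) * np.pi / K)
    D1[:, 1:] -= D1[:, 1:].mean(axis=0)
    D1 /= np.linalg.norm(D1, axis=0)
    return np.kron(D1, D1)                        # shape (n*n, K*K)

def somp(Y, D, n_nonzero=8):
    # Simultaneous OMP: all patches in the cluster share one atom support.
    residual, support = Y.copy(), []
    for _ in range(n_nonzero):
        score = np.abs(D.T @ residual).sum(axis=1)  # joint score per atom
        score[support] = 0
        support.append(int(np.argmax(score)))
        X = np.linalg.lstsq(D[:, support], Y, rcond=None)[0]
        residual = Y - D[:, support] @ X
    return support, X                             # shared support, coefficients
```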

4.
Classical K-means clustering depends heavily on its initial values, which makes the clustering results unstable; this paper proposes a clustering method with adaptively chosen initial centers, which needs fewer iterations and gives more stable results. Using sparse representation and regression, the sparse representation coefficients of high- and low-resolution images serve as samples: support vector regression is trained to capture the relationship between low- and high-resolution sparse coefficients, and the predicted high-resolution coefficients reconstruct the image. Compared with previous methods, this improves reconstruction quality and recovers more high-frequency detail.
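A minimal sketch of the regression step, assuming paired codes are already available (row i of `X_lo` codes a low-resolution patch, row i of `X_hi` the matching high-resolution patch over a dictionary `D_hi`); one RBF support vector regressor per output coefficient is an illustrative setup:

```python
# Sketch: learn the LR-code -> HR-code mapping with support vector regression.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

def fit_coef_mapping(X_lo, X_hi):
    # One SVR per HR coefficient dimension: X_hi ~ f(X_lo).
    model = MultiOutputRegressor(SVR(kernel='rbf', C=10.0, epsilon=0.01))
    return model.fit(X_lo, X_hi)

def reconstruct(model, x_lo, D_hi):
    # Predict the HR sparse code, then synthesize the HR patch over D_hi.
    x_hi = model.predict(x_lo.reshape(1, -1))[0]
    return D_hi @ x_hi
```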

5.
In image processing, two key components of sparse-representation-based super-resolution are the high/low-resolution dictionary pair and the mapping between their sparse codes. Because image content is rich and varied, a single dictionary cannot represent images well; and on the mapping side, a strictly equal constraint between the codes limits reconstruction quality. Addressing both points, this work performs super-resolution with multiple, more inclusive dictionaries and a fully coupled sparse relation under looser constraints. Based on the nonlocal self-similarity of images, adaptive clustering is run several times; the best clustering is selected, and multiple dictionaries are obtained through a fully coupled sparse learning super-resolution algorithm; finally, the input low-resolution image is reconstructed class by class into the high-resolution result. Experimental results show that on the images Leaves, Barbara, and Room, the proposed clustering algorithm improves peak signal-to-noise ratio (PSNR) over the original fully coupled sparse learning algorithm by 0.51 dB, 0.21 dB, and 0.15 dB, respectively.

6.
《现代电子技术》2017,(17):51-55
To address the incomplete removal of mixed noise from images, a weighted coding method over grouped image patches is proposed. First, grouped patches are extracted from training images using nonlocal similar blocks; the grouped patches are then used to train a nonlocal self-similarity prior model; finally, the sparse prior model and the nonlocal self-similarity prior model are integrated into the regularization term of the coding framework. Experimental results show that the proposed method substantially improves reconstructed image quality over comparable methods.

7.
Super-Resolution Image Restoration Based on Nonlocal Sparse Coding
Super-resolution restoration methods based on compressed sensing usually adopt a local sparse coding strategy that encodes each image patch independently, which easily produces artificial blocking effects. To address this, the paper proposes a super-resolution image restoration method based on nonlocal sparse coding. The algorithm applies the nonlocal self-similarity prior in both dictionary training and image coding: the dictionary is trained on an interpolated version of the low-resolution image, and each patch's nonlocal sparse code is the weighted average of the local codes of its similar patches. Simulations show that the algorithm restores images better and is fairly robust on noisy images.
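A minimal sketch of the nonlocal coding step, assuming `codes[i]` is the local sparse code of patch i and `patches[i]` its pixel vector; the Gaussian similarity kernel with bandwidth `h` is an assumed choice borrowed from nonlocal means:

```python
# Sketch: nonlocal sparse code as a similarity-weighted average of local codes.
import numpy as np

def nonlocal_code(i, patches, codes, k=10, h=10.0):
    # Find the k most similar patches to patch i (brute force for clarity).
    d2 = ((patches - patches[i]) ** 2).sum(axis=1)
    nn = np.argsort(d2)[:k]
    w = np.exp(-d2[nn] / (h * h))
    w /= w.sum()
    # The patch's nonlocal code averages the local codes of its neighbors.
    return (w[:, None] * codes[nn]).sum(axis=0)
```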

8.
In the traditional fuzzy C-means clustering algorithm with local information, the weighting coefficients are determined solely by the Euclidean distance between pixels, so inter-pixel similarity can be neither measured accurately nor exploited fully, and SAR images are segmented inaccurately. This paper proposes a completely new description of local-information similarity and, combining it with the image's nonlocal information, improves how pixel-to-center distances and pixel memberships are computed, yielding an improved SAR segmentation method that uses local and nonlocal information at the same time. Experiments show that, compared with other fuzzy clustering methods, this method suppresses the coherent speckle noise of SAR images while preserving the edges and details of SAR targets well, giving very good SAR segmentation results.
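A minimal sketch of a fuzzy C-means iteration whose distance term mixes each pixel's own feature with a nonlocal feature (e.g., the average over its most similar patches); the mixing weight `alpha` and the features are illustrative, not the paper's similarity measure:

```python
# Sketch: FCM with a distance term that blends local and nonlocal features.
import numpy as np

def fcm(X, X_nonlocal, c=3, m=2.0, alpha=0.5, iters=50):
    # X: (n, d) pixel features; X_nonlocal: (n, d) nonlocal averages.
    n = X.shape[0]
    U = np.random.dirichlet(np.ones(c), size=n)     # memberships, shape (n, c)
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted cluster centers
        d2 = ((X[:, None] - V) ** 2).sum(-1) \
           + alpha * ((X_nonlocal[:, None] - V) ** 2).sum(-1)
        d2 = np.maximum(d2, 1e-12)
        U = 1.0 / (d2 ** (1.0 / (m - 1)))           # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U.argmax(axis=1), V
```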

9.
朱平芳  陈利霞 《电视技术》2016,40(10):33-36
For mixed noise, a novel denoising model combining weighted sparsity and a variational approach is proposed. First, PCA is used to train an adaptive dictionary; then, drawing on nonlocal similarity, weighted coding is performed according to the characteristics of the noise; finally, a variational regularization term is added and the restored image is obtained by a dual method. Simulations show that the algorithm not only raises the peak signal-to-noise ratio of the image but also better preserves its important features, improving visual quality.

10.
Traditional sparse-representation fusion methods train the dictionary and carry out sparse decomposition on individual image patches; because the inherent relationships among patches are not considered, the dictionary atoms represent image features poorly and the sparse coefficients are inaccurate, so fusion quality suffers. This paper therefore proposes a group K-SVD (K-means singular value decomposition) fusion method for visible and infrared images, which exploits the nonlocal self-similarity of images to group similar…

11.
Projection-Based Sparse Representation and Nonlocal Regularization for Image Restoration
徐焕宇  孙权森  李大禹  宣丽 《电子学报》2014,42(7):1299-1304
A projection-based image restoration method for deblurring and denoising is proposed that combines sparse representation with nonlocal regularization. The method couples sparse representation over an adaptively constructed dictionary with nonlocal total variation, and the proposed regularization model is decomposed into three projection subproblems to make the solution more efficient. Experimental results show that the method effectively preserves the texture and detail of the original image, restores images across different degrees of degradation well, and outperforms the existing methods compared against in both visual quality and objective metrics.

12.
This paper proposes a new multi-instance multi-label image classification method based on sparse coding and ensemble learning. First, a dictionary is learned from all instances in the training bags, and each instance's sparse coding coefficients over that dictionary are computed; a feature vector for each bag is then computed from the sparse codes of all the instances in the bag, converting the multi-instance multi-label problem into a multi-label problem; finally, a multi-label classification algorithm solves it. To improve the generalization ability of the classifier, multiple classifiers are ensembled. Experimental results on multi-instance multi-label image datasets show that the proposed method performs better than other methods.
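A minimal sketch of the bag-feature construction, assuming `bags` is a list of (n_instances, d) arrays and `Y` a binary label matrix; max pooling of instance codes and a bagging ensemble over one-vs-rest logistic regressions are illustrative choices:

```python
# Sketch: turn multi-instance bags into fixed-length sparse-code features.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import BaggingClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

def bag_features(bags, n_atoms=128):
    dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                     transform_algorithm='lasso_lars',
                                     transform_alpha=0.1)
    dl.fit(np.vstack(bags))                       # dictionary from all instances
    # Pool each bag's instance codes into one vector (max pooling).
    return np.array([np.abs(dl.transform(b)).max(axis=0) for b in bags])

def fit_miml(bags, Y):
    F = bag_features(bags)
    # Ensemble of one-vs-rest classifiers for better generalization.
    clf = OneVsRestClassifier(
        BaggingClassifier(LogisticRegression(max_iter=1000), n_estimators=10))
    return clf.fit(F, Y)
```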

13.
To effectively solve the ill-posed image compressive sensing (CS) reconstruction problem, it is essential to properly exploit image prior knowledge. In this paper, we propose an efficient hybrid regularization approach for image CS reconstruction, which can simultaneously exploit both internal and external image priors in a unified framework. Specifically, a novel centralized group sparse representation (CGSR) model is designed to more effectively exploit the internal image sparsity prior by suppressing the group sparse coding noise (GSCN), i.e., the difference between the group sparse coding coefficients of the observed image and those of the original image. Meanwhile, by taking advantage of the plug-and-play (PnP) image restoration framework, a state-of-the-art deep image denoiser is plugged into the optimization model of image CS reconstruction to implicitly exploit an external deep denoiser prior. To make our hybrid internal and external image priors regularized image CS method (named CGSR-D-CS) tractable and robust, an efficient algorithm based on the split Bregman iteration is developed to solve the optimization problem of CGSR-D-CS. Experimental results demonstrate that our CGSR-D-CS method outperforms some state-of-the-art image CS reconstruction methods (either model-based or deep learning-based) in terms of both objective quality and visual perception.
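A loose sketch of the plug-and-play half of the method, with a Gaussian filter standing in for the paper's deep denoiser and a plain gradient step standing in for the CGSR subproblem; `Phi` is the CS sampling matrix and all step sizes are illustrative:

```python
# Sketch: PnP-style CS reconstruction with a swappable denoiser prior.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_cs(y, Phi, shape, iters=100, rho=1.0, step=1.0):
    n = Phi.shape[1]
    x = Phi.T @ y                                  # back-projection init
    v, u = x.copy(), np.zeros(n)                   # splitting variables
    for _ in range(iters):
        # Data subproblem: gradient step on ||y - Phi x||^2 + rho ||x - (v - u)||^2.
        grad = Phi.T @ (Phi @ x - y) + rho * (x - (v - u))
        x = x - step * grad
        # Prior subproblem: plug in a denoiser (a deep denoiser in the paper).
        v = gaussian_filter((x + u).reshape(shape), sigma=1.0).ravel()
        u = u + x - v                              # dual update
    return x.reshape(shape)
```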

14.
In the traditional approach of block transform image coding, a large number of bits are allocated to the DC coefficients. A technique called DC coefficient restoration (DCCR) has been proposed to further improve the compression ability of block transform image coding by not transmitting the DC coefficients but estimating them from the transmitted AC coefficients. Images generated this way, however, have inherent errors that degrade visual quality. In this paper, a global estimation DCCR scheme is proposed that can eliminate these inherent errors. The scheme estimates all the DC coefficients of the blocks simultaneously by minimising the sum of the energy of all the edge difference vectors of the image. The performance of the global estimation DCCR is evaluated using a mathematical model and experiments. Fast algorithms are also developed for efficient implementation of the proposed scheme.
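A minimal sketch of the global-estimation idea under a simplifying assumption: the DC offsets are chosen so that neighbouring 8×8 blocks match across their shared boundaries in the least-squares sense, with `ac` holding each block's AC-only reconstruction. The real scheme minimises the energy of the edge difference vectors; this edge-mean linear system only approximates it:

```python
# Sketch: estimate all block DCs jointly from boundary continuity.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def estimate_dc(ac):                  # ac: (rows, cols, 8, 8) AC-only blocks
    rows, cols = ac.shape[:2]
    eqs, rhs = [], []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:          # right edge of (r,c) vs left edge of (r,c+1)
                eqs.append((r * cols + c, r * cols + c + 1))
                rhs.append(ac[r, c + 1, :, 0].mean() - ac[r, c, :, -1].mean())
            if r + 1 < rows:          # bottom edge vs top edge of block below
                eqs.append((r * cols + c, (r + 1) * cols + c))
                rhs.append(ac[r + 1, c, 0, :].mean() - ac[r, c, -1, :].mean())
    A = lil_matrix((len(eqs) + 1, rows * cols))
    for i, (a, b) in enumerate(eqs):  # dc[a] - dc[b] ~ boundary jump
        A[i, a], A[i, b] = 1.0, -1.0
    A[-1, 0] = 1.0                    # anchor block 0: absolute level is unknown
    return lsqr(A.tocsr(), np.array(rhs + [0.0]))[0].reshape(rows, cols)
```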

15.
Monitoring cameras are now widely used to monitor everything from a room in a house to an entire warehouse. However, in real monitoring scenarios, a variety of factors, such as underexposure, optical blurring, and defocusing, affect image quality, leading to low-quality, low-resolution (LR) images of the individual of interest. Reconstruction of a high-resolution (HR) face image with detailed facial features from an LR observation, based on a set of HR and LR training image pairs, plays an important role in computer vision and face image analysis applications. To super-resolve an HR face given an LR face image, the key issue is how to effectively encode the LR image patch. However, due to stability and accuracy issues, the coding approaches proposed so far are far from satisfactory. In this paper, we present a novel sparse coding method that exploits support information on the coding coefficients. According to the distances between the input patch and the bases in the dictionary, we first assign different weights to the coding coefficients and then obtain them by solving a weighted sparse problem. Experiments on commonly used databases and on face images captured under real monitoring conditions demonstrate that our method outperforms the state-of-the-art.
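A minimal sketch of the weighted coding step: atoms far from the input patch receive larger l1 penalties, and the weighted problem min ||y − Dx||² + λ Σᵢ wᵢ|xᵢ| reduces to a standard lasso by rescaling the dictionary columns (xᵢ = zᵢ/wᵢ). The distance-based weights below are an illustrative choice:

```python
# Sketch: distance-weighted sparse coding via column rescaling.
import numpy as np
from sklearn.linear_model import Lasso

def weighted_sparse_code(y, D, lam=0.01, eps=1e-8):
    w = np.linalg.norm(D - y[:, None], axis=0) + eps  # patch-to-atom distances
    w /= w.mean()
    # Solve the standard lasso on the rescaled dictionary D / w ...
    z = Lasso(alpha=lam, fit_intercept=False, max_iter=5000).fit(D / w, y).coef_
    return z / w                                      # ... then undo the rescaling
```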

16.
The nonlocal self-similarity of images means that groups of similar patches have a low-dimensional structure. This property has previously been used for image denoising, with particularly notable success via sparse coding. However, only a few studies have considered how the statistics of the noise vary across similar patches during the iterative denoising process. This has motivated us to introduce an improved weighted sparse coding method for gray-level image denoising in this paper. On the basis of traditional sparse coding, we introduce one weight matrix to account for the noise variation across similar patches, and a second weight matrix to make full use of the sparsity priors of natural images. Maximum a posteriori (MAP) estimation is used to obtain the closed-form solution of the proposed method. Experimental results demonstrate that the proposed method is competitive with state-of-the-art methods in both objective and perceptual quality.
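A loose sketch of the two-weight idea on one group of similar patches, assuming an orthonormal dictionary `D` (e.g., a full DCT basis) so that analysis and synthesis are simple transposes: one weight rescales each patch by its estimated noise level, and the other becomes coefficient-wise soft thresholds. Both weightings are illustrative stand-ins, not the paper's MAP solution:

```python
# Sketch: weighted sparse coding of a group of similar patches.
import numpy as np

def weighted_group_denoise(Y, D, sigma, lam=0.1):
    # Y: (d, k) similar patches as columns; sigma: (k,) per-patch noise stds.
    A = D.T @ (Y / sigma)                       # noise-weighted coefficients
    tau = lam / (np.abs(A).mean(axis=1, keepdims=True) + 1e-8)
    X = np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)  # weighted soft threshold
    return (D @ X) * sigma                      # undo the noise weighting
```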

17.
Multi-focus image fusion aims to generate an image with all objects in focus by integrating multiple partially focused images. It is challenging to find an effective focus measure to evaluate the clarity of source images. In this paper, a novel multi-focus image fusion algorithm based on Geometrical Sparse Representation (GSR) over single images is proposed. The main novelty of this work is that it shows the potential of GSR coefficients for image fusion. Unlike traditional sparse representation (SR) based methods, the proposed algorithm does not need to train an overcomplete dictionary or vectorize the signal. In our algorithm, the source images are first represented by geometrical sparse coefficients over a single dictionary image. Specifically, we employ a weighted GSR model in the sparse coding phase, ensuring the importance of the center pixel. The weighted GSR coefficients then measure the activity level of the source images, and an average pooling strategy yields an initial decision map. Third, the decision map is refined with simple post-processing. Finally, the fused all-in-focus image is constructed from the refined decision map. Experimental results demonstrate that the proposed method is competitive with or even superior to state-of-the-art fusion methods in both subjective and objective comparisons.
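A minimal sketch of the fusion stage, assuming `act_a` and `act_b` are per-pixel activity maps (e.g., absolute GSR coefficients) for the two sources; average pooling, choose-max, and a median-filter cleanup stand in for the paper's refinement step:

```python
# Sketch: activity-driven decision map for multi-focus fusion.
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def fuse(img_a, img_b, act_a, act_b, win=8):
    pa = uniform_filter(act_a, size=win)          # average pooling of activity
    pb = uniform_filter(act_b, size=win)
    decision = (pa >= pb).astype(float)           # initial choose-max map
    decision = median_filter(decision, size=win)  # simple refinement
    return decision * img_a + (1 - decision) * img_b
```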

18.
This paper presents a new image restoration method for improving the quality of halftoning-Block Truncation Coding (BTC) decoded images in a patch-based manner. The halftoning-BTC decoded image suffers from halftoning impulse noise, which can be effectively reduced and suppressed using Vector Quantization (VQ)-based and sparsity-based approaches. The VQ-based approach employs a visual codebook generated from the clean image, whereas the sparsity-based approach utilizes two learned dictionaries for noise reduction. The sparsity-based approach assumes that the halftoning-BTC decoded image and the clean image share the same sparse coefficients: the sparse coding stage uses the halftoning-BTC dictionary, while the reconstruction stage exploits the clean-image dictionary. As the experimental results suggest, the proposed method outperforms the filtering approaches in halftoning-BTC image reconstruction.
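A minimal sketch of the shared-coefficient assumption, with orthogonal matching pursuit as an assumed coder: a patch of the BTC-decoded image is coded over the degraded dictionary `D_btc`, and the same coefficients synthesize the patch over the clean dictionary `D_clean`:

```python
# Sketch: double-dictionary restoration of one halftoning-BTC patch.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def restore_patch(y, D_btc, D_clean, k=5):
    x = orthogonal_mp(D_btc, y, n_nonzero_coefs=k)  # code on the degraded dict
    return D_clean @ x                              # synthesize with the clean dict
```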

19.
Hyperspectral Target Detection via Local Non-negative Sparse Coding
Sparse-coding-based hyperspectral image processing algorithms can mine the latent data correlations in the high-dimensional hyperspectral data space and naturally fit the essential characteristics of spectral signals. This paper proposes a hyperspectral target detection algorithm based on non-negative sparse coding. Compared with the classical sparse coding model, non-negative sparse coding constrains the coding coefficients to be non-negative, which on the one hand gives the linear coding a clear physical interpretation and on the other hand improves the separability and robustness of the coefficients. The algorithm first constructs a local dynamic dictionary with a dual-window design, then uses the difference in coding sparsity between target and background over the dynamic dictionary for threshold segmentation, and finally performs target detection through statistical decision. Experiments on both simulated and real data demonstrate the effectiveness of the algorithm.
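A minimal sketch of the detection statistic, assuming `target_dict` and `bg_dict` hold spectra (as columns) gathered from the inner and outer windows of the dual-window design; non-negative least squares stands in for the paper's non-negative sparse coder:

```python
# Sketch: dual-window non-negative coding residuals as a detection score.
import numpy as np
from scipy.optimize import nnls

def detect(pixel, target_dict, bg_dict):
    # Non-negative codes of the test spectrum over each local dictionary.
    _, r_t = nnls(target_dict, pixel)    # residual of the target-only model
    _, r_b = nnls(bg_dict, pixel)        # residual of the background-only model
    return r_b / (r_t + 1e-12)           # large ratio -> likely target
```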
