Similar Documents
20 similar documents found (search time: 15 ms)
1.
Exploiting image structure information is a difficult point in dictionary learning. To address the insufficient use of structural information and the low efficiency of traditional nonparametric Bayesian algorithms, this paper proposes a Structural Similarity Clustering Beta Process Factor Analysis (SSC-BPFA) dictionary learning algorithm. Through a Markov random field and a hierarchical Dirichlet process, the algorithm accounts for both the local structural similarity and the global clustering diversity of an image, and it uses variational Bayesian inference to learn the probabilistic model efficiently, ensuring convergence while keeping the clustering adaptive. Experiments show that, compared with current mainstream nonparametric Bayesian dictionary learning algorithms, the proposed algorithm achieves higher representation accuracy, structural similarity scores, and runtime efficiency in image denoising and inpainting applications.
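The hierarchical Dirichlet process at the heart of SSC-BPFA is beyond a short excerpt, but the stick-breaking construction underlying such nonparametric clustering priors is compact. A minimal numpy sketch (the truncation level K and concentration alpha are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Truncated stick-breaking construction of Dirichlet-process mixture weights:
# pi_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha).
alpha, K = 2.0, 25
v = rng.beta(1.0, alpha, size=K)
pi = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))

# The weights are nonnegative; the unassigned remainder prod(1 - v) is the
# probability mass of the truncated tail, so the two sum to exactly 1.
remainder = np.prod(1.0 - v)
```

Because the number of clusters with non-negligible weight adapts to the data, the model needs no preset cluster count.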

2.
Conventional data interpolation methods based on sparse representation usually assume that the signal is sparse under an overcomplete dictionary; in particular, they must fix the dictionary dimensions and the signal's sparsity level in advance. However, these are hard to know when the signal is complicated or dynamically changing. In this paper, we propose a nonparametric Bayesian dictionary learning based interpolation method for missing data in wireless sensor networks (WSNs), combining sparse representation and data interpolation. The method needs no preset sparsity level or dictionary dimensions: dictionary atoms are drawn from a multivariate normal distribution, and the dictionary size is learned adaptively by the nonparametric Bayesian method. In addition, we use a Dirichlet process to exploit the spatial similarity of the sensing data in WSNs and thereby improve interpolation accuracy. The interpolation model parameters, the optimal dictionary, and the sparse coefficients are inferred by means of Gibbs sampling, and the missing data are then estimated reliably from the derived parameters. Experimental results show that the proposed data interpolation method outperforms conventional methods in both interpolation accuracy and robustness.
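As a toy illustration of the interpolation idea (sparse-code a signal from its observed entries, then fill the gaps from the code), here is a numpy sketch; the dictionary and support are fixed by hand, whereas the paper learns both nonparametrically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a sensor signal that is sparse in a dictionary D. The paper
# learns D and its size nonparametrically; here both are fixed by hand.
n, k = 16, 32
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)

x_true = np.zeros(k)
x_true[[3, 17]] = [1.5, -2.0]          # a 2-sparse code (illustrative)
y_full = D @ x_true

# Simulate missing readings: only 10 of the 16 entries are observed.
mask = np.zeros(n, dtype=bool)
mask[rng.permutation(n)[:10]] = True

# Sparse coding restricted to the observed rows. Least squares on the known
# support stands in for a full sparse solver / Gibbs sampler.
support = [3, 17]
coef, *_ = np.linalg.lstsq(D[mask][:, support], y_full[mask], rcond=None)

# Interpolate every entry (including the missing ones) from the code.
y_hat = D[:, support] @ coef
```

In the noiseless case the observed rows over-determine the two coefficients, so the missing entries are recovered exactly.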

3.
Image noise reduction methods based on redundant dictionary learning exploit the sparse prior of patches and have been shown to achieve state-of-the-art results; however, they do not exploit the non-local similarity of image patches. In this paper we exploit both the structural similarities and the sparse prior of image patches and propose a new image noise reduction method based on dictionary learning and similarity regularization. Formulating image noise reduction as a multi-variable optimization problem, we alternately optimize the variables to obtain the denoised image. Experiments comparing the proposed method with its counterparts on benchmark natural images show its superiority in both visual results and numerical metrics.
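A bare-bones version of the sparse-prior half of such methods (the non-local similarity regularization is omitted): code a patch in an orthonormal 2-D DCT basis, keep only the largest-magnitude coefficients, and transform back. The patch, noise level, and number of kept coefficients are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis; rows are frequency atoms.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def denoise_patch(patch, basis, keep=8):
    # Sparse prior: keep only the `keep` largest-magnitude 2-D coefficients.
    coef = basis @ patch @ basis.T
    thresh = np.sort(np.abs(coef).ravel())[-keep]
    coef[np.abs(coef) < thresh] = 0.0
    return basis.T @ coef @ basis

rng = np.random.default_rng(1)
clean = np.outer(np.linspace(0.0, 1.0, 8), np.ones(8))   # smooth ramp patch
noisy = clean + 0.05 * rng.standard_normal((8, 8))

B = dct_matrix(8)
denoised = denoise_patch(noisy, B)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

A smooth patch concentrates its energy in a few low-frequency coefficients, so thresholding discards mostly noise.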

4.
To reduce edge artifacts and noise in super-resolved face images, a single-image super-resolution algorithm based on sparse coding is used; in the dictionary learning stage, an online dictionary learning method incorporating the L1 norm is introduced, so that the dictionary is updated column by column from the current input image patches and the dictionary produced in the previous iteration, yielding a more accurate overcomplete dictionary pair for image reconstruction. Simulation results show that the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the improved algorithm are substantially higher than those of the comparable sparse-coding super-resolution method (SCSR) and a super-resolution method using online dictionary learning (ODLSR), exceeding the latter by 0.72 dB and 0.0187 on average. The algorithm also visibly removes edge artifacts and exhibits stronger denoising ability and better robustness on noisy face images.
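Sparse coding with an L1 penalty, as used throughout this family of methods, can be solved by iterative soft-thresholding (ISTA). A self-contained numpy sketch with a hand-built random dictionary (the actual method learns its dictionary online, column by column):

```python
import numpy as np

rng = np.random.default_rng(8)

def ista(D, y, lam=0.1, iters=200):
    # Iterative soft-thresholding for min_x 0.5*||y - Dx||^2 + lam*||x||_1.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = x + D.T @ (y - D @ x) / L      # gradient step on the fit term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(32)
x_true[[4, 20]] = [1.0, -1.0]              # illustrative 2-sparse code
y = D @ x_true

x = ista(D, y)
objective = 0.5 * np.linalg.norm(y - D @ x) ** 2 + 0.1 * np.abs(x).sum()
```

Since the objective at `x_true` is 0.1 * 2 = 0.2, ISTA should end near or below that value.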

5.
In image processing, sparse-representation-based super-resolution hinges on two key components: the high- and low-resolution dictionary pair, and the mapping between their sparse codes. Because image content is richly varied, a single dictionary cannot represent all images well; on the mapping side, a strict equality constraint between sparse codes also limits reconstruction quality. Addressing both issues, this work performs super-resolution with multiple, more inclusive dictionaries and a fully coupled sparse relation under looser constraints. Based on the non-local self-similarity of images, repeated adaptive clustering is performed; the best clusters are selected and multiple dictionaries are obtained through a fully coupled sparse learning super-resolution algorithm; finally, the input low-resolution image is reconstructed class by class to obtain the high-resolution image. Experimental results show that on the images Leaves, Barbara, and Room, the proposed clustering algorithm improves PSNR over the original fully coupled sparse learning algorithm by 0.51 dB, 0.21 dB, and 0.15 dB, respectively.

6.
Among sparse signal representation methods, traditional transform-basis approximations cannot adaptively capture image texture, while sparse approximation over an overcomplete dictionary is computationally expensive. To address this, this paper proposes an image sparse representation method based on an optimized wavelet-domain sparse dictionary. The algorithm builds an overcomplete dictionary on top of the image's wavelet transform and, exploiting the internal and external texture similarity of wavelet transforms of images of the same scene, classifies the dictionary by grey relational degree, effectively improving the sparsity of the representation. Applying the new algorithm to sparse representation of image signals and to compressed-sensing-based image sampling and reconstruction experiments shows that it generally improves the PSNR and structural similarity of reconstructed images while effectively shortening reconstruction time.

7.
This paper proposes a remote-sensing image super-resolution method based on compressed sensing, structural self-similarity, and dictionary learning. The basic idea is to build a dictionary that can sparsely represent patches of the original high-resolution image. The additional information needed for super-resolution comes from the self-similar structures that are widespread in remote-sensing images, obtained through dictionary learning within the compressed sensing framework. Here, K-SVD is used to build the dictionary and OMP to obtain the sparse representation coefficients. The biggest difference from existing example-based super-resolution methods is that the proposed method uses only the low-resolution image and its interpolated version, with no external high-resolution images. In addition, to evaluate the results, a new index, SSSIM, is introduced to measure the degree of structural self-similarity in an image. Comparative experiments show that the method achieves better super-resolution reconstruction and higher efficiency, and that SSSIM correlates strongly with reconstruction quality.
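The OMP step used here for the sparse coefficients is short enough to sketch in full. A minimal numpy implementation on a synthetic dictionary (sizes and coefficients are illustrative, not from the paper):

```python
import numpy as np

def omp(D, y, sparsity):
    # Orthogonal Matching Pursuit: greedily pick the atom most correlated
    # with the residual, then re-fit all picked atoms by least squares.
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms

x_true = np.zeros(128)
x_true[[5, 22]] = [2.0, -1.5]        # illustrative 2-sparse signal
y = D @ x_true
x_hat = omp(D, y, sparsity=2)
```

For a well-conditioned random dictionary and a noiseless 2-sparse signal, OMP recovers the support and coefficients exactly.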

8.
You Li. Infrared and Laser Engineering (红外与激光工程), 2022, 51(4): 20210282-1-20210282-6
This paper proposes an azimuth estimation method for synthetic aperture radar (SAR) image targets based on block sparse Bayesian learning (BSBL). SAR images are highly azimuth-sensitive, so an image at a given azimuth correlates strongly only with samples at similar azimuths. Following the basic idea of sparse representation, all training samples are first arranged by azimuth into a global dictionary. Under this arrangement, the linear representation coefficients of a test sample over the dictionary are block-sparse: the nonzero coefficients concentrate in a local region of the dictionary. The training samples covered by the recovered block therefore effectively reflect the azimuth of the test sample. The BSBL algorithm is used to solve for the sparse representation coefficients over the global dictionary, and the best local block is selected by the minimum-reconstruction-error criterion. The azimuth is then computed by linearly weighting the azimuths of all training samples in that block, yielding a more robust estimate. By fully accounting for the azimuth sensitivity of SAR images and pooling the information of the samples in a local interval, the method avoids the uncertainty of estimation from a single sample. To validate the method, azimuth estimation experiments were carried out on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, with comparisons against several classical methods; the results confirm the performance advantages of the proposed method.
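A toy numpy sketch of the block-selection and linear-weighting steps (an ordinary per-block least squares stands in for BSBL, and the dictionary, block size, and azimuths are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy global dictionary: 12 training samples sorted by azimuth, 4 blocks of 3.
azimuths = np.linspace(0.0, 110.0, 12)        # degrees, sorted (10-deg steps)
atoms = rng.standard_normal((30, 12))
atoms /= np.linalg.norm(atoms, axis=0)

# The test sample lies in block 2 (columns 6..8): a mix of those atoms.
y = atoms[:, 6:9] @ np.array([0.5, 1.0, 0.4])

# Pick the block with the smallest reconstruction error.
block_size = 3
errors = []
for b in range(0, 12, block_size):
    block = atoms[:, b:b + block_size]
    coef, *_ = np.linalg.lstsq(block, y, rcond=None)
    errors.append(np.linalg.norm(y - block @ coef))
best = int(np.argmin(errors)) * block_size

# Linear weighting: combine the azimuths of the winning block, weighted by
# the magnitudes of its representation coefficients.
coef, *_ = np.linalg.lstsq(atoms[:, best:best + block_size], y, rcond=None)
w = np.abs(coef) / np.abs(coef).sum()
azimuth_est = float(w @ azimuths[best:best + block_size])
```

Since `y` was built from block 2, that block reconstructs it with zero residual, and the weighted estimate falls inside the block's azimuth interval (60 to 80 degrees).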

9.
In this paper, a coupled dictionary learning mechanism with a mapping function is proposed in the wavelet domain for the task of single-image super-resolution. Sparsity is used as the invariant feature for achieving super-resolution. Instead of a single dictionary, multiple compact dictionaries are proposed in the wavelet domain; such dictionaries exhibit the properties of the wavelet transform, namely compactness, directionality, and redundancy. Six pairs of dictionaries are designed using a coupled dictionary mechanism with a mapping function that strengthens the similarity between the sparse coefficients. The low-resolution image is treated as the approximation image of the first-level wavelet decomposition, and high resolution is achieved by estimating the wavelet sub-bands of this low-resolution image through dictionary learning and sparsity. The proposed algorithm outperforms well-known spatial-domain and wavelet-domain algorithms as evaluated on standard metrics such as the structural similarity index measure and peak signal-to-noise ratio.

10.
In this paper, we propose an efficient dictionary learning algorithm for sparse representation of given data and suggest a way to apply this algorithm to 3-D medical image denoising. Our learning approach is composed of two main parts: sparse coding and dictionary updating. For the sparse coding stage, an efficient algorithm named multiple clusters pursuit (MCP) is proposed. The MCP first applies a dictionary structuring strategy to cluster atoms with high coherence together, and then employs a multiple-selection strategy to select several competitive atoms at each iteration. These two strategies greatly reduce the computational complexity of the MCP and help it obtain better sparse solutions. For the dictionary updating stage, an alternating optimization that efficiently approximates the singular value decomposition is introduced. Furthermore, in the 3-D medical image denoising application, a joint 3-D operation is proposed so that the learning capabilities of the presented algorithm simultaneously capture the correlations within each slice and across nearby slices, thereby obtaining better denoising results. Experiments on both synthetically generated data and real 3-D medical images demonstrate that the proposed approach has superior performance compared to several well-known methods.

11.
Dai Xiaoting, Gong Jing, Nie Shengdong. Acta Electronica Sinica (电子学报), 2018, 46(6): 1445-1453
Noise and streak artifacts are prominent in lung LDCT (low-dose computed tomography) images, especially in the top and bottom slices. To improve the quality of whole-lung LDCT images, this paper proposes a denoising method based on a structured joint dictionary. First, exploiting the intensity characteristics of lung CT images, HRCT (high-resolution computed tomography) patches are classified and trained to obtain four dictionaries; computing the information entropy and HOG (histogram of oriented gradients) features of the atoms yields the corresponding structured dictionaries, from which a structured joint dictionary is constructed. Then, after non-local means filtering of the lung LDCT image, the structured joint dictionary is used as a global dictionary for sparse representation and reconstruction, producing the denoised image. To validate the algorithm, experiments were run on both simulated and clinical data against three algorithms: K-SVD, AS-LNLM, and BF-MCA. The comparison shows that the proposed algorithm removes noise and streak artifacts while preserving detail, with particularly clear advantages on the top and bottom slices of a series, and thus markedly improves the quality of whole-lung LDCT images.

12.
A novel stochastic approach based on Markov-chain Monte Carlo sampling is investigated for the purpose of image denoising. The additive image denoising problem is formulated as a Bayesian least squares problem, where the goal is to estimate the denoised image given the noisy image as the measurement and an estimated posterior. The posterior is estimated using a nonparametric importance-weighted Markov-chain Monte Carlo sampling approach based on an adaptive Geman-McClure objective function. By learning the posterior nonparametrically, the proposed Markov-chain Monte Carlo denoising (MCMCD) approach adapts flexibly to the underlying image and noise statistics. Furthermore, the computational complexity of MCMCD is relatively low compared to other published methods with similar denoising performance. The effectiveness of MCMCD was investigated using additive Gaussian noise, and the method was found to achieve state-of-the-art denoising performance in terms of both peak signal-to-noise ratio (PSNR) and mean structural similarity (SSIM) when compared to other published methods.
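The Bayesian least squares idea, estimating the posterior mean from importance-weighted samples, can be shown on a single scalar pixel. In this sketch the prior is a standard normal chosen purely for illustration (the paper instead learns the posterior nonparametrically with an adaptive Geman-McClure objective):

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar toy problem: estimate pixel value x from one noisy measurement y.
x_true = 1.2
sigma = 0.3
y = x_true + sigma * rng.standard_normal()

# Importance sampling: draw from a broad uniform proposal, re-weight each
# sample by likelihood * prior (prior = standard normal, an assumption here).
samples = rng.uniform(-3.0, 3.0, size=20000)
likelihood = np.exp(-0.5 * ((y - samples) / sigma) ** 2)
prior = np.exp(-0.5 * samples ** 2)
w = likelihood * prior
w /= w.sum()

# Bayesian least squares estimate = posterior mean.
x_hat = float(w @ samples)
```

For this conjugate toy case the posterior mean has the closed form y / (1 + sigma^2), so the sampled estimate can be checked directly.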

13.
This paper presents a novel method for Bayesian denoising of magnetic resonance (MR) images that bootstraps itself by inferring the prior, i.e., the uncorrupted-image statistics, from the corrupted input data and knowledge of the Rician noise model. The proposed method relies on principles from empirical Bayes (EB) estimation. It models the prior in a nonparametric Markov random field (MRF) framework and estimates this prior by optimizing an information-theoretic metric using the expectation-maximization algorithm. The generality and power of nonparametric modeling, coupled with the EB approach to prior estimation, avoids imposing ill-fitting prior models for denoising. The results demonstrate that, unlike typical denoising methods, the proposed method preserves most of the important features in brain MR images. Furthermore, this paper presents a novel Bayesian-inference algorithm on MRFs, namely iterated conditional entropy reduction (ICER), and extends the proposed method to denoising diffusion-weighted MR images. Validation results and quantitative comparisons with the state of the art in MR-image denoising clearly demonstrate the advantages of the proposed method.

14.
To address the degradation of target parameter estimation caused by dictionary mismatch in sparse-recovery-based space-time adaptive processing (STAP), this paper proposes a high-precision target parameter estimation method based on sparse Bayesian dictionary learning. The method first compensates the data of multiple array elements with the target azimuth information to construct joint sparse-recovery data, then separates the acceleration and velocity terms of each compensated element's data via a bilinear transform. Finally, a Taylor-series dynamic dictionary of the velocity and acceleration parameters is constructed, and high-precision Bayesian dictionary learning sparse recovery is performed on the maneuvering-target parameters. Simulations show that the method effectively improves parameter estimation accuracy under dictionary mismatch, outperforming existing sparse-recovery space-time parameter estimation methods that use fixed discretized dictionaries.

15.
Yu Jialin, Sun Jifeng, Li Wanyi. Acta Electronica Sinica (电子学报), 2016, 44(8): 1899-1908
To reconstruct 3-D human poses from multi-view images accurately and effectively, this paper proposes a pose estimation algorithm based on multi-kernel sparse coding. First, to resolve the ambiguity of pose estimation across consecutive frames, an HA-SIFT descriptor is designed for multi-view images that jointly encodes local body topology, relative limb positions, and appearance information. Then, within a multiple kernel learning framework, an objective function is built that accounts for both the intrinsic manifold structure of the feature space and the geometric information of the pose space, and is optimized in Hilbert space to update the sparse codes, the overcomplete dictionary, and the kernel weights. Finally, the 3-D pose corresponding to an unknown input is estimated as a linear combination of pose dictionary atoms. Experimental results show that the proposed method achieves higher estimation accuracy than kernel sparse coding, Laplacian sparse coding, and Bayesian sparse coding.

16.
Wavelet-based image fusion techniques have been highly successful in combining important features such as edges and textures of source images. In this work, a new discrete wavelet transform (DWT)-based fusion algorithm is proposed using a locally adaptive multivariate statistical model for the wavelet coefficients of the source images as well as those of the fused image. The multivariate model is based on the fact that the DWT coefficients of the source images are correlated not only with each other but also with the fused image. Using this model as a joint prior, an estimate of the fused coefficients is derived via the Bayesian maximum a posteriori estimation technique. Experimental results show that the proposed fusion method outperforms the other methods in terms of commonly used metrics such as structural similarity, peak signal-to-noise ratio, and cross-entropy.
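A much-simplified wavelet fusion sketch: one level of an orthonormal Haar transform, per-coefficient max-absolute selection instead of the paper's multivariate MAP estimator, and two synthetic source images:

```python
import numpy as np

def haar2d(a):
    # One level of an orthonormal 2-D Haar transform (rows, then columns).
    lo, hi = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
    rows = np.vstack([lo, hi])
    lo2 = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)
    hi2 = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)
    return np.hstack([lo2, hi2])

def ihaar2d(c):
    # Exact inverse: undo the column step, then the row step.
    n = c.shape[1] // 2
    lo2, hi2 = c[:, :n], c[:, n:]
    rows = np.empty_like(c)
    rows[:, 0::2] = (lo2 + hi2) / np.sqrt(2)
    rows[:, 1::2] = (lo2 - hi2) / np.sqrt(2)
    m = c.shape[0] // 2
    out = np.empty_like(c)
    out[0::2] = (rows[:m] + rows[m:]) / np.sqrt(2)
    out[1::2] = (rows[:m] - rows[m:]) / np.sqrt(2)
    return out

# Two synthetic sources, each carrying one salient feature.
img1 = np.zeros((8, 8)); img1[:4, :4] = 1.0
img2 = np.zeros((8, 8)); img2[4:, 4:] = 1.0

# Fuse by keeping, per coefficient, whichever source has larger magnitude.
c1, c2 = haar2d(img1), haar2d(img2)
fused = ihaar2d(np.where(np.abs(c1) >= np.abs(c2), c1, c2))
```

Because the two features occupy disjoint coefficients here, the fused result contains both bright blocks exactly; real fusion rules (like the MAP model above) handle the overlapping case.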

17.
A novel predual dictionary learning algorithm
Dictionary learning has been a hot topic that has fascinated many researchers in recent years. Most existing methods share a common trait: the sequence of learned dictionaries becomes progressively simpler as some cost function is minimized. This paper presents a novel predual dictionary learning (PDL) algorithm that updates the dictionary via a simple gradient descent step after each inner minimization step of the Predual Proximal Point Algorithm (PPPA), recently presented by Malgouyres and Zeng (2009) [F. Malgouyres, T. Zeng, A predual proximal point algorithm solving a non negative basis pursuit denoising model, Int. J. Comput. Vision 83 (3) (2009) 294-311]. We prove that the dictionary update strategy of the proposed method differs from current ones in that the learned dictionaries become progressively more complex. Experimental results on both synthetic data and real images consistently demonstrate that the proposed approach efficiently removes noise while maintaining high image quality, and that it presents advantages over the classical dictionary learning algorithms MOD and K-SVD.
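For fixed sparse codes X, a gradient-descent dictionary update like the one used inside PDL reduces to a step on the quadratic fit term. A numpy sketch with synthetic data (the PPPA inner minimization itself is omitted; sizes, step, and sparsity are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Data Y, dictionary D with unit-norm atoms, fixed sparse codes X.
n, k, m = 10, 15, 50
Y = rng.standard_normal((n, m))
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((k, m)) * (rng.random((k, m)) < 0.2)  # sparse codes

def fit(D):
    # Quadratic fit term 0.5 * ||Y - D X||_F^2.
    return 0.5 * np.linalg.norm(Y - D @ X) ** 2

grad = -(Y - D @ X) @ X.T            # gradient of the fit term w.r.t. D
step = 1e-3                          # small enough for guaranteed descent
D_step = D - step * grad
D_new = D_step / np.linalg.norm(D_step, axis=0)   # re-normalize the atoms
```

A single small gradient step strictly decreases the fit term; the column re-normalization afterwards keeps the atoms on the unit sphere, a common convention in dictionary learning.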

18.
Images can be coded accurately using a sparse set of vectors from a learned overcomplete dictionary, with potential applications in image compression and feature selection for pattern recognition. We present a survey of algorithms that perform dictionary learning and sparse coding, and make three contributions. First, we compare our overcomplete dictionary learning algorithm (FOCUSS-CNDL) with overcomplete independent component analysis (ICA). Second, noting that once a dictionary has been learned in a given domain the problem becomes one of choosing the vectors to form an accurate, sparse representation, we compare a recently developed algorithm (sparse Bayesian learning with adjustable-variance Gaussians, SBL-AVG) to well-known subset selection methods: matching pursuit and FOCUSS. Third, noting that in some cases it may be necessary to find a non-negative sparse coding, we present a modified version of the FOCUSS algorithm that can find such non-negative codings. Efficient parallel implementations in VLSI could make these algorithms more practical for many applications.

19.
Gan Zongliang. Video Engineering (电视技术), 2012, 36(14): 19-23
After briefly reviewing super-resolution reconstruction based on sparse dictionary constraints, this paper proposes a low-complexity adaptive sparse-constraint image super-resolution algorithm based on K-means clustering. The proposed algorithm reduces computation in two ways: dictionaries are trained per class and image patches are reconstructed by class, shrinking the dictionary used for each patch; and the features of each patch are analyzed to adaptively select a reconstruction method. Experimental results show that the proposed fast method reduces reconstruction time considerably while matching the reconstruction quality of the original algorithm.
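The K-means step that assigns patches to class-specific dictionaries can be sketched with plain Lloyd iterations on two synthetic patch populations (the farthest-point initialization is a choice made here for determinism, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

def lloyd(data, centers, iters=10):
    # Plain Lloyd iterations; rows of `data` are vectorized patches.
    for _ in range(iters):
        d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        centers = np.array([data[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(len(centers))])
    return labels, centers

# Two obvious patch populations: near-flat patches and bright textured ones.
flat = 0.05 * rng.standard_normal((40, 16))
texture = 2.0 + rng.standard_normal((40, 16))
patches = np.vstack([flat, texture])

# Deterministic farthest-point init: the first patch, then the patch
# farthest from it.
c0 = patches[0]
c1 = patches[((patches - c0) ** 2).sum(1).argmax()]
labels, centers = lloyd(patches, np.vstack([c0, c1]))
```

Each cluster's patches would then train their own dictionary, so reconstruction of a patch only searches the small dictionary of its class.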

20.
Inverse halftoning is a challenging problem in image processing; traditionally, the operation is known to introduce visible distortions into reconstructed images. This paper presents a learning-based method that performs a quality enhancement procedure on images reconstructed using inverse halftoning algorithms. The proposed method is implemented using a coupled dictionary learning algorithm based on a patchwise sparse representation. Specifically, training is performed on image pairs composed of images restored by an inverse halftoning algorithm and their corresponding originals. The learning model, based on a sparse representation of these images, is used to construct two dictionaries: one representing the original images and the other representing the distorted images. Using these dictionaries, the method generates images with fewer distortions than regular inverse halftoning algorithms produce. Experimental results show that images generated by the proposed method have high quality, with fewer chromatic aberrations, less blur, and less white-noise distortion.
