Similar Documents
20 similar documents retrieved (search time: 0 ms)
1.
Images can be coded accurately using a sparse set of vectors from a learned overcomplete dictionary, with potential applications in image compression and feature selection for pattern recognition. We present a survey of algorithms that perform dictionary learning and sparse coding and make three contributions. First, we compare our overcomplete dictionary learning algorithm (FOCUSS-CNDL) with overcomplete independent component analysis (ICA). Second, noting that once a dictionary has been learned in a given domain the problem becomes one of choosing the vectors to form an accurate, sparse representation, we compare a recently developed algorithm (sparse Bayesian learning with adjustable variance Gaussians, SBL-AVG) to well known methods of subset selection: matching pursuit and FOCUSS. Third, noting that in some cases it may be necessary to find a non-negative sparse coding, we present a modified version of the FOCUSS algorithm that can find such non-negative codings. Efficient parallel implementations in VLSI could make these algorithms more practical for many applications.
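The subset-selection problem this abstract describes (choosing vectors from a learned overcomplete dictionary to form a sparse representation) can be illustrated with plain matching pursuit. This is a generic sketch, not the paper's FOCUSS or SBL-AVG code, and the tiny dictionary and signal are invented for illustration:

```python
import numpy as np

def matching_pursuit(D, x, n_iter=10, tol=1e-6):
    """Greedy sparse coding: repeatedly pick the atom most correlated
    with the residual and add its contribution. D has unit-norm columns."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        correlations = D.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual = residual - correlations[k] * D[:, k]
        if np.linalg.norm(residual) < tol:
            break
    return coeffs

# Overcomplete dictionary: 2-D signals, 4 unit-norm atoms.
D = np.array([[1.0, 0.0, 0.6, -0.8],
              [0.0, 1.0, 0.8,  0.6]])
x = 3.0 * D[:, 2]          # a signal built from a single atom
c = matching_pursuit(D, x)
```

Because the signal is exactly one atom, the greedy step recovers it in a single iteration; real image patches need several atoms and a stopping rule on the residual.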

2.
In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
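The K-SVD dictionary update outlined in the abstract (a rank-1 SVD refit of each atom against the error of the signals that use it) can be sketched for a single atom. This is a minimal illustrative step, not the authors' implementation, and the toy data are made up:

```python
import numpy as np

def ksvd_atom_update(D, X, A, k):
    """One K-SVD update for atom k: restrict to the signals that use
    atom k, form the representation error without atom k's contribution,
    and replace atom k and its coefficient row with the rank-1 SVD
    approximation of that error."""
    users = np.nonzero(A[k, :])[0]
    if users.size == 0:
        return D, A
    E = X[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                 # new unit-norm atom
    A[k, users] = s[0] * Vt[0, :]     # jointly updated coefficients
    return D, A

# Toy data: identity dictionary codes the training signals exactly.
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
D = np.eye(2)
A = X.copy()
D, A = ksvd_atom_update(D, X, A, 0)
```

A full K-SVD iteration would sweep over all atoms and alternate with a pursuit-based sparse-coding stage.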

3.
The pioneering work of Shannon provides fundamental bounds on the rate limitations of communicating information reliably over noisy channels (the channel coding problem), as well as the compressibility of data subject to distortion constraints (the lossy source coding problem). However, Shannon's theory is nonconstructive in that it only establishes the existence of coding schemes that can achieve the fundamental bounds but provides neither concrete codes nor computationally efficient algorithms. In the case of channel coding, the past two decades have witnessed dramatic advances in practical constructions and algorithms, including the invention of turbo codes and the surge of interest in low-density parity check (LDPC) codes. Both these classes of codes are based on sparse graphs and yield excellent error-correction performance when decoded using computationally efficient methods such as the message-passing sum-product algorithm. Moreover, their performance limits are well characterized, at least in the asymptotic limit of large block lengths, via the density evolution method.
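The message-passing decoding mentioned here can be illustrated with a much cruder relative: hard-decision bit flipping on a small parity-check matrix. The (7,4) Hamming matrix below is a stand-in for a real sparse LDPC matrix, and the decoder is a toy, not the sum-product algorithm:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code -- far smaller and
# denser than a practical LDPC matrix, but enough to show the idea.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(H, y, max_iter=10):
    """Hard-decision bit flipping: flip the bit involved in the most
    unsatisfied parity checks until the syndrome vanishes."""
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y
        votes = H.T @ syndrome        # unsatisfied checks per bit
        y[np.argmax(votes)] ^= 1
    return y

y = np.zeros(7, dtype=int)
y[3] = 1                              # all-zero codeword, one bit error
decoded = bit_flip_decode(H, y)
```

Sum-product decoding replaces these hard votes with probabilistic messages on the sparse graph, which is what gives LDPC codes their near-capacity performance.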

4.
李波  李艳  李昕 《光电技术应用》2009,24(3):56-58,62
Based on the image-restoration model for sparse-aperture optical imaging systems, this paper analyzes the applicability conditions of the Wiener filter and the constrained least-squares filter for image restoration. For a sparse-aperture optical system subject to noise, the theoretical derivation shows that Wiener filtering is optimal. Experimental comparison shows that when the noise power spectrum is unknown, the constrained least-squares filter restores images better than the Wiener filter.
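For reference, the frequency-domain Wiener restoration filter that this abstract analyzes has a classical closed form, G = H* / (|H|^2 + NSR). This is a textbook sketch, not the paper's code; the impulse image and blur kernel are invented, and the kernel is chosen invertible so the noise-free case restores exactly:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr):
    """Frequency-domain Wiener restoration: G = conj(H) / (|H|^2 + NSR).
    With nsr = 0 this degenerates to inverse filtering."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Noise-free check: circular blur with an invertible kernel is undone
# exactly when nsr = 0.
x = np.zeros((8, 8))
x[3, 4] = 1.0
psf = np.array([[0.8, 0.2]])          # |H| >= 0.6 everywhere
H = np.fft.fft2(psf, s=x.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * H))
restored = wiener_deconvolve(blurred, psf, 0.0)
```

In practice nsr is the (estimated) noise-to-signal power ratio; the paper's point is precisely what to do when that spectrum is unknown.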

5.
We propose an image deconvolution algorithm when the data is contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are as follows. First, we handle the Poisson noise properly by using the Anscombe variance stabilizing transform leading to a nonlinear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties, and a nonsmooth sparsity-promoting penalty over the image representation coefficients (e.g., ℓ1-norm). An additional term is also included in the functional to ensure positivity of the restored image. Third, a fast iterative forward-backward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions of the solution, and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experimental results are carried out to show the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that using sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise such as astronomy and microscopy.
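The Anscombe variance-stabilizing transform this abstract relies on has a simple closed form. Below is a generic sketch; note that the exact unbiased inverse used in careful implementations differs slightly from the plain algebraic inverse shown here:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson counts x to values whose noise
    is approximately Gaussian with unit variance."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Plain algebraic inverse (a sketch; the exact unbiased inverse
    adds small correction terms)."""
    return (np.asarray(y) / 2.0) ** 2 - 3.0 / 8.0

# Check the stabilization empirically on simulated Poisson counts.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=40.0, size=100_000)
stabilized = anscombe(counts)
```

After stabilization the variance of `stabilized` is close to 1 regardless of the Poisson intensity, which is what lets the paper treat the degradation as additive Gaussian.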

6.
To address the difficulty that conventional sparse autofocus algorithms for high-resolution synthetic aperture radar (SAR) imaging cannot effectively balance sparsity and focus quality, this paper proposes a multi-task collaborative learning sparse autofocus (MtL-SA) algorithm based on the alternating direction method of multipliers (ADMM). The algorithm introduces an entropy norm to characterize the focus quality of the SAR image and, within the ADMM optimization framework, derives an analytical solution for the focus feature using a proximal algorithm. To handle the non-convexity of the original entropy-norm regularized objective, the cost function is designed so that the entropy-norm proximal operator admits a closed-form analytical solution. Meanwhile, the ℓ1 norm characterizes the sparsity of the imaging result, and a complex soft-thresholding proximal operator is constructed for complex-valued SAR imaging data. The proposed MtL-SA algorithm solves analytically for both the sparsity and focus features of the backscattered field of the target scene, improving the reliability and robustness of autofocus. The two feature-enhancement steps regularize each other, effectively reducing error propagation during iteration and preserving the accuracy of the joint feature enhancement. Experiments on simulated and measured airborne SAR imaging data verify the effectiveness and practicality of the algorithm, and phase-transition analysis quantitatively and qualitatively demonstrates its advantage over conventional algorithms.
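The complex soft-thresholding proximal operator mentioned in this abstract has a standard closed form: shrink the magnitude, keep the phase. This is a generic sketch of that operator, not the authors' MtL-SA code:

```python
import numpy as np

def complex_soft_threshold(z, lam):
    """Proximal operator of lam * ||z||_1 for complex data: shrink each
    entry's magnitude by lam (to zero if smaller), preserve its phase."""
    mag = np.abs(z)
    scale = np.maximum(mag - lam, 0.0) / np.where(mag > 0, mag, 1.0)
    return scale * z

z = np.array([3 + 4j, 0.2 + 0j, 0j])
shrunk = complex_soft_threshold(z, 1.0)
```

The first entry has magnitude 5, so it shrinks to magnitude 4 along the same phase; entries below the threshold are set exactly to zero, which is what promotes sparsity inside the ADMM iterations.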

7.
Computed tomography (CT) is a common method of medical examination, but excess radiation during scanning may cause secondary harm to the patient. This paper therefore proposes a sparse Bayesian learning (SBL) method for reconstructing lung computed tomography (CT) images. A Gaussian random distribution matrix is first applied to measure the lung image, and a wavelet-transform-based sparse…
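The measurement model sketched in this abstract, a Gaussian random matrix applied to a sparse signal, looks like the following. The recovery step here uses orthogonal matching pursuit purely as a simple stand-in for the SBL solver, and all dimensions and values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                         # signal length, measurements (m << n)
x = np.zeros(n)
x[[10, 70, 200]] = [1.5, -2.0, 0.8]    # sparse stand-in for the image

# Gaussian random measurement matrix.
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
y = Phi @ x                            # compressed measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy support selection with a
    least-squares refit at each step (stand-in for the SBL solver)."""
    support, residual = [], y.copy()
    coeffs = np.zeros(Phi.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ sol
    coeffs[support] = sol
    return coeffs

x_hat = omp(Phi, y, 3)
```

With 64 Gaussian measurements of a 3-sparse signal, greedy recovery succeeds with overwhelming probability; SBL replaces the greedy step with hierarchical-prior inference.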

8.
Off-grid direction-of-arrival (DOA) estimation addresses the mismatch between actual DOAs and the assumed grid points. For the DOAs of spatially adjacent signals, a coarse grid degrades accuracy and resolution, while a dense grid improves accuracy at a sharply increased computational cost. To address this, the paper proposes a sparse Bayesian learning (SBL) based DOA estimation algorithm for spatially adjacent signals, consisting of three main steps. First, by maximizing the marginal likelihood of the array output, a new fixed-point iteration for the signals under a Laplace prior is derived to pre-estimate the hyperparameters, converging faster than other classical SBL algorithms. Second, a new grid-interpolation method refines the grid point set, and the noise variance and signal powers are re-estimated to resolve the DOAs of spatially adjacent signals. Finally, a maximization formula of the likelihood function with respect to angle is derived to improve the off-grid DOA search. Simulations show that the algorithm achieves higher accuracy and resolution for the DOAs of spatially adjacent signals than other classical SBL-type algorithms, with improved computational efficiency.
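The on-grid dictionary that off-grid methods start from can be sketched for a uniform linear array. The array size and grid spacing below are invented for illustration, and the first-order off-grid correction is only indicated in a comment:

```python
import numpy as np

def steering_vector(theta_deg, n_sensors, d_over_lambda=0.5):
    """ULA steering vector for arrival angle theta (degrees); the grid
    of such vectors forms the dictionary in on-grid SBL DOA models."""
    k = np.arange(n_sensors)
    phase = 2j * np.pi * d_over_lambda * k * np.sin(np.radians(theta_deg))
    return np.exp(phase)

grid = np.arange(-90, 91, 1.0)                 # 1-degree candidate grid
A = np.column_stack([steering_vector(t, 8) for t in grid])
# Off-grid methods augment this with a first-order correction,
# A + B @ diag(beta), so true DOAs need not coincide with grid points.
```

A denser `grid` makes `A` taller-to-wider and the inference costlier, which is exactly the accuracy/complexity trade-off the abstract targets.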

9.
Application of Unequal Error Protection Turbo Codes in Wireless Image Transmission
Exploiting the unequal importance of the information bits in the source-coded stream, this paper improves the traditional Turbo encoder/decoder structure with variable-rate channel coding, diversity transmission, diversity reception, and non-uniform iterative decoding, and presents experimental results. Comparison shows that the improved algorithm reduces delay and improves the error-correction performance of Turbo decoding, making it especially suitable for non-real-time multimedia wireless transmission services.

10.
In this paper, we propose a combinational algorithm for the removal of zero-mean white and homogeneous Gaussian additive noise from a given image. Image denoising is formulated as an optimization problem. This is iteratively solved by a weighted basis pursuit (BP) in the closed affine subspace. The patches extracted from a given noisy image can be sparsely and approximately represented by adaptively choosing a few nearest neighbors. The approximate reconstruction of these denoised patches is performed by the sparse representation on two dictionaries, which are built by a discrete cosine transform and the noisy patches, respectively. Experiments show that the proposed algorithm outperforms both BP denoising and Sparse K-SVD. This is because the underlying structure of natural images is better captured and preserved. The results are comparable to those of the block-matching 3D filtering algorithm.

11.
In this letter, a new adaptive beamforming assisted receiver based on sparse Bayesian learning is proposed. We consider a general probabilistic Bayesian learning framework for obtaining sparse solutions to adaptive beamforming assisted receivers, improving on receivers based on the minimum mean squared error (MMSE) scheme. Simulation experiments show that the sparse Bayesian beamforming receiver achieves an outstanding bit error rate (BER) performance compared to MMSE beamforming receivers.
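The MMSE baseline this letter compares against reduces to the Wiener solution w = R^{-1} p, where R is the array covariance and p the cross-correlation with the desired symbol. A minimal sketch with an invented 4-element array:

```python
import numpy as np

def mmse_beamformer(R, p):
    """MMSE (Wiener) beamforming weights: solve R w = p."""
    return np.linalg.solve(R, p)

# Toy scenario: one desired signal with steering vector a and unit
# noise power, so R = a a^H + I and p = a.
a = np.array([1, 1j, -1, -1j])
R = np.outer(a, a.conj()) + np.eye(4)
w = mmse_beamformer(R, a)
```

By the matrix-inversion lemma the solution here is w = a / (1 + ||a||^2) = a / 5; the sparse Bayesian receiver instead places sparsity-inducing priors on the weights rather than solving this normal equation directly.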

12.
To address the high computational complexity of existing nested sparse circular-array DOA estimation methods and the difficulty of quickly selecting their hyperparameters, an off-grid sparse Bayesian learning (OGSBL) method based on an improved nested sparse circular array is proposed. The method first vectorizes the covariance matrix of the signals received by the improved nested sparse circular array, then constructs an extended observation matrix, and combines the off-grid model with the sparse Bayesian learning algorithm to achieve underdetermined DOA estimation. Simulation results show that the proposed algorithm reduces…

13.
Most existing blind parameter-estimation methods for frequency-hopping signals cannot handle multiple simultaneous signals and must accumulate a certain number of samples before the required information can be extracted. To track the frequencies of frequency-hopping signals stably and in real time, this paper proposes a single/multi-channel frequency estimation and hop-instant detection method based on Bayesian sparse learning, enabling real-time tracking of the frequencies of multiple frequency-hopping signals. A sparse representation model of multiple frequency-hopping signals is first established; a multiple-measurement Bayesian sparse learning algorithm and a real-time hop-instant detection method are then presented; finally, simulation results verify the effectiveness of the method.
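A crude non-sparse baseline for the hop-tracking task, a per-window FFT-peak frequency estimate, can be sketched as follows. This is not the paper's Bayesian method; the signal, sampling rate, and hop pattern are invented:

```python
import numpy as np

def track_frequency(signal, fs, win=256):
    """Naive per-window frequency tracker: take the FFT-magnitude peak
    in each non-overlapping window as the frequency estimate."""
    hops = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        spec = np.abs(np.fft.rfft(seg))
        hops.append(np.argmax(spec) * fs / win)
    return np.array(hops)

# Synthetic two-hop signal: 100 Hz then 200 Hz, aligned to windows.
fs = 1024.0
t = np.arange(1024) / fs
sig = np.concatenate([np.sin(2 * np.pi * 100 * t),
                      np.sin(2 * np.pi * 200 * t)])
freqs = track_frequency(sig, fs, 256)
```

Such a tracker lags by a full window and blurs hops that fall mid-window; the sparse Bayesian formulation is what buys sample-level hop-instant detection.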

14.
Speech-based diagnosis of Parkinson's disease (PD) suffers from the small-sample problem, and transfer learning from related speech datasets tends to enlarge the distribution gap between the training and test sets, hurting classification accuracy. To resolve this conflict, this paper proposes a two-step sparse transfer learning algorithm. The first step is a fast convolutional sparse coding algorithm with simultaneous selection of speech-segment features, constructing convolutional sparse coding operators to…

15.
The dyadic wavelet transform is an effective tool for processing piecewise smooth signals; however, its poor frequency resolution (its low Q-factor) limits its effectiveness for processing oscillatory signals such as speech, EEG, and vibration measurements. This paper develops a more flexible family of wavelet transforms for which the frequency resolution can be varied. The new wavelet transform can attain higher Q-factors (desirable for processing oscillatory signals) or the same low Q-factor of the dyadic wavelet transform. The new wavelet transform is modestly overcomplete and based on rational dilations. Like the dyadic wavelet transform, it is an easily invertible 'constant-Q' discrete transform implemented using iterated filter banks and can likewise be associated with a wavelet frame for L2(R). The wavelet can be made to resemble a Gabor function and can hence have good concentration in the time-frequency plane. The construction of the new wavelet transform depends on the judicious use of both the transform's redundancy and the flexibility allowed by frequency-domain filter design.
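The Q-factor notion driving this design can be illustrated with a simplified band-edge model (an assumption made here for illustration, not the paper's filter bank): each dyadic band spans an octave [f, 2f], while a rational dilation such as 3/2 narrows the bands and raises Q:

```python
import numpy as np

def q_factor(f_center, bandwidth):
    """Q-factor of a band-pass channel: center frequency / bandwidth."""
    return f_center / bandwidth

# Dyadic bands: each octave [f, 2f] has bandwidth f -> constant low Q.
f_lo = np.array([1.0, 2.0, 4.0, 8.0])
f_hi = 2.0 * f_lo
centers = np.sqrt(f_lo * f_hi)            # geometric band centers
q_dyadic = q_factor(centers, f_hi - f_lo)

# Rational dilation 3/2: bands [f, 1.5 f] are narrower, so Q is higher.
f_hi_rat = 1.5 * f_lo
q_rational = q_factor(np.sqrt(f_lo * f_hi_rat), f_hi_rat - f_lo)
```

Under this model the dyadic Q is the constant sqrt(2), while the 3/2-dilation Q is sqrt(1.5)/0.5, roughly 2.45: finer dilations trade redundancy for frequency resolution, which is the transform's central knob.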

16.
Overcomplete Discrete Wavelet Transforms With Rational Dilation Factors
This paper develops an overcomplete discrete wavelet transform (DWT) based on rational dilation factors for discrete-time signals. The proposed overcomplete rational DWT is implemented using self-inverting FIR filter banks, is approximately shift-invariant, and can provide a dense sampling of the time-frequency plane. A straightforward algorithm is described for the construction of minimal-length perfect reconstruction filters with a specified number of vanishing moments; whereas, in the nonredundant rational case, no such algorithm is available. The algorithm is based on matrix spectral factorization. The analysis/synthesis functions (discrete-time wavelets) can be very smooth and can be designed to closely approximate the derivatives of the Gaussian function.

17.
Robust Estimation of Target Aspect Angle in SAR Images Based on a Sparse Prior
Robust, high-precision estimation of the target aspect angle can effectively improve the computational efficiency and recognition performance of SAR ATR. The radar-near dominant boundary of a target in a SAR image contains relatively accurate aspect-angle information and can be used for aspect-angle estimation. Owing to the electromagnetic scattering characteristics of the target and the speckle noise in SAR images, the extracted radar-near dominant boundary is quite irregular and contains outliers. Exploiting the sparse distribution of these outliers, this paper proposes a robust aspect-angle estimation method based on the maximum a posteriori principle. The method effectively detects and removes the outliers in the dominant boundary, improving the accuracy and robustness of aspect-angle estimation. To resolve the vertical/horizontal aspect ambiguity that arises when only the range-dominant boundary is used, a new disambiguation method based on the length-to-width ratio of the target region in the segmented image is proposed. Experiments on measured MSTAR data show that the proposed algorithm achieves high accuracy and robustness.

18.
An Improved Sparse Distributed Memory Model and an Analysis of Its Learning Capability
彭宏京  陈松灿 《电子学报》2002,30(5):774-776
Kanerva's sparse distributed memory (SDM) model uses an outer-product rule for reading and writing, which limits its applications. This paper improves the model by changing the original read/write rules while retaining its sparse distributed storage, yielding a new model similar to the cerebellar model articulation controller (CMAC) but free of the block effect and requiring no hashing. Theoretical analysis and examples demonstrate the soundness and effectiveness of the improved model.
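The sparse distributed storage idea underlying both Kanerva's SDM and the improved model can be sketched in its classic counter-based form. This is the original SDM scheme, not the paper's modified read/write rule, and all sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
N_LOC, DIM, RADIUS = 500, 64, 26   # hard locations, address bits, Hamming radius

addresses = rng.integers(0, 2, size=(N_LOC, DIM))   # fixed random locations
counters = np.zeros((N_LOC, DIM))

def active_set(addr):
    """Hard locations within the Hamming radius of the query address."""
    return np.sum(addresses != addr, axis=1) <= RADIUS

def write(addr, data):
    """Distributed write: every active location accumulates +/-1 per bit."""
    counters[active_set(addr)] += 2 * data - 1

def read(addr):
    """Distributed read: sum active counters and threshold at zero."""
    return (counters[active_set(addr)].sum(axis=0) > 0).astype(int)

addr = rng.integers(0, 2, DIM)
data = rng.integers(0, 2, DIM)
write(addr, data)
```

A single stored pattern reads back exactly because all active counters agree in sign; the improved model in the paper replaces this outer-product-style counter update with a different rule to avoid CMAC's block effect.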

19.
唐江波  郭威 《电声技术》2011,35(4):47-50
Using the discrete wavelet transform, this paper proposes an overcomplete independent component analysis method that combines the wavelet transform with independent component analysis. Compared with existing overcomplete ICA algorithms, the proposed algorithm exploits global observation information, and the effective length of each experimental sub-process is only half of the original. Experiments show that the algorithm can effectively separate mixed speech in the overcomplete case.

20.
In this correspondence, the combinatorial properties of traceability codes constructed from error-correcting codes are studied. Necessary and sufficient conditions for traceability codes constructed from maximum-distance separable (MDS) codes are provided. The known sufficient conditions for a traceability code are proven to be necessary as well for linear MDS codes.
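The MDS property invoked here, minimum distance meeting the Singleton bound d = n - k + 1, can be checked by brute force on a tiny Reed-Solomon code (an illustrative construction over GF(7), not the codes studied in the correspondence):

```python
import itertools

P = 7                          # work over the prime field GF(7)
points = [1, 2, 3, 4, 5, 6]    # distinct evaluation points
n, k = len(points), 2

def codeword(coeffs):
    """Evaluate the degree-<k polynomial at every point: a Reed-Solomon
    codeword, the classic MDS construction."""
    return tuple(sum(c * x**i for i, c in enumerate(coeffs)) % P
                 for x in points)

codewords = {codeword(c) for c in itertools.product(range(P), repeat=k)}
weights = {sum(v != 0 for v in w) for w in codewords if any(w)}
d_min = min(weights)
# MDS: the minimum distance meets the Singleton bound, d = n - k + 1.
```

A nonzero polynomial of degree below k has fewer than k roots, so every nonzero codeword has weight at least n - k + 1; the brute-force search confirms d_min = 5 for this n = 6, k = 2 code.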

