20 similar documents found (search time: 0 ms)
1.
Images can be coded accurately using a sparse set of vectors from a learned overcomplete dictionary, with potential applications
in image compression and feature selection for pattern recognition. We present a survey of algorithms that perform dictionary
learning and sparse coding and make three contributions. First, we compare our overcomplete dictionary learning algorithm
(FOCUSS-CNDL) with overcomplete independent component analysis (ICA). Second, noting that once a dictionary has been learned
in a given domain the problem becomes one of choosing the vectors to form an accurate, sparse representation, we compare a
recently developed algorithm (sparse Bayesian learning with adjustable variance Gaussians, SBL-AVG) to well known methods
of subset selection: matching pursuit and FOCUSS. Third, noting that in some cases it may be necessary to find a non-negative
sparse coding, we present a modified version of the FOCUSS algorithm that can find such non-negative codings. Efficient parallel
implementations in VLSI could make these algorithms more practical for many applications.
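As an illustration (not from the survey itself), the subset-selection step it compares can be sketched with a minimal matching pursuit in NumPy; the dictionary `D`, signal `x`, and iteration count are hypothetical stand-ins:

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10):
    """Greedy matching pursuit: repeatedly pick the dictionary atom most
    correlated with the residual and subtract its projection.
    D has unit-norm columns (atoms); x is the signal to code."""
    r = x.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r                 # correlation of residual with each atom
        k = int(np.argmax(np.abs(corr)))  # best-matching atom
        coef[k] += corr[k]
        r -= corr[k] * D[:, k]         # remove that atom's contribution
    return coef, r

# tiny demo: overcomplete dictionary (4-dim signals, 8 atoms)
rng = np.random.default_rng(0)
D = rng.standard_normal((4, 8))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
x = 2.0 * D[:, 3] - 0.5 * D[:, 6]      # sparse ground truth
coef, r = matching_pursuit(x, D, n_iter=20)
```

By construction the invariant `x = D @ coef + r` holds at every iteration, and the residual norm decays as atoms are selected.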
Kenneth Kreutz-Delgado
2.
《IEEE Transactions on Signal Processing》2006,54(11):4311-4322
In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
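A compact NumPy sketch of one K-SVD iteration, following the alternation the abstract describes (sparse coding, then rank-1 SVD updates of each atom together with its coefficients); the helper `omp` and all sizes are illustrative assumptions, not the authors' code:

```python
import numpy as np

def omp(x, D, k):
    """Orthogonal matching pursuit: k-sparse code of x over dictionary D."""
    idx, r = [], x.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        sub = D[:, idx]
        c = np.linalg.lstsq(sub, x, rcond=None)[0]
        r = x - sub @ c                    # residual orthogonal to chosen atoms
    code = np.zeros(D.shape[1])
    code[idx] = c
    return code

def ksvd_step(Y, D, k):
    """One K-SVD iteration: sparse-code all training examples, then update
    each atom (and its nonzero coefficients) via a rank-1 SVD of the
    residual restricted to the examples that use that atom."""
    X = np.column_stack([omp(Y[:, i], D, k) for i in range(Y.shape[1])])
    for j in range(D.shape[1]):
        users = np.nonzero(X[j, :])[0]     # examples that use atom j
        if users.size == 0:
            continue
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]                  # updated atom (unit norm)
        X[j, users] = s[0] * Vt[0, :]      # updated coefficients
    return D, X
```

Each atom update is the best rank-1 approximation of the restricted residual, so the representation error is non-increasing from one iteration to the next.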
3.
The pioneering work of Shannon provides fundamental bounds on the rate limitations of communicating information reliably over noisy channels (the channel coding problem), as well as the compressibility of data subject to distortion constraints (the lossy source coding problem). However, Shannon's theory is nonconstructive in that it only establishes the existence of coding schemes that can achieve the fundamental bounds but provides neither concrete codes nor computationally efficient algorithms. In the case of channel coding, the past two decades have witnessed dramatic advances in practical constructions and algorithms, including the invention of turbo codes and the surge of interest in low-density parity check (LDPC) codes. Both these classes of codes are based on sparse graphs and yield excellent error-correction performance when decoded using computationally efficient methods such as the message-passing sum-product algorithm. Moreover, their performance limits are well characterized, at least in the asymptotic limit of large block lengths, via the density evolution method.
4.
5.
A Proximal Iteration for Deconvolving Poisson Noisy Images Using Sparse Representations
《IEEE transactions on image processing》2009,18(2):310-321
We propose an image deconvolution algorithm when the data is contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are as follows. First, we handle the Poisson noise properly by using the Anscombe variance stabilizing transform, leading to a nonlinear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties, and a nonsmooth sparsity-promoting penalty over the image representation coefficients (e.g., the ℓ1-norm). An additional term is also included in the functional to ensure positivity of the restored image. Third, a fast iterative forward-backward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions of the solution, and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experimental results are carried out to show the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that using sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise such as astronomy and microscopy.
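The three building blocks named in the abstract (Anscombe transform, ℓ1 proximal operator, forward-backward iteration with positivity) can be sketched as follows; this is a generic illustration under assumed names and step sizes, not the paper's implementation:

```python
import numpy as np

def anscombe(y):
    """Anscombe variance-stabilizing transform: maps Poisson counts to
    data with approximately unit-variance Gaussian noise."""
    return 2.0 * np.sqrt(y + 3.0 / 8.0)

def soft_threshold(x, t):
    """Proximal operator of the l1 sparsity-promoting penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_step(x, H, z, lam, step):
    """One forward-backward splitting iteration for
    min 0.5*||H x - z||^2 + lam*||x||_1  subject to x >= 0."""
    grad = H.T @ (H @ x - z)                   # forward (gradient) step
    x = soft_threshold(x - step * grad, step * lam)
    return np.maximum(x, 0.0)                  # proximal step + positivity
```

With `H` the identity, the iteration converges in one step to the nonnegative soft-thresholded data, which is a convenient sanity check.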
6.
To address the difficulty that traditional high-resolution synthetic aperture radar (SAR) sparse autofocus imaging algorithms have in balancing sparsity and focusing features, this paper proposes a multi-task collaborative-optimization learning sparse autofocus (MtL-SA) algorithm based on the alternating direction method of multipliers (ADMM). The algorithm introduces an entropy norm to characterize the focusing quality of the SAR image and, within the ADMM optimization framework, uses proximal algorithms to obtain a closed-form solution for the focusing feature. To handle the non-convexity of the original entropy-norm regularized objective, the cost function is carefully designed so that the entropy-norm proximal operator admits a closed-form analytic solution. Meanwhile, the ℓ1 norm is used to characterize the sparsity of the imaging result, and a complex soft-threshold proximal operator is constructed for complex-valued SAR imaging data. The proposed MtL-SA imaging algorithm solves analytically for both the sparsity and focusing features of the backscattered field of the target scene, effectively improving the reliability and robustness of autofocus. The two feature-enhancement steps regularize each other, reducing error propagation during the iterations and ensuring the accuracy of the joint feature enhancement. Experiments on simulated and measured airborne SAR imaging data verify the effectiveness and practicality of the algorithm, and phase-transition analysis is applied to demonstrate, quantitatively and qualitatively, its advantages over traditional algorithms.
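The complex soft-threshold proximal operator mentioned in the abstract has a standard closed form (shrink each entry's magnitude, keep its phase); a minimal sketch, with the threshold value purely illustrative:

```python
import numpy as np

def complex_soft_threshold(x, t):
    """Proximal operator of the l1 norm for complex-valued data:
    shrinks the magnitude of each entry by t while preserving its phase."""
    mag = np.abs(x)
    shrink = np.maximum(mag - t, 0.0)
    # scale = shrink / mag, guarding against division by zero
    return shrink / np.where(mag > 0, mag, 1.0) * x
```

For example, an entry 3+4j (magnitude 5) thresholded by 1 becomes 2.4+3.2j (magnitude 4, same phase).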
7.
8.
Off-grid direction-of-arrival (DOA) estimation addresses the mismatch between the true DOAs and the assumed grid points. For the DOAs of spatially adjacent signals, a coarse grid degrades accuracy and resolution, while a dense grid improves accuracy at a sharply increased computational cost. To address this problem, this paper proposes a sparse Bayesian learning (SBL) based DOA estimation algorithm for spatially adjacent signals, comprising three main steps. First, by maximizing the marginal likelihood function of the array output, a new fixed-point iteration is derived for the signals under a Laplace prior to pre-estimate the hyperparameters, which converges faster than other classical SBL algorithms. Second, a new grid-interpolation method is used to refine the grid point set, and the noise variance and signal powers are re-estimated to resolve the DOAs of spatially adjacent signals. Finally, a maximization formula of the likelihood function with respect to angle is derived to improve the off-grid DOA search. Simulations show that the algorithm achieves higher accuracy and resolution for the DOAs of spatially adjacent signals than other classical SBL-type algorithms, together with improved computational efficiency.
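For orientation (not the paper's Laplace-prior fixed point), the classic EM form of sparse Bayesian learning that such algorithms build on can be sketched in a few lines; the problem sizes and noise variance below are hypothetical:

```python
import numpy as np

def sbl_em(y, A, sigma2, n_iter=100):
    """EM form of sparse Bayesian learning for y = A x + noise:
    alternate the Gaussian posterior over x with the hyperparameter
    update gamma_i = mu_i^2 + Sigma_ii. Hyperparameters of irrelevant
    columns shrink toward zero, yielding a sparse estimate."""
    n = A.shape[1]
    gamma = np.ones(n)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / sigma2          # posterior mean of x
        gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-12)
    return mu, gamma
```

In DOA estimation, the columns of `A` would be steering vectors on the angular grid, and the surviving large `gamma_i` indicate source directions.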
9.
10.
In this paper, we propose a combinational algorithm for the removal of zero-mean white and homogeneous Gaussian additive noise from a given image. Image denoising is formulated as an optimization problem. This is iteratively solved by a weighted basis pursuit (BP) in the closed affine subspace. The patches extracted from a given noisy image can be sparsely and approximately represented by adaptively choosing a few nearest neighbors. The approximate reconstruction of these denoised patches is performed by the sparse representation on two dictionaries, which are built by a discrete cosine transform and the noisy patches, respectively. Experiments show that the proposed algorithm outperforms both BP denoising and Sparse K-SVD. This is because the underlying structure of natural images is better captured and preserved. The results are comparable to those of the block-matching 3D filtering algorithm.
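To make the DCT-dictionary idea concrete, here is a minimal patch-denoising sketch that hard-thresholds 2-D DCT coefficients; it is a generic stand-in for the sparse-coding step, with patch size and threshold as assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; its rows are the dictionary atoms."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def denoise_patch_dct(patch, thresh):
    """Denoise one square patch by hard-thresholding its 2-D DCT
    coefficients: a simple proxy for sparse coding on the DCT dictionary."""
    M = dct_matrix(patch.shape[0])
    c = M @ patch @ M.T                  # analysis: DCT coefficients
    c[np.abs(c) < thresh] = 0.0          # keep only significant atoms
    return M.T @ c @ M                   # synthesis: reconstruct the patch
```

Because `M` is orthonormal, a patch whose energy is concentrated in a few coefficients (e.g., a flat patch) passes through unchanged.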
11.
Sooyong Choi, Jong-Moon Chung 《IEEE Communications Letters》2007,11(2):182-184
In this letter, a new adaptive beamforming assisted receiver based on sparse Bayesian learning is proposed. We consider a general probabilistic Bayesian learning framework for obtaining sparse solutions to adaptive beamforming assisted receivers, improving on receivers based on the minimum mean squared error (MMSE) scheme. Simulation experiments show that the sparse Bayesian beamforming receiver achieves outstanding bit error rate (BER) performance compared to MMSE beamforming receivers.
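The MMSE baseline the letter compares against reduces to the Wiener solution w = R⁻¹p; a minimal sketch, where the array geometry, reference sequence, and diagonal loading are all illustrative assumptions:

```python
import numpy as np

def mmse_beamformer(X, d, loading=1e-3):
    """MMSE (Wiener) beamforming weights from array snapshots X
    (sensors x snapshots) and a known reference sequence d:
    w = R^{-1} p, with small diagonal loading for numerical stability."""
    N = X.shape[1]
    R = X @ X.conj().T / N                     # sample spatial covariance
    p = X @ d.conj() / N                       # cross-correlation with d
    return np.linalg.solve(R + loading * np.eye(R.shape[0]), p)
```

The beamformer output per snapshot is `w.conj() @ X`, which at reasonable SNR tracks the reference symbols.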
12.
To address the high computational complexity and the difficulty of quickly selecting hyperparameters in existing DOA estimation methods based on nested sparse circular arrays, an off-grid sparse Bayesian learning (OGSBL) method based on an improved nested sparse circular array is proposed. The method first vectorizes the covariance matrix of the signals received by the improved nested sparse circular array, then constructs an extended observation matrix, and combines the off-grid model with the sparse Bayesian learning algorithm to achieve underdetermined DOA estimation. Simulation results show that the proposed algorithm reduces…
13.
14.
15.
The dyadic wavelet transform is an effective tool for processing piecewise smooth signals; however, its poor frequency resolution (its low Q-factor) limits its effectiveness for processing oscillatory signals such as speech, EEG, and vibration measurements. This paper develops a more flexible family of wavelet transforms for which the frequency resolution can be varied. The new wavelet transform can attain higher Q-factors (desirable for processing oscillatory signals) or the same low Q-factor of the dyadic wavelet transform. The new wavelet transform is modestly overcomplete and based on rational dilations. Like the dyadic wavelet transform, it is an easily invertible 'constant-Q' discrete transform implemented using iterated filter banks and can likewise be associated with a wavelet frame for L2(R). The wavelet can be made to resemble a Gabor function and can hence have good concentration in the time-frequency plane. The construction of the new wavelet transform depends on the judicious use of both the transform's redundancy and the flexibility allowed by frequency-domain filter design.
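The Q-factor discussed above is just center frequency over -3 dB bandwidth; a small sketch that estimates it for a Gabor-like atom (all parameters below are illustrative, not from the paper):

```python
import numpy as np

def q_factor(h, n_fft=8192):
    """Estimate a bandpass filter's Q-factor (center frequency divided
    by -3 dB bandwidth) from its impulse response h."""
    H = np.abs(np.fft.rfft(h, n_fft))
    k_peak = int(np.argmax(H))                       # center-frequency bin
    above = np.nonzero(H >= H[k_peak] / np.sqrt(2.0))[0]
    bandwidth = above[-1] - above[0] + 1             # -3 dB width in bins
    return k_peak / bandwidth

def gabor_atom(sigma, omega0, n=512):
    """Gaussian-windowed sinusoid: a wavelet-like bandpass atom."""
    t = np.arange(n) - n / 2
    return np.exp(-t**2 / (2.0 * sigma**2)) * np.cos(omega0 * t)
```

A longer time envelope (larger `sigma`) gives a narrower band and hence a higher Q at the same center frequency, which is exactly the trade-off the rational-dilation transform exposes.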
16.
This paper develops an overcomplete discrete wavelet transform (DWT) based on rational dilation factors for discrete-time signals. The proposed overcomplete rational DWT is implemented using self-inverting FIR filter banks, is approximately shift-invariant, and can provide a dense sampling of the time-frequency plane. A straightforward algorithm is described for the construction of minimal-length perfect reconstruction filters with a specified number of vanishing moments; whereas, in the nonredundant rational case, no such algorithm is available. The algorithm is based on matrix spectral factorization. The analysis/synthesis functions (discrete-time wavelets) can be very smooth and can be designed to closely approximate the derivatives of the Gaussian function.
17.
A Robust Method for Estimating the Target Aspect Angle in SAR Images Based on a Sparse Prior
Robust, high-precision estimation of the target aspect angle can effectively improve the computational efficiency and recognition performance of SAR ATR. The radar-facing dominant boundary of a target in a SAR image carries fairly precise aspect-angle information and can be used for aspect-angle estimation. Because of the target's electromagnetic scattering characteristics and the speckle noise in SAR images, the extracted dominant boundary is quite irregular and contains outlier points. Exploiting the sparse distribution of these outliers, this paper proposes a robust aspect-angle estimation method based on the maximum a posteriori (MAP) principle. The method can effectively detect and remove the outliers in the dominant boundary, thereby improving the precision and robustness of the aspect-angle estimate. To resolve the vertical/horizontal ambiguity that arises when only the range-dominant boundary is used, a new disambiguation method is proposed based on the aspect ratio of the target region in the segmented image. Experiments on measured MSTAR data show that the proposed algorithm has high precision and robustness.
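The idea of fitting a boundary direction while rejecting sparse outliers can be sketched with a simple iteratively trimmed least-squares fit; this is a generic stand-in for the MAP-based rejection, with the rejection threshold `k` and MAD scale purely illustrative:

```python
import numpy as np

def robust_angle_fit(x, y, n_iter=5, k=2.5):
    """Estimate a dominant edge direction by least squares, iteratively
    discarding points whose residual exceeds k times the robust (MAD)
    scale -- a simple proxy for MAP-based outlier rejection."""
    keep = np.ones(len(x), dtype=bool)
    slope, intercept = 0.0, 0.0
    for _ in range(n_iter):
        A = np.column_stack([x[keep], np.ones(int(keep.sum()))])
        slope, intercept = np.linalg.lstsq(A, y[keep], rcond=None)[0]
        r = y - (slope * x + intercept)
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
        keep = np.abs(r) <= k * max(scale, 1e-9)
    return np.degrees(np.arctan(slope)), keep
```

On boundary points lying along a 45-degree edge with a few gross outliers, the fit converges to the true direction and the outliers end up flagged in `keep`.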
18.
19.
Using the discrete wavelet transform, an overcomplete independent component analysis method combining wavelet transforms and ICA is proposed. Compared with existing overcomplete ICA algorithms, the proposed algorithm exploits global observation information, and the effective length of each experimental sub-process is only half of the original. Experiments show that the algorithm can effectively separate mixed speech in the overcomplete case.
20.
Hongxia Jin, Mario Blaum 《IEEE Transactions on Information Theory》2007,53(2):804-808
In this correspondence, the combinatorial properties of traceability codes constructed from error-correcting codes are studied. Necessary and sufficient conditions for traceability codes constructed from maximum-distance separable (MDS) codes are provided. The known sufficient conditions for a traceability code are proven to be also necessary for linear MDS codes.