Similar Documents
 10 similar documents found
1.
Backtracking-Based Iterative Hard Thresholding Algorithm   Total citations: 5 (self-citations: 0, citations by others: 5)
杨海蓉, 方红, 张成, 韦穗. 《自动化学报》(Acta Automatica Sinica), 2011, 37(3): 276-282
To address the large number of iterations and long running time of the iterative hard thresholding (IHT) algorithm in compressed sensing (CS), a backtracking-based iterative hard thresholding algorithm (BIHT) is proposed. By incorporating a backtracking step, BIHT refines the support selected in each iteration and reduces the number of times the same support elements are repeatedly selected. Simulation experiments show that, while preserving reconstruction quality, BIHT reduces reconstruction time by two orders of magnitude compared with IHT and normalized iterative hard thresholding (NIHT). Reconstruction experiments on inherently sparse 0-1 random signals show that, for the same number of measurements and the same sparsity level, BIHT achieves a higher reconstruction probability than IHT.
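A minimal NumPy sketch of the backtracking idea described in this abstract, assuming a CoSaMP-style support merge and least-squares re-fit; it is an illustration of the concept, not the authors' exact BIHT implementation.

```python
import numpy as np

def biht_sketch(y, A, K, step=1.0, iters=50):
    """Hedged sketch of backtracking-based IHT: IHT-style support selection,
    followed by a least-squares re-fit on a merged candidate support and
    pruning back to K entries, so poorly chosen atoms can be discarded."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        # Standard IHT gradient step and hard thresholding to K entries.
        g = x + step * A.T @ (y - A @ x)
        candidates = np.argsort(np.abs(g))[-K:]
        # Backtracking: merge with the current support, re-fit by least
        # squares, then keep only the K largest re-fitted coefficients.
        merged = np.union1d(candidates, np.nonzero(x)[0]).astype(int)
        coef = np.linalg.lstsq(A[:, merged], y, rcond=None)[0]
        keep = merged[np.argsort(np.abs(coef))[-K:]]
        x = np.zeros(n)
        x[keep] = np.linalg.lstsq(A[:, keep], y, rcond=None)[0]
    return x
```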

2.
Principal component analysis (PCA) approximates a data matrix with a low-rank one by imposing sparsity on its singular values. Its robust variant can cope with spiky noise by introducing an element-wise sparse term. In this paper, we extend such sparse matrix learning methods, and propose a novel framework called sparse additive matrix factorization (SAMF). SAMF systematically induces various types of sparsity by a Bayesian regularization effect, called model-induced regularization. Although group LASSO also allows us to design arbitrary types of sparsity on a matrix, SAMF, which is based on the Bayesian framework, provides inference without any requirement for manual parameter tuning. We propose an efficient iterative algorithm called the mean update (MU) for the variational Bayesian approximation to SAMF, which gives the global optimal solution for a large subset of parameters in each step. We demonstrate the usefulness of our method on benchmark datasets and a foreground/background video separation problem.
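The mean-update variational Bayesian algorithm itself is involved, but the base model SAMF generalizes, a low-rank term plus an element-wise sparse term, can be illustrated with a simple alternating-shrinkage sketch. This is a generic robust-PCA-style decomposition for orientation only, not the SAMF/MU method, and the threshold parameters are assumptions.

```python
import numpy as np

def lowrank_plus_sparse(Y, rank, sparse_thresh, iters=30):
    """Illustrative alternating decomposition Y ~ L + S, with L low-rank and
    S element-wise sparse (generic sketch, not the SAMF mean-update method)."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(iters):
        # Low-rank update: truncated SVD of the residual Y - S.
        U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: hard-threshold the remaining residual element-wise.
        R = Y - L
        S = np.where(np.abs(R) > sparse_thresh, R, 0.0)
    return L, S
```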

3.
IIHT: An Improved Iterative Hard Thresholding Algorithm for Compressed Sensing Reconstruction   Total citations: 1 (self-citations: 0, citations by others: 1)
This work studies the theory of compressed sensing signal reconstruction algorithms. To address the drawbacks of the iterative hard thresholding (IHT) reconstruction algorithm, namely its excessive dependence on the measurement matrix, high computational complexity, and long running time, an improved iterative hard thresholding algorithm (IIHT) is designed by revising the cost function of IHT and adaptively adjusting the rule for choosing the iteration step size. IIHT significantly improves the probability of exact signal reconstruction, lowers the computational complexity, further reduces the running time, and accelerates convergence.
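The abstract does not state the revised cost function, but the adaptive step-size idea can be sketched with the normalized-IHT-style rule, where the step is computed from the gradient restricted to the current support. This is a hedged sketch under that assumption, not the authors' IIHT.

```python
import numpy as np

def iht_adaptive_step(y, A, K, iters=100):
    """Hedged sketch of IHT with an adaptively chosen step size
    (NIHT-style step rule used for illustration, not the IIHT cost function)."""
    n = A.shape[1]
    x = np.zeros(n)
    support = np.arange(n)
    for _ in range(iters):
        g = A.T @ (y - A @ x)
        # Adaptive step: optimal step length for the gradient restricted
        # to the current support, as in normalized IHT.
        gs = g[support]
        Ags = A[:, support] @ gs
        mu = (gs @ gs) / (Ags @ Ags + 1e-12)
        x = x + mu * g
        # Hard thresholding: keep only the K largest magnitudes.
        support = np.argsort(np.abs(x))[-K:]
        mask = np.zeros(n, dtype=bool)
        mask[support] = True
        x[~mask] = 0.0
    return x
```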

4.
The development and use of low-rank approximate nonnegative matrix factorization (NMF) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis are presented. The evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed. The interpretability of NMF outputs in specific contexts is examined, along with opportunities for future work in the modification of NMF algorithms for large-scale and time-varying data sets.
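As a concrete reference point for the low-rank NMF model discussed above, here is a minimal sketch of the classical multiplicative-update (Lee-Seung) NMF; the sparsity and smoothness constraints of the hybrid methods the abstract mentions are not included.

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=200, eps=1e-9):
    """Basic NMF via multiplicative updates: V (m x n, nonnegative) ~ W @ H.
    Sparsity/smoothness constraints from the hybrid methods are omitted."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        # Lee-Seung updates minimizing the Frobenius reconstruction error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```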

5.
We present graphics processing unit (GPU) data structures and algorithms to efficiently solve sparse linear systems that are typically required in simulations of multi‐body systems and deformable bodies. To this end, we introduce an efficient sparse matrix data structure that can handle arbitrary sparsity patterns and outperforms current state‐of‐the‐art implementations for sparse matrix vector multiplication. Moreover, an efficient method to construct global matrices on the GPU is presented where hundreds of thousands of individual element contributions are assembled in a few milliseconds. A finite‐element‐based method for the simulation of deformable solids as well as an impulse‐based method for rigid bodies are introduced in order to demonstrate the advantages of the novel data structures and algorithms. These applications share the characteristic that a major computational effort consists of building and solving systems of linear equations in every time step. Our solving method results in a speed‐up factor of up to 13 in comparison to other GPU methods.
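The paper's GPU data structure is not described here, but the baseline operation it accelerates, sparse matrix-vector multiplication over an arbitrary sparsity pattern, can be illustrated with a plain CSR (compressed sparse row) product. This is a CPU-side sketch of the concept, not the authors' GPU kernel.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Sparse matrix-vector product y = A @ x with A stored in CSR form:
    data    - nonzero values, stored row by row
    indices - column index of each nonzero
    indptr  - indptr[i]:indptr[i+1] delimits the nonzeros of row i."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[start:end], x[indices[start:end]])
    return y

# Example: the 2x3 matrix [[1, 0, 2], [0, 3, 0]] times x = [1, 1, 1].
data = np.array([1.0, 2.0, 3.0])
indices = np.array([0, 2, 1])
indptr = np.array([0, 2, 3])
print(csr_matvec(data, indices, indptr, np.ones(3)))  # [3. 3.]
```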

6.
Most FPGA-based convolutional neural network (CNN) accelerator designs fail to exploit sparsity effectively. Considering bandwidth and energy consumption, two improved CNN computation schemes based on linear systolic arrays are proposed. First, convolution is converted into matrix multiplication so that sparsity can be exploited. Second, to address the large I/O requirements of conventional parallel matrix multipliers, the design is improved with linear systolic arrays. Finally, the advantages and disadvantages of conventional parallel matrix multipliers and the two improved linear systolic arrays for CNN acceleration are compared. Theoretical proof and analysis show that, compared with parallel matrix multipliers, both improved linear systolic arrays make full use of sparsity and offer lower energy consumption and lower I/O bandwidth usage.
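The first step in both schemes, rewriting convolution as a matrix product so that zero weights or activations become zero rows/columns that can be skipped, can be illustrated with a small im2col sketch. The systolic-array designs themselves are hardware-level and are not reproduced here.

```python
import numpy as np

def im2col_conv2d(x, w):
    """Express a valid 2-D convolution as a matrix multiplication.
    x: (H, W) input feature map, w: (kh, kw) kernel.
    Sparse kernels/activations then appear as zeros in the product operands."""
    H, W = x.shape
    kh, kw = w.shape
    oh, ow = H - kh + 1, W - kw + 1
    # Each output position contributes one column of the patch matrix.
    cols = np.empty((kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    # Convolution = (flattened kernel) @ (patch matrix).
    return (w.ravel() @ cols).reshape(oh, ow)
```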

7.
Recommender systems usually employ techniques like collaborative filtering for providing recommendations on items/services. Maximum Margin Matrix Factorization (MMMF) is an effective collaborative filtering approach. MMMF suffers from the data sparsity problem, i.e., the number of items rated by the users is very small compared to the very large item space. Recently, techniques like cross-domain collaborative filtering (transfer learning) have been suggested for addressing the data sparsity problem. In this paper, we propose a model for transfer learning in collaborative filtering through MMMF to address the data sparsity issue. The latent feature matrices involved in MMMF are clustered and combined to generate a cluster-level rating pattern called a codebook, and codebook transfer is used to transfer information. Transferring the codebook and finding the predicted rating matrix is done in a novel way by introducing a softness constraint into the optimization function. We have evaluated our method at different levels of sparsity using benchmark datasets. Results from experiments show that our model approximates the target matrix well.
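A hedged sketch of the codebook idea described above: user and item latent factors from the source-domain factorization are clustered, and each (user-cluster, item-cluster) cell is summarized by its mean observed rating to form the codebook transferred to the target domain. K-means is used here as a simple stand-in for the clustering step, and the paper's softness constraint is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(R_source, U, V, n_user_clusters, n_item_clusters):
    """Cluster latent factor matrices U (users x k) and V (items x k) obtained
    from factorizing R_source, then summarize each (user-cluster, item-cluster)
    cell by its mean observed rating to form the cluster-level rating pattern."""
    u_labels = KMeans(n_clusters=n_user_clusters, n_init=10).fit_predict(U)
    v_labels = KMeans(n_clusters=n_item_clusters, n_init=10).fit_predict(V)
    codebook = np.zeros((n_user_clusters, n_item_clusters))
    for cu in range(n_user_clusters):
        for ci in range(n_item_clusters):
            block = R_source[np.ix_(u_labels == cu, v_labels == ci)]
            rated = block[block > 0]           # treat 0 as "not rated"
            codebook[cu, ci] = rated.mean() if rated.size else 0.0
    return codebook
```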

8.
When the signal sparsity is unknown, the sparsity adaptive matching pursuit (SAMP) algorithm is a widely used compressed sensing reconstruction algorithm. To improve the performance of SAMP, an improved sparsity adaptive matching pursuit (ISAMP) algorithm is proposed. The algorithm introduces a generalized Dice coefficient matching criterion, which selects more accurately from the measurement matrix the atoms that best match the residual signal; it uses a thresholding method to build the preliminary candidate set and adopts an exponentially varying step size during the iterations. Experimental results show that, under the same conditions, the improved algorithm achieves better reconstruction quality and faster computation.
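A hedged sketch of the generalized Dice coefficient matching criterion mentioned above, used in place of the plain inner product when scoring how well each atom of the measurement matrix matches the current residual. This is one plausible form of the criterion; the paper's exact definition may differ.

```python
import numpy as np

def dice_scores(A, residual, eps=1e-12):
    """Generalized Dice-style matching score between each column (atom) of A
    and the residual: 2 * |<a_j, r>| / (||a_j||^2 + ||r||^2).
    Larger scores indicate atoms that better match the residual."""
    inner = np.abs(A.T @ residual)
    norms = np.sum(A * A, axis=0) + residual @ residual
    return 2.0 * inner / (norms + eps)

# Atoms whose score exceeds a threshold form the preliminary candidate set:
# candidates = np.where(dice_scores(A, r) > threshold)[0]
```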

9.
Traditional greedy algorithms need to know the sparsity of the signal in advance, while the sparsity adaptive matching pursuit algorithm avoids this problem at the expense of computational time. To overcome these problems, this paper proposes a variable step size sparsity adaptive matching pursuit (SAMPVSS). To select atoms, the algorithm constructs a set of candidate atoms by calculating the correlation between the measurement matrix and the residual, and selects the atoms most correlated with the residual. To determine the number of atoms selected in each iteration, the algorithm introduces an exponential function. At the beginning of the iterations, a larger step is used to estimate the sparsity of the signal; in the later iterations, the step size is set to one to improve the accuracy of reconstruction. The simulation results show that the proposed algorithm reconstructs both one-dimensional and two-dimensional signals well.
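A minimal sketch of the step-size schedule described above: a larger, exponentially shrinking step while the sparsity level is being estimated, falling to one in later iterations. The exact exponential rule, the initial step of 16, and the decay rate are assumptions for illustration; the abstract does not give the expression.

```python
import numpy as np

def samp_step_size(iteration, initial_step=16, decay=0.5):
    """Assumed exponential step-size rule for a SAMP-style algorithm: start
    with a large step to approach the true sparsity quickly, then decay
    toward a step of one for fine-grained, accurate reconstruction."""
    step = int(round(initial_step * np.exp(-decay * iteration)))
    return max(step, 1)

# Example schedule: 16, 10, 6, 4, 2, 1, 1, ...
print([samp_step_size(k) for k in range(8)])
```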

10.
Recommendation algorithms based on matrix factorization commonly suffer from data sparsity, cold start, and poor robustness against attacks. To address these problems, this paper proposes a trust-enhanced matrix factorization recommendation algorithm. First, drawing on the principles of trust formation in social psychology, a trust expansion method based on user reputation is proposed to alleviate the sparsity of trust data. Then, based on the principle of social homophily, trusted users are used to extend the user latent factor vectors during factorization of the rating matrix, addressing the sparsity of rating data and the cold start problem for new users. At the same time, trust relations are used to regularize the objective function, improving the accuracy of rating prediction. Experiments on the widely used Epinions dataset show that the proposed method clearly improves recommendation performance and effectively mitigates the data sparsity and cold start problems.
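A hedged SGD sketch of matrix factorization with a trust-based regularization term that pulls each user's latent vector toward those of trusted users, in the spirit of the constraint described above. The reputation-based trust expansion is not reproduced, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def trust_mf_sgd(ratings, trust, n_users, n_items, k=10,
                 lr=0.01, reg=0.05, trust_reg=0.1, epochs=20, seed=0):
    """ratings: list of (user, item, rating); trust: dict user -> list of
    trusted user ids. Latent vectors P (users) and Q (items) are trained with
    squared error plus an extra penalty tying each user to trusted users."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            pu, qi = P[u].copy(), Q[i].copy()
            P[u] += lr * (err * qi - reg * pu)
            Q[i] += lr * (err * pu - reg * qi)
        # Trust regularization: pull each user toward the mean of trusted users.
        for u, friends in trust.items():
            if friends:
                P[u] -= lr * trust_reg * (P[u] - P[friends].mean(axis=0))
    return P, Q
```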
