Similar Documents
20 similar documents found (search time: 125 ms)
1.
A Mutual-Information Feature Extraction Method with Optimally Computable Feature Dimensionality (cited 3 times; 0 self-citations)
Using matrix eigenvector decomposition, this paper proposes a mutual-information feature extraction method whose feature dimensionality can be computed optimally. First, the class-separability properties of the mutual-information criterion under the Gaussian assumption are discussed, and existing typical algorithms are shown to be special cases of the proposed one. Then, building on a rigorous mathematical formulation of the criterion, an algorithm for computing the optimal feature dimensionality via matrix eigenvector decomposition is presented. Finally, the effectiveness of the method is verified on real data.
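As a hedged illustration of the eigendecomposition step, the sketch below runs the Fisher/LDA-style scatter analysis that, under the Gaussian assumption, is one of the typical special cases the abstract refers to; the synthetic data, the eigenvalue threshold, and all variable names are assumptions for the example, not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative only: a scatter-based special case of the mutual-information criterion.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)), rng.normal(2.0, 1.0, (50, 5))])
y = np.array([0] * 50 + [1] * 50)

mean = X.mean(axis=0)
Sw = sum(np.cov(X[y == c].T) for c in np.unique(y))            # within-class scatter
Sb = sum(np.outer(X[y == c].mean(0) - mean, X[y == c].mean(0) - mean)
         for c in np.unique(y))                                 # between-class scatter

evals, evecs = eigh(Sb, Sw)          # generalized eigendecomposition
order = np.argsort(evals)[::-1]
keep = evals[order] > 1e-3           # data-driven choice of the feature dimensionality
W = evecs[:, order[keep]]
Z = X @ W                            # extracted features
print(W.shape, Z.shape)
```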

2.
The security of the RSA cryptosystem rests on the difficulty of factoring large integers. This paper briefly surveys the integer-factorization algorithms commonly used to attack RSA, and details the theoretical foundations of the quadratic sieve (QS) and its improved variants MPQS and PPMPQS. Based on the principle of PPMPQS, a fast method for finding PP relations is designed and its complexity is analyzed; PPMPQS is implemented in VC6 and successfully factors a 70-digit decimal integer.
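The toy sketch below only illustrates the relation-collection idea that the QS family builds on: finding x whose value x^2 - n factors completely over a small factor base. It uses plain trial division and an unrestricted factor base, so it is a simplification for illustration, not the paper's PPMPQS or its fast PP-relation search.

```python
from math import isqrt

def smooth_relations(n, factor_base, want, span=5000):
    """Collect x near sqrt(n) whose value x*x - n is smooth over the factor base."""
    relations = []
    start = isqrt(n) + 1
    for x in range(start, start + span):
        value = x * x - n
        remainder, exponents = value, {}
        for p in factor_base:
            while remainder % p == 0:
                remainder //= p
                exponents[p] = exponents.get(p, 0) + 1
        if remainder == 1:                      # fully factored: a smooth relation
            relations.append((x, exponents))
            if len(relations) >= want:
                break
    return relations

# Toy run: n = 8051 = 83 * 97. A real sieve would restrict the base to primes p with
# (n/p) = 1 and sieve logarithms instead of trial-dividing every candidate.
for x, exps in smooth_relations(8051, [2, 3, 5, 7, 11, 13, 17, 19, 23, 29], want=6):
    print(x, x * x - 8051, exps)
```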

3.
A Two-Stage Co-Clustering Collaborative Filtering Algorithm (cited 2 times; 1 self-citation)
吴湖  王永吉  王哲  王秀利  杜栓柱 《软件学报》2010,21(5):1042-1054
A two-stage rating prediction method is proposed, based on a new co-clustering algorithm (BlockClust) and weighted non-negative matrix factorization. The rating patterns in the original matrix are first co-clustered along both the user and the item dimensions, and unknown ratings are then predicted within each resulting block by weighted non-negative matrix factorization. The advantage of this approach is that the blocks produced in the first stage are far smaller than the original rating matrix and the ratings within a block follow similar patterns, which greatly reduces the computation of the prediction stage while improving the accuracy of non-negative matrix factorization on sparse matrices. Incremental learning procedures are further given for efficiently updating the prediction model under three update modes of a recommender system. The new algorithm is compared with seven related methods on the MovieLens dataset, confirming its effectiveness and its value for large real-time recommender systems.
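The sketch below shows the second-stage idea in isolation: weighted non-negative matrix factorization with multiplicative updates, where only observed ratings contribute to the fit and missing cells are filled by W @ H. The BlockClust co-clustering stage is not shown, and the toy matrix and hyperparameters are assumptions for the example.

```python
import numpy as np

def weighted_nmf(V, mask, r, iters=300, eps=1e-9, seed=0):
    """Weighted NMF by multiplicative updates: only cells with mask == 1 are fitted."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        WH = W @ H
        W *= ((mask * V) @ H.T) / ((mask * WH) @ H.T + eps)
        WH = W @ H
        H *= (W.T @ (mask * V)) / (W.T @ (mask * WH) + eps)
    return W, H

V = np.array([[5.0, 3.0, 0.0], [4.0, 0.0, 1.0], [0.0, 2.0, 5.0]])
mask = (V > 0).astype(float)          # observed entries only
W, H = weighted_nmf(V, mask, r=2)
print(np.round(W @ H, 2))             # predictions, including the unobserved cells
```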

4.
褚一平  陈勤 《微机发展》2005,15(6):91-92,160
The security of the RSA cryptosystem rests on the difficulty of factoring large integers. This paper briefly surveys the integer-factorization algorithms commonly used to attack RSA, and details the theoretical foundations of the quadratic sieve (QS) and its improved variants MPQS and PPMPQS. Based on the principle of PPMPQS, a fast method for finding PP relations is designed and its complexity is analyzed; PPMPQS is implemented in VC6 and successfully factors a 70-digit decimal integer.

5.
An Incremental Learning Algorithm for Non-negative Matrix Factorization under Sparsity Constraints (cited 1 time; 1 self-citation)
王万良  蔡竞 《计算机科学》2014,41(8):241-244
Non-negative matrix factorization (NMF) is an effective subspace dimensionality-reduction method. To curb the growth of the computation as training samples accumulate, and to increase the sparsity of the factorized data, an incremental learning algorithm for NMF under sparsity constraints is proposed. Under the sparsity constraint, the algorithm reuses the result of the previous factorization in the iterations, which saves a large amount of computation time while making the factorized data sparser. Experiments on the ORL and CBCL face databases demonstrate the effectiveness of the algorithm for dimensionality reduction.
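A minimal sketch of the two ingredients described above, under assumed standard choices: an L1 (sparsity) penalty on the coefficient matrix with multiplicative updates, and a warm start from the previous factorization when new samples arrive. The data, rank, and penalty weight are made up for the example; the update rules are a common scheme, not necessarily the paper's exact ones.

```python
import numpy as np

def sparse_nmf(V, r, W0=None, H0=None, lam=0.1, iters=200, eps=1e-9, seed=0):
    """NMF with an L1 (sparsity) penalty on H, multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = W0 if W0 is not None else rng.random((m, r))
    H = H0 if H0 is not None else rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # the sparsity penalty enters the denominator
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
V_old = np.abs(rng.normal(size=(50, 200)))
W, H = sparse_nmf(V_old, r=10)

# Incremental step (assumed scheme): when new samples arrive, warm-start from the
# previous factors instead of refactorizing from random initial values.
V_new = np.abs(rng.normal(size=(50, 20)))
H_init = np.hstack([H, rng.random((10, 20))])
W2, H2 = sparse_nmf(np.hstack([V_old, V_new]), r=10, W0=W.copy(), H0=H_init, iters=50)
print(W2.shape, H2.shape)
```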

6.
A Two-Stage Co-Clustering Collaborative Filtering Algorithm (cited 14 times; 1 self-citation)
吴湖  王永吉  王哲  王秀利  杜栓柱 《软件学报》2010,21(4):1042-1054
A two-stage rating prediction method is proposed, based on a new co-clustering algorithm (BlockClust) and weighted non-negative matrix factorization. The rating patterns in the original matrix are first co-clustered along both the user and the item dimensions, and unknown ratings are then predicted within each resulting block by weighted non-negative matrix factorization. The advantage of this approach is that the blocks produced in the first stage are far smaller than the original rating matrix and the ratings within a block follow similar patterns, which greatly reduces the computation of the prediction stage while improving the accuracy of non-negative matrix factorization on sparse matrices. Incremental learning procedures are further given for efficiently updating the prediction model under three update modes of a recommender system. The new algorithm is compared with seven related methods on the MovieLens dataset, confirming its effectiveness and its value for large real-time recommender systems.

7.
Based on an in-depth study of the Apriori algorithm for association rules, a matrix-based association-rule mining algorithm is proposed. Because the algorithm scans the transaction database only once, and a simple and effective scheme is given for progressively shrinking the transaction database, the number of required I/O operations is greatly reduced, which improves the efficiency of Apriori and the performance of the data-mining algorithm.
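A toy sketch of the matrix idea, under simplifying assumptions: one pass over the transactions builds a binary transaction-item matrix, and the support of any candidate itemset is then counted by column operations on that matrix. Candidate pruning and the database-shrinking step are not shown.

```python
import numpy as np
from itertools import combinations

# One scan of the transactions builds the binary transaction-item matrix.
transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c", "e"}]
items = sorted(set().union(*transactions))
M = np.array([[int(i in t) for i in items] for t in transactions])

min_support = 2

def support(cols):
    # an itemset is contained in a transaction iff all of its columns are 1
    return int(M[:, list(cols)].all(axis=1).sum())

frequent = {}
for k in range(1, len(items) + 1):
    level = {c: support(c) for c in combinations(range(len(items)), k)}
    level = {c: s for c, s in level.items() if s >= min_support}
    if not level:
        break
    frequent.update({tuple(items[i] for i in c): s for c, s in level.items()})
print(frequent)
```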

8.
To address the growing computation and the poor space and time efficiency of existing non-negative matrix factorization algorithms as problem sizes increase, an incremental NMF algorithm is proposed. It uses a block-matrix formulation to reduce the scale of the computation and reuses the factorization result of the previous step to avoid redundant work. Experimental results show that the algorithm is effective at saving computational resources.

9.
A GPU Implementation of the Conjugate Gradient Method (cited 1 time; 0 self-citations)
夏健明  魏德敏 《计算机工程》2009,35(17):274-276
A new algorithm for matrix-vector multiplication on the graphics processing unit (GPU) is proposed, which performs the multiplication by rendering a quadrilateral only once. A new algorithm for summing the elements of a vector is also given; unlike reduction algorithms, it does not require the vector length to be a power of two. Using these two algorithms, the conjugate gradient method for solving linear systems is implemented on the GPU with the OpenGL Shading Language (GLSL). Compared with Krüger's algorithm, the method takes less computation time.
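For reference, the sketch below is the plain conjugate gradient iteration on the CPU, i.e. the algorithm the paper maps to the GPU; it shows the matrix-vector products and vector reductions that the GLSL kernels implement, not the GPU code itself.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # approximately [0.0909, 0.6364]
```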

10.
张笑虹  张奇志  周亚丽 《计算机应用研究》2020,37(5):1303-1305,1316
For the rating prediction problem in recommender systems, a modified binomial matrix factorization algorithm is implemented on top of matrix factorization. The user ratings of items are assumed to follow a binomial distribution; because users differ in their rating habits and items differ in popularity, the user-item rating matrix contains bias terms. The factorization and the rating prediction are corrected by introducing these biases, the model is formulated by maximum a posteriori estimation, and it is optimized by stochastic gradient descent. Experimental results on the MovieLens 100K dataset show that the binomial matrix factorization algorithm with rating biases outperforms the conventional binomial matrix factorization in recommendation accuracy and offline computation time.
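The sketch below is a standard biased matrix factorization trained by stochastic gradient descent, shown only to make the bias terms concrete; the paper's model additionally assumes a binomial likelihood for the ratings and a maximum a posteriori formulation, which are not reproduced here. The hyperparameters and toy ratings are assumptions.

```python
import numpy as np

def biased_mf(ratings, n_users, n_items, k=10, lr=0.01, reg=0.05, epochs=30, seed=0):
    """Biased MF: r_hat = mu + b_u + b_i + p_u . q_i, trained by SGD."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    bu, bi = np.zeros(n_users), np.zeros(n_items)
    mu = np.mean([r for _, _, r in ratings])        # global mean
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - (mu + bu[u] + bi[i] + P[u] @ Q[i])
            bu[u] += lr * (e - reg * bu[u])          # user bias update
            bi[i] += lr * (e - reg * bi[i])          # item bias update
            P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                          Q[i] + lr * (e * P[u] - reg * Q[i]))
    return mu, bu, bi, P, Q

ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1)]
mu, bu, bi, P, Q = biased_mf(ratings, n_users=2, n_items=3)
print(mu + bu[0] + bi[2] + P[0] @ Q[2])   # predicted score for user 0, item 2
```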

11.
P. Favati  G. Lotti  F. Romani  P. Rózsa 《Calcolo》1991,28(1-2):45-92
The idea of defining the generalized band matrices is based on the recognition that several pattern matrices and their inverses have low rank submatrices in certain domains. Theoretical considerations concerning the generalized band matrices enable us to give a uniform treatment of several well-known classes of matrices like band matrices, block band matrices, band matrices with low rank corrections, sparse matrices and their inverses. Making use of the new notions of information content and of compact representation of matrices, the concept of proper matrices is extended to generalized band matrices. Some reduction algorithms are presented which help to discover certain hidden structural properties of the generalized band matrices. The theoretical results are illustrated by a large number of figures showing numerical examples. Work supported by the Progetto Finalizzato Calcolo Parallelo e Sistemi Informatici of CNR. Visiting Professor at the University of Pisa under the support of GNIM-CNR.

12.
Interval Newton/Generalized Bisection methods reliably find all numerical solutions within a given domain. Both computational complexity analysis and numerical experiments have shown that solving the corresponding interval linear system generated by interval Newton's methods can be computationally expensive (especially when the nonlinear system is large). In applications, many large-scale nonlinear systems of equations result in sparse interval Jacobian matrices. In this paper, we first propose a general indexed storage scheme to store sparse interval matrices. We then present an iterative interval linear solver that utilizes the proposed indexed storage scheme. It is expected that the newly proposed general interval iterative sparse linear solver will improve the overall performance of interval Newton/Generalized Bisection methods when the Jacobian matrices are sparse. In Section 1, we briefly review interval Newton's methods. In Section 2, we review some currently used storage schemes for sparse systems. In Section 3, we introduce a new index scheme to store general sparse matrices. In Section 4, we present both sequential and parallel algorithms to evaluate a general sparse Jacobian matrix. In Section 5, we present both sequential and parallel algorithms to solve the corresponding interval linear system by the all-row preconditioned scheme. Conclusions and future work are discussed in Section 6.
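A minimal sketch of one possible CSR-style index scheme for a sparse interval matrix, given only to make the idea concrete (the paper's actual layout may differ): each stored entry carries an interval [lo, hi], and an interval matrix-vector product walks the same row-pointer/column-index arrays as ordinary CSR.

```python
import numpy as np

# 2x3 sparse interval matrix in CSR-style storage: rows start at positions 0 and 2.
row_ptr = np.array([0, 2, 3])
col_idx = np.array([0, 2, 1])
lo = np.array([1.0, -2.0, 0.5])
hi = np.array([1.5, -1.0, 0.75])

def interval_matvec(row_ptr, col_idx, lo, hi, x):
    """Interval matrix times a point vector: returns (lower, upper) bounds per row."""
    n_rows = len(row_ptr) - 1
    out_lo, out_hi = np.zeros(n_rows), np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            xj = x[col_idx[k]]
            a, b = lo[k] * xj, hi[k] * xj     # sign of xj decides which bound is which
            out_lo[i] += min(a, b)
            out_hi[i] += max(a, b)
    return out_lo, out_hi

print(interval_matvec(row_ptr, col_idx, lo, hi, np.array([1.0, 2.0, -1.0])))
```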

13.
A new block algorithm for computing a full rank solution of the Sylvester-observer equation arising in state estimation is proposed. The major computational kernels of this algorithm are: 1) solutions of standard Sylvester equations, in each of which one of the matrices is of much smaller order than the system matrix (and, furthermore, this small matrix can be chosen arbitrarily), and 2) orthogonal reduction of small order matrices. There are numerically stable algorithms for performing these tasks, including the Krylov-subspace methods for solving large and sparse Sylvester equations. The proposed algorithm is also rich in Level 3 Basic Linear Algebra Subroutine (BLAS-3) computations and is thus suitable for high performance computing. Furthermore, the results of numerical experiments on some benchmark examples show that the algorithm has better accuracy than some of the existing block algorithms for this problem.
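The small standard Sylvester solves that form the algorithm's main kernel can be illustrated with an off-the-shelf dense routine, as below; the paper's block scheme, the arbitrary choice of the small matrix, and the Krylov-subspace treatment of the large sparse case are not shown.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Solve A X + X B = C for a small dense example (illustration of the kernel only).
A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
C = np.ones((2, 2))
X = solve_sylvester(A, B, C)
print(np.allclose(A @ X + X @ B, C))   # True
```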

14.
This paper surveys several alternative data structures and algorithms for multiplying sparse upper-triangular matrices over closed semirings, and evaluates their efficiency in computing transitive closures of matrices over the Boolean semiring. Two new variants are introduced that outperform previously known methods on a collection of large data-sets drawn from linguistic applications.
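For concreteness, the dense sketch below shows the Boolean-semiring product and the repeated-squaring loop that computes a transitive closure; the paper's contribution concerns sparse upper-triangular representations of these matrices, which this toy example does not attempt.

```python
import numpy as np

def transitive_closure(adj):
    """Transitive closure by repeated squaring over the Boolean semiring (dense sketch)."""
    R = adj.astype(bool)
    while True:
        # Boolean semiring product: multiply as 0/1 integers, then threshold
        R_next = R | ((R.astype(int) @ R.astype(int)) > 0)
        if np.array_equal(R_next, R):
            return R
        R = R_next

A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
print(transitive_closure(A).astype(int))   # reachability for the chain 0 -> 1 -> 2 -> 3
```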

15.
In this paper, we describe scalable parallel algorithms for symmetric sparse matrix factorization, analyze their performance and scalability, and present experimental results for up to 1,024 processors on a Cray T3D parallel computer. Through our analysis and experimental results, we demonstrate that our algorithms substantially improve the state of the art in parallel direct solution of sparse linear systems, both in terms of scalability and overall performance. It is a well-known fact that dense matrix factorization scales well and can be implemented efficiently on parallel computers. In this paper, we present the first algorithms to factor a wide class of sparse matrices (including those arising from two- and three-dimensional finite element problems) that are asymptotically as scalable as dense matrix factorization algorithms on a variety of parallel architectures. Our algorithms incur less communication overhead and are more scalable than any previously known parallel formulation of sparse matrix factorization. Although, in this paper, we discuss Cholesky factorization of symmetric positive definite matrices, the algorithms can be adapted for solving sparse linear least squares problems and for Gaussian elimination of diagonally dominant matrices that are almost symmetric in structure. An implementation of one of our sparse Cholesky factorization algorithms delivers up to 20 GFlops on a Cray T3D for medium-size structural engineering and linear programming problems. To the best of our knowledge, this is the highest performance ever obtained for sparse Cholesky factorization on any supercomputer.

16.
This paper describes algorithms for non-negative factorization of sparse matrices and tensors, which is a popular technology in artificial intelligence in general and in computational linguistics in particular. It is proposed to use the latent Dirichlet distribution to reduce matrices and tensors to block-diagonal form in order to parallelize computations and accelerate the non-negative factorization of linguistic matrices and tensors of extremely large dimension. The proposed model also allows models to be supplemented with new data without recomputing the non-negative factorization of the entire very large tensor from scratch.

17.
Parallel algorithms for direct methods of analysis and solution of linear algebra problems with sparse symmetric irregularly structured matrices are considered. The performance of such algorithms is investigated. Upper estimates of the speedup and efficiency factors are obtained for a parallel algorithm for triangular decomposition of sparse matrices. Some results of numerical experiments carried out on a MIMD computer are given.

18.
A set of integral operational matrices, which are superior to the conventional and generalized integral operational matrices in the identification of time-varying linear systems via block pulse functions, can be calculated efficiently by the new recursive computational algorithms proposed in this paper. As the basis of these algorithms, the constant-difference properties of the entries of this set of operational matrices are proved, and as their main advantage, the reduction in computational cost is discussed. The proposed algorithms are especially useful when the number of block pulses is large in realistic applications of the block pulse function method.
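As background, the sketch below constructs the conventional block pulse operational matrix of integration that the paper's recursive algorithms generalize; the constant differences between neighbouring entries visible in the printout are the kind of structure the recursions exploit. The normalization and the interval [0, T] are the usual textbook conventions, assumed here.

```python
import numpy as np

def block_pulse_integration_matrix(m, T=1.0):
    """Conventional operational matrix of integration for m block pulse functions on [0, T]:
    P = (h/2) * (I + 2 * strictly upper triangular ones), with h = T/m."""
    h = T / m
    return (h / 2.0) * (np.eye(m) + 2.0 * np.triu(np.ones((m, m)), k=1))

print(block_pulse_integration_matrix(4))
```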

19.
Algorithms are presented for evaluating gradients and Hessians of logarithmic barrier functions for two types of convex cones: the cone of positive semidefinite matrices with a given sparsity pattern and its dual cone, the cone of sparse matrices with the same pattern that have a positive semidefinite completion. Efficient large-scale algorithms for evaluating these barriers and their derivatives are important in interior-point methods for nonsymmetric conic formulations of sparse semidefinite programs. The algorithms are based on the multifrontal method for sparse Cholesky factorization.
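A dense sketch of the quantity being computed, under the assumption that X is positive definite: for the log-det barrier phi(X) = -log det X, the gradient is -X^{-1}, obtained here from a Cholesky factor. The paper's algorithms evaluate the same derivatives for sparse patterns with a multifrontal Cholesky and never form the full dense inverse.

```python
import numpy as np

def logdet_barrier_and_grad(X):
    """Value and gradient of phi(X) = -log det X for positive definite X (dense sketch)."""
    L = np.linalg.cholesky(X)                 # X = L L^T, requires X positive definite
    value = -2.0 * np.sum(np.log(np.diag(L)))
    L_inv = np.linalg.inv(L)
    grad = -(L_inv.T @ L_inv)                 # equals -X^{-1}
    return value, grad

X = np.array([[4.0, 1.0], [1.0, 3.0]])
print(logdet_barrier_and_grad(X))
```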

20.
A simple direct method and its variants, based on matrix multiplications, to invert square non-singular matrices (which need not be positive definite) and to solve systems of linear equations are developed. Since the method and its variants involve only matrix multiplications, they are straightforward to parallelise and hence have an advantage over some existing well-known direct methods. Theoretical background, an analysis of the proposed method and the complexity of the algorithm for sparse and dense matrices are given. Two significantly different parallel algorithms for dense and sparse matrices are developed. In the case of the dense matrix, standard parallelisation of matrix multiplications was undertaken. However, for sparse matrices the standard parallel matrix multiplication is not effective, which led us to develop a parallel algorithm different from that for the dense matrix. The performance of our algorithms is tested via numerical examples.
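As one concrete instance of an inversion scheme built only from matrix multiplications, the sketch below runs the classical Newton-Schulz iteration; it is given as an illustration of the idea, not as the paper's exact method or its dense/sparse parallel variants.

```python
import numpy as np

def newton_schulz_inverse(A, iters=60):
    """Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k), built purely from matrix products."""
    n = A.shape[0]
    # standard starting guess that makes the iteration converge for nonsingular A
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)             # only matrix multiplications per step
    return X

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.allclose(newton_schulz_inverse(A) @ A, np.eye(2)))   # True
```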
