1.
2.
The security of the RSA cryptosystem rests on the difficulty of factoring large integers. This paper briefly surveys the integer-factorization algorithms commonly used to attack RSA, then details the theoretical foundations of the quadratic sieve (QS) and its refinements MPQS and PPMPQS. Building on the principles of PPMPQS, a fast method for finding PP relations is designed, the algorithm's complexity is analyzed, and PPMPQS is implemented in Visual C++ 6.0, successfully factoring a 70-digit decimal integer.
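The congruence-of-squares idea underlying QS and its PPMPQS variant can be illustrated with a toy Dixon-style factorizer: collect relations x² ≡ (smooth number) (mod n), then combine two relations whose exponent vectors sum to all-even values to obtain x² ≡ y² (mod n) and a nontrivial gcd. This is a simplified relative of QS (trial division instead of sieving); the factor base and the name `dixon_factor` are illustrative, not from the paper.

```python
from math import gcd, isqrt

def is_smooth(m, base):
    """Return the exponent vector of m over the factor base, or None if m is not smooth."""
    exps = []
    for p in base:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        exps.append(e)
    return exps if m == 1 else None

def dixon_factor(n, base=(2, 3, 5, 7, 11, 13)):
    """Toy Dixon/QS-style factoring: gather smooth relations x^2 mod n, then pair
    two relations whose combined exponent vector is even to get a congruence of
    squares (x1*x2)^2 = y^2 (mod n), from which gcd(x1*x2 - y, n) may split n."""
    relations = []  # list of (x, exponent vector)
    for x in range(isqrt(n) + 1, n):
        v = is_smooth(x * x % n, base)
        if v is None:
            continue
        for x2, v2 in relations:
            if all((a + b) % 2 == 0 for a, b in zip(v, v2)):
                y = 1
                for p, e in zip(base, (a + b for a, b in zip(v, v2))):
                    y *= p ** (e // 2)
                g = gcd(x * x2 - y, n)
                if 1 < g < n:
                    return g
        relations.append((x, v))
    return None
```

For n = 1649, for example, this combines the relations 41² ≡ 2⁵ and 43² ≡ 2³·5² (mod n) into 1763² ≡ 80² (mod n), and gcd(1763 − 80, 1649) = 17 splits n.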
3.
A Two-Stage Co-Clustering Collaborative Filtering Algorithm
A two-stage rating-prediction method is proposed, built on a new co-clustering algorithm (BlockClust) and weighted non-negative matrix factorization. First, the rating patterns of the original matrix are co-clustered along both the user and item dimensions; then, within each resulting block, unknown ratings are predicted by weighted non-negative matrix factorization. The advantage is twofold: the matrices produced by the first-stage clustering are far smaller than the original rating matrix, and the ratings within one block share similar patterns, so the computational cost of the prediction stage drops sharply while the accuracy of non-negative matrix factorization on sparse matrices improves. Incremental-learning procedures are further given for efficiently updating the prediction model under the three update modes of a recommender system. Comparisons against seven related methods on the MovieLens dataset confirm the method's effectiveness and its value for large-scale real-time recommender systems.
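The second stage, weighted NMF restricted to the observed entries of one co-cluster block, can be sketched as follows. BlockClust itself is not publicly specified, so the block is assumed already extracted; the multiplicative-update rules below are the standard weighted-NMF rules, not necessarily the paper's exact formulation, and the tiny example is hypothetical.

```python
import random

def weighted_nmf(R, M, k=2, iters=200, seed=0):
    """Weighted NMF by multiplicative updates: minimize ||M * (R - U V)||_F^2
    with U, V >= 0, where M is a 0/1 mask of observed ratings.  A toy dense
    implementation intended for one small co-cluster block."""
    rng = random.Random(seed)
    n, m = len(R), len(R[0])
    U = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    V = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        P = [[sum(U[i][t] * V[t][j] for t in range(k)) for j in range(m)]
             for i in range(n)]
        # U <- U * ((M.R) V^T) / ((M.P) V^T), elementwise
        for i in range(n):
            for t in range(k):
                num = sum(M[i][j] * R[i][j] * V[t][j] for j in range(m))
                den = sum(M[i][j] * P[i][j] * V[t][j] for j in range(m)) + 1e-9
                U[i][t] *= num / den
        P = [[sum(U[i][t] * V[t][j] for t in range(k)) for j in range(m)]
             for i in range(n)]
        # V <- V * (U^T (M.R)) / (U^T (M.P)), elementwise
        for t in range(k):
            for j in range(m):
                num = sum(M[i][j] * U[i][t] * R[i][j] for i in range(n))
                den = sum(M[i][j] * U[i][t] * P[i][j] for i in range(n)) + 1e-9
                V[t][j] *= num / den
    return U, V
```

On a rank-1 toy block R = [[1, 2, 3], [2, 4, 6]] with entry (1, 2) masked out, a k = 1 factorization of the observed entries predicts the missing rating as roughly 6.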
4.
5.
An Incremental Learning Algorithm for Non-negative Matrix Factorization under Sparsity Constraints
Non-negative matrix factorization (NMF) is an effective subspace dimensionality-reduction method. To curb the growth of NMF's computational cost as training samples accumulate, and to improve the sparseness of the factorized data, an incremental learning algorithm for NMF under sparsity constraints is proposed. Under a sparsity constraint, the algorithm feeds the result of the previous factorization into the subsequent iterations, saving substantial computation time while making the factorized representation sparser. Experiments on the ORL and CBCL face databases demonstrate the algorithm's effectiveness for dimensionality reduction.
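A minimal sketch of the warm-start idea: the update rules below are the standard multiplicative rules with an L1 penalty on H as the sparsity term; the paper's exact sparsity measure and update scheme may differ.

```python
import random

def sparse_nmf(V, k, iters=300, lam=0.05, W=None, H=None, seed=0):
    """NMF with an L1 sparsity penalty on H:  min ||V - W H||_F^2 + lam * sum(H),
    W, H >= 0.  Passing in a previously learned W (and optionally H) warm-starts
    the iteration -- the essence of the incremental-learning idea above."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = W or [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = H or [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WH = [[sum(W[i][t] * H[t][j] for t in range(k)) for j in range(m)]
              for i in range(n)]
        # H <- H * (W^T V) / (W^T W H + lam)
        for t in range(k):
            for j in range(m):
                num = sum(W[i][t] * V[i][j] for i in range(n))
                den = sum(W[i][t] * WH[i][j] for i in range(n)) + lam + 1e-9
                H[t][j] *= num / den
        WH = [[sum(W[i][t] * H[t][j] for t in range(k)) for j in range(m)]
              for i in range(n)]
        # W <- W * (V H^T) / (W H H^T)
        for i in range(n):
            for t in range(k):
                num = sum(V[i][j] * H[t][j] for j in range(m))
                den = sum(WH[i][j] * H[t][j] for j in range(m)) + 1e-9
                W[i][t] *= num / den
    return W, H
```

When new sample columns arrive, one calls `sparse_nmf` on the enlarged matrix passing the previously learned W, so the iteration restarts near the old solution instead of from random and needs far fewer refinement sweeps.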
6.
7.
8.
9.
10.
11.
The idea of defining generalized band matrices is based on the recognition that several pattern matrices and their inverses have low-rank submatrices in certain domains. Theoretical considerations concerning generalized band matrices enable us to give a uniform treatment of several well-known classes of matrices, such as band matrices, block band matrices, band matrices with low-rank corrections, sparse matrices, and their inverses.
Making use of the new notions of information content and compact representation of matrices, the concept of proper matrices is extended to generalized band matrices.
Some reduction algorithms are presented which help to uncover certain hidden structural properties of generalized band matrices. The theoretical results are illustrated by a large number of figures showing numerical examples.
Work supported by the Progetto Finalizzato Calcolo Parallelo e Sistemi Informatici of CNR.
Visiting Professor at the University of Pisa under the support of GNIM-CNR.
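The "low-rank submatrices in certain domains" phenomenon is easy to verify numerically: the inverse of a tridiagonal (band) matrix is dense, yet every submatrix taken strictly above (or below) its diagonal has rank 1, so all its 2 × 2 minors vanish. A self-contained check (Gauss-Jordan inversion is used only to keep the sketch dependency-free):

```python
def invert(A):
    """Invert a small dense matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [x / piv for x in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

# Tridiagonal matrix with 2 on the diagonal, -1 off it.
n = 5
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
B = invert(A)
# 2x2 minor of the off-diagonal block B[0:2][3:5] -- rank 1, so it vanishes.
minor = B[0][3] * B[1][4] - B[0][4] * B[1][3]
```

For this particular A the inverse is known in closed form, (A⁻¹)ᵢⱼ = min(i, j)(n + 1 − max(i, j))/(n + 1) with 1-based indices, which makes the rank-1 structure of the off-diagonal blocks explicit.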
12.
Interval Newton/Generalized Bisection methods reliably find all numerical solutions within a given domain. Both computational complexity analysis and numerical experiments have shown that solving the interval linear systems generated by interval Newton's methods can be computationally expensive, especially when the nonlinear system is large. In applications, many large-scale nonlinear systems of equations give rise to sparse interval Jacobian matrices. In this paper, we first propose a general indexed storage scheme for sparse interval matrices. We then present an iterative interval linear solver that utilizes the proposed indexed storage scheme. The new general iterative sparse interval linear solver is expected to improve the overall performance of interval Newton/Generalized Bisection methods when the Jacobian matrices are sparse. In Section 1, we briefly review interval Newton's methods. In Section 2, we review some currently used storage schemes for sparse systems. In Section 3, we introduce a new indexed scheme for storing general sparse matrices. In Section 4, we present both sequential and parallel algorithms to evaluate a general sparse Jacobian matrix. In Section 5, we present both sequential and parallel algorithms to solve the corresponding interval linear system by the all-row preconditioned scheme. Conclusions and future work are discussed in Section 6.
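One standard indexed scheme of this kind is compressed sparse row (CSR): store only the nonzeros, their column indices, and a row-pointer array. For an interval matrix each stored value would be a (lower, upper) pair rather than a scalar. CSR here is a stand-in for the paper's scheme, which may differ in detail:

```python
def to_csr(dense):
    """Convert a dense matrix to CSR form: values, column indices, row pointers.
    Row i occupies positions ptr[i]..ptr[i+1]-1 of vals/cols."""
    vals, cols, ptr = [], [], [0]
    for row in dense:
        for j, x in enumerate(row):
            if x != 0:
                vals.append(x)
                cols.append(j)
        ptr.append(len(vals))
    return vals, cols, ptr

def csr_matvec(vals, cols, ptr, x):
    """Sparse matrix-vector product touching only stored nonzeros."""
    return [sum(vals[k] * x[cols[k]] for k in range(ptr[i], ptr[i + 1]))
            for i in range(len(ptr) - 1)]
```

A matrix-vector product then costs O(nnz) instead of O(n²), which is what makes the iterative interval solver attractive for large sparse Jacobians.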
13.
A new block algorithm for computing a full-rank solution of the Sylvester-observer equation arising in state estimation is proposed. The major computational kernels of this algorithm are: 1) solutions of standard Sylvester equations, in each of which one of the matrices is of much smaller order than the system matrix (furthermore, this small matrix can be chosen arbitrarily), and 2) orthogonal reduction of small-order matrices. Numerically stable algorithms exist for performing these tasks, including Krylov-subspace methods for solving large and sparse Sylvester equations. The proposed algorithm is also rich in Level 3 Basic Linear Algebra Subprograms (BLAS-3) computations and is thus suitable for high-performance computing. Furthermore, numerical experiments on benchmark examples show that the algorithm is more accurate than some of the existing block algorithms for this problem.
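For the small standard Sylvester equations in kernel 1), a dependency-free solver can use the vec/Kronecker identity: A X + X B = C becomes one linear system in the entries of X. This is only viable for the small matrices the abstract mentions; large sparse instances would use Bartels-Stewart or the Krylov-subspace methods cited above. The helper names below are illustrative:

```python
def gauss_solve(K, b):
    """Solve K v = b by Gaussian elimination with partial pivoting (small, dense)."""
    n = len(K)
    M = [K[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    v = [0.0] * n
    for i in range(n - 1, -1, -1):
        v[i] = (M[i][n] - sum(M[i][j] * v[j] for j in range(i + 1, n))) / M[i][i]
    return v

def sylvester(A, B, C):
    """Solve A X + X B = C (X is n x m) by assembling the coefficient matrix of
    the linearized system: coefficient of X[k][l] in (AX + XB)[i][j] is
    A[i][k]*(j==l) + (i==k)*B[l][j]."""
    n, m = len(A), len(B)
    K = [[A[i][k] * (j == l) + (i == k) * B[l][j]
          for k in range(n) for l in range(m)]
         for i in range(n) for j in range(m)]
    v = gauss_solve(K, [C[i][j] for i in range(n) for j in range(m)])
    return [v[i * m:(i + 1) * m] for i in range(n)]
```

With A = diag(2, 3) and B = I, the equation reduces to (A + I) X = C, an easy case to check by hand.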
14.
This paper surveys several alternative data structures and algorithms for multiplying sparse upper-triangular matrices over closed semirings, and evaluates their efficiency in computing transitive closures of matrices over the Boolean semiring. Two new variants are introduced that outperform previously known methods on a collection of large data sets drawn from linguistic applications.
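A sparse upper-triangular Boolean matrix is just the adjacency matrix of a DAG, and its reflexive-transitive closure is reachability. A minimal sketch using one candidate data structure, a row-to-column-set dictionary (this is one of many layouts such a survey might compare, not a specific one from the paper):

```python
def bool_mul(A, B):
    """Multiply sparse Boolean matrices stored as {row: set_of_columns} over the
    Boolean semiring (logical OR as addition, AND as multiplication)."""
    C = {}
    for i, cols in A.items():
        acc = set()
        for k in cols:
            acc |= B.get(k, set())
        if acc:
            C[i] = acc
    return C

def transitive_closure(A, n):
    """Reflexive-transitive closure of an n x n upper-triangular Boolean matrix:
    iterate R <- R | R*A until a fixed point (reachability in the DAG)."""
    R = {i: {i} | A.get(i, set()) for i in range(n)}
    while True:
        RN = {i: cols | bool_mul({i: cols}, A).get(i, set())
              for i, cols in R.items()}
        if RN == R:
            return R
        R = RN
```

Because the matrix is upper triangular, the iteration terminates after at most the length of the longest path in the DAG.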
15.
Gupta A., Karypis G., Kumar V. IEEE Transactions on Parallel and Distributed Systems, 1997, 8(5): 502-520
In this paper, we describe scalable parallel algorithms for symmetric sparse matrix factorization, analyze their performance and scalability, and present experimental results for up to 1,024 processors on a Cray T3D parallel computer. Through our analysis and experimental results, we demonstrate that our algorithms substantially improve the state of the art in parallel direct solution of sparse linear systems, both in terms of scalability and overall performance. It is a well-known fact that dense matrix factorization scales well and can be implemented efficiently on parallel computers. In this paper, we present the first algorithms to factor a wide class of sparse matrices (including those arising from two- and three-dimensional finite element problems) that are asymptotically as scalable as dense matrix factorization algorithms on a variety of parallel architectures. Our algorithms incur less communication overhead and are more scalable than any previously known parallel formulation of sparse matrix factorization. Although, in this paper, we discuss Cholesky factorization of symmetric positive definite matrices, the algorithms can be adapted for solving sparse linear least squares problems and for Gaussian elimination of diagonally dominant matrices that are almost symmetric in structure. An implementation of one of our sparse Cholesky factorization algorithms delivers up to 20 GFlops on a Cray T3D for medium-size structural engineering and linear programming problems. To the best of our knowledge, this is the highest performance ever obtained for sparse Cholesky factorization on any supercomputer.
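The serial kernel being parallelized is the Cholesky factorization A = LLᵀ. A toy left-looking version that merely skips zero entries conveys the structure; the real distributed algorithms partition columns and supernodes across processors guided by the elimination tree, which this sketch does not attempt:

```python
import math

def sparse_cholesky(A):
    """Left-looking Cholesky A = L L^T on dense storage, skipping zero entries.
    A must be symmetric positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j) if L[j][k])
        L[j][j] = math.sqrt(s)
        for i in range(j + 1, n):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j) if L[j][k])
            if s:
                L[i][j] = s / L[j][j]
    return L
```

For a tridiagonal SPD matrix the factor L is bidiagonal (no fill-in), which is the structural property that sparse direct solvers try to preserve through reordering.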
16.
This paper describes algorithms for the non-negative factorization of sparse matrices and tensors, a popular technology in artificial intelligence in general and in computational linguistics in particular. It is proposed to use latent Dirichlet allocation to reduce matrices and tensors to block-diagonal form, which parallelizes the computations and accelerates the non-negative factorization of linguistic matrices and tensors of extremely large dimension. The proposed model also allows models to be supplemented with new data without redoing the non-negative factorization of the entire very large tensor from scratch.
17.
Algorithms of parallel computations for linear algebra problems with irregularly structured matrices
Parallel algorithms for direct methods of analysis and solution of linear algebra problems with sparse, symmetric, irregularly structured matrices are considered. The performance of such algorithms is investigated. Upper estimates of the speedup and efficiency factors are obtained for a parallel algorithm for the triangular decomposition of sparse matrices. Some results of numerical experiments carried out on a MIMD computer are given.
18.
A set of integral operational matrices, which are superior to the conventional and generalized integral operational matrices in the identification of time-varying linear systems via block pulse functions, can be calculated efficiently by the new recursive computational algorithms proposed in this paper. As the basis of these algorithms, the constant-difference properties of the entries of the set of operational matrices are proved, and the reduction in computational cost that the algorithms achieve is discussed. The proposed algorithms are especially useful when the number of block pulses is large, as in realistic applications of the block pulse function method.
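For m block pulse functions on [0, T], the conventional operational matrix of integration P has h/2 on the diagonal and h above it (h = T/m), so every diagonal is constant; this is the kind of constant-difference structure the recursive algorithms exploit. A minimal sketch of the conventional matrix (the paper's superior set of matrices is a refinement of this idea):

```python
def bpf_integration_matrix(m, T=1.0):
    """Conventional operational matrix of integration P for m block pulse
    functions on [0, T]: if c holds the BPF coefficients of f, then c P holds
    the BPF coefficients of the integral of f.  Entries: h/2 on the diagonal,
    h above it, 0 below, with h = T/m."""
    h = T / m
    return [[h / 2 if i == j else h if j > i else 0.0 for j in range(m)]
            for i in range(m)]
```

Integrating f(t) = 1 (all BPF coefficients equal to 1) gives coefficients (j + 1/2)h, i.e. the midpoint values of t on each block, as expected for the integral t.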
19.
Martin S. Andersen, Joachim Dahl, Lieven Vandenberghe. Optimization Methods & Software, 2013, 28(3): 396-423
Algorithms are presented for evaluating gradients and Hessians of logarithmic barrier functions for two types of convex cones: the cone of positive semidefinite matrices with a given sparsity pattern and its dual cone, the cone of sparse matrices with the same pattern that have a positive semidefinite completion. Efficient large-scale algorithms for evaluating these barriers and their derivatives are important in interior-point methods for nonsymmetric conic formulations of sparse semidefinite programs. The algorithms are based on the multifrontal method for sparse Cholesky factorization.
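In the dense 2 × 2 case the barrier and its gradient are tiny closed forms: f(X) = −log det X has ∇f(X) = −X⁻¹. The sparse algorithms in the abstract compute this same quantity, restricted to the sparsity pattern, via multifrontal Cholesky; the dense toy below just sanity-checks the formula by finite differences:

```python
import math

def neglogdet(X):
    """Logarithmic barrier -log det X for a 2x2 positive definite matrix."""
    a, b, c, d = X[0][0], X[0][1], X[1][0], X[1][1]
    return -math.log(a * d - b * c)

def grad_neglogdet(X):
    """Gradient of -log det X, which equals -X^{-1} (here written out for 2x2)."""
    a, b, c, d = X[0][0], X[0][1], X[1][0], X[1][1]
    det = a * d - b * c
    return [[-d / det, b / det], [c / det, -a / det]]
```

For X = [[2, 1], [1, 3]] the (0, 0) entry of the gradient is −3/5, and a central finite difference on that entry reproduces it.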
20.
Advances in Engineering Software, 1999, 30(9-11): 839-845
A simple direct method and its variants, based on matrix multiplications, are developed to invert square non-singular matrices (which need not be positive definite) and to solve systems of linear equations. Since the method and its variants involve only matrix multiplications, they are straightforward to parallelise and hence have an advantage over some existing well-known direct methods. The theoretical background, an analysis of the proposed method, and the complexity of the algorithm for sparse and dense matrices are given. Two significantly different parallel algorithms for dense and sparse matrices are developed. In the case of the dense matrix, standard parallelisation of matrix multiplications was undertaken. For sparse matrices, however, standard parallel matrix multiplication is not effective, which led us to develop a parallel algorithm different from that for the dense matrix. The performance of our algorithms is tested via numerical examples.
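The paper's method is direct, but the flavor of multiplication-only inversion can be conveyed by the related (iterative) Newton-Schulz scheme X ← X(2I − AX), which likewise reduces everything to matrix products and therefore parallelizes the same way. This sketch is a stand-in for illustration, not the paper's algorithm:

```python
def matmul(A, B):
    """Plain dense matrix product (the only building block the scheme needs)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def newton_schulz_inverse(A, iters=30):
    """Approximate A^{-1} using only matrix multiplications via the
    Newton-Schulz iteration X <- X (2I - A X).  The starting guess
    X0 = A^T / (||A||_1 ||A||_inf) guarantees convergence."""
    n = len(A)
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(x) for x in row) for row in A)
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]
    I2 = [[2.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(iters):
        AX = matmul(A, X)
        X = matmul(X, [[I2[i][j] - AX[i][j] for j in range(n)] for i in range(n)])
    return X
```

Because each sweep is two matrix products, a parallel implementation inherits whatever parallel matmul is available, which is precisely the advantage the abstract claims for multiplication-based schemes.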