Similar Documents
Found 20 similar documents.
1.
The eigenvalues and eigenvectors of a matrix have many applications in engineering and science, such as structural analysis, signal and image processing, and quantum mechanics. One of the most important properties of an algorithm is its speed of execution, especially when it is applied to large matrices. For this reason, the authors propose a new methodology that uses a genetic algorithm to compute all the eigenvectors and eigenvalues of real symmetric and Hermitian matrices. The algorithm uses a general-purpose genetic-algorithm library developed by the authors (GALGA). The speed of execution and the influence of population size have been studied, and the algorithm has been tested on different matrices and population sizes, comparing execution speed against the number of eigenvectors computed. This new methodology is faster than the authors' previous algorithm and obtains all eigenvectors. In addition, its performance on the Coope matrix has been tested, contrasting the results with another technique published in the scientific literature.
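The GALGA library itself is not shown in the abstract, so as an illustrative sketch only (not the authors' implementation): a minimal genetic algorithm can search for eigenpairs of a small symmetric matrix by minimizing the residual ‖Av − λv‖, with λ taken as the Rayleigh quotient of the candidate vector. The matrix, population size, and genetic operators below are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0], [1.0, 3.0]])  # small symmetric test matrix

def fitness(v):
    # Rayleigh quotient gives the best eigenvalue estimate for a candidate vector;
    # the residual ||Av - lam*v|| is zero exactly at a true eigenpair.
    v = v / np.linalg.norm(v)
    lam = v @ A @ v
    return np.linalg.norm(A @ v - lam * v)

# Minimal generational GA: elitism, blend crossover, Gaussian mutation.
pop = rng.normal(size=(60, 2))
for _ in range(200):
    scores = np.array([fitness(v) for v in pop])
    elite = pop[np.argsort(scores)[:20]]           # keep the 20 best candidates
    parents = elite[rng.integers(0, 20, size=(40, 2))]
    children = 0.5 * (parents[:, 0] + parents[:, 1])   # blend crossover
    children += rng.normal(scale=0.05, size=children.shape)  # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(v) for v in pop])]
best = best / np.linalg.norm(best)
lam = best @ A @ best
```

Because every eigenpair is a global minimum of the residual, independent restarts of such a search can recover different eigenvectors.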

2.
黄金峰, 张合新, 张植. Control and Decision (《控制与决策》), 2012, 27(8): 1226-1230
To address the problem that traditional recursive subspace algorithms use a fixed forgetting factor and are therefore vulnerable to noise, a new variable-factor recursive subspace identification algorithm is proposed. The algorithm first introduces a variable factor to construct and update the input-output Hankel matrices and the observation vectors. It then uses the variable factor to improve a gradient-type subspace tracking algorithm for estimating the system's extended observability matrix, from which the system matrices are estimated. Finally, the Euclidean distance between eigenvalue spaces of the system matrix drives the variable-factor update step, making the algorithm adaptive. The proposed algorithm is applied to a class of time-invariant and slowly time-varying system models; numerical simulations show fast tracking speed and good tracking performance.

3.
Sorting on STAR     
This paper gives timing comparisons for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)^2 as compared to a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
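As a sketch of the Batcher approach (a serial Python rendering, not the STAR vector code): Batcher's odd-even merge sort performs a fixed, data-independent pattern of compare-exchange operations, which is what makes it vectorize well, at the cost of O(N(log N)^2) comparisons. Power-of-two input length is assumed for simplicity.

```python
def odd_even_merge_sort(a):
    # Batcher's odd-even merge sort; input length must be a power of two.
    n = len(a)
    if n <= 1:
        return list(a)
    half = n // 2
    left = odd_even_merge_sort(a[:half])
    right = odd_even_merge_sort(a[half:])
    return odd_even_merge(left + right)

def odd_even_merge(a):
    # Merge a concatenation of two sorted halves: recursively merge the
    # even- and odd-indexed subsequences, interleave, then do one
    # compare-exchange pass over adjacent odd/even pairs.
    n = len(a)
    if n == 2:
        return [min(a), max(a)]
    even = odd_even_merge(a[::2])
    odd = odd_even_merge(a[1::2])
    merged = [None] * n
    merged[::2], merged[1::2] = even, odd
    for i in range(1, n - 1, 2):
        if merged[i] > merged[i + 1]:
            merged[i], merged[i + 1] = merged[i + 1], merged[i]
    return merged
```

Every comparison position is known in advance, so the compare-exchange passes map directly onto vector min/max instructions.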

4.
Two parallel block tridiagonalization algorithms and implementations for dense real symmetric matrices are presented. Block tridiagonalization is a critical pre-processing step for the block tridiagonal divide-and-conquer algorithm for computing eigensystems and is useful for many algorithms desiring the efficiencies of block structure in matrices. For an “effectively” sparse matrix, which frequently results from applications with strong locality properties, a heuristic parallel algorithm is used to transform it into a block tridiagonal matrix such that the eigenvalue errors remain bounded by some prescribed accuracy tolerance. For a dense matrix without any usable structure, orthogonal transformations are used to reduce it to block tridiagonal form using mostly level 3 BLAS operations. Numerical experiments show that block tridiagonal structure obtained from this algorithm directly affects the computational complexity of the parallel block tridiagonal divide-and-conquer eigensolver. Reduction to block tridiagonal form provides significantly lower execution times, as well as memory traffic and communication cost, over the traditional reduction to tridiagonal form for eigensystem computations.

5.
Lattice basis reduction algorithms solve the shortest vector problem (SVP) on lattices; they occupy an important place in lattice theory and play a key role in lattice-based public-key cryptography. The mainstream algorithms generally regarded as most efficient are Blockwise-Korkine-Zolotarev (BKZ) and its improved form BKZ 2.0, whose main idea is blockwise reduction, invoking a local SVP solver polynomially many times. BKZ-type algorithms, however, still reduce insufficiently and become inefficient on high-dimensional lattices, and several improved variants exist. Building on existing algorithms, this paper optimizes the BKZ structure and applies recent advances in sieving to design a new combined algorithm, Blockwise-Sieving-Reduction (BSR). In the preprocessing stage, the lattice matrix is partitioned and each part is BKZ-preprocessed, a step that parallelizes directly. In the reduction stage, the algorithm combines the strengths of BKZ and sieving: multi-round BKZ with progressively larger block sizes is used for preprocessing, and an improved sieve replaces the original enumeration subroutine inside the BKZ structure. Inserting vectors improves the properties of the local lattice and raises the efficiency of BKZ, allowing SVP to be solved with larger blocks. For still higher-dimensional lattice matrices, a recursive variant called i-BSR is designed, which uses implementation techniques such as progressive reduction to handle lattices of larger dimension. A theoretical analysis argues that the algorithm performs lattice basis reduction and finds short lattice vectors. Experiments show that with larger blocks the algorithm solves SVP at acceptable time cost and yields vectors better than previously reported experimental results; the first vector it finds can be as short as 90% of the length obtained by BKZ 2.0.

6.
For a self-adjoint linear operator with a discrete spectrum or a Hermitian matrix, the “extreme” eigenvalues define the boundaries of clusters in the spectrum of real eigenvalues. The outer extreme ones are the largest and the smallest eigenvalues. If there are extended intervals in the spectrum in which no eigenvalues are present, the eigenvalues bounding these gaps are the inner extreme eigenvalues. We will describe a procedure for detecting the extreme eigenvalues that relies on the relationship between the acceleration rate of polynomial acceleration iteration and the norm of the matrix via the spectral theorem, applicable to normal matrices. The strategy makes use of the fast growth rate of Chebyshev polynomials to distinguish ranges in the spectrum of the matrix which are devoid of eigenvalues. The method is numerically stable with regard to the dimension of the matrix problem and is thus capable of handling matrices of large dimension. The overall computational cost is quadratic in the size of a dense matrix; linear in the size of a sparse matrix. We verify computationally that the algorithm is accurate and efficient, even on large matrices.
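A minimal sketch of the underlying idea (assuming the simplest ingredients, not the paper's full detection procedure): apply the three-term Chebyshev recurrence to a vector after mapping a trial interval [a, b] onto [−1, 1]. The iterate's norm stays bounded if the whole spectrum lies inside [a, b], but grows exponentially as soon as some eigenvalue falls outside, and this growth rate is the signal used to locate spectral gaps.

```python
import numpy as np

def cheb_norm_growth(A, a, b, k):
    """Norm of T_k(L) v, where L maps the interval [a, b] onto [-1, 1].
    Bounded in k if the spectrum of A lies in [a, b]; grows exponentially
    if any eigenvalue lies outside, since |T_k| explodes off [-1, 1]."""
    n = A.shape[0]
    L = (2.0 * A - (b + a) * np.eye(n)) / (b - a)
    v = np.ones(n) / np.sqrt(n)          # unit starting vector
    t_prev, t_cur = v, L @ v             # T_0 v and T_1 v
    for _ in range(k - 1):
        # three-term Chebyshev recurrence: T_{j+1} = 2 L T_j - T_{j-1}
        t_prev, t_cur = t_cur, 2.0 * (L @ t_cur) - t_prev
    return np.linalg.norm(t_cur)

A = np.diag([0.1, 0.5, 0.9, 3.0])  # eigenvalue 3.0 lies outside [0, 1]
```

With the trial interval [0, 3] the norm stays below 1; with [0, 1] the outlying eigenvalue 3.0 makes the norm blow up after a few steps.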

7.
Parallel Computing, 2007, 33(7-8): 521-540
This paper presents several new variants of the single-vector Arnoldi algorithm for computing approximations to eigenvalues and eigenvectors of a non-symmetric matrix. The context of this work is the efficient implementation of industrial-strength, parallel, sparse eigensolvers, in which robustness is of paramount importance, as well as efficiency. For this reason, Arnoldi variants that employ Gram-Schmidt with iterative reorthogonalization are considered. The proposed algorithms aim at improving the scalability when running in massively parallel platforms with many processors. The main goal is to reduce the performance penalty induced by global communications required in vector inner products and norms. In the proposed algorithms, this is achieved by reorganizing the stages that involve these operations, particularly the orthogonalization and normalization of vectors, in such a way that several global communications are grouped together while guaranteeing that the numerical stability of the process is maintained. The numerical properties of the new algorithms are assessed by means of a large set of test matrices. Also, scalability analyses show a significant improvement in parallel performance.
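As a serial sketch of the communication-grouping idea (the actual parallel variants are in the paper; the function below is a plain CGS2 Arnoldi, an assumption-level illustration): classical Gram-Schmidt with one reorthogonalization pass replaces j individual inner products with two block products Q^T w, so in a distributed setting each pass needs only a single global reduction.

```python
import numpy as np

def arnoldi_cgs2(A, v0, m):
    """Arnoldi with classical Gram-Schmidt plus one reorthogonalization
    pass (CGS2). Each pass is one block product Q^T w, i.e. one grouped
    reduction instead of j separate inner products."""
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ Q[:, j]
        h = Q[:, :j + 1].T @ w          # first grouped projection
        w -= Q[:, :j + 1] @ h
        c = Q[:, :j + 1].T @ w          # reorthogonalization pass
        w -= Q[:, :j + 1] @ c
        h += c
        H[:j + 1, j] = h
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 20))
Q, H = arnoldi_cgs2(A, rng.normal(size=20), 8)
```

The result satisfies the Arnoldi relation A Q_m = Q_{m+1} H to rounding error, with Q orthonormal to machine precision.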

8.
A Neural Network Method for Computing Matrix Eigenvalues and Eigenvectors
When the continuous-time fully-connected feedback neural network described by Oja's learning rule (Oja-N) is used to compute matrix eigenvalues and eigenvectors, the network's initial vector must lie on the unit hypersphere, which is inconvenient in practice. A neural network method (lyNN) for computing matrix eigenvalues and eigenvectors is therefore proposed. From the analytic solution of lyNN the following results are obtained: if the initial vector lies in the subspace spanned by the eigenvectors of any set of eigenvalues, the network's equilibrium vector also lies in that subspace; the conditions on the initial vector under which lyNN converges to the eigenvector of the largest eigenvalue are analyzed; the largest admissible space of initial vectors is identified for which lyNN converges to the eigen-subspaces of different eigenvalues; if the initial vector is orthogonal to a known eigenvector, the lyNN equilibrium vector is orthogonal to that eigenvector as well; and the equilibrium vector is shown to lie on the hypersphere determined by the nonzero initial vector. Based on this analysis, a concrete algorithm for computing eigenvalues and eigenvectors with lyNN is designed, and worked examples verify its effectiveness. lyNN exhibits no finite-time escape, whereas the Oja-N-based method necessarily escapes in finite time when the matrix is negative definite and the initial vector lies outside the unit hypersphere, causing the algorithm to fail. Compared with optimization-based methods, lyNN is easy to implement and computationally cheaper.
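The lyNN dynamics are not reproduced in the abstract; as a generic illustration of the neural-network family it builds on, the discrete Oja-type iteration below converges to the dominant eigenvector of a symmetric positive definite matrix. The matrix, step size, and (off-sphere) initial vector are assumptions chosen for the example.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 3 and 1
v = np.array([1.0, 0.3])                # deliberately not on the unit sphere
eta = 0.05                              # small step size for stability
for _ in range(2000):
    # Oja-style dynamics: dv/dt = A v - (v^T A v) v.
    # Equilibria are unit-norm eigenvectors of A.
    v = v + eta * (A @ v - (v @ A @ v) * v)
lam = (v @ A @ v) / (v @ v)             # Rayleigh quotient at the fixed point
```

At convergence v is a unit eigenvector of the largest eigenvalue, so the Rayleigh quotient returns that eigenvalue.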

9.
There exist so-called “fast” algorithms that exploit the special structure of Toeplitz matrices to, for example, solve a linear system of equations in O(n^2) flops. However, some implementations of classical algorithms that do not use this structure (O(n^3) flops) greatly reduce the time to solution when several cores are available. It is therefore necessary to develop “fast” algorithms that retain the benefits of new hardware and software. In this work, we propose a new approach to the Generalized Schur Algorithm, a well-known algorithm for the solution of Toeplitz systems, that works on Block-Toeplitz matrices. Our algorithm is based on matrix-matrix multiplications, allowing it to exploit an efficient implementation of this operation where one exists. It also uses the thread-level parallelism of multicore processors to decrease execution time.

10.
We introduce a new method to reduce a real matrix to a real Schur form by a sequence of similarity transformations that are 3D orthogonal transformations. Two significant features of this method are that all the transformed matrices and all the computations are in the real field, and that it can easily be parallelized. We call the algorithm that uses this method the real two-zero (RTZ) algorithm. We describe both serial and parallel implementations of the RTZ algorithm. Our tests indicate that the rate of convergence to a real Schur form is quadratic for real near-normal matrices with real distinct eigenvalues. Suppose n is the order of a real matrix A. In order to choose a sequence of 3D orthogonal transformations on A, we need to determine some ordering on triples in T = {(k, l, m) | 1 ⩽ k < l < m ⩽ n}.

11.
An algorithm for the determination of the eigenvalues of tridiagonal symmetric interval matrices is presented. The intervals of the eigenvalues are not overestimated, but exactly calculated. The algorithm is an efficient one, because it only needs twice as many operations as the Sturm algorithm, which is used for real matrices.
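The interval arithmetic itself is not shown here, but the Sturm algorithm the abstract refers to can be sketched for a point matrix (a minimal version, assuming a symmetric tridiagonal matrix given by its diagonal d and off-diagonal e): the Sturm count gives the number of eigenvalues below a shift, and bisection on that count isolates any single eigenvalue.

```python
def count_below(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are smaller than x (Sturm count,
    via the signs of the LDL^T pivots of T - x I)."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = -1e-300  # treat an exact zero pivot as negative
        if q < 0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-10):
    # Bisection on the Sturm count to bracket the k-th smallest eigenvalue.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For the 3×3 matrix with diagonal (2, 2, 2) and off-diagonal (1, 1), the eigenvalues are 2 − √2, 2, and 2 + √2, and bisection recovers each of them.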

12.
When analyzing the operating period and stability of a class of discrete-event dynamic systems, one must compute the eigenvalues and eigenvectors of a matrix in the max-plus-algebra sense, which has long been regarded as difficult and laborious work. This paper gives a very simple and practical method, together with the relevant theorems, for finding the eigenvalues and eigenvectors of an arbitrary square matrix.
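The paper's own simple method is not reproduced in the abstract; as a hedged illustration of what is being computed, the max-plus eigenvalue of an irreducible matrix equals its maximum cycle mean, which Karp's classical algorithm finds in O(n^3).

```python
def max_plus_eigenvalue(A):
    """Eigenvalue of an irreducible max-plus matrix = maximum cycle mean,
    via Karp's formula: lambda = max_v min_k (D[n][v] - D[k][v]) / (n - k),
    where D[k][v] is the max weight of a k-edge path from node 0 to v
    (entries of A equal to -inf denote missing edges)."""
    n = len(A)
    NEG = float("-inf")
    D = [[NEG] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for v in range(n):
            D[k][v] = max(
                (D[k - 1][u] + A[u][v] for u in range(n) if D[k - 1][u] > NEG),
                default=NEG,
            )
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue
        best = max(best, min((D[n][v] - D[k][v]) / (n - k)
                             for k in range(n) if D[k][v] > NEG))
    return best
```

For the matrix with the single cycle 0 → 1 → 0 of weights 2 and 3, the eigenvalue is the cycle mean (2 + 3) / 2 = 2.5.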

13.
When analyzing the operating period and stability of a class of discrete-event dynamic systems, one must compute the eigenvalues and eigenvectors of a matrix in the max-plus-algebra sense, which has long been regarded as difficult and laborious work. This paper gives a very simple and practical method, together with the relevant theorems, for finding the eigenvalues and eigenvectors of an arbitrary square matrix.

14.
International Journal of Computer Mathematics, 2012, 89(12): 1849-1863
This paper presents a computational procedure for finding the eigenvalues of a real matrix based on Alternate Quadrant Interlocking Factorization, a parallel direct method developed by Rao in 1994 for the solution of the general linear system Ax = b. The computational procedure is similar to the LR algorithm studied by Rutishauser in 1958 for finding the eigenvalues of a general matrix. After a series of transformations, the eigenvalues are obtained from simple 2×2 matrices derived from the main and cross diagonals of the limit matrix. A sufficient condition for the convergence of the computational procedure is proved. Numerical examples are given to demonstrate the method.

15.
A new method for computing matrix eigenvalues is proposed that combines neighborhood search with the θ-PSO algorithm, transforming the eigenvalue computation into an optimization problem. Unlike algorithms that must be run several times to find eigenvalues in different ranges, this method finds all eigenvalues of a matrix in a single run. Simulation experiments show that the algorithm is easy to program, applies to matrices of different types, achieves high accuracy, and converges quickly, typically within about 10 to 15 generations, fully meeting the accuracy and speed requirements of engineering practice.
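The θ-PSO variant with neighborhood search is not specified in the abstract; as a generic sketch of the optimization framing (plain PSO, with matrix and parameters chosen as assumptions), a swarm can minimize |det(A − λI)| over the scalar λ, which vanishes exactly at an eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3

def cost(lam):
    # Characteristic-polynomial magnitude: zero exactly at an eigenvalue.
    return abs(np.linalg.det(A - lam * np.eye(2)))

# Minimal PSO over the scalar lambda: inertia + cognitive + social terms.
x = rng.uniform(-5.0, 5.0, size=30)      # particle positions
v = np.zeros(30)                          # particle velocities
pbest = x.copy()
pcost = np.array([cost(xi) for xi in x])
gbest = pbest[np.argmin(pcost)]
for _ in range(100):
    r1, r2 = rng.random(30), rng.random(30)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    c = np.array([cost(xi) for xi in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    gbest = pbest[np.argmin(pcost)]
```

The swarm settles on one root of the characteristic polynomial; deflation or restarts would be needed to collect all of them, which is the gap the paper's combined method addresses.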

16.
To address the problem that the kernel matrix constructed by the incremental least-squares twin support vector regression machine poorly approximates the original kernel matrix, an incremental reduced least-squares twin support vector regression (IRLSTSVR) algorithm is proposed. The algorithm first uses a reduction method to assess the correlation between the column vectors of the kernel matrix and selects the samples that form those columns as support vectors, lowering the correlation among the columns so that the constructed kernel matrix better approximates the original one and the solution remains sparse. The inverse matrix is then updated incrementally and efficiently via the block-matrix inversion lemma, further shortening training time. Finally, the feasibility and effectiveness of the algorithm are verified on benchmark data sets. Experimental results show that, compared with existing representative algorithms, IRLSTSVR obtains sparse solutions and generalization performance closer to that of the offline algorithm.

17.
Many practical applications include matrix operations as essential procedures, and recent studies of matrix operations rely on parallel processing to reduce calculation delays. Because these operations are highly data intensive, many studies have investigated work distribution techniques and data access latency to accelerate algorithms. However, previous studies have not adequately considered hardware architectural features, although these greatly affect the performance of matrix operations. The present study therefore considers the architectural characteristics that affect the performance of matrix operations on real multicore processors. We use matrix multiplication, LU decomposition, and Cholesky factorization as test applications; these are well-known data-intensive mathematical algorithms used in various fields. We argue that applications access matrices only in a particular direction, and we propose the canonical data layout as the optimal matrix data layout compared with the block data layout. In addition, a tiling algorithm is used to increase temporal data locality in multilevel caches and to balance the workload as evenly as possible in multicore environments. Our experimental results show that applications using the canonical data layout with tiling achieve an 8.23% faster execution time and a 3.91% lower last-level cache miss rate compared with applications using the block data layout.
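A minimal sketch of the tiling idea (tile size and matrix shapes are assumptions; real implementations tune the tile to the cache hierarchy): each tile of C is accumulated from tile-sized panels of A and B, so the working set of the inner loop stays cache-resident.

```python
import numpy as np

def tiled_matmul(A, B, tile=32):
    """Blocked (tiled) matrix multiply: loop over tile-sized blocks so that
    one C tile plus the A and B panels feeding it fit in cache. Slicing
    clips automatically, so dimensions need not be multiples of the tile."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                C[i:i + tile, j:j + tile] += (
                    A[i:i + tile, p:p + tile] @ B[p:p + tile, j:j + tile]
                )
    return C
```

The (i, j) tile pairs also partition the work naturally across cores, which is how tiling doubles as a load-balancing scheme in the multicore setting described above.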

18.
The first algorithm is a generalization of the iterative method of successive coordinate overrelaxation. Instead of computing a single eigenvalue and its corresponding eigenvector at a time, p vectors are iterated simultaneously in such a way that they ultimately converge towards the eigenvectors corresponding to the p smallest eigenvalues. This method of simultaneous group coordinate overrelaxation has the advantage of exploiting the sparseness of the matrices A and B to the full extent, of operating on the originally given matrices, and of requiring a relatively small working storage space. The second algorithm is a variant of the well-known bisection method. It is shown that the inertia theorem for quadratic forms can be applied to localize the eigenvalues. Hence, an appropriate congruence transformation of the matrix A − βB, which is now assumed to be banded, allows one to determine the number of eigenvalues that are smaller than μ. To increase the numerical stability of the reduction process, partial pivoting is applied, consisting of auxiliary congruence transformations arranged so that the bandwidth is only doubled. This approach to the bisection method is conceptually much simpler; the algorithm preserves symmetry and requires a smaller working storage space for the computation of the eigenvectors by inverse vector iteration, in comparison with the well-known realizations of the bisection method.

19.
In the design of decentralized networked systems, it is important to know whether a given network topology can sustain stable dynamics. We consider a basic version of this problem here: given a vector space of sparse real matrices, does it contain a stable (Hurwitz) matrix? Said differently, is a feedback channel (corresponding to a non-zero entry) necessary for stabilization or can it be done without? We provide in this paper a set of necessary conditions and a set of sufficient conditions for the existence of stable matrices in a vector space of sparse matrices. We further prove some properties of the set of sparse matrix spaces that contain Hurwitz matrices. The conditions we exhibit are most easily stated in the language of graph theory, which we thus adopt in this paper.

20.
A numerical comparison is made of most published methods for solving the linear matrix equations which arise when a quadratic form Liapunov function is applied to a constant linear system (continuous or discrete, real or complex). Generally, for the real equations direct methods are satisfactory for systems of order ten or less, whereas for larger order systems iterative methods (based upon expressing the solution in terms of an infinite series) are to be preferred. For the complex equations the most convenient numerical method uses an explicit representation for the solution in terms of the eigenvalues and vectors of the system matrix. If the system matrix is in companion form then algorithms taking account of this structure offer minor improvements.
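As a sketch of the series-based iterative approach the comparison refers to (the matrix and iteration count are assumptions): for the discrete-time equation A X A^T − X + Q = 0 with Schur-stable A, the solution X = Σ_{k≥0} A^k Q (A^k)^T can be accumulated with a squaring trick that doubles the number of summed terms per step.

```python
import numpy as np

def dlyap_series(A, Q, iters=20):
    """Solve A X A^T - X + Q = 0 via the series X = sum_k A^k Q (A^k)^T,
    accelerated by squaring: after t steps the partial sum contains the
    first 2^t terms. Converges when A is Schur stable (all |eig| < 1)."""
    X, Ak = Q.copy(), A.copy()
    for _ in range(iters):
        X = X + Ak @ X @ Ak.T   # doubles the number of summed series terms
        Ak = Ak @ Ak            # Ak becomes A^(2^t)
    return X

A = np.array([[0.5, 0.1], [0.0, 0.3]])  # Schur-stable test matrix
Q = np.eye(2)
X = dlyap_series(A, Q)
```

The residual of the returned X in the original equation is at the level of rounding error, since the neglected tail of the series decays geometrically.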
