Similar Documents
20 similar documents found (search time: 203 ms)
1.
The problem of computing the largest eigenvalue of a high-order matrix and its corresponding eigenvector is transformed into the solution of a high-order system of nonlinear equations. On this basis, quasi-Newton methods for computing the largest eigenvalue and its eigenvector are proposed: the rearranged Broyden, BFS, and DFP update formulas for the largest eigenvalue and its normalized eigenvector are given, together with the corresponding Broyden, BFS, and DFP algorithms. High-order judgment matrices from the Analytic Hierarchy Process are used as examples to verify the feasibility of the method and to illustrate its comparatively fast convergence.
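The reformulation described above can be sketched concretely. The following pure-Python illustration is not the paper's code: the matrix, the starting point, and the use of an exact Newton step (where the paper would substitute a Broyden/BFS/DFP secant update for the Jacobian) are all illustrative assumptions. The dominant eigenpair of A is a root of F(v, lam) = [A v - lam v; v.v - 1].

```python
# Sketch: recast the dominant-eigenpair problem as the nonlinear system
# F(v, lam) = [A v - lam v; v.v - 1] = 0 and solve it with Newton's method.
# A quasi-Newton variant (Broyden, DFP, ...) would replace the exact
# Jacobian below with a secant-type update.

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def dominant_eigenpair(A, v, lam, iters=20):
    n = len(A)
    for _ in range(iters):
        # Residual F(v, lam)
        F = [sum(A[i][j] * v[j] for j in range(n)) - lam * v[i] for i in range(n)]
        F.append(sum(t * t for t in v) - 1.0)
        # Jacobian [[A - lam*I, -v], [2 v^T, 0]]
        J = [[A[i][j] - (lam if i == j else 0.0) for j in range(n)] + [-v[i]]
             for i in range(n)]
        J.append([2.0 * t for t in v] + [0.0])
        d = solve(J, [-f for f in F])
        v = [v[i] + d[i] for i in range(n)]
        lam += d[n]
    return v, lam

A = [[2.0, 1.0], [1.0, 2.0]]          # eigenvalues 1 and 3
v, lam = dominant_eigenpair(A, [1.0, 0.5], 2.5)
```

Starting near the dominant pair, the iteration converges to lam = 3 with unit eigenvector proportional to [1, 1].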

2.
A new method for computing matrix eigenvalues and eigenvectors based on evolution strategies is proposed. During evolution, individuals are trained through recombination, mutation, and selection, and approach the optimal solution; the program terminates when a prescribed error tolerance is reached. Experimental results show that, compared with traditional methods, the method converges faster and improves solution accuracy tenfold. The algorithm can quickly and effectively obtain the eigenvalues and eigenvectors of an arbitrary matrix.
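A toy version of this idea can be written in a few lines. The sketch below uses a simple (1+1) evolution strategy (mutation plus selection only; the paper also uses recombination), minimizing the residual ||A v - lam v||^2 + (v.v - 1)^2. The matrix, step-adaptation constants, and iteration count are illustrative assumptions.

```python
import random

# (1+1) evolution strategy sketch: an eigenpair of A is found by minimising
# the residual ||A v - lam v||^2 + (v.v - 1)^2 through mutation and selection.

def fitness(A, x):
    v, lam = x[:-1], x[-1]
    n = len(v)
    r = [sum(A[i][j] * v[j] for j in range(n)) - lam * v[i] for i in range(n)]
    return sum(t * t for t in r) + (sum(t * t for t in v) - 1.0) ** 2

def es_eigenpair(A, seed=0, iters=5000):
    rng = random.Random(seed)
    n = len(A)
    x = [rng.uniform(-1, 1) for _ in range(n)] + [rng.uniform(0, 4)]
    f, sigma = fitness(A, x), 0.5
    for _ in range(iters):
        y = [xi + rng.gauss(0, sigma) for xi in x]   # mutation
        fy = fitness(A, y)
        if fy < f:                 # selection: keep the better individual
            x, f = y, fy
            sigma *= 1.1           # 1/5-success-style step adaptation
        else:
            sigma *= 0.98
    return x, f

A = [[2.0, 1.0], [1.0, 2.0]]       # eigenvalues 1 and 3
x, f = es_eigenpair(A)
lam = x[-1]                        # converges to one of the eigenvalues
```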

3.
Subspace iteration is one of the effective and widely used methods for computing the low-order eigenvalues and eigenvectors of large structural matrices. To improve computational efficiency and accelerate convergence of the iteration, this paper presents numerical studies of: 1) the choice of initial vectors and the use of random vectors in [-1,1]; 2) origin-shift acceleration of convergence and estimation of the shift factor; 3) minimization of the computational cost and optimal choice of the subspace size; 4) the application of Sturm sequences. Satisfactory results are obtained.
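The origin-shift acceleration studied in this abstract can be seen in a minimal single-vector instance (subspace dimension 1) of shifted inverse iteration: with a shift sigma near the wanted low eigenvalue, iterating x <- (A - sigma I)^-1 x converges to the eigenvector of the eigenvalue closest to sigma, and the closer the shift, the faster the convergence. The matrix, shift, and start vector below are illustrative assumptions, not from the paper.

```python
# Shifted inverse iteration, the kernel of subspace iteration for low modes.

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def lowest_mode(A, sigma=0.0, iters=50):
    n = len(A)
    B = [[A[i][j] - (sigma if i == j else 0.0) for j in range(n)] for i in range(n)]
    x = [1.0] * n                       # simple start vector
    for _ in range(iters):
        x = solve(B, x)                 # x <- (A - sigma*I)^-1 x
        nrm = sum(t * t for t in x) ** 0.5
        x = [t / nrm for t in x]
    # Rayleigh quotient gives the eigenvalue estimate
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(Ax[i] * x[i] for i in range(n))
    return lam, x

A = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
lam, x = lowest_mode(A, sigma=0.5)   # eigenvalues of A: 2-sqrt(2), 2, 2+sqrt(2)
```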

4.
A PSO algorithm for computing matrix eigenvalues and eigenvectors   Cited: 3 (self-citations: 1, others: 2)
A solution method based on particle swarm optimization (PSO) is proposed: the solution of the linear equations is recast as an unconstrained optimization problem, and PSO is used to compute the eigenvalues and eigenvectors of a matrix. Simulation results show that the method achieves high accuracy and fast convergence, typically converging within about 10 generations, and can effectively obtain the eigenvalues and eigenvectors of an arbitrary matrix.
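The unconstrained reformulation mentioned here can be sketched with a standard PSO loop. This is not the paper's implementation: the objective ||A v - lam v||^2 + (v.v - 1)^2, the swarm size, inertia/acceleration coefficients, and iteration count are all illustrative assumptions.

```python
import random

# PSO sketch: eigenpairs of A as minimisers of an unconstrained objective.

def objective(A, p):
    v, lam = p[:-1], p[-1]
    n = len(v)
    r = [sum(A[i][j] * v[j] for j in range(n)) - lam * v[i] for i in range(n)]
    return sum(t * t for t in r) + (sum(t * t for t in v) - 1.0) ** 2

def pso_eigenpair(A, swarm=30, iters=400, seed=1):
    rng = random.Random(seed)
    dim = len(A) + 1
    xs = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(swarm)]
    vs = [[0.0] * dim for _ in range(swarm)]
    pb = [x[:] for x in xs]                          # personal bests
    pf = [objective(A, x) for x in xs]
    g = pb[min(range(swarm), key=lambda i: pf[i])][:]  # global best
    gf = min(pf)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # inertia + cognitive + social velocity update
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * rng.random() * (pb[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = objective(A, xs[i])
            if f < pf[i]:
                pb[i], pf[i] = xs[i][:], f
                if f < gf:
                    g, gf = xs[i][:], f
    return g, gf

A = [[2.0, 1.0], [1.0, 2.0]]         # eigenvalues 1 and 3
best, best_f = pso_eigenpair(A)
lam_best = best[-1]
```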

5.
An improved ESPRIT algorithm for DOA estimation of coherent signal sources   Cited: 1 (self-citations: 0, others: 1)
ESPRIT is a classical algorithm in spatial spectrum estimation, but it applies only to non-coherent signals. An improved ESPRIT algorithm based on the SVD is proposed: the covariance matrix of the received array data is eigendecomposed; the eigenvector corresponding to the largest eigenvalue, which carries all the signal information, is taken and rearranged into a matrix in a prescribed way; an SVD of the reconstructed matrix yields the signal subspace; and the solution then follows the LS-ESPRIT algorithm. Extensive simulations show that the algorithm can resolve coherent signals and outperforms the improved ESPRIT algorithm based on spatial smoothing; for non-coherent signals, the improved algorithm outperforms conventional LS-ESPRIT.

6.
Parameter setting for Hopfield neural networks   Cited: 8 (self-citations: 0, others: 8)
Starting from the TSP, the behavior of the Hopfield neural network is analyzed in detail. An enhanced energy function is adopted, which is more effective than the H-T formulation. The subspaces corresponding to the eigenvalues of the weight matrix are analyzed from a geometric viewpoint, yielding criteria for setting the network parameters. Simulation results show that the new network parameters guarantee convergence of the network to valid solutions.

7.
郑锋  程勉  高为炳 《自动化学报》1995,21(3):257-265
Solving for the characteristic matrix is the key problem in stabilizing time-delay systems. This paper gives a method for computing the characteristic matrix in the general case where the algebraic and geometric multiplicities of the system's characteristic roots take arbitrary values: the problem is reduced to solving a set of linear algebraic equations, and sufficient conditions are obtained for this system to be solvable and for the solution vectors corresponding to the same eigenvalue to be linearly independent. In addition, an algorithm is proposed to handle the stabilization problem when the left eigenvectors corresponding to distinct eigenvalues are linearly dependent.

8.
邹士新  张妍 《计算机仿真》2010,27(1):43-45,75
A new robust matched-field localization scheme based on perturbation constraints in the replica-field noise subspace is proposed. At each search position, the environmental parameters are randomly perturbed to generate replica-field covariance matrices; eigendecomposition yields the replica signal subspace and the replica noise subspace, and the constraint subspace is spanned by the replica noise subspace. The measured-data covariance matrix is processed in the same way, except that the eigenvector corresponding to its largest eigenvalue is taken as the true signal vector, which together with the constraint subspace forms the localization ambiguity surface. The algorithm is validated with simulated and experimental data; the results show that it is simultaneously high-resolution and robust.

9.
肖自红 《计算机工程与应用》2012,48(25):149-153,173
The key to understanding complex networks is to discover their community structure quickly and accurately. Spectral clustering, based on graph theory, is an effective and globally convergent community-detection algorithm whose computational cost is concentrated in computing eigenvalues and eigenvectors. Exploiting the relation between the solutions of constant-coefficient linear ordinary differential equations and the eigenvalues of the coefficient matrix, two ODE-based spectral community-detection algorithms (AMCF and LMCF) are proposed. These algorithms avoid the expensive computation of eigenvalues and eigenvectors and offer a new approach to community detection. Theoretical analysis and experiments verify their effectiveness.
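The ODE/eigenvalue relation this abstract exploits can be illustrated in miniature: the solution of x'(t) = A x(t) is dominated by e^(lam_max * t) v_max for large t, so simply integrating the ODE (here with forward Euler and renormalization) recovers the dominant eigenpair without an explicit eigensolver. The papers' actual AMCF/LMCF algorithms are not reproduced; the matrix, step size, and step count are illustrative.

```python
# Dominant eigenpair of A from the long-time behaviour of x'(t) = A x(t).

def dominant_by_ode(A, x0, h=0.01, steps=5000):
    n = len(A)
    x = x0[:]
    lam = 0.0
    for _ in range(steps):
        # one forward-Euler step: x_new = (I + h*A) x
        x_new = [x[i] + h * sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        nrm_old = sum(t * t for t in x) ** 0.5
        nrm_new = sum(t * t for t in x_new) ** 0.5
        lam = (nrm_new / nrm_old - 1.0) / h   # growth rate ≈ lam_max
        x = [t / nrm_new for t in x_new]      # renormalise to avoid overflow
    return lam, x

A = [[2.0, 1.0], [1.0, 2.0]]                  # eigenvalues 1 and 3
lam, v = dominant_by_ode(A, [1.0, 0.3])       # expect lam ≈ 3, v ≈ [1,1]/sqrt(2)
```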

10.
A globally convergent PCA neural network learning algorithm   Cited: 2 (self-citations: 1, others: 2)
Principal component analysis (PCA), also known as the K-L transform, is an important feature-extraction method. In recent years, many PCA neural networks based on Hebbian learning algorithms have been proposed to handle massive data sets. Traditional algorithms usually either cannot guarantee convergence or converge slowly. Based on the CRLS neural network, this paper proposes a new learning algorithm that guarantees convergence of the weight vector without requiring the weight vector to be normalized during the computation, and proves that the algorithm drives the weight vector to the eigenvector corresponding to the largest eigenvalue. Experiments show that, compared with the traditional CRLS network, the accuracy of the proposed algorithm is greatly improved.
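The CRLS algorithm in this abstract is not reproduced here; instead, the sketch below shows Oja's classic Hebbian PCA rule, the simplest member of the same family, whose weight vector likewise converges to the eigenvector of the input covariance matrix with the largest eigenvalue. The data model, learning rate, and sample count are illustrative assumptions.

```python
import random

# Oja's rule: w <- w + eta * y * (x - y*w), with y = w.x.

rng = random.Random(42)

def sample():
    # dominant variance along [1, 1]/sqrt(2), plus small isotropic noise
    s = rng.gauss(0, 1.0)
    return [s * 0.7071 + rng.gauss(0, 0.2), s * 0.7071 + rng.gauss(0, 0.2)]

w = [1.0, 0.0]                        # initial weight vector
eta = 0.05
for _ in range(5000):
    x = sample()
    y = w[0] * x[0] + w[1] * x[1]     # neuron output y = w.x
    # Hebbian term y*x with the implicit normalisation term -y^2*w
    w = [w[i] + eta * y * (x[i] - y * w[i]) for i in range(2)]

# cosine of the angle between w and the true principal direction [1,1]/sqrt(2)
cos = (w[0] + w[1]) / (2 ** 0.5) / (w[0] ** 2 + w[1] ** 2) ** 0.5
```

Unlike the CRLS-based algorithm in the abstract, plain Oja learning needs a learning rate and its weight norm only approaches 1 stochastically.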

11.
Computing matrix eigenvalues and eigenvectors with neural networks   Cited: 13 (self-citations: 0, others: 13)
This paper studies the use of neural networks to compute all eigenvectors of a general real symmetric matrix. The structure of the network's set of equilibrium states is discussed in detail and a construction theorem for this set is established. By solving a simple one-dimensional differential equation, an analytic expression for the network's solution is obtained; this expression is given in terms of the eigenvalues and eigenvectors of the symmetric matrix and is therefore very transparent. The analytic expression is used to analyze the global asymptotic behavior of the network's solutions, and a concrete algorithm is proposed that uses a set of unit vectors as initial values to compute all eigenvalues and eigenvectors of a symmetric matrix.

12.
Introduction. One of the core problems of scientific and engineering computing is the numerical solution of large-scale linear systems: given an n-th order nonsingular nonsymmetric matrix A and an n-dimensional vector b, find an n-dimensional vector x such that Ax = b. (1) It is observed that this problem can be transformed into…
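The Lanczos process named in this excerpt can be sketched in its simplest, symmetric form: it builds an orthonormal basis q_1, q_2, … of the Krylov subspace span{b, Ab, A²b, …} together with a tridiagonal projection (alpha_j on the diagonal, beta_j off it). The paper itself treats the nonsymmetric (two-sided) case; the matrix and start vector below are illustrative, and the sketch assumes no breakdown (no beta_j = 0).

```python
# Symmetric Lanczos: three-term recurrence producing an orthonormal Krylov
# basis Q and tridiagonal entries (alphas, betas) with Q^T A Q tridiagonal.

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def lanczos(A, b, m):
    n = len(b)
    nrm = sum(t * t for t in b) ** 0.5
    Q = [[t / nrm for t in b]]
    alphas, betas = [], []
    q_prev = [0.0] * n
    beta = 0.0
    for j in range(m):
        w = matvec(A, Q[j])
        alpha = sum(w[i] * Q[j][i] for i in range(n))
        # subtract the components along the two previous basis vectors
        w = [w[i] - alpha * Q[j][i] - beta * q_prev[i] for i in range(n)]
        alphas.append(alpha)
        beta = sum(t * t for t in w) ** 0.5
        if j + 1 < m:
            betas.append(beta)
            q_prev = Q[j]
            Q.append([t / beta for t in w])   # assumes beta != 0 (no breakdown)
    return Q, alphas, betas

A = [[4.0, 1.0, 0.0, 0.0], [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0], [0.0, 0.0, 1.0, 1.0]]
Q, alphas, betas = lanczos(A, [1.0, 1.0, 1.0, 1.0], 4)
```

With m = n, the tridiagonal matrix is orthogonally similar to A, so for instance its trace (the sum of the alphas) equals the trace of A.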

13.
Generalized eigenvectors play an essential role in the signal processing field. In this paper, we present a novel neural network learning algorithm for estimating the generalized eigenvector of a Hermitian matrix pencil. Unlike some traditional algorithms, which require proper learning-rate values to be selected before use, the proposed algorithm needs no learning rate and is therefore well suited to real applications. By analyzing all of the equilibrium points, it is proven that the algorithm reaches convergence if and only if the weight vector of the neural network equals the generalized eigenvector corresponding to the largest generalized eigenvalue of the Hermitian matrix pencil. Using the deterministic discrete-time (DDT) method, convergence conditions that can be satisfied with probability 1 are also obtained. Simulation results show that the proposed algorithm has a fast convergence speed and good numerical stability, and a real application demonstrates its effectiveness in tracking the optimal beamforming vector.
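The neural algorithm of this paper is not reproduced here; the sketch below is only a concrete reminder of the object being estimated: a generalized eigenpair of a pencil (A, B) satisfies A v = lam B v, and for a 2x2 pencil the generalized eigenvalues are the roots of det(A - lam B) = 0, a quadratic in lam. The matrices are illustrative (real symmetric, B positive definite).

```python
# Generalized eigenvalues of a 2x2 pencil via det(A - lam*B) = 0.

def generalized_eigs_2x2(A, B):
    # det(A - lam*B) = a*lam^2 + b*lam + c, expanded term by term
    a = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    b = -(A[0][0] * B[1][1] + A[1][1] * B[0][0]
          - A[0][1] * B[1][0] - A[1][0] * B[0][1])
    c = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = (b * b - 4 * a * c) ** 0.5
    return sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

A = [[3.0, 1.0], [1.0, 2.0]]
B = [[2.0, 0.0], [0.0, 1.0]]
lams = generalized_eigs_2x2(A, B)    # roots of 2*lam^2 - 7*lam + 5
```

For these matrices the generalized eigenvalues are 1 and 2.5; the vector v = [1, 2] satisfies A v = 2.5 B v for the largest one.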

14.
Quick extraction of the largest-modulus eigenvalues of a real antisymmetric matrix is important for some engineering applications. As a neural network is essentially concurrent and asynchronous in operation, using it for this calculation can achieve high speed. This paper introduces a concise functional neural network (FNN), which can be equivalently transformed into a complex differential equation, to do this work. After the analytic solution of the equation is obtained, the convergence behavior of the FNN is discussed. Simulation results indicate that, for general initial complex values, the network converges to the complex eigenvector corresponding to the eigenvalue whose imaginary part is positive and whose modulus is the largest of all eigenvalues. Compared with other neural networks designed for similar aims, this network is applicable to real skew-symmetric matrices.

15.
In order to exploit effectively the power of array and vector processors for the numerical solution of linear algebraic problems, it is desirable to express algorithms principally in terms of vector and matrix operations. Algorithms that manipulate vectors and matrices at component level are best suited for execution on single-processor hardware; often, however, it is difficult, if not impossible, to construct efficient versions of such algorithms suitable for execution on parallel hardware. A method for computing the eigenvalues of real unsymmetric matrices with real eigenvalue spectra is presented. The method is an extension of the one described in ref. [1]. The algorithm makes heavy use of vector inner-product evaluations, and the manipulation of individual components of vectors and matrices is kept to a minimum. Essentially, the method involves the construction of a sequence of biorthogonal transformation matrices whose combined effect is to diagonalise the matrix; the eigenvalues are the diagonal elements of the final diagonalised form. If the eigenvectors of the matrix are also required, the algorithm may be extended in a straightforward way. The effectiveness of the algorithm is demonstrated by applying the sequential version to several small matrices, and some comments are made about the time complexity of the parallel version.

16.
An iteration method which is not sensitive to small errors in the eigenvalues is developed for finding eigenvectors. The method finds the eigenvectors of both the normal and the transposed matrix. These eigenvectors can then be used in the Rayleigh quotient to improve the value of the eigenvalue.
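The two steps of this abstract can be sketched minimally: (1) inverse iteration with an approximate eigenvalue mu still converges to the right eigenvector, because (A - mu I)^-1 strongly amplifies the component of the eigenpair nearest mu; (2) the Rayleigh quotient of that eigenvector then improves mu. The 2x2 symmetric matrix and the deliberately perturbed eigenvalue guess are illustrative (for the symmetric case the normal and transposed eigenvectors coincide, so only one iteration is shown).

```python
# Inverse iteration with an inexact shift, followed by a Rayleigh quotient.

def inverse_iteration_2x2(A, mu, iters=30):
    # explicit inverse of the shifted matrix B = A - mu*I
    B = [[A[0][0] - mu, A[0][1]], [A[1][0], A[1][1] - mu]]
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    Binv = [[B[1][1] / det, -B[0][1] / det], [-B[1][0] / det, B[0][0] / det]]
    x = [1.0, 0.0]
    for _ in range(iters):
        x = [Binv[0][0] * x[0] + Binv[0][1] * x[1],
             Binv[1][0] * x[0] + Binv[1][1] * x[1]]
        nrm = (x[0] ** 2 + x[1] ** 2) ** 0.5
        x = [x[0] / nrm, x[1] / nrm]
    return x

def rayleigh_quotient(A, x):
    Ax = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
    return Ax[0] * x[0] + Ax[1] * x[1]

A = [[2.0, 1.0], [1.0, 2.0]]        # exact eigenvalues 1 and 3
v = inverse_iteration_2x2(A, 2.9)   # deliberately inexact guess for 3
lam = rayleigh_quotient(A, v)       # far closer to 3 than 2.9 was
```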

17.
Dr. R. Krawczyk 《Computing》1969,4(4):281-293
Error estimates for real eigenvalues and eigenvectors of matrices
Summary  Bounds including round-off errors are given for real simple eigenvalues and corresponding eigenvectors of a square matrix by using interval arithmetic. For this error estimate, approximate values are needed for an eigenvalue and the corresponding eigenvector, which can be obtained by an arbitrary numerical method. The result is an interval and an interval vector containing the exact eigenvalue and eigenvector, respectively. The existence of a real eigenvalue is thereby also verified.
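Krawczyk's interval-arithmetic machinery is not reproduced here; the sketch below gives a simpler rigorous enclosure in the same spirit, valid for symmetric matrices: given any approximate pair (mu, v) with ||v|| = 1, some exact eigenvalue of A lies in [mu - ||Av - mu v||, mu + ||Av - mu v||]. The crude approximate pair below is an illustrative assumption (the exact answer is eigenvalue 3 with eigenvector [1,1]/sqrt(2)).

```python
# Residual-based eigenvalue enclosure for a symmetric matrix.

def enclosure(A, mu, v):
    n = len(v)
    nrm = sum(t * t for t in v) ** 0.5
    v = [t / nrm for t in v]                      # normalise the guess
    res = [sum(A[i][j] * v[j] for j in range(n)) - mu * v[i] for i in range(n)]
    r = sum(t * t for t in res) ** 0.5            # residual norm ||A v - mu v||
    return mu - r, mu + r                         # contains an exact eigenvalue

A = [[2.0, 1.0], [1.0, 2.0]]                      # eigenvalues 1 and 3
lo, hi = enclosure(A, 2.98, [0.72, 0.69])         # crude approximate eigenpair
```

Unlike a true interval computation, this bound ignores rounding errors in evaluating the residual itself, which is exactly the gap the paper's interval arithmetic closes.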

18.
Spectral clustering based on matrix perturbation theory   Cited: 5 (self-citations: 1, others: 5)
This paper exposes some intrinsic characteristics of the spectral clustering method by using tools from matrix perturbation theory. We construct a weight matrix of a graph and study its eigenvalues and eigenvectors. It is shown that the number of clusters equals the number of eigenvalues that are larger than 1, and that the number of points in each cluster can be approximated by the associated eigenvalue. It is also shown that the eigenvectors of the weight matrix can be used directly for clustering: the directional angle between the row vectors of the matrix derived from the eigenvectors is a suitable distance measure for clustering. As a result, an unsupervised spectral clustering algorithm based on the weight matrix (USCAWM) is developed. Experimental results on a number of artificial and real-world data sets confirm the correctness of the theoretical analysis.

19.
Derivatives of eigenvalues and eigenvectors have become increasingly important in the development of modern numerical methods for areas such as structural design optimization, dynamic system identification and dynamic control, and the development of effective and efficient methods for calculating such derivatives has remained an active research area for several decades. In this paper, a practical algorithm is developed for efficiently computing eigenvector derivatives of generalized symmetric eigenvalue problems. For the eigenvector derivative of a separate mode, the computation requires only the eigenvalue and eigenvector of the mode itself, and an inverse of the system matrix accounts for most of the computational cost involved. In the case of two close modes, the modal information of both modes is required, and their eigenvector derivatives can be accurately determined simultaneously at minor additional computational cost. Further, the proposed method is extended to the case of practical structural design, where structural modifications are made locally and the eigenderivatives of the modes of interest are still required. By combining the proposed algorithm with the proposed inverse iteration technique and singular value decomposition theory, eigenproperties and their derivatives can be computed very efficiently. Numerical results from a practical finite element model demonstrate the practicality of the proposed method, which can easily be incorporated into commercial finite element packages to improve the computational efficiency of eigenderivatives needed for practical applications.
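The classical first-order formula behind eigenderivative methods like this one can be checked in a few lines: for a symmetric A(p) with a simple unit-norm eigenpair (lam, v), the eigenvalue derivative is d lam / d p = v^T (dA/dp) v. The parametrised 2x2 matrix below is an illustrative assumption (the paper treats the generalized problem and, crucially, eigenvector derivatives, which need more machinery).

```python
# Eigenvalue sensitivity d(lam)/dp = v^T (dA/dp) v, verified against a
# central finite difference on a closed-form 2x2 eigenvalue.

def eig_max_2x2(A):
    # largest eigenvalue/eigenvector of a symmetric 2x2 matrix, closed form
    tr, det = A[0][0] + A[1][1], A[0][0] * A[1][1] - A[0][1] * A[1][0]
    lam = tr / 2 + ((tr / 2) ** 2 - det) ** 0.5
    v = [A[0][1], lam - A[0][0]]          # (A - lam*I) v = 0, A[0][1] != 0
    nrm = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return lam, [v[0] / nrm, v[1] / nrm]

def A_of(p):                              # A(p) = [[2+p, 1], [1, 2]]
    return [[2.0 + p, 1.0], [1.0, 2.0]]

dA = [[1.0, 0.0], [0.0, 0.0]]             # dA/dp

lam, v = eig_max_2x2(A_of(0.0))           # lam = 3, v = [1,1]/sqrt(2)
analytic = sum(v[i] * dA[i][j] * v[j] for i in range(2) for j in range(2))

h = 1e-6                                  # central finite difference check
fd = (eig_max_2x2(A_of(h))[0] - eig_max_2x2(A_of(-h))[0]) / (2 * h)
```

Here the analytic sensitivity is exactly 0.5, matching the finite-difference estimate.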

20.
Principal component analysis (PCA) by neural networks is one of the most frequently used feature-extraction methods. To process huge data sets, many learning algorithms based on neural networks for PCA have been proposed. However, traditional algorithms are not globally convergent. In this paper, a new PCA learning algorithm based on the cascade recursive least squares (CRLS) neural network is proposed. This algorithm guarantees that the network weight vector converges globally to an eigenvector associated with the largest eigenvalue of the input covariance matrix. A rigorous mathematical proof is given, and simulation results show the effectiveness of the algorithm.
