Similar literature
 18 similar documents found (search time: 205 ms)
1.
When analyzing the operating period and stability of a class of discrete event dynamic systems, one must compute the eigenvalues and eigenvectors of matrices in the sense of max-plus algebra, a task long regarded as difficult and tedious. This paper presents a very simple and practical method, along with the relevant theorems, for finding the eigenvalues and eigenvectors of an arbitrary square matrix.

2.
A method for computing the eigenvalues of complex Hermitian matrices   Cited by: 3 (self-citations: 0, by others: 3)
Several classical algorithms for computing matrix eigenvalues and eigenvectors are reviewed, together with their respective strengths and weaknesses. Through theoretical derivation, a numerically robust method is proposed for the complex Hermitian matrices common in signal processing: the eigenproblem of a complex Hermitian matrix is converted into that of a real symmetric matrix, which is then solved with an improved tridiagonal Householder method. Comparison with Matlab simulation results shows that the method achieves high accuracy.
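The reduction this abstract describes, turning a complex Hermitian eigenproblem into a real symmetric one, can be illustrated with the standard 2n x 2n real embedding (a minimal sketch; the paper's improved tridiagonal Householder solver is not reproduced here, and the example matrix is illustrative):

```python
import numpy as np

def hermitian_to_real_symmetric(H):
    """Embed an n x n complex Hermitian matrix H = A + iB into the
    2n x 2n real symmetric matrix [[A, -B], [B, A]].  Each eigenvalue
    of H appears twice in the embedding; an eigenpair (lam, x + iy)
    of H corresponds to the real eigenvector (x; y)."""
    A, B = H.real, H.imag
    return np.block([[A, -B], [B, A]])

# Hypothetical example (not from the paper): a small Hermitian matrix.
H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])
M = hermitian_to_real_symmetric(H)
ev_H = np.linalg.eigvalsh(H)   # real, since H is Hermitian
ev_M = np.linalg.eigvalsh(M)   # each eigenvalue of H, doubled
assert np.allclose(np.repeat(ev_H, 2), ev_M)
```

Any real symmetric eigensolver (such as the tridiagonal Householder route the abstract mentions) can then be applied to the embedded matrix.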

3.
A neural-network method for computing matrix eigenvalues and eigenvectors   Cited by: 1 (self-citations: 0, by others: 1)
When the continuous-time fully-connected feedback neural network described by Oja's learning rule (Oja-N) is used to compute matrix eigenvalues and eigenvectors, the initial network vector must lie on the unit hypersphere, which is inconvenient in applications. A neural-network method (lyNN) is therefore proposed. From the analytic solution of lyNN, the following results are obtained: if the initial vector belongs to the subspace spanned by the eigenvectors of any set of eigenvalues, the equilibrium vector of the network also belongs to that subspace; the conditions on the initial vector under which lyNN converges to the eigenvector of the largest eigenvalue are analyzed; the largest admissible set of initial vectors for which lyNN converges to the eigensubspace of each distinct eigenvalue is identified; if the initial vector is orthogonal to a known eigenvector, the equilibrium solution remains orthogonal to that eigenvector; and the equilibrium vector is shown to lie on the hypersphere determined by the nonzero initial vector. Based on this analysis, a concrete algorithm for computing eigenvalues and eigenvectors with lyNN is designed, and worked examples verify its effectiveness. lyNN exhibits no finite escape, whereas the Oja-N-based method necessarily does when the matrix is negative definite and the initial vector lies outside the unit hypersphere, causing that algorithm to fail. Compared with optimization-based methods, lyNN is easier to implement and computationally cheaper.

4.
Building on uncorrelated image projection analysis, this paper focuses on the perturbation behavior of eigenvalues and eigenvectors, showing that the eigenvectors associated with ill-conditioned eigenvalues suffer large perturbations. If such an eigenvector is used as a projection axis, the resulting feature vector provides no effective discriminative information. An optimization method for the uncorrelated discriminant vector set is therefore proposed. Experiments on the ORL face database show that the optimization simplifies the projection matrix, improving the efficiency of feature extraction and stabilizing the recognition rate. The proposed perturbation-based optimization applies equally to other linear discriminant vector sets.

5.
The ridge line of the nose bridge is extracted using the relation between the eigenvalue-eigenvector pairs of the Hessian matrix, and the follicle-recognition regions on the two sides of the ridge are separated. Then, in addition to the sign of the Hessian eigenvalues, the relation between the direction of the eigenvector of the largest eigenvalue and the gradient direction is used as a follicle-detection condition. On a database of 103 subjects, a recognition rate of 86.26% is obtained. The results suggest that nose-follicle features can serve as an efficient form of biometric identity authentication.
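The per-pixel Hessian eigen-analysis underlying the ridge extraction can be sketched as follows; the finite-difference derivatives and the synthetic test image are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def hessian_eigenvalues(img):
    """Per-pixel eigenvalues of the 2x2 image Hessian, computed with
    finite differences (a minimal sketch; the paper's exact scales and
    smoothing are not specified here).  On a bright ridge, the smaller
    eigenvalue is strongly negative (curvature across the ridge)."""
    gy, gx = np.gradient(img)        # first derivatives (rows = y)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Closed-form eigendecomposition of [[gxx, gxy], [gxy, gyy]].
    tr = 0.5 * (gxx + gyy)
    rad = np.sqrt((0.5 * (gxx - gyy)) ** 2 + gxy ** 2)
    lam1, lam2 = tr + rad, tr - rad  # lam1 >= lam2 everywhere
    return lam1, lam2

# Synthetic bright horizontal ridge: a Gaussian profile in y,
# constant in x.  The ridge line runs along the image center row.
y = np.linspace(-3, 3, 61)
img = np.exp(-y ** 2)[:, None] * np.ones((61, 61))
lam1, lam2 = hessian_eigenvalues(img)
# On the ridge line the smaller eigenvalue is negative.
```

The corresponding eigenvector of the larger eigenvalue points along the ridge, which is the directional cue the abstract combines with the gradient direction.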

6.
徐心和, 于海斌. 《信息与控制》 (Information and Control), 1990, 19(6): 35-38, 54
9 System Analysis and Compensation
9.1 Eigenvalues and Eigenvectors
As in classical control theory, the asymptotic behavior of the system can be analyzed by studying the eigenvalues and eigenvectors of the autonomous system x = Ax. Definition 9.1 (eigenvalue and eigenvector). For a matrix A(γ, δ), if there exist a nonzero polynomial vector x(γ, δ) and two integers m, n satisfying x(γ, δ) = A(γ, δ) x(γ, δ) (mod γ^n δ^m), then the ratio of the two integers, λ = m/n, is an eigenvalue and x(γ, δ) is an eigenvector.

7.
The inverse eigenvalue problem for pentadiagonal matrices   Cited by: 1 (self-citations: 0, by others: 1)
This paper discusses an inverse eigenvalue problem: constructing a real symmetric pentadiagonal matrix from five eigenvalues and the corresponding eigenvectors. The existence of a solution and the necessary and sufficient conditions for existence are studied, and an algorithm and numerical examples are given.

8.
On eigenstructure assignment in feedback systems   Cited by: 2 (self-citations: 0, by others: 2)
This paper studies the eigenvalue-eigenvector assignment problem, also known as eigenstructure assignment, for feedback systems. Assignability conditions and a general expression for the feedback law are given; when the number of assigned eigenvalue-eigenvector pairs is smaller than the system dimension, a necessary and sufficient condition is also given under which the sum of the remaining closed-loop eigenvalues can be assigned arbitrarily.

9.
When analyzing the operating periodicity and stability of a class of discrete event dynamic systems, one must compute matrix eigenvalues in the sense of max-plus algebra, a task long regarded as very difficult; to date there has been no simple method for determining all eigenvalues and eigenvectors of an arbitrary square matrix. This paper analyzes in depth the periodic behavior of the powers of an arbitrary square matrix in max-plus algebra. The results provide a highly effective route toward new algorithms for computing eigenvalues and eigenvectors.
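For context, the max-plus eigenvalue of an irreducible matrix equals its maximum cycle mean, which Karp's classical dynamic program computes directly; the sketch below illustrates that baseline, not the power-matrix periodicity approach this paper develops:

```python
NEG_INF = float("-inf")

def maxplus_eigenvalue(A):
    """Maximum cycle mean of matrix A (entries are arc weights, -inf
    for absent arcs) via Karp's formula.  For an irreducible matrix
    this equals the unique max-plus eigenvalue."""
    n = len(A)
    # D[k][v] = max weight over walks of exactly k arcs from node 0 to v
    # (any fixed source works when the matrix is irreducible).
    D = [[NEG_INF] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for v in range(n):
            D[k][v] = max(
                (D[k - 1][u] + A[u][v] for u in range(n)
                 if D[k - 1][u] > NEG_INF and A[u][v] > NEG_INF),
                default=NEG_INF,
            )
    # Karp: lambda = max_v min_k (D_n(v) - D_k(v)) / (n - k).
    best = NEG_INF
    for v in range(n):
        if D[n][v] == NEG_INF:
            continue
        best = max(best,
                   min((D[n][v] - D[k][v]) / (n - k)
                       for k in range(n) if D[k][v] > NEG_INF))
    return best

# Two-node example: the only cycle 0 -> 1 -> 0 has weight 3 + 2 = 5,
# so the max-plus eigenvalue (mean weight per arc) is 2.5.
A = [[NEG_INF, 3.0], [2.0, NEG_INF]]
print(maxplus_eigenvalue(A))  # -> 2.5
```

This baseline costs O(n^3) time; the abstract's point is that the periodicity of max-plus matrix powers opens a different route to the same quantities.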

10.
A new method based on evolution strategies is proposed for computing matrix eigenvalues and eigenvectors. During evolution, individuals are trained through recombination, mutation, and selection, approaching the optimal solution; the program terminates once a preset error tolerance is reached. Experimental results show that, compared with traditional methods, the method converges faster and improves solution accuracy by a factor of ten, and it can quickly and effectively obtain the eigenvalues and eigenvectors of an arbitrary matrix.
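A toy version of the idea, an elitist evolution strategy whose individuals are unit vectors and whose fitness is the Rayleigh quotient, can be sketched as follows; the operators, population size, and step-size schedule are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def es_dominant_eigpair(A, pop=40, gens=200, sigma=0.3):
    """Sketch of a (mu + lambda) evolution strategy for the dominant
    eigenpair of a symmetric matrix A.  Individuals are unit vectors;
    fitness is the Rayleigh quotient v' A v, maximized at the dominant
    eigenvector.  Mutation + elitist selection; no recombination."""
    n = A.shape[0]
    parents = rng.standard_normal((pop, n))
    parents /= np.linalg.norm(parents, axis=1, keepdims=True)
    for _ in range(gens):
        children = parents + sigma * rng.standard_normal(parents.shape)
        children /= np.linalg.norm(children, axis=1, keepdims=True)
        pool = np.vstack([parents, children])
        fitness = np.einsum('ij,jk,ik->i', pool, A, pool)  # Rayleigh quotients
        parents = pool[np.argsort(fitness)[-pop:]]         # elitist selection
        sigma *= 0.97                                      # step-size decay
    v = parents[-1]                                        # best individual
    return v @ A @ v, v

A = np.diag([1.0, 2.0, 5.0])
lam, v = es_dominant_eigpair(A)   # lam approaches the dominant eigenvalue 5
```

Unlike the power method, nothing here requires a dominant-eigenvalue gap in a particular form, but convergence is stochastic and much slower than classical solvers, which is why the abstract's tenfold-accuracy claim should be read against its own baseline.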

11.
In this paper, we address the matrix chain multiplication problem, i.e., the multiplication of several matrices. Although several studies have investigated the problem, our approach has some different points. First, we propose MapReduce algorithms that allow us to provide scalable computation for large matrices. Second, we transform the matrix chain multiplication problem from sequential multiplications of two matrices into a single multiplication of several matrices. Since matrix multiplication is associative, this approach helps to improve the performance of the algorithms. To implement the idea, we adopt multi-way join algorithms in MapReduce that have been studied in recent years. In our experiments, we show that the proposed algorithms are fast and scalable, compared to several baseline algorithms.
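For contrast, the classical sequential formulation of the problem chooses a pairwise parenthesization of minimum scalar-multiplication cost via dynamic programming; this is the textbook baseline, not the paper's multi-way MapReduce approach:

```python
def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications needed to evaluate a
    matrix chain, where matrix i has shape dims[i] x dims[i+1].
    Classic O(n^3) dynamic program over subchains."""
    n = len(dims) - 1                       # number of matrices
    cost = [[0] * n for _ in range(n)]      # cost[i][j]: chain i..j
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j]
                + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)        # split point
            )
    return cost[0][n - 1]

# 10x30 * 30x5 * 5x60: multiplying the first pair first costs
# 10*30*5 + 10*5*60 = 4500, versus 27000 the other way round.
print(matrix_chain_cost([10, 30, 5, 60]))  # -> 4500
```

Because multiplication is associative, all parenthesizations give the same product; only the cost differs, which is the degree of freedom the paper exploits by fusing the chain into one multi-way operation.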

12.
In this paper, we describe scalable parallel algorithms for symmetric sparse matrix factorization, analyze their performance and scalability, and present experimental results for up to 1,024 processors on a Cray T3D parallel computer. Through our analysis and experimental results, we demonstrate that our algorithms substantially improve the state of the art in parallel direct solution of sparse linear systems, both in terms of scalability and overall performance. It is a well-known fact that dense matrix factorization scales well and can be implemented efficiently on parallel computers. In this paper, we present the first algorithms to factor a wide class of sparse matrices (including those arising from two- and three-dimensional finite element problems) that are asymptotically as scalable as dense matrix factorization algorithms on a variety of parallel architectures. Our algorithms incur less communication overhead and are more scalable than any previously known parallel formulation of sparse matrix factorization. Although, in this paper, we discuss Cholesky factorization of symmetric positive definite matrices, the algorithms can be adapted for solving sparse linear least squares problems and for Gaussian elimination of diagonally dominant matrices that are almost symmetric in structure. An implementation of one of our sparse Cholesky factorization algorithms delivers up to 20 GFlops on a Cray T3D for medium-size structural engineering and linear programming problems. To the best of our knowledge, this is the highest performance ever obtained for sparse Cholesky factorization on any supercomputer.

13.
If the leading matrix of a linear differential system is nonsingular, then its determinant is known to bear useful information about solutions of the system. Of interest is also the frontal matrix. However, each of these matrices (we call them revealing matrices) may occur singular. In the paper, a brief survey of algorithms for transforming a system of full rank to a system with a nonsingular revealing matrix of a desired type is given. The same transformations can be used to check whether the rank of the original system is full. A Maple implementation of these algorithms (package EGRR) is discussed, and results of comparison of estimates of their complexity with actual operation times on a number of examples are presented.

14.
The textured iterative approximation algorithms are a class of fast linear equation solvers that differ from classical iterative algorithms fundamentally in their approximations of the system matrix. The textured approach uses different approximations of the system matrix in round-robin fashion, while classical approaches use a single fixed approximation; it therefore approximates the system matrix better and is potentially faster. In this paper we prove that the convergence speed of the textured iterative algorithms for linear equations with a class of tridiagonal system matrices is strictly faster than that of the corresponding classical iterative algorithms. We also give the spectral radii of the textured iterative and classical algorithms for this class of linear equations. These results provide some insight and theoretical support for the textured iterative algorithms.

15.
Two DLDA feature-extraction algorithms based on matrix decomposition are proposed. By introducing two matrix-analysis tools, QR decomposition and spectral factorization (SF), the scatter matrices are reduced in dimension under the DLDA discriminant criterion, yielding more effective and stable classification information for face-image samples. Analysis of the two decomposition processes shows that, within traditional Fisher discriminant analysis, matrix decomposition can likewise emulate the PCA step in reducing sample dimensionality, thereby overcoming the small-sample-size problem. Experimental results on the ORL face database verify the effectiveness of the algorithms.

16.
Intensity modulated radiation therapy (IMRT) is one of the most effective modalities for modern cancer treatment. The key to successful IMRT treatment hinges on the delivery of a two-dimensional discrete radiation intensity matrix using a device called a multileaf collimator (MLC). Mathematically, the delivery of an intensity matrix using an MLC can be viewed as the problem of representing a non-negative integral matrix (i.e., the intensity matrix) by a linear combination of certain special non-negative integral matrices called segments, where each such segment corresponds to one of the allowed states of the MLC. The problem of representing the intensity matrix with the minimum number of segments is known to be NP-complete. In this paper, we present two approximation algorithms for this matrix representation problem. To the best of our knowledge, these are the first algorithms to achieve non-trivial performance guarantees for multi-row intensity matrices.

17.
18.
In this paper, we survey several recent results that highlight an interplay between a relatively new class of quasiseparable matrices and univariate polynomials. Quasiseparable matrices generalize two classical matrix classes, Jacobi (tridiagonal) matrices and unitary Hessenberg matrices that are known to correspond to real orthogonal polynomials and Szegö polynomials, respectively. The latter two polynomial families arise in a wide variety of applications, and their short recurrence relations are the basis for a number of efficient algorithms. For historical reasons, algorithm development is more advanced for real orthogonal polynomials. Recent variations of these algorithms tend to be valid only for the Szegö polynomials; they are analogues and not generalizations of the original algorithms.

