A Neural Network Algorithm for Computing Matrix Eigenvalues and Eigenvectors
Cite this article: LIU Yi-Guang, YOU Zhi-Sheng, CAO Li-Ping, JIANG Xin-Rong. A Neural Network Algorithm for Computing Matrix Eigenvalues and Eigenvectors[J]. Journal of Software, 2005, 16(6): 1064-1072.
Authors: LIU Yi-Guang, YOU Zhi-Sheng, CAO Li-Ping, JIANG Xin-Rong
Affiliations: 1. Institute of Image and Graphics, College of Computer Science, Sichuan University, Chengdu, Sichuan 610064
2. School of Public Administration, Sichuan University, Chengdu, Sichuan 610064
Funding: Supported by the National Natural Science Foundation of China under Grant No. 69732010, and the Innovation Fund for Small Technology-Based Firms of the Ministry of Science and Technology of China under Grant No. 03C26225100257
Abstract: When the continuous-time recurrent neural network described by the Oja learning rule (Oja-N) is used to compute the eigenvalues and eigenvectors of a matrix, the initial vector of the network must lie on the unit hypersphere, which is inconvenient in applications. To overcome this, a neural network method (lyNN) for computing matrix eigenvalues and eigenvectors is proposed. From the analytic solution of lyNN, the following results are obtained: if the initial vector belongs to the subspace spanned by the eigenvectors of any eigenvalue, the equilibrium vector of the network also belongs to that subspace; the conditions on the initial vector under which lyNN converges to an eigenvector of the largest eigenvalue are analyzed; the maximal space of initial vectors for which lyNN converges to the eigen-subspace of a given eigenvalue is identified; if the initial vector is orthogonal to a known eigenvector, the equilibrium vector of lyNN is also orthogonal to that eigenvector; and it is proved that the equilibrium vector lies on the hypersphere determined by the nonzero initial vector. Based on this analysis, a concrete algorithm for computing matrix eigenvalues and eigenvectors with lyNN is designed, and worked examples verify its effectiveness. lyNN does not exhibit limit-time overflow, whereas the Oja-N based method necessarily overflows when the matrix is negative definite and the initial vector lies outside the unit hypersphere, so that algorithm fails. Compared with optimization-based methods, lyNN is easier to implement and computationally cheaper.
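The abstract does not give the lyNN dynamics, so no attempt is made to reproduce them here. For orientation only, the following is a minimal sketch of the classical continuous-time Oja flow (the Oja-N baseline the abstract compares against), dw/dt = A w - (w^T A w) w, integrated by forward Euler for a real symmetric matrix. The function and parameter names (oja_flow, step, n_steps) and the test matrix are illustrative assumptions, not taken from the paper.

# Minimal sketch of the Oja-N baseline, NOT the paper's lyNN dynamics.
import numpy as np

def oja_flow(A, w0, step=1e-3, n_steps=20000):
    """Forward-Euler integration of dw/dt = A w - (w^T A w) w from w0."""
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(n_steps):
        w = w + step * (A @ w - (w @ A @ w) * w)
    return w

# Symmetric test matrix; with ||w0|| = 1 the flow stays near the unit
# hypersphere and converges toward an eigenvector of the largest eigenvalue.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
w0 = np.ones(3) / np.sqrt(3.0)          # initial vector on the unit hypersphere
w = oja_flow(A, w0)
lam = (w @ A @ w) / (w @ w)             # Rayleigh quotient ~ largest eigenvalue
print(lam, np.linalg.eigvalsh(A)[-1])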

Keywords: neural network; symmetric matrix; eigenvalue; eigenvector; limit-time overflow
Article ID: 1000-9825/2005/16(06)1064
Received: 2003-09-22
Revised: 2004-05-08

A Neural Network Algorithm for Computing Matrix Eigenvalues and Eigenvectors
LIU Yi-Guang, YOU Zhi-Sheng, CAO Li-Ping, JIANG Xin-Rong. A Neural Network Algorithm for Computing Matrix Eigenvalues and Eigenvectors[J]. Journal of Software, 2005, 16(6): 1064-1072.
Authors: LIU Yi-Guang, YOU Zhi-Sheng, CAO Li-Ping, JIANG Xin-Rong
Abstract: When the continuous-time neural network described by the Oja learning rule (Oja-N) is used to compute the eigenvalues and eigenvectors of a real symmetric matrix, the initial vector must lie on the unit hypersphere in R^n; otherwise, the network may produce limit-time overflow. To overcome this defect, a new neural network (lyNN) algorithm is proposed. From the analytic solution of the lyNN differential equation, the following results are obtained: if the initial vector belongs to the subspace corresponding to some eigenvalue, the lyNN equilibrium vector remains in that subspace; if the initial vector does not fall into the subspace of any single eigenvalue, the equilibrium vector belongs to the subspace spanned by the eigenvectors of the largest eigenvalue; the maximal space of initial vectors for which the lyNN equilibrium vector falls into the eigen-subspace of a given eigenvalue is identified; if the initial vector is perpendicular to a known eigenvector, so is the equilibrium vector; and the equilibrium vector lies on the hypersphere determined by the initial vector. Based on these results, a method for computing the eigenvalues and eigenvectors of a real symmetric matrix using lyNN is proposed, and its validity is shown by two examples, which indicate that the algorithm does not produce limit-time overflow. For Oja-N, by contrast, if the initial vector is outside the unit hypersphere and the matrix is negative definite, the network necessarily produces limit-time overflow. Compared with other algorithms based on optimization, lyNN can be realized directly and its computational cost is lower.
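The overflow claim for Oja-N can be illustrated with a small hedged sketch: along the Oja flow, d||w||^2/dt = 2 (w^T A w)(1 - ||w||^2), so for a negative definite A and an initial vector outside the unit hypersphere both factors are negative and the norm blows up in finite time. The matrix, starting point, step size, and step count below are illustrative assumptions, not values from the paper.

# Hedged illustration of limit-time overflow in the Oja-N baseline.
import numpy as np

A = -2.0 * np.eye(2)                    # negative definite matrix
w = np.array([1.5, 0.0])                # initial vector outside the unit hypersphere
for i in range(2000):
    w = w + 1e-3 * (A @ w - (w @ A @ w) * w)   # forward-Euler Oja step
    if not np.isfinite(w).all():
        print("limit-time overflow after", i + 1, "Euler steps")
        break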
Keywords: neural network; symmetric matrix; eigenvalue; eigenvector; limit-time overflow