Similar Documents
20 similar documents found (search time: 171 ms)
1.
A Preconditioned Biconjugate Gradient Method for Large Complex Linear Systems   Total citations: 2 (self: 0, other: 2)
When a complex linear system is large or its coefficient matrix has a large condition number, the matrix tends to be ill-conditioned, and the biconjugate gradient (BiCG) method may fail to converge or converge slowly. A suitable preconditioner can alleviate the ill-conditioning and accelerate convergence. Starting from the real-valued incomplete Cholesky preconditioner, this paper constructs a preconditioning method for complex linear systems and, combining it with BiCG, obtains a preconditioned biconjugate gradient method. Numerical examples show that the algorithm is fast, reliable, and efficient, and can be applied to the solution of large complex linear systems.

2.
This paper proposes a novel chaotic neuron model whose activation function combines a Gaussian function with a sigmoid function; bifurcation diagrams and Lyapunov-exponent computations show that it exhibits complex chaotic dynamics. On this basis, a transiently chaotic neural network is constructed that combines a wide-ranging chaotic search, driven by a reverse period-doubling bifurcation process, with Hopfield-style gradient search in the neighborhood of the optimum, and it is applied to function-optimization problems. Experiments show that it has strong global search ability and fast convergence.

3.
The slow convergence of the LMBP (Levenberg-Marquardt BP) algorithm for neural networks is analyzed: the bottleneck is the heavy cost of inverting the matrix JᵀJ + µI. Based on unconstrained optimization theory, an improved LMBP learning algorithm built on the conjugate gradient method is proposed. Using the conjugate gradient method for large linear systems avoids the cumbersome inversion, reduces computational complexity, and accelerates network convergence. Matlab simulations comparing convergence speed confirm the effectiveness of the method.
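The core replacement can be sketched as follows: instead of forming (JᵀJ + µI)⁻¹, the Levenberg-Marquardt step d solving (JᵀJ + µI)d = g is obtained by conjugate gradients using only matrix-vector products. The Jacobian and gradient below are random placeholders, purely for illustration:

```python
import numpy as np

def cg(A_mv, b, tol=1e-10, maxiter=None):
    """Conjugate gradient for an SPD system given only a matvec A_mv."""
    x = np.zeros_like(b)
    r = b - A_mv(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter or 5 * len(b)):
        Ap = A_mv(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
J = rng.standard_normal((50, 10))   # hypothetical Jacobian of the network errors
g = rng.standard_normal(10)         # hypothetical gradient term (J^T e)
mu = 0.1
# Solve (J^T J + mu I) d = g without ever forming or inverting the matrix.
d = cg(lambda v: J.T @ (J @ v) + mu * v, g)
print(np.linalg.norm((J.T @ J + mu * np.eye(10)) @ d - g))
```

Since JᵀJ + µI is symmetric positive definite for µ > 0, CG is applicable and each iteration costs only two products with J.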

4.
An Artificial Neural Network with Variable-Parameter Activation Functions   Total citations: 4 (self: 0, other: 4)
Using the same activation function for all hidden neurons limits a network's nonlinear expressive power. Variable-parameter activation functions are therefore proposed: during learning, both the network weights and the activation-function parameters are adjusted, enhancing the network's expressiveness. The network is trained with a learning algorithm combining a genetic algorithm with MBP, which provides global convergence and high accuracy.
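A minimal sketch of the core idea (not the paper's GA+MBP training; plain gradient descent on an invented toy task): each hidden neuron gets a learnable activation parameter β, updated by backpropagation alongside the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 100)[:, None]
Y = np.sin(X)                            # toy regression target

H = 10                                   # hidden neurons
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
beta = np.ones(H)                        # per-neuron activation parameter, also learned

lr, losses = 0.1, []
for _ in range(10000):
    U = X @ W1 + b1                      # pre-activations
    Hid = np.tanh(beta * U)              # variable-parameter activation tanh(beta * u)
    P = Hid @ W2 + b2
    E = P - Y
    losses.append(float(np.mean(E ** 2)))
    G = 2 * E / len(X)                   # dLoss/dP
    dW2 = Hid.T @ G; db2 = G.sum(0)
    dZ = (G @ W2.T) * (1 - Hid ** 2)     # backprop through tanh
    dbeta = (dZ * U).sum(0)              # gradient w.r.t. the activation parameter
    dU = dZ * beta
    dW1 = X.T @ dU; db1 = dU.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    beta -= lr * dbeta
print(losses[0], losses[-1])
```

The only change relative to a standard MLP is the extra gradient dβ, so the idea composes with any gradient-based (or, as in the paper, hybrid genetic) trainer.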

5.
A neuron model possessing both hysteresis and chaotic dynamics is proposed, and a neural network built from it is applied to optimization problems. Introducing self-feedback into the neuron makes it chaotic; replacing the activation function with a hysteretic function having a rising branch and a falling branch introduces hysteresis into the neuron and the network. Combined with a simulated-annealing schedule, in the early stage of optimization the chaotic dynamics improve the network's ergodic search ability, while the hysteresis alleviates spurious saturation to some extent and speeds up the search; in the final stage, the network degenerates into an ordinary Hopfield network that converges to a local optimum by gradient descent. By constructing an energy function, problems such as feature-point matching in image recognition can be cast as optimization problems and solved with this network. Simulation results verify the effectiveness of the method.

6.
Parallel Solution of Finite-Element Linear Systems on a Workstation Cluster   Total citations: 2 (self: 0, other: 2)
With the development of high-speed computer networks, workstation clusters are becoming a major platform for parallel computing. Finite-element linear systems are among the most common problems in structural analysis in civil engineering. The preconditioned conjugate gradient method (PCGM) is an iterative method for solving such linear systems. This paper parallelizes PCGM and implements it on two different clusters, analyzes the storage scheme in detail, and employs optimized sparse matrix-vector multiplication in the implementation. Numerical results show that the parallel algorithm achieves good speedup and parallel efficiency, demonstrating that parallel computing can solve large-scale problems faster.
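A minimal sketch of the kernel being parallelized: in PCG the dominant per-iteration cost is the sparse matrix-vector product, which can be partitioned by row blocks across workers. The toy version below uses Python threads and a synthetic SPD matrix (the cluster communication layer and storage analysis of the paper are out of scope):

```python
import numpy as np
from scipy.sparse import random as sprand, identity
from scipy.sparse.linalg import cg, LinearOperator
from concurrent.futures import ThreadPoolExecutor

n = 2000
S = sprand(n, n, density=0.002, format="csr", random_state=0)
# Symmetrize and shift the diagonal to make the matrix SPD (diagonally dominant).
A = (S + S.T + 20.0 * identity(n)).tocsr()
b = A @ np.ones(n)

def par_matvec(v, nworkers=4):
    """Row-block-partitioned sparse matrix-vector product."""
    blocks = np.array_split(np.arange(n), nworkers)
    with ThreadPoolExecutor(nworkers) as ex:
        parts = ex.map(lambda rows: A[rows[0]:rows[-1] + 1] @ v, blocks)
    return np.concatenate(list(parts))

op = LinearOperator((n, n), matvec=par_matvec, dtype=np.float64)
x, info = cg(op, b, maxiter=500)
print(info, np.linalg.norm(A @ x - b))
```

On a real cluster each worker would own its row block (and the matching slice of the vectors), with the concatenation replaced by message passing.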

7.
During LMBP training, the inversion of a large matrix limits the algorithm's convergence speed. To address this, the super-memory gradient method for large linear systems is used when updating the network weights and biases, avoiding the time-consuming matrix inversion; the step-size factor is made adaptive, and the hidden-layer structure is optimized by network pruning. A simulation on the gearbox of a particular equipment model shows that the improved algorithm markedly shortens training time and that a network trained with it achieves good fault-diagnosis performance.

8.
A neural network model based on the gradient projection matrix is proposed for solving programming problems under linear constraints, and a stable solution method for such problems is derived. The model is suited both to linear or non-quadratic programming problems under linear constraints and to linear or nonlinear systems of equations, and is thus more general than other neural networks for programming problems.

9.
An intelligent algorithm based on particle swarm optimization (PSO) is proposed for solving linear and nonlinear systems of equations. Solving systems with PSO is simple in form, converges quickly, and is easy to understand; moreover, multiple solutions can be found in a single run, which addresses the multiple-solution problem of nonlinear systems. The approach offers a new method for solving linear and nonlinear systems of equations.

10.
A Sequential Functional Network Model with Its Learning Algorithm and Applications   Total citations: 4 (self: 0, other: 4)
Based on an analysis of functional networks, a sequential functional network model and its learning algorithm are proposed, in which the network's functional parameters are learned by gradient descent. On this basis, sequential functional network models for nine typical functional equations are given, together with a method for solving functional equations based on the learning algorithm. Simulation examples show that the method is effective, with fast convergence, high accuracy, and good generalization, and that it overcomes the difficulty traditional numerical methods have in solving functional equations. The method is applicable to general functional-equation problems.

11.
By adding different activation functions, a type of gradient-based neural network is developed and presented for the online solution of the Lyapunov matrix equation. Theoretical analysis shows that any monotonically increasing odd activation function can be used to construct the networks, and that the improved neural models have global convergence. For the convenience of hardware realization, a schematic circuit is given for the improved neural solvers. Computer simulation results further substantiate that the improved neural networks solve the Lyapunov matrix equation accurately and effectively. Moreover, when power-sigmoid activation functions are used, the improved neural networks show superior convergence compared to linear models.
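A simplified sketch of such a gradient-based solver (a forward-Euler discretization with an invented 2×2 Hurwitz matrix A and a power-sigmoid-like odd activation E + E³; not the paper's exact circuit): the state X flows down the gradient of ½‖AX + XAᵀ + Q‖² until the Lyapunov residual vanishes.

```python
import numpy as np

A = np.array([[-3.0, 1.0], [0.0, -2.0]])   # Hurwitz, so AX + XA^T + Q = 0 has a unique solution
Q = np.eye(2)

def act(E):
    """Monotonically increasing odd activation (a power-sigmoid-like choice)."""
    return E + E ** 3

X = np.zeros((2, 2))
gamma, dt = 1.0, 1e-3
for _ in range(20000):
    E = A @ X + X @ A.T + Q                        # residual of the Lyapunov equation
    X -= dt * gamma * (A.T @ act(E) + act(E) @ A)  # negative-gradient (neural-dynamics) step
print(np.linalg.norm(A @ X + X @ A.T + Q))
```

The update uses the exact gradient A ᵀE + EA of the squared residual, filtered through the activation; with the identity activation this reduces to the linear model the abstract compares against.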

12.
Four gradient-based recurrent neural networks for computing the Drazin inverse of a square real matrix are developed. Theoretical analysis shows that any monotonically-increasing odd activation function ensures the global convergence performance of defined neural network models. The computer simulation results further substantiate that the considered neural networks could compute the Drazin inverse with accuracy and effectiveness. Moreover, the presented neural networks show superior convergence in the case when the power-sigmoid activation functions are used compared to linear models.

13.
Direct Weight Determination for a Fourier Trigonometric-Basis Neural Network   Total citations: 1 (self: 0, other: 1)
Based on Fourier transform theory, this paper constructs a class of feedforward neural networks with trigonometric orthogonal basis functions. The model consists of an input layer, a hidden layer, and an output layer; the input and output layers use linear activation functions, while the hidden neurons use a set of trigonometric orthogonal basis functions as their activations. A weight-update iteration is derived from the error backpropagation (BP) algorithm. To remedy the slow convergence and limited approximation accuracy of BP iteration, a direct weight-determination method based on the pseudoinverse is further proposed, which avoids the lengthy process of iterative weight updates. Simulation and prediction results show that the method achieves faster computation and higher simulation and test accuracy than conventional BP iteration.
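With the trigonometric hidden layer fixed, direct weight determination reduces to a linear least-squares problem solved in one shot by the pseudoinverse. A small sketch (the target function and basis size are invented for illustration):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 400)
y = np.exp(np.sin(x))                     # smooth periodic target to approximate

K = 8                                     # number of cos/sin harmonic pairs in the hidden layer
# Hidden-layer outputs: [1, cos(x), sin(x), cos(2x), sin(2x), ...].
H = np.column_stack([np.ones_like(x)] +
                    [f(k * x) for k in range(1, K + 1) for f in (np.cos, np.sin)])

w = np.linalg.pinv(H) @ y                 # direct weight determination: no iteration at all
err = np.max(np.abs(H @ w - y))
print(err)
```

One pseudoinverse solve replaces the entire BP training loop, which is where the claimed speed and accuracy advantage comes from.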

14.
Lei Zhang, Jiali, Pheng Ann. Neurocomputing, 2009, 72(16-18): 3809
Multistability is an important dynamical property in neural networks in order to enable certain applications where monostable networks could be computationally restrictive. This paper studies some multistability properties for a class of bidirectional associative memory recurrent neural networks with unsaturating piecewise linear transfer functions. Based on local inhibition, conditions for globally exponential attractivity are established. These conditions allow coexistence of stable and unstable equilibrium points. By constructing some energy-like functions, complete convergence is studied.

15.
Yi Z, Tan KK, Lee TH. Neural Computation, 2003, 15(3): 639-662
Multistability is a property necessary in neural networks in order to enable certain applications (e.g., decision making), where monostable networks can be computationally restrictive. This article focuses on the analysis of multistability for a class of recurrent neural networks with unsaturating piecewise linear transfer functions. It deals fully with the three basic properties of a multistable network: boundedness, global attractivity, and complete convergence. This article makes the following contributions: conditions based on local inhibition are derived that guarantee boundedness of some multistable networks, conditions are established for global attractivity, bounds on global attractive sets are obtained, complete convergence conditions for the network are developed using novel energy-like functions, and simulation examples are employed to illustrate the theory thus developed.

16.
A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems that are constructed using a parametric family of exact (nondifferentiable) penalty functions. It is proved that for a given linear programming problem and sufficiently large penalty parameters, any trajectory of the neural network converges in finite time to its solution set. For the analysis, Lyapunov-type theorems are developed for finite time convergence of nonsmooth sliding mode dynamic systems to invariant sets. The results are illustrated via numerical simulation examples.

17.
A new equalization model for digital communication systems is proposed, based on a multi-layer perceptron (MLP) artificial neural network with a backpropagation algorithm. Unlike earlier techniques, the proposed model, called the bidimensional neural equalizer, is composed of two independent MLP networks that operate in parallel for each dimension of the digital modulation scheme. A heuristic method to combine the errors of the two MLP networks is also proposed, with the aim of reducing the convergence time. Simulations performed for linear and nonlinear channels demonstrated that the new model could improve performance in terms of the bit error rate and the convergence time, compared to existing models.

18.
To address the facts that traditional time-series models are unsuited to nonlinear prediction, while the BP algorithm, which does handle nonlinearity, converges slowly and easily falls into local minima, a hybrid time-series prediction model based on a constructive neural network is proposed. Class values produced by the constructive neural network model (the covering algorithm) are used to correct the predictions of a statistical time-series model, yielding a hybrid model that simultaneously accounts for the periodic variation of the series itself and for the influence of exogenous factors on its future trend. Covering both the linear and nonlinear aspects of practical problems, the model improves prediction accuracy. Applied to grain-yield prediction, it achieved good results.

20.
A novel neuron model and its learning algorithm are presented. They provide a novel approach for speeding up convergence in the learning of layered neural networks and for training networks of neurons with a nondifferentiable output function by using the gradient descent method. The neuron is called a saturating linear coupled neuron (sl-CONE). From simulation results, it is shown that the sl-CONE has a high convergence rate in learning compared with the conventional backpropagation algorithm.
