Similar Articles
1.
This paper proposes a novel chaotic neuron model whose activation function combines a Gaussian function with a sigmoid function; bifurcation diagrams and Lyapunov exponent calculations show that the model exhibits complex chaotic dynamics. On this basis, a transient chaotic neural network is constructed that combines a wide-ranging chaotic search, driven by a reverse period-doubling bifurcation process, with a Hopfield-like gradient search in the neighborhood of the optimal solution, and the network is applied to function optimization problems. Experiments show that it has strong global search ability and fast convergence.
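The abstract does not give the model equations, so the following Python sketch only illustrates the general idea of a transient chaotic neuron: an activation assumed to be a sigmoid plus a Gaussian term, and an exponentially annealed self-feedback that drives the transition from chaotic wandering to convergent, gradient-like behavior. All function forms and parameter values are assumptions, not the authors' model.

    import numpy as np

    def activation(u, eps=0.05, A=0.6, sigma=0.25):
        # Assumed composite activation: sigmoid plus a Gaussian bump.
        return 1.0 / (1.0 + np.exp(-u / eps)) + A * np.exp(-u ** 2 / (2 * sigma ** 2))

    def transient_chaotic_neuron(steps=800, k=0.9, alpha=0.015, I=0.65,
                                 z0=0.8, beta=0.004, I0=0.65):
        """Single neuron with an annealed chaotic self-feedback term z(t)."""
        u, z, outputs = 0.3, z0, []
        for _ in range(steps):
            x = activation(u)                     # neuron output
            u = k * u + alpha * I - z * (x - I0)  # internal-state update
            z *= (1.0 - beta)                     # anneal the self-feedback toward 0
            outputs.append(x)
        return np.array(outputs)

    xs = transient_chaotic_neuron()
    print("early (chaotic phase):", np.round(xs[:5], 3))
    print("late  (settled phase):", np.round(xs[-5:], 3))

As the self-feedback z decays, the chaotic wandering dies out and the update behaves like a damped, gradient-like iteration, which is the two-phase behavior the abstract describes.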

2.
周婷  贾振红  刘秀玲 《计算机应用》2007,27(12):2910-2912
Chaotic neural networks can effectively solve function optimization problems. By replacing the sigmoid activation with a Mexican hat wavelet function and replacing the single annealing-factor function with a piecewise exponential simulated annealing function, a new type of chaotic neural network is proposed. Compared with the conventional chaotic neural network, this network has stronger global search ability. Simulation results show that the wavelet chaotic neural network clearly outperforms the conventional chaotic neural network in both the speed and the accuracy of finding the global optimum.
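The paper's exact formulas are not reproduced here; the sketch below only shows the two ingredients named in the abstract, with assumed segment boundaries and decay rates: the Mexican hat (Ricker) wavelet used as the activation, and a piecewise exponential annealing schedule for the self-feedback factor.

    import numpy as np

    def mexican_hat(x, a=1.0):
        # Mexican hat (Ricker) wavelet used as the neuron activation.
        s = x / a
        return (1.0 - s ** 2) * np.exp(-s ** 2 / 2.0)

    def piecewise_exponential_annealing(t, z0=0.8, t1=100, t2=300,
                                        beta_fast=0.02, beta_mid=0.008, beta_slow=0.002):
        # Self-feedback factor z(t) decays at different exponential rates in
        # different phases of the search (boundaries and rates are illustrative).
        if t < t1:
            return z0 * np.exp(-beta_fast * t)
        elif t < t2:
            return z0 * np.exp(-beta_fast * t1) * np.exp(-beta_mid * (t - t1))
        return (z0 * np.exp(-beta_fast * t1) * np.exp(-beta_mid * (t2 - t1))
                * np.exp(-beta_slow * (t - t2)))

    print(mexican_hat(np.array([-2.0, 0.0, 2.0])))
    print([round(piecewise_exponential_annealing(t), 4) for t in (0, 50, 150, 400)])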

3.
Application of chaotic neural networks to solving optimization problems   (Cited by 1; self: 0; others: 1)
This paper uses a GCM chaotic neural network to improve the performance of the Hopfield neural network in solving optimization problems. Chaotic ergodicity allows the Hopfield network to search the entire phase space and thus avoid getting trapped in local minima during operation. Experiments on a game-playing example show that the optimization behavior of the Hopfield network is substantially improved.

4.
Chaos genetic algorithm and its application in function optimization   (Cited by 11; self: 0; others: 11)
Combining chaos optimization with the genetic algorithm, a Chaos Genetic Algorithm (CGA) is proposed and applied to function optimization problems. Introducing chaos optimization operations at different stages of the population's evolution greatly improves the overall performance of the genetic algorithm. Experimental results show that, compared with the standard genetic algorithm (SGA), the proposed algorithm finds the global optimum more effectively and converges faster.
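A minimal sketch of the general CGA idea, not the authors' exact operators: a logistic map seeds the initial population and perturbs the best individual with a shrinking chaotic local search each generation. The test function (Rastrigin), the crossover/mutation rates, and the schedule are assumptions.

    import numpy as np
    rng = np.random.default_rng(0)

    def rastrigin(x):                        # test function (an assumption), min 0 at x = 0
        return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

    def logistic_seq(n, x0=0.123, mu=4.0):   # chaotic sequence in (0, 1)
        out, x = [], x0
        for _ in range(n):
            x = mu * x * (1 - x)
            out.append(x)
        return np.array(out)

    def cga(dim=5, pop=30, gens=200, lo=-5.12, hi=5.12):
        P = lo + (hi - lo) * logistic_seq(pop * dim).reshape(pop, dim)   # chaotic init
        cz = 0.123 + 0.1 * np.arange(dim)             # per-dimension chaotic variables
        for g in range(gens):
            fit = np.array([rastrigin(p) for p in P])
            P = P[np.argsort(fit)]                    # elitist sort (best first)
            # chaotic local search around the current best individual
            cz = 4.0 * cz * (1.0 - cz)
            radius = 0.1 * (hi - lo) * (1 - g / gens)  # shrinking perturbation
            cand = np.clip(P[0] + radius * (2 * cz - 1), lo, hi)
            if rastrigin(cand) < rastrigin(P[0]):
                P[0] = cand
            # plain arithmetic crossover plus Gaussian mutation for the rest
            children = []
            while len(children) < pop - 1:
                a, b = P[rng.integers(0, pop // 2, 2)]
                w = rng.random()
                children.append(np.clip(w * a + (1 - w) * b + rng.normal(0, 0.05, dim), lo, hi))
            P = np.vstack([P[0], *children])
        return P[0], rastrigin(P[0])

    best_x, best_f = cga()
    print("best value found:", round(float(best_f), 4))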

5.
A construction algorithm for radial basis function neural networks   (Cited by 4; self: 1; others: 4)
A new learning algorithm for the parameters of radial basis function (RBF) neural networks, called the classification-optimization iterative algorithm, is proposed. On this basis, a construction algorithm for RBF networks is designed. Simulation results demonstrate the effectiveness of the proposed method.

6.
Wavelet Hopfield neural network and its application in optimization   (Cited by 3; self: 1; others: 3)
By replacing the sigmoid activation function of the Hopfield neural network with the Morlet wavelet function, a new type of Hopfield network, the wavelet Hopfield neural network (WHNN), is proposed. Because the Morlet wavelet has good local approximation ability and a high degree of nonlinearity, the WHNN achieves satisfyingly high accuracy in nonlinear function optimization. A typical function optimization example shows that the wavelet Hopfield neural network is more accurate than the standard Hopfield network.
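A rough sketch of the idea under stated assumptions: the real Morlet wavelet cos(1.75u)·exp(-u²/2) replaces the sigmoid squashing function, and the internal states follow Euler-discretized gradient-like dynamics on an example objective. This is an illustration, not the authors' exact WHNN formulation; the objective, step size, and initialization are assumptions.

    import numpy as np

    def morlet(u):
        # Real Morlet wavelet used in place of the sigmoid activation.
        return np.cos(1.75 * u) * np.exp(-u ** 2 / 2.0)

    def morlet_prime(u):
        # Derivative of the Morlet wavelet (needed for the chain rule below).
        return -np.exp(-u ** 2 / 2.0) * (1.75 * np.sin(1.75 * u) + u * np.cos(1.75 * u))

    def objective(x):                      # example objective to minimize (an assumption)
        return np.sum((x - 0.3) ** 2 + 0.1 * np.sin(5 * x))

    def grad_objective(x):                 # analytic gradient of the example objective
        return 2 * (x - 0.3) + 0.5 * np.cos(5 * x)

    def whnn_like_descent(dim=4, steps=3000, eta=0.01, seed=1):
        u = np.random.default_rng(seed).normal(0.0, 0.5, dim)   # internal states
        for _ in range(steps):
            x = morlet(u)                                        # outputs via the wavelet
            u -= eta * grad_objective(x) * morlet_prime(u)       # dE/du by the chain rule
        x = morlet(u)
        return x, float(objective(x))

    x_final, f_final = whnn_like_descent()
    print(np.round(x_final, 3), round(f_final, 4))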

7.
This paper presents a concrete method for solving constrained nonlinear optimization problems with artificial neural networks, analyzes how the network's energy function is constructed, and builds a non-fully-connected neural network dynamical model on top of the conventional Hopfield model. The modified Hopfield network overcomes the difficulty the conventional Hopfield network has in mapping the weights when solving nonlinear optimization problems, and it has the advantages of a clear structure and easy software simulation and hardware implementation.

8.
Designing efficient and secure keyed one-way functions has long been a hot topic in modern cryptography. A neural network is first trained on a one-dimensional nonlinear piecewise map to generate chaotic sequences, and the nonlinear sequences produced by this model are then used to construct a keyed hash function. One advantage of the algorithm is that the neural network encodes the chaotic map implicitly, making it difficult to recover the mapping directly. Experimental results show that the algorithm is highly sensitive to initial values, has good one-wayness and weak-collision resistance, offers stronger security than hash functions based on a single chaotic map, and is simple to implement.
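The toy below only illustrates the general "keyed chaotic sequence to digest" idea; it is not the paper's neural-network-based scheme and is not cryptographically secure. It drives a piecewise-linear (tent-like) chaotic map whose parameter and initial state are derived from the secret key and perturbed by the message bytes; the use of hashlib to derive numeric seeds is an assumption of this sketch.

    import hashlib  # only used to derive numeric seeds from the key (an assumption)

    def keyed_chaotic_hash(message: bytes, key: bytes, digest_bits=128):
        # Derive initial state x in (0, 1) and map parameter p in [0.1, 0.9) from the key.
        seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
        x = (seed % 10**9) / 10**9 or 0.123456789
        p = 0.1 + 0.8 * (((seed >> 32) % 10**6) / 10**6)

        def tent(x, p):                     # piecewise-linear chaotic map
            return x / p if x < p else (1.0 - x) / (1.0 - p)

        # Absorb the message: each byte perturbs the state, then the map is iterated.
        for b in message:
            x = (x + (b + 1) / 257.0) % 1.0
            for _ in range(8):
                x = tent(x, p)

        # Squeeze: collect one digest bit per extra block of mixing iterations.
        bits = []
        for _ in range(digest_bits):
            for _ in range(4):
                x = tent(x, p)
            bits.append(1 if x >= 0.5 else 0)
        return "".join(map(str, bits))

    print(keyed_chaotic_hash(b"hello", b"secret")[:32])
    print(keyed_chaotic_hash(b"hellp", b"secret")[:32])  # small input change flips many bits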

9.
Application of genetic algorithms in neural network optimization   (Cited by 8; self: 4; others: 8)
罗文辉 《控制工程》2003,10(5):401-403
Combining the genetic algorithm with neural networks yields an evolutionary neural network. The basic principles of the genetic algorithm are introduced, and the use of genetic algorithms to optimize both the network structure and the network weights is discussed. Simulation experiments comparing the algorithm with the BP (backpropagation) algorithm verify its feasibility and effectiveness.

10.
A chaotic Hopfield network and its application in optimization computation   (Cited by 2; self: 1; others: 2)
This paper discusses the application of neural network algorithms to constrained optimization problems and proposes a chaotic neural network model. A chaotic mechanism is introduced into the Hopfield network: the search first proceeds under chaotic dynamics and then switches to HNN gradient-based optimization. Simulations on nonlinear function optimization problems show that the algorithm has a strong ability to escape local minima.
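A minimal sketch of the two-phase scheme described (chaotic coarse search over the feasible box, then gradient refinement from the best point found). The objective, the logistic map, the seeds, and the step sizes are assumptions rather than the authors' model.

    import numpy as np

    def f(x):                                  # example multimodal objective (assumed), min 0 at x = 0
        return np.sum(x ** 2 / 10.0 - np.cos(2 * x)) + len(x)

    def numeric_grad(x, h=1e-5):
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    def chaotic_then_gradient(dim=3, lo=-5.0, hi=5.0,
                              chaos_iters=3000, grad_iters=500, eta=0.01):
        # Phase 1: chaotic (logistic-map) traversal of the search box.
        z = 0.123 + 0.1 * np.arange(dim)       # distinct, non-degenerate chaotic seeds
        best_x, best_f = None, np.inf
        for _ in range(chaos_iters):
            z = 4.0 * z * (1.0 - z)
            x = lo + (hi - lo) * z
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        # Phase 2: gradient descent ("HNN-like" refinement) from the best chaotic point.
        x = best_x
        for _ in range(grad_iters):
            x = np.clip(x - eta * numeric_grad(x), lo, hi)
        return x, f(x)

    x_opt, f_opt = chaotic_then_gradient()
    print(np.round(x_opt, 3), round(float(f_opt), 4))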

11.
An improved chaos optimization method and its applications   (Cited by 8; self: 0; others: 8)
An improved chaos optimization method is proposed in which a chaotic variable perturbs the current point and a time-varying parameter gradually shrinks the perturbation amplitude as the search proceeds; a rule for setting the initial value of the time-varying parameter is also given. Applying the improved method to global optimization of continuous problems, simulation results show that it significantly improves both convergence speed and accuracy.
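A rough sketch of the described refinement stage, with assumed parameter values: a logistic-map chaotic variable perturbs the current best point, and a time-varying factor shrinks the perturbation amplitude as the search proceeds.

    import numpy as np

    def f(x):                                   # example objective (an assumption)
        return (x[0] - 1.5) ** 2 + 5 * np.sin(3 * x[0]) ** 2 + (x[1] + 0.5) ** 2

    def chaos_refine(x_best, lo, hi, iters=2000, r0=0.5, decay=0.998):
        z = np.array([0.123, 0.321])            # chaotic variables in (0, 1)
        f_best = f(x_best)
        r = r0 * (np.array(hi) - np.array(lo))  # initial perturbation amplitude
        for _ in range(iters):
            z = 4.0 * z * (1.0 - z)             # logistic map
            cand = np.clip(x_best + r * (2 * z - 1), lo, hi)
            if f(cand) < f_best:
                x_best, f_best = cand, f(cand)
            r *= decay                          # time-varying, shrinking amplitude
        return x_best, f_best

    x0 = np.array([3.0, 3.0])
    print(chaos_refine(x0, lo=[-4.0, -4.0], hi=[4.0, 4.0]))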

12.
Recently, cellular neural networks (CNNs) have been demonstrated to be a highly effective paradigm applicable in a wide range of areas. Typically, CNNs can be implemented using VLSI circuits, but this would unavoidably require additional hardware. On the other hand, we can also implement CNNs purely by software; this, however, would result in very low performance when given a large CNN problem size. Nowadays, conventional desktop computers are usually equipped with programmable graphics processing units (GPUs) that can support parallel data processing. This paper introduces a GPU-based CNN simulator. In detail, we carefully organize the CNN data as 4-channel textures, and efficiently implement the CNN computation as fragment programs running in parallel on a GPU. In this way, we can create a high performance but low-cost CNN simulator. Experimentally, we demonstrate that the resultant GPU-based CNN simulator can run 8–17 times faster than a CPU-based CNN simulator.
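For reference only (this is not the paper's GPU code), the per-cell computation that the fragment programs parallelize is the standard cellular neural network state update. A minimal NumPy version of one explicit Euler step over the whole grid is sketched below; the 3x3 templates and bias are illustrative values, not taken from the paper.

    import numpy as np

    def out(x):                                  # standard CNN output nonlinearity
        return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

    def conv3x3(img, t):                         # 3x3 template applied with zero padding
        p = np.pad(img, 1)
        r = np.zeros_like(img)
        for di in range(3):
            for dj in range(3):
                r += t[di, dj] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
        return r

    def cnn_step(x, u, A, B, z, dt=0.1):
        # One explicit Euler step of dx/dt = -x + A*y + B*u + z over all cells at once.
        return x + dt * (-x + conv3x3(out(x), A) + conv3x3(u, B) + z)

    # Example edge-detection-like template pair (values are illustrative).
    A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])
    B = np.array([[-1, -1, -1], [-1, 8.0, -1], [-1, -1, -1]])
    u = np.random.default_rng(0).uniform(-1, 1, (64, 64))   # input image
    x = np.zeros_like(u)
    for _ in range(100):
        x = cnn_step(x, u, A, B, z=-0.5)
    print(out(x).shape, float(out(x).min()), float(out(x).max()))

On a GPU, each cell's update is independent within a step, which is why the computation maps naturally onto per-pixel fragment programs.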

13.
Compared with other feed-forward neural networks, radial basis function neural networks (RBFNN) have many advantages which make them more suitable for nonlinear system modeling, and they have recently received considerable attention. In this paper, a RBFNN is employed to model strongly nonlinear systems. First, the problems of nonlinear system modeling are analyzed, and then the structure of the RBFNN as well as the training algorithm are improved to solve these problems. Finally, an industrial high-purity distillation column, which is a strongly nonlinear system, is successfully modeled with the improved RBFNN. Owing to the complexities of a nonlinear system, it is necessary to use a real-time model correction method to modify the parameters of the RBFNN model in real time. One efficient method is proposed in this paper. The idea is to employ the Givens transformation to modify the parameters of the RBFNN-based model. This work was presented, in part, at the International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1996.
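As a minimal sketch of the modeling step on toy data (not the paper's distillation-column model): Gaussian RBF hidden units with centers taken from the data, and output weights fitted by linear least squares. The paper updates the parameters online with Givens rotations; here a batch solve stands in for that step, and the data, center count, and widths are assumptions.

    import numpy as np

    def rbf_design(X, centers, widths):
        # Gaussian RBF hidden-layer outputs for all samples.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * widths ** 2))

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (200, 2))                    # training inputs (toy data)
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2        # nonlinear target (assumed)

    centers = X[rng.choice(len(X), 20, replace=False)]  # pick 20 centers from the data
    widths = np.full(20, 0.4)

    Phi = rbf_design(X, centers, widths)
    # Output weights by linear least squares; the paper instead updates the model
    # incrementally with Givens rotations, which this batch solve only approximates.
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    pred = Phi @ w
    print("training RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 4))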

14.
Global exponential stability problems are investigated for cellular neural networks (CNN) with multiple time-varying delays. Several new criteria in linear matrix inequality form or in algebraic form are presented to ascertain the uniqueness and global exponential stability of the equilibrium point for CNN with multiple time-varying delays and with constant time delays. The proposed method has the advantage of considering the difference of neuronal excitatory and inhibitory effects, which is also computationally efficient as it can be solved numerically using the recently developed interior-point algorithm or be checked using simple algebraic calculation. In addition, the proposed results generalize and improve upon some previous works. Two numerical examples are used to show the effectiveness of the obtained results.
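For reference, a standard model of a cellular neural network with multiple time-varying delays, of the kind such criteria typically address (the paper's exact notation may differ), is

    \dot{x}_i(t) = -d_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j\big(x_j(t)\big)
                   + \sum_{k=1}^{m}\sum_{j=1}^{n} b_{ij}^{(k)} f_j\big(x_j(t-\tau_k(t))\big) + u_i,
    \qquad i = 1,\dots,n, \quad 0 \le \tau_k(t) \le \tau,

and global exponential stability of an equilibrium x^* means there exist constants M \ge 1 and \varepsilon > 0 such that \|x(t)-x^*\| \le M e^{-\varepsilon t} \sup_{-\tau \le s \le 0}\|x(s)-x^*\| for all t \ge 0.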

15.
Optimizing neural network structure with a genetic simulated annealing algorithm   (Cited by 1; self: 0; others: 1)
Commonly used neural networks obtain optimal weights under a fixed network structure, which limits their practicality. A direction-based crossover operator and a mutation operator are introduced, and the simulated annealing algorithm is embedded in the genetic algorithm. Combining the strengths of both, a hybrid genetic simulated annealing algorithm for optimizing neural network structure is proposed that optimizes the structure and the weights simultaneously. Simulation experiments show that, compared with the genetic algorithm and simulated annealing alone, the neural network optimized by this algorithm converges faster and predicts more accurately, improving the network's processing ability.
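The hybrid's key ingredient is easy to sketch in isolation: a simulated-annealing (Metropolis) acceptance test applied to offspring inside the genetic loop, so a worse child can still replace its parent with a temperature-dependent probability. The fitness values and cooling schedule below are assumptions, not the authors' settings.

    import numpy as np
    rng = np.random.default_rng(1)

    def sa_accept(parent_fit, child_fit, T):
        # Metropolis criterion (minimization): always accept improvements;
        # accept a worse child with probability exp(-(child - parent) / T).
        if child_fit <= parent_fit:
            return True
        return rng.random() < np.exp(-(child_fit - parent_fit) / T)

    # Example: one generation's replacement decisions at a given temperature.
    T = 0.5
    parents = rng.uniform(0, 10, 8)                 # parent fitness values (to minimize)
    children = parents + rng.normal(0, 1.0, 8)      # offspring fitness after crossover/mutation
    kept = [c if sa_accept(p, c, T) else p for p, c in zip(parents, children)]
    print(np.round(kept, 3))
    T *= 0.95                                       # geometric cooling between generations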

16.
This paper presents an efficient approach based on a recurrent neural network for solving constrained nonlinear optimization. More specifically, a modified Hopfield network is developed, and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The main advantage of the developed network is that it handles optimization and constraint terms in different stages with no interference from each other. Moreover, the proposed approach does not require specification for penalty and weighting parameters for its initialization. A study of the modified Hopfield model is also developed to analyse its stability and convergence. Simulation results are provided to demonstrate the performance of the proposed neural network.
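A rough sketch of the two-stage idea under stated assumptions: linear equality constraints Av = b define the feasible ("valid") subspace through a projection v <- Tv + s, and each iteration alternates that projection with a gradient step on the objective. The quadratic objective, constraint, and step size below are illustrative and this is not the authors' exact network.

    import numpy as np

    # Equality constraints A v = b and a quadratic objective f(v) = 0.5 v'Qv + c'v (assumed).
    A = np.array([[1.0, 1.0, 1.0]])          # e.g. the entries of v must sum to 1
    b = np.array([1.0])
    Q = np.diag([2.0, 1.0, 4.0])
    c = np.array([-1.0, 0.5, 0.0])

    # Valid-subspace projection: v <- T v + s maps any point onto the constraint set.
    Ainv = A.T @ np.linalg.inv(A @ A.T)
    T = np.eye(3) - Ainv @ A
    s = Ainv @ b

    v, eta = np.zeros(3), 0.05
    for _ in range(500):
        v = T @ v + s                        # constraint stage: project onto A v = b
        v = v - eta * (Q @ v + c)            # optimization stage: gradient step on f
    v = T @ v + s                            # final projection back to the valid subspace
    print("v =", np.round(v, 4), " A v =", np.round(A @ v, 4),
          " f =", round(float(0.5 * v @ Q @ v + c @ v), 4))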

17.
A genetic tabu algorithm for optimizing neural network structure   (Cited by 2; self: 0; others: 2)
王淑玲  李振涛  邢棉 《计算机应用》2007,27(6):1426-1429
Commonly used neural networks obtain optimal weights under a fixed network structure, which limits their practicality. A direction-based crossover operator and a tabu mutation operator are introduced, and tabu search (TS) is embedded in the standard genetic algorithm. Combining the strengths of both, a hybrid genetic tabu algorithm for optimizing neural network structure is proposed that optimizes the structure and the weights simultaneously. Simulation experiments show that, compared with the genetic algorithm and tabu search alone, the neural network optimized by this algorithm converges faster and predicts more accurately, improving the network's processing ability.

18.
Ant colony optimization (ACO) is an optimization technique that was inspired by the foraging behaviour of real ant colonies. Originally, the method was introduced for the application to discrete optimization problems. Recently we proposed a first ACO variant for continuous optimization. In this work we choose the training of feed-forward neural networks for pattern classification as a test case for this algorithm. In addition, we propose hybrid algorithm variants that incorporate short runs of classical gradient techniques such as backpropagation. For evaluating our algorithms we apply them to classification problems from the medical field, and compare the results to some basic algorithms from the literature. The results show, first, that the best of our algorithms are comparable to gradient-based algorithms for neural network training, and second, that our algorithms compare favorably with a basic genetic algorithm.

19.
To improve the approximation ability of neural networks, a neural network model and algorithm based on sequence inputs are proposed. The hidden layer consists of sequence neurons and the output layer of ordinary neurons; the input is a multidimensional discrete sequence and the output is an ordinary real-valued vector. Each dimension of the discrete input sequence is first weighted and mapped point by point in order, these mapped results are then weighted, aggregated, and mapped again to produce the hidden-layer sequence neurons' outputs, and finally the network output is computed. A learning algorithm for the model is designed using the Levenberg-Marquardt algorithm. Simulation results show that when the number of input nodes and the sequence length are close, the model's approximation ability is clearly better than that of an ordinary neural network.
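One plausible reading of the described forward pass is sketched below; it is an interpretation under stated assumptions (sigmoid mappings, the weight shapes W1, W2, V, and a linear output layer), not the authors' exact equations, and the Levenberg-Marquardt training step is not shown.

    import numpy as np

    def g(x):                                  # sigmoid used for all mappings (assumed)
        return 1.0 / (1.0 + np.exp(-x))

    def sequence_nn_forward(S, W1, W2, V):
        """
        S  : input, shape (D, T)  -- D-dimensional discrete sequence of length T
        W1 : shape (H, D, T)      -- pointwise weights for the per-point mapping
        W2 : shape (H, D, T)      -- aggregation weights inside each sequence neuron
        V  : shape (O, H)         -- ordinary output-layer weights
        """
        mapped = g(W1 * S[None, :, :])              # weight and map each point in order
        hidden = g((W2 * mapped).sum(axis=(1, 2)))  # weighted aggregation, then mapping
        return V @ hidden                           # ordinary (linear) output neurons

    rng = np.random.default_rng(0)
    D, T, H, O = 3, 10, 6, 2
    S = rng.integers(0, 5, size=(D, T)).astype(float)
    y = sequence_nn_forward(S, rng.normal(0, 0.3, (H, D, T)),
                            rng.normal(0, 0.3, (H, D, T)), rng.normal(0, 0.5, (O, H)))
    print(np.round(y, 4))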
