Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper deals with problems of stability and stabilization in discrete-time neural networks. The structures under consideration belong to the class of so-called locally recurrent globally feedforward networks. Each processing unit exhibits dynamic behavior, realized by introducing into the neuron structure a linear dynamic system in the form of an infinite impulse response filter; in this way a dynamic neural network is obtained. It is well known that the crucial problems with neural networks of this dynamic type are stability and stabilization during learning. The paper formulates stability conditions for the analyzed class of networks. Moreover, a stabilization problem is defined and solved as a constrained optimization task. Two methods are proposed to tackle this problem: the first is based on gradient projection (GP) and the second on minimum distance projection (MDP). Notably, both methods can easily be introduced into an existing learning algorithm as an additional step, and suitable convergence conditions can be developed for them. The efficiency and usefulness of the proposed approaches are demonstrated through a number of experiments, including numerical complexity analysis, stabilization effectiveness, and the identification of an industrial process.
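The gradient-projection idea in this abstract can be illustrated with a minimal sketch. Assuming a single first-order IIR neuron filter y[k] = a·y[k-1] + b·u[k], whose stability region is |a| < 1, a GP-style learning step takes the gradient update and projects the feedback coefficient back into the stable region (the function name and margin parameter are illustrative, not from the paper):

```python
import numpy as np

def gp_step(a, grad, lr=0.1, margin=1e-3):
    """One gradient step on the feedback coefficient of a first-order
    IIR neuron filter, followed by projection onto the stability
    region |a| < 1 (gradient projection, GP)."""
    a_new = a - lr * grad
    # project back inside the open unit interval if the step left it
    return float(np.clip(a_new, -1.0 + margin, 1.0 - margin))
```

After each learning step the filter pole stays strictly inside the unit circle, which is the stabilization-as-constrained-optimization idea in miniature.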

2.
An Optimization Method for Hopfield Neural Networks Based on Local Evolution (cited 4 times: 0 self-citations, 4 by others)
This paper proposes an optimization method for Hopfield neural networks based on local evolution, combining a genetic algorithm with a Hopfield network to overcome both the Hopfield network's tendency to converge to local optima and the genetic algorithm's slow convergence. The Hopfield network first iterates its state equations to reduce the network energy; after convergence, a genetic algorithm searches within a local neighborhood to escape possible local-optimum traps, after which the Hopfield network resumes iterative optimization. This locally evolved Hopfield optimization method is especially suited to large-scale optimization problems. Results on image segmentation and on a large 200-city traveling salesman problem show clear improvements in both global convergence rate and convergence speed.
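The hybrid loop described above, Hopfield energy descent followed by a local genetic search, can be sketched as follows (a toy {-1, +1} state model; the population size and flip count are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(W, b, x):
    # standard Hopfield energy for bipolar states
    return -0.5 * x @ W @ x - b @ x

def hopfield_descent(W, b, x, sweeps=50):
    # asynchronous threshold updates lower the energy monotonically
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = 1.0 if W[i] @ x + b[i] >= 0 else -1.0
    return x

def local_ga(W, b, x, pop=20, gens=10, flip=1):
    # small genetic search around the converged state to escape
    # a possible local-minimum trap; keeps only improvements
    best = x.copy()
    for _ in range(gens):
        for _ in range(pop):
            cand = best.copy()
            idx = rng.choice(len(x), size=flip, replace=False)
            cand[idx] *= -1
            if energy(W, b, cand) < energy(W, b, best):
                best = cand
    return best
```

In the full scheme, the state returned by `local_ga` would be fed back into `hopfield_descent` for further iterative refinement.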

3.
Systems based on artificial neural networks have high computational rates owing to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. Neural networks with feedback connections provide a computing model capable of solving a large class of optimization problems. This paper presents a novel approach for solving dynamic programming problems using artificial neural networks. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. Simulated examples are presented and compared with other neural networks. The results demonstrate that the proposed method gives a significant improvement.

4.
An engineering annealing method for finding optimal solutions of cellular neural networks is presented. Cellular neural networks are very promising for solving many scientific problems in image processing, pattern recognition, and optimization through the use of a stored program with predetermined templates. Hardware annealing, a parallel version of mean-field annealing in analog networks, is a highly efficient method of finding optimal solutions of cellular neural networks. It does not require any stochastic procedure and hence can be very fast. The generalized energy function of the network is first increased by reducing the voltage gain of each neuron. The hardware annealing then searches for the global minimum-energy state by continuously increasing the gain of the neurons. The process of global optimization by the proposed annealing can be described by eigenvalue problems in the time-varying dynamic system. In typical non-optimization problems, it also provides enough stimulation to frozen neurons caused by ill-conditioned initial states.
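The gain schedule at the heart of hardware annealing can be sketched directly: start each neuron with a low voltage gain (a smooth, nearly convex energy surface) and raise it monotonically so the network settles toward a low-energy binary state. The concrete schedule shape and bounds below are assumptions, not the paper's values:

```python
import numpy as np

def anneal_gains(T, g_min=0.1, g_max=10.0):
    """Monotone gain schedule for hardware annealing: T gain values
    rising geometrically from a low gain to a high one."""
    return np.geomspace(g_min, g_max, T)

def neuron_output(u, g):
    # sigmoidal activation whose steepness is the annealed gain;
    # at high gain the output is effectively binary
    return np.tanh(g * u)
```

At the start of the schedule the network explores a smoothed energy landscape; by the end, the near-binary outputs encode a candidate optimal solution.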

5.
This paper proposes a neural network model (LHONN) for solving dynamic hierarchical optimization problems in discrete-time large-scale systems. The network is fully integrated: 1) each subsystem's dynamic equations are embedded in the corresponding local optimization network, giving the overall network a low dimension and making it easy to implement in hardware; 2) the upper-level coordination network and the local optimization networks solve simultaneously, yielding high optimization speed suitable for real-time system optimization.

6.
General Evolution Rules for Hopfield-Type Networks Solving Optimization Problems (cited 1 time: 0 self-citations, 1 by others)
Two general evolution rules are proposed for solving optimization problems with discrete Hopfield-type networks and delayed discrete Hopfield-type networks. A dynamic threshold along the evolution sequence is the key feature of these rules, and convergence theorems are obtained for them, extending existing convergence results for both network types. A necessary and sufficient condition is given relating local maxima of the energy function to the stable states of delayed discrete Hopfield-type networks. Since delayed discrete Hopfield-type networks apply more effectively to optimization computation, a general decomposition strategy is also given. Experiments show that, compared with algorithms based on discrete Hopfield-type networks, the proposed algorithms achieve both a higher convergence rate and a shorter evolution time.
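A single synchronous update of a delayed discrete Hopfield-type network, the model these evolution rules generalize, looks like this (a minimal sketch with a fixed zero threshold rather than the paper's dynamic one):

```python
import numpy as np

def delayed_dhnn_step(W0, W1, b, x_t, x_prev):
    """One synchronous update of a delayed discrete Hopfield-type
    network: the next state depends on both the current state x_t and
    the state x_prev one delay step earlier."""
    field = W0 @ x_t + W1 @ x_prev + b
    # hard threshold onto bipolar states
    return np.where(field >= 0, 1.0, -1.0)
```

Setting the delay matrix W1 to zero recovers the ordinary discrete Hopfield-type update.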

7.
Fernando A., Amit. Neurocomputing, 2009, 72(16-18): 3863.
This paper presents two neural networks to find the optimal point in convex optimization problems and variational inequality problems, respectively. The domain of the functions that define the problems is a convex set determined by convex inequality constraints and affine equality constraints. The neural networks are based on gradient descent and exact penalization, and the convergence analysis is based on a control Lyapunov function analysis, since the dynamical system corresponding to each neural network may be viewed as a so-called variable-structure closed-loop control system.

8.
Global exponential stability is a desirable property for dynamic systems. The paper studies the global exponential stability of several existing recurrent neural networks for solving linear programming problems, convex programming problems with interval constraints, convex programming problems with nonlinear constraints, and monotone variational inequalities. In contrast to the existing results on global exponential stability, the present results do not require additional conditions on the weight matrices of recurrent neural networks and improve some existing conditions for global exponential stability. Therefore, the stability results in the paper further demonstrate the superior convergence properties of the existing neural networks for optimization.

9.
Artificial neural networks are bio-inspired mathematical models that have been widely used to solve complex problems. The training of a neural network is an important issue, since traditional gradient-based algorithms become easily trapped in locally optimal solutions, increasing the time taken in the experimental step. This problem is greater in recurrent neural networks, where gradient propagation across the recurrence makes training difficult for long-term dependencies. On the other hand, evolutionary algorithms are search and optimization techniques that have proved effective on many problems. In the case of recurrent neural networks, training with evolutionary algorithms has provided promising results. In this work, we propose two hybrid evolutionary algorithms as an alternative to improve the training of dynamic recurrent neural networks. The experimental section makes a comparative study of the proposed algorithms for training Elman recurrent neural networks on time-series prediction problems.
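The approach of training a recurrent network by evolutionary search rather than gradient propagation can be sketched with an Elman network and a simple (1+1) evolution strategy. This is a deliberately minimal stand-in for the paper's hybrid algorithms; the architecture size, mutation scale, and function names are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def elman_predict(params, series, n_hidden=4):
    # Elman recurrence: hidden state feeds back into itself
    Wx, Wh, bh, Wy = params
    h = np.zeros(n_hidden)
    preds = []
    for x in series[:-1]:
        h = np.tanh(Wx * x + Wh @ h + bh)
        preds.append(Wy @ h)            # one-step-ahead prediction
    return np.array(preds)

def fitness(params, series):
    # mean squared one-step-ahead prediction error
    return float(np.mean((elman_predict(params, series) - series[1:]) ** 2))

def evolve(series, n_hidden=4, gens=200, sigma=0.1):
    # (1+1)-ES: mutate all weights, keep the child only if it improves
    params = [rng.normal(0, 0.5, n_hidden),
              rng.normal(0, 0.5, (n_hidden, n_hidden)),
              np.zeros(n_hidden),
              rng.normal(0, 0.5, n_hidden)]
    best = fitness(params, series)
    for _ in range(gens):
        cand = [p + rng.normal(0, sigma, p.shape) for p in params]
        f = fitness(cand, series)
        if f < best:
            params, best = cand, f
    return params, best
```

No gradient is ever propagated through the recurrence, which is exactly the property that makes evolutionary training attractive for long-term dependencies.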

10.
We present a general methodology for designing optimization neural networks. We prove that the neural networks constructed by using the proposed method are guaranteed to be globally convergent to solutions of problems with bounded or unbounded solution sets, in contrast with the gradient methods whose convergence is not guaranteed. We show that the proposed method contains both the gradient methods and nongradient methods employed in existing optimization neural networks as special cases. Based on the theoretical results of the proposed method, we study the convergence and stability of general gradient models in the case of unisolated solutions. Using the proposed method, we derive some new neural network models for a very large class of optimization problems, in which the equilibrium points correspond to exact solutions and there is no variable parameter. Finally, some numerical examples show the effectiveness of the method.
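The simplest member of the gradient-model family discussed here is the flow dx/dt = -∇f(x), whose equilibria are the stationary points of f; an Euler discretization makes the idea concrete (step size and iteration count are illustrative):

```python
import numpy as np

def gradient_flow(grad_f, x0, lr=0.05, steps=500):
    """Euler-discretized gradient flow dx/dt = -grad f(x): the simplest
    'gradient model' optimization network, whose equilibrium points are
    the stationary points of f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad_f(x)
    return x
```

The paper's contribution is precisely the guarantees that this naive gradient model lacks: global convergence even for unbounded or unisolated solution sets.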

11.
Simulation optimization studies the problem of optimizing simulation-based objectives. The field has a strong history in engineering but often suffers from several difficulties, including long running times and NP-hardness, and it is an active topic in system simulation and operations research. This paper presents a hybrid approach that combines evolutionary algorithms with neural networks (NNs) for solving simulation optimization problems. In this hybrid approach, we use NNs to replace the known simulation model when evaluating subsequent iterative solutions; specifically, we apply dynamic structure-based neural networks to learn and replace the simulation model. The determination of these dynamic structure-based neural networks is the kernel of this paper. The experimental results demonstrate that the proposed approach can find optimal or close-to-optimal solutions and is superior to other recent algorithms in simulation optimization.

12.
席裕庚 (Xi Yugeng). 自动化学报 (Acta Automatica Sinica), 2013, 39(11): 1758-1768.
With the rapid development of communication and network technology, more and more complex dynamic networks have appeared in society, the economy, and even daily life. Network science, as an emerging interdisciplinary field, has produced rich results on the performance characteristics, evolution processes, and control methods of complex dynamic networks. Large-scale systems control theory, whose main subjects are the behavior analysis, control, and optimization of high-dimensional dynamic systems, should offer useful lessons for the study of complex networks. Addressing several hot topics in complex dynamic network research, this paper explores the possibility of applying large-scale systems control theory and methods to the structural analysis and control of complex networks, and analyzes the difficulties involved and approaches to overcoming them. For the control and optimization of large-scale complex dynamic networks, a multi-level hierarchical structure is proposed that integrates the macroscopic analysis methods of network science, the quantitative design methods of control science, and the intelligent processing methods of information science.

13.
The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the computation and storage complexities of classical methods such as DP. This paper mainly describes the theory of NDO, while the two other companion papers on this topic explain the background for the development of NDO and demonstrate the method with several applications, including control of autonomous vehicles and of a robot arm, respectively.

14.
This paper studies the output convergence of a class of recurrent neural networks with time-varying inputs. The model of the studied networks has a dynamic structure different from that of the well-known Hopfield model: it does not contain linear terms. Since different structures of differential equations usually result in quite different dynamic behaviors, the convergence of this model differs substantially from that of the Hopfield model. This class of neural networks has found many successful applications in solving optimization problems. Some sufficient conditions guaranteeing output convergence of the networks are derived.

15.
Up to now, there have been many attempts to use artificial neural networks (ANNs) for solving optimization problems, and some types of ANNs, such as the Hopfield network and the Boltzmann machine, have been applied to combinatorial optimization problems. However, there are restrictions on the use of ANNs as optimizers. For example: (1) ANNs cannot optimize continuous-variable problems; (2) discrete problems must be mapped onto the network architecture; and (3) most existing neural networks are applicable only to a class of smooth optimization problems and require global convexity conditions on the objective functions and constraints. In this paper, we introduce a new procedure for stochastic optimization by a recurrent ANN. The concept of fractional calculus is adopted to propose a novel weight-updating rule. The introduced method is called the fractional-neuro-optimizer (FNO). It starts with an initial solution and adjusts the network's weights by a new heuristic, unsupervised rule to reach a good solution. The efficiency of FNO is compared with genetic algorithm and particle swarm optimization techniques. Finally, the proposed FNO is used to determine the parameters of a proportional-integral-derivative controller for an automatic voltage regulator power system and is applied to the design of water distribution networks.
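The fractional-calculus ingredient of FNO rests on the Grünwald-Letnikov (GL) difference, which weights the whole history of a sequence with generalized binomial coefficients. A sketch of that building block follows; the exact FNO weight-updating rule is not reproduced here, only the fractional difference it is built from:

```python
import numpy as np

def gl_coeffs(alpha, m):
    """Grunwald-Letnikov coefficients c_j = (-1)^j * C(alpha, j),
    computed by the recurrence c_j = c_{j-1} * (1 - (alpha + 1) / j)."""
    c = np.empty(m)
    c[0] = 1.0
    for j in range(1, m):
        c[j] = c[j - 1] * (1 - (alpha + 1) / j)
    return c

def gl_fractional_diff(x, alpha, h=1.0):
    """GL fractional difference of order alpha of a sampled sequence x:
    each output mixes the entire history with the GL coefficients."""
    n = len(x)
    c = gl_coeffs(alpha, n)
    out = np.array([sum(c[j] * x[k - j] for j in range(k + 1)) for k in range(n)])
    return out / h ** alpha
```

For alpha = 1 the coefficients collapse to (1, -1, 0, ...), recovering the ordinary first difference; non-integer orders introduce the long memory that distinguishes a fractional update rule from plain gradient descent.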

16.
The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the background and motivations for the development of NDO, while the two other subsequent papers of this topic present the theory of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.

17.
In many applications a class of optimization problems called quadratic programming with a special quadratic constraint (QPQC) often arises, for example in maximum entropy spectral estimation, FIR filter design with time-frequency constraints, and the design of FIR filter banks with the perfect reconstruction property. To deal with this kind of optimization problem, and inspired by the computational virtues of analog dynamic neural networks, this paper proposes a feedback neural network for solving this class of QPQC problems in real time. The stability, convergence, and computational performance of the proposed network are analyzed and proved in detail, theoretically guaranteeing its computational effectiveness and capability. The theoretical analysis shows that the solution of a QPQC problem is precisely the generalized minimum eigenvector of the objective matrix with respect to the constraint matrix. A number of simulation experiments further support the theoretical analysis and illustrate the computational performance of the proposed network.
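The paper's key characterization, that the QPQC solution is the generalized minimum eigenvector of the objective matrix with respect to the constraint matrix, can be checked numerically by reducing the generalized problem to a standard symmetric one (a direct linear-algebra check, not the paper's neural dynamics):

```python
import numpy as np

def qpqc_solution(A, B):
    """Minimize x^T A x subject to x^T B x = 1 (a QPQC with A symmetric
    and B symmetric positive definite): the minimizer is the generalized
    minimum eigenvector of (A, B), found via the substitution y = L^T x
    with B = L L^T."""
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    M = Linv @ A @ Linv.T               # standard symmetric problem in y
    vals, vecs = np.linalg.eigh(M)      # eigenvalues in ascending order
    x = Linv.T @ vecs[:, 0]             # back-transform smallest eigenvector
    return x / np.sqrt(x @ B @ x)       # renormalize onto the constraint
```

The attained objective value x^T A x equals the smallest generalized eigenvalue, matching the paper's characterization.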

18.
Feedback neural networks enjoy considerable popularity as a means of approximately solving combinatorial optimization problems. It is now well established how to map problems onto networks so that invalid solutions are never found. It is less clear how the networks' solutions compare in quality with those obtained using other optimization techniques; such issues are addressed in this paper. A linearized analysis of annealed network dynamics allows a prototypical network solution to be identified in a pertinent eigenvector basis, and the likely quality of this solution can be predicted by examining optimal solutions in the same basis. Applying this methodology to traveling salesman problems suggests that neural networks are well suited to solving Euclidean but not random problems; this is confirmed by extensive experiments. The failure of a network to adequately solve even 10-city problems is highly significant.

19.
A Method Based on Grey Relational Analysis (cited 1 time: 0 self-citations, 1 by others)
To address the limitations that the uniform relational degree algorithm lacks generality and that the adaptive particle swarm algorithm with dynamically changing inertia weight (DCW) struggles to escape local convergence, this paper proposes a complete relational degree algorithm and a dynamic particle swarm optimization algorithm with adaptive mutation. The complete relational degree algorithm is used to select auxiliary variables for soft sensing. Besides a dynamic inertia weight, the improved particle swarm optimizer introduces adaptive learning factors and a new mutation operator. To construct a better-performing neural network, the improved particle swarm optimizer is used to optimize all of the network's weight parameters, and the resulting soft-sensor model is applied to predicting the gasoline dry point in delayed coking. Experimental results show that, compared with a neural network optimized by the DCW algorithm (DCWNN), the proposed approach achieves better generalization, higher accuracy, and good application prospects.
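A generic sketch of particle swarm optimization with a linearly decreasing inertia weight and a simple random-restart mutation conveys the flavor of the improved optimizer. The paper's adaptive learning factors and exact mutation operator are not reproduced here, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, dim, n=20, iters=100, w0=0.9, w1=0.4, c1=2.0, c2=2.0, pm=0.1):
    """PSO with a linearly decreasing inertia weight plus a mutation
    operator that re-seeds a particle with small probability, helping
    the swarm escape local optima."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pval)].copy()
    for t in range(iters):
        w = w0 + (w1 - w0) * t / (iters - 1)     # dynamic inertia weight
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        mut = rng.random(n) < pm                  # mutation: random restart
        x[mut] = rng.uniform(-5, 5, (mut.sum(), dim))
        fv = np.apply_along_axis(f, 1, x)
        better = fv < pval
        pbest[better], pval[better] = x[better], fv[better]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())
```

In the soft-sensor setting, `f` would be the training error of the neural network as a function of its flattened weight vector.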

20.
For pt. II, see ibid., pp. 490-501. The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the computation and storage complexities of classical methods such as DP. This paper demonstrates NDO with several applications, including control of autonomous vehicles and of a robot arm, while the two other companion papers on this topic describe the background for the development of NDO and present the theory of the method, respectively.

