Similar Documents
20 similar documents found (search time: 31 ms).
1.
In this paper, a neural network model is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve general convex nonlinear programming (GCNLP) problems. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the optimal solution of the GCNLP problem. By employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Simulation results show that the proposed neural network is feasible and efficient.

2.
The variational inequality offers a unified approach to many important optimization and equilibrium problems. Based on necessary and sufficient conditions for the solution, this paper presents a novel neural network model for solving variational inequalities with linear and nonlinear constraints. Three sufficient conditions are provided to ensure that the proposed network with an asymmetric mapping is stable in the sense of Lyapunov and converges to an exact solution of the original problem. Meanwhile, using a new energy function, the proposed network with a gradient mapping is also proved to be stable in the sense of Lyapunov and to converge in finite time under some mild conditions. Compared with existing neural networks, the new model can be applied to some nonmonotone problems, has no adjustable parameter, and has lower complexity, so its structure is very simple. Since the proposed network can be used to solve a broad class of optimization problems, it has great application potential. The validity and transient behavior of the proposed neural network are demonstrated by several numerical examples.
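As a concrete, hedged illustration of the projection dynamics on which such models are typically built, the sketch below integrates dx/dt = P_Omega(x - alpha*F(x)) - x for an asymmetric affine mapping over a box set. The right-hand side, the mapping, and all parameters are assumptions for illustration, not the network published in this entry.

```python
# Sketch of a projection neural network for a variational inequality
# VI(F, Omega): find x* in Omega with (x - x*)' F(x*) >= 0 for all x in Omega.
# Assumed dynamics: dx/dt = P_Omega(x - alpha * F(x)) - x, forward Euler.
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box constraint set Omega = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def vi_projection_network(F, x0, lo, hi, alpha=0.2, dt=0.01, steps=20000):
    x = x0.astype(float)
    for _ in range(steps):
        x = x + dt * (project_box(x - alpha * F(x), lo, hi) - x)
    return x

# Asymmetric mapping with positive-definite symmetric part (so F is monotone).
M = np.array([[3.0, 1.0], [-1.0, 2.0]])
q = np.array([-4.0, 1.0])
F = lambda x: M @ x + q

print(vi_projection_network(F, np.zeros(2), lo=0.0, hi=5.0))
# Interior solution of M x + q = 0: approximately [1.286, 0.143].
```

With F chosen as the gradient of a convex objective, the same flow addresses the constrained convex programs discussed in the neighboring entries.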

3.
In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and lower implementation complexity than existing neural networks for such problems. It is shown that the proposed neural network is stable in the sense of Lyapunov and globally convergent to an optimal solution within finite time, provided the objective function is strictly convex. Compared with existing convergence results, the present results do not require a Lipschitz continuity condition on the objective function. Finally, examples are provided to show the applicability of the proposed neural network.

4.
A novel neural network for nonlinear convex programming   (Cited in total: 5; self-citations: 0; cited by others: 5)
In this paper, we present a neural network for solving the nonlinear convex programming problem in real time by means of the projection method. The main idea is to convert the convex programming problem into a variational inequality problem; a dynamical system and a convex energy function are then constructed for the resulting variational inequality problem. It is shown that the proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem. Compared with existing neural networks for nonlinear convex programming, the proposed neural network requires no Lipschitz condition, has no adjustable parameter, and has a simple structure. The validity and transient behavior of the proposed neural network are demonstrated by simulation results.

5.
This paper presents a gradient neural network model for solving convex nonlinear programming (CNP) problems. The main idea is to convert the CNP problem into an equivalent unconstrained minimization problem whose objective serves as an energy function. A gradient model is then defined directly from the derivatives of the energy function. It is shown that the proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem, and that a larger scaling factor yields a faster convergence rate for the trajectory. The validity and transient behavior of the neural network are demonstrated on various examples.
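A minimal sketch of this gradient-network idea under stated assumptions: the energy E below is an illustrative penalty construction, not this entry's, and kappa plays the role of the scaling factor whose increase speeds up the trajectory.

```python
# Sketch of a gradient neural network dx/dt = -kappa * grad E(x).
# Assumed energy: objective plus a quadratic penalty on the constraint.
import numpy as np

def grad_E(x, rho=10.0):
    # Illustrative CNP: minimize (x1-1)^2 + (x2-2)^2  subject to  x1 + x2 = 1.
    g_obj = np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])
    g_pen = rho * (x[0] + x[1] - 1.0) * np.ones(2)   # grad of (rho/2)*h(x)^2
    return g_obj + g_pen

def gradient_network(x0, kappa, dt=1e-3, steps=50000):
    x = x0.astype(float)
    for _ in range(steps):
        x = x - dt * kappa * grad_E(x)               # forward-Euler descent
    return x

for kappa in (1.0, 10.0):   # a larger scaling factor converges faster
    print(kappa, gradient_network(np.zeros(2), kappa))
```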

6.
In this paper, a new neural network is presented for solving nonlinear convex programs with linear constraints. Under the condition that the objective function is convex, the proposed neural network is shown to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the original problem. Several numerical examples show the effectiveness of the proposed neural network.

7.
This paper presents a new neural network model for solving constrained variational inequality problems by converting the necessary and sufficient conditions for the solution into a system of nonlinear projection equations. By defining a proper convex energy function, five sufficient conditions are provided to ensure that the proposed neural network is stable in the sense of Lyapunov and converges to an exact solution of the original problem. The proposed neural network subsumes an existing model and can be applied to some nonmonotone and nonsmooth problems. The validity and transient behavior of the proposed neural network are demonstrated by numerical examples.

8.
Design and analysis of maximum Hopfield networks   (Cited in total: 7; self-citations: 0; cited by others: 7)
Since McCulloch and Pitts presented a simplified neuron model in 1943, several neuron models have been proposed. Among them, the binary maximum neuron model was introduced by Takefuji et al. and successfully applied to some combinatorial optimization problems. Takefuji et al. also presented a proof of local minimum convergence for the maximum neural network. In this paper we revisit this convergence analysis and show that the model does not guarantee descent for a large class of energy functions. We also propose a new maximum neuron model, the optimal competitive Hopfield model (OCHOM), which always guarantees and maximizes the decrease of any Lyapunov energy function. Funabiki et al. (1997, 1998) applied the maximum neural network to the n-queens problem and showed that it gave the best overall performance among existing neural networks for this problem. Lee et al. (1992) applied the maximum neural network to the bipartite subgraph problem, showing that its solution quality was superior to that of the best existing algorithm. However, simulation results on the n-queens and bipartite subgraph problems show that the OCHOM is much superior to the maximum neural network in terms of solution quality and computation time.
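A hedged sketch of the winner-take-all group update at the heart of such maximum-neuron models, applied to the n-queens problem mentioned above. The energy and update rule are an assumed simplification for illustration, not the OCHOM equations from this entry; note that picking the within-group minimum can never increase the energy, which is exactly the monotone-descent property at issue.

```python
# Sketch of a maximum-neuron (winner-take-all) update for n-queens:
# neurons are grouped by row, each row is one-hot, and a group update
# moves the row's "1" to the column minimizing the global energy.
import numpy as np

def energy(cols):
    """Number of attacking queen pairs, one queen per row at column cols[i]."""
    n, e = len(cols), 0
    for i in range(n):
        for j in range(i + 1, n):
            if cols[i] == cols[j] or abs(int(cols[i]) - int(cols[j])) == j - i:
                e += 1
    return e

def max_neuron_nqueens(n, sweeps=200, seed=0):
    rng = np.random.default_rng(seed)
    cols = rng.integers(0, n, size=n)           # random initial assignment
    for _ in range(sweeps):
        for i in range(n):                      # update one group (row)
            cols[i] = min(range(n),
                          key=lambda c: energy(np.r_[cols[:i], c, cols[i+1:]]))
        if energy(cols) == 0:                   # conflict-free layout found
            break
    return cols, energy(cols)

print(max_neuron_nqueens(8))  # may stop at a local minimum; energy never rises
```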

9.

This paper proposes a simplified neural network for generalized least absolute deviation by transforming its optimality conditions into a system of double projection equations. The proposed network is proved to be stable in the sense of Lyapunov and to converge to an exact optimal solution of the original problem from any starting point. Compared with existing neural networks for generalized least absolute deviation, the new model has the fewest neurons and low complexity, and is suitable for parallel implementation. The validity and transient behavior of the proposed neural network are demonstrated by numerical examples.


10.

This paper offers a recurrent neural network for support vector machine (SVM) learning in stochastic support vector regression with probabilistic constraints. The SVM is first converted into an equivalent quadratic programming (QP) formulation in the linear and nonlinear cases. An artificial neural network for SVM learning is then proposed. The presented neural network framework guarantees convergence to the optimal solution of the SVM problem. The existence and convergence of the trajectories of the network are studied, and Lyapunov stability of the considered neural network is shown. The efficiency of the proposed method is demonstrated on three illustrative examples.
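To make the SVM-to-QP conversion concrete, here is a hedged sketch that poses a bias-free epsilon-insensitive support vector regression dual as a box-constrained QP and relaxes it with a projected-gradient flow. The bias-free simplification (which removes the equality constraint from the dual) and all parameters are assumptions, not this entry's network.

```python
# Sketch: bias-free epsilon-SVR dual as a box-constrained QP,
#   min over a, a* in [0, C]^n of
#   0.5 (a - a*)' K (a - a*) + eps * sum(a + a*) - y' (a - a*),
# relaxed by projected gradient descent (a neural-dynamics stand-in).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def svr_network(X, y, C=10.0, eps=0.1, dt=1e-3, steps=30000):
    K = rbf_kernel(X, X)
    n = len(y)
    a, a_s = np.zeros(n), np.zeros(n)            # dual variables alpha, alpha*
    for _ in range(steps):
        Kb = K @ (a - a_s)
        a = np.clip(a - dt * (Kb + eps - y), 0.0, C)       # step in alpha
        a_s = np.clip(a_s - dt * (-Kb + eps + y), 0.0, C)  # step in alpha*
    return a - a_s, K

X = np.linspace(0.0, 3.0, 30)[:, None]
y = np.sin(2.0 * X[:, 0])
beta, K = svr_network(X, y)
print(np.max(np.abs(K @ beta - y)))  # training residual shrinks toward ~eps
```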


11.
For optimal control problems in systems subject to input constraints and unmeasurable states, this paper combines the actor-critic approximate-optimal algorithm from reinforcement learning with backstepping and proposes an optimal tracking control strategy. First, a neural network is used to construct a nonlinear observer that estimates the unmeasurable system states. Then, a non-quadratic utility function is designed to handle the input constraints. Compared with existing optimal methods, the proposed optimal tracking control method not only retains backstepping's advantages in handling...

12.
A neural network model is presented for solving the nonlinear bilevel programming problem, which is an NP-hard problem. The proposed neural network is proved to be Lyapunov stable and capable of generating approximate optimal solutions to the nonlinear bilevel programming problem. The asymptotic properties of the neural network are analyzed, and conditions for asymptotic stability, solution feasibility, and solution optimality are derived. The transient behavior of the neural network is simulated, and the validity of the network is verified with numerical examples.

13.
In this paper, a recurrent neural network for both convex and nonconvex equality-constrained optimization problems is proposed, which makes use of a cost gradient projection onto the tangent space of the constraints. The proposed neural network constructs a generically nonfeasible trajectory, satisfying the constraints only as t → ∞. Local convergence results are given that do not assume convexity of the optimization problem to be solved, and global convergence results are established for convex optimization problems. An exponential convergence rate is shown to hold in both the convex and nonconvex cases. Numerical results indicate that the proposed method is efficient and accurate.
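A hedged sketch of tangent-space gradient-projection dynamics of the general kind described above, with P = I - J'(JJ')^{-1}J projecting the cost gradient onto the tangent space of h(x) = 0 and a restoration term pulling the trajectory toward feasibility. The restoration gain and the test problem are assumptions, not this entry's exact vector field.

```python
# Sketch of tangent-space gradient-projection dynamics for equality
# constraints h(x) = 0 (assumed form):
#   dx/dt = -P(x) grad f(x) - k * J'(JJ')^{-1} h(x),
# where P = I - J'(JJ')^{-1} J projects onto the tangent space of h.
import numpy as np

def f_grad(x):                       # f(x) = x1^2 + 2*x2^2
    return np.array([2.0 * x[0], 4.0 * x[1]])

def h(x):                            # constraint: x1 + x2 - 1 = 0
    return np.array([x[0] + x[1] - 1.0])

def h_jac(x):
    return np.array([[1.0, 1.0]])

def tangent_flow(x0, k=5.0, dt=1e-3, steps=20000):
    x = x0.astype(float)
    for _ in range(steps):
        J = h_jac(x)
        JJt_inv = np.linalg.inv(J @ J.T)
        P = np.eye(len(x)) - J.T @ JJt_inv @ J
        x = x + dt * (-P @ f_grad(x) - k * (J.T @ JJt_inv @ h(x)))
    return x

print(tangent_flow(np.array([2.0, -2.0])))  # minimizer is (2/3, 1/3)
```

The trajectory starts off the constraint line (the "generically nonfeasible trajectory" above) and satisfies h(x) = 0 only in the limit.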

14.
A neural network model is proposed for solving a class of nonsmooth nonconvex optimization problems with equality and inequality constraints. It is proved that, when the objective function is bounded below, the solution trajectory of the neural network converges to the feasible region in finite time. Moreover, the equilibrium point set of the neural network coincides with the critical point set of the optimization problem, and the network ultimately converges to this critical point set. Unlike traditional penalty-function-based neural network models, the proposed model requires no computation of penalty parameters. Finally, simulation experiments verify the effectiveness of the proposed model.

15.
Wireless video sensor networks (WVSNs) have attracted considerable interest because of the enhancements they offer to existing wireless sensor network applications and their potential in other research areas. However, the introduction of video raises new challenges: transmitting video and imaging data requires both energy efficiency and quality of service (QoS) assurance to ensure efficient use of sensor resources and the integrity of the collected information. To this end, this paper proposes a joint power, rate, and lifetime management algorithm for WVSNs based on the network utility maximization framework. The resulting optimization problem is nonconcave, which makes it difficult to solve. This paper makes progress on this type of problem using particle swarm optimization (PSO), an evolutionary algorithm modeled on the movement and intelligence of swarms searching for the most fertile feeding location; it can handle discontinuous, nonconvex, and nonlinear problems efficiently. First, since chaotic maps enjoy determinism, ergodicity, and stochastic-like properties, the paper introduces chaos into PSO with an adaptive inertia weight factor, avoiding the original PSO's tendency to settle into local optima late in the evolution while keeping its rapid convergence. Second, based on the distribution characteristics of the actual network, the resource control problem is decomposed hierarchically into a number of subproblems, where each user corresponds to a subsystem solved with the proposed CPSO3 method. Through cooperative coevolution, these subproblems interact with each other to reach the system optimum. Numerical examples show that the algorithm guarantees fast convergence and fairness within a few iterations and solves the nonconvex optimization problems very efficiently.
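A hedged sketch of chaos-assisted PSO in the spirit described above: a logistic map modulates the inertia weight from one iteration to the next. The CPSO3 update rules themselves are not reproduced here, and the test function and all coefficients are assumptions.

```python
# Sketch of PSO with a chaotic (logistic-map) adaptive inertia weight,
# minimizing the nonconvex Rastrigin function as a stand-in problem.
import numpy as np

def rastrigin(x):
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def chaotic_pso(dim=5, n_particles=30, iters=300, seed=1):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.apply_along_axis(rastrigin, 1, pos)
    gbest = pbest[np.argmin(pbest_val)].copy()
    z = 0.7                                   # logistic-map state
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)               # chaotic sequence in (0, 1)
        w = 0.4 + 0.5 * z                     # chaos-modulated inertia weight
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -5.12, 5.12) # keep particles in the domain
        vals = np.apply_along_axis(rastrigin, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

print(chaotic_pso())   # best position and value found
```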

16.
A shortest-path routing algorithm using the Hopfield neural network with a modified Lyapunov function is proposed. The modified Lyapunov energy function for the optimal routing problem determines the routing order for a source and multiple destinations, and chiefly prevents the solution path from containing loops and partitions. Experiments are performed on 3000 networks of up to 50 nodes with randomly selected link costs. The performance of the proposed algorithm is compared with several conventional algorithms, including Ali and Kamoun's, Park and Choi's, and Ahn and Ramakrishna's, in terms of route optimality and convergence rate. The results show that the proposed algorithm outperforms the conventional methods in all experiments, with particularly significant improvements in route optimality and convergence rate as the network size approaches 50 nodes.

17.
In this paper, a feedback neural network model is proposed for solving a class of convex quadratic bilevel programming problems based on the idea of successive approximation. Unlike existing neural network models, the proposed neural network has the fewest state variables and a simple structure. Based on Lyapunov theory, we prove that, under certain conditions, the equilibrium point sequence of the feedback neural network converges approximately to an optimal solution of the convex quadratic bilevel problem, and the corresponding sequence of function values converges approximately to the optimal value. Simulation experiments on three numerical examples and a portfolio selection problem show the efficiency and performance of the proposed neural network approach.

18.
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) such that E(x, y) is convex and differentiable and the resulting network is more efficient. This network encodes all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with a unique solution or infinitely many solutions, we prove rigorously that, from any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. Unlike existing networks based on penalty or Lagrange methods, the proposed network handles the inequality constraints properly. Simulation results show that the proposed neural network is feasible and efficient.
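For concreteness, here is a hedged sketch of the classical primal-dual (Arrow-Hurwicz) gradient flow for an equality-constrained QP, the general kind of Lagrangian and duality-based dynamics described above; it is not this entry's specific E(x, y), and the problem data are made up for illustration.

```python
# Sketch of saddle-point dynamics for  min 0.5 x'Qx + c'x  s.t.  Ax = b:
#   dx/dt = -(Qx + c + A'y),   dy/dt = Ax - b   (Arrow-Hurwicz flow).
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite Hessian
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x, y = np.zeros(2), np.zeros(1)
dt = 1e-3
for _ in range(100000):
    x = x - dt * (Q @ x + c + A.T @ y)   # primal descent on the Lagrangian
    y = y + dt * (A @ x - b)             # dual ascent on the Lagrangian

print(x, y)                              # trajectory limit
kkt = np.linalg.solve(np.block([[Q, A.T], [A, np.zeros((1, 1))]]),
                      np.concatenate([-c, b]))
print(kkt)                               # exact KKT solution [x*; y*]
```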

19.
This paper presents a dynamic optimization scheme for solving degenerate convex quadratic programming (DCQP) problems. Drawing on the saddle point theorem, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle, a neural network model based on a dynamical system is constructed. The equilibrium point of the model is proved to be equivalent to the optimal solution of the DCQP problem. It is also shown that the network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Several practical examples show the feasibility and efficiency of the method.

20.
1 Introduction. Optimization problems arise in a broad variety of scientific and engineering applications, and for many practical engineering applications real-time solutions are required. One possible and very pr…
