Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper presents a dynamic optimization scheme for solving degenerate convex quadratic programming (DCQP) problems. Based on the saddle-point theorem, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle, a neural network model described by a dynamical system is constructed. The equilibrium point of the model is proved to be equivalent to the optimal solution of the DCQP problem. It is also shown that the network is stable in the Lyapunov sense and globally convergent to an exact optimal solution of the original problem. Several practical examples demonstrate the feasibility and efficiency of the method.
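The abstract does not spell out the dynamics, but saddle-point constructions of this kind are typically primal-dual flows on the Lagrangian of the QP. Below is a minimal sketch of such a flow, integrated with explicit Euler steps; the problem data, step size, and iteration count are illustrative assumptions, not the paper's model.

```python
import numpy as np

def qp_saddle_flow(Q, c, A, b, dt=1e-3, steps=20000):
    """Euler integration of a projection-type primal-dual flow for
    min 0.5*x'Qx + c'x  s.t.  Ax = b, x >= 0.
    A generic sketch only; the paper's exact dynamics may differ."""
    x = np.zeros(Q.shape[0])
    y = np.zeros(A.shape[0])
    for _ in range(steps):
        grad = Q @ x + c + A.T @ y           # Lagrangian gradient in x
        dx = np.maximum(x - grad, 0.0) - x   # projection keeps x >= 0
        dy = A @ x - b                       # gradient ascent in the multiplier
        x, y = x + dt * dx, y + dt * dy
    return x

# Degenerate example: Q is positive semidefinite but singular.
Q = np.array([[2.0, 0.0], [0.0, 0.0]])
c = np.array([-2.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(qp_saddle_flow(Q, c, A, b))  # approaches the optimizer [1, 0]
```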

2.
In this paper, a feedback neural network model is proposed for solving a class of convex quadratic bi-level programming problems based on the idea of successive approximation. Unlike existing neural network models, the proposed network has the smallest number of state variables and a simple structure. Based on Lyapunov theory, we prove that under certain conditions the equilibrium-point sequence of the feedback neural network converges approximately to an optimal solution of the convex quadratic bi-level problem, and the corresponding sequence of function values converges approximately to the optimal value. Simulation experiments on three numerical examples and a portfolio selection problem show the efficiency and performance of the proposed neural network approach.

3.
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) so that E(x, y) is convex and differentiable, which makes the resulting network more efficient. The network incorporates all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with either a unique solution or infinitely many solutions, we prove rigorously that, for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from existing networks that use the penalty or Lagrange method, and it handles inequality constraints properly. Simulation results show that the proposed neural network is feasible and efficient.

4.
In this paper, a new neural network is presented for solving nonlinear convex programs with linear constraints. Under the condition that the objective function is convex, the proposed neural network is shown to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the original problem. Several numerical examples show the effectiveness of the proposed neural network.

5.
A novel neural network for nonlinear convex programming
In this paper, we present a neural network for solving the nonlinear convex programming problem in real time by means of the projection method. The main idea is to convert the convex programming problem into a variational inequality problem. A dynamical system and a convex energy function are then constructed for the resulting variational inequality problem. It is shown that the proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem. Compared with existing neural networks for nonlinear convex programming, the proposed network requires no Lipschitz condition, has no adjustable parameters, and has a simple structure. The validity and transient behavior of the proposed neural network are demonstrated by simulation results.
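The projection method referred to here is commonly written as the flow dx/dt = P_Omega(x - alpha*grad f(x)) - x, whose equilibria solve the associated variational inequality. A minimal sketch, with the box constraint, step size alpha, and problem data chosen as illustrative assumptions:

```python
import numpy as np

def projection_network(grad_f, project, x0, alpha=0.5, dt=0.01, steps=5000):
    """Integrate dx/dt = P_Omega(x - alpha*grad_f(x)) - x by Euler steps.
    Equilibria satisfy x = P_Omega(x - alpha*grad_f(x)), i.e. optimality."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (project(x - alpha * grad_f(x)) - x)
    return x

# Example: minimize (x1-2)^2 + (x2+1)^2 over the box [0, 1]^2.
grad_f = lambda x: 2 * (x - np.array([2.0, -1.0]))
project = lambda z: np.clip(z, 0.0, 1.0)
print(projection_network(grad_f, project, x0=[0.5, 0.5]))  # -> approx [1, 0]
```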

6.
This paper presents a gradient neural network model for solving convex nonlinear programming (CNP) problems. The main idea is to convert the CNP problem into an equivalent unconstrained minimization problem whose objective is an energy function. A gradient model is then defined directly from the derivatives of the energy function. It is shown that the proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem. Moreover, a larger scaling factor yields a faster convergence rate of the trajectory. The validity and transient behavior of the neural network are demonstrated on various examples.
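As a rough illustration of this conversion, the constraints can be folded into a penalty energy E(x) and the flow dx/dt = -eta*grad E(x) integrated, with eta playing the role of the scaling factor; in continuous time a larger eta only rescales time, so the trajectory reaches a neighborhood of the optimum sooner. The objective, penalty weight, and parameters below are assumptions for demonstration only.

```python
import numpy as np

def gradient_flow(grad_E, x0, eta=5.0, dt=1e-3, steps=10000):
    """Euler integration of dx/dt = -eta * grad_E(x); eta is the scaling factor."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * eta * grad_E(x)
    return x

# Energy for min x1^2 + x2^2  s.t.  x1 + x2 >= 1, via a quadratic penalty.
mu = 50.0
def grad_E(x):
    v = max(0.0, 1.0 - x[0] - x[1])       # constraint violation
    return 2 * x - mu * v * np.ones(2)    # grad of f plus penalty gradient
print(gradient_flow(grad_E, x0=[0.0, 0.0]))  # near [0.5, 0.5], up to penalty bias
```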

7.
This paper presents a recurrent neural network for solving nonconvex nonlinear optimization problems subject to nonlinear inequality constraints. First, the p-power transformation is exploited to locally convexify the Lagrangian function of the nonconvex problem. Next, the proposed neural network is constructed from the Karush–Kuhn–Tucker (KKT) optimality conditions and the projection function. An important property of this neural network is that its equilibrium point corresponds to the optimal solution of the original problem. By utilizing an appropriate Lyapunov function, it is shown that the proposed neural network is stable in the sense of Lyapunov and converges to the global optimal solution of the original problem. The sensitivity of the convergence is also analysed by varying the scaling factors. Compared with other existing neural networks for such problems, the proposed network offers higher accuracy, faster convergence, and lower complexity. Finally, simulation results show the benefits of the proposed model, which matches or outperforms existing models.
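Setting the p-power convexification aside (it needs problem-specific tuning), the KKT-plus-projection construction can be sketched as a primal-dual flow in which the multiplier update is projected onto the nonnegative orthant. The problem data and step sizes below are illustrative assumptions, and the convexification step is omitted.

```python
import numpy as np

def kkt_projection_flow(grad_f, g, grad_g, x0, dt=5e-3, steps=20000):
    """Primal-dual flow assembled from the KKT conditions:
        dx/dt   = -(grad_f(x) + grad_g(x)^T @ lam)
        dlam/dt = max(0, lam + g(x)) - lam   # projects multipliers onto lam >= 0
    Sketch of the construction only; the paper's p-power step is omitted."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(g(x))
    for _ in range(steps):
        dx = -(grad_f(x) + grad_g(x).T @ lam)
        dlam = np.maximum(0.0, lam + g(x)) - lam
        x, lam = x + dt * dx, lam + dt * dlam
    return x, lam

# Example: min (x1-3)^2 + (x2-2)^2  s.t.  x1^2 + x2^2 <= 5.
grad_f = lambda x: 2 * (x - np.array([3.0, 2.0]))
g = lambda x: np.array([x[0]**2 + x[1]**2 - 5.0])
grad_g = lambda x: np.array([[2 * x[0], 2 * x[1]]])
x_star, lam_star = kkt_projection_flow(grad_f, g, grad_g, [0.0, 0.0])
print(x_star, lam_star)  # KKT point on the circle boundary, lam > 0
```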

8.
1 Introduction. Optimization problems arise in a broad variety of scientific and engineering applications. For many practical engineering applications, real-time solutions of optimization problems are often required. One possible and very pr…

9.
Feng Jiqiang, Qin Sitian, Shi Fengli, Zhao Xiaoyue. Neural Computing & Applications, 2018, 30(11): 3399-3408

In this paper, a recurrent neural network with a new tunable activation function is proposed to solve a class of convex quadratic bilevel programming problems. It is proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov, and that the state of the network converges to an equilibrium point in finite time. In contrast to existing neurodynamic approaches, the proposed neural network solves the convex quadratic bilevel programming problem in finite time, and the convergence time can be quantitatively estimated. Finally, two numerical examples are presented to show the effectiveness of the proposed recurrent neural network.


10.
In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and lower implementation complexity than existing neural networks for such problems. It is shown that the proposed network is stable in the sense of Lyapunov and globally convergent to an optimal solution within finite time, provided the objective function is strictly convex. Unlike existing convergence results, the present results do not require a Lipschitz continuity condition on the objective function. Finally, examples are provided to show the applicability of the proposed neural network.

11.
A neural network for solving convex nonlinear programming problems is proposed in this paper. The distinguishing features of the proposed network are that the primal and dual problems can be solved simultaneously, all necessary and sufficient optimality conditions are incorporated, and no penalty parameter is involved. Based on Lyapunov, LaSalle, and set-stability theories, we rigorously prove that, for an arbitrary initial point, the trajectory of the proposed network converges to the set of its equilibrium points, regardless of whether the convex nonlinear programming problem has a unique optimal solution or infinitely many. Numerical simulation results also show that the proposed network is feasible and efficient. In addition, a general method for transforming nonlinear programming problems into unconstrained problems is proposed.

12.
For nonlinear programming problems with linear constraints, the penalty-function approach to optimization is used to transform the problem into a convex quadratic program. Exploiting the structural properties of neural networks, the required energy function is defined so that the network converges to a unique stable point, thereby solving the linearly constrained nonlinear program. Simulation results show that the method is effective and correct, and that it extends to parametric nonlinear programming and multi-objective programming.
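For a quadratic objective (the simplest instance of the nonlinear objective considered here), the penalty transformation is particularly transparent: adding (mu/2)*||Ax - b||^2 to 0.5*x'Qx + c'x yields an unconstrained convex quadratic whose unique stationary point plays the role of the network's stable point. A minimal sketch with assumed data:

```python
import numpy as np

# Objective 0.5*x'Qx + c'x with linear constraints Ax = b.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

mu = 1e4                            # penalty weight; larger -> closer to feasible
Q_pen = Q + mu * A.T @ A            # Hessian of the penalized quadratic
c_pen = c - mu * A.T @ b            # linear term of the penalized quadratic
x = np.linalg.solve(Q_pen, -c_pen)  # unique stationary point
print(x, A @ x - b)                 # near-feasible solution, approx [0.2, 0.8]
```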

13.
A novel neural network approach is proposed for solving the linear bilevel programming problem. The proposed neural network is proved to be Lyapunov stable and capable of generating an optimal solution to the linear bilevel programming problem. Numerical results show that the neural network approach is feasible and efficient.

14.
This paper presents a new neural network model for solving constrained variational inequality problems by converting the necessary and sufficient conditions for a solution into a system of nonlinear projection equations. By defining a proper convex energy function, five sufficient conditions are provided to ensure that the proposed neural network is stable in the sense of Lyapunov and converges to an exact solution of the original problem. The proposed neural network includes an existing model as a special case and can be applied to some nonmonotone and nonsmooth problems. The validity and transient behavior of the proposed neural network are demonstrated by numerical examples.
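In the simplest setting, the system of projection equations reduces to the fixed-point condition x = P_Omega(x - alpha*F(x)). The sketch below solves a small variational inequality with an asymmetric (hence non-gradient) mapping by damped fixed-point iteration; the data and parameters are assumptions:

```python
import numpy as np

# VI(F, Omega): find x in Omega with F(x)'(y - x) >= 0 for all y in Omega,
# equivalent to the projection equation x = P_Omega(x - alpha*F(x)).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])   # asymmetric, so F is not a gradient
q = np.array([-2.0, 1.0])
F = lambda x: M @ x + q
project = lambda z: np.clip(z, 0.0, 1.0)  # Omega = [0, 1]^2

x, alpha = np.zeros(2), 0.3
for _ in range(2000):                     # damped fixed-point iteration
    x = 0.5 * x + 0.5 * project(x - alpha * F(x))
print(x, np.allclose(x, project(x - alpha * F(x))))  # solution, residual check
```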

15.
This article considers the fixed-time distributed optimization problem for multi-agent systems with external disturbances, in which the global objective is a convex combination of local objective functions. To solve this problem, a directed communication network is carefully designed, and an integral sliding-mode control protocol based on the gradient of the global objective function is first proposed. Two further distributed optimal protocols are then designed using the gradient and the Hessian matrix of the local objective functions, respectively. By employing Lyapunov stability theory, graph theory, convex analysis, and inequality techniques, we prove that all proposed protocols drive the agents to consensus and to accurate convergence to the optimal solution within fixed time. Finally, numerical simulations verify the theoretical results.

16.
In this paper, we revisit the mean-variance model of Markowitz and the construction of the risk-return efficient frontier. A few other models that use alternative risk metrics, such as mean absolute deviation, minimax and maximin, and models with diagonal quadratic objectives, are also introduced. We then present a neurodynamic model for solving these kinds of problems. By employing a Lyapunov function approach, it is shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. The validity and transient behavior of the neural network are demonstrated on several portfolio selection examples.
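For reference, the Markowitz problem revisited here is min 0.5*x'Sx subject to mu'x = r and 1'x = 1, where S is the return covariance and r the target return. With short sales allowed, the KKT system is linear and can be solved directly, as in this sketch with made-up data:

```python
import numpy as np

# Toy mean-variance problem: min 0.5*x'Sx  s.t.  mu'x = r, 1'x = 1.
S = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.08, 0.03],
              [0.01, 0.03, 0.12]])   # covariance of returns (assumed data)
mu = np.array([0.10, 0.12, 0.15])   # expected returns (assumed data)
r = 0.12                            # target portfolio return

# KKT system: [S C'; C 0][x; lam] = [0; d], with C = [mu; 1'] and d = [r; 1].
C = np.vstack([mu, np.ones(3)])
K = np.block([[S, C.T], [C, np.zeros((2, 2))]])
rhs = np.concatenate([np.zeros(3), [r, 1.0]])
x = np.linalg.solve(K, rhs)[:3]
print(x, mu @ x, x.sum())           # weights meet the return and budget targets
```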

17.
A neural network model is presented for solving the nonlinear bilevel programming problem, which is NP-hard. The proposed neural network is proved to be Lyapunov stable and capable of generating an approximate optimal solution to the nonlinear bilevel programming problem. The asymptotic properties of the neural network are analyzed, and conditions for asymptotic stability, solution feasibility, and solution optimality are derived. The transient behavior of the neural network is simulated, and the validity of the network is verified with numerical examples.

18.
In this letter, a delayed projection neural network for solving convex quadratic programming problems is proposed. The neural network is proved to be globally exponentially stable and to converge to an optimal solution of the optimization problem. Three examples show the effectiveness of the proposed network.
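A delayed projection network evaluates the projection term at the delayed state x(t - tau). The abstract does not give the exact model, so the following is only a plausible sketch of such dynamics for a box-constrained QP, with tau, alpha, and the problem data all assumed:

```python
import numpy as np
from collections import deque

def delayed_projection_qp(Q, c, lo, hi, alpha=0.5, tau=0.1, dt=0.01, steps=4000):
    """Euler scheme for the delayed flow
        dx/dt = -x(t) + P_box(x(t-tau) - alpha*(Q @ x(t-tau) + c)),
    a simplified stand-in for the letter's delayed projection model."""
    n = Q.shape[0]
    hist = deque([np.zeros(n)] * max(1, round(tau / dt)))  # state history buffer
    x = np.zeros(n)
    for _ in range(steps):
        xd = hist.popleft()                  # state tau seconds in the past
        hist.append(x.copy())
        x = x + dt * (-x + np.clip(xd - alpha * (Q @ xd + c), lo, hi))
    return x

# Box-constrained QP: min 0.5*x'Qx + c'x over 0 <= x <= 2.
Q = np.array([[3.0, 0.5], [0.5, 2.0]])
c = np.array([-3.0, 1.0])
print(delayed_projection_qp(Q, c, lo=0.0, hi=2.0))  # approaches [1, 0]
```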

19.
A neural network approach is presented for solving mathematical programs with equilibrium constraints (MPEC). The proposed neural network is proved to be Lyapunov stable and capable of generating an approximate optimal solution to the MPEC problem. The asymptotic properties of the neural network are analyzed, and conditions for asymptotic stability, solution feasibility, and solution optimality are derived. The transient behavior of the neural network is simulated, and the validity of the network is verified with numerical examples.

20.
Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Unlike existing neural network optimization models, the proposed network involves no dual variables, penalty parameters, or Lagrange multipliers. It has the fewest state variables and a very simple structure, and in particular it has better asymptotic stability. For an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the feedback idea may be applied to other optimization problems. Feasibility and efficiency are substantiated by simulation examples.
