Similar Documents
20 similar documents found (search time: 15 ms)
1.
Recently, a projection neural network for solving monotone variational inequalities and constrained optimization problems was developed. In this paper, we propose a general projection neural network for solving a wider class of variational inequalities and related optimization problems. In addition to its simple structure and low complexity, the proposed neural network includes existing neural networks for optimization, such as the projection neural network, the primal-dual neural network, and the dual neural network, as special cases. Under various mild conditions, the proposed general projection neural network is shown to be globally convergent, globally asymptotically stable, and globally exponentially stable. Furthermore, several improved stability criteria on two special cases of the general projection neural network are obtained under weaker conditions. Simulation results demonstrate the effectiveness and characteristics of the proposed neural network.
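The core idea behind projection neural networks of this kind can be illustrated with a minimal sketch (not the general model of the paper): the network is the ODE dx/dt = P_Omega(x - alpha*F(x)) - x, whose equilibria are exactly the VI solutions. Below is a hedged toy example assuming an affine monotone mapping F(x) = Mx + p on the box [0,1]^2, integrated by a forward-Euler step; the matrix, vector, and step sizes are illustrative choices, not values from the paper.

```python
def F(x):
    # Hypothetical monotone affine mapping F(x) = M x + p (M positive definite)
    M = [[2.0, 0.0], [0.0, 2.0]]
    p = [-2.0, -4.0]
    return [M[i][0] * x[0] + M[i][1] * x[1] + p[i] for i in range(2)]

def project_box(v, lo=0.0, hi=1.0):
    # Projection onto the feasible box [lo, hi]^2
    return [min(max(vi, lo), hi) for vi in v]

def projection_nn(x, alpha=0.5, h=0.1, steps=3000):
    # Forward-Euler discretization of  dx/dt = P_Omega(x - alpha*F(x)) - x
    for _ in range(steps):
        y = project_box([xi - alpha * fi for xi, fi in zip(x, F(x))])
        x = [xi + h * (yi - xi) for xi, yi in zip(x, y)]
    return x

x_star = projection_nn([0.5, 0.5])  # converges to the VI solution [1.0, 1.0]
```

Here the unconstrained stationary point of F lies outside the box, so the trajectory settles on the boundary point [1, 1], which satisfies the variational inequality condition on Omega.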

2.
This paper presents a recurrent neural-network model for solving a special class of general variational inequalities (GVIs), which includes classical VIs as special cases. It is proved that the proposed neural network (NN) for solving this class of GVIs can be globally convergent, globally asymptotically stable, and globally exponentially stable under different conditions. The proposed NN can be viewed as a modified version of the general projection NN existing in the literature. Several numerical examples are provided to demonstrate the effectiveness and performance of the proposed NN.

3.
In recent years, a projection neural network was proposed for solving linear variational inequality (LVI) problems and related optimization problems, which required the monotonicity of the LVI to guarantee its convergence to the optimal solution. In this paper, we present a new result on the global exponential convergence of the projection neural network. Unlike existing convergence results for the projection neural network, our main result does not assume the monotonicity of LVI problems. Therefore, the projection neural network can be further guaranteed to solve a class of non-monotone LVI and non-convex optimization problems. Numerical examples illustrate the effectiveness of the obtained result.

4.
The variational inequality provides a unified framework for many important optimization and equilibrium problems. Based on the necessary and sufficient conditions for the solution, this paper presents a novel neural network model for solving variational inequalities with linear and nonlinear constraints. Three sufficient conditions are provided to ensure that the proposed network with an asymmetric mapping is stable in the sense of Lyapunov and converges to an exact solution of the original problem. Meanwhile, the proposed network with a gradient mapping is also proved to be stable in the sense of Lyapunov and to have finite-time convergence under some mild conditions, by using a new energy function. Compared with existing neural networks, the new model can be applied to solve some nonmonotone problems, has no adjustable parameter, and has lower complexity; thus the structure of the proposed network is very simple. Since the proposed network can be used to solve a broad class of optimization problems, it has great application potential. The validity and transient behavior of the proposed neural network are demonstrated by several numerical examples.

5.
In this letter, a delayed projection neural network for solving convex quadratic programming problems is proposed. The neural network is proved to be globally exponentially stable and to converge to an optimal solution of the optimization problem. Three examples show the effectiveness of the proposed network.

6.
This paper proposes a new cooperative projection neural network (CPNN), which automatically combines three individual neural network models with a common projection term. As a special case, the proposed CPNN can include three recent recurrent neural networks for solving monotone variational inequality problems with limit or linear constraints, respectively. Under the monotonicity condition of the corresponding Lagrangian mapping, the proposed CPNN is theoretically guaranteed to solve monotone variational in...

7.
This paper presents a new neural network model for solving constrained variational inequality problems by converting the necessary and sufficient conditions for the solution into a system of nonlinear projection equations. Five sufficient conditions are provided to ensure that the proposed neural network is stable in the sense of Lyapunov and converges to an exact solution of the original problem by defining a proper convex energy function. The proposed neural network includes an existing model, and can be applied to solve some nonmonotone and nonsmooth problems. The validity and transient behavior of the proposed neural network are demonstrated by some numerical examples.

8.
Xia Y, Ye D. Neural Computation, 2008, 20(9): 2227-2237
Recently, the extended projection neural network was proposed to solve constrained monotone variational inequality problems and a class of constrained nonmonotone variational inequality problems. Its exponential convergence was established under the positive definiteness condition on the Jacobian matrix of the nonlinear mapping. This note proposes new results on the exponential convergence of the output trajectory of the extended projection neural network under weaker conditions, in which the Jacobian matrix of the nonlinear mapping need not be positive definite: it may be only positive semidefinite, or even indefinite. The new results therefore further demonstrate that the extended projection neural network has a fast convergence rate when solving a class of constrained monotone and nonmonotone variational inequality problems. Illustrative examples show the significance of the obtained results.

9.
Recently, a neutral-type delayed projection neural network (NDPNN) was developed for solving variational inequality problems. This paper addresses the global stability and convergence of the NDPNN and presents new results for it to solve linear variational inequalities (LVIs). Compared with existing convergence results for neural networks solving LVIs, our results do not require the LVI to be monotone, so the NDPNN is guaranteed to solve a class of non-monotone LVIs. All the results are expressed in terms of linear matrix inequalities, which can be easily checked. Simulation examples demonstrate the effectiveness of the obtained results.

10.
The linear variational inequality provides a unified formulation for several important optimization and equilibrium problems. We give a neural network model for solving asymmetric linear variational inequalities. The model is based on a simple projection and contraction method. Computer simulations are performed for linear programming (LP) and linear complementarity problems (LCPs). The test results for the LP problem demonstrate that our model converges significantly faster than three existing neural network models examined in a comparative study.
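An LCP can be posed as a variational inequality over the nonnegative orthant: find x >= 0 with w = Mx + q >= 0 and x·w = 0. The sketch below uses a plain fixed-point projection iteration rather than the authors' projection-and-contraction scheme; the matrix M and vector q are made-up illustrative data (M symmetric positive definite, so the iteration contracts for a small enough step).

```python
def lcp_map(x, M, q):
    # w = M x + q; the LCP asks for x >= 0, w >= 0, x . w = 0
    return [M[i][0] * x[0] + M[i][1] * x[1] + q[i] for i in range(2)]

def solve_lcp(M, q, alpha=0.3, steps=500):
    # Fixed-point projection iteration  x <- max(0, x - alpha*(M x + q)),
    # convergent here because M is symmetric positive definite and
    # alpha is below 2 / lambda_max(M)
    x = [0.0, 0.0]
    for _ in range(steps):
        w = lcp_map(x, M, q)
        x = [max(0.0, xi - alpha * wi) for xi, wi in zip(x, w)]
    return x

M = [[2.0, 1.0], [1.0, 2.0]]
q = [-3.0, -3.0]
x = solve_lcp(M, q)  # -> approximately [1.0, 1.0], where M x + q = 0
```

At the computed point both complementarity conditions hold trivially because the residual w vanishes; for solutions on the boundary of the orthant, the projection keeps the corresponding coordinate pinned at zero instead.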

11.
Most existing neural networks for solving linear variational inequalities (LVIs) with the mapping Mx + p require positive definiteness (or positive semidefiniteness) of M. In this correspondence, it is revealed that this condition is sufficient but not necessary for an LVI to be strictly monotone (or monotone) on its constrained set when equality constraints are present. It is then proposed to reformulate monotone LVIs with equality constraints into LVIs with inequality constraints only, which can then be solved by existing neural networks. General projection neural networks are designed in this correspondence for solving the transformed LVIs. Compared with existing neural networks, the designed neural networks feature lower model complexity. Moreover, the neural networks are guaranteed to be globally convergent to solutions of the LVI under the condition that the linear mapping Mx + p is monotone on the constrained set. Because quadratic and linear programming problems are special cases of the LVI in terms of solutions, the designed neural networks can solve them efficiently as well. In addition, it is discovered that the designed neural network in a specific case turns out to be the primal-dual network for solving quadratic or linear programming problems. The effectiveness of the neural networks is illustrated by several numerical examples.

12.
In this paper, we propose two new projection methods for solving variational inequality problems (VIs). The methods are simple: they use only function evaluations and projections onto the feasible set. Under the conditions that the underlying function is continuous and satisfies a generalized monotonicity assumption, the methods are proven to converge globally to a solution of the variational inequality. Some preliminary computational results are reported to illustrate the efficiency of the methods.
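A classical method in this "function evaluation plus projection" family is the extragradient method: a predictor step with F(x), then a corrector step with F evaluated at the predictor. The sketch below is a generic illustration, not the specific methods of this paper: it assumes a made-up skew-symmetric mapping (monotone and Lipschitz but not strongly monotone), a case where the plain one-projection iteration can fail but the extragradient iteration converges.

```python
def F(x):
    # Skew-symmetric (hence monotone, Lipschitz) mapping; the VI over
    # any box containing the origin has the unique solution x* = (0, 0)
    return [x[1], -x[0]]

def project(v, lo=-1.0, hi=1.0):
    # Projection onto the box [-1, 1]^2
    return [min(max(vi, lo), hi) for vi in v]

def extragradient(x, tau=0.3, steps=600):
    # Predictor y = P(x - tau*F(x)); corrector x = P(x - tau*F(y)).
    # tau must be below 1/L, where L is the Lipschitz constant of F (here L = 1).
    for _ in range(steps):
        y = project([xi - tau * fi for xi, fi in zip(x, F(x))])
        x = project([xi - tau * fi for xi, fi in zip(x, F(y))])
    return x

x = extragradient([1.0, 1.0])  # -> close to the solution (0, 0)
```

Evaluating F at the predicted point is what damps the rotation that F induces; with the single-projection iteration the same mapping spirals outward instead of converging.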

13.
Recently, several recurrent neural networks for solving constrained optimization problems were developed. In this paper, we propose a novel approach that uses a projection neural network for real-time identification and control of time-varying systems. In addition to low complexity and a simple structure, the proposed neural network can handle wider classes of time-varying systems compared with other neural networks used for optimization, such as Hopfield neural networks. Simulation results demonstrate the effectiveness and characteristics of the proposed neural network compared with a Hopfield neural network.

14.
This paper investigates the existence, uniqueness, and global exponential stability (GES) of the equilibrium point for a large class of neural networks with globally Lipschitz continuous activations, including the widely used sigmoidal activations and the piecewise linear activations. The provided sufficient condition for GES is mild, and some conditions easily examined in practice are also presented. The GES of neural networks in the case of locally Lipschitz continuous activations is also obtained under an appropriate condition. The analysis results given in the paper extend substantially the existing relevant stability results in the literature, and therefore expand significantly the application range of neural networks in solving optimization problems. As a demonstration, we apply the obtained analysis results to the design of a recurrent neural network (RNN) for solving the linear variational inequality problem (VIP) defined on any nonempty and closed box set, which includes the box-constrained quadratic programming problem and the linear complementarity problem as special cases. It can be inferred that the linear VIP has a unique solution for the class of Lyapunov diagonally stable matrices, and that the synthesized RNN is globally exponentially convergent to the unique solution. Some illustrative simulation examples are also given.

15.
In this paper, a new neural network is presented for solving nonlinear convex programs with linear constraints. Under the condition that the objective function is convex, the proposed neural network is shown to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the original problem. Several numerical examples show the effectiveness of the proposed neural network.

16.
A new neural network for solving linear programming problems with bounded variables is presented. The network is shown to be completely stable and globally convergent to the solutions of the linear programming problems. The proposed new network is capable of achieving the exact solutions, in contrast to existing optimization neural networks which need a suitable choice of the network parameters and thus can obtain only approximate solutions. Furthermore, both the primal problems and their dual problems are solved simultaneously by the new network.

17.
Global exponential stability is a desirable property for dynamic systems. The paper studies the global exponential stability of several existing recurrent neural networks for solving linear programming problems, convex programming problems with interval constraints, convex programming problems with nonlinear constraints, and monotone variational inequalities. In contrast to the existing results on global exponential stability, the present results do not require additional conditions on the weight matrices of recurrent neural networks and improve some existing conditions for global exponential stability. Therefore, the stability results in the paper further demonstrate the superior convergence properties of the existing neural networks for optimization.

18.
It is well known that the least absolute deviation (LAD) criterion, or L1-norm criterion, used for parameter estimation is characterized by robustness: the estimated parameters are highly resistant (insensitive) to large changes in the sampled data. This is an extremely useful feature, especially when the sampled data are known to be contaminated by occasionally occurring outliers or by spiky noise. In our previous works, we proposed the least absolute deviation neural network (LADNN) to solve unconstrained LAD problems. Theoretical proofs and numerical simulations have shown that the LADNN is Lyapunov stable and can globally converge to the exact solution of a given unconstrained LAD problem. We have also demonstrated its excellent application value in time-delay estimation. More generally, a practical LAD problem may contain linear constraints, such as a set of equalities and/or inequalities; this is called the constrained LAD problem, and the unconstrained LAD can be considered a special form of it. In this paper, we present a new neural network called the constrained least absolute deviation neural network (CLADNN) to solve general constrained LAD problems. Theoretical proofs and numerical simulations demonstrate that the proposed CLADNN is Lyapunov stable and globally converges to the exact solution of a given constrained LAD problem, independent of initial values. The numerical simulations also illustrate that the proposed CLADNN can be used to robustly estimate parameters for nonlinear curve fitting, which is extensively used in signal and image processing.
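The robustness of the L1 criterion is easy to demonstrate numerically. The sketch below is not the LADNN itself: it is a plain diminishing-step subgradient iteration for a one-parameter through-origin fit min_m sum_i |m*x_i - y_i|, on a made-up data set whose last sample is a gross outlier, compared against the closed-form least-squares slope.

```python
def sgn(r):
    # Subgradient of |r|: sign, with 0 at r = 0
    return (r > 0) - (r < 0)

def lad_slope(xs, ys, steps=4000):
    # Diminishing-step subgradient descent on  sum_i |m*x_i - y_i|
    m = 0.0
    for k in range(1, steps + 1):
        g = sum(x * sgn(m * x - y) for x, y in zip(xs, ys))
        m -= (0.5 / k) * g
    return m

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, 100.0]   # last sample is a gross outlier (clean slope is 2)

m_lad = lad_slope(xs, ys)          # stays near the uncontaminated slope 2
m_ls = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)  # ~10.2, dragged by the outlier
```

The L1 fit effectively ignores the single corrupted sample (the minimizer is a weighted median of the ratios y_i/x_i), while the least-squares slope is pulled far from the clean value by that one point.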

19.
In this paper, a neural network model is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve general convex nonlinear programming (GCNLP) problems. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the optimal solution of the GCNLP problem. By employing the Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. The simulation results also show that the proposed neural network is feasible and efficient.

20.
This paper presents a dynamic optimization scheme for solving degenerate convex quadratic programming (DCQP) problems. According to the saddle point theorem, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle, a neural network model based on a dynamic system model is constructed. The equilibrium point of the model is proved to be equivalent to the optimal solution of the DCQP problem. It is also shown that the network model is stable in the Lyapunov sense and globally convergent to an exact optimal solution of the original problem. Several practical examples are provided to show the feasibility and the efficiency of the method.
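What makes a convex QP degenerate is a singular Hessian, so the minimizer need not be unique. The sketch below is not the paper's dynamic model; it is a plain projected-gradient iteration on a small made-up degenerate QP with a rank-one Hessian Q = [[1,1],[1,1]] and nonnegativity constraints, whose optimal set is the whole segment {x >= 0 : x1 + x2 = 2} with optimal value -2.

```python
def grad(x):
    # Gradient of f(x) = 0.5*x^T Q x + c^T x with singular Q = [[1,1],[1,1]]
    # and c = (-2,-2); note grad depends only on s = x1 + x2
    s = x[0] + x[1]
    return [s - 2.0, s - 2.0]

def objective(x):
    s = x[0] + x[1]
    return 0.5 * s * s - 2.0 * s

def projected_gradient(x, alpha=0.2, steps=300):
    # x <- max(0, x - alpha * grad f(x)); converges in objective value even
    # though the minimizer set is a whole segment (degenerate QP)
    for _ in range(steps):
        g = grad(x)
        x = [max(0.0, xi - alpha * gi) for xi, gi in zip(x, g)]
    return x

x = projected_gradient([0.0, 0.0])
# objective(x) approaches the optimal value -2; which point of the optimal
# segment is reached depends on the starting point
```

This is exactly the situation the abstract targets: convergence statements must be about the optimal set (or the objective value), not a unique minimizer, because different initial states can settle at different optima.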

