Similar Documents
20 similar documents found (search time: 12 ms)
1.
In this paper, a new neural network is presented for solving nonlinear convex programs with linear constraints. Under the condition that the objective function is convex, the proposed neural network is shown to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the original problem. Several numerical examples show the effectiveness of the proposed neural network.

2.
Recently, a neutral-type delayed projection neural network (NDPNN) was developed for solving variational inequality problems. This paper addresses the global stability and convergence of the NDPNN and presents new results for its application to linear variational inequalities (LVIs). Compared with existing convergence results for neural networks solving LVIs, our results do not require the LVI to be monotone, so the NDPNN is guaranteed to solve a class of non-monotone LVIs. All the results are expressed in terms of linear matrix inequalities, which can be easily checked. Simulation examples demonstrate the effectiveness of the obtained results.

3.
In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and lower implementation complexity than existing neural networks for such problems. It is shown that the proposed neural network is stable in the sense of Lyapunov and globally convergent to an optimal solution within finite time under the condition that the objective function is strictly convex. Compared with existing convergence results, the present results do not require a Lipschitz continuity condition on the objective function. Finally, examples are provided to show the applicability of the proposed neural network.

4.
S. Q. Zhu, Computing, 1995, 54(3): 251-272
This paper deals with numerical methods for solving linear variational inequalities on an arbitrary closed convex subset C of ℝ^n. Although numerous iterations have been studied for the case C = ℝ_+^n, few have been proposed for the case where C is a general closed convex subset. The essential difficulty in this case is the nonlinearity of C's boundary. In this paper, iteration processes are designed for solving linear variational inequalities on an arbitrary closed convex subset C. In our algorithms, the computation of a linear variational inequality is decomposed into a sequence of problems of projecting a vector onto the closed convex subset C, which are computable as long as the equations describing the boundary are given. In particular, using our iterations one can easily compute a solution when C is one of the common closed convex subsets such as a cube, ball, or ellipsoid. The inexact iteration, estimates of the solutions on unbounded domains, and a theory of approximating the boundaries are also established. Moreover, a necessary and sufficient condition is given for a vector to be an approximate solution. Finally, some numerical examples are presented, which show that the designed algorithms are effective and efficient. The exposition of this paper is self-contained.
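To make the decomposition concrete, here is a minimal sketch of a projection iteration of the kind the paper builds on, taking C to be a Euclidean ball so the projection has a closed form; the fixed step size, the data for M and p, and the specific update rule are assumptions of this sketch, not the paper's actual iteration processes.

```python
import numpy as np

def project_ball(x, center, radius):
    """Closed-form projection onto the ball C = {x : ||x - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

# Illustrative data: M symmetric positive definite, so F(x) = Mx + p is strongly monotone.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
p = np.array([-1.0, -2.0])
center, radius = np.zeros(2), 1.0

x = np.zeros(2)
beta = 0.2                         # small fixed step, chosen to suit the Lipschitz constant of F
for _ in range(500):
    x_new = project_ball(x - beta * (M @ x + p), center, radius)
    if np.linalg.norm(x_new - x) < 1e-10:   # fixed point of the projection map = LVI solution
        break
    x = x_new
print("approximate solution:", x)
```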

5.
To solve nonlinear programs with linear constraints, the penalty-function approach to optimization is used to convert the problem into a convex quadratic program. Exploiting the structural properties of neural networks, a suitable energy function is defined so that the network converges to a unique stable point, thereby solving the linearly constrained nonlinear program. Simulation results show that the method is correct and effective, and that it extends to parametric nonlinear programming and multiobjective programming.

6.
A novel neural network for nonlinear convex programming
In this paper, we present a neural network for solving nonlinear convex programming problems in real time by means of the projection method. The main idea is to convert the convex programming problem into a variational inequality problem; a dynamical system and a convex energy function are then constructed for the resulting variational inequality problem. It is shown that the proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem. Compared with existing neural networks for solving nonlinear convex programming problems, the proposed neural network requires no Lipschitz condition and no adjustable parameters, and its structure is simple. The validity and transient behavior of the proposed neural network are demonstrated by simulation results.
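Projection dynamics of this kind are commonly written as dx/dt = λ(P_Ω(x − αF(x)) − x), where P_Ω projects onto the feasible set and F is the gradient of the objective. The following Euler simulation for a small box-constrained convex quadratic program is a hedged illustration; the problem data, box constraint, and gains are assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative convex program: minimize f(x) = 0.5 x'Qx + c'x over the box Omega = [0, 2]^2.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-3.0, -1.0])
grad = lambda x: Q @ x + c             # F(x) = grad f(x) for the associated variational inequality

proj = lambda x: np.clip(x, 0.0, 2.0)  # projection onto the box

x = np.array([2.0, 2.0])               # initial state of the network
alpha, lam, dt = 0.5, 1.0, 0.01
for _ in range(5000):
    x = x + dt * lam * (proj(x - alpha * grad(x)) - x)   # forward-Euler step of the dynamics

print("network equilibrium (approximate optimum):", x)
```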

7.
Recently, a projection neural network for solving monotone variational inequalities and constrained optimization problems was developed. In this paper, we propose a general projection neural network for solving a wider class of variational inequalities and related optimization problems. In addition to its simple structure and low complexity, the proposed neural network includes existing neural networks for optimization, such as the projection neural network, the primal-dual neural network, and the dual neural network, as special cases. Under various mild conditions, the proposed general projection neural network is shown to be globally convergent, globally asymptotically stable, and globally exponentially stable. Furthermore, several improved stability criteria on two special cases of the general projection neural network are obtained under weaker conditions. Simulation results demonstrate the effectiveness and characteristics of the proposed neural network.

8.
This paper presents a recurrent neural-network model for solving a special class of general variational inequalities (GVIs), which includes classical VIs as special cases. It is proved that the proposed neural network (NN) for solving this class of GVIs can be globally convergent, globally asymptotically stable, and globally exponentially stable under different conditions. The proposed NN can be viewed as a modified version of the general projection NN existing in the literature. Several numerical examples are provided to demonstrate the effectiveness and performance of the proposed NN.

9.
In recent years, a projection neural network was proposed for solving linear variational inequality (LVI) problems and related optimization problems; its convergence to the optimal solution required the monotonicity of the LVI. In this paper, we present a new result on the global exponential convergence of the projection neural network. Unlike existing convergence results for the projection neural network, our main result does not assume the monotonicity of the LVI problem. Therefore, the projection neural network can further be guaranteed to solve a class of non-monotone LVI and non-convex optimization problems. Numerical examples illustrate the effectiveness of the obtained result.

10.
A new neural-network learning algorithm based on the 0.618 (golden-section) method is proposed for solving quadratic programming problems with linear constraints. Compared with existing neural-network learning algorithms for such problems, the new algorithm has a wider range of applicability and higher computational accuracy. Its purpose is to provide a new method for solving quadratic programming problems with linear constraints. Simulation experiments verify the effectiveness of the new algorithm.
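The abstract does not spell out how the 0.618 method enters the learning rule, but the underlying 0.618 (golden-section) search minimizes a unimodal one-dimensional function by interval reduction with ratio (√5 − 1)/2 ≈ 0.618. A generic sketch, for example for selecting a step size along a descent direction; this is the classical search, not the paper's network update:

```python
def golden_section(phi, a, b, tol=1e-8):
    """Minimize a unimodal function phi on [a, b] by 0.618 interval reduction."""
    r = 0.6180339887498949            # (sqrt(5) - 1) / 2
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = phi(x1), phi(x2)
    while b - a > tol:
        if f1 < f2:                   # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a); f1 = phi(x1)
        else:                         # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a); f2 = phi(x2)
    return 0.5 * (a + b)

# e.g. step-size selection along a descent direction of a quadratic objective
print(golden_section(lambda t: (t - 0.3)**2 + 1.0, 0.0, 1.0))   # ~0.3
```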

11.
This paper focuses on the adaptive finite-time neural network control problem for nonlinear stochastic systems with full state constraints. An adaptive controller and adaptive laws are designed by backstepping with a log-type barrier Lyapunov function. Radial basis function neural networks are employed to approximate unknown system parameters. It is proved that the tracking error achieves finite-time convergence to a small region of the origin in probability and that the state constraints are satisfied in probability. Unlike deterministic nonlinear systems, the stochastic system here is affected by two random terms: continuous Brownian motion and a discontinuous Poisson jump process. This brings difficulties to the controller design and to the estimation of unknown parameters. A simulation example is given to illustrate the effectiveness of the designed control method.

12.
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network; if the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and converges to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory eventually converges to the feasible region for any initial state, even one outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
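Property 1) says that equilibria are exactly the fixed points of the projected-gradient map: x is optimal for a convex objective f over bounds [l, u] iff x = P_[l,u](x − ∇f(x)). A small hedged check of this fixed-point condition; the quadratic test problem is an assumption used only for illustration:

```python
import numpy as np

def is_equilibrium(x, grad, l, u, tol=1e-8):
    """Fixed-point test: x is an equilibrium iff x = P_[l,u](x - grad(x))."""
    residual = x - np.clip(x - grad(x), l, u)
    return np.linalg.norm(residual) < tol

# Strictly convex quadratic with bounds [0, 1]^2; the unconstrained optimum lies outside.
Q = np.array([[3.0, 0.0], [0.0, 2.0]])
c = np.array([-6.0, -1.0])
grad = lambda x: Q @ x + c

print(is_equilibrium(np.array([1.0, 0.5]), grad, 0.0, 1.0))   # True: x1 at its bound, x2 interior
```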

13.
In this paper, we deal with the regularity of nonlinear variational inequalities of second order in Hilbert spaces, under more general conditions on the nonlinear terms and without a compactness condition on the principal operators. We also obtain a norm estimate of a solution of the given nonlinear equation in C([0,T];V) ∩ C^1((0,T];H) ∩ C^2((0,T];V) by using results for its corresponding hyperbolic semilinear part.

14.
This paper deals with the characterization of the stability and instability matrices for a class of unilaterally constrained dynamical systems, represented as linear evolution variational inequalities (LEVI). Such systems can also be seen as a sort of differential inclusion or, in special cases, as linear complementarity systems, which in turn are a class of hybrid dynamical systems. Examples show that the stability of the unconstrained system and that of the constrained system may drastically differ. Various criteria are proposed to characterize the stability or instability of LEVI.

15.
Most existing neural networks for solving linear variational inequalities (LVIs) with the mapping Mx + p require positive definiteness (or positive semidefiniteness) of M. In this correspondence, it is revealed that this condition is sufficient but not necessary for an LVI to be strictly monotone (or monotone) on its constrained set when equality constraints are present. It is then proposed to reformulate monotone LVIs with equality constraints into LVIs with inequality constraints only, which can then be solved by some existing neural networks. General projection neural networks are designed in this correspondence for solving the transformed LVIs. Compared with existing neural networks, the designed neural networks feature lower model complexity. Moreover, the neural networks are guaranteed to converge globally to solutions of the LVI under the condition that the linear mapping Mx + p is monotone on the constrained set. Because quadratic and linear programming problems are special cases of LVIs in terms of solutions, the designed neural networks can solve them efficiently as well. In addition, it is discovered that the designed neural network in a specific case turns out to be the primal-dual network for solving quadratic or linear programming problems. The effectiveness of the neural networks is illustrated by several numerical examples.
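The key observation, that positive semidefiniteness of M is sufficient but not necessary once equality constraints Ax = b are present, can be checked numerically: differences of feasible points lie in null(A), so Mx + p is monotone on the constrained set iff Zᵀ(M + Mᵀ)Z is positive semidefinite for a basis Z of null(A). A hedged sketch with made-up data:

```python
import numpy as np
from scipy.linalg import null_space

# M is indefinite on R^3 ...
M = np.array([[ 1.0, 0.0,  0.0],
              [ 0.0, 1.0,  0.0],
              [ 0.0, 0.0, -1.0]])
A = np.array([[0.0, 0.0, 1.0]])      # ... but the constraint x3 = b fixes the indefinite direction

print(np.linalg.eigvalsh(M + M.T).min())   # < 0: M is not positive semidefinite

Z = null_space(A)                          # basis of {x : Ax = 0}
H = Z.T @ (M + M.T) @ Z
print(np.linalg.eigvalsh(H).min())         # >= 0: Mx + p is monotone on the constrained set
```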

16.
In this paper, we present a self-adaptive projection and contraction (SAPC) method for solving symmetric linear variational inequalities. Preliminary numerical tests show that the proposed method is efficient and effective, and that it depends only slightly on its initial parameter. The global convergence of the new method is also addressed.
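The abstract gives no details of the SAPC scheme, so the following is not the authors' method; a simplified extragradient-style projection step whose parameter β shrinks whenever a contraction test fails conveys the self-adaptive flavor. The LVI data, box constraint set, and constants are assumptions:

```python
import numpy as np

# Symmetric LVI: find x in Omega = [0, 1]^2 with (Mx + p)'(y - x) >= 0 for all y in Omega.
M = np.array([[3.0, 1.0], [1.0, 2.0]])    # symmetric positive definite
p = np.array([-2.0, -4.0])
F = lambda x: M @ x + p
proj = lambda x: np.clip(x, 0.0, 1.0)

x, beta, nu = np.zeros(2), 1.0, 0.9
for _ in range(200):
    y = proj(x - beta * F(x))                     # prediction step
    while beta * np.linalg.norm(F(x) - F(y)) > nu * np.linalg.norm(x - y):
        beta *= 0.7                               # self-adaptive: shrink beta until the test holds
        y = proj(x - beta * F(x))
    x_new = proj(x - beta * F(y))                 # correction step (extragradient)
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
print("solution:", x)                             # converges to (1/3, 1) for this data
```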

17.
Xia Y, Kamel MS. Neural Computation, 2008, 20(3): 844-872
Constrained L1 estimation is an attractive alternative to both unconstrained L1 estimation and least-squares estimation. In this letter, we propose a cooperative recurrent neural network (CRNN) for solving L1 estimation problems with general linear constraints. The proposed CRNN model automatically combines four individual neural network models and is suitable for parallel implementation. As special cases, the proposed CRNN includes two existing neural networks for solving unconstrained and constrained L1 estimation problems, respectively. Unlike existing penalty-parameter-based neural networks for solving the constrained L1 estimation problem, the proposed CRNN is guaranteed to converge globally to the exact optimal solution without any additional condition. Compared with conventional numerical algorithms, the proposed CRNN has low computational complexity and can deal with L1 estimation problems with degeneracy. Several applied examples show that the proposed CRNN can obtain more accurate estimates than several existing algorithms.
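Constrained L1 estimation, minimize ||Ax − b||_1 subject to linear constraints, can be cross-checked against the standard linear-programming reformulation with auxiliary variables t ≥ |Ax − b|. A small sketch using scipy as a reference solver; the data and the constraint Gx ≤ h are illustrative, and this is not the CRNN model:

```python
import numpy as np
from scipy.optimize import linprog

# minimize ||Ax - b||_1 subject to Gx <= h, as an LP in z = (x, t):
#   min sum(t)  s.t.  Ax - t <= b,  -Ax - t <= -b,  Gx <= h
rng = np.random.default_rng(0)
m, n = 20, 3
A = rng.standard_normal((m, n))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(m)
G = np.eye(n); h = np.full(n, 1.5)             # example constraint: x <= 1.5 componentwise

c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[ A, -np.eye(m)],
                 [-A, -np.eye(m)],
                 [ G,  np.zeros((n, m))]])
b_ub = np.concatenate([b, -b, h])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
print("L1 estimate:", res.x[:n])
```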

18.

This paper presents a novel method for designing an adaptive control system using a radial basis function neural network. The method can deal with nonlinear stochastic systems in strict-feedback form with any unknown dynamics. The proposed neural network allows the method not only to approximate any unknown dynamics of stochastic nonlinear systems, but also to compensate for actuator nonlinearity. By employing the dynamic surface control method, a common problem intrinsic to backstepping design, called "explosion of complexity", is resolved. The proposed method is applied to control systems comprising various types of actuator nonlinearities, such as Prandtl–Ishlinskii (PI) hysteresis and dead-zone nonlinearity. The performance of the proposed method is compared to two baseline methods: a direct form of the backstepping method, and an adaptation of the proposed method, named APIC-DSC, in which the neural network does not contribute to compensating for the actuator nonlinearity. The proposed method improves the failure-free tracking performance, in terms of the integrated mean square error (IMSE), by 25%/11% compared to the backstepping/APIC-DSC methods. This reduction in IMSE is further improved by 76%/38% and 32%/49% in the presence of the PI hysteresis and dead-zone actuator nonlinearities, respectively. The proposed method also demands a shorter adaptation period than the baseline methods.

19.
A neural network approach is presented for solving mathematical programs with equilibrium constraints (MPEC). The proposed neural network is proved to be Lyapunov stable and capable of generating an approximate optimal solution to the MPEC problem. The asymptotic properties of the neural network are analyzed, and conditions for asymptotic stability, solution feasibility, and solution optimality are derived. The transient behavior of the neural network is simulated, and the validity of the network is verified with numerical examples.

20.
Tiwari (2004) proved that the termination problem for a class of linear programs (loops with linear loop conditions and linear updates) over the reals is decidable, through Jordan forms and eigenvector computation. Braverman (2006) proved that it is also decidable over the integers. Following their work, we consider the termination problems of three more general classes of programs: loops with linear updates and three kinds of polynomial loop conditions, namely strict constraints, non-strict constraints, and both strict and non-strict constraints. First, we prove that the termination problems of such loops over the integers are all undecidable. Then, for each class, we provide an algorithm to decide the termination of such programs over the reals. The algorithms are complete for programs satisfying a property called Non-Zero Minimum.
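As a concrete instance of the loops studied here, consider `while c·x > 0: x := A x`. The decision procedures rest on the eigenstructure of A; the sketch below is only a simulation-based illustration with exact integer arithmetic: it can witness termination for a particular input, but a guard that survives all sampled steps proves nothing. The data are made up:

```python
def runs_forever(A, c, x0, max_iter=200):
    """Heuristically run `while c.x > 0: x = A x` with exact integer arithmetic.
    Returns the step at which the guard fails, or None if it held for max_iter steps."""
    x = list(x0)
    for k in range(max_iter):
        if sum(ci * xi for ci, xi in zip(c, x)) <= 0:
            return k                  # loop terminated after k iterations
        x = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
    return None                       # guard still true: suggests (but does not prove) non-termination

A = [[2, 1], [1, 2]]      # eigenvalues 3 and 1; the direction (1, -1) has eigenvalue 1
c = [1, -1]               # guard: x1 - x2 > 0
print(runs_forever(A, c, [2, 1]))                  # None: x1 - x2 stays equal to 1 forever
print(runs_forever([[0, 1], [1, 0]], c, [2, 1]))   # 1: swapping coordinates falsifies the guard
```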
