Similar Articles
20 similar articles found.
1.
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) so that E(x, y) is convex and differentiable, and the resulting network is more efficient. This network incorporates all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with either a unique solution or infinitely many solutions, we prove rigorously that from any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from existing networks that use the penalty method or the Lagrange method, and the inequality constraints are handled properly. The simulation results show that the proposed neural network is feasible and efficient.
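The abstract's construction of E(x, y) is not reproduced in this listing, but the general idea of a primal-dual gradient flow for an equality-constrained convex QP can be illustrated with a minimal Arrow-Hurwicz-type sketch (the matrices Q, c, A, b below are made-up illustrative values, and these dynamics are a stand-in, not the paper's exact network):

```python
import numpy as np

# Minimal primal-dual gradient flow for:  min 0.5*x'Qx + c'x  s.t.  Ax = b.
# Illustrative Arrow-Hurwicz-type sketch, not the paper's energy function E(x, y).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.zeros(2)          # primal variables
y = np.zeros(1)          # dual variables (multipliers for Ax = b)
dt = 0.01                # forward-Euler step size

for _ in range(20000):
    x = x - dt * (Q @ x + c + A.T @ y)   # gradient descent in the primal
    y = y + dt * (A @ x - b)             # gradient ascent in the dual

# KKT conditions of this instance give x* = (0, 1), y* = (2,)
print(np.round(x, 3), np.round(y, 3))
```

Because Q is positive definite, the saddle-point dynamics are asymptotically stable and the Euler iteration's fixed point coincides with the KKT point.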

2.
In this paper, a discrete-time recurrent neural network with global exponential stability is proposed for solving linearly constrained quadratic programming problems. Compared with existing neural networks for quadratic programming, the proposed neural network has lower model complexity, with only a one-layer structure. Moreover, the global exponential stability of the neural network can be guaranteed under some mild conditions. Simulation results with several applications show the performance and characteristics of the proposed neural network.

3.
In this paper, a feedback neural network model is proposed for solving a class of convex quadratic bi-level programming problems based on the idea of successive approximation. Unlike existing neural network models, the proposed neural network has the fewest state variables and a simple structure. Based on Lyapunov theory, we prove that the equilibrium-point sequence of the feedback neural network approximately converges to an optimal solution of the convex quadratic bi-level problem under certain conditions, and the corresponding sequence of function values approximately converges to the optimal value of the problem. Simulation experiments on three numerical examples and a portfolio selection problem are provided to show the efficiency and performance of the proposed neural network approach.

4.
Feng, Jiqiang; Qin, Sitian; Shi, Fengli; Zhao, Xiaoyue. Neural Computing & Applications, 2018, 30(11): 3399-3408

In this paper, a recurrent neural network with a new tunable activation function is proposed to solve a class of convex quadratic bilevel programming problems. It is proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov and that the state of the network converges to an equilibrium point in finite time. In contrast to existing related neurodynamic approaches, the proposed neural network is capable of solving the convex quadratic bilevel programming problem in finite time, and the finite convergence time can be quantitatively estimated. Finally, two numerical examples are presented to show the effectiveness of the proposed recurrent neural network.


5.
In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features a global convergence property under weak conditions, low structural complexity, and no computation of matrix inverses. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that, after a variable substitution, the proposed network reduces to an existing model for solving minimax problems; in this sense, it can also be viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all (k-WTA) network with O(n) complexity is designed, characterized by a simple structure, global convergence, and the capability to deal with some ill-conditioned cases. Numerical simulations are provided to validate the theoretical results. More importantly, the network design method proposed in this paper has great potential to inspire other competitive designs along the same lines.
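A common QP formulation of the k-WTA operation mentioned above is min ½ε xᵀx − uᵀx subject to 1ᵀx = k, 0 ≤ x ≤ 1, whose KKT conditions give x_i = clip((u_i − λ)/ε, 0, 1) for a single scalar multiplier λ. The sketch below recovers the solution by bisection on λ, as a stand-in for the neurodynamic flow (whose exact equations are not given in this listing); the input vector is illustrative:

```python
import numpy as np

# k-WTA as a small QP:  min 0.5*eps*x'x - u'x  s.t.  sum(x) = k, 0 <= x <= 1.
# KKT:  x_i = clip((u_i - lam)/eps, 0, 1);  find lam by bisection so sum(x) = k.
def kwta(u, k, eps=0.01):
    u = np.asarray(u, dtype=float)
    lo, hi = u.min() - 1.0, u.max() + 1.0   # sum(x) is nonincreasing in lam
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        s = np.clip((u - lam) / eps, 0.0, 1.0).sum()
        if s > k:
            lo = lam        # too many units active: raise the threshold
        else:
            hi = lam
    return np.clip((u - lam) / eps, 0.0, 1.0)

x = kwta([0.3, 0.9, 0.5, 0.7], k=2)
print(np.round(x, 2))   # the two largest inputs win
```

For small ε, the output is essentially binary whenever the k-th and (k+1)-th largest inputs are separated by more than ε.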

6.
This paper presents two neural network approaches to the minimum infinity-norm solution of the velocity inverse kinematics problem for redundant robots. Recurrent neural networks are applied to determine a joint velocity vector whose maximum-absolute-value component is minimal among all joint velocity vectors corresponding to the desired end-effector velocity. In each proposed approach, two cooperating recurrent neural networks are used. The first approach employs two Tank-Hopfield networks for linear programming. The second approach employs two two-layer recurrent neural networks, for quadratic programming and linear programming, respectively. Both the minimal 2-norm and the minimal infinity-norm joint velocity vectors can be obtained from the outputs of the recurrent neural networks. Simulation results demonstrate that the proposed approaches are effective, with the second approach being better in terms of accuracy and optimality.
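The minimum infinity-norm problem above can be posed as the LP min t subject to −t ≤ q̇_i ≤ t and J q̇ = ẋ. A sketch solving that LP with an off-the-shelf solver, rather than the cooperating recurrent networks of the abstract; the Jacobian J and velocity ẋ below are made-up illustrative values:

```python
import numpy as np
from scipy.optimize import linprog

# Minimum infinity-norm joint velocity:  min t  s.t.  -t <= qd_i <= t,  J qd = xd.
# Decision variables z = [qd_1..qd_n, t]; scipy's LP solver stands in for
# the recurrent networks described in the abstract.
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])     # illustrative 2x3 Jacobian
xd = np.array([1.0, 2.0])           # desired end-effector velocity
n = J.shape[1]

cost = np.zeros(n + 1)
cost[-1] = 1.0                      # minimize t
# Inequalities:  qd_i - t <= 0  and  -qd_i - t <= 0
A_ub = np.block([[np.eye(n), -np.ones((n, 1))],
                 [-np.eye(n), -np.ones((n, 1))]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([J, np.zeros((J.shape[0], 1))])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=xd,
              bounds=[(None, None)] * n + [(0, None)])
qd, t = res.x[:n], res.x[-1]
print(np.round(qd, 4), round(t, 4))
```

For this instance the optimum is unique: q̇ = (0, 1, 1) with infinity-norm t = 1.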

7.
Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Unlike existing neural network optimization models, no dual variables, penalty parameters, or Lagrange multipliers are involved in the proposed network. It has the fewest state variables and a very simple structure. In particular, the proposed network has better asymptotic stability: for an arbitrary initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the new idea of a feedback network may be used to solve other optimization problems. Feasibility and efficiency are also substantiated by simulation examples.

8.
A multi-layer feedforward neural network model based predictive control scheme is developed in this paper for a multivariable nonlinear steel pickling process. In the acid baths, the three controlled variables are the hydrochloric acid concentrations. The baths exhibit the usual features of an industrial system, such as nonlinear dynamics and coupling effects among variables. In the modeling, multiple-input, single-output recurrent neural network subsystem models are developed using input-output data sets obtained from simulation of a mathematical model. The Levenberg-Marquardt algorithm is used to train the process models. In the model predictive control (MPC) algorithm, the feedforward neural network models are used to predict the state variables over a prediction horizon, and the optimal control actions are searched for via sequential quadratic programming. The proposed algorithm is tested in simulation for control of a steel pickling process in several cases: set-point tracking, disturbance rejection, model mismatch, and the presence of noise. Overall, the results show that neural network model predictive control (NNMPC) performs better than the conventional PI controller in all cases.

9.
A dual neural network for kinematic control of redundant robot manipulators
The inverse kinematics problem in robotics can be formulated as a time-varying quadratic optimization problem. A new recurrent neural network, called the dual network, is presented in this paper. The proposed neural network is composed of a single layer of neurons, and the number of neurons is equal to the dimensionality of the workspace. The proposed dual network is proven to be globally exponentially stable, and it is also shown to be capable of asymptotic tracking for the motion control of kinematically redundant manipulators.
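For the minimum 2-norm case, min ½‖q̇‖² subject to J q̇ = ẋ, a single-layer dual network of this kind admits a compact sketch: dual neurons u (one per workspace dimension) evolve by u̇ = μ(ẋ − JJᵀu), with output q̇ = Jᵀu. This is a simplified reading of the model, not the paper's exact equations, and J and ẋ below are illustrative:

```python
import numpy as np

# Dual-network sketch for minimum 2-norm inverse kinematics:
#   min 0.5*||qd||^2   s.t.   J qd = xd
# Dual dynamics:  du/dt = mu * (xd - J J' u),   output  qd = J' u.
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])    # illustrative 2x3 Jacobian
xd = np.array([1.0, 2.0])          # desired end-effector velocity

u = np.zeros(2)                    # one dual neuron per workspace dimension
dt, mu = 0.1, 1.0
for _ in range(500):
    u = u + dt * mu * (xd - J @ (J.T @ u))   # forward-Euler integration

qd = J.T @ u                        # minimum 2-norm joint velocity
print(np.round(qd, 3))
```

At equilibrium, JJᵀu = ẋ, so q̇ = Jᵀ(JJᵀ)⁻¹ẋ, which is exactly the pseudoinverse (minimum-norm) solution.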

10.
Unlike traditional gradient-based recurrent neural networks, a nonnegative, scalar norm-valued energy function is defined; by further defining a vector-valued, indefinite and unbounded error function, a network is constructed that solves convex quadratic programming problems with linear equality constraints in real time. Computer experiments on the Simulink simulation platform show that the new neural network model solves this class of quadratic programming problems accurately and effectively.

11.
Two classes of high-performance neural networks for solving linear and quadratic programming problems are given. We prove that the new systems converge globally to the solutions of the linear and quadratic programming problems. In existing neural networks, how the network parameters should be chosen is usually left unspecified. The proposed models overcome the numerical difficulties caused by such network parameters and obtain the desired approximate solutions of the linear and quadratic programming problems.

12.
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem that can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network; if the objective function to be minimized is convex, then the network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the network. 2) The network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and converges to the set of equilibria for any initial point in the feasible bound region. 3) The network has an attractivity property in the sense that its trajectory eventually converges to the feasible region for any initial state, even one outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
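A standard form for such bound-constrained networks is the projection dynamics ẋ = P_Ω(x − ∇f(x)) − x, where P_Ω clips onto the box; the sketch below uses this form with an illustrative convex quadratic objective (not an example from the paper, and possibly not the paper's exact dynamics):

```python
import numpy as np

# Projection neural network for bound-constrained minimization:
#   dx/dt = P(x - grad f(x)) - x,   P = clip onto the box [lo, hi].
# Illustrative objective:  f(x) = (x1 - 2)^2 + (x2 + 1)^2  on [0, 1]^2,
# whose constrained minimizer is x* = (1, 0).
lo, hi = np.zeros(2), np.ones(2)

def grad(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])

x = np.array([0.5, 0.5])
dt = 0.05
for _ in range(2000):
    x = x + dt * (np.clip(x - grad(x), lo, hi) - x)   # forward-Euler step

print(np.round(x, 4))
```

An equilibrium satisfies x = P_Ω(x − ∇f(x)), which for convex f is exactly the optimality condition of the bound-constrained problem, matching property 1) above.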

13.

This paper offers a recurrent neural network for support vector machine (SVM) learning in stochastic support vector regression with probabilistic constraints. The SVM problem is first converted into an equivalent quadratic programming (QP) formulation in both the linear and nonlinear cases. An artificial neural network for SVM learning is then proposed. The presented neural network framework guarantees that the optimal solution of the SVM problem is obtained. The existence and convergence of the trajectories of the network are studied, and the Lyapunov stability of the considered neural network is shown. The efficiency of the proposed method is demonstrated by three illustrative examples.


14.
In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and lower implementation complexity than existing neural networks for solving such problems. It is shown that the proposed neural network is stable in the sense of Lyapunov and globally convergent to an optimal solution within finite time under the condition that the objective function is strictly convex. Compared with existing convergence results, the present results do not require a Lipschitz continuity condition on the objective function. Finally, examples are provided to show the applicability of the proposed neural network.

15.
Neural network for quadratic optimization with bound constraints
A recurrent neural network is presented which performs quadratic optimization subject to bound constraints on each of the optimization variables. The network is shown to be globally convergent, and conditions on the quadratic problem and the network parameters are established under which exponential asymptotic stability is achieved. Through a suitable choice of the network parameters, the system of differential equations governing the network activations is preconditioned in order to reduce its sensitivity to noise and roundoff errors. The optimization method employed by the neural network is shown to fall into the general class of gradient methods for constrained nonlinear optimization and, in contrast with penalty-function methods, is guaranteed to yield only feasible solutions.

16.
The design, analysis, and application of a new recurrent neural network for quadratic programming, called the simplified dual neural network, are discussed. The analysis mainly concentrates on the convergence property and the computational complexity of the neural network. The simplified dual neural network is shown to be globally convergent to the exact optimal solution. The complexity of the network architecture is reduced, with the number of neurons equal to the number of inequality constraints. Its application to the k-winners-take-all (kWTA) operation is discussed to demonstrate how problems can be solved with this neural network.

17.
A new neural-network learning algorithm based on the 0.618 (golden-section) method is proposed for solving quadratic programming problems with linear constraints. Compared with existing neural-network learning algorithms for linearly constrained quadratic programming, the new algorithm has a wider range of applicability and higher computational accuracy. Its purpose is to provide a new method for solving quadratic programming problems with linear constraints. Simulation experiments verify the effectiveness of the new algorithm.

18.
田大钢 (Tian Dagang). 《自动化学报》 (Acta Automatica Sinica), 2003, 29(2): 219-226
Through a new dual formulation, a new and easily implemented neural network for solving linear programming problems is obtained. The network is proven to be globally exponentially convergent, bringing the neural-network approach to linear programming closer to completeness.

19.
When a neural network is applied to optimization, the ideal situation is that it has a single, globally asymptotically stable equilibrium point and approaches it at an exponential rate, thereby reducing the computation time the network requires. This paper studies the global asymptotic stability of recurrent neural networks with time-varying delays. The model under study is first transformed into a descriptor-system model; then, using the Lyapunov-Krasovskii stability theorem, linear matrix inequality (LMI) techniques, the S-procedure, and algebraic inequality methods, new sufficient conditions ensuring the asymptotic stability of recurrent neural networks with time-varying delays are obtained. These conditions are applied to constant-delay neural networks and to delayed cellular neural network models, yielding corresponding global asymptotic stability conditions in each case. Theoretical analysis and numerical simulations show that the results provide new stability criteria for delayed recurrent neural networks.

20.
For solving nonlinear programming problems with linear constraints, the penalty-function approach to optimization is used to transform the problem into a convex quadratic program. Based on the structural characteristics of neural networks, the required energy function is defined so that the network converges to a unique stable point, thereby solving the linearly constrained nonlinear program. Simulation results show that the method is effective and correct, and that it can be extended to parametric nonlinear programming and multi-objective programming.
