20 similar publications found.
1.
A high-performance feedback neural network for solving convex nonlinear programming problems [Cited: 1; self-citations: 0; external citations: 1]
Yee Leung, Kai-Zhou Chen, Xing-Bao Gao. IEEE Transactions on Neural Networks, 2003, 14(6): 1469-1477
Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Unlike existing neural network optimization models, the proposed network involves no dual variables, penalty parameters, or Lagrange multipliers. It has the fewest state variables and a very simple structure. In particular, the proposed network has better asymptotic stability: for an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. The network can also solve linear programming and convex quadratic programming problems, and the feedback idea may be applied to other optimization problems. Feasibility and efficiency are substantiated by simulation examples.
2.
Youshen Xia. IEEE Transactions on Neural Networks, 1996, 7(2): 525-529
This paper presents a new neural network that improves on existing neural networks for solving general linear programming problems. The network requires no parameter setting, uses only simple hardware in which no analog multipliers are required, and is proven to be completely stable at the exact solutions. Moreover, the network solves a linear programming problem and its dual simultaneously, and copes with problems having nonunique solutions, whose solution set may be unbounded.
3.
Yee Leung, Kai-Zhou Chen, Yong-Chang Jiao, Xing-Bao Gao, Kwong Sak Leung. IEEE Transactions on Neural Networks, 2001, 12(5): 1074-1083
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) so that E(x, y) is convex and differentiable, making the resulting network more efficient. The network incorporates all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with a unique solution or infinitely many solutions, it is strictly proven that, for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from existing networks based on the penalty or Lagrange methods, and the inequality constraints are handled properly. Simulation results show that the proposed neural network is feasible and efficient.
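The paper's specific energy function E(x, y) is not given in this abstract. Purely as an illustration of how primal-dual neural dynamics of this general kind evolve toward a KKT point, the sketch below integrates the generic saddle-point flow for a toy equality-constrained QP; all problem data Q, c, A, b are invented for the example, and this is not the paper's network.

```python
# Generic primal-dual (saddle-point) gradient flow for the QP
#   min 0.5*x'Qx + c'x  s.t.  Ax = b
# NOT the paper's specific E(x, y); only an illustration of the idea.
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # toy problem data (assumed)
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.zeros(2)          # arbitrary initial primal state
y = np.zeros(1)          # arbitrary initial dual state
dt = 0.01                # Euler step emulating the continuous-time dynamics

for _ in range(20000):
    dx = -(Q @ x + c + A.T @ y)   # dx/dt = -grad_x L(x, y)
    dy = A @ x - b                # dy/dt = +grad_y L(x, y)
    x, y = x + dt * dx, y + dt * dy

print(x)   # -> approximately [0., 1.], the optimal solution
```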
4.
Y. Xia. IEEE Transactions on Neural Networks, 1996, 7(6): 1544-1548
A new neural network for solving linear and quadratic programming problems is presented and shown to be globally convergent. The new network improves on existing neural networks for these problems: it avoids the parameter tuning problem, achieves the exact solutions, and uses only simple hardware in which no analog multipliers for variables are required. Furthermore, the network solves both the primal problems and their dual problems simultaneously.
5.
This paper concentrates on two types of fuzzy linear programming problems: the first with fuzzy coefficients in the objective function, and the second with fuzzy right-hand-side values and fuzzy variables. Using fuzzy derivatives and fuzzy differential equations, these problems are solved with a fuzzy neural network model. To show the applicability of the method, it is applied to the fuzzy shortest path problem and the fuzzy maximum flow problem. Numerical results illustrate the method's accuracy and its simple implementation.
6.
A neural network for linear matrix inequality problems [Cited: 1; self-citations: 0; external citations: 1]
Chun-Liang Lin, Chi-Chih Lai, Teng-Hsien Huang. IEEE Transactions on Neural Networks, 2000, 11(5): 1078-1092
Gradient-type Hopfield networks have been widely used to solve optimization problems. This paper presents a novel application, developing a matrix-oriented gradient approach to solve a class of linear matrix inequalities (LMIs), which are commonly encountered in robust control system analysis and design. The solution process is carried out by parallel, distributed neural computation. The proposed networks are proven to be stable in the large. Representative LMIs such as generalized Lyapunov matrix inequalities, simultaneous Lyapunov matrix inequalities, and algebraic Riccati matrix inequalities are considered. Several examples demonstrate the proposed results. To verify the proposed control scheme in real-time applications, a high-speed digital signal processor is used to emulate the neural-network-based control scheme.
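For context on the problem class in entry 6 (not the gradient network itself), the sketch below poses a representative LMI, the Lyapunov inequality AᵀP + PA ≺ 0 with P ≻ 0, to a conventional convex solver. It assumes CVXPY with an SDP-capable backend (e.g., SCS) is installed, and the test matrix A is invented for illustration.

```python
# Lyapunov LMI feasibility check via a conventional SDP solver; a point
# of reference for the LMI problem class, not the paper's neural method.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # a Hurwitz test matrix (assumed)
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
S = A.T @ P + P @ A
constraints = [
    P >> np.eye(n),                       # P positive definite (>= I)
    (S + S.T) / 2 << -1e-6 * np.eye(n),   # symmetrized for the PSD check
]
cp.Problem(cp.Minimize(0), constraints).solve()

print(P.value)   # any feasible P certifies stability of dx/dt = Ax
```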
7.
R. Perfetti. IEEE Transactions on Neural Networks, 1995, 6(5): 1287-1291
This paper describes a neural network for solving flow problems, which are of interest in many application areas such as fuel, hydro, and electric power scheduling. The neural network consists of two layers: a hidden layer and an output layer. The hidden units correspond to the nodes of the flow graph; the output units represent the branch variables. The network has linear complexity, is easily programmable, and is suited to analog very large scale integration (VLSI) realization. The functionality of the proposed network is illustrated by a simulation example concerning the maximal flow problem.
8.
A novel neural network approach is proposed for solving the linear bilevel programming problem. The proposed neural network is proven to be Lyapunov stable and capable of generating an optimal solution to the linear bilevel programming problem. Numerical results show that the neural network approach is feasible and efficient.
9.
Computers & Mathematics with Applications, 2007, 53(9): 1439-1454
In this paper, linear and quadratic programming problems are solved using a novel recurrent artificial neural network. The new model is simpler and converges very quickly to the exact primal and dual solutions simultaneously. The model is based on a nonlinear dynamical system with arbitrary initial conditions. To keep the model economical, analog multipliers are avoided. The dynamical system is a time-dependent system of equations whose right-hand side is the gradient of a specific Lyapunov energy function. A block diagram of the proposed neural network model is given. A fourth-order Runge-Kutta method with controlled step size is used to solve the system numerically. Global convergence of the new model is proved, both theoretically and numerically. Numerical simulations show fast convergence for problems with a unique solution or infinitely many solutions. The model converges to the exact solution regardless of the starting point, i.e., whether it lies inside, outside, or on the boundary of the feasible region.
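Entry 9 integrates a dynamical system whose right-hand side is the gradient of a Lyapunov energy, using fourth-order Runge-Kutta with step-size control. As a hedged stand-in, the sketch below integrates a projection-type dynamics for a toy nonnegatively constrained QP with SciPy's adaptive RK45; the paper's exact dynamics and integrator implementation differ, and the problem data are invented.

```python
# Adaptive Runge-Kutta integration of neural-network-style dynamics for
#   min 0.5*x'Qx + c'x  s.t.  x >= 0   (toy data, assumed).
import numpy as np
from scipy.integrate import solve_ivp

Q = np.array([[4.0, 1.0], [1.0, 2.0]])
c = np.array([-1.0, -1.0])
alpha = 0.15   # small gain, chosen so the flow is provably convergent

def rhs(t, x):
    # dx/dt = P_+(x - alpha*(Qx + c)) - x; equilibria are the KKT points.
    return np.maximum(x - alpha * (Q @ x + c), 0.0) - x

sol = solve_ivp(rhs, (0.0, 100.0), np.array([5.0, -3.0]), rtol=1e-8)
print(sol.y[:, -1])   # -> approx [1/7, 3/7], the minimizer of the QP
```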
10.
Qingshan Liu, Jinde Cao, Youshen Xia. IEEE Transactions on Neural Networks, 2005, 16(4): 834-843
In this paper, we present a delayed neural network approach to solve linear projection equations. The Lyapunov-Krasovskii theory for functional differential equations and the linear matrix inequality (LMI) approach are employed to analyze the global asymptotic stability and global exponential stability of the delayed neural network. Compared with the existing linear projection neural network, theoretical results and illustrative examples show that the delayed neural network can effectively solve a class of linear projection equations and some quadratic programming problems.
11.
Constrained L1 estimation is an attractive alternative to both unconstrained L1 estimation and least-squares estimation. In this letter, we propose a cooperative recurrent neural network (CRNN) for solving L1 estimation problems with general linear constraints. The proposed CRNN model automatically combines four individual neural network models and is suitable for parallel implementation. As a special case, the proposed CRNN includes two existing neural networks for solving unconstrained and constrained L1 estimation problems, respectively. Unlike existing neural networks with penalty parameters for solving the constrained L1 estimation problem, the proposed CRNN is guaranteed to converge globally to the exact optimal solution without any additional condition. Compared with conventional numerical algorithms, the proposed CRNN has low computational complexity and can deal with L1 estimation problems with degeneracy. Several applied examples show that the proposed CRNN obtains more accurate estimates than several existing algorithms.
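As background to entry 11, constrained L1 estimation can be reformulated as a linear program by bounding the residuals. The sketch below solves that LP with SciPy as a conventional baseline; it is not the CRNN, and all data (A, b, C, d) are synthetic.

```python
# Constrained L1 estimation  min ||Ax - b||_1  s.t.  Cx = d, via the
# standard LP reformulation: min sum(t) s.t. -t <= Ax - b <= t, Cx = d.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))          # toy regression data (assumed)
b = rng.normal(size=20)
C = np.array([[1.0, 1.0, 1.0]])       # toy linear constraint (assumed)
d = np.array([1.0])

m, n = A.shape
# Decision vector z = [x (n entries), t (m entries)]; minimize sum(t).
cost = np.concatenate([np.zeros(n), np.ones(m)])
# A x - t <= b  and  -A x - t <= -b, stacked as A_ub z <= b_ub.
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
A_eq = np.hstack([C, np.zeros((1, m))])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
              bounds=[(None, None)] * (n + m))

print(res.x[:n])   # the constrained L1 estimate
```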
12.
In this paper, a feedback neural network model is proposed for solving a class of convex quadratic bilevel programming problems, based on the idea of successive approximation. Unlike existing neural network models, the proposed neural network has the fewest state variables and a simple structure. Based on Lyapunov theory, we prove that the equilibrium point sequence of the feedback neural network approximately converges to an optimal solution of the convex quadratic bilevel problem under certain conditions, and that the corresponding sequence of function values approximately converges to the optimal value. Simulation experiments on three numerical examples and a portfolio selection problem show the efficiency and performance of the proposed neural network approach.
13.
In this paper, a new neural network is presented for solving nonlinear convex programs with linear constraints. Under the condition that the objective function is convex, the proposed neural network is shown to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the original problem. Several numerical examples show the effectiveness of the proposed neural network.
14.
Neural Processing Letters
15.
Based on the phenomenon that a neuron's self-feedback term can produce chaos, the Gauss wavelet function is adopted as the self-feedback term of a chaotic neuron. The influence of the scale and translation parameters of the Gauss wavelet on the neuron's dynamics is analyzed, and a chaotic neuron with dual simulated annealing of both the self-feedback connection weight and the Gauss wavelet scale is proposed. A chaotic neural network model is constructed, and the effect of the additional energy function generated by the Gauss wavelet on the network's optimization capability is analyzed. Simulation results on optimization problems show that the network can find the global optimum of an optimization problem relatively quickly.
16.
Neural network for solving extended linear programming problems [Cited: 5; self-citations: 0; external citations: 5]
Y. Xia. IEEE Transactions on Neural Networks, 1997, 8(3): 803-806
A neural network for solving extended linear programming problems is presented and shown to be globally convergent to exact solutions. The proposed neural network uses only simple hardware in which no analog multipliers for variables are required, and it has no parameter tuning problem. Finally, an application of the neural network to the L1-norm minimization problem is given.
17.
F. Rostami. International Journal of Computer Mathematics, 2018, 95(3): 528-539
Artificial neural networks offer great potential for learning and stability against small perturbations of the input data, and artificial intelligence techniques and modelling tools are finding an ever-greater number of practical applications. In the present study, an iterative algorithm based on the combination of a power series method and a neural network approach is used to approximate solutions of high-order linear ordinary differential equations. First, a suitable truncated series of the solution function is substituted into the algorithm's equation. The problem considered here has a solution in the form of a series expansion of an unknown function, and the proper implementation of an appropriate neural architecture leads to an estimate of the unknown series coefficients. To prove the applicability of the concept, illustrative examples demonstrate the precision and effectiveness of the method. Comparison with other available traditional techniques shows that the present approach is highly accurate.
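Entry 17 substitutes a truncated power series into the differential equation and lets a neural architecture estimate the coefficients. The sketch below keeps the power-series ansatz but fits the coefficients by ordinary least squares on collocation residuals, a simplified stand-in for the neural training step; the test problem y'' + y = 0 with y(0) = 0, y'(0) = 1 (exact solution sin x) is an assumption for illustration.

```python
# Truncated power series y(x) = sum_k a_k x^k fitted so that the ODE
# residual y'' + y vanishes at collocation points; a_0 = 0 and a_1 = 1
# fix the initial conditions, leaving a_2..a_N as unknowns.
import numpy as np

N = 10                                  # truncation order of the series
xs = np.linspace(0.0, np.pi, 50)        # collocation points

# Column k-2 holds d2/dx2(x^k) + x^k evaluated at each collocation point.
M = np.column_stack([k * (k - 1) * xs**(k - 2) + xs**k
                     for k in range(2, N + 1)])
rhs = -xs                   # residual contributed by the fixed a_1 * x term
a_free, *_ = np.linalg.lstsq(M, rhs, rcond=None)

a = np.concatenate([[0.0, 1.0], a_free])
y = sum(ak * xs**k for k, ak in enumerate(a))
print(np.max(np.abs(y - np.sin(xs))))   # small error vs. the exact solution
```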
18.
A recurrent neural network for solving nonlinear convex programs subject to linear constraints [Cited: 1; self-citations: 0; external citations: 1]
Youshen Xia, Jun Wang. IEEE Transactions on Neural Networks, 2005, 16(2): 379-386
In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and a lower implementation complexity than existing neural networks for such problems. It is shown that the proposed neural network is stable in the sense of Lyapunov and globally convergent to an optimal solution within finite time under the condition that the objective function is strictly convex. Compared with existing convergence results, the present results do not require a Lipschitz continuity condition on the objective function. Finally, examples show the applicability of the proposed neural network.
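Entry 18's network is a recurrent projection-type dynamics. Below is a minimal discretized sketch, assuming box constraints so the projection has a closed form; the paper treats general linear constraints, and its exact model and parameters differ.

```python
# Euler discretization of a projection-type recurrent network
#   dx/dt = P_Omega(x - alpha*grad_f(x)) - x
# for  min f(x)  over the box 0 <= x <= 1 (toy problem, assumed).
import numpy as np

def grad_f(x):
    # Strictly convex toy objective f(x) = (x0 - 2)^2 + (x1 + 1)^2.
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])

proj = lambda z: np.clip(z, 0.0, 1.0)    # P_Omega for the box

x = np.array([0.5, 0.5])                 # arbitrary initial state
alpha, dt = 0.5, 0.1
for _ in range(500):
    x = x + dt * (proj(x - alpha * grad_f(x)) - x)   # Euler step of the flow

print(x)   # -> approximately [1., 0.], the constrained minimizer
```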
19.
A new neural network for solving linear programming problems with bounded variables is presented. The network is shown to be completely stable and globally convergent to the solutions of the linear programming problems. The proposed network is capable of achieving the exact solutions, in contrast to existing optimization neural networks, which need a suitable choice of network parameters and thus obtain only approximate solutions. Furthermore, both the primal problems and their dual problems are solved simultaneously by the new network.
20.
ZHANG Junying, WANG Defeng, SHI Meihong & WANG Joseph Yue (Key Lab on Radar Signal Processing, Xidian University, Xi'an, China; Computer Department, Xi'an Engineering Science & Technology Institute, Xi'an, China; Institute of Computer Science, Xidian University, Xi'an, China; Department of Electrical & Computer Engineering, Virginia Polytechnic Institute & State University, Alexandria, VA, USA). Science in China Series F (English Edition), 2004, 47(1): 20-33
This paper presents a coupled neural network, called the output-threshold coupled neural network (OTCNN), which can mimic the autowaves of pulse-coupled neural networks (PCNNs) through mutual coupling between neuron outputs and neuron thresholds. Based on its autowaves, the paper presents a method for finding the shortest path in the shortest time with OTCNNs. The method requires far fewer neurons, features simple neuron and network structures, and supports large-scale parallel computation. It is shown that the OTCNN is very effective in finding the shortest paths from a single start node to multiple destination nodes in an asymmetric weighted graph, with a number of iterations proportional only to the length of the shortest paths and independent of the complexity of the graph and the total number of existing paths. Finally, examples of finding the shortest path are presented.
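To make the autowave intuition in entry 20 concrete, the toy simulation below propagates a wave front that crosses each edge in a number of ticks equal to its positive integer weight, so a node's first-arrival tick equals its shortest-path distance; the number of iterations grows with the path length, echoing the paper's claim. This is a plain discrete simulation with an invented example graph, not the OTCNN model.

```python
# Wave-front shortest paths on a directed, asymmetric weighted graph.
# Edge weights must be positive integers (interpreted as travel ticks).
from math import inf

graph = {"s": {"a": 2, "b": 5}, "a": {"b": 1, "t": 4},
         "b": {"t": 1}, "t": {}}        # example graph (assumed)

arrival = {v: inf for v in graph}
arrival["s"] = 0
in_flight = {}   # in_flight[(u, v)] = ticks left for the front on edge (u, v)

t = 0
while in_flight or any(arrival[v] == t for v in graph):
    # Nodes whose wave front arrives at tick t launch fronts on their edges.
    for u in [v for v in graph if arrival[v] == t]:
        for v, w in graph[u].items():
            in_flight.setdefault((u, v), w)
    t += 1
    for (u, v), left in list(in_flight.items()):
        if left == 1:
            arrival[v] = min(arrival[v], t)   # first arrival = shortest dist
            del in_flight[(u, v)]
        else:
            in_flight[(u, v)] = left - 1

print(arrival)   # -> {'s': 0, 'a': 2, 'b': 3, 't': 4}
```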