Similar Documents
10 similar documents found (search time: 125 ms)
1.
The softassign quadratic assignment algorithm is a discrete-time, continuous-state, synchronous updating optimizing neural network. While its effectiveness has been shown in the traveling salesman problem, graph matching, and graph partitioning in thousands of simulations, its convergence properties have not been studied. Here, we construct discrete-time Lyapunov functions for the cases of exact and approximate doubly stochastic constraint satisfaction, which show convergence to a fixed point. The combination of good convergence properties and experimental success makes the softassign algorithm an excellent choice for neural quadratic assignment optimization.
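The doubly stochastic constraint step at the heart of softassign is alternating row/column normalization (Sinkhorn balancing). A minimal sketch, assuming a positive benefit matrix `B` (the random matrix, size, and iteration count below are purely illustrative, not the paper's setup):

```python
import numpy as np

def sinkhorn(M, iters=200):
    """Alternately normalize rows and columns so that M approaches
    a doubly stochastic matrix -- the constraint step of softassign."""
    M = np.array(M, dtype=float)
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)  # make row sums 1
        M /= M.sum(axis=0, keepdims=True)  # make column sums 1
    return M

rng = np.random.default_rng(0)
B = np.exp(rng.standard_normal((4, 4)))  # positive benefit matrix (illustrative)
D = sinkhorn(B)
```

In the full algorithm this normalization alternates with a softmax of the benefit matrix at a decreasing temperature, which drives `D` toward a permutation matrix.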

2.
A dual neural network for kinematic control of redundant robot manipulators   Cited by 3 (self-citations: 0, others: 3)
The inverse kinematics problem in robotics can be formulated as a time-varying quadratic optimization problem. A new recurrent neural network, called the dual network, is presented in this paper. The proposed neural network is composed of a single layer of neurons, and the number of neurons is equal to the dimensionality of the workspace. The dual network is proven to be globally exponentially stable and is shown to be capable of asymptotic tracking for the motion control of kinematically redundant manipulators.
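The dual-network idea can be sketched for one time instant: with an m-dimensional workspace, the network state `u` has m neurons, the dynamics drive `J Jᵀ u` toward the desired end-effector velocity, and the joint velocity is read out as `q̇ = Jᵀ u` (the minimum-norm solution). The Jacobian, velocity, and step size below are hypothetical numbers; a real controller would update `J` along the trajectory:

```python
import numpy as np

# Illustrative redundant arm: 3 joints, 2-D workspace (numbers hypothetical).
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])   # end-effector Jacobian
v = np.array([0.3, -0.1])         # desired end-effector velocity

# Dual network: one neuron per workspace dimension (state u);
# dynamics  eps * du/dt = v - J J^T u,  readout  q_dot = J^T u.
u = np.zeros(2)
dt = 0.01
for _ in range(5000):
    u += dt * (v - J @ (J.T @ u))
q_dot = J.T @ u                   # minimum-norm joint velocity
```

At equilibrium `J Jᵀ u = v`, so `q_dot` equals the pseudoinverse solution without ever forming a matrix inverse.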

3.

To overcome the complexity, low accuracy, and poor timeliness of traditional chaotic time-series prediction models, a fully parameterized continued-fraction model is proposed on the basis of inverted-difference-quotient continued fractions, and the quantum-behaved particle swarm optimization (QPSO) algorithm is used to optimize the model parameters, recasting parameter optimization as function optimization over a multidimensional space. Taking the second-order forced Brusselator oscillator and the three-dimensional quadratic autonomous generalized Lorenz system as models, chaotic time series are generated by the fourth-order Runge-Kutta method, and the QPSO-based fully parameterized continued fraction, a BP neural network, and an RBF neural network are each used for single-step and multi-step prediction of the chaotic time series. Simulation results show that the QPSO-based fully parameterized continued fraction is simple in structure, accurate, and efficient, and that the prediction model can be generalized and applied.
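The inverted-difference-quotient continued fraction the model builds on is Thiele-style interpolation; a minimal sketch of that building block, assuming the standard inverted-difference recursion (the QPSO parameter search the paper layers on top is omitted, and the sample data are illustrative):

```python
def thiele_coeffs(xs, ys):
    """Continued-fraction coefficients from inverted difference quotients:
    p[k][i] = (x_i - x_{k-1}) / (p[k-1][i] - p[k-1][k-1])."""
    n = len(xs)
    p = [list(ys)]
    for k in range(1, n):
        prev = p[k - 1]
        p.append([None] * n)
        for i in range(k, n):
            p[k][i] = (xs[i] - xs[k - 1]) / (prev[i] - prev[k - 1])
    return [p[k][k] for k in range(n)]

def thiele_eval(xs, a, x):
    """Evaluate a[0] + (x - xs[0]) / (a[1] + (x - xs[1]) / (... / a[-1]))."""
    val = a[-1]
    for k in range(len(a) - 2, -1, -1):
        val = a[k] + (x - xs[k]) / val
    return val

xs = [0.0, 1.0, 2.0]
ys = [1.0, 0.5, 1.0 / 3.0]   # samples of f(x) = 1/(1+x) (illustrative)
a = thiele_coeffs(xs, ys)
```

Because `f(x) = 1/(1+x)` is itself a low-degree rational function, the three-point continued fraction reproduces it exactly between the nodes.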


4.
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
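Projection dynamics of this kind can be sketched with a forward-Euler discretization of ẋ = P_Ω(x − ∇f(x)) − x, where P_Ω clips each coordinate into its bounds. The objective, box, and step size below are illustrative, not the paper's test problems:

```python
def grad(x):
    # gradient of f(x) = (x0 - 2)^2 + (x1 + 1)^2  (illustrative objective)
    return [2 * (x[0] - 2), 2 * (x[1] + 1)]

def project(v, lo, hi):
    # P_Omega: clip each coordinate into its bound interval
    return [min(max(vi, l), h) for vi, l, h in zip(v, lo, hi)]

lo, hi = [0.0, 0.0], [1.0, 1.0]
x, dt = [0.5, 0.5], 0.05
for _ in range(2000):
    p = project([xi - gi for xi, gi in zip(x, grad(x))], lo, hi)
    x = [xi + dt * (pi - xi) for xi, pi in zip(x, p)]  # dx/dt = P(x - grad f) - x
```

The unconstrained minimizer (2, −1) lies outside the box, and the trajectory settles at the constrained optimum (1, 0), a fixed point of x = P_Ω(x − ∇f(x)).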

5.
Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Unlike existing neural network optimization models, the proposed network involves no dual variables, penalty parameters, or Lagrange multipliers. It has the fewest state variables and a very simple structure. In particular, the proposed network has better asymptotic stability. For an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the idea of a feedback network may be used to solve other optimization problems. Feasibility and efficiency are substantiated by simulation examples.

6.
A neural network approach to job-shop scheduling   Cited by 6 (self-citations: 0, others: 6)
A novel analog computational network is presented for solving NP-complete constraint-satisfaction problems, exemplified by job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on quadratic energy cost functions, the authors propose linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling-salesman-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of both solution quality and network complexity.
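The flavor of a linear-cost analog network can be sketched on a toy linear program in the spirit of the Tank-Hopfield circuit: gradient descent on the linear cost plus a hinge penalty for the constraint, with states clipped to stay nonnegative. This is not the paper's job-shop mapping; the cost vector, constraint, penalty weight, and step size are all hypothetical:

```python
# Toy linear-programming network: minimize c.x  s.t.  a.x >= b, x >= 0,
# via descent on c.x + (mu/2) * max(0, b - a.x)^2 with nonnegativity clipping.
c = [1.0, 1.0]          # linear cost (hypothetical)
a, b = [1.0, 2.0], 4.0  # single inequality constraint a.x >= b
mu, dt = 10.0, 0.01     # penalty weight and Euler step

x = [0.0, 0.0]
for _ in range(2000):
    v = max(0.0, b - a[0] * x[0] - a[1] * x[1])   # constraint violation
    x = [max(0.0, xi + dt * (-ci + mu * v * ai))  # clip keeps x >= 0
         for xi, ci, ai in zip(x, c, a)]
```

The trajectory settles near the LP optimum (0, 2); with a finite penalty weight the equilibrium sits slightly on the violated side (a.x = 3.95 here), a known offset of penalty-based analog networks.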

7.
Grasping and manipulation force distribution optimization of multi-fingered robotic hands can be formulated as the minimization of an objective function subject to form-closure constraints, kinematics constraints, and external-force balance constraints. In this paper we present a novel neural network for the dexterous-hand grasping inverse kinematics mapping used in force optimization. The proposed optimization is shown to be globally convergent to the optimal grasping force. The approach is to let an artificial neural network (ANN) learn the nonlinear inverse kinematics function relating the hand joint positions and displacements to object displacement, by considering the inverse hand Jacobian as well as the interaction between the hand fingers and the object. The proposed neural-network approach reduces implementation complexity and increases solution accuracy by avoiding the linearization of the quadratic friction constraints. Simulation results show that the proposed neural network can achieve the optimal grasping force.

8.
This paper proposes a method based on quadratic programming (QP) and an augmented Lagrange Hopfield network (ALHN) for solving the economic dispatch (ED) problem with piecewise quadratic cost functions and prohibited zones. ALHN is a continuous Hopfield neural network whose energy function is based on an augmented Lagrange function, so it can properly handle constrained optimization problems. In the proposed method, QP is first used to determine the fuel-cost curve for each unit and to initialize the ALHN method; a heuristic search is then used to repair prohibited-zone violations; and the ALHN method is finally applied to solve the problem if any violations are found. The proposed method has been tested on different systems, and the obtained results are compared with those of many other methods in the literature. The comparison indicates that the proposed method obtains better solution quality than many other methods. Therefore, the proposed QP-ALHN method could be a favorable method for solving the ED problem with piecewise quadratic cost functions and prohibited zones.
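The augmented-Lagrangian dynamics behind ALHN can be sketched on a toy two-unit dispatch: gradient descent on the generations, gradient ascent on the power-balance multiplier, with an augmentation term on the mismatch. Cost coefficients, demand, limits, and step sizes are hypothetical, and the paper's piecewise costs and prohibited zones are omitted:

```python
# Two-unit toy economic dispatch: minimize sum_i (a2_i*P_i^2 + a1_i*P_i)
# subject to P_1 + P_2 = D and generation limits, via augmented-Lagrangian
# gradient dynamics (illustrative data).
a2 = [0.01, 0.02]    # quadratic cost coefficients
a1 = [2.0, 2.5]      # linear cost coefficients
D = 100.0            # demand
lo, hi = 0.0, 100.0  # generation limits
rho, dt = 1.0, 0.1   # augmentation weight and Euler step

P, lam = [50.0, 50.0], 0.0
for _ in range(5000):
    mis = D - P[0] - P[1]                    # power-balance mismatch
    for i in range(2):
        marg = 2 * a2[i] * P[i] + a1[i]      # marginal cost dC_i/dP_i
        P[i] = min(max(P[i] + dt * (lam + rho * mis - marg), lo), hi)
    lam += dt * mis                          # multiplier ascent
```

At equilibrium the mismatch vanishes and both units run at equal marginal cost, which is the classical dispatch condition; here that gives P = (75, 25) with lambda = 3.5.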

9.
This paper presents a dynamic optimization scheme for solving degenerate convex quadratic programming (DCQP) problems. Drawing on the saddle-point theorem, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle, a neural network model based on a dynamic system model is constructed. The equilibrium point of the model is proved to be equivalent to the optimal solution of the DCQP problem. It is also shown that the network model is stable in the Lyapunov sense and globally convergent to an exact optimal solution of the original problem. Several practical examples are provided to show the feasibility and efficiency of the method.
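Saddle-point dynamics for a degenerate QP can be sketched as follows: with a singular Hessian Q, an augmentation term rho*a*(a.x - b) keeps the primal flow contracting where Q has a zero eigenvalue. This is a generic primal-dual gradient flow under illustrative problem data, not the paper's specific model:

```python
# Degenerate QP: minimize 0.5*x'Qx + c'x  s.t.  a.x = b,
# with singular Q = diag(2, 0).  Augmented primal-dual gradient flow:
#   dx/dt = -(Qx + c + lam*a + rho*a*(a.x - b)),   dlam/dt = a.x - b
Q = [[2.0, 0.0], [0.0, 0.0]]   # positive semidefinite, singular (degenerate)
c = [-2.0, 1.0]
a, b = [1.0, 1.0], 2.0
rho, dt = 1.0, 0.05

x, lam = [0.0, 0.0], 0.0
for _ in range(3000):
    r = a[0] * x[0] + a[1] * x[1] - b        # constraint residual
    g = [Q[i][0] * x[0] + Q[i][1] * x[1] + c[i] + lam * a[i] + rho * a[i] * r
         for i in range(2)]
    x = [xi - dt * gi for xi, gi in zip(x, g)]
    lam += dt * r
```

The flow converges to the KKT point x = (1.5, 0.5), lam = -1 even though Q alone is not positive definite, because Q + rho*a*a' is.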

10.
A neural computation model for solving linearly constrained quadratic optimization problems   Cited by 1 (self-citations: 0, others: 1)
This paper proposes a neural model for solving linearly constrained quadratic optimization problems, analyzes the stability and convergence of the neural network, presents a circuit block diagram, and demonstrates the feasibility of the network with a numerical example.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号