Similar Documents (19 results)
1.
For nonlinear programming problems with linear constraints, the penalty-function approach to optimization is used to convert the problem into a convex quadratic program. Exploiting the structural properties of neural networks, a suitable energy function is defined so that the network converges to a unique stable point, thereby solving the linearly constrained nonlinear program. Simulation results show that the method is correct and effective, and that it extends to parametric nonlinear programming and multi-objective programming.
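As a rough illustration of the penalty idea summarized above (a sketch, not the paper's actual network model): a linearly constrained problem can be folded into a single unconstrained energy function whose gradient flow plays the role of the network dynamics. The toy problem data below are made up for the example.

```python
# Sketch of a penalty-based "energy function" gradient flow, assuming the
# toy problem: minimize (x1-1)^2 + (x2-2)^2  subject to  x1 + x2 = 2.
# The exact constrained minimizer is (0.5, 1.5).

def solve_penalty(mu=100.0, lr=0.005, steps=20000):
    x1, x2 = 0.0, 0.0
    for _ in range(steps):
        s = x1 + x2 - 2.0                   # constraint violation
        g1 = 2.0 * (x1 - 1.0) + mu * s      # dE/dx1 of the energy function
        g2 = 2.0 * (x2 - 2.0) + mu * s      # dE/dx2
        x1 -= lr * g1                       # Euler step of the gradient flow
        x2 -= lr * g2
    return x1, x2

x1, x2 = solve_penalty()
print(x1, x2)   # close to (0.5, 1.5); a larger mu tightens the constraint
```

Increasing the penalty weight `mu` pushes the stable point of the flow toward the exact constrained minimizer, which mirrors how the energy-function formulation enforces the linear constraints.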

2.
A neural network based on the gradient projection matrix is proposed for solving linearly constrained programming problems. Addressing the stability of the solution, relationships among the relevant network parameters are derived. By the definitions given in the paper, the network is suitable not only for solving linearly constrained linear or non-quadratic programming problems but also for solving systems of linear or nonlinear equations, making it more general than other neural-network approaches to programming problems.

3.
A new neural-network learning algorithm based on the 0.618 (golden-section) method is proposed for solving quadratic programming problems with linear constraints. Compared with existing neural-network learning algorithms for linearly constrained quadratic programming, the algorithm has a wider range of applicability and higher computational accuracy, providing a new approach to this class of problems. Simulation experiments verify the effectiveness of the new algorithm.

4.
A neural-network model based on the gradient projection matrix is proposed for solving linearly constrained programming problems, and a stable solution method for such problems is derived. The model is suitable both for linearly constrained linear or non-quadratic programming problems and for systems of linear or nonlinear equations, and is therefore more general than other neural networks for programming problems.

5.
Conventional recurrent-neural-network methods use asymptotically convergent network models: the error function converges to zero only asymptotically, so in theory an infinitely long computation time is needed to obtain the exact solution. This paper proposes a terminal recurrent neural network model. The network has a novel form and a finite-time convergence property; applied to time-varying matrix computation, it makes the computation converge quickly and with high accuracy. Another feature is that the right-hand side of its dynamic equation takes only finite values, which makes it easy to implement. First, the shortcomings of asymptotically convergent models for time-varying computation are analyzed, motivating the terminal network model. Then the dynamic equation of the terminal network is given and an explicit expression for its convergence time is derived. For time-varying matrix inversion and generalized inversion, an error function is defined and a terminal recurrent neural network is constructed from it, so that the computation converges to the exact solution in finite time. After the trajectory-planning task of a redundant manipulator starting from an arbitrary initial position is converted into a quadratic programming problem, the proposed network is used to solve it; the resulting joint-angle trajectories make the end-effector complete closed-path tracking while the joint angles return exactly to their initial positions, achieving repeatable motion. MATLAB/Simulink simulations of the time-varying matrix computations and the robot trajectory-planning task show that, compared with the asymptotic model, the terminal network model converges faster and achieves significantly higher accuracy. The solution of different time-varying computation problems illustrates the application scope of the proposed network.
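The contrast between asymptotic and finite-time ("terminal") convergence described above can be sketched on a scalar error; this is an illustrative toy, not the paper's matrix-valued network. Asymptotic dynamics de/dt = -e only decay exponentially, while the terminal-attractor dynamics de/dt = -sqrt(e) drive the error to exactly zero at the finite time t = 2·sqrt(e0).

```python
# Euler simulation of a scalar error under two dynamics (toy illustration):
#   asymptotic:  de/dt = -e        -> e(t) = e0 * exp(-t), never exactly 0
#   terminal:    de/dt = -sqrt(e)  -> reaches 0 at t = 2*sqrt(e0)
import math

def simulate(T=3.0, dt=1e-3, e0=1.0):
    e_asym, e_term = e0, e0
    t = 0.0
    while t < T:
        e_asym += dt * (-e_asym)
        e_term += dt * (-math.sqrt(max(e_term, 0.0)))
        e_term = max(e_term, 0.0)   # clamp: the exact flow stops at zero
        t += dt
    return e_asym, e_term

e_asym, e_term = simulate()
print(e_asym, e_term)  # terminal error is numerically zero well before T = 3
```

With e0 = 1 the terminal error hits zero at t = 2, while the asymptotic error at t = 3 is still about exp(-3) ≈ 0.05, which is the qualitative gap the finite-time network exploits.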

6.
韩振宇, 李树荣. 《控制与决策》, 2012, 27(9): 1370-1375
For constrained nonlinear optimal control problems, a numerical method based on quasilinearization and Haar functions is proposed. The optimal control problem is first converted into a sequence of quadratic programming problems, with the state variables approximated by Haar functions with unknown coefficients; quasilinearization then transforms the original nonlinear optimal control problem into a corresponding sequence of constrained quadratic optimal control problems. Finally, two constrained nonlinear optimal control problems are solved with the proposed method, and simulation results demonstrate its effectiveness.

7.
孔颖, 孙明轩. 《计算机科学》, 2018, 45(12): 201-205
To eliminate the joint-angle drift that occurs when a redundant manipulator moves, a terminal-attractor optimization criterion is proposed, yielding a quadratic optimization formulation of repetitive motion planning for redundant manipulators. A terminal neural network with a finite-valued activation function is used as the solver, achieving finite-time-convergent repetitive motion planning even when the initial position deviates from the target position. The motion-planning problem is solved both with the new terminal neural network (TNN) and with its accelerated variant (ATNN); both have the terminal-attractor property and deliver a valid solution in finite time. Compared with neural networks with asymptotically convergent dynamics (ANN), the terminal-network approach not only improves the convergence speed but also raises the convergence accuracy. Computer simulations on the redundant manipulator PUMA560 demonstrate the effectiveness and real-time performance of the proposed method.

8.
A dynamic-programming-based multi-parametric programming method for constrained optimization problems and its applications
Combining dynamic programming with single-step multi-parametric quadratic programming, a new multi-parametric programming method for constrained optimal control problems is proposed. On the one hand, it yields the explicit functional relationship between the optimal control sequence and the state for constrained linear-quadratic optimal control problems, reducing the computational effort of solving the multi-parametric program; on the other hand, it simultaneously produces the state-feedback optimal control law. Using the proposed multi-parametric quadratic programming method, an explicit state-feedback optimal control law is established for infinite-horizon constrained optimization problems. Numerical simulations are carried out on a vibration-control model of an elevator mechanical system.

9.
A neural network model for solving nonlinear programs with mixed constraints
陶卿, 任富兴, 孙德敏. 《软件学报》, 2002, 13(2): 304-310
By carefully constructing a Lyapunov function, a continuous neural-network model with large-range (global) convergence is proposed for optimization. It has good functionality and performance and can solve nonlinear programming problems with both equality and inequality constraints. The model generalizes Newton's steepest-descent method to constrained problems and effectively improves solution accuracy. Even for positive-definite quadratic programming problems, its structure is simpler than that of existing models.

10.
Self-motion of planar redundant manipulators based on quadratic programming
A method based on a quadratic performance index is proposed for planning the self-motion trajectories of planar redundant manipulators. Since physical joint limits exist in real manipulators, the self-motion planning scheme accounts for the avoidance of joint limits and joint-velocity limits. A primal-dual neural network based on linear variational inequalities is proposed and used as a real-time solver for the corresponding quadratic program. Simulation results confirm the effectiveness of the neural-network-based self-motion planning scheme.
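A discrete-time analogue of such a projection-type solver can be sketched as projected gradient iteration on a box-constrained QP; the problem data below are hypothetical and this is not the paper's LVI-based primal-dual model.

```python
# Projected-gradient sketch for a box-constrained QP (toy data):
#   minimize 0.5*x'Qx + c'x   subject to  -1 <= x_i <= 1
# Q = diag(2, 2), c = (-4, 1): the unconstrained minimum (2, -0.5)
# violates the box, so the constrained solution clips to (1, -0.5).

def clip(v, lo=-1.0, hi=1.0):
    return max(lo, min(hi, v))

def solve_box_qp(alpha=0.1, steps=500):
    x = [0.0, 0.0]
    Q = [[2.0, 0.0], [0.0, 2.0]]
    c = [-4.0, 1.0]
    for _ in range(steps):
        g = [Q[i][0] * x[0] + Q[i][1] * x[1] + c[i] for i in range(2)]
        x = [clip(x[i] - alpha * g[i]) for i in range(2)]   # project onto box
    return x

x = solve_box_qp()
print(x)   # approximately [1.0, -0.5]
```

The projection step is what lets joint limits (the box) be respected at every iterate, which is the role the bound constraints play in the self-motion scheme above.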

11.
Two classes of high-performance neural networks for solving linear and quadratic programming problems are given. We prove that the new system converges globally to the solutions of the linear and quadratic programming problems. In a neural network, network parameters are usually not specified. The proposed models can overcome numerical difficulty caused by neural networks with network parameters and obtain desired approximate solutions of the linear and quadratic programming problems.

12.
A programming-based learning algorithm for neural networks
张铃, 张钹. 《计算机学报》, 1994, 17(9): 669-675
Using quadratic programming, this paper discusses the optimization of the radius of attraction of a neural network's training samples, and borrows the RPA algorithm from quadratic programming to find the optimal solution, yielding a new programming-based learning algorithm for neural networks. The algorithm is then compared with several common existing algorithms and its distinguishing features are pointed out.

13.
A new gradient-based neural network is constructed on the basis of the duality theory, optimization theory, convex analysis theory, Lyapunov stability theory, and LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) such that the function E(x, y) is convex and differentiable, and the resulting network is more efficient. This network involves all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with unique and infinite number of solutions, we have proven strictly that for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network is different from the existing networks which use the penalty method or Lagrange method, and the inequality constraints are properly handled. The simulation results show that the proposed neural network is feasible and efficient.

14.
Discusses the learning problem of neural networks with self-feedback connections and shows that when the neural network is used as associative memory, the learning problem can be transformed into some sort of programming (optimization) problem. Thus, the rather mature optimization technique in programming mathematics can be used for solving the learning problem of neural networks with self-feedback connections. Two learning algorithms based on programming technique are presented. Their complexity is just polynomial. Then, the optimization of the radius of attraction of the training samples is discussed using quadratic programming techniques and the corresponding algorithm is given. Finally, the comparison is made between the given learning algorithm and some other known algorithms.

15.
In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.

16.
In this paper, a discrete-time recurrent neural network with global exponential stability is proposed for solving linear constrained quadratic programming problems. Compared with the existing neural networks for quadratic programming, the proposed neural network in this paper has lower model complexity with only one-layer structure. Moreover, the global exponential stability of the neural network can be guaranteed under some mild conditions. Simulation results with some applications show the performance and characteristic of the proposed neural network.

17.
In this paper, a novel intelligent computational approach is developed for finding the solution of nonlinear singular system governed by boundary value problems of Flierl–Petviashivili equations using artificial neural networks optimized with genetic algorithms, sequential quadratic programming technique, and their combinations. The competency of artificial neural network for universal function approximation is exploited in formulation of mathematical modelling of the equation based on an unsupervised error with specialty of satisfying boundary conditions at infinity. The training of the weights of the networks is carried out with memetic computing based on genetic algorithm used as a tool for reliable global search method, hybridized with sequential quadratic programming technique used as a tool for rapid local convergence. The proposed scheme is evaluated on three variants of the two boundary problems by taking different values of nonlinearity operators and constant coefficients. The reliability and effectiveness of the design approaches are validated through the results of statistical analyses based on sufficient large number of independent runs in terms of accuracy, convergence, and computational complexity. Comparative studies of the proposed results are made with state of the art analytical solvers, which show a good agreement mostly and even better in few cases as well. The intrinsic worth of the schemes is simplicity in the concept, ease in implementation, to avoid singularity at origin, to deal with strong nonlinearity effectively, and their ability to handle exactly traditional initial conditions along with boundary condition at infinity.

18.
Most existing neural networks for solving linear variational inequalities (LVIs) with the mapping Mx + p require positive definiteness (or positive semidefiniteness) of M. In this correspondence, it is revealed that this condition is sufficient but not necessary for an LVI being strictly monotone (or monotone) on its constrained set where equality constraints are present. Then, it is proposed to reformulate monotone LVIs with equality constraints into LVIs with inequality constraints only, which are then possible to be solved by using some existing neural networks. General projection neural networks are designed in this correspondence for solving the transformed LVIs. Compared with existing neural networks, the designed neural networks feature lower model complexity. Moreover, the neural networks are guaranteed to be globally convergent to solutions of the LVI under the condition that the linear mapping Mx + p is monotone on the constrained set. Because quadratic and linear programming problems are special cases of LVI in terms of solutions, the designed neural networks can solve them efficiently as well. In addition, it is discovered that the designed neural network in a specific case turns out to be the primal-dual network for solving quadratic or linear programming problems. The effectiveness of the neural networks is illustrated by several numerical examples.

19.
This paper presents two neural network approaches to minimum infinity-norm solution of the velocity inverse kinematics problem for redundant robots. Three recurrent neural networks are applied for determining a joint velocity vector with its maximum absolute value component being minimal among all possible joint velocity vectors corresponding to the desired end-effector velocity. In each proposed neural network approach, two cooperating recurrent neural networks are used. The first approach employs two Tank-Hopfield networks for linear programming. The second approach employs two two-layer recurrent neural networks for quadratic programming and linear programming, respectively. Both the minimal 2-norm and infinity-norm of joint velocity vector can be obtained from the output of the recurrent neural networks. Simulation results demonstrate that the proposed approaches are effective with the second approach being better in terms of accuracy and optimality.
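The difference between minimum 2-norm and minimum infinity-norm joint velocities can be illustrated on a hypothetical one-DOF-redundant case (toy numbers, not the paper's networks): with Jacobian J = [2, 1] and end-effector velocity v = 4, every feasible joint velocity has the form (a, 4 - 2a). The pseudoinverse gives the minimum 2-norm solution (1.6, 0.8), while a simple scan over the null-space parameter finds the minimum infinity-norm solution (4/3, 4/3).

```python
# Min 2-norm vs min infinity-norm joint velocity for J = [2, 1], v = 4.
# Every feasible joint velocity has the form (a, 4 - 2*a).

# Minimum 2-norm solution via the pseudoinverse: J'(J J')^-1 v
jjt = 2 * 2 + 1 * 1                   # J J' = 5
theta_2norm = (2 * 4 / jjt, 1 * 4 / jjt)   # (1.6, 0.8)

# Minimum infinity-norm solution by scanning the null-space parameter a
best_a, best_inf = None, float("inf")
a = -2.0
while a <= 4.0:
    inf_norm = max(abs(a), abs(4 - 2 * a))
    if inf_norm < best_inf:
        best_a, best_inf = a, inf_norm
    a += 1e-4
theta_inf = (best_a, 4 - 2 * best_a)

print(theta_2norm, theta_inf, best_inf)  # inf-norm optimum near (1.333, 1.333)
```

The infinity-norm solution spreads the motion evenly across joints (max component 4/3 ≈ 1.33 instead of 1.6), which is exactly why the minimum infinity-norm criterion is attractive for limiting peak joint speed.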
