Similar Documents
Found 20 similar documents (search time: 140 ms)
1.
A Neural Network Method for Solving Linear Programming Problems Based on the Bisection Method   Cited by: 1 (self: 0, other: 1)
Starting from numerical approximation and incorporating the idea of the bisection method, this paper proposes a new neural network computational model, called the Bisection Neural Network (BNN) model. A neural network learning algorithm based on the bisection idea is given and applied to solving linear programming problems, the aim being to provide a new method for solving linear programming.
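The bisection idea underlying the BNN model can be illustrated with a plain root-finding sketch (a generic illustration of the bisection principle, not the paper's network itself):

```python
def bisect(f, lo, hi, tol=1e-8):
    """Find a root of f on [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    assert f(lo) * f(hi) <= 0, "bisection needs a sign change on [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:   # root lies in the left half
            hi = mid
        else:                     # root lies in the right half
            lo = mid
    return (lo + hi) / 2.0

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
# root ≈ 1.4142 (sqrt(2))
```

Each iteration halves the bracketing interval, which is the convergence behavior the BNN learning algorithm exploits.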

2.
A New Neural Network Method for Solving Linear Programming Problems   Cited by: 2 (self: 1, other: 1)
Tian Dagang, Fei Qi  Acta Automatica Sinica (《自动化学报》), 1999, 25(5): 709-712
1 Introduction. The simplex method is the most commonly used method for solving linear programming problems, but it is not a polynomial-time algorithm [1]. The ellipsoid algorithm [2] showed that polynomial-time methods for linear programming exist, yet the ellipsoid algorithm itself has not been successful in practice. Interior-point methods [3-5] are a newer class of polynomial-time algorithms; although they have shown considerable potential for large-scale linear programs, their numerical accuracy and software implementations still need refinement. Neural network methods embody a new computational idea: thanks to their inherent parallelism and their capacity for learning and association, their prospects for application and development are hard to overstate. For linear programming, the TH algorithm proposed by Hopfield and Tank [6] is representative of this approach; however, the TH…

3.
For linear programming problems whose objective function and inequality constraints contain parameters, a neural network method based on a new smooth exact penalty function is proposed. The error function is introduced to construct an approximation of the unit step function, yielding a smooth penalty function that approximates the L1 exact penalty function more closely; its basic properties are discussed. Using the proposed smooth exact penalty function, a neural network model for solving parametric linear programming problems is established; the stability and convergence of the network model are proved, and detailed algorithm steps are given. Numerical simulations verify that the proposed method enjoys small penalty-factor values, a simple structure, and high computational accuracy.
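The erf-based smoothing described above can be sketched as follows (the smoothing parameter `eps` and this particular construction are illustrative assumptions; the paper's exact formulas may differ):

```python
import math

def smooth_step(x, eps=1e-2):
    """Smooth approximation of the unit step function via the error function."""
    return 0.5 * (1.0 + math.erf(x / eps))

def smooth_plus(x, eps=1e-2):
    """Smooth approximation of max(0, x), the building block of the L1 penalty.
    Its derivative is exactly smooth_step(x, eps)."""
    return (x * smooth_step(x, eps)
            + eps / (2.0 * math.sqrt(math.pi)) * math.exp(-(x / eps) ** 2))
```

As `eps` shrinks, `smooth_plus` approaches the nonsmooth term `max(0, x)` used by the L1 exact penalty while remaining differentiable everywhere.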

4.
General linear programming models contain a large number of redundant constraints, which inevitably interfere with solving the problem, slowing the solution process and reducing efficiency. If these redundant constraints can be simplified and eliminated before the linear program is solved, the sparsity of the constraint matrix increases, the scale of the problem shrinks, and considerable computer memory and computation time are saved during solution. A new method for simplifying linear programming models is proposed; experiments with the implemented program show that the simplification achieves the expected effect.
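One elementary kind of redundancy is a constraint that is a positive multiple of another with an equal-or-looser right-hand side. The toy check below removes only that kind (a much simpler test than the paper's simplification method, which is not specified here):

```python
def drop_scaled_duplicates(A, b, tol=1e-9):
    """For constraints a_i . x <= b_i, drop any row that is a positive
    multiple of an earlier kept row with an equal-or-looser bound."""
    kept = []
    for ai, bi in zip(A, b):
        redundant = False
        for aj, bj in kept:
            # a zero entry in aj with a nonzero entry in ai rules out scaling
            if any(abs(y) <= tol and abs(x) > tol for x, y in zip(ai, aj)):
                continue
            pairs = [(x, y) for x, y in zip(ai, aj) if abs(y) > tol]
            if not pairs:
                continue
            s = pairs[0][0] / pairs[0][1]       # candidate scale ai = s * aj
            if (s > 0 and all(abs(x - s * y) <= tol for x, y in zip(ai, aj))
                    and bi >= s * bj - tol):
                redundant = True
                break
        if not redundant:
            kept.append((list(ai), bi))
    return [a for a, _ in kept], [bb for _, bb in kept]

A = [[1, 2], [2, 4], [1, 0]]
b = [4, 9, 3]                 # 2x + 4y <= 9 is implied by x + 2y <= 4
A2, b2 = drop_scaled_duplicates(A, b)
# A2 == [[1, 2], [1, 0]], b2 == [4, 3]
```

Detecting general redundancy (constraints implied by combinations of others) requires solving auxiliary linear programs, which is where a full simplification method earns its keep.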

5.
For nonlinear programming problems whose constraints contain parameters, a new computational method based on an L1 exact penalty function neural network is proposed. The penalty factor of the method is a finite real number and can be kept small, which facilitates hardware implementation. Building on improvements to existing network models, the dynamical equations of the neural network are constructed from the steepest-descent principle. Concrete steps for applying the proposed neural network model to optimization are given. Finally, numerical examples are simulated; the results show that the proposed method converges to the optimal solution of the original programming problem more quickly and accurately.
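The steepest-descent dynamics on an L1 exact penalty can be imitated in ordinary code with subgradient descent (a sketch on assumed toy problem data, not the paper's network; the penalty factor `sigma` only needs to exceed the optimal multiplier, here 2):

```python
def l1_penalty_descent(sigma=5.0, step=1e-3, iters=20000):
    """min (x-2)^2 + (y-2)^2  s.t.  x + y <= 2, via the L1 exact penalty
    F(x, y) = f(x, y) + sigma * max(0, x + y - 2) and subgradient descent."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        gx, gy = 2.0 * (x - 2.0), 2.0 * (y - 2.0)   # gradient of f
        if x + y - 2.0 > 0.0:                        # subgradient of penalty
            gx += sigma
            gy += sigma
        x -= step * gx
        y -= step * gy
    return x, y

x, y = l1_penalty_descent()
# (x, y) ≈ (1, 1), the constrained minimizer
```

Because the L1 penalty is exact, a finite (and modest) `sigma` suffices, which matches the abstract's point about small penalty factors easing hardware realization.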

6.
Based on the branch-and-bound method for integer linear programming, a new class of cutting inequalities is constructed, parameterized by the optimal objective value of a subproblem or of the root problem; these cuts conveniently cut off the non-integer optimal solution of the subproblem or root problem. Applying such a cut before branching yields a new cut-and-branch algorithm for solving integer linear programming problems. The algorithm was applied to several classical numerical examples; the experimental results show that, compared with the classical branch-and-bound method, it greatly reduces the number of branches and improves computational efficiency, and its advantage becomes more pronounced as problem size grows.

7.
一种新的非线性规划神经网络模型   总被引:1,自引:0,他引:1  
提出一种新型的求解非线性规划问题的神经网络模型.该模型由变量神经元、Lagrange 乘子神经元和Kuhn-Tucker乘子神经元相互连接构成.通过将Kuhn-Tucker乘子神经元限 制在单边饱和工作方式,使得在处理非线性规划问题中不等式约束时不需要引入松弛变量,避 免了由于引入松弛变量而造成神经元数目的增加,有利于神经网络的硬件实现和提高神经网 络的收敛速度.可以证明,在适当的条件下,文中提出的神经网络模型的状态轨迹收敛到与非 线性规划问题的最优解相对应的平衡点.  相似文献   

8.
A Neural Network Model for Nonlinear Programming with Mixed Constraints   Cited by: 1 (self: 0, other: 1)
Tao Qing, Ren Fuxing, Sun Demin  Journal of Software (《软件学报》), 2002, 13(2): 304-310
By carefully constructing a Liapunov function, a continuous neural network model with global convergence for solving optimization problems is proposed. It has good functionality and performance and can solve nonlinear programming problems with both equality and inequality constraints. The model generalizes the Newton steepest-descent method to constrained problems and can effectively improve solution accuracy. Even for positive definite quadratic programming problems, its structure is simpler than that of existing models.

9.
This paper studies feedforward neural networks whose weights and thresholds are fuzzy numbers. An algorithm that determines such weights and thresholds via linear programming is given, and the advantages of the algorithm are verified on a practical example.

10.
A Multiobjective Genetic Algorithm for Nonlinear Programming and Its Convergence   Cited by: 1 (self: 0, other: 1)
A new method for solving nonlinearly constrained programming problems is given. It requires neither a traditional penalty function nor any distinction between feasible and infeasible solutions. The new method converts the constrained nonlinear program into a two-objective optimization problem: one objective is the original problem's objective function, the other a measure of constraint violation. A new selection operator is designed using the Pareto dominance relation from multiobjective optimization, and with a suitable design of the search operators and parameters a new genetic algorithm is obtained, together with a proof of its convergence. Finally, numerical experiments show that the algorithm is very effective for constrained nonlinear programming problems.
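The (objective, violation) comparison at the heart of the selection operator can be sketched as a Pareto dominance test (a minimal illustration under assumed toy functions, not the paper's full operator):

```python
def violation(x, constraints):
    """Total constraint violation for constraints g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def dominates(a, b, f, constraints):
    """Pareto dominance on the pair (f, violation): a dominates b if it is
    no worse in both objectives and strictly better in at least one."""
    fa, fb = f(a), f(b)
    va, vb = violation(a, constraints), violation(b, constraints)
    return (fa <= fb and va <= vb) and (fa < fb or va < vb)

f = lambda x: (x - 2.0) ** 2
cons = [lambda x: x - 1.0]        # constraint x <= 1
# x = 1.0 (feasible, f = 1) dominates x = 0.0 (feasible, f = 4)
```

A selection operator built on this relation favors feasible, low-objective individuals without ever needing a penalty factor, which is exactly the point of the two-objective reformulation.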

11.
A novel neural network approach is proposed for solving the linear bilevel programming problem. The proposed neural network is proved to be Lyapunov stable and capable of generating the optimal solution of the linear bilevel programming problem. Numerical results show that the neural network approach is feasible and efficient.

12.
Two classes of high-performance neural networks for solving linear and quadratic programming problems are given. We prove that the new system converges globally to the solutions of the linear and quadratic programming problems. Unlike most neural networks, the proposed models require no network parameters to be specified; they thereby avoid the numerical difficulties caused by parameter-dependent networks and obtain the desired approximate solutions of the linear and quadratic programming problems.

13.
A new neural network learning algorithm based on the 0.618 (golden section) method is proposed for solving quadratic programming problems with linear constraints. Compared with existing neural network learning algorithms for linearly constrained quadratic programming, the new algorithm has a wider range of applicability and higher computational accuracy; its aim is to provide a new method for solving linearly constrained quadratic programming problems. Simulation experiments verify the effectiveness of the new algorithm.
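The 0.618 method the algorithm builds on is the classical golden-section line search, sketched here for a one-dimensional unimodal function (a generic illustration, not the paper's learning algorithm):

```python
def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by the 0.618 (golden section) method."""
    r = 0.6180339887498949            # (sqrt(5) - 1) / 2
    x1 = b - r * (b - a)
    x2 = a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                   # minimizer lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                         # minimizer lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2.0

xmin = golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
# xmin ≈ 2.0
```

Each iteration shrinks the interval by the factor 0.618 while reusing one function evaluation, which is what makes the method attractive inside an iterative learning loop.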

14.
This paper presents two neural network approaches to the minimum infinity-norm solution of the velocity inverse kinematics problem for redundant robots. Three recurrent neural networks are applied for determining a joint velocity vector whose maximum absolute-value component is minimal among all joint velocity vectors corresponding to the desired end-effector velocity. In each proposed approach, two cooperating recurrent neural networks are used. The first approach employs two Tank-Hopfield networks for linear programming. The second approach employs two two-layer recurrent neural networks for quadratic programming and linear programming, respectively. Both the minimal 2-norm and the minimal infinity-norm joint velocity vector can be obtained from the output of the recurrent neural networks. Simulation results demonstrate that the proposed approaches are effective, with the second approach being better in terms of accuracy and optimality.

15.
A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems that are constructed using a parametric family of exact (nondifferentiable) penalty functions. It is proved that, for a given linear programming problem and sufficiently large penalty parameters, any trajectory of the neural network converges in finite time to its solution set. For the analysis, Lyapunov-type theorems are developed for finite-time convergence of nonsmooth sliding-mode dynamic systems to invariant sets. The results are illustrated via numerical simulation examples.

16.
In this paper, a discrete-time recurrent neural network with global exponential stability is proposed for solving linear constrained quadratic programming problems. Compared with the existing neural networks for quadratic programming, the proposed neural network in this paper has lower model complexity with only one-layer structure. Moreover, the global exponential stability of the neural network can be guaranteed under some mild conditions. Simulation results with some applications show the performance and characteristic of the proposed neural network.
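A one-layer discrete-time network of this flavor can be written as the projection iteration x ← P(x − α(Qx + c)) (a generic sketch for a box-constrained quadratic program with assumed data; the paper's exact model and stability conditions are not reproduced here):

```python
def project_box(x, lo, hi):
    """Componentwise projection onto the box [lo, hi]."""
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def projection_nn(Q, c, lo, hi, alpha=0.1, iters=2000):
    """Discrete-time projection iteration x <- P(x - alpha*(Qx + c)) for
    min 0.5 x'Qx + c'x over the box [lo, hi]."""
    n = len(c)
    x = [0.0] * n
    for _ in range(iters):
        grad = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        x = project_box([xi - alpha * g for xi, g in zip(x, grad)], lo, hi)
    return x

# min 0.5*(x1^2 + x2^2) - x1 - 2*x2  over [0, 1]^2  ->  x* = (1, 1)
x = projection_nn([[1.0, 0.0], [0.0, 1.0]], [-1.0, -2.0], [0.0, 0.0], [1.0, 1.0])
```

The single projection layer is what keeps the model complexity low: the state update needs only one matrix-vector product and one clipping step per iteration.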

17.
Presents a new neural network that improves on existing neural networks for solving general linear programming problems. The network requires no parameter setting, uses only simple hardware in which no analog multipliers are required, and is proved to be completely stable at the exact solutions. Moreover, using this network the author can solve a linear programming problem and its dual simultaneously, and cope with problems with nonunique solutions, whose solution set is allowed to be unbounded.

18.
A new neural network for solving linear programming problems with bounded variables is presented. The network is shown to be completely stable and globally convergent to the solutions to the linear programming problems. The proposed new network is capable of achieving the exact solutions, in contrast to existing optimization neural networks which need a suitable choice of the network parameters and thus can obtain only approximate solutions. Furthermore, both the primal problems and their dual problems are solved simultaneously by the new network.

19.
Most existing neural networks for solving linear variational inequalities (LVIs) with the mapping Mx + p require positive definiteness (or positive semidefiniteness) of M. In this correspondence, it is revealed that this condition is sufficient but not necessary for an LVI being strictly monotone (or monotone) on its constrained set where equality constraints are present. Then, it is proposed to reformulate monotone LVIs with equality constraints into LVIs with inequality constraints only, which are then possible to be solved by using some existing neural networks. General projection neural networks are designed in this correspondence for solving the transformed LVIs. Compared with existing neural networks, the designed neural networks feature lower model complexity. Moreover, the neural networks are guaranteed to be globally convergent to solutions of the LVI under the condition that the linear mapping Mx + p is monotone on the constrained set. Because quadratic and linear programming problems are special cases of LVI in terms of solutions, the designed neural networks can solve them efficiently as well. In addition, it is discovered that the designed neural network in a specific case turns out to be the primal-dual network for solving quadratic or linear programming problems. The effectiveness of the neural networks is illustrated by several numerical examples.

20.
Global exponential stability is a desirable property for dynamic systems. The paper studies the global exponential stability of several existing recurrent neural networks for solving linear programming problems, convex programming problems with interval constraints, convex programming problems with nonlinear constraints, and monotone variational inequalities. In contrast to the existing results on global exponential stability, the present results do not require additional conditions on the weight matrices of recurrent neural networks and improve some existing conditions for global exponential stability. Therefore, the stability results in the paper further demonstrate the superior convergence properties of the existing neural networks for optimization.


Copyright © Beijing Qinyun (勤云) Technology Development Co., Ltd.  京ICP备09084417号