Similar Articles
18 similar articles found (search time: 531 ms).
1.
For nonlinear programming problems whose constraints contain parameters, a new computational method based on an L1 exact penalty function neural network is proposed. The penalty factor of the method is a finite real number and can be kept small, which eases hardware implementation. Building on improvements to existing network models, the dynamical equations of the neural network are constructed from the steepest descent principle, and concrete steps for applying the proposed model to optimization computation are given. Finally, simulations on numerical examples show that the proposed method converges to the optimal solution of the original programming problem faster and more accurately.
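As a concrete illustration of the general idea, here is a minimal sketch of an L1 exact-penalty gradient flow integrated by forward Euler; the toy problem, penalty factor, and step size are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Toy problem (an assumption for illustration):
#   min f(x) = (x0 - 2)^2 + (x1 - 1)^2   s.t.   g(x) = x0 + x1 - 2 <= 0
# The L1 exact penalty is E(x) = f(x) + sigma * max(0, g(x)); any finite
# sigma above the optimal multiplier (here lambda* = 1) makes it exact.

def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def g(x):
    return x[0] + x[1] - 2.0

grad_g = np.array([1.0, 1.0])   # constant gradient of the linear constraint

def penalty_flow(x0, sigma=5.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of dx/dt = -grad f(x) - sigma*s(x)*grad g(x),
    where s(x) is a subgradient of max(0, g(x)): 0 if g(x) < 0, else 1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        s = 1.0 if g(x) > 0.0 else 0.0
        x = x - dt * (grad_f(x) + sigma * s * grad_g)
    return x

print(penalty_flow([5.0, 5.0]))   # approaches the optimum (1.5, 0.5)
```

Because the L1 penalty is exact for any finite factor above the optimal multiplier, a small penalty factor suffices, which is the property the abstract highlights for hardware realization.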

2.
For nonsmooth pseudoconvex optimization problems with inequality constraints, a new recurrent neural network model based on differential inclusion theory is proposed. A penalty function that varies with the state vector is designed from the objective function and the constraints, so that the state of the network always moves toward the feasible region, enters it in finite time, and finally converges to an optimal solution of the original problem. Two simulation experiments verify the effectiveness and accuracy of the network. Compared with existing neural networks, this is a new model with a simple structure; it requires no computation of an exact penalty factor and, most importantly, does not require the feasible region to be bounded.

3.
Nonsmooth pseudoconvex optimization problems arise widely in science and engineering; they form a special class of nonconvex problems and are of significant research interest. For nonsmooth pseudoconvex optimization with both equality and inequality constraints, this paper proposes a new neurodynamic method that combines penalty-function and regularization ideas. An effective penalty function guarantees boundedness of the network state, which in turn guarantees that the state solution enters the feasible region in finite time and finally converges to an optimal solution of the original problem. Two numerical experiments verify the effectiveness of the proposed model. Compared with existing neural networks, the model has the following advantages: it avoids computing an exact penalty factor in advance, places no special requirement on the initial point, and has a simple structure.

4.
The penalty function method is an important technique for converting a constrained optimization problem into an unconstrained one. For general constrained optimization problems, an improved exact penalty function is obtained by introducing a new parameter, the corresponding exact penalty theorem is proved, and an algorithm for minimizing this penalty function is proposed. Experiments show that the algorithm is effective.
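For reference, the classical L1 exact penalty that this family of results builds on has the following standard form (the paper's improved variant with its additional parameter is not given in the abstract and is not reproduced here):

```latex
% Classical l1 exact penalty for  min f(x)  s.t.  g_i(x) <= 0,  h_j(x) = 0:
\[
  F_\sigma(x) \;=\; f(x) \;+\; \sigma\Big(\sum_i \max\{0,\, g_i(x)\}
  \;+\; \sum_j \lvert h_j(x)\rvert\Big).
\]
```

Under standard constraint qualifications there is a finite threshold (any value of σ exceeding the largest optimal Lagrange multiplier) beyond which minimizers of F_σ coincide with minimizers of the constrained problem; this is the generic shape of an exact penalty theorem.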

5.
To find optimal solutions of constrained optimization problems in which the objective function is non-Lipschitz and the feasible region is defined by linear or nonlinear inequality constraints, a smoothing neural network model is constructed. The model is built by introducing a smoothing approximation that replaces the nonsmooth objective with a corresponding smooth function, combined with the penalty function method. A detailed theoretical analysis proves that, whether the initial point lies inside or outside the feasible region, the solutions of the smoothing neural network are uniformly bounded and globally defined, and that every accumulation point of the network is a stationary point of the original problem. Several simple simulation experiments confirm the theory.
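A typical smoothing approximation of the kind the abstract refers to (the paper's exact smoothing function is not given, so this standard construction is an assumption) replaces |x| by an infinitely differentiable surrogate with a uniform, tunable error bound:

```python
import numpy as np

# Standard smoothing surrogate: replace the nonsmooth term |x| by
# phi_mu(x) = sqrt(x^2 + mu^2), which is C-infinity and satisfies
# 0 <= phi_mu(x) - |x| <= mu, so phi_mu -> |x| uniformly as mu -> 0.

def phi(x, mu):
    return np.sqrt(x**2 + mu**2)

def dphi(x, mu):
    return x / np.sqrt(x**2 + mu**2)   # tends to sign(x) as mu -> 0

x = np.array([-2.0, 0.0, 0.5])
for mu in (1.0, 0.1, 0.01):
    gap = np.max(np.abs(phi(x, mu) - np.abs(x)))
    print(mu, gap)                     # gap is largest at x = 0, equal to mu
```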

6.
Nonsmooth nonconvex optimization problems with linear inequality constraints are widely used in sparse optimization and are of considerable research value. To solve this class of problems, a neural network model based on smoothing and differential inclusion theory is proposed. Theoretical analysis proves that the state solution of the network exists globally, that its trajectory enters the feasible region in finite time and stays there permanently, and that every accumulation point is a generalized stationary point of the target problem. Numerical experiments and an image restoration experiment verify the network's effectiveness in theory and in application. Compared with existing neural networks, it has the following advantages: the initial point can be chosen arbitrarily, no exact penalty factor needs to be computed, and no complicated projection operator has to be evaluated.

7.
Reliability allocation for an equipment system is in essence a constrained nonlinear programming problem. Starting from the interior penalty function viewpoint, a penalty function is constructed that converts the problem into an unconstrained optimization problem. A cost function based on system complexity and mean time between failures links system reliability to cost, yielding a mathematical model of reliability allocation under a cost constraint. In a worked example, the allocation model is solved with the ode15s function in Matlab, and optimal allocations are obtained for different penalty factors. The results converge well and are stable, showing that the allocation method is practical.
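The following sketch mirrors the approach with SciPy's stiff solver in place of Matlab's ode15s; the two-component cost model, budget, and penalty weights are hypothetical stand-ins for the paper's complexity/MTBF-based cost function:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch of interior-penalty reliability allocation. SciPy's LSODA
# plays the role of Matlab's ode15s (both are stiff integrators).
# Toy problem (an assumption): maximize r0 * r1 subject to cost(r) <= C,
# recast as the gradient flow of  -sum_i log(r_i) - (1/k) * log(C - cost(r)).

C = 10.0                                   # hypothetical cost budget

def cost(r):
    return 3.0 / (1.0 - r[0]) + 2.0 / (1.0 - r[1])

def rhs(t, r, k):
    slack = C - cost(r)                    # interior barrier keeps slack > 0
    grad_obj = -1.0 / r
    grad_cost = np.array([3.0 / (1.0 - r[0])**2, 2.0 / (1.0 - r[1])**2])
    return -(grad_obj + grad_cost / (k * slack))

r0 = np.array([0.4, 0.4])                  # strictly feasible starting point
for k in (10.0, 100.0, 1000.0):            # larger k weakens the barrier
    sol = solve_ivp(rhs, (0.0, 50.0), r0, args=(k,), method="LSODA")
    print(k, sol.y[:, -1], cost(sol.y[:, -1]))   # cost approaches the budget
```

Sweeping the penalty factor k, as the loop does, corresponds to the paper's computation of optimal allocations under different penalty factors.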

8.
A class of generalized nonlinear neural network models based on exact penalty functions
For general nonlinear optimization problems, a quadratic nonlinear penalty function is defined and, under certain conditions, an exact penalty theorem for the corresponding penalized problem is proved. On this basis a generalized nonlinear neural network model is introduced, the connection between the equilibrium points of the network and its energy function is established, and the equilibrium points are shown to converge, under certain conditions, to optimal solutions of the original problem. This neural network model is a useful tool for solving many optimization problems.

9.
A neural network model is proposed for a class of nonsmooth nonconvex optimization problems with equality and inequality constraints. It is proved that when the objective function is bounded below, the solution trajectory of the network reaches the feasible region in finite time. Moreover, the equilibrium set of the network coincides with the critical point set of the optimization problem, and the network ultimately converges to that set. Unlike traditional penalty-function-based neural network models, the proposed model requires no computation of a penalty factor. Simulation experiments verify its effectiveness.

10.
An adjustable entropy function method for support vector regression
Based on the KKT complementarity conditions of optimization theory, an unconstrained nondifferentiable optimization model for support vector regression is established, and an effective smoothing approximation, the adjustable entropy function method, is given. The method approaches the optimal solution without requiring a very large parameter value, and thus avoids the numerical overflow that the ordinary entropy function method suffers when its parameter is made large to approximate the exact solution; this offers a new route to solving support vector regression. Numerical results show that the adjustable entropy function method improves the regression performance and efficiency of the support vector machine.
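The overflow issue the abstract alludes to can be seen in the plain entropy (log-sum-exp) smoothing of the max function; the shifted evaluation below is the standard safeguard and illustrates the issue only, not the paper's adjustable variant:

```python
import numpy as np

# Entropy (log-sum-exp) smoothing of the max function:
#   max_i f_i  ~=  (1/p) * log(sum_i exp(p * f_i)),  error <= log(n) / p.

def entropy_max_naive(f, p):
    return np.log(np.sum(np.exp(p * f))) / p        # exp overflows for large p

def entropy_max_shifted(f, p):
    m = np.max(f)                                   # standard safeguard
    return m + np.log(np.sum(np.exp(p * (f - m)))) / p

f = np.array([3.0, 1.0, 2.5])
for p in (10.0, 100.0, 10000.0):
    print(p, entropy_max_naive(f, p), entropy_max_shifted(f, p))
# The naive form overflows to inf once p*max(f) exceeds ~709 (p ~ 237 here);
# the shifted form stays finite and approaches max(f) = 3 as p grows.
```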

11.
A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems constructed from a parametric family of exact (nondifferentiable) penalty functions. It is proved that, for a given linear programming problem and sufficiently large penalty parameters, any trajectory of the neural network converges in finite time to its solution set. For the analysis, Lyapunov-type theorems are developed for finite-time convergence of nonsmooth sliding-mode dynamic systems to invariant sets. The results are illustrated by numerical simulation examples.
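A minimal Euler discretization of such dynamics, on an assumed toy LP (not one of the paper's examples), shows the finite-time sliding behavior:

```python
import numpy as np

# Illustrative exact-penalty sliding-mode dynamics for the LP
#   min c^T x  s.t.  A x <= b:
#   dx/dt = -c - K * A^T s(x),   s_i(x) = 1 if (A x - b)_i > 0 else 0.

c = np.array([1.0, 1.0])
A = np.array([[-1.0,  0.0],     # -x0        <= 0   (x0 >= 0)
              [ 0.0, -1.0],     #      -x1   <= 0   (x1 >= 0)
              [-1.0, -1.0]])    # -x0 - x1   <= -1  (x0 + x1 >= 1)
b = np.array([0.0, 0.0, -1.0])

def lp_flow(x0, K=10.0, dt=1e-3, steps=5000):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        s = (A @ x - b > 0.0).astype(float)
        x = x - dt * (c + K * A.T @ s)
    return x

x = lp_flow([3.0, -1.0])
print(x, c @ x)   # chatters onto the optimal face x0 + x1 = 1, value ~= 1
```

Once K exceeds the optimal multipliers, the trajectory reaches the solution set in finite time and then slides along it, with chatter of order dt*K around the exact solution set.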

12.
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) so that E(x, y) is convex and differentiable and the resulting network is more efficient. The network incorporates all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with a unique solution or infinitely many solutions, it is rigorously proved that, for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from existing networks that use the penalty method or the Lagrange method, and the inequality constraints are properly handled. Simulation results show that the proposed neural network is feasible and efficient.
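The flavor of such energy-function-based dynamics can be sketched with a standard primal-dual projection flow for a convex QP; this generic flow stands in for the paper's specific E(x, y), which the abstract does not spell out, and the problem data are assumptions:

```python
import numpy as np

# A standard primal-dual projection flow for the convex QP
#   min 0.5 x^T Q x + q^T x   s.t.   A x <= b:
#   dx/dt = -(Q x + q + A^T y)
#   dy/dt = max(0, y + A x - b) - y     # keeps the multipliers y >= 0

Q = np.eye(2)
q = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])              # constraint x0 + x1 <= 1
b = np.array([1.0])

def qp_flow(x0, y0, dt=1e-2, steps=20000):
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(steps):
        dx = -(Q @ x + q + A.T @ y)
        dy = np.maximum(0.0, y + A @ x - b) - y
        x, y = x + dt * dx, y + dt * dy
    return x, y

print(qp_flow([5.0, -3.0], [0.0]))      # primal ~ (0, 1), dual ~ (1,)
```

At an equilibrium, dx = dy = 0 reproduces exactly the KKT conditions of the QP, so primal and dual solutions are obtained simultaneously.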

13.
Neural network methods for solving the generalized eigenvalue problem are studied, and a continuous-time feedback network model for this problem is given. Using the LaSalle invariance principle, the quasi-global convergence of the network is analyzed and proved, which guarantees that the network solves the generalized eigenvalue problem exactly. The network also overcomes several basic defects of existing penalty-function-based neural networks for eigenvalue problems: first, the solution obtained by a penalty-based model may not be a true solution, and may not even be feasible; second, such models share a parameter that must be tuned, with no guiding criterion for choosing it; third, their stability cannot be guaranteed. The proposed model resolves these problems and has the desirable property that if the initial point is chosen in the feasible solution set, the trajectory remains feasible forever and converges to a generalized eigenvector. Finally, numerical simulations demonstrate the reliable performance of the proposed network and further confirm that it solves the generalized eigenvalue problem well.
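A hedged sketch of a continuous-time network for this problem (not the paper's model) is the negative gradient flow of the generalized Rayleigh quotient; the matrices below are illustrative assumptions:

```python
import numpy as np

# Negative gradient flow of r(x) = (x^T A x) / (x^T B x) for the generalized
# eigenproblem A x = lambda * B x (A symmetric, B symmetric positive definite):
#   dx/dt = -(A x - r(x) * B x).
# r(x) decreases monotonically along trajectories, dx/dt is orthogonal to x
# (so ||x|| is conserved by the flow), and the flow generically converges to
# an eigenvector of the smallest generalized eigenvalue.

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def rayleigh(x):
    return (x @ A @ x) / (x @ B @ x)

x = np.array([1.0, 0.3])                # arbitrary nonzero starting point
dt = 1e-2
for _ in range(20000):                  # forward-Euler integration
    x = x - dt * (A @ x - rayleigh(x) * (B @ x))

print(rayleigh(x))                      # ~= smallest eigenvalue ~ 0.7753
print(np.linalg.norm(A @ x - rayleigh(x) * (B @ x)))   # residual ~ 0
```

Note that this flow needs no penalty parameter at all, which is the defect of penalty-based eigen-networks that the abstract criticizes.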

14.
In this paper, a time-varying two-phase (TVTP) optimization neural network is proposed, based on the two-phase neural network and the time-varying programming neural network. The proposed TVTP algorithm gives exact feasible solutions with a finite penalty parameter when the problem is a constrained time-varying optimization problem. It can be applied to system identification and control tasks in which the learning of the neural network is subject to constraints on the weights. To demonstrate its effectiveness and applicability, the proposed algorithm is applied to the learning of a neo-fuzzy neuron model.

15.
A neural network model for solving nonlinear programming problems with mixed constraints
陶卿, 任富兴, 孙德敏. 《软件学报》, 2002, 13(2): 304-310
By carefully constructing a Liapunov function, a continuous neural network model with large-range convergence is proposed for optimization problems. It has good functionality and performance and can solve nonlinear programming problems with both equality and inequality constraints. The model generalizes the Newton steepest-descent method to constrained problems and effectively improves solution accuracy. Even for positive definite quadratic programming, its structure is simpler than that of existing models.

16.
In this paper, linear and quadratic programming problems are solved using a novel recurrent artificial neural network. The new model is simpler and converges very fast to the exact primal and dual solutions simultaneously. The model is based on a nonlinear dynamical system and admits arbitrary initial conditions. To keep the model economical in hardware, analog multipliers are avoided. The dynamical system is a time-dependent system of equations whose right-hand side is the gradient of a specific Lyapunov energy function. A block diagram of the proposed neural network model is given. A fourth-order Runge–Kutta method with controlled step size is used to solve the problem numerically. Global convergence of the new model is proved, both theoretically and numerically. Numerical simulations show the fast convergence of the new model for problems with a unique solution or infinitely many solutions. The model converges to the exact solution independent of the choice of starting point, whether inside, outside, or on the boundary of the feasible region.

17.
Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Differing from existing neural network optimization models, no dual variables, penalty parameters, or Lagrange multipliers are involved in the proposed network. It has the least number of state variables and is very simple in structure. In particular, the proposed network has better asymptotic stability. For an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the new idea of a feedback network may be used to solve other optimization problems. Feasibility and efficiency are also substantiated by simulation examples.

18.
This paper presents a gradient neural network model for solving convex nonlinear programming (CNP) problems. The main idea is to convert the CNP problem into an equivalent unconstrained minimization problem whose objective is an energy function. A gradient model is then defined directly from the derivatives of the energy function. It is shown that the proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem. It is also found that a larger scaling factor leads to a better convergence rate of the trajectory. The validity and transient behavior of the neural network are demonstrated by using various examples.
