Similar Articles
19 similar articles found
1.
A neural-network method based on a new smooth exact penalty function is proposed for linear programming problems with inequality constraints in which both the objective function and the constraints contain parameters. The error function is introduced to construct an approximation of the unit step function, yielding a smooth penalty function that approximates the L1 exact penalty function more closely, and its basic properties are discussed. A neural network model for solving the parametric linear programming problem is then built from this smooth exact penalty function, its stability and convergence are proved, and detailed algorithmic steps are given. Numerical simulations verify that the proposed method enjoys a small penalty factor, a simple structure, and high computational accuracy.
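A minimal sketch of the idea, not the paper's exact network (the toy LP data c, A, b, the smoothing width eps, and the penalty factor sigma are all assumptions): the unit step is approximated with the error function, the resulting smooth L1-type exact penalty is minimized by a gradient flow, and the flow is integrated with Euler steps.

```python
import numpy as np
from scipy.special import erf

# Toy LP (an assumption): min c^T x  s.t.  A x <= b, i.e. x >= 0, x1 + x2 <= 4
c = np.array([1.0, 2.0])
A = np.array([[-1.0, 0.0],
              [0.0, -1.0],
              [1.0, 1.0]])
b = np.array([0.0, 0.0, 4.0])

def smooth_step(t, eps=1e-2):
    # erf-based smooth approximation of the unit step function 1{t > 0}
    return 0.5 * (1.0 + erf(t / eps))

def penalty_grad(x, sigma=10.0):
    # gradient of the smoothed exact penalty c^T x + sigma * sum_i max(0, A_i x - b_i)
    return c + sigma * A.T @ smooth_step(A @ x - b)

x = np.array([3.0, 3.0])                 # infeasible start
for _ in range(40000):
    x -= 5e-4 * penalty_grad(x)          # Euler step of the flow dx/dt = -grad
print(x)  # approaches the LP minimizer (0, 0) up to the smoothing error
```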

2.
A Class of Generalized Nonlinear Neural Network Models Based on Exact Penalty Functions (cited 3 times: 0 self-citations, 3 by others)
A quadratic nonlinear penalty function is defined for general nonlinear optimization problems, and an exact-penalty theorem for the corresponding penalized problem is proved under certain conditions. On this basis, a generalized nonlinear neural network model is introduced, the relationship between the network's equilibrium points and its energy function is established, and under certain conditions the equilibrium points are shown to converge to an optimal solution of the original problem. This neural network model is useful for solving many optimization problems.

3.
To handle nonsmooth objective functions and overcome the drawbacks of a fixed penalty term, a differential-inclusion neural network model is established using the theory of Clarke's generalized gradient and the idea of the Lagrange multiplier method. The model adopts a penalty-function approach and effectively avoids the defects of a fixed penalty term. It is proved that the network has a global solution and converges to the critical-point set of the original problem; for convex problems, the equilibrium point to which the network converges is an optimal solution. Simulation experiments verify the correctness of the theoretical results.

4.
Global Convergence Analysis of a Projection-Type Neural Network Algorithm (cited 3 times: 0 self-citations, 3 by others)
Projection-type neural networks naturally guarantee feasibility of the solution and have few adjustable parameters, low-dimensional search directions, and a simple model structure, and have therefore attracted wide attention. A prerequisite for using a neural network to solve optimization problems is global convergence, which for this model had previously been proved only for strictly convex quadratic programs with bound constraints. Using ordinary-differential-equation theory and LaSalle's invariance principle, and by constructing a Lyapunov function, this paper proves global convergence of the network for general convex programs and extends the constraint region to an arbitrary closed convex set. These results lay a foundation for applying this class of networks and widen their range of application. Exponential convergence under weaker conditions is also discussed. Finally, a set of examples shows that the network is computationally feasible and effective.
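A minimal sketch of such projection dynamics on a toy box-constrained quadratic program (the data Q, p and the box bounds are illustrative assumptions, not from the paper): the network evolves as dx/dt = P(x − α∇f(x)) − x, and an equilibrium satisfies x = P(x − α∇f(x)), which is exactly the variational-inequality characterization of a KKT point.

```python
import numpy as np

# Toy box-constrained convex QP: min 0.5 x^T Q x + p^T x over lo <= x <= hi.
# Q, p and the box are illustrative assumptions.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
p = np.array([-1.0, -2.0])
lo, hi = np.zeros(2), np.full(2, 1.5)

def grad_f(x):
    return Q @ x + p                       # gradient of the quadratic objective

def project(x):
    return np.clip(x, lo, hi)              # projection onto the closed convex set

x, alpha, dt = np.array([1.5, 0.0]), 0.5, 0.05
for _ in range(2000):
    # projection-network dynamics: dx/dt = P(x - alpha * grad f(x)) - x
    x += dt * (project(x - alpha * grad_f(x)) - x)
print(x)  # -> approx (0.125, 1.5), the KKT point of this toy QP
```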

5.
A Recurrent Neural Network Method for Nonsmooth Pseudoconvex Optimization Problems (cited 1 time: 0 self-citations, 1 by others)
For optimization problems whose objective is a nonsmooth pseudoconvex function subject to both equality and inequality constraints, a new single-layer recurrent neural network model containing no penalty operator is constructed based on penalty functions and the idea of differential inclusions. The model needs no penalty parameter computed in advance and converges well. It is proved that a global solution exists, that the state trajectory enters the feasible region of the original objective in finite time and never leaves it again, and that the state finally converges to an optimal solution of the objective. Simulation experiments confirm the theoretical results.

6.
A neural network model is proposed for a class of nonsmooth nonconvex optimization problems with equality and inequality constraints. It is proved that when the objective function is bounded below, the solution trajectory of the network converges to the feasible region in finite time. Moreover, the equilibrium-point set of the network coincides with the critical-point set of the optimization problem, and the network ultimately converges to that critical-point set. Unlike traditional penalty-function-based neural network models, the proposed model requires no penalty factor to be computed. Simulation experiments verify the effectiveness of the proposed model.

7.
A Simulated Annealing Algorithm for the Multi-Objective Emergency Facility Location Problem (cited 1 time: 0 self-citations, 1 by others)
Considering both cost and response time when siting emergency facilities, a model of the multi-objective emergency facility location problem is formulated. By introducing a penalty function, this multi-constraint problem is transformed into a simply constrained model that is easier to solve by computer, and a simulated annealing algorithm is designed for it, covering the choice of the initial solution, the control of the temperature parameter, the iteration strategy over feasible solutions, and the termination condition. Simulations demonstrate the effectiveness of the algorithm.
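A minimal sketch in the same spirit, with an entirely hypothetical toy instance (the demand points, candidate sites, costs, coverage radius R, weight w, and penalty factor rho are all assumptions): a coverage constraint is folded into the objective as a penalty, and a Metropolis acceptance rule with geometric cooling searches over k-subsets of candidate sites.

```python
import math, random

# Hypothetical toy instance: pick k sites among candidates to minimize
# fixed cost + weighted worst response time, with a penalty whenever a
# demand point lies beyond the coverage radius R.
random.seed(0)
demands = [(random.random(), random.random()) for _ in range(30)]
candidates = [(random.random(), random.random()) for _ in range(10)]
cost = [random.uniform(1.0, 3.0) for _ in candidates]
k, R, w, rho = 3, 0.5, 5.0, 100.0        # sites, radius, time weight, penalty factor

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def objective(sites):
    t = [min(dist(d, candidates[s]) for s in sites) for d in demands]
    pen = rho * sum(max(0.0, ti - R) for ti in t)   # penalized coverage constraint
    return sum(cost[s] for s in sites) + w * max(t) + pen

cur = random.sample(range(len(candidates)), k)
best, T = list(cur), 1.0
while T > 1e-3:
    nxt = list(cur)                                  # neighbor: swap one site
    nxt[random.randrange(k)] = random.choice(
        [i for i in range(len(candidates)) if i not in cur])
    d = objective(nxt) - objective(cur)
    if d < 0 or random.random() < math.exp(-d / T):  # Metropolis acceptance
        cur = nxt
        if objective(cur) < objective(best):
            best = list(cur)
    T *= 0.995                                       # geometric cooling schedule
print(best, objective(best))
```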

8.
Conventional recurrent neural network methods use asymptotically convergent network models, whose error functions converge to zero only asymptotically; in theory, infinitely long computation time is needed to obtain the exact solution of the problem being solved. This paper proposes a terminal recurrent neural network model with a novel form and a finite-time convergence property, so that the computation converges quickly and accurately when solving time-varying matrix computation problems. Another feature of the network is that the right-hand side of its dynamic equation takes bounded values, which makes it easy to implement. First, the drawbacks of asymptotically convergent models for time-varying computation are analyzed to motivate the terminal network model. The dynamic equation of the terminal network is then given, and an explicit expression for its convergence time is derived. For time-varying matrix inversion and pseudo-inversion, an error function is defined, and a terminal recurrent neural network is constructed from it so that the computation converges to the exact solution in finite time. After converting the trajectory-planning task of a redundant manipulator from an arbitrary initial configuration into a quadratic program, the proposed network is used to compute joint-angle trajectories under which the end effector tracks a closed path and the joint angles return exactly to their initial values, achieving repeatable motion. MATLAB/Simulink simulations of the time-varying matrix computations and the robot trajectory-planning task show that, compared with the asymptotic model, the terminal network model converges faster and with significantly higher accuracy. The solution of several different time-varying computation problems illustrates the application background of the proposed network.
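A minimal sketch of a finite-time network of this kind for time-varying matrix inversion, with an illustrative A(t) and parameter values that are assumptions rather than the paper's: defining the error E(t) = A(t)X(t) − I and imposing dE/dt = −γφ(E) with the sign-power activation φ(e) = |e|^r sign(e), 0 < r < 1, drives E to zero in finite time; the implicit dynamics are solved for X′ at each Euler step.

```python
import numpy as np

# Illustrative time-varying matrix (an assumption; any smooth nonsingular A(t) works)
def A(t):
    return np.array([[2 + np.sin(t), 0.3],
                     [0.3, 2 + np.cos(t)]])

def dA(t, h=1e-6):
    return (A(t + h) - A(t - h)) / (2 * h)   # numerical time derivative of A(t)

def phi(E, r=0.5):
    return np.sign(E) * np.abs(E) ** r       # sign-power activation, 0 < r < 1

gamma, dt = 10.0, 1e-4
t, X = 0.0, np.eye(2)                        # deliberately inexact X(0)
for _ in range(50000):
    E = A(t) @ X - np.eye(2)                 # error function E(t) = A(t) X(t) - I
    # imposing dE/dt = -gamma * phi(E) gives  A X' = -gamma * phi(E) - A' X
    X += dt * np.linalg.solve(A(t), -gamma * phi(E) - dA(t) @ X)
    t += dt
print(np.linalg.norm(A(t) @ X - np.eye(2)))  # residual driven near zero (~1e-6 here)
```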

9.
孟志青, 徐蕾艳, 蒋敏, 沈瑞. 计算机科学 (Computer Science), 2017, 44(Z6): 97-98, 132
An equivalent reformulation of the compressed sensing optimization problem is first defined, and it is proved that an optimal solution of this equivalent problem is also an optimal solution of the compressed sensing problem. A smooth objective penalty function of order at least two is then defined for it, an iterative algorithm is given, and a convergence theorem for the algorithm is proved. The theorem shows that an approximate optimal solution of the compressed sensing problem can be obtained by minimizing the objective penalty function, which provides a new tool for studying and solving practical compressed sensing problems.
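A minimal sketch of the general approach, not the paper's specific penalty: min ||x||_1 subject to Ax = b is approximated by replacing each |x_i| with the twice-differentiable surrogate sqrt(x_i^2 + eps^2) and penalizing the residual quadratically; the random instance and all parameter values below are assumptions.

```python
import numpy as np

# Random compressed-sensing instance (sizes and values are assumptions)
rng = np.random.default_rng(0)
n, m = 50, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = [1.0, -2.0, 0.5]
b = A @ x_true

def grad(x, eps, rho):
    # gradient of  sum_i sqrt(x_i^2 + eps^2)  +  (rho/2) * ||Ax - b||^2
    return x / np.sqrt(x ** 2 + eps ** 2) + rho * A.T @ (A @ x - b)

x, rho = np.zeros(n), 20.0
for eps in [1e-1, 1e-2]:                 # gradually sharpen the smoothing
    for _ in range(30000):
        x -= 2e-4 * grad(x, eps, rho)
print("true support     :", np.flatnonzero(x_true))
print("recovered support:", np.flatnonzero(np.abs(x) > 0.2))
```

With these illustrative parameters the recovered support should match the true one, though the quality of the approximation depends on eps, rho, and the step size.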

10.
For nonlinear programming problems whose constraints contain parameters, a new computational method based on an L1 exact-penalty-function neural network is proposed. The penalty factor of the method is a finite real number with a small value, which facilitates hardware implementation. Building on improvements to existing network models, the dynamic equation of the neural network is constructed from the steepest-descent principle, and concrete steps for applying the proposed network to optimization are given. Finally, numerical simulations show that the method converges to the optimal solution of the original program faster and more accurately.
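A minimal sketch of L1 exact-penalty steepest-descent dynamics on a toy nonlinear program (the objective, the constraint, and the finite penalty factor sigma are illustrative assumptions): because sigma exceeds the optimal Lagrange multiplier here, the penalized minimizer coincides with the constrained one, and a subgradient flow settles near it.

```python
import numpy as np

# Toy NLP (an assumption): min (x1-2)^2 + (x2-1)^2  s.t.  x1^2 + x2^2 <= 1
def f_grad(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def g(x):
    return x[0] ** 2 + x[1] ** 2 - 1.0       # g(x) <= 0 is the unit disk

sigma, dt = 5.0, 1e-3                        # finite, small penalty factor (> multiplier)
x = np.array([2.0, 2.0])                     # infeasible start
for _ in range(20000):
    # subgradient of the L1 exact penalty  P(x) = f(x) + sigma * max(0, g(x))
    sub = f_grad(x) + (sigma * 2.0 * x if g(x) > 0 else 0.0)
    x -= dt * sub                            # steepest-descent dynamics
print(x, g(x))  # chatters in a small neighborhood of approx (0.894, 0.447)
```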

11.
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) such that E(x, y) is convex and differentiable, and the resulting network is more efficient. This network involves all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with unique or infinitely many solutions, we prove rigorously that, for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from existing networks that use the penalty method or the Lagrange method, and the inequality constraints are properly handled. The simulation results show that the proposed neural network is feasible and efficient.

12.
For nonsmooth pseudoconvex optimization problems with inequality constraints, a new recurrent neural network model based on differential-inclusion theory is proposed. A penalty function that varies with the state vector is designed from the objective function and the constraints, so that the state vector always moves toward the feasible region, is guaranteed to enter it in finite time, and finally converges to an optimal solution of the original optimization problem. Two simulation experiments verify the effectiveness and accuracy of the network. Compared with existing neural networks, it is a new model with a simple structure that needs no exact penalty factor to be computed and, most importantly, does not require the feasible region to be bounded.

13.
A neural network for solving convex nonlinear programming problems is proposed in this paper. The distinguishing features of the proposed network are that the primal and dual problems can be solved simultaneously, all necessary and sufficient optimality conditions are incorporated, and no penalty parameter is involved. Based on Lyapunov, LaSalle and set stability theories, we rigorously prove the important theoretical result that, for an arbitrary initial point, the trajectory of the proposed network converges to the set of its equilibrium points, regardless of whether the convex nonlinear programming problem has a unique or infinitely many optimal solutions. Numerical simulation results also show that the proposed network is feasible and efficient. In addition, a general method for transforming nonlinear programming problems into unconstrained problems is proposed.

14.
To find optimal solutions of constrained optimization problems whose objective function is non-Lipschitz and whose feasible region is defined by linear or nonlinear inequality constraints, a smoothed neural network model is constructed. The model combines a smoothing approximation technique, which converts the nonsmooth objective into a corresponding smooth function, with the penalty-function method. Detailed theoretical analysis proves that, whether the initial point lies inside or outside the feasible region, the solutions of the smoothed network are uniformly bounded and globally defined, and that every accumulation point of a solution of the smoothed network is a stationary point of the original optimization problem. Several simple simulation experiments confirm the correctness of the theory.
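A minimal sketch of the smoothing approximation itself, with a hypothetical non-Lipschitz term (the surrogate below is a common choice and an assumption; the paper's specific smoothing may differ): |x|^(1/2) is replaced by (x^2 + mu^2)^(1/4), which is smooth with bounded gradient for mu > 0 and converges to |x|^(1/2) as mu → 0.

```python
import numpy as np

def nonsmooth(x):
    return np.sqrt(np.abs(x))            # non-Lipschitz at x = 0

def smoothed(x, mu):
    return (x ** 2 + mu ** 2) ** 0.25    # smooth surrogate for mu > 0

# the maximum error occurs at x = 0 and equals sqrt(mu), shrinking with mu
xs = np.linspace(-1.0, 1.0, 5)
for mu in [0.1, 0.01, 0.001]:
    err = np.max(np.abs(smoothed(xs, mu) - nonsmooth(xs)))
    print(f"mu = {mu}: max approximation error = {err:.4f}")
```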

15.
Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Differing from existing neural network optimization models, no dual variables, penalty parameters, or Lagrange multipliers are involved in the proposed network. It has the least number of state variables and is very simple in structure. In particular, the proposed network has better asymptotic stability. For an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the new idea of a feedback network may be used to solve other optimization problems. Feasibility and efficiency are also substantiated by simulation examples.

16.
Xia Y, Kamel MS. Neural Computation, 2008, 20(3): 844-872
The constrained L1 estimation is an attractive alternative to both the unconstrained L1 estimation and the least-squares estimation. In this letter, we propose a cooperative recurrent neural network (CRNN) for solving L1 estimation problems with general linear constraints. The proposed CRNN model combines four individual neural network models automatically and is suitable for parallel implementation. As a special case, the proposed CRNN includes two existing neural networks for solving unconstrained and constrained L1 estimation problems, respectively. Unlike existing neural networks with penalty parameters for solving the constrained L1 estimation problem, the proposed CRNN is guaranteed to converge globally to the exact optimal solution without any additional condition. Compared with conventional numerical algorithms, the proposed CRNN has a low computational complexity and can deal with L1 estimation problems with degeneracy. Several applied examples show that the proposed CRNN can obtain more accurate estimates than several existing algorithms.

17.
We present a general methodology for designing optimization neural networks. We prove that the neural networks constructed by using the proposed method are guaranteed to be globally convergent to solutions of problems with bounded or unbounded solution sets, in contrast with the gradient methods whose convergence is not guaranteed. We show that the proposed method contains both the gradient methods and nongradient methods employed in existing optimization neural networks as special cases. Based on the theoretical results of the proposed method, we study the convergence and stability of general gradient models in the case of unisolated solutions. Using the proposed method, we derive some new neural network models for a very large class of optimization problems, in which the equilibrium points correspond to exact solutions and there is no variable parameter. Finally, some numerical examples show the effectiveness of the method.

18.
In this paper, a time-varying two-phase (TVTP) optimization neural network is proposed based on the two-phase neural network and the time-varying programming neural network. The proposed TVTP algorithm gives exact feasible solutions with a finite penalty parameter when the problem is a constrained time-varying optimization. It can be applied to system identification and control in which there are constraints on the weights during the learning of the neural network. To demonstrate its effectiveness and applicability, the proposed algorithm is applied to the learning of a neo-fuzzy neuron model.

19.
A variety of real-world problems can be formulated as continuous optimization problems with variable constraints. It is well known, however, that it is difficult to develop a unified method for obtaining their feasible solutions. We have recognized that the recent work of solving the traveling salesman problem (TSP) by the Hopfield model explores an innovative approach to them as well as to combinatorial optimization problems. The Hopfield model is generalized into the Cohen-Grossberg model (CGM), for which a specific Lyapunov function has been found. This paper thus extends the Hopfield method to the CGM in order to develop a unified method for solving continuous optimization problems with variable constraints. Specifically, we consider a certain class of continuous optimization problems with a constraint equation that includes the Hopfield version of the TSP as a particular member. We then theoretically develop a method that, from any given problem of that class, derives a network of an extended CGM to provide feasible solutions to it. The main idea for constructing that extended CGM lies in adding a synapse dynamical system that operates concurrently with the current unit dynamical system, so that the constraint equation is enforced to satisfaction at final states. This construction is also motivated by previous neuron models in biophysics and learning algorithms in neural networks.
