Similar Literature
 20 similar records found
1.
To solve structural optimization problems more effectively with the method of multipliers, two kinds of intermediate variables, reciprocal variables and mixed variables, are introduced into the method, yielding a multiplier method based on intermediate variables. In this new method, the construction and solution of the unconstrained subproblems based on the augmented Lagrangian function, as well as the iteration and convergence of the algorithm, are all carried out in terms of the intermediate variables; only the evaluation of the objective and constraint functions and their gradients is performed in the original variables. The new method is tested on two highly nonlinear engineering examples; the results show that it converges well and is faster than the ordinary method of multipliers.
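The multiplier (augmented Lagrangian) iteration this abstract builds on can be illustrated with a minimal sketch — the plain method of multipliers on a toy equality-constrained problem, without the paper's intermediate-variable transformation; the problem instance and all parameter values below are illustrative assumptions:

```python
import numpy as np

def f(x):                        # objective: x1^2 + x2^2
    return x[0] ** 2 + x[1] ** 2

def h(x):                        # equality constraint: x1 + x2 - 1 = 0
    return x[0] + x[1] - 1.0

def grad_L(x, lam, rho):         # gradient of the augmented Lagrangian
    return 2.0 * x + (lam + rho * h(x)) * np.ones(2)   # grad h = (1, 1)

def method_of_multipliers(rho=10.0, outer=20, inner=200, lr=0.01):
    x, lam = np.zeros(2), 0.0
    for _ in range(outer):
        for _ in range(inner):                 # approximately minimise L(., lam, rho)
            x = x - lr * grad_L(x, lam, rho)
        lam = lam + rho * h(x)                 # multiplier update on the residual
    return x, lam

x_opt, lam_opt = method_of_multipliers()
# x_opt is near (0.5, 0.5); the multiplier approaches -1
```

Each outer pass minimizes the augmented Lagrangian approximately and then corrects the multiplier by the constraint residual, which is the structure the paper re-expresses in intermediate variables.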

2.
A neuron model with both hysteresis and chaos is proposed, and a neural network built from it is used to solve optimization problems. Introducing self-feedback into the neuron gives it chaotic behavior, and replacing the activation function with a hysteretic function having ascending and descending branches brings hysteresis into the neuron and the network. Combined with a simulated-annealing mechanism, in the early stage of optimization the chaotic dynamics improve the network's ability to search the solution space ergodically, while the hysteresis alleviates the pseudo-saturation phenomenon to some extent and speeds up the search; in the final stage the network degenerates into an ordinary Hopfield-type network and converges to a local optimum by gradient descent. By constructing an energy function, problems such as feature-point matching in image recognition can be cast as optimization problems and solved with this network. Simulation results verify the effectiveness of the method.
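The annealed self-feedback idea can be sketched for a single neuron, in the spirit of transiently chaotic neuron models: a self-feedback term drives rich early dynamics and is decayed away so the unit settles into ordinary sigmoid behavior. The hysteretic activation is omitted here, and every parameter value is an illustrative assumption:

```python
import math

def sigmoid(u, eps=0.02):
    return 1.0 / (1.0 + math.exp(-u / eps))

def run_neuron(steps=500, k=0.9, z0=0.08, beta=0.01, I0=0.65, bias=0.2):
    """One neuron whose self-feedback strength z is annealed toward zero."""
    y, z, xs = 0.5, z0, []
    x = sigmoid(y)
    for _ in range(steps):
        y = k * y - z * (x - I0) + bias   # internal state with self-feedback term
        x = sigmoid(y)                    # steep sigmoid output
        z *= 1.0 - beta                   # anneal the chaos-inducing term
        xs.append(x)
    return xs, z

xs, z_final = run_neuron()
# outputs stay in [0, 1] and z decays to a negligible value
```

Once z has decayed, the update reduces to a standard Hopfield-style gradient relaxation, matching the abstract's description of the late optimization stage.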

3.
A fast online learning algorithm for recurrent neural networks   Cited by: 11 (self-citations: 0, others: 11)
Wei Wei, Acta Automatica Sinica, 1998, 24(5): 616-621
To overcome the slow convergence of the BP learning algorithm for recurrent neural networks, a new fast online recursive learning algorithm is proposed. A forgetting factor is introduced into the objective function, and by means of the maximum-likelihood estimation principle for nonlinear systems, the algorithm achieves real-time, fast learning of the weights of recurrent neural network models of dynamic nonlinear systems. Simulation results show that the algorithm converges much faster than the conventional recurrent BP learning algorithm.
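The forgetting-factor ingredient in recursive estimation can be sketched with plain recursive least squares for a scalar weight — this is not the paper's recurrent-network algorithm, just the standard RLS-with-forgetting update it builds on; the data and parameter values are assumptions:

```python
import numpy as np

def rls_fit(xs, ys, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor lam, scalar weight."""
    w, P = 0.0, delta
    for x, y in zip(xs, ys):
        k = P * x / (lam + x * P * x)   # gain
        e = y - w * x                   # a-priori prediction error
        w = w + k * e                   # weight update
        P = (P - k * x * P) / lam       # inverse-correlation update
    return w

rng = np.random.default_rng(0)
xs = rng.normal(size=200)
ys = 3.0 * xs                           # noiseless data, true weight = 3
w_hat = rls_fit(xs, ys)
```

The factor lam < 1 exponentially discounts old samples, which is what makes the recursion track time-varying dynamics online.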

4.
A two-stage model, consisting of a continuous nonlinear program and an integer nonlinear program, is built for the structural configuration of multi-stage variable-speed pumps. The integer nonlinear programming subproblem is solved by an outer-approximation algorithm. For the continuous nonlinear programming master problem, a coordinated feasible-region decomposition algorithm based on the corner-cutting method is proposed; the trap problem of the corner-cutting method is proved, and a criterion is established to exclude known trap regions. On this basis, a series of relaxed problems is constructed to obtain progressively tighter lower bounds on the original problem, finally converging to its global optimum. A three-stage variable-speed pump configuration example verifies the effectiveness of the algorithm, with comparisons against other algorithms.

5.
To deal with the small samples, nonlinearity, and difficult parameter tuning in transformer fault diagnosis, a diagnosis method based on an improved variable prediction model is proposed. Combining the variable prediction model with the cuckoo search algorithm addresses the small-sample and nonlinearity issues, but the combination suffers from slow late-stage convergence, poor stability, limited accuracy, and a tendency to fall into local minima. A mutation operation is therefore introduced into the position update of the cuckoo search to increase solution diversity, and a dynamic step size and dynamic discovery probability are introduced to…

6.
A Lagrangian relaxation algorithm for real-time no-wait HFS scheduling   Cited by: 5 (self-citations: 1, others: 4)
Xuan Hua, Tang Lixin, Control and Decision, 2006, 21(4): 376-380
The real-time no-wait hybrid flow shop (HFS) scheduling problem is studied. An integer programming model is formulated and solved by Lagrangian relaxation. In this framework the Lagrange multipliers are usually updated by the subgradient method, whose convergence slows down as the iterations proceed, so an improved bundle method is designed: previous subgradients are accumulated in a bundle to obtain a better direction for updating the multipliers. Simulation experiments show that, compared with the subgradient method, the designed bundle method not only converges faster within fewer iterations but also improves solution quality, with even more pronounced benefits on large-scale problems.
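The baseline this abstract improves upon — updating the Lagrange multipliers by a subgradient step on the dual — can be sketched on a tiny relaxed integer program (the bundle refinement is not shown, and the instance is an illustrative assumption):

```python
def dual_solve(lam, c=(2.0, 3.0)):
    """Minimise c.x + lam*(1 - x1 - x2) over x in {0,1}^2 (the relaxed subproblem)."""
    x = tuple(1 if ci - lam < 0 else 0 for ci in c)
    value = sum(ci * xi for ci, xi in zip(c, x)) + lam * (1 - sum(x))
    return x, value

def subgradient_ascent(iters=200, step0=1.0):
    """Maximise the dual of: min c.x  s.t.  x1 + x2 >= 1, x binary."""
    lam, best = 0.0, float("-inf")
    for k in range(1, iters + 1):
        x, val = dual_solve(lam)
        best = max(best, val)                    # best dual lower bound so far
        g = 1 - sum(x)                           # subgradient of the dual function
        lam = max(0.0, lam + (step0 / k) * g)    # diminishing step, keep lam >= 0
    return lam, best

lam_final, dual_bound = subgradient_ascent()
# the dual bound reaches the optimal primal cost of 2
```

The diminishing step size is what makes plain subgradient updates slow as iterations accumulate; a bundle method instead aggregates past subgradients to pick a better ascent direction.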

7.
To reduce the heavy computational burden of extreme learning machines in big-data settings, a multi-block relaxed alternating direction method of multipliers (ADMM) for regularized extreme learning machines is proposed, together with scalar implementations for the N-block and N/2-block cases. Partitioning the model gives the algorithm a highly parallel structure, and combining it with relaxation accelerates convergence. Necessary and sufficient conditions for convergence are established, and the optimal convergence rate and optimal parameters are derived. Simulations on benchmark datasets examine how the convergence rate varies with the number of blocks and compare the convergence speed and GPU speedup of different algorithms. Experiments show that the algorithm has low computational complexity and high parallelism.
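A minimal ADMM sketch in the spirit of this abstract, shown here on a standard lasso splitting rather than the paper's multi-block relaxed ELM formulation; the problem data and parameter values are assumptions:

```python
import numpy as np

def soft(v, k):                     # soft-thresholding: prox of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, alpha=0.01, rho=1.0, iters=300):
    """ADMM for (1/2)||Ax - b||^2 + alpha*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached for the x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))   # quadratic subproblem
        z = soft(x + u, alpha / rho)    # l1 proximal step
        u = u + x - z                   # scaled multiplier (dual) update
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))
x_true = np.array([2.0, 0.0, -1.5])
b = A @ x_true                          # noiseless observations
x_hat = lasso_admm(A, b)
```

The three alternating steps (smooth subproblem, proximal step, multiplier update) are the structure the paper parallelizes across blocks and accelerates with relaxation.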

8.
The Chan-Vese model is one of the more efficient image segmentation models, but traditional methods for solving it suffer from low computational efficiency, large memory footprint, and long running times on structurally complex images. To address these problems, two new fast segmentation methods, FADMM and ACPDM, are proposed. Based on a discrete binary labeling function, the two-phase segmentation model is converted into a convex optimization model, and ADMM and the dual method are improved by combining them with the FISTA and Chambolle-Pock algorithms. Following a variational approach, auxiliary variables and Lagrange multipliers are introduced and the iterations alternate until convergence to an extremum of the functional. Experimental results show that both methods converge more than twice as fast while preserving image region boundaries.
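The piecewise-constant data term of the Chan-Vese model can be sketched by alternating region means and pixel reassignment — this drops the length/curvature regularization and the FISTA/Chambolle-Pock acceleration the paper uses; the synthetic image is an assumption:

```python
import numpy as np

def two_phase_segment(img, iters=20):
    """Alternate region means and reassignment (Chan-Vese data term only)."""
    phi = img > img.mean()                        # initial binary label
    c1 = c2 = 0.0
    for _ in range(iters):
        c1 = img[phi].mean() if phi.any() else 0.0
        c2 = img[~phi].mean() if (~phi).any() else 0.0
        phi = (img - c1) ** 2 < (img - c2) ** 2   # assign each pixel to the closer mean
    return phi, c1, c2

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                             # bright square on dark background
mask, c1, c2 = two_phase_segment(img)
```

In the full model, a length penalty on the region boundary is added to this data term, and that regularized functional is what FADMM and ACPDM minimize.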

9.
To optimize the control parameters of the individual combustion subsystems in a real multi-furnace, multivariable combustion system, a genetic single-neuron control algorithm with an improved fitness function is proposed. The algorithm overcomes the slow convergence and local-minimum problems of plain neural-network methods: it combines the global search capability of an improved genetic algorithm with the strong nonlinear approximation ability of neural networks, using the improved genetic algorithm together with single-neuron control to optimize the parameters of a class of nonlinear systems. Simulations and field results verify that the method is feasible.

10.
Rough set theory is applied to the design of fuzzy neural networks. Introducing rough sets not only removes redundant neurons from the input layer but also determines the number of hidden-layer neurons, giving the fuzzy neural network more accurate approximation and convergence and higher precision. Finally, the approach is applied to the stock market, where it achieves good results in predicting buying and selling opportunities.

11.
Applied Soft Computing, 2007, 7(3): 783-790
An approach based on an augmented Lagrange programming neural network (ALPNN) is proposed for the optimal operation of multi-reservoir network control problems. The main objective is to find the optimal hourly water releases from each hydro plant in the interconnected hydro system that minimize the energy deficit and distribute any deficit uniformly across time intervals. The interdependence between water discharge rate variables is very apparent in multi-reservoir network control problems, and the proposed method accounts for the concurrent interaction among all of these variables. The approach is based on Lagrange multiplier theory and searches for solutions satisfying the necessary conditions of optimality in the state space; the network equilibrium point satisfies the Kuhn-Tucker conditions and corresponds to the Lagrange solution of the problem. The technique has been applied to a standard 10-reservoir interconnected network in which each hydro power plant has a linear generation model and discretized time-varying river inflows. Results obtained with this approach are compared with those of the conventional discrete maximum principle method; the comparison shows that the proposed method is very effective and satisfies the constraints better.

12.
Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Differing from existing neural network optimization models, no dual variables, penalty parameters, or Lagrange multipliers are involved in the proposed network. It has the least number of state variables and is very simple in structure. In particular, the proposed network has better asymptotic stability. For an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the new idea of a feedback network may be used to solve other optimization problems. Feasibility and efficiency are also substantiated by simulation examples.
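Feedback network dynamics of this kind can be sketched as an Euler-discretized projected gradient flow on a toy box-constrained convex program — a generic projection-type model, not necessarily this paper's exact network; the instance and step size are assumptions:

```python
import numpy as np

def grad_f(x):                      # f(x) = (x1 - 2)^2 + (x2 + 1)^2
    return 2.0 * (x - np.array([2.0, -1.0]))

def project(x, lo=0.0, hi=1.0):     # projection onto the feasible box [0, 1]^2
    return np.clip(x, lo, hi)

def neural_flow(x0, dt=0.01, steps=2000):
    """Euler steps of the flow dx/dt = P(x - grad f(x)) - x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (project(x - grad_f(x)) - x)
    return x

x_star = neural_flow([0.5, 0.5])
# converges to the constrained optimum (1, 0)
```

An equilibrium of this flow satisfies x = P(x - grad f(x)), which characterizes optimality for the box-constrained problem without introducing dual variables or penalty parameters.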

13.
Neural network for solving extended linear programming problems   Cited by: 5 (self-citations: 0, others: 5)
A neural network for solving extended linear programming problems is presented and is shown to be globally convergent to exact solutions. The proposed neural network uses only simple hardware in which no analog multiplier for variables is required, and has no parameter-tuning problem. Finally, an application of the neural network to the L1-norm minimization problem is given.

14.
In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.

15.
In this paper we discuss a neural network approach to the allocation problem with capacity constraints. The problem can be formulated as a zero-one integer program, which we transform into an equivalent nonlinear program by replacing the zero-one constraints with quadratic concave equality constraints. We propose two neural network structures, one based on the penalty function method and one based on the augmented Lagrangian multiplier method, and compare them by theoretical analysis and numerical simulation. We show that the penalty-function-based network is ill-suited to combinatorial optimization because it faces a dilemma between terminating at an infeasible solution and getting stuck at an arbitrary feasible one, whereas the augmented-Lagrangian-based network alleviates this difficulty to some degree.

16.
Watta and Hassoun (1996) proposed a coupled gradient neural network for mixed integer programming, in which continuous neurons were used to represent discrete variables. For the larger temporal problem they attempted, many of the solutions found were infeasible. This paper proposes an augmented Hopfield network similar to the coupled gradient network of Watta and Hassoun, but using truly discrete neurons. It is shown that this network can be applied to mixed integer programming, and results illustrate that feasible solutions are now obtained for the larger temporal problem.

17.
This paper presents a gradient neural network model for solving convex nonlinear programming (CNP) problems. The main idea is to convert the CNP problem into an equivalent unconstrained minimization problem with objective energy function. A gradient model is then defined directly using the derivatives of the energy function. It is also shown that the proposed neural network is stable in the sense of Lyapunov and can converge to an exact optimal solution of the original problem. It is also found that a larger scaling factor leads to a better convergence rate of the trajectory. The validity and transient behavior of the neural network are demonstrated by using various examples.

18.
This paper proposes a new differential dynamic programming algorithm for solving discrete time optimal control problems with equality and inequality constraints on both control and state variables and proves its convergence. The present algorithm is different from the differential dynamic programming algorithms developed in [10]-[15], which can hardly solve optimal control problems with inequality constraints on state variables and whose convergence has not been proved. Composed of iterative methods for solving systems of nonlinear equations, it is based upon Kuhn-Tucker conditions for the recurrence relations of dynamic programming. Numerical examples show the efficiency of the present algorithm.
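The unconstrained backbone of such dynamic-programming recursions can be sketched as a finite-horizon discrete LQR backward Riccati pass; the paper's handling of state inequality constraints via Kuhn-Tucker conditions is not shown, and the double-integrator instance is an assumption:

```python
import numpy as np

def lqr_backward(A, B, Q, R, QT, T):
    """Finite-horizon discrete LQR gains via the backward Riccati recursion."""
    P, Ks = QT, []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                       # Riccati step
        Ks.append(K)
    return list(reversed(Ks)), P                            # gains for t = 0..T-1

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # discrete double integrator
B = np.array([[0.0], [1.0]])
Q, R, QT = np.eye(2), np.array([[1.0]]), np.eye(2)
Ks, P0 = lqr_backward(A, B, Q, R, QT, T=50)

x = np.array([[5.0], [0.0]])             # roll the closed loop forward
for K in Ks:
    x = (A - B @ K) @ x
```

The backward pass propagates the cost-to-go, and the forward rollout applies the time-varying gains; constrained variants replace the unconstrained minimization at each stage with a Kuhn-Tucker system.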

19.
A neural network model for nonlinear programming with mixed constraints   Cited by: 1 (self-citations: 0, others: 1)
Tao Qing, Ren Fuxing, Sun Demin, Journal of Software, 2002, 13(2): 304-310
By constructing a suitable Lyapunov function, a continuous neural network model with large-range convergence is proposed for solving optimization problems. It performs well and can solve nonlinear programming problems with both equality and inequality constraints. The model generalizes Newton's steepest-descent method to constrained problems and effectively improves solution accuracy; even for positive-definite quadratic programming problems, it is structurally simpler than existing models.

20.
Intrinsically, Lagrange multipliers in nonlinear programming algorithms play a regulating role in the search for the optimal solution of constrained optimization problems, so they can be regarded as the counterpart of control input variables in control systems. From this perspective, it is demonstrated that constructing nonlinear programming neural networks can be formulated as solving servomechanism problems with an unknown equilibrium point that coincides with the optimal solution. In this paper, under the second-order sufficiency assumption for nonlinear programming problems, a dynamic output feedback control law analogous to that of nonlinear servomechanism problems is proposed to stabilize the corresponding nonlinear programming neural networks, and asymptotic stability is shown by Lyapunov's first approximation principle.
