Similar Documents
20 similar documents found.
1.
Stochastic optimization algorithms like genetic algorithms (GAs) and particle swarm optimization (PSO) algorithms perform global optimization but waste computational effort by doing a random search. On the other hand, deterministic algorithms like gradient descent converge rapidly but may get stuck in the local minima of multimodal functions. Thus, an approach that combines the strengths of stochastic and deterministic optimization schemes while avoiding their weaknesses is of interest. This paper presents a new hybrid optimization algorithm that combines the PSO algorithm and gradient-based local search algorithms to achieve faster convergence and better accuracy of the final solution without getting trapped in local minima. In the new gradient-based PSO algorithm, referred to as the GPSO algorithm, the PSO algorithm is used for global exploration and a gradient-based scheme is used for accurate local exploration. The global minimum is located by a process of finding progressively better local minima. The GPSO algorithm avoids the use of inertial weights and constriction coefficients, which can cause the PSO algorithm to converge to a local minimum if improperly chosen. The De Jong test suite of benchmark optimization problems was used to test the new algorithm and facilitate comparison with the classical PSO algorithm. The GPSO algorithm is compared to four different refinements of the PSO algorithm from the literature and shown to converge faster to a significantly more accurate final solution for a variety of benchmark test functions.
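
A minimal sketch of the GPSO idea described above, assuming nothing beyond the abstract: plain PSO (no inertia weight or constriction coefficient) handles global exploration, and a few gradient-descent steps on the current best handle local exploration. The acceleration coefficients, step size, numerical gradient, and the De Jong F1 (sphere) test function are illustrative choices, not the authors' settings:

    import numpy as np

    def num_grad(f, x, eps=1e-6):
        # Central-difference estimate of the gradient of f at x.
        g = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (f(x + d) - f(x - d)) / (2.0 * eps)
        return g

    def gpso(f, dim, n_particles=20, iters=100, lr=0.01, local_steps=10, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
        pbest = x.copy()
        pbest_f = np.array([f(p) for p in x])
        i = pbest_f.argmin()
        gbest, gbest_f = pbest[i].copy(), pbest_f[i]
        for _ in range(iters):
            # PSO step for global exploration (no inertia weight or
            # constriction coefficient, as in the abstract above).
            r1, r2 = rng.random((2, n_particles, dim))
            x = x + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            i = pbest_f.argmin()
            if pbest_f[i] < gbest_f:
                gbest, gbest_f = pbest[i].copy(), pbest_f[i]
            # Gradient descent refines the current best (local exploration).
            cand = gbest.copy()
            for _ in range(local_steps):
                cand = cand - lr * num_grad(f, cand)
            fc = f(cand)
            if fc < gbest_f:
                gbest, gbest_f = cand, fc
        return gbest, gbest_f

    sphere = lambda z: float(np.sum(z ** 2))             # De Jong F1
    print(gpso(sphere, dim=5))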

2.
We analyze and compare the well-known gradient descent algorithm and the more recent exponentiated gradient algorithm for training a single neuron with an arbitrary transfer function. Both algorithms are easily generalized to larger neural networks, and the generalization of gradient descent is the standard backpropagation algorithm. We prove worst-case loss bounds for both algorithms in the single neuron case. Since local minima make it difficult to prove worst-case bounds for gradient-based algorithms, we must use a loss function that prevents the formation of spurious local minima. We define such a matching loss function for any strictly increasing differentiable transfer function and prove worst-case loss bounds for any such transfer function and its corresponding matching loss. The different forms of the two algorithms' bounds indicate that exponentiated gradient outperforms gradient descent when the inputs contain a large number of irrelevant components. Simulations on synthetic data confirm these analytical results.
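
A hedged sketch of the two update rules in the single-neuron setting: under the matching loss of a strictly increasing differentiable transfer function f, the per-example gradient with respect to the weights reduces to (f(w.x) - y) x, so gradient descent (GD) updates additively while exponentiated gradient (EG) updates multiplicatively. The tanh transfer, learning rate, simplex normalization for EG, and the mostly-irrelevant-inputs data are assumptions for illustration:

    import numpy as np

    def train_gd_vs_eg(X, y, eta=0.05, epochs=100, f=np.tanh):
        # GD keeps an unconstrained weight vector; EG keeps a positive,
        # normalized one and multiplies it by exponentiated gradients.
        n, d = X.shape
        w_gd = np.zeros(d)
        w_eg = np.full(d, 1.0 / d)
        for _ in range(epochs):
            for x, t in zip(X, y):
                err = f(w_gd @ x) - t
                w_gd = w_gd - eta * err * x              # additive update
                err = f(w_eg @ x) - t
                w_eg = w_eg * np.exp(-eta * err * x)     # multiplicative update
                w_eg = w_eg / w_eg.sum()                 # renormalize
        return w_gd, w_eg

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    u = np.zeros(20); u[0] = 1.0        # 1 relevant input, 19 irrelevant ones
    y = np.tanh(X @ u)
    print(train_gd_vs_eg(X, y))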

3.
陈佳楠  夏飞  张浩  彭道刚 《测控技术》2016,35(5):124-128
To address the shortcomings of the traditional wavelet neural network, a wavelet neural network optimized by a simulated annealing particle swarm algorithm is proposed and applied to steam turbine fault diagnosis. The simulated annealing particle swarm algorithm first performs a preliminary optimization of the wavelet neural network's parameters, and the wavelet neural network is then trained in a second optimization stage. Experimental results show that, compared with the WNN and PSO-WNN algorithms, the proposed SA-PSO-WNN algorithm trains the network faster, searches globally more effectively, and generalizes better, giving it considerable practical value.

4.
黄杰  杨孝平 《自动化学报》2012,38(4):582-590
A major drawback of image segmentation with active contours is that the objective function is nonconvex, which not only makes the segmentation prone to local minima but also rules out a number of fast algorithms. Starting from Bayesian risk estimation, this paper first derives an active contour model based on the Rayleigh distribution for B-mode ultrasound amplitude images. Combining this with convex relaxation then yields a new, relaxed convex model; the relationship between the original and relaxed models is given by Theorem 1. Finally, combining the split Bregman method, a fast algorithm for the B-mode ultrasound segmentation model is given. Compared with the traditional gradient descent method, the proposed algorithm not only attains the globally optimal solution but also converges far faster than gradient descent.

5.
Conventional gradient descent learning algorithms for soft computing systems have the learning speed bottleneck problem and the local minima problem. To effectively solve the two problems, the n-variable constructive granular system with high-speed granular constructive learning is proposed based on granular computing and soft computing, and proved to be a universal approximator. The fast granular constructive learning algorithm can highly speed up granular knowledge discovery by directly calculating all parameters of the n-variable constructive granular system using training data, and then construct the n-variable constructive granular system with any required accuracy using a small number of granular rules. Predictive granular knowledge discovery simulation results indicate that the direct-calculation-based granular constructive algorithm is better than the conventional gradient descent learning algorithm in terms of learning speed, learning error, and prediction error.

6.
The updating rule of the original discrete Hopfield neural network (DHNN) is based on gradient descent dynamics, which always leads to the local minima problem. In this paper, by introducing the idea of simulated annealing (SA) into the DHNN, we first propose an annealing HNN (AHNN) that permits temporary energy ascent to help the DHNN escape from local minima. Then, from a cooperative perspective, a population of AHNN processes are simultaneously implemented and coupled by their acceptance probabilities, yielding a cooperative AHNN (CoAHNN). The primary objective of the coupling in the CoAHNN is to create cooperative behavior via information exchange among neural networks. This objective helps in deciding whether uphill moves will be accepted. In addition, coupling can provide information used online to guide the networks toward the global optimum. The CoAHNN is tested on 21 unconstrained binary quadratic programming (UBQP) problems with sizes ranging from 3000 to 7000, and 48 maximum cut benchmark problems, a special case of the UBQP, with sizes ranging from 512 to 3375. The UBQP consists of maximizing a quadratic 0–1 function. It is a well-known NP-hard problem and is considered a unified model for a variety of combinatorial optimization problems. Simulation results show that the CoAHNN is better than or competitive with other HNN based algorithms, metaheuristic algorithms and state-of-the-art algorithms.
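
A minimal single-network sketch of the AHNN idea (the cooperative coupling of multiple networks through their acceptance probabilities is omitted); the toy weight matrix, cooling schedule, and Metropolis acceptance rule are illustrative assumptions:

    import numpy as np

    def energy(W, s):
        # UBQP-style Hopfield energy for a 0/1 state vector s.
        return -0.5 * s @ W @ s

    def annealed_hopfield(W, s, T0=10.0, alpha=0.95, sweeps=100, seed=0):
        rng = np.random.default_rng(seed)
        T = T0
        for _ in range(sweeps):
            for i in rng.permutation(s.size):
                trial = s.copy()
                trial[i] = 1.0 - trial[i]              # flip one neuron
                dE = energy(W, trial) - energy(W, s)
                # Metropolis rule: uphill moves are accepted with
                # probability exp(-dE / T), so the network can escape
                # local minima of the gradient-descent dynamics.
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s = trial
            T *= alpha                                 # geometric cooling
        return s

    W = np.array([[0, 2, -1], [2, 0, 3], [-1, 3, 0]], dtype=float)
    s = annealed_hopfield(W, np.zeros(3))
    print(s, energy(W, s))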

7.
Training a neural network is a difficult optimization problem because of numerous local minima. Many global search algorithms have been used to train neural networks. However, local search algorithms are more efficient with computational resources, and therefore numerous random restarts with a local algorithm may be more effective than a global algorithm. This study uses Monte-Carlo simulations to determine the efficiency of a local search algorithm relative to nine stochastic global algorithms when using a neural network on function approximation problems. The computational requirements of the global algorithms are several times higher than the local algorithm and there is little gain in using the global algorithms to train neural networks. Since the global algorithms only marginally outperform the local algorithm in obtaining a lower local minimum and they require more computational resources, the results in this study indicate that with respect to the specific algorithms and function approximation problems studied, there is little evidence to show that a global algorithm should be used over a more traditional local optimization routine for training neural networks. Further, neural networks should not be estimated from a single set of starting values whether a global or local optimization method is used.
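
A small illustration of the random-restart strategy the study evaluates, here with an off-the-shelf BFGS local optimizer rather than the paper's specific algorithms; the Rastrigin test function and restart count are assumptions:

    import numpy as np
    from scipy.optimize import minimize

    def multistart(f, dim, n_starts=25, low=-5.12, high=5.12, seed=0):
        # Repeated local search (BFGS) from random starting points; the
        # best local minimum found estimates the global minimum.
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_starts):
            x0 = rng.uniform(low, high, dim)
            res = minimize(f, x0, method="BFGS")
            if best is None or res.fun < best.fun:
                best = res
        return best.x, best.fun

    rastrigin = lambda x: 10 * x.size + float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
    print(multistart(rastrigin, dim=2))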

8.
On the problem of local minima in recurrent neural networks
Many researchers have recently focused their efforts on devising efficient algorithms, mainly based on optimization schemes, for learning the weights of recurrent neural networks. As in the case of feedforward networks, however, these learning algorithms may get stuck in local minima during gradient descent, thus discovering sub-optimal solutions. This paper analyses the problem of optimal learning in recurrent networks by proposing conditions that guarantee local minima free error surfaces. An example is given that also shows the constructive role of the proposed theory in designing networks suitable for solving a given task. Moreover, a formal relationship between recurrent and static feedforward networks is established such that the examples of local minima for feedforward networks already known in the literature can be associated with analogous ones in recurrent networks.

9.
Many real-world problems can be modelled as optimization problems. However, traditional algorithms for these problems often become trapped in local minima. The filled function method is an effective approach to this kind of problem, but existing filled functions have the disadvantages of discontinuity, non-differentiability, or sensitivity to parameters, which limit their efficiency. In this paper, we propose a new filled function that is continuous and differentiable and has no parameter to tune. Compared to discontinuous or non-differentiable filled functions, a continuous and differentiable filled function has three main advantages: first, it is less likely to introduce extra local minima; second, more efficient local search algorithms that use gradient information can be applied; and third, a continuous and differentiable filled function can be optimized more easily. Based on the newly proposed filled function, a new algorithm is designed for unconstrained global optimization problems. Numerical experiments were conducted, and the results show that the proposed algorithm is more efficient.
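
For context, a classical filled function from the earlier literature (Ge's function) illustrates the tunable parameters r and rho that the paper's parameter-free construction seeks to eliminate; this is background, not the paper's own function:

    P(x) = (1 / (r + f(x))) * exp(-||x - x1*||^2 / rho^2)

where x1* is the current local minimizer of f. Minimizing P drives the search out of the basin of x1*, but performance is sensitive to the choice of r and rho, which is precisely the drawback the abstract addresses.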

10.
Image segmentation using evolutionary computation
Image segmentation denotes a process by which a raw input image is partitioned into nonoverlapping regions such that each region is homogeneous and the union of any two adjacent regions is heterogeneous. A segmented image is considered to be the highest domain-independent abstraction of an input image. The image segmentation problem is treated as one of combinatorial optimization. A cost function which incorporates both edge information and region gray-scale uniformity is defined. The cost function is shown to be multivariate with several local minima. The genetic algorithm, a stochastic optimization technique based on evolutionary computation, is explored in the context of image segmentation. A class of hybrid evolutionary optimization algorithms based on a combination of the genetic algorithm and stochastic annealing algorithms such as simulated annealing, microcanonical annealing, and the random cost algorithm is shown to exhibit superior performance as compared with the canonical genetic algorithm. Experimental results on gray-scale images are presented.

11.
Beyond the global search ability of the simple genetic algorithm, the immune genetic algorithm also offers immune memory, immune regulation, and diversity preservation. Training neural networks with gradient descent converges slowly, easily falls into local optima, and is strongly affected by the initial values. Combining the advantages of the two methods, this paper proposes a hybrid training scheme for RBF networks that couples the immune genetic algorithm with gradient descent, and shows experimentally that the proposed combination is faster and reaches a smaller final error than the combination of a simple genetic algorithm with gradient descent.

12.
The equal circle packing problem asks how to place n circular objects of unit radius, without mutual overlap, into an equilateral triangle container of the smallest possible side length. As a classical NP-hard problem, it has important theoretical value and a wide range of applications. Simulated annealing is a stochastic global optimization algorithm. By embedding a heuristic configuration-update strategy and a gradient-based local search strategy into simulated annealing, and combining the result with bisection search, a heuristic algorithm is proposed for packing equal circles in an equilateral triangle container. The heuristic configuration-update strategy generates new configurations and jumps out of basins, the gradient method searches near newly generated configurations for configurations of lower energy, and bisection search yields the minimum side length of the equilateral triangle container. Experiments on 41 instances show that the algorithm improves the best-known results for 38 of them, making it an effective algorithm for the equal circle packing problem in an equilateral triangle container.
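
A sketch of the outer bisection loop on the container's side length; the inner feasibility oracle, which in the paper is the SA search with heuristic configuration updates and gradient descent, is replaced here by a trivial n = 1 placeholder:

    import math

    def min_side_length(feasible, lo, hi, tol=1e-6):
        # Bisection on the side length L: retain the smallest L for which
        # the inner search still finds an overlap-free packing.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if feasible(mid):
                hi = mid
            else:
                lo = mid
        return hi

    # Placeholder oracle for n = 1: a unit circle fits iff the triangle's
    # inradius L / (2*sqrt(3)) is at least 1.
    print(min_side_length(lambda L: L / (2 * math.sqrt(3)) >= 1.0, 1.0, 10.0))
    # -> about 3.4641, i.e. 2*sqrt(3)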

13.
The optimal design of structural systems with conventional members, or systems with conventional as well as passive or active members, is presented. The optimal sizes of the conventional members of structural systems are obtained for dynamic loads. A modified simulated annealing algorithm is presented and used to solve the optimization problem with dynamic constraints. The present algorithm differs from existing simulated annealing algorithms in two respects: first, an automatic reduction of the search range is performed, and second, a sensitivity analysis of the design variables is utilized. The present method converges to the minimum in fewer iterations than existing simulated annealing algorithms. The algorithm is advantageous over classical methods for the optimization of structural systems with constraints arising from dynamic loads. For certain initial values of the design variables, classical optimization methods either fail to converge or produce designs which are local minima; the present algorithm seems to be successful in yielding the global minimum design.

14.
Discrete optimization of truss structures is a hard computing problem with many local minima. Metaheuristic algorithms are naturally suited to discrete optimization problems as they do not require gradient information. A recently developed method called the Jaya algorithm (JA) has proven very efficient in continuous engineering problems. Remarkably, JA has a very simple formulation and does not utilize algorithm-specific parameters. This study presents a novel JA formulation for discrete optimization of truss structures under stress and displacement constraints. The new algorithm, denoted discrete advanced JA (DAJA), implements efficient search mechanisms for generating new trial designs including discrete sizing, layout and topology optimization variables. Besides the JA's basic concept of moving towards the best design of the population and away from the worst design, DAJA tries to form a set of descent directions in the neighborhood of each candidate design, thus generating high-quality trial designs that are very likely to improve the current population. Results collected in seven benchmark problems clearly demonstrate the superiority of DAJA over other state-of-the-art metaheuristic algorithms and multi-stage continuous–discrete optimization formulations.
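
A hedged sketch of the basic continuous Jaya update that DAJA builds on (the discrete variables and descent-direction mechanisms of DAJA itself are not shown); the bounds, population size, and sphere test function are assumptions:

    import numpy as np

    def jaya(f, dim, pop_size=20, iters=200, low=-10.0, high=10.0, seed=0):
        # Basic continuous Jaya: every design moves toward the best member
        # and away from the worst; no algorithm-specific parameters.
        rng = np.random.default_rng(seed)
        pop = rng.uniform(low, high, (pop_size, dim))
        fit = np.array([f(x) for x in pop])
        for _ in range(iters):
            best, worst = pop[fit.argmin()], pop[fit.argmax()]
            r1, r2 = rng.random((2, pop_size, dim))
            trial = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
            trial = np.clip(trial, low, high)
            tfit = np.array([f(x) for x in trial])
            keep = tfit < fit                        # greedy replacement
            pop[keep], fit[keep] = trial[keep], tfit[keep]
        return pop[fit.argmin()], fit.min()

    print(jaya(lambda x: float(np.sum(x ** 2)), dim=4))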

15.
Maximum-margin clustering is an extension of the support vector machine (SVM) to clustering. It partitions a set of unlabeled data into multiple groups by finding hyperplanes with the largest margins. Although existing algorithms have shown promising results, there is no guarantee of convergence of these algorithms to global solutions due to the nonconvexity of the optimization problem. In this paper, we propose a simulated annealing-based algorithm that is able to mitigate the issue of local minima in the maximum-margin clustering problem. The novelty of our algorithm is twofold: (i) it comprises a comprehensive cluster modification scheme based on simulated annealing, and (ii) it introduces a new approach based on the combination of k-means++ and SVM at each step of the annealing process. More precisely, k-means++ is initially applied to extract subsets of the data points. Then, an unsupervised SVM is applied to improve the clustering results. Experimental results on various benchmark data sets (of up to over a million points) give evidence that the proposed algorithm is more effective at solving the clustering problem than a number of popular clustering algorithms.

16.
This paper studies the RPROP hybrid algorithm for BP neural networks under ant colony optimization, and proposes using ant colony optimization combined with the RPROP hybrid algorithm to handle the large volume of redundant data at information service nodes in wireless sensor networks and related issues such as network operating speed. By building the system framework and the information service nodes, it is shown that the algorithm can extend the lifetime of the BP neural network, speed up the network's convergence, effectively merge duplicate data at the information service nodes, promptly filter out data from abnormal information service nodes, and reduce the energy consumption of data service nodes. The number of training iterations is clearly improved, redundancy in the BP network's learning and training time is removed, and the algorithm's strong computation and optimization abilities raise the network's classification accuracy and operating efficiency. It is thus a practical algorithm, fully able to meet the operating needs of the growing number of wireless Internet terminals.
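
Since the abstract leans on RPROP, here is a minimal sketch of one RPROP weight update (the iRPROP- variant without weight backtracking), independent of the ant colony part; the step-size factors are the usual literature values, and the quadratic demo objective is an assumption:

    import numpy as np

    def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                   step_max=50.0, step_min=1e-6):
        # Adapt each weight's step size from the sign of the gradient
        # only; gradient magnitudes are ignored.
        sign_change = grad * prev_grad
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
        grad = np.where(sign_change < 0, 0.0, grad)   # skip update after a sign flip
        w = w - np.sign(grad) * step
        return w, grad, step

    w = np.array([3.0, -2.0])
    g_prev, step = np.zeros(2), np.full(2, 0.1)
    for _ in range(100):
        g = 2.0 * w                                   # gradient of f(w) = ||w||^2
        w, g_prev, step = rprop_step(w, g, g_prev, step)
    print(w)                                          # close to the minimum at 0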

17.
With satellite module layout as the background, this paper studies a class of orthogonal rectangle packing problems with static non-equilibrium constraints. Borrowing the quasi-physical strategy, embedding-depth formulas are defined between rectangles and between a rectangle and the circular container, converting the problem into an unconstrained optimization problem. By combining a heuristic configuration-update strategy and a gradient-based local search mechanism with the globally optimizing simulated annealing algorithm, a heuristic simulated annealing algorithm is proposed for the orthogonal rectangle packing problem with static non-equilibrium constraints. The heuristic configuration-update strategy in the algorithm generates new configurations and jumps out of basins, and the gradient method searches near new configurations for configurations of lower energy. In addition, during layout optimization, a static non-equilibrium penalty term is added to the extrusion elastic potential energy, and a centroid-translation method is used to bring the static non-equilibrium of the layout system within the constraint. Experiments show that the proposed algorithm is an effective algorithm for the orthogonal rectangle packing problem with static non-equilibrium constraints.

18.
Sequential Monte Carlo methods to train neural network models
We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent sampling importance resampling algorithm (HySIR). In terms of computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimization strategy that allows us to learn the probability distributions of the network weights and outputs in a sequential framework. It is well suited to applications involving on-line, nonlinear, and non-Gaussian signal processing. We show how the new algorithm outperforms extended Kalman filter training on several problems. In particular, we address the problem of pricing option contracts traded in financial markets. In this context, we are able to estimate the one-step-ahead probability density functions of the options prices.
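
A hedged sketch of the HySIR idea on a linear toy model rather than a neural network: each weight particle takes a gradient step plus process noise, particles are reweighted by a Gaussian likelihood of the residuals, then resampled (SIR); the noise scales, particle count, and linear model are assumptions:

    import numpy as np

    def hysir(X, y, n=200, steps=50, lr=0.05, jitter=0.05, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(n, X.shape[1]))        # weight particles
        for _ in range(steps):
            # Gradient move on the mean squared error, plus process
            # noise (the "hybrid" part of HySIR).
            grads = (W @ X.T - y) @ X / len(y)
            W = W - lr * grads + jitter * rng.normal(size=W.shape)
            # Importance weights from a Gaussian residual likelihood.
            logw = -0.5 * ((W @ X.T - y) ** 2).mean(axis=1)
            p = np.exp(logw - logw.max())
            p /= p.sum()
            W = W[rng.choice(n, size=n, p=p)]       # resampling (SIR)
        return W.mean(axis=0)

    X = np.random.default_rng(1).normal(size=(100, 3))
    y = X @ np.array([0.5, -1.0, 2.0])
    print(hysir(X, y))                              # near [0.5, -1.0, 2.0]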

19.
Most current improved ant colony algorithms for multi-constrained quality-of-service routing (QoSR) converge slowly and easily fall into local optima, and are therefore inefficient. To address this, an ant colony algorithm with gradient descent (ACAGD) is proposed. The algorithm introduces gradient descent into the ants' local search and, combined with the residual pheromone, jointly determines the ants' next-hop selection strategy: an ant selects its next hop according to the pheromone concentration with a certain probability, and according to gradient descent with a certain probability, thereby reducing the likelihood that the traditional ant colony algorithm falls into local optima. Simulation experiments on network topologies with different numbers of routing nodes, randomly generated with the Waxman network model, show that compared with other improved ant colony algorithms, ACAGD obtains routes with relatively low overall cost without affecting the convergence speed, and the algorithm is quite stable.

20.
Many traditional methods exist for solving systems of nonlinear equations, such as Newton's method and gradient descent, but they suffer from drawbacks such as requiring the system to be continuously differentiable and depending on a suitable choice of initial values. In view of these drawbacks, the solution problem is recast as an optimization problem, and a new crossover optimization algorithm is proposed that makes full use of the local search ability of the bacterial foraging algorithm and the global search ability of the particle swarm algorithm, bringing the respective strengths of the two algorithms into full play. Numerical experiments show that the new algorithm compensates for the weak local search ability of the particle swarm algorithm and the weak global search ability of the bacterial foraging algorithm, and is an effective method for solving nonlinear equations.
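
A short illustration of the recast the abstract describes: a nonlinear system F(x) = 0 becomes the optimization problem min ||F(x)||^2. The example system is made up, and scipy's differential evolution stands in for the paper's bacterial-foraging/PSO hybrid purely to show the recast:

    import numpy as np
    from scipy.optimize import differential_evolution

    def residual(x):
        # Illustrative nonlinear system F(x) = 0 (made up for the demo).
        return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                         np.exp(x[0]) + x[1] - 1.0])

    objective = lambda x: float(np.sum(residual(x) ** 2))   # min ||F(x)||^2

    res = differential_evolution(objective, bounds=[(-3, 3), (-3, 3)], seed=0)
    print(res.x, res.fun)   # res.fun near 0 means res.x is (near) a root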

