Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Efficient global optimization for image registration
The image registration problem of finding a mapping that matches data from multiple cameras is computationally intensive. Current solutions to this problem tolerate Gaussian noise, but are unable to perform the underlying global optimization computation in real time. This paper expands these approaches to other noise models and proposes the Terminal Repeller Unconstrained Subenergy Tunneling (TRUST) method, originally introduced by B.C. Cetin et al. (1993), as an appropriate global optimization method for image registration. TRUST avoids local minima entrapment, without resorting to exhaustive search, by using subenergy tunneling and terminal repellers. The TRUST method applied to the registration problem shows good convergence to the global minimum. Experimental results show TRUST to be more computationally efficient than either tabu search or genetic algorithms.

2.
In this paper, an improved approach incorporating adaptive particle swarm optimization (APSO) and a priori information into feedforward neural networks for function approximation problems is proposed. It is well known that gradient-based learning algorithms such as the backpropagation algorithm have a good ability of local search, whereas PSO has a good ability of global search. Therefore, in the improved approach, the APSO algorithm, encoding the first-order derivative information of the approximated function, is used to train the network into the neighborhood of the global minimum. Then, with the connection weights produced by APSO, the network is trained with a modified gradient-based algorithm with a magnified gradient function. The modified gradient-based algorithm can reduce input-to-output mapping sensitivity and lessen the chance of being trapped in local minima. By combining APSO with a local search algorithm and considering a priori information, the improved approach has better approximation accuracy and convergence rate. Finally, simulation results are given to verify the efficiency and effectiveness of the proposed approach.
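The two-phase scheme described above, a global PSO search followed by gradient-based refinement, can be sketched as follows. This is a minimal illustration on a 1-D Rastrigin function, not the paper's APSO: the adaptive inertia weight is reduced to a linear decay, the a priori derivative information is omitted, and all parameter values are arbitrary choices.

```python
import math
import random

def f(x):
    # 1-D Rastrigin function: many regularly spaced local minima, global minimum f(0) = 0
    return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def df(x):
    return 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)

def pso_then_gradient(seed=0, n=30, iters=200):
    rng = random.Random(seed)
    pos = [rng.uniform(-5.12, 5.12) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    pbest_f = [f(x) for x in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters  # "adaptive" inertia weight, here just a linear decay
        for i in range(n):
            vel[i] = (w * vel[i]
                      + 2.0 * rng.random() * (pbest[i] - pos[i])
                      + 2.0 * rng.random() * (gbest - pos[i]))
            vel[i] = max(-5.12, min(5.12, vel[i]))  # velocity clamp
            pos[i] += vel[i]
            fx = f(pos[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], fx
                if fx < gbest_f:
                    gbest, gbest_f = pos[i], fx
    # phase 2: gradient descent refines the swarm's best point to a nearby minimum
    x = gbest
    for _ in range(500):
        x -= 0.001 * df(x)
    return x, f(x)

x, fx = pso_then_gradient()
```

The gradient phase only polishes the basin the swarm delivered, which is exactly the division of labor the abstract argues for.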

3.
Training a neural network is a difficult optimization problem because of numerous local minima. Many global search algorithms have been used to train neural networks. However, local search algorithms are more efficient with computational resources, and therefore numerous random restarts with a local algorithm may be more effective than a global algorithm. This study uses Monte-Carlo simulations to determine the efficiency of a local search algorithm relative to nine stochastic global algorithms when using a neural network on function approximation problems. The computational requirements of the global algorithms are several times higher than the local algorithm and there is little gain in using the global algorithms to train neural networks. Since the global algorithms only marginally outperform the local algorithm in obtaining a lower local minimum and they require more computational resources, the results in this study indicate that with respect to the specific algorithms and function approximation problems studied, there is little evidence to show that a global algorithm should be used over a more traditional local optimization routine for training neural networks. Further, neural networks should not be estimated from a single set of starting values whether a global or local optimization method is used.
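The restart strategy this study recommends can be sketched as follows: run a cheap local algorithm from many random starting values and keep the best result. Plain gradient descent on a multimodal 1-D objective stands in for whichever local routine is used; the objective and all parameters are arbitrary illustrations.

```python
import math
import random

def f(x):
    # multimodal objective: global minimum f(x*) ~ -0.97 near x* ~ -0.51
    return math.sin(3.0 * x) + 0.1 * x * x

def local_min(x0, lr=0.01, steps=2000):
    # plain gradient descent: cheap, but only finds the minimum of x0's basin
    x = x0
    for _ in range(steps):
        x -= lr * (3.0 * math.cos(3.0 * x) + 0.2 * x)
    return x

# multi-start: many cheap local runs, keep the best converged point
rng = random.Random(1)
starts = [rng.uniform(-10.0, 10.0) for _ in range(50)]
candidates = [local_min(s) for s in starts]
best = min(candidates, key=f)
```

Each restart costs only a local run, yet the best-of-restarts result is very likely to sit in the global basin, which is the study's point about restarts versus expensive global algorithms.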

4.
To address the drawbacks of the basic grey wolf optimizer (GWO) on complex problems, namely dependence on the initial population, premature convergence, and easy entrapment in local optima, an improved grey wolf optimization algorithm is proposed for function optimization. The algorithm first uses the chaotic Cat map to generate the initial positions of the wolf pack, laying a foundation of population diversity for the global search phase. It then introduces the individual-memory mechanism of particle swarm optimization to strengthen local search and speed up convergence. Finally, Gaussian mutation perturbation with a survival-of-the-fittest selection rule is applied to the current best solution to keep the algorithm from getting trapped in local optima. Simulation experiments on 13 benchmark functions show that, compared with the basic GWO, PSO, GA, and ACO algorithms, the proposed algorithm achieves better solution accuracy and faster convergence.
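The chaotic initialization step can be sketched as below, assuming the "Cat map" refers to Arnold's cat map on the unit square (the paper's exact variant may differ); the seed values, population size, and bounds are arbitrary.

```python
def cat_map_population(n, dim, lo, hi, x0=0.31, y0=0.47):
    # Arnold's cat map: (x, y) -> ((x + y) mod 1, (x + 2y) mod 1), chaotic on the unit square
    x, y = x0, y0
    pop = []
    for _ in range(n):
        wolf = []
        for _ in range(dim):
            x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
            wolf.append(lo + x * (hi - lo))  # map the chaotic iterate into [lo, hi]
        pop.append(wolf)
    return pop

# initial wolf positions spread over [-100, 100]^5 by the chaotic iterates
pop = cat_map_population(20, 5, -100.0, 100.0)
```

The ergodicity of the map spreads the initial pack over the search space more evenly than a short pseudo-random sequence might, which is the diversity argument in the abstract.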

5.
高艳卉, 诸克军. 《计算机应用》 (Journal of Computer Applications), 2011, 31(6): 1648-1651
A hybrid PSO-Solver algorithm is formed by combining particle swarm optimization (PSO) with the Solver add-in to solve optimization problems. PSO, as a global search algorithm, first produces a globally feasible solution to the problem; Solver, a gradient-based local search tool, then refines the solution obtained by PSO. Combining the two both speeds up the global search and effectively avoids entrapment in local optima. The algorithm is programmed in VBA and is simple and easy to implement. Results on unconstrained and constrained optimization problems, and comparisons with standard PSO and several other hybrid algorithms, show that PSO-Solver effectively improves both the convergence speed of the solution process and the accuracy of the solutions.

6.
The Prediction Error Method (PEM) is related to an optimization problem built on input/output data collected from the system to be identified. It is often hard to find the global solution of this optimization problem because the corresponding objective function presents local minima and/or the search space is constrained to a nonconvex set. The shape of the cost function, and hence the difficulty in solving the optimization problem, depends directly on the experimental conditions, more specifically on the spectrum of the input/output data collected from the system. Therefore, it seems plausible to improve the convergence to the global minimum by properly choosing the spectrum of the input; in this paper, we address this problem. We present a condition for convergence to the global minimum of the cost function and propose its inclusion in the input design. We present the application of the proposed approach to case studies where the algorithms tend to get trapped in nonglobal minima.

7.
This paper addresses the issue of training feedforward neural networks by global optimization. The main contributions include characterization of global optimality of a network error function, and formulation of a global descent algorithm to solve the network training problem. A network with a single hidden-layer and a single-output unit is considered. By means of a monotonic transformation, a sufficient condition for global optimality of a network error function is presented. Based on this, a penalty-based algorithm is derived directing the search towards possible regions containing the global minima. Numerical comparison with benchmark problems from the neural network literature shows superiority of the proposed algorithm over some local methods, in terms of the percentage of trials attaining the desired solutions. The algorithm is also shown to be effective for several pattern recognition problems.

8.
A chaotic Hopfield network and its application to optimization computation
This paper discusses the application of neural network algorithms to constrained optimization problems and proposes a chaotic neural network model. A chaos mechanism is introduced into the Hopfield network: the search first proceeds under chaotic dynamics, and the Hopfield network's gradient-based optimization then takes over. Simulations on nonlinear function optimization problems show that the algorithm has a strong ability to escape local minima.

9.
An optimization method based on a Hopfield neural network with local evolution
An optimization method based on a Hopfield neural network with local evolution is proposed. The method combines a genetic algorithm with a Hopfield neural network, overcoming both the Hopfield network's tendency to converge to local optima and the genetic algorithm's slow convergence. The Hopfield network first iterates its state equations to lower the network energy; after convergence, a genetic algorithm searches within a local neighborhood to jump out of possible local-optimum traps, and the Hopfield network then continues the iterative optimization. This locally evolving optimization method is especially suitable for large-scale problems. Results on image segmentation and on a fairly large 200-city traveling salesman problem show clearly improved global convergence rate and convergence speed.

11.
Neural networks are dynamic systems consisting of highly interconnected and parallel nonlinear processing elements that are shown to be extremely effective in computation. This paper presents an architecture of recurrent neural networks for solving the N-Queens problem. More specifically, a modified Hopfield network is developed and its internal parameters are explicitly computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points, which represent a solution of the considered problem. The network is shown to be completely stable and globally convergent to the solutions of the N-Queens problem. A fuzzy logic controller is also incorporated in the network to minimize convergence time. Simulation results are presented to validate the proposed approach.

12.
This paper presents an empirical study of the convergence characteristics of augmented Lagrangian coordination (ALC) for solving multi-modal optimization problems in a distributed fashion. A number of test problems that do not satisfy all assumptions of the convergence proof for ALC are selected to demonstrate the convergence characteristics of ALC algorithms. When only a local search is employed at the subproblems, local solutions to the original problem are often attained. When a global search is performed at subproblems, global solutions to the original, non-decomposed problem are found for many of the examples. Although these findings are promising, ALC with a global subproblem search may yield only local solutions in the case of non-convex coupling functions or disconnected feasible domains. Results indicate that for these examples both the starting point and the sequence in which subproblems are solved determine which solution is obtained. We illustrate that the main cause for this behavior lies in the alternating minimization inner loop, which is inherently of a local nature.
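The local nature of an alternating-minimization inner loop, and its dependence on the starting point, can be illustrated on a toy two-variable problem with a nonconvex coupling term. This is a stand-alone sketch, not the ALC formulation; the objective is chosen so each block update has a closed form.

```python
def alt_min(x, y, iters=200):
    # alternating (block-coordinate) minimization of
    #   f(x, y) = (1 - x*y)**2 + 0.1*(x**2 + y**2)
    # each update is the exact one-variable minimizer with the other variable fixed
    for _ in range(iters):
        x = y / (y * y + 0.1)  # argmin over x for fixed y
        y = x / (x * x + 0.1)  # argmin over y for fixed x
    return x, y

# two starting points converge to two different solutions of the same problem
x1, y1 = alt_min(1.0, 1.0)
x2, y2 = alt_min(-1.0, -1.0)
```

Both runs reach a stationary point (x = y = ±sqrt(0.9)), but which one depends entirely on where the alternation starts, mirroring the starting-point sensitivity the study reports.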

13.
To address the quantum-behaved particle swarm optimization (QPSO) algorithm's slow convergence and tendency to fall into local optima on high-dimensional complex functions, a chaotic QPSO algorithm with adaptive adjustment of the inertia weight is proposed, exploiting the ergodicity of chaotic operators. During the run, the algorithm applies different inertia-weight strategies according to the quality of each particle's fitness, so as to balance the particles' global and local search abilities. Tests on several typical functions show that the algorithm improves convergence speed and accuracy substantially, has a strong ability to avoid local optima, and performs far better than ordinary PSO and QPSO.
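The chaotic-operator ingredient can be sketched with a logistic map, a common choice for chaos-based perturbation (an assumption here: the paper may use a different map, and the adaptive inertia-weight schedule is omitted):

```python
def logistic_chaos(x, mu=4.0):
    # logistic map in its fully chaotic regime; iterates are ergodic on (0, 1)
    return mu * x * (1.0 - x)

def chaotic_perturb(position, radius, x0=0.37, steps=5):
    # perturb each coordinate by a chaotic offset mapped into [-radius, radius];
    # operator details and parameter values are illustrative assumptions
    x = x0
    out = []
    for v in position:
        for _ in range(steps):
            x = logistic_chaos(x)
        out.append(v + (2.0 * x - 1.0) * radius)
    return out

p = chaotic_perturb([0.0, 1.0, -1.0], 0.5)
```

Because the chaotic iterates eventually visit the whole interval, repeated perturbations of this kind can pull a stagnating particle out of a local basin.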

14.
We have already proposed the inverse function delayed (ID) model as a novel neuron model. The ID model has a negative resistance similar to the Bonhoeffer–van der Pol (BVP) model, and a network of ID neurons has an energy function similar to the Hopfield model. A neural network with an energy function can converge to a solution of a combinatorial optimization problem, and the computation is parallel and hence fast. However, the existence of local minima is a serious problem. The negative resistance of the ID model can free the network state from such local minima by selective destabilization, so we expect it has the potential to overcome the local minimum problem. In computer simulations, we have already shown that the ID network can escape local minima and converge to optimal solutions. However, a theoretical analysis had not yet been presented. In this paper, we redefine three types of constraints for the particular problems, then analytically estimate the appropriate network parameters that admit only the global minimum states. Moreover, we demonstrate the validity of the estimated network parameters by computer simulations.

15.
An improved hybrid particle swarm optimization algorithm for solving the TSP
To overcome the premature convergence and slow convergence of particle swarm optimization on combinatorial optimization problems, the PSO algorithm is combined with a local search optimization algorithm; this suppresses premature convergence and raises the convergence speed. By constructing an effective reference edge set for the local search to draw on, the solution quality and efficiency of the local search are improved. The new hybrid algorithm converges efficiently to the global optimum on small and medium-scale traveling salesman problems, and experiments show the improved hybrid PSO algorithm is effective.

16.
A particle swarm optimization algorithm with adaptive grouping
Combining the niching idea with a catastrophe mechanism, a particle swarm algorithm that dynamically adjusts the population structure (AGPSO) is proposed. After a locally optimal region is found, only some of the particles are left to search for the local optimum; the remaining particles undergo the catastrophe operation and are then constrained to search the rest of the space for new promising regions. This achieves fast local convergence while also increasing the diversity of the particle population, thereby alleviating premature convergence. Simulations on typical optimization functions verify the algorithm's effectiveness.

17.
To address the basic particle swarm optimization (PSO) algorithm's tendency to fall into local optima and its relatively slow convergence, two kinds of mutation operations based on normally distributed sampling are added to the PSO update. One mutation strengthens the local search ability; the other raises the probability of discovering the global optimum, preventing all particles from falling into the neighborhood of a single local optimum. Numerical results show that the proposed algorithm's global search ability is markedly improved and its convergence is faster.
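A normal-distribution mutation operator of the two-scale kind described above might look like the following sketch; the split into a small-variance local mutation and a large-variance global one, and every parameter value, are illustrative assumptions rather than the paper's exact operators.

```python
import random

def mutate(position, sigma_local, sigma_global, p_global, rng, lo, hi):
    # pick the large-variance (global) mutation with probability p_global,
    # otherwise the small-variance (local) one, then clamp to the search bounds
    sigma = sigma_global if rng.random() < p_global else sigma_local
    return [min(hi, max(lo, v + rng.gauss(0.0, sigma))) for v in position]

rng = random.Random(7)
p = [0.5, -0.5]
q = mutate(p, 0.01, 2.0, 0.2, rng, -5.0, 5.0)
```

The small sigma refines a particle inside its current basin, while the occasional large sigma can relocate it to an entirely different basin, so the swarm cannot all collapse into one local optimum.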

18.
A self-escaping hybrid discrete particle swarm optimization algorithm for the TSP
Starting from an analysis of the relationships between local optima of the traveling salesman problem (TSP) and the individual and population bests, and targeting the discrete PSO (DPSO) algorithm's susceptibility to premature convergence and slow convergence, the velocity and position update formulas of DPSO are redefined. Combining this with the way biological species automatically disperse and migrate when population density grows too high, and with a local search algorithm (SEC), a new self-escaping hybrid discrete particle swarm algorithm (SEHDPSO) is proposed. The self-escape idea is a deterministic mutation operation that lets particles trapped in local-minimum regions resume global search through self-escape behavior, overcoming the algorithm's susceptibility to premature convergence. Simulation results show that SEHDPSO achieves better convergence and search efficiency than a hybrid ant colony algorithm (ACS+2-OPT).

19.
This paper presents a novel Heuristic Global Learning (HER-GBL) algorithm for multilayer neural networks. The algorithm is based upon the least squares method to maintain the fast convergence speed, and the penalized optimization to solve the problem of local minima. The penalty term, defined as a Gaussian-type function of the weight, is to provide an uphill force to escape from local minima. As a result, the training performance is dramatically improved. The proposed HER-GBL algorithm yields excellent results in terms of convergence speed, avoidance of local minima and quality of solution.
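The escape mechanism, a Gaussian-type penalty centred on a found local minimum whose gradient supplies an uphill force, can be sketched on a 1-D error surface. This is a toy reconstruction, not HER-GBL itself: the least-squares part is omitted and all constants are arbitrary.

```python
import math

def f(w):
    # toy error surface: local minimum near w ~ 0.96, global minimum near w ~ -1.03
    return (w * w - 1.0) ** 2 + 0.3 * w

def df(w):
    return 4.0 * w * (w * w - 1.0) + 0.3

def descend(grad, w, lr=0.01, steps=3000):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# 1) plain gradient descent from w0 = 1.5 lands in the local minimum
w_loc = descend(df, 1.5)

# 2) add a Gaussian-type penalty centred on that minimum; its gradient
#    supplies the uphill force pushing the search out of the basin
A, s = 2.0, 0.5
def dg(w):
    d = w - w_loc
    return df(w) + A * math.exp(-(d / s) ** 2) * (-2.0 * d / s ** 2)

# restart just off the penalty peak on both sides and keep the better escape
cand = [descend(dg, w_loc - 0.01), descend(dg, w_loc + 0.01)]
w_esc = min(cand, key=f)

# 3) plain descent from the escaped point reaches the global minimum
w_star = descend(df, w_esc)
```

The penalty lifts the basin just visited above its surroundings, so ordinary descent on the penalized surface rolls over the barrier instead of returning to the same minimum.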

20.
Stochastic optimization algorithms like genetic algorithms (GAs) and particle swarm optimization (PSO) algorithms perform global optimization but waste computational effort by doing a random search. On the other hand deterministic algorithms like gradient descent converge rapidly but may get stuck in local minima of multimodal functions. Thus, an approach that combines the strengths of stochastic and deterministic optimization schemes but avoids their weaknesses is of interest. This paper presents a new hybrid optimization algorithm that combines the PSO algorithm and gradient-based local search algorithms to achieve faster convergence and better accuracy of final solution without getting trapped in local minima. In the new gradient-based PSO algorithm, referred to as the GPSO algorithm, the PSO algorithm is used for global exploration and a gradient based scheme is used for accurate local exploration. The global minimum is located by a process of finding progressively better local minima. The GPSO algorithm avoids the use of inertial weights and constriction coefficients which can cause the PSO algorithm to converge to a local minimum if improperly chosen. The De Jong test suite of benchmark optimization problems was used to test the new algorithm and facilitate comparison with the classical PSO algorithm. The GPSO algorithm is compared to four different refinements of the PSO algorithm from the literature and shown to converge faster to a significantly more accurate final solution for a variety of benchmark test functions.
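The GPSO idea, PSO for global exploration with gradient steps polishing the best point so that progressively better local minima are found, can be sketched in one dimension. The update rule here is a simplification (no inertia weight or constriction coefficient, per the abstract, but otherwise not the paper's exact scheme), the test function stands in for the De Jong suite, and all parameters are arbitrary.

```python
import math
import random

def f(x):
    # multimodal 1-D test function; global minimum f(0) = 0
    return 1.0 - math.cos(x) + x * x / 20.0

def df(x):
    return math.sin(x) + x / 10.0

def gpso(seed=3, n=20, iters=100):
    rng = random.Random(seed)
    pos = [rng.uniform(-30.0, 30.0) for _ in range(n)]
    vel = [0.0] * n
    pbest, pbest_f = pos[:], [f(x) for x in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest = pbest[g]
    for _ in range(iters):
        # gradient-based local exploration: polish the best point found so far
        x = gbest
        for _ in range(50):
            x -= 0.1 * df(x)
        if f(x) < f(gbest):
            gbest = x
        # PSO global exploration step (no inertia weight or constriction coefficient)
        for i in range(n):
            vel[i] = rng.random() * (pbest[i] - pos[i]) + rng.random() * (gbest - pos[i])
            pos[i] += vel[i]
            fx = f(pos[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], fx
                if fx < f(gbest):
                    gbest = pos[i]
    return gbest, f(gbest)

x, fx = gpso()
```

Interleaving the two phases means each local minimum the gradient reaches becomes the swarm's new attractor, so the search moves through progressively better minima rather than wandering randomly.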
