Similar Literature
20 similar documents found.
1.
This paper describes a new binary codification scheme for artificial neural networks, designed to generate neural networks automatically with any optimization method. Instead of directly mapping strings of bits onto network connectivities, this codification abstracts the binary encoding so that it does not reference the artificial indexing of network nodes; it uses shorter strings and avoids illegal points in the search space, yet does not exclude any legal neural network. With these goals in mind, an Abelian semigroup structure with a neutral element is obtained on the set of artificial neural networks, using an internal operation called superimposition that builds complex neural nets from minimal useful structures. The scheme preserves the significant property that similar neural networks differ in only one bit, which is desirable when using search algorithms. Experimental results using this codification with genetic algorithms are reported and compared with other codification methods in terms of convergence speed and the size of the networks obtained as solutions.

2.
Training a neural network is a difficult optimization problem because of numerous local minima. Many global search algorithms have been used to train neural networks. However, local search algorithms are more efficient with computational resources, and therefore numerous random restarts with a local algorithm may be more effective than a global algorithm. This study uses Monte-Carlo simulations to determine the efficiency of a local search algorithm relative to nine stochastic global algorithms when using a neural network on function approximation problems. The computational requirements of the global algorithms are several times higher than the local algorithm and there is little gain in using the global algorithms to train neural networks. Since the global algorithms only marginally outperform the local algorithm in obtaining a lower local minimum and they require more computational resources, the results in this study indicate that with respect to the specific algorithms and function approximation problems studied, there is little evidence to show that a global algorithm should be used over a more traditional local optimization routine for training neural networks. Further, neural networks should not be estimated from a single set of starting values whether a global or local optimization method is used.
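As a concrete illustration of the multi-start strategy argued for above, the sketch below runs a local optimizer from many random initializations of a tiny single-hidden-layer network and keeps the best local minimum found. The network size, data set and the use of SciPy's L-BFGS-B routine are assumptions made for illustration; the paper's networks, problems and local algorithm are not specified here.

    # Sketch: multiple random restarts of a local optimizer for a tiny MLP
    # (illustrative assumptions; not the paper's exact setup).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (200, 1))
    y = np.sin(np.pi * X).ravel()           # a simple function-approximation task
    H = 10                                   # hidden units (assumed)

    def unpack(w):
        W1 = w[:H].reshape(1, H); b1 = w[H:2*H]
        W2 = w[2*H:3*H].reshape(H, 1); b2 = w[3*H]
        return W1, b1, W2, b2

    def mse(w):
        W1, b1, W2, b2 = unpack(w)
        h = np.tanh(X @ W1 + b1)
        pred = (h @ W2).ravel() + b2
        return np.mean((pred - y) ** 2)

    best = None
    for restart in range(20):                # numerous random restarts
        w0 = rng.normal(scale=0.5, size=3*H + 1)
        res = minimize(mse, w0, method="L-BFGS-B")   # local search
        if best is None or res.fun < best.fun:
            best = res
    print("best local minimum found:", best.fun)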

3.
A Hybrid Neural-Evolutionary Algorithm for Solving the Maximum Independent Set Problem
李有梅  徐宗本  孙建永 《计算机学报》2003,26(11):1538-1545
A hybrid neural-evolutionary algorithm is proposed for the maximum independent set (MIS) problem. Based on a space-partitioning and "exclusion" strategy, the algorithm effectively combines the fast convergence of neural networks with the robust global search of genetic algorithms. Compared with the standard genetic algorithm and neural network algorithms, it shows markedly better global optimization behaviour and computational efficiency.

4.
In this paper, a novel approach to adjusting the weightings of fuzzy neural networks using a real-coded chaotic quantum-inspired genetic algorithm (RCQGA) is proposed. Fuzzy neural networks are traditionally trained with gradient-based methods, which may fall into local minima during learning. To overcome the problems encountered by conventional learning methods, an RCQGA is adopted because of its capability for directed random search toward global optima. It is well known, however, that the search speed of conventional quantum genetic algorithms (QGA) is not satisfactory. The RCQGA proposed here is based on the chaotic and coherent characteristics of Q-bits: real chromosomes are inversely mapped to Q-bits in the solution space, and Q-bit probability-guided real crossover and chaotic mutation are applied to the evolution and search of the real chromosomes. Chromosomes consisting of the weightings of the fuzzy neural network are coded as an adjustable vector with real-valued components that the RCQGA searches. Simulation results show that faster convergence of the evolution process in searching for an optimal fuzzy neural network can be achieved. Examples of nonlinear functions approximated by the fuzzy neural network via the RCQGA demonstrate the effectiveness of the proposed method.

5.
刘威  付杰  周定宁  王薪予  成秘  黄敏  靳宝  牛英杰 《控制与决策》2021,36(10):2339-2349
To address the weak optimization performance and low population diversity of the coyote optimization algorithm, a chaotic coyote optimization algorithm based on an inverse-time decay operator (ICCOA) is proposed. First, an inverse-time decay weight factor is introduced into the individual update step, which balances global exploration and local exploitation while speeding up the search. Second, a chaotic perturbation mechanism based on the Tent chaotic map is added: some of the poorer individuals in the population are remapped into new individuals, which increases population diversity. To verify the optimization capability of ICCOA, function-optimization tests are carried out in 10, 30 and 100 dimensions and compared with five other optimization algorithms; the results show that ICCOA has good optimization performance. Finally, ICCOA is applied to BP neural network parameter optimization, yielding a new neural network model (ICCOA-BP), which is compared with a standard BP network and a GA-based BP parameter optimization method on machine-learning classification tasks; the experimental results show that ICCOA-BP is efficient.
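A hedged sketch of the two mechanisms named in the abstract follows: an inverse-time decay weight that shrinks the step toward the current best individual as iterations progress, and a Tent-map perturbation that remaps the worst individuals to increase diversity. The concrete formulas, bounds, constants and the sphere objective are illustrative assumptions; ICCOA's actual update rule differs in detail.

    # Sketch of two ICCOA ingredients (illustrative assumptions, not the paper's exact update).
    import numpy as np

    rng = np.random.default_rng(1)

    def sphere(x):                       # stand-in benchmark objective
        return np.sum(x ** 2, axis=-1)

    def inverse_time_decay(t, w0=0.9, k=0.05):
        # weight factor that decays as iteration t grows (exploration -> exploitation)
        return w0 / (1.0 + k * t)

    def tent_map(z, mu=2.0):
        # classic Tent chaotic map on [0, 1]
        return np.where(z < 0.5, mu * z, mu * (1.0 - z))

    dim, pop, iters, lb, ub = 10, 30, 200, -5.0, 5.0
    P = rng.uniform(lb, ub, (pop, dim))
    for t in range(iters):
        fitness = sphere(P)
        best = P[np.argmin(fitness)]
        w = inverse_time_decay(t)
        # decayed move toward the current best individual
        P = P + w * rng.random((pop, dim)) * (best - P)
        # chaotic disturbance: remap the worst 20% of individuals via the Tent map
        worst = np.argsort(fitness)[-pop // 5:]
        z = (P[worst] - lb) / (ub - lb)          # normalize to [0, 1]
        P[worst] = lb + tent_map(np.clip(z, 0, 1)) * (ub - lb)
    print("best objective:", sphere(P).min())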

6.
An Optimization Method Based on Locally Evolved Hopfield Neural Networks
An optimization method based on a locally evolved Hopfield neural network is proposed. The method combines a genetic algorithm with a Hopfield network, overcoming both the Hopfield network's tendency to converge to local optima and the slow convergence of genetic algorithms. The Hopfield network first iterates its state equations to lower the network energy; after convergence, a genetic algorithm searches within a local neighbourhood to escape possible local optima, and the Hopfield network then continues to iterate and refine the solution. This locally evolved Hopfield optimization method is particularly suited to large-scale problems; results on image segmentation and a relatively large 200-city travelling salesman problem show clear improvements in global convergence rate and convergence speed.

7.
To remedy the weak local search ability of genetic algorithms, a genetic algorithm based on a diffusion operator (the diffusion genetic algorithm) is proposed. The diffusion operator included in the algorithm is a mutation operator whose main role is to perform local search during the genetic search. The diffusion genetic algorithm and a real-coded genetic algorithm were each used to train a neural network for the XOR problem; the comparison shows that the proposed algorithm combines strong global search ability with strong local search ability, so it can serve as a stand-alone neural network training algorithm without any auxiliary local search method, simplifying training and improving efficiency. The algorithm is of significance for improving both the search efficiency and the solution accuracy of genetic algorithms.
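The sketch below illustrates the general idea of training a small XOR network with a genetic algorithm that also applies a fine-step, diffusion-style mutation for local search. The 2-2-1 architecture, mutation scales and population sizes are assumptions made for illustration; the paper's diffusion operator and encoding are defined differently.

    # Sketch: GA training of a 2-2-1 XOR network with an extra fine-step mutation (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)

    def forward(w, x):
        W1, b1 = w[:4].reshape(2, 2), w[4:6]
        W2, b2 = w[6:8], w[8]
        h = np.tanh(x @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))

    def error(w):
        return np.mean((forward(w, X) - y) ** 2)

    pop = rng.normal(scale=1.0, size=(40, 9))
    for gen in range(300):
        errs = np.array([error(w) for w in pop])
        elite = pop[np.argsort(errs)[:20]]                        # keep the better half
        children = elite + rng.normal(scale=0.3, size=elite.shape)   # ordinary (global) mutation
        diffused = elite + rng.normal(scale=0.03, size=elite.shape)  # diffusion: fine local steps
        pop = np.vstack([elite[:1], children, diffused])[:40]
    print("final XOR error:", min(error(w) for w in pop))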

8.
Binary neural networks have clear advantages in speed, energy consumption and memory footprint, but they cause a considerable accuracy loss in deep network models. To address this problem, a "staged residual binarization" optimization algorithm for binary networks is proposed, in order to obtain binary neural network models with better accuracy. Combining stochastic quantization with XNOR-Net, two improved schemes, "stochastic weight binarization with approximation factors" and "deterministic weight binarization", are proposed, together with a new staged-residual-binarization training algorithm for BNNs, so that the recognition accuracy approaches that of a full-precision network. Experiments show that the proposed staged residual binarization algorithm effectively improves the training accuracy of binary models without adding any computation at test time, thereby preserving the binary network's advantages of high speed, small memory footprint and low energy consumption.
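The residual-binarization idea can be sketched as follows: the full-precision weight tensor is approximated in stages, each stage binarizing the residual left by the previous stages with its own scaling factor. The number of stages and the sign/mean-absolute-value scaling used here are illustrative assumptions and do not reproduce the paper's staged training procedure.

    # Sketch: staged residual binarization of a weight tensor (illustrative).
    import numpy as np

    def residual_binarize(W, stages=2):
        """Approximate W as a sum of scaled sign tensors, one per stage."""
        approx = np.zeros_like(W)
        residual = W.copy()
        parts = []
        for _ in range(stages):
            alpha = np.mean(np.abs(residual))      # scaling (approximation) factor
            B = np.sign(residual)                  # binary {-1, +1} tensor
            parts.append((alpha, B))
            approx += alpha * B
            residual = W - approx                  # the next stage binarizes what is left
        return approx, parts

    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 64))
    for s in (1, 2, 3):
        approx, _ = residual_binarize(W, stages=s)
        err = np.linalg.norm(W - approx) / np.linalg.norm(W)
        print(f"stages={s}: relative approximation error {err:.3f}")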

9.
There is no method to determine the optimal topology of multi-layer neural networks for a given problem; usually the designer selects a topology and then trains the network. Since determining the optimal topology of a neural network is an NP-hard problem, most existing algorithms for determining the topology are approximate. They can be classified into four main groups: pruning algorithms, constructive algorithms, hybrid algorithms and evolutionary algorithms. These algorithms can produce near-optimal solutions, but most use hill-climbing and may get stuck at local minima. In this article, we first introduce a learning automaton and study its behaviour, and then present an algorithm based on it, called the survival algorithm, for determining the number of hidden units of three-layer neural networks. The survival algorithm uses learning automata as a global search method to increase the probability of obtaining the optimal topology. The algorithm treats topology optimization as object partitioning rather than as searching or parameter optimization, as existing algorithms do. In the survival algorithm, training begins with a large network, and a near-optimal topology is then obtained by adding and deleting hidden units. The algorithm has been tested on a number of problems, and simulations show that the generated networks are near optimal.

10.
Evolutionary computation is a class of global search techniques based on the learning process of a population of potential solutions to a given problem, and it has been successfully applied to a variety of problems. In this paper a new approach to the construction of neural networks based on evolutionary computation is presented. A linear chromosome combined with a graph representation of the network is used by the genetic operators, which allow the architecture and the weights to evolve simultaneously without the need for local weight optimization. The paper describes the approach and the operators, and reports results of applying this technique to several binary classification problems.

11.
To improve the search efficiency of genetic algorithms on multi-objective combinatorial optimization problems, a new genetic local search algorithm is proposed. The algorithm adopts a parallel local search strategy over non-dominated solutions and a dispersion-based elitist selection strategy, and uses the NSGA-II fitness assignment scheme together with a binary roulette-wheel selection operation, in order to improve convergence and maintain population diversity. Experimental results show that the new algorithm produces a larger number of more widely distributed approximate Pareto-optimal solutions.

12.
陈国龙 《计算机科学》2002,29(11):141-143
1 Introduction. A basic requirement in designing a computer communication network is overall network availability, i.e. the probability of connectivity. From the network point of view, the connectivity probability refers to the network being at least simply connected; besides depending on the individual computer systems and their communication capabilities, it depends mainly on the topological design of the communication links. Many heuristic algorithms have been proposed for topology optimization that maximizes the overall reliability of a given computer communication network, but they do not yield exact solutions. This paper applies a genetic algorithm to the design and successfully solves this class of problems.

13.
Deep feedforward neural networks have been applied successfully to classification and regression problems, but their performance depends heavily on the network structure and hyperparameters. To obtain high-performance networks, the selection strategy of the genetic algorithm is first improved, and the improved algorithm is then used, with a mixed binary/real encoding, to optimize the number of layers, the number of nodes per layer, the learning rate and the weights of a deep feedforward network. In the improved selection strategy, building on the elitist-preservation strategy, some worse-fitness individuals are selected with a certain probability from the merged pool of 2n parents and offspring to form the new parent population, which increases population diversity and avoids premature convergence to local optima. Dropout is also introduced to reduce overfitting to the training data. Numerical experiments on seven data sets (Ring, Breast cancer, Twonorm, Heart, Blood, Ionosphere and Monk), with comparisons against algorithms from related work, show that the improved genetic algorithm can find higher-performance neural networks.
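A minimal sketch of the modified selection step described above: parents and offspring are merged into a pool of 2n individuals, and the next parent population is filled mostly elitistically but, with a small probability, takes individuals from the worse half to preserve diversity. The probability value, toy fitness function and real-vector encoding are assumptions made for illustration.

    # Sketch: elitist selection that occasionally admits worse individuals (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)

    def select_next_parents(parents, offspring, fitness_fn, p_worse=0.1):
        """Pick n new parents from the merged 2n pool; mostly the best,
        but with probability p_worse take one of the worse individuals instead."""
        n = len(parents)
        pool = np.vstack([parents, offspring])                  # 2n individuals
        order = np.argsort([fitness_fn(ind) for ind in pool])   # ascending = better first
        better, worse = order[:n], order[n:]
        chosen = []
        for i in range(n):
            if rng.random() < p_worse:
                chosen.append(pool[rng.choice(worse)])           # diversity-preserving pick
            else:
                chosen.append(pool[better[i]])                   # elitist pick
        return np.array(chosen)

    # toy usage: minimize the sphere function over 8-dimensional real vectors
    fitness = lambda x: np.sum(x ** 2)
    parents = rng.uniform(-1, 1, (20, 8))
    offspring = parents + rng.normal(scale=0.1, size=parents.shape)
    parents = select_next_parents(parents, offspring, fitness)
    print("best fitness after selection:", min(fitness(p) for p in parents))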

14.
林哲  全海燕 《计算机仿真》2020,37(3):270-274
In BP neural network training, weight optimization is prone to getting trapped at local extrema and converging slowly. Many studies introduce intelligent optimization algorithms to address this, but traditional intelligent optimization algorithms usually have several control parameters; if the parameters are not chosen correctly, or the initial points are not placed appropriately, it is difficult to find optimal network weights. To solve these problems, a BP neural network learning algorithm based on simplex evolution is proposed. It reduces the number of control parameters through fully random search and preserves particle diversity through a multi-role population, avoiding convergence to local extrema and reducing the dependence on initial values. The algorithm is applied to neural network training; tests on UCI data sets and face images show that networks trained by the proposed algorithm achieve improved recognition rates and training efficiency.

15.
An improved strategy for GAs in structural optimization
An improved strategy for genetic algorithms in structural optimization is presented in this paper. In the improved genetic algorithm, the notions of feasible and infeasible individual strings, and the related space of individual strings, are defined. When initializing the population and generating the individual strings of subsequent generations, only feasible individual strings are chosen. Structural approximation analysis by artificial neural networks is also adopted in order to reduce the expensive computation arising from constraint evaluations. The effectiveness of the improved strategy is shown by numerical examples of the optimum weights of five- and 10-bar structures.

16.
GENITOR is a genetic algorithm which employs one-at-a-time reproduction and allocates reproductive opportunities according to rank to achieve selective pressure. Theoretical arguments and empirical evidence suggest that GENITOR is less vulnerable to some of the biases that degrade performance in standard genetic algorithms.

A distributed version of GENITOR which uses many smaller distributed populations in place of a single large population is introduced. GENITOR II is able to optimize a broad range of sample problems more accurately and more consistently than GENITOR with a single population. GENITOR II also appears to be more robust than a single-population genetic algorithm, yielding better performance without parameter tuning. We present some preliminary analyses to explain the performance advantage of the distributed algorithm. A distributed search is shown to yield improved search on several classes of problems, including binary-encoded feedforward neural networks, the Traveling Salesman Problem, and a set of 'deceptive problems' specially designed to be hard for genetic algorithms.
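A hedged sketch of GENITOR-style reproduction follows: parents are chosen by linear rank rather than raw fitness, one offspring is produced at a time, and it replaces the worst member of the population. The ranking bias, crossover and mutation rates, and the one-max objective are illustrative assumptions, not GENITOR's published parameter settings.

    # Sketch: rank-based, one-at-a-time (steady-state) reproduction in the GENITOR spirit.
    import numpy as np

    rng = np.random.default_rng(0)
    L, N = 32, 50
    pop = rng.integers(0, 2, (N, L))
    fitness = lambda s: s.sum()                  # toy objective: maximize number of ones

    def rank_select(scores, bias=1.5):
        # linear ranking: higher-ranked individuals get proportionally more chances
        order = np.argsort(scores)               # worst ... best
        ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
        probs = (2 - bias) / len(scores) + 2 * ranks * (bias - 1) / (len(scores) * (len(scores) - 1))
        return rng.choice(len(scores), p=probs / probs.sum())

    for step in range(2000):
        scores = np.array([fitness(s) for s in pop])
        a, b = rank_select(scores), rank_select(scores)
        cut = rng.integers(1, L)                                  # one-point crossover
        child = np.concatenate([pop[a][:cut], pop[b][cut:]])
        flip = rng.random(L) < 0.02                               # light bit-flip mutation
        child[flip] ^= 1
        pop[np.argmin(scores)] = child                            # replace the worst, one at a time
    print("best individual fitness:", max(fitness(s) for s in pop))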

17.
A.  H.  J.  I.  O.  B. 《Neurocomputing》2009,72(16-18):3541
The design of radial basis function neural networks (RBFNNs) remains a difficult task when they are applied to classification or regression problems. The difficulty arises when the parameters that define an RBFNN have to be set: the number of RBFs, the position of their centers and the length of their radii. Another issue when applying these models to real-world applications is selecting the variables that the RBFNN will use as inputs. The literature presents several methodologies to perform these two tasks separately; however, owing to the intrinsic parallelism of genetic algorithms, a parallel implementation allows the algorithm proposed in this paper to evolve solutions for both problems at the same time. The parallelization covers not only the evolution of the two problems but also the specialization of the crossover and mutation operators, so that they evolve the different elements to be optimized when designing RBFNNs. The underlying genetic algorithm is the non-dominated sorting genetic algorithm II (NSGA-II), which helps to keep a balance between the size of the network and its approximation accuracy in order to avoid overfitted networks. Another novelty of the proposed algorithm is the incorporation of local search in three stages: initialization of the population, evolution of the individuals and final optimization of the Pareto front. The initialization of the individuals hybridizes clustering techniques with mutual information (MI) theory to select the input variables. As the experiments show, the synergy of the different paradigms and techniques combined by the presented algorithm allows very accurate models to be obtained using the most significant input variables.

18.
Advances in Bio-inspired Intelligent Algorithms and Their Applications to Network Optimization
Network optimization problems form a special class of combinatorial optimization problems; for many of them no polynomial-time algorithm for the optimal solution is known, so they are NP-hard. Bio-inspired intelligent algorithms mainly imitate biological evolution and the collective intelligence of biological populations, e.g. artificial neural networks, genetic algorithms, DNA molecular algorithms and ant algorithms; they show distinctive advantages in tackling NP-hard problems and have produced many fruitful results. This paper therefore systematically surveys recent advances in bio-inspired intelligent algorithms and their applications to network optimization, and discusses future research directions.

19.
Projection pursuit optimized by a genetic algorithm is first used to reduce the dimensionality of the neural network learning matrix; Bagging and different neural network learning algorithms are then used to generate ensemble members, and projection pursuit evolved by the genetic algorithm is applied again to combine the individual networks. A neural network ensemble model based on GA-optimized projection pursuit is thus established. A case study on the opening and closing prices of the Shanghai Composite Index shows that the method has good learning and generalization ability, achieving high prediction accuracy and stability in stock market forecasting.

20.
Application of Neural Networks and an Improved Particle Swarm Algorithm to Earthquake Prediction
An earthquake prediction method based on a neural network and an improved particle swarm optimization algorithm is proposed. A feedforward neural network serves as the prediction model for earthquake magnitude, and an improved particle swarm algorithm is introduced to adjust the connection weights of the network. To design an inertia weight that achieves the best balance between global and local search, the particle swarm algorithm is improved using the idea of dynamic particle mutation, yielding a dynamic-mutation particle swarm optimization algorithm, which is applied to optimizing the neural network model for magnitude prediction. In simulation experiments, the proposed method is compared with two feedforward-network prediction methods that use different algorithms. The results show that the proposed optimization algorithm converges fastest and the resulting model has the smallest prediction error and the strongest generalization ability, providing a useful reference for medium-term earthquake prediction.
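The sketch below illustrates the general shape of the improved PSO described above: the inertia weight is adapted over the iterations and particles are occasionally mutated to maintain diversity, with the particle vector standing in for a feedforward network's flattened weights. The linearly decreasing inertia, mutation rate and toy objective are assumptions made for illustration, not the paper's exact dynamic-mutation scheme.

    # Sketch: PSO with adaptive inertia weight and particle mutation (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    dim, swarm, iters = 20, 30, 300            # dim plays the role of flattened NN weights

    def objective(w):                           # stand-in for the network's prediction error
        return np.sum(w ** 2)

    X = rng.uniform(-1, 1, (swarm, dim)); V = np.zeros_like(X)
    pbest = X.copy(); pbest_val = np.array([objective(x) for x in X])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for t in range(iters):
        w_inertia = 0.9 - 0.5 * t / iters       # decreases: global search -> local search
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w_inertia * V + 2.0 * r1 * (pbest - X) + 2.0 * r2 * (gbest - X)
        X = X + V
        mutate = rng.random(swarm) < 0.05       # mutation-style diversity injection
        X[mutate] += rng.normal(scale=0.5, size=(mutate.sum(), dim))
        vals = np.array([objective(x) for x in X])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    print("best error found:", pbest_val.min())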
