Similar Literature
20 similar documents found (search time: 46 ms)
1.
To address the premature convergence and initialization sensitivity that arise when biogeography-based optimization is used to train multi-layer perceptrons, a multi-layer perceptron training method based on differential-evolution-enhanced biogeography-based optimization is proposed. Biogeography-based Optimization (BBO) is combined with the Differential Evolution (DE) algorithm to form an improved hybrid DE_BBO algorithm; the improved DE_BBO is then used to train a Multi-Layer Perceptron (MLP) and applied to four classification datasets: iris, breast cancer, blood transfusion, and banknote authentication. Comparison with the experimental results of six mainstream heuristic algorithms, including BBO, PSO, GA, ACO, ES, and PBIL, shows that the DE_BBO_MLP algorithm outperforms existing methods in classification accuracy and convergence speed.
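A minimal sketch of how such a population-based trainer evaluates candidate MLPs: each individual is a flattened weight vector, and its fitness is the classification error on training data. The network size, encoding, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mlp_forward(w, X, n_in, n_hidden, n_out):
    """One-hidden-layer MLP; w is a flat weight vector (illustrative encoding)."""
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = w[i:i + n_out]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def fitness(w, X, y, n_in, n_hidden, n_out):
    """Classification error rate: the quantity a BBO/DE-style trainer minimizes."""
    pred = np.argmax(mlp_forward(w, X, n_in, n_hidden, n_out), axis=1)
    return float(np.mean(pred != y))
```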

2.

Water cycle algorithm (WCA) is a new population-based meta-heuristic technique. It was originally inspired by the idealized hydrological cycle observed in nature. The conventional WCA demonstrates superior performance compared to other well-established techniques in solving both constrained and unconstrained problems. As with other meta-heuristics, however, premature convergence to local optima may still occur on some specific optimization tasks. Motivated by the chaotic behavior seen in the real water cycle, this article incorporates chaotic patterns into the stochastic processes of WCA to improve the performance of the conventional algorithm and to mitigate its premature convergence problem. First, different chaotic signal functions along with various chaotic-enhanced WCA strategies (39 meta-heuristics in total) are implemented, and the best signal is selected as the most appropriate chaotic technique for modifying WCA. Second, the chaotic algorithm is employed to tackle various benchmark problems published in the specialized literature as well as the training of neural networks. The comparative statistical results of the new technique demonstrate that the premature convergence problem is relieved significantly. Chaotic WCA with a sinusoidal map and chaotic-enhanced operators not only exploits high-quality solutions efficiently but also outperforms the WCA optimizer and the other investigated algorithms.
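The core idea is to replace uniform random draws in a stochastic operator with a deterministic chaotic sequence. The sketch below uses the standard sinusoidal map; the seed, coefficient, and the generic update it feeds are illustrative assumptions rather than the paper's exact operators.

```python
import numpy as np

def sinusoidal_map(x0=0.7, a=2.3, n=100):
    """Generate n values in (0, 1) from the sinusoidal chaotic map
    x_{k+1} = a * x_k**2 * sin(pi * x_k)."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = a * x * x * np.sin(np.pi * x)
        seq[k] = x
    return seq

# Illustrative use: perturb a candidate solution with chaotic numbers instead
# of np.random.rand (a generic stochastic update, not WCA-specific).
chaos = sinusoidal_map(n=10)
position = np.zeros(10)
position = position + 0.1 * (2 * chaos - 1)   # map (0, 1) chaos values to (-0.1, 0.1)
```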

  相似文献   

3.
This paper proposes a framework for constructing and training radial basis function (RBF) neural networks. The proposed growing radial basis function (GRBF) network begins with a small number of prototypes, which determine the locations of radial basis functions. In the process of training, the GRBF network grows by splitting one of the prototypes at each growing cycle. Two splitting criteria are proposed to determine which prototype to split in each growing cycle. The proposed hybrid learning scheme provides a framework for incorporating existing algorithms in the training of GRBF networks. These include unsupervised algorithms for clustering and learning vector quantization, as well as learning algorithms for training single-layer linear neural networks. A supervised learning scheme based on the minimization of the localized class-conditional variance is also proposed and tested. GRBF neural networks are evaluated and tested on a variety of data sets with very satisfactory results.
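One way to realize such a growing cycle is to split the prototype whose assigned samples show the largest spread into two slightly perturbed copies. The splitting criterion and perturbation below are illustrative assumptions, not the specific criteria proposed in the paper.

```python
import numpy as np

def split_prototype(prototypes, X, assign, eps=1e-2):
    """Split the prototype whose assigned samples have the largest variance
    into two slightly perturbed copies (one growing cycle; cluster spread is
    the assumed splitting criterion)."""
    spread = np.array([X[assign == j].var() if np.any(assign == j) else 0.0
                       for j in range(len(prototypes))])
    j = int(np.argmax(spread))                 # prototype chosen for splitting
    d = np.random.randn(prototypes.shape[1])
    d *= eps / np.linalg.norm(d)               # small random split direction
    return np.vstack([np.delete(prototypes, j, axis=0),
                      prototypes[j] + d, prototypes[j] - d])
```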

4.
Presents a systematic approach for constructing reformulated radial basis function (RBF) neural networks, which was developed to facilitate their training by supervised learning algorithms based on gradient descent. This approach reduces the construction of radial basis function models to the selection of admissible generator functions. The selection of generator functions relies on the concept of the blind spot, which is introduced in the paper. The paper also introduces a new family of reformulated radial basis function neural networks, which are referred to as cosine radial basis functions. Cosine radial basis functions are constructed by linear generator functions of a special form, and their use as similarity measures in radial basis function models is justified by their geometric interpretation. A set of experiments on a variety of datasets indicates that cosine radial basis functions considerably outperform conventional radial basis function neural networks with Gaussian radial basis functions. Cosine radial basis functions are also strong competitors to existing reformulated radial basis function models trained by gradient descent and feedforward neural networks with sigmoid hidden units.

5.
The multi-layer perceptron (MLP) is a method for handling classification problems; it can achieve nonlinear, high-dimensional classification and has good scalability. However, in the training of a traditional MLP, the quality of the classification result is closely tied to parameter selection, and the parameter selection of traditional algorithms has many shortcomings. Using swarm intelligence algorithms to replace the traditional multi-layer perceptron trainer is one solution. The grey wolf optimizer (GWO) is one such algorithm, balancing a high level of exploration and exploitation. However, GWO...

6.
This paper proposes a novel high-order associative memory system (AMS) based on the discrete Taylor series (DTS). The mathematical foundation for the new AMS scheme is derived, three training algorithms are proposed, and the convergence of learning is proved. The DTS-AMS thus developed is capable of implementing error-free approximation to multivariable polynomial functions of arbitrary order. Compared with cerebellar model articulation controllers and radial basis function neural networks, it provides higher learning precision and a lower memory requirement. Furthermore, it requires less training computation and offers a faster convergence rate than that attainable by a multilayer perceptron. Numerical simulations show that the proposed DTS-AMS is effective in higher-order function approximation and has potential in practical applications.

7.

Feature selection (FS) methods are necessary to develop intelligent analysis tools that require data preprocessing and to enhance the performance of machine learning algorithms. FS aims to maximize the classification accuracy while minimizing the number of selected features. This paper presents a new FS method using a modified slime mould algorithm (SMA) based on the firefly algorithm (FA). In the developed SMAFA, FA is adopted to improve the exploration of SMA, since it has a high ability to discover feasible regions that contain optimal solutions. This enhances convergence by increasing the quality of the final output. SMAFA is evaluated using twenty UCI datasets, with comprehensive comparisons against a number of existing meta-heuristic (MH) algorithms. To further assess the applicability of SMAFA, two high-dimensional datasets related to QSAR modeling are used. Experimental results verified the promising performance of SMAFA using different performance measures.
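Wrapper-style FS methods of this kind typically score a binary feature mask by a weighted combination of classification error and the fraction of features kept. The weighting, classifier, and cross-validation setup below are illustrative assumptions, not SMAFA's exact objective.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fs_fitness(mask, X, y, alpha=0.99):
    """Score a binary feature mask: alpha * error + (1 - alpha) * feature ratio.
    Lower is better. The kNN wrapper and alpha value are illustrative choices."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:
        return 1.0                         # selecting nothing is the worst case
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, selected], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1 - alpha) * selected.size / X.shape[1]
```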


8.
This paper investigates new learning algorithms (LF I and LF II) based on a Lyapunov function for the training of feedforward neural networks. Such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, in which the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima. This modification also helps to improve the convergence speed in some cases. Conditions for achieving the global minimum with this kind of algorithm have been studied in detail. The performances of the proposed algorithms are compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. It is found that the proposed algorithms (LF I and II) converge much faster than the other two algorithms for the same accuracy. Finally, a comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.

9.
This work is a seminal attempt to address the drawbacks of the recently proposed monarch butterfly optimization (MBO) algorithm. This algorithm suffers from premature convergence, which makes it less suitable for solving real-world problems. The position updating of MBO is modified to involve previous solutions in addition to the best solution obtained thus far. To prove the efficiency of the improved MBO (IMBO), a set of 23 well-known test functions is employed. The statistical results show that IMBO benefits from high local optima avoidance and fast convergence speed, which help this algorithm outperform the basic MBO and another recent variant called greedy strategy and self-adaptive crossover operator MBO (GCMBO). The results of the proposed algorithm are compared with nine other approaches in the literature for verification. The comparative analysis shows that IMBO provides very competitive results and tends to outperform current algorithms. To demonstrate the applicability of IMBO to challenging practical problems, it is also employed to train neural networks. The IMBO-based trainer is tested on 15 popular classification datasets obtained from the University of California at Irvine (UCI) Machine Learning Repository. The results are compared to a variety of techniques in the literature, including the original MBO and GCMBO. It is observed that IMBO improves the learning of neural networks significantly, proving the merits of this algorithm for solving challenging problems.

10.
This paper introduces ANASA (adaptive neural algorithm of stochastic activation), a new, efficient reinforcement learning algorithm for training neural units and networks with continuous output. The proposed method employs concepts found in self-organizing neural network theory and in reinforcement estimator learning algorithms to extract and exploit information relative to previous input pattern presentations. In addition, it uses an adaptive learning rate function and a self-adjusting stochastic activation to accelerate the learning process. A form of optimal performance of the ANASA algorithm is proved (under a set of assumptions) via strong convergence theorems and concepts. Experimentally, the new algorithm yields results that are superior to existing associative reinforcement learning methods in terms of accuracy and convergence rate. The rapid convergence rate of ANASA is demonstrated in a simple learning task, when it is used as a single neural unit, and in mathematical function modeling problems, when it is used to train various multilayered neural networks.

11.
The present paper proposes a new stochastic optimization algorithm as a hybridization of a relatively recent stochastic optimization algorithm, called biogeography-based optimization (BBO), with the differential evolution (DE) algorithm. This combination incorporates the DE algorithm into the optimization procedure of BBO in an attempt to introduce diversity and overcome stagnation at local optima. We also propose an additional selection procedure for BBO, which preserves fitter habitats for subsequent generations. The proposed variation of BBO, named DBBO, is tested on several benchmark function optimization problems. The results show that DBBO can significantly outperform the basic BBO algorithm and mostly emerges as the best-performing algorithm among the competing BBO and DE algorithms.
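A standard way to inject DE-style diversity into a BBO procedure is a DE/rand/1 mutation over habitat vectors. The sketch below shows that mutation in isolation; the scale factor and how it is interleaved with migration are illustrative assumptions, not the exact DBBO procedure.

```python
import numpy as np

def de_rand_1(population, i, F=0.5):
    """DE/rand/1 mutant for habitat i: x_r1 + F * (x_r2 - x_r3), built from
    three distinct habitats other than i (F is an illustrative scale factor).
    population is an (n, dim) array of habitat vectors."""
    n = len(population)
    r1, r2, r3 = np.random.choice([j for j in range(n) if j != i], 3, replace=False)
    return population[r1] + F * (population[r2] - population[r3])
```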

12.
Some recent research reports that a dendritic neuron model (DNM) can achieve better performance than traditional artificial neural networks (ANNs) on classification, prediction, and other problems when its parameters are well tuned by a learning algorithm. However, the back-propagation (BP) algorithm, the most commonly used learning algorithm, intrinsically suffers from slow convergence and easily falls into local minima. Therefore, more and more research adopts non-BP learning algorithms to train ANNs. In this paper, a dynamic scale-free network-based differential evolution (DSNDE) is developed by considering the demands of convergence speed and the ability to escape local minima. The performance of a DSNDE-trained DNM is tested on 14 benchmark datasets and a photovoltaic power forecasting problem. Nine meta-heuristic algorithms are used for comparison, including the champion of the 2017 IEEE Congress on Evolutionary Computation (CEC2017) benchmark competition, the effective butterfly optimizer with covariance matrix adapted retreat phase (EBOwithCMAR). The experimental results reveal that DSNDE achieves better performance than its peers.

13.
The biogeography-based optimisation (BBO) algorithm is a novel evolutionary algorithm inspired by biogeography. Similarly to other evolutionary algorithms, entrapment in local optima and slow convergence speed are two probable problems it encounters in solving challenging real problems. Due to the novelty of this algorithm, however, there is little in the literature regarding alleviating these two problems. Chaotic maps are one of the best methods to improve the performance of evolutionary algorithms in terms of both local optima avoidance and convergence speed. In this study, we utilise ten chaotic maps to enhance the performance of the BBO algorithm. The chaotic maps are employed to define selection, emigration, and mutation probabilities. The proposed chaotic BBO algorithms are benchmarked on ten test functions. The results demonstrate that the chaotic maps (especially the Gauss/mouse map) are able to significantly boost the performance of BBO. In addition, the results show that the combination of chaotic selection and emigration operators results in the highest performance.
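The Gauss/mouse map mentioned above is usually written as x_{k+1} = 0 if x_k = 0, otherwise 1/x_k mod 1, and its values can be used in place of fixed probabilities in the BBO operators. The way it is wired into the emigration probability below is an illustrative assumption, not the paper's exact mapping.

```python
import numpy as np

def gauss_mouse_map(x0=0.7, n=100):
    """Standard Gauss/mouse chaotic map: x_{k+1} = 1/x_k mod 1 (0 maps to 0)."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = 0.0 if x == 0 else (1.0 / x) % 1.0
        seq[k] = x
    return seq

# Illustrative use: drive each habitat's emigration probability with the
# chaotic sequence instead of a fixed, fitness-ranked value.
emigration_prob = gauss_mouse_map(n=50)   # one value per habitat, in [0, 1)
```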

14.

A new hybrid genetic algorithm with significantly improved convergence performance is proposed in this study. This algorithm comes from the incorporation of a modified microgenetic algorithm with a local optimizer based on the heuristic pattern move. The hybridization process is implemented by replacing the two worst individuals in the offspring obtained from the conventional genetic operations with two new individuals generated by the local optimizer in each generation. Some implementation-related problems, such as the selection of control parameters in the local optimizer, are addressed in detail. This new algorithm has been examined using six benchmark functions and is compared, in terms of convergence performance, with conventional genetic algorithms without the local optimizer as well as with hybrid algorithms incorporating the hill-climbing method. The results show that the proposed hybrid algorithm is more effective and efficient at obtaining the global optimum. It takes about 6.4%-74.4% of the number of generations normally required by conventional genetic algorithms to obtain the global optimum, while the computational cost of reproducing each new generation hardly increases compared to conventional genetic algorithms. Another advantage of this new algorithm is that the implementation process is very simple and straightforward. There are no extra function evaluations or other complex calculations involved in the added local optimizer or in the hybridization process. This makes the new algorithm easy to incorporate into existing genetic algorithm software packages so as to further improve their performance. As an engineering example, this new algorithm is applied to the detection of a crack in a composite plate, which demonstrates its effectiveness in solving practical engineering problems.
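The hybridization step itself is simple: after the usual genetic operations, the two worst offspring are overwritten with the two candidates returned by the local optimizer. The sketch below assumes minimization and a caller-supplied local_search function seeded from the replaced individuals; both are illustrative choices, not the paper's exact pattern-move optimizer.

```python
import numpy as np

def hybridize(offspring, fitnesses, local_search):
    """Replace the two worst offspring (largest fitness, minimization assumed)
    with individuals produced by the local optimizer (illustrative sketch)."""
    worst = np.argsort(fitnesses)[-2:]      # indices of the two worst offspring
    for idx in worst:
        offspring[idx] = local_search(offspring[idx])
    return offspring
```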

15.
This paper presents an axiomatic approach for constructing radial basis function (RBF) neural networks. This approach results in a broad variety of admissible RBF models, including those employing Gaussian RBFs. The form of the RBFs is determined by a generator function. New RBF models can be developed according to the proposed approach by selecting generator functions other than exponential ones, which lead to Gaussian RBFs. This paper also proposes a supervised learning algorithm based on gradient descent for training reformulated RBF neural networks constructed using the proposed approach. A sensitivity analysis of the proposed algorithm relates the properties of RBFs with the convergence of gradient descent learning. Experiments involving a variety of reformulated RBF networks generated by linear and exponential generator functions indicate that gradient descent learning is simple, easily implementable, and produces RBF networks that perform considerably better than conventional RBF models trained by existing algorithms.

16.
Training of recurrent neural networks (RNNs) introduces considerable computational complexity due to the need for gradient evaluations. How to achieve fast convergence and low computational complexity remains a challenging and open topic. Besides, the transient response of the learning process of RNNs is a critical issue, especially for online applications. Conventional RNN training algorithms such as backpropagation through time and real-time recurrent learning have not adequately satisfied these requirements because they often suffer from slow convergence. If a large learning rate is chosen to improve performance, the training process may become unstable in terms of weight divergence. In this paper, a novel RNN training algorithm, named robust recurrent simultaneous perturbation stochastic approximation (RRSPSA), is developed with a specially designed recurrent hybrid adaptive parameter and adaptive learning rates. RRSPSA is a powerful twin-engine simultaneous perturbation stochastic approximation (SPSA) type of RNN training algorithm. It utilizes three specially designed adaptive parameters to maximize training speed for a recurrent training signal while exhibiting certain weight convergence properties with only two objective function measurements, as in the original SPSA algorithm. RRSPSA is proved to have guaranteed weight convergence and system stability in the sense of a Lyapunov function. Computer simulations were carried out to demonstrate the applicability of the theoretical results.

17.
By analyzing the performance shortcomings of the biogeography-based optimization (BBO) algorithm, a dual biogeography-based optimization algorithm (DuBBO) based on mixed convex migration and best-guided Cauchy mutation is proposed. In the migration operator, a dynamic mixed convex migration operator is adopted so that the algorithm can converge quickly toward the optimal solution; in the mutation mechanism, a best-guided mutation strategy is adopted and Cauchy-distributed random numbers are added to help the algorithm escape local optima; finally, a dual learning strategy is integrated into the algorithm, accelerating convergence and improving search ability. Experimental results on 23 benchmark functions demonstrate the effectiveness and necessity of the three proposed improvement strategies. Finally, DuBBO is compared with BBO and six other excellent improved algorithms. The experimental results show that DuBBO has the best overall performance, faster convergence speed, and higher convergence accuracy.
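A best-guided Cauchy mutation of the kind described above can be sketched as pulling a habitat toward the current best solution and perturbing it with heavy-tailed Cauchy noise; the step sizes below are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def cauchy_mutation_toward_best(x, best, scale=0.1):
    """Move x toward the current best solution and add Cauchy-distributed noise,
    whose heavy tails occasionally produce large jumps that help escape local optima."""
    direction = best - x
    noise = scale * np.random.standard_cauchy(size=x.shape)
    return x + np.random.rand() * direction + noise
```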

18.
Stochastic learning automata and genetic algorithms (GAs) have previously been shown to have valuable global optimization properties. Learning automata have, however, been criticized for having a relatively slow rate of convergence. In this paper, these two techniques are combined to increase the rate of convergence of the learning automata and also to improve the chances of escaping local optima. The technique separates the genotype and phenotype properties of the GA and has the advantage that the degree of convergence can be quickly ascertained. It also provides the GA with a stopping rule. If the technique is applied to real-valued function optimization problems, then bounds on the range within which the global optimum is expected can be determined throughout the search process. The technique is demonstrated through a number of bit-based and real-valued function optimization examples.

19.

In the present study, a new algorithm is developed for neural network training by combining a gradient-based and a meta-heuristic algorithm. The new algorithm benefits from simultaneous local and global search, eliminating the problem of getting stuck in a local optimum. For this purpose, first the global search ability of the grey wolf optimizer (GWO) is improved with the Levy flight, a random walk in which the jump size follows the Levy distribution, which results in a more efficient global search of the search space thanks to the long jumps. Then, this improved algorithm is combined with backpropagation (BP) to exploit the enhanced global search ability of GWO and the local search ability of the BP algorithm in training the neural network. The performance of the proposed algorithm has been evaluated by comparing it against a number of well-known meta-heuristic algorithms using twelve classification and function-approximation datasets.
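Levy-flight steps are commonly generated with Mantegna's algorithm, sketched below; the exponent beta and the way the step perturbs a wolf's position are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm (heavy-tailed jumps)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# Illustrative use: perturb a wolf's position with a Levy flight around the
# current alpha (best) wolf.
dim = 10
alpha_pos = np.zeros(dim)
wolf_pos = np.random.rand(dim)
new_pos = wolf_pos + 0.01 * levy_step(dim) * (wolf_pos - alpha_pos)
```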


20.
Due to its simplicity and ease of use, the standard grey wolf optimizer (GWO) is attracting much attention. However, due to its imperfect search structure and the risk of being trapped in local optima, its application has been limited. To improve the performance of the algorithm, an optimized GWO is proposed based on a mutation operator and an eliminating-reconstructing mechanism (MR-GWO). By analyzing GWO, it is found that it conducts the search with only three leading wolves at the core and balances the exploration and exploitation abilities by adjusting only the parameter a, which means the wolves lose diversity to some extent. Therefore, a mutation operator is introduced to help the wolves search better, and an eliminating-reconstructing mechanism is applied to the poorly performing wolves, which not only effectively expands the stochastic search but also accelerates convergence; these two operations complement each other well. To verify its validity, MR-GWO is applied to a global optimization experiment on 13 standard continuous functions and a radial basis function (RBF) network approximation experiment. Through comparison with other algorithms, it is shown that MR-GWO has a strong advantage.
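An eliminating-reconstructing step of this kind can be sketched as discarding the worst fraction of the pack each generation and reinitializing those wolves uniformly within the search bounds; the fraction and bounds below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def eliminate_and_reconstruct(wolves, fitnesses, lower, upper, frac=0.2):
    """Reinitialize the worst `frac` of the pack uniformly within [lower, upper]
    (minimization assumed); this restores diversity lost around the leaders."""
    n, dim = wolves.shape
    k = max(1, int(frac * n))
    worst = np.argsort(fitnesses)[-k:]        # indices of the k worst wolves
    wolves[worst] = np.random.uniform(lower, upper, size=(k, dim))
    return wolves
```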

