Similar Documents
20 similar documents found (search time: 265 ms)
1.
This paper proposes a new hybrid approach for recurrent neural networks (RNN). The basic idea is to train the input layer by unsupervised learning and the output layer by supervised learning. The Kohonen algorithm is used for the unsupervised stage, and a dynamic gradient descent method for the supervised stage. The performance of the proposed algorithm is compared with backpropagation through time (BPTT) on three benchmark problems. Simulation results show that the proposed algorithm outperforms standard BPTT, reducing both the total number of iterations and the training time required.
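A minimal sketch of the two-stage idea, assuming a winner-take-all Kohonen layer and a simple delta rule for the supervised stage (the paper's dynamic gradient descent on a recurrent output layer is simplified here to a static readout; data and layer sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: map 2-D points to a scalar target.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# --- Stage 1: unsupervised Kohonen (winner-take-all) input layer ---
n_units = 16
W_in = rng.uniform(-1, 1, size=(n_units, 2))      # codebook vectors
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                   # decaying learning rate
    for x in X:
        winner = np.argmin(np.linalg.norm(W_in - x, axis=1))
        W_in[winner] += lr * (x - W_in[winner])   # move winner toward input

def hidden(x):
    # One-hot winner activation (simplest Kohonen-layer output).
    h = np.zeros(n_units)
    h[np.argmin(np.linalg.norm(W_in - x, axis=1))] = 1.0
    return h

# --- Stage 2: supervised gradient descent on the output layer only ---
w_out = np.zeros(n_units)
for epoch in range(50):
    for x, t in zip(X, y):
        h = hidden(x)
        err = t - w_out @ h
        w_out += 0.1 * err * h                    # delta rule

pred = np.array([w_out @ hidden(x) for x in X])
print("train MSE:", np.mean((pred - y) ** 2))
```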

2.
Neural Computing and Applications - Recurrent neural networks (RNNs) have achieved state-of-the-art performances on various applications. However, RNNs are prone to be memory-bandwidth limited in...

3.
This paper proposes a novel algorithm for function approximation that extends the standard generalized regression neural network. Instead of a single bandwidth for all the kernels, we employ a multiple-bandwidth configuration. However, unlike previous works that use clustering of the training data to reduce the number of bandwidths, we propose a distinct scheme that achieves a dramatic bandwidth reduction while preserving the required model complexity. In this scheme, the algorithm partitions the training patterns into groups, where all patterns within each group share the same bandwidth. Grouping relies on the analysis of the local nearest-neighbor distance information around the patterns and on principal component analysis with fuzzy clustering. Furthermore, we use a hybrid optimization procedure combining a very efficient variant of the particle swarm optimizer and a quasi-Newton method for global optimization and locally optimal fine-tuning of the network bandwidths. Training is based on the minimization of a flexible adaptation of the leave-one-out validation error that enhances the network's generalization. We test the proposed algorithm on real and synthetic datasets, and the results show that it exhibits competitive regression performance compared to other techniques.
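A minimal sketch of a GRNN with grouped bandwidths; k-means is used here only as a stand-in for the paper's nearest-neighbour / PCA / fuzzy-clustering grouping, and the PSO/quasi-Newton tuning and leave-one-out criterion are omitted (data and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression data.
X = np.sort(rng.uniform(0, 6, size=(120, 1)), axis=0)
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(120)

# Group the training patterns (k-means stands in for the paper's scheme).
k = 4
centers = X[rng.choice(len(X), k, replace=False)].copy()
for _ in range(20):
    groups = np.argmin(np.abs(X - centers.T), axis=1)
    for g in range(k):
        if np.any(groups == g):
            centers[g, 0] = X[groups == g, 0].mean()

# One bandwidth per group, seeded from the mean nearest-neighbour distance.
sigma = np.empty(len(X))
for g in range(k):
    Xg = X[groups == g, 0]
    if len(Xg) < 2:
        sigma[groups == g] = 1.0            # fallback for tiny groups
        continue
    d = np.abs(Xg[:, None] - Xg[None, :])
    np.fill_diagonal(d, np.inf)
    sigma[groups == g] = d.min(axis=1).mean()

def grnn_predict(xq):
    # Standard GRNN (Nadaraya-Watson) output with per-pattern bandwidths.
    w = np.exp(-(xq - X[:, 0]) ** 2 / (2 * sigma ** 2))
    return (w @ y) / w.sum()

print([round(grnn_predict(x), 3) for x in np.linspace(0, 6, 5)])
```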

4.
This paper presents an online procedure for training dynamic neural networks with input-output recurrences whose topology is continuously adjusted to the complexity of the target system dynamics. This is accomplished by changing the number of elements in the network's hidden layer whenever the existing topology cannot capture the dynamics presented by the new data. The training mechanism is based on a suitably altered extended Kalman filter (EKF) algorithm, which is simultaneously used for network parameter adjustment and for state estimation. The network consists of a single hidden layer with Gaussian radial basis functions (GRBF) and a linear output layer. The choice of the GRBF is induced by the requirements of online learning: it implies an architecture that permits only local influence of each new data point, so that previously learned dynamics are not forgotten. Continuous topology adaptation is implemented in our algorithm to avoid the memory and computational problems of a regular grid of GRBFs covering the network input space. Furthermore, we show that the resulting parameter increase can be handled "smoothly" without interfering with the already acquired information. If the target system dynamics change over time, we show that a suitable forgetting factor can be used to "unlearn" the no-longer-relevant dynamics. The quality of the recurrent network training algorithm is demonstrated on the identification of nonlinear dynamic systems.
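A sketch of the core EKF parameter update for a GRBF network with a forgetting factor, assuming a fixed number of hidden units (the paper's online growing/pruning of the hidden layer is omitted; the toy data stream and noise variance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# GRBF net: y = sum_j w_j * exp(-(x - c_j)^2 / (2 s^2)), fixed width s.
m, s = 8, 0.5
c = np.linspace(0, 6, m)            # centres (also estimated by the EKF)
w = np.zeros(m)                     # output weights
theta = np.concatenate([w, c])      # full parameter vector

P = np.eye(2 * m) * 10.0            # parameter covariance
R = 0.05                            # measurement-noise variance
lam = 0.998                         # forgetting factor for drifting targets

def net_and_jac(theta, x):
    w, c = theta[:m], theta[m:]
    phi = np.exp(-(x - c) ** 2 / (2 * s ** 2))
    yhat = w @ phi
    # Jacobian of the output w.r.t. [w, c].
    H = np.concatenate([phi, w * phi * (x - c) / s ** 2])
    return yhat, H

for t in range(2000):               # online stream of (x, y) pairs
    x = rng.uniform(0, 6)
    y = np.sin(x) + np.sqrt(R) * rng.standard_normal()
    yhat, H = net_and_jac(theta, x)
    S = H @ P @ H + R               # innovation variance (scalar)
    K = P @ H / S                   # Kalman gain
    theta = theta + K * (y - yhat)
    P = (P - np.outer(K, H @ P)) / lam

print("test error at x=3:", abs(net_and_jac(theta, 3.0)[0] - np.sin(3.0)))
```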

5.
Parallel nonlinear optimization techniques for training neural networks
In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed using the self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated, each selectively chosen from a representative class of QN methods. Inexact line searches are then carried out to estimate the minimum point along each search direction. The proposed parallel algorithms are tested on a set of nine benchmark problems. Computational results show that they outperform other existing methods evaluated on the same set of test problems.
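A rough sketch of the parallel-directions idea, using plain BFGS and DFP as two representative members of the QN family, each with an inexact backtracking line search; the self-scaling factors and actual multiprocessing are omitted, and the Rosenbrock function stands in for a network training error:

```python
import numpy as np

def f(x):                      # Rosenbrock test function
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                     200 * (x[1] - x[0] ** 2)])

def line_search(x, d):
    # Inexact backtracking (Armijo) search along direction d.
    a, g = 1.0, grad(x)
    while f(x + a * d) > f(x) + 1e-4 * a * g @ d and a > 1e-10:
        a *= 0.5
    return a

x = np.array([-1.2, 1.0])
H_bfgs = np.eye(2)             # two members of the Broyden family,
H_dfp = np.eye(2)              # one inverse-Hessian estimate each

for it in range(200):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:
        break
    # Generate one search direction per QN method (parallelizable).
    candidates = [-H_bfgs @ g, -H_dfp @ g]
    steps = [x + line_search(x, d) * d for d in candidates]
    x_new = min(steps, key=f)  # keep the best minimizer estimate
    s, yv = x_new - x, grad(x_new) - g
    if s @ yv > 1e-12:         # curvature condition; update both estimates
        rho = 1.0 / (s @ yv)
        V = np.eye(2) - rho * np.outer(s, yv)
        H_bfgs = V @ H_bfgs @ V.T + rho * np.outer(s, s)
        H_dfp = (H_dfp - np.outer(H_dfp @ yv, H_dfp @ yv) / (yv @ H_dfp @ yv)
                 + rho * np.outer(s, s))
    x = x_new

print("minimum near:", x, "f =", f(x))
```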

6.
Ximin Zhang, Yan Qiu Chen 《Neurocomputing》2000, 30(1-4): 333-337
A novel method for global search of good minima is proposed in this paper. Starting from a local minimum, the weight space around it is scanned, with the process guided by terrain-independent emanating rays. During the search, starting points for further exploration are identified and used to find the corresponding local minima. The best minimum is then selected according to the correct classification rate (CCR) on the validation data.
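A sketch of the emanating-rays search under strong simplifications: a 2-D multimodal surrogate replaces the network error surface, and raw loss values replace the validation CCR used in the paper to rank minima:

```python
import numpy as np

rng = np.random.default_rng(3)

def loss(w):                        # multimodal surrogate error surface
    return np.sum(w ** 2 + 3 * np.sin(3 * w))

def grad(w):
    return 2 * w + 9 * np.cos(3 * w)

def local_min(w):
    for _ in range(500):            # plain gradient descent
        w = w - 0.01 * grad(w)
    return w

best = local_min(rng.uniform(-2, 2, size=2))
for ray in range(30):               # emanating rays from the current minimum
    d = rng.standard_normal(2)
    d /= np.linalg.norm(d)          # terrain-independent unit direction
    ridge = loss(best)
    for r in np.linspace(0.2, 6.0, 60):    # scan outward along the ray
        w = best + r * d
        ridge = max(ridge, loss(w))
        # Past the ridge and descending again: a new basin to explore.
        if loss(w) < ridge - 1.0:
            cand = local_min(w)
            if loss(cand) < loss(best):     # the paper ranks by validation CCR
                best = cand
            break

print("best minimum:", best, "loss:", loss(best))
```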

7.
Knowledge and Information Systems - In recent years, significant advancements have been made in artificial neural network models and they have been applied to a variety of real-world problems....

8.
This paper presents a novel evolutionary algorithm (EA) for constrained optimization problems: the hybrid constrained optimization EA (HCOEA). This algorithm effectively combines multiobjective optimization with global and local search models. For the global search, a niching genetic algorithm based on tournament selection is proposed. HCOEA also adopts a parallel local search operator that implements a clustering partition of the population and multiparent crossover to generate the offspring population. Nondominated individuals in the offspring population are then used to replace dominated individuals in the parent population. Meanwhile, a best-infeasible-individual replacement scheme is devised to rapidly guide the population toward the feasible region of the search space. During the evolutionary process, the global search model effectively promotes high population diversity, while the local search model remarkably accelerates convergence. HCOEA is tested on 13 well-known benchmark functions, and the experimental results suggest that it is more robust and efficient than other state-of-the-art algorithms from the literature in terms of the selected performance metrics, such as the best, median, mean, and worst objective function values and the standard deviations.
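A sketch of one ingredient only: the feasibility-based tournament comparison commonly used in constrained EAs. Niching, multiparent crossover, and HCOEA's multiobjective replacement are omitted, and the toy problem and mutation scheme are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy constrained problem: minimise f subject to g1, g2 <= 0.
def f(x):
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def violation(x):
    g1 = x[0] ** 2 - x[1]          # g1 <= 0
    g2 = x[0] + x[1] - 2           # g2 <= 0
    return max(g1, 0) + max(g2, 0)

def better(a, b):
    # Feasibility tournament: feasible beats infeasible; among
    # infeasible, lower violation wins; among feasible, lower f wins.
    va, vb = violation(a), violation(b)
    if va == 0 and vb == 0:
        return f(a) < f(b)
    return va < vb

pop = rng.uniform(-3, 3, size=(40, 2))
for gen in range(2000):
    i, j = rng.choice(len(pop), 2, replace=False)
    parent = pop[i] if better(pop[i], pop[j]) else pop[j]
    child = parent + 0.2 * rng.standard_normal(2)      # Gaussian mutation
    k = rng.integers(len(pop))
    if better(child, pop[k]):                          # steady-state replacement
        pop[k] = child

best = min(pop, key=lambda x: (violation(x), f(x)))
print("best:", best, "f:", f(best), "violation:", violation(best))
```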

9.
Two hybrid optimization algorithms for parameter estimation of RBF neural networks
甘敏, 彭晓燕, 彭辉 《控制与决策》2009, 24(8): 1172-1176
Based on a global-search evolutionary algorithm and a local-search method, the structured nonlinear parameter optimization method (SNPOM), two hybrid optimization algorithms are proposed for estimating the parameters of RBF neural networks: 1) initialize a population of a certain size, use each individual as an initial value for SNPOM to obtain its fitness, and update the population through selection, crossover, and replacement strategies; 2) run the evolutionary algorithm for a number of generations, then select some individuals from the final population for further optimization by SNPOM. The essence of both hybrid algorithms is to use the evolutionary algorithm to search for optimal initial values for SNPOM, so as to reach the global optimum. Simulation results show that the hybrid algorithms perform better than the evolutionary algorithm or SNPOM used alone, and also outperform several other algorithms.
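A sketch of strategy 2 on a toy multimodal cost, with SciPy's BFGS standing in for SNPOM (SNPOM itself is not publicly packaged; the population sizes, mutation scheme, and cost function are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

def cost(w):                      # multimodal stand-in for the RBF training error
    return np.sum(w ** 2 - 10 * np.cos(2 * np.pi * w) + 10)   # Rastrigin

# Strategy 2: evolve first, then refine survivors locally.
pop = rng.uniform(-5, 5, size=(30, 4))
for gen in range(100):            # simple (mu + lambda)-style evolution
    children = pop + 0.3 * rng.standard_normal(pop.shape)
    both = np.vstack([pop, children])
    pop = both[np.argsort([cost(w) for w in both])[:30]]

# Refine the best few individuals with a local optimiser
# (BFGS stands in for SNPOM here).
polished = [minimize(cost, w, method="BFGS").x for w in pop[:5]]
best = min(polished, key=cost)
print("best:", np.round(best, 4), "cost:", cost(best))
```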

10.
For training algorithms of recurrent neural networks (RNNs), convergence speed and training error are two conflicting performance measures. In this letter, we propose normalized adaptive recurrent learning (NARL) to obtain a tradeoff between the transient and steady-state response. An augmented term is added to the error gradient to exactly model the derivative of the cost function with respect to the hidden-layer weights. The influence of the induced gain of the activation function on training stability is also taken into consideration. Moreover, an adaptive learning rate is employed to improve the robustness of the gradient training. Finally, computer simulations of a model prediction problem are presented to compare NARL with conventional normalized real-time recurrent learning (N-RTRL).
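A sketch of the normalized adaptive learning-rate idea on a one-neuron linear recurrent model trained with RTRL-style sensitivities; NARL's augmented gradient term and induced-gain handling are omitted, and the NLMS-style normalization below is an assumption:

```python
import numpy as np

rng = np.random.default_rng(6)

# Target: a stable linear recurrence y_t = 0.8 y_{t-1} + 0.5 x_t.
T = 3000
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + 0.5 * x[t]

# One linear recurrent neuron trained with a normalised gradient step.
a, b = 0.0, 0.0
yhat = 0.0
da, db = 0.0, 0.0                  # RTRL sensitivities d(yhat)/da, d(yhat)/db
mu, eps = 0.2, 1e-6
for t in range(1, T):
    da = yhat + a * da             # recurrent sensitivity propagation
    db = x[t] + a * db
    yhat = a * yhat + b * x[t]
    e = y[t] - yhat
    g = np.array([da, db])
    step = mu / (eps + g @ g)      # normalised (NLMS-style) adaptive rate
    a += step * e * da
    b += step * e * db

print(f"learned a={a:.3f}, b={b:.3f} (target 0.8, 0.5)")
```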

11.
Bee colony optimization algorithm for training feedforward neural networks
The goal of training an artificial neural network is to adjust the weights of each layer to reach an optimum, so the training process is in essence an optimization task. Traditional training algorithms suffer from drawbacks such as getting trapped in local optima and high computational complexity. This paper introduces a bee colony optimization algorithm for training feedforward neural networks, a simple and robust swarm-intelligence stochastic optimization algorithm. It effectively combines exploration and exploitation and adopts a search strategy for escaping local optima. The algorithm is successfully applied to basic neural-network training problems: the XOR problem, N-bit parity checking, and the encoder-decoder problem, and is compared with the traditional BP algorithm. Simulation experiments show that its performance is superior to the traditional GD and LM algorithms.
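A sketch of an artificial-bee-colony-style trainer on the XOR problem; the employed and onlooker phases are merged into one greedy neighbourhood move, so the fitness-proportional onlooker selection of the full algorithm is omitted (network size, colony size, bounds, and the abandonment limit are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(w):
    # 2-2-1 network: w = [W1 (2x2), b1 (2), w2 (2), b2 (1)] flattened.
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    w2, b2 = w[6:8], w[8]
    h = sigmoid(X @ W1.T + b1)
    out = sigmoid(h @ w2 + b2)
    return np.mean((out - y) ** 2)

n_food, dim, limit = 20, 9, 30
lo_b, hi_b = -5.0, 5.0
food = rng.uniform(lo_b, hi_b, size=(n_food, dim))   # one food source per solution
fit = np.array([mse(w) for w in food])
trials = np.zeros(n_food, dtype=int)

for cycle in range(3000):
    # Employed + onlooker phases share one neighbourhood move here.
    for i in range(n_food):
        k = rng.choice([j for j in range(n_food) if j != i])
        v = food[i].copy()
        d = rng.integers(dim)
        v[d] += rng.uniform(-1, 1) * (food[i][d] - food[k][d])
        fv = mse(v)
        if fv < fit[i]:                            # greedy selection
            food[i], fit[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1
    # Scout phase: abandon exhausted sources.
    for i in range(n_food):
        if trials[i] > limit:
            food[i] = rng.uniform(lo_b, hi_b, size=dim)
            fit[i] = mse(food[i])
            trials[i] = 0

print("XOR MSE:", fit.min())
```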

12.
Training of recurrent neural networks (RNNs) introduces considerable computational complexity due to the need for gradient evaluations. How to achieve fast convergence with low computational complexity remains a challenging and open topic. Besides, the transient response of the learning process of RNNs is a critical issue, especially for online applications. Conventional RNN training algorithms such as backpropagation through time and real-time recurrent learning have not adequately satisfied these requirements because they often suffer from slow convergence; if a large learning rate is chosen to improve performance, the training process may become unstable in terms of weight divergence. In this paper, a novel RNN training algorithm, named robust recurrent simultaneous perturbation stochastic approximation (RRSPSA), is developed with a specially designed recurrent hybrid adaptive parameter and adaptive learning rates. RRSPSA is a twin-engine simultaneous perturbation stochastic approximation (SPSA) type of RNN training algorithm. It utilizes three specially designed adaptive parameters to maximize training speed for a recurrent training signal while exhibiting certain weight convergence properties with only two objective function measurements, as in the original SPSA algorithm. RRSPSA is proved to have guaranteed weight convergence and system stability in the sense of a Lyapunov function. Computer simulations were carried out to demonstrate the applicability of the theoretical results.
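RRSPSA itself is not reproduced here; the sketch below shows only the underlying two-measurement SPSA core on a toy quadratic loss, with standard gain sequences (the recurrent hybrid adaptive parameters and Lyapunov-based robustness terms of the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(8)

def loss(theta):                  # stand-in for the RNN training error
    return np.sum((theta - np.arange(1.0, 6.0)) ** 2)

theta = np.zeros(5)
for k in range(1, 2001):
    a_k = 0.5 / k ** 0.602        # standard SPSA gain sequences
    c_k = 0.1 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=5)        # Rademacher perturbation
    # Only two loss measurements per step, regardless of dimension.
    g_hat = (loss(theta + c_k * delta) -
             loss(theta - c_k * delta)) / (2 * c_k) * (1.0 / delta)
    theta -= a_k * g_hat

print("theta:", np.round(theta, 3))   # should approach [1 2 3 4 5]
```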

13.
Noisy and large data sets are extremely difficult to handle and especially to predict. Time series prediction is a problem that is frequently addressed by researchers in many engineering fields. This paper presents a hybrid approach to handling large and noisy data sets: a self-organizing map (SOM) combined with multiple recurrent neural networks (RNNs) is trained to predict the components of such data. The SOM incrementally constructs a set of clusters, each represented by a subset of the data used to train one recurrent neural network. Backpropagation through time is used to train the set of recurrent neural networks. To show the performance of the proposed approach, a problem of instruction-address prefetching is treated.
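A sketch of the cluster-then-predict pipeline, with a small 1-D SOM and one linear autoregressive model per cluster standing in for the paper's recurrent neural networks trained by BPTT (the series, window length, and SOM size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)

# Noisy time series; predict s[t] from the three previous values.
t = np.arange(2000)
s = np.sin(0.07 * t) + 0.3 * np.sin(0.31 * t) + 0.05 * rng.standard_normal(2000)
X = np.stack([s[i:i + 3] for i in range(len(s) - 3)])
y = s[3:]

# --- Stage 1: small 1-D SOM partitions the input windows ---
nodes = rng.standard_normal((6, 3)) * 0.1
for epoch in range(10):
    lr = 0.3 * (1 - epoch / 10)
    for x in X:
        w = np.argmin(np.linalg.norm(nodes - x, axis=1))
        for j in range(6):                      # neighbourhood update
            h = np.exp(-((j - w) ** 2) / 2.0)
            nodes[j] += lr * h * (x - nodes[j])

clusters = np.argmin(np.linalg.norm(X[:, None] - nodes, axis=2), axis=1)

# --- Stage 2: one predictor per cluster (a linear AR model stands in
# for the paper's recurrent neural networks) ---
models = {}
for c in np.unique(clusters):
    Xc, yc = X[clusters == c], y[clusters == c]
    A = np.hstack([Xc, np.ones((len(Xc), 1))])
    models[c] = np.linalg.lstsq(A, yc, rcond=None)[0]

pred = np.array([models[c] @ np.append(x, 1.0) for x, c in zip(X, clusters)])
print("MSE:", np.mean((pred - y) ** 2))
```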

14.
In this paper, a novel particle swarm optimization model for radial basis function neural networks (RBFNN) using hybrid algorithms to solve classification problems is proposed. In the model, the linearly decreasing inertia weight of each particle is automatically calculated according to its fitness value (ALPSO). The proposed ALPSO algorithm was compared with various well-known PSO algorithms on benchmark test functions with and without rotation. Besides, a modified Fisher ratio class separability measure (MFRCSM) was used to select the initial hidden centers of the RBFNN, and then the orthogonal least squares algorithm (OLSA) combined with the proposed ALPSO was employed to further optimize the structure of the RBFNN, including the weights and controlling parameters. The proposed optimization model integrating MFRCSM, OLSA, and ALPSO (MOA-RBFNN) is validated on various benchmark classification problems. The experimental results show that the proposed method outperforms conventional methods and approaches proposed in recent literature.
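A sketch of a PSO with a per-particle, fitness-adaptive inertia weight; the exact ALPSO adaptation rule is not quoted in the abstract, so the linear mapping from relative fitness to inertia used below is an assumption:

```python
import numpy as np

rng = np.random.default_rng(10)

def sphere(x):                   # benchmark test function
    return np.sum(x ** 2)

n, dim = 30, 10
pos = rng.uniform(-5, 5, size=(n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0
for it in range(300):
    f = np.array([sphere(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]
    # Per-particle inertia from relative fitness: good particles get a
    # small w (exploit), poor ones a large w (explore).
    rank = (f - f.min()) / (f.max() - f.min() + 1e-12)
    w = w_min + (w_max - w_min) * rank
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = (w[:, None] * vel + c1 * r1 * (pbest - pos)
           + c2 * r2 * (gbest - pos))
    pos = pos + vel

print("best objective:", pbest_f.min())
```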

15.
Global exponential stability is a desirable property for dynamic systems. The paper studies the global exponential stability of several existing recurrent neural networks for solving linear programming problems, convex programming problems with interval constraints, convex programming problems with nonlinear constraints, and monotone variational inequalities. In contrast to the existing results on global exponential stability, the present results do not require additional conditions on the weight matrices of recurrent neural networks and improve some existing conditions for global exponential stability. Therefore, the stability results in the paper further demonstrate the superior convergence properties of the existing neural networks for optimization.

16.
Chen Tianyu, Li Sheng, Yan Jun 《Neural computing & applications》2022, 34(19): 16515-16532
Neural Computing and Applications - Recurrent neural networks (RNNs) provide powerful tools for sequence problems. However, simple RNN and its variants are prone to high computational cost, for...

17.
刘建伟, 宋志妍 《控制与决策》2022, 37(11): 2753-2768
Recurrent neural networks are the main realization of neural-network sequence models and have developed rapidly in recent years. They are the standard approach to machine translation, machine question answering, and sequential video analysis, as well as the mainstream modeling tool for problems such as automatic handwriting synthesis, speech processing, and image generation. In view of this, the branches of recurrent neural networks are classified in detail according to network structure into three broad categories: first, derived recurrent neural networks, which are structural variants of the basic RNN model, i.e., modifications of the internal structure of RNNs; second, combined recurrent neural networks, which combine other classical network models or structures with the first category to achieve better results and are a very effective approach; third, hybrid recurrent neural networks, which both combine different network models and modify the internal RNN structure, and thus belong to both of the preceding categories. To deepen the understanding of recurrent neural networks, recursive neural networks, which are often confused with recurrent neural networks, are also introduced, along with the differences and connections between the two. After describing the application background, network structure, and variants of the above models, the characteristics of each model are summarized and compared, and an outlook on recurrent neural network models is given.

18.
Research on the structures of recurrent neural networks
丛爽, 戴谊 《计算机应用》2004, 24(8): 18-20, 27
From the perspective of nonlinear dynamic systems, this paper gives a detailed survey of recurrent dynamic network structures and their functions. Recurrent dynamic networks are divided into three broad categories: globally feedback recurrent networks, forward recurrent networks, and hybrid networks, each of which can be further divided into several kinds of networks. For each network, a structural diagram describing its characteristics is given; the functions of the various networks are also compared, and their similarities and differences are analyzed.

19.
Hybrid particle swarm optimization algorithm for optimizing the structure and parameters of feedforward neural networks
A new method is proposed that combines particle swarm optimization (PSO) and discrete particle swarm optimization (D-PSO) to simultaneously optimize the structure and parameters of feedforward neural networks. The algorithm uses D-PSO to optimize the network connection structure, describing all possible connections with particles taking values of 0 or 1 in a multidimensional space, while PSO optimizes the network weights. Applying networks trained by this algorithm to fault diagnosis effectively eliminates the influence of redundant connections on diagnostic ability. Simulation results show that, compared with genetic algorithms and other algorithms, the method improves the efficiency of optimizing the network structure and parameters and raises the accuracy of fault pattern recognition. A sketch of the two-swarm idea is given below.
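A sketch on a tiny 2-3-1 network: standard PSO updates the continuous weights while a binary (discrete) PSO evolves a 0/1 connection mask, with a small sparsity penalty (the coupling scheme, penalty, and network size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(11)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

dim = 13   # 2-3-1 net: W1 (3x2)=6, b1=3, w2=3, b2=1

def fitness(w, mask):
    wm = w * mask                      # pruned connections contribute nothing
    W1, b1 = wm[:6].reshape(3, 2), wm[6:9]
    w2, b2 = wm[9:12], wm[12]
    h = sigmoid(X @ W1.T + b1)
    out = sigmoid(h @ w2 + b2)
    return np.mean((out - y) ** 2) + 0.001 * mask.sum()   # sparsity pressure

n = 25
w_pos = rng.uniform(-2, 2, (n, dim))
w_vel = np.zeros((n, dim))
m_pos = (rng.random((n, dim)) < 0.8).astype(float)
m_vel = np.zeros((n, dim))
pb_w, pb_m = w_pos.copy(), m_pos.copy()
pb_f = np.array([fitness(w, m) for w, m in zip(w_pos, m_pos)])
g = np.argmin(pb_f)

for it in range(1500):
    for i in range(n):
        r = rng.random
        # Continuous PSO on the weights.
        w_vel[i] = (0.7 * w_vel[i] + 1.5 * r(dim) * (pb_w[i] - w_pos[i])
                    + 1.5 * r(dim) * (pb_w[g] - w_pos[i]))
        w_pos[i] += w_vel[i]
        # Binary PSO (D-PSO) on the connection mask.
        m_vel[i] = np.clip(0.7 * m_vel[i]
                           + 1.5 * r(dim) * (pb_m[i] - m_pos[i])
                           + 1.5 * r(dim) * (pb_m[g] - m_pos[i]), -4, 4)
        m_pos[i] = (r(dim) < sigmoid(m_vel[i])).astype(float)
        fi = fitness(w_pos[i], m_pos[i])
        if fi < pb_f[i]:
            pb_w[i], pb_m[i], pb_f[i] = w_pos[i].copy(), m_pos[i].copy(), fi
    g = np.argmin(pb_f)

print("best error:", pb_f[g], "active connections:", int(pb_m[g].sum()))
```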

20.
This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of a finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNNs augmented with a few simple discontinuous (e.g., threshold or zero-test) neurons. We argue that even with weights restricted to polynomial-time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous but instead boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with ARNNs, which are known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.
