Similar Literature
20 similar records found
1.
The problem of training feedforward neural networks is considered. To solve it, new algorithms are proposed. They are based on the asymptotic analysis of the extended Kalman filter (EKF) and on a separable network structure. Linear weights are interpreted as diffusion random variables with zero expectation and a covariance matrix proportional to an arbitrarily large parameter λ. Asymptotic expressions for the EKF are derived as λ→∞. They are called diffusion learning algorithms (DLAs). It is shown that they are robust with respect to the accumulation of rounding errors in contrast to their prototype EKF with a large but finite λ and that, under certain simplifying assumptions, an extreme learning machine (ELM) algorithm can be obtained from a DLA. A numerical example shows that the accuracy of a DLA may be higher than that of an ELM algorithm.
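As a rough illustration of the connection the abstract draws between the diffuse-prior limit and ELM (our own sketch with made-up data, not the paper's derivation), the following Python snippet runs a sequential least-squares/Kalman pass on linear output weights with prior covariance λI and checks that, for large λ, the result is close to the pseudo-inverse (ELM-style) solution:

import numpy as np

# Illustrative check (not the paper's derivation): for linear output weights,
# a sequential least-squares/Kalman pass started from a diffuse prior
# covariance P0 = lam*I approaches the pseudo-inverse (ELM-style) solution as
# lam grows. The DLAs of the paper instead take the lam -> infinity limit
# analytically, avoiding the rounding-error build-up of running the EKF with a
# large but finite lam as done here.
rng = np.random.default_rng(0)
H = rng.normal(size=(200, 10))                 # hidden-layer outputs (fixed nonlinear part)
y = H @ rng.normal(size=10) + 0.01 * rng.normal(size=200)

lam = 1e8                                      # "arbitrarily large" prior covariance scale
P = lam * np.eye(10)
w = np.zeros(10)
for h_t, y_t in zip(H, y):                     # one observation at a time
    k = P @ h_t / (1.0 + h_t @ P @ h_t)        # Kalman gain (unit observation noise)
    w = w + k * (y_t - h_t @ w)
    P = P - np.outer(k, P @ h_t)
print(np.allclose(w, np.linalg.pinv(H) @ y, atol=1e-4))   # expect True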

2.
Adaptation algorithms for 2-D feedforward neural networks
The generalized weight adaptation algorithms presented by J.G. Kuschewski et al. (1993) and by S.H. Zak and H.J. Sira-Ramirez (1990) are extended for 2-D madaline and 2-D two-layer feedforward neural nets (FNNs).

3.
In general, fuzzy neural networks cannot match nonlinear systems exactly, and the unmodeled dynamics lead to parameter drift and even instability. According to system identification theory, robust modification terms must be included in order to guarantee Lyapunov stability. This paper suggests new learning laws for Mamdani and Takagi-Sugeno-Kang type fuzzy neural networks based on an input-to-state stability approach. The new learning schemes employ a time-varying learning rate that is determined from input-output data and the model structure. Stable learning algorithms for the premise and consequence parts of the fuzzy rules are proposed. The calculation of the learning rate does not need any prior information such as an estimate of the modeling error bounds, which is an advantage over other techniques that rely on robust modification.

4.
We present two fuzzy conjugate gradient learning algorithms based on evolutionary algorithms for polygonal fuzzy neural networks (PFNNs). First, we design a fuzzy conjugate gradient algorithm based on a genetic algorithm (GA), in which an optimal learning constant η is obtained by the GA; experiments indicate that this algorithm always converges. Because the GA-based algorithm is somewhat slow at each iteration step, we then propose obtaining the learning constant η by a quantum genetic algorithm (QGA) instead of the GA, reducing the time spent per iteration. The PFNN tuned by the proposed learning algorithms is applied to the approximate realization of fuzzy inference rules, and experiments demonstrate the whole process. © 2011 Wiley Periodicals, Inc.

5.
In this work we present a new hybrid algorithm for feedforward neural networks which combines unsupervised and supervised learning. In this approach, we use a Kohonen algorithm with a fuzzy neighborhood for training the weights of the hidden layers and a gradient descent method for training the weights of the output layer. The goal of this method is to assist the existing variable learning rate algorithms. Simulation results show the effectiveness of the proposed algorithm compared with other well-known learning methods.
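A minimal per-sample sketch of such a hybrid step is given below; the neighborhood function, hidden activations, and learning rates are our own assumptions, not the paper's specification:

import numpy as np

def hybrid_step(x, y, W_hid, W_out, lr_hid=0.1, lr_out=0.01, sigma=1.0):
    """One training step in the spirit of the hybrid scheme (illustrative only).
    Hidden weight vectors are updated Kohonen-style with a fuzzy (Gaussian)
    neighborhood, and the output weights by plain gradient descent."""
    d = np.linalg.norm(W_hid - x, axis=1)                  # distance of input to each hidden unit
    nbh = np.exp(-d**2 / (2 * sigma**2))                   # fuzzy (graded) neighborhood membership
    W_hid = W_hid + lr_hid * nbh[:, None] * (x - W_hid)    # unsupervised Kohonen-style update
    h = np.exp(-np.sum((W_hid - x) ** 2, axis=1))          # hidden activations (RBF-like, assumed)
    err = y - W_out @ h
    W_out = W_out + lr_out * np.outer(err, h)              # supervised delta rule on output layer
    return W_hid, W_out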

6.
This paper deals with a fast and computationally simple Successive Over-relaxation Resilient Backpropagation (SORRPROP) learning algorithm, developed by modifying the Resilient Backpropagation (RPROP) algorithm. It uses the latest computed values of the weights between the hidden and output layers to update the remaining weights. The modification adds no extra computation to the RPROP algorithm and maintains its computational simplicity. Classification and regression simulation examples have been used to compare performance. The test results for the examples undertaken show that SORRPROP has smaller convergence times and better performance than other first-order learning algorithms.
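For context, a minimal sketch of the RPROP step-size adaptation that SORRPROP builds on (written from the standard RPROP description, not from the paper; the successive over-relaxation idea of reusing the just-updated hidden-to-output weights is applied per layer in an outer loop and is not shown here):

import numpy as np

def rprop_update(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One RPROP update for an array of weights: step sizes grow while the
    gradient keeps its sign and shrink when it flips; only the sign of the
    gradient is used for the weight change."""
    s = grad * prev_grad
    step = np.where(s > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(s < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(s < 0, 0.0, grad)        # iRPROP- variant: skip the update after a sign change
    w = w - np.sign(grad) * step
    return w, grad, step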

7.
A general backpropagation algorithm is proposed for feedforward neural network learning with time-varying inputs. The Lyapunov function approach is used to rigorously analyze the convergence of the weights toward minima of the error function under the algorithm. Sufficient conditions to guarantee the convergence of the weights for time-varying inputs are derived. It is shown that most commonly used backpropagation learning algorithms are special cases of the developed general algorithm.

8.
Online learning algorithms for feedforward neural networks with limited memory
The mathematical essence of feedforward neural networks whose hidden-layer activation functions satisfy Mercer's condition is analyzed theoretically, giving guidance for network learning. Three online learning algorithms are proposed that improve the network's online prediction performance by dynamically adjusting the network structure and weights. The algorithms fully conform to the structural risk minimization principle of statistical learning theory and exhibit fast learning convergence and good robustness to noise. Numerical experiments verify the feasibility and superiority of the proposed algorithms.

9.
On-line learning algorithms for locally recurrent neural networks
This paper focuses on online learning procedures for locally recurrent neural nets, with emphasis on the multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations, which include generalized output and activation feedback multilayer networks (MLN). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose online version, causal recursive backpropagation (CRBP), has some advantages over other online methods. CRBP includes as particular cases backpropagation (BP), temporal BP, and the Back-Tsoi algorithm (1991), among others, thereby providing a unifying view of gradient calculation for recurrent nets with local feedback. The only learning method previously known for locally recurrent nets with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and faster convergence than the Back-Tsoi algorithm. The computational complexity of CRBP is comparable with that of the Back-Tsoi algorithm (within a factor of 1.5 for usual architectures and parameter settings), and the superior performance of the new algorithm easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with CRBP. CRBP exhibits similar performance, and a detailed analysis of complexity reveals that CRBP is much simpler and easier to implement; e.g., CRBP is local in space and in time while RTRL is not local in space.

10.
Online learning algorithms are preferred in many applications because they can learn from sequentially arriving data. One effective algorithm recently proposed for training single hidden-layer feedforward neural networks (SLFNs) is the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk sizes. It is based on the ideas of the extreme learning machine (ELM), in which the input weights and hidden-layer biases are chosen randomly and the output weights are then determined by a pseudo-inverse operation. The learning speed of this algorithm is extremely high. However, it does not yield good generalization models for noisy data, and it is difficult to initialize its parameters so as to avoid singular and ill-posed problems. In this paper, we propose an improvement of OS-ELM based on a bi-objective optimization approach: it minimizes the empirical error while keeping the norm of the network weight vector small. Singular and ill-posed problems are overcome by Tikhonov regularization. The approach can also learn data one-by-one or chunk-by-chunk. Experimental results show the better generalization performance of the proposed approach on benchmark datasets.
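For orientation, a batch-mode sketch of the Tikhonov-regularized ELM solution that the bi-objective idea reduces to (our own illustration; hidden size, activation, and regularization constant are assumptions, and the paper's algorithm is the online, chunk-wise version):

import numpy as np

def regularized_elm_fit(X, T, n_hidden=100, lam=1e-2, seed=0):
    """Random input weights and biases, then a ridge solve for the output
    weights instead of a plain pseudo-inverse: the regularizer keeps the
    weight norm small and avoids singular, ill-posed systems."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))   # random input weights
    b = rng.uniform(-1.0, 1.0, n_hidden)                  # random hidden biases
    H = np.tanh(X @ W + b)                                # hidden-layer output matrix
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta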

11.
黄鹤  张永亮 《控制与决策》2023,38(10):2815-2822
The complex-valued limited-memory BFGS (CL-BFGS) algorithm is effective for solving unconstrained optimization problems in the complex domain, but its performance is sensitive to the memory size. To address the memory-size selection problem, a CL-BFGS algorithm based on hybrid search directions is proposed. For a given candidate set of memory sizes, a sliding-window method partitions it into a finite number of subsets; the elements of each subset are used as memory sizes to compute a group of hybrid directions, and the hybrid direction that minimizes the objective function is chosen as the search direction of the current iteration. During the iterations, the hybrid-direction strategy strengthens the use of the latest curvature information, simplifies the choice of memory size, and speeds up convergence; the proposed CL-BFGS algorithm is suitable for efficient learning of multilayer feedforward complex-valued neural networks. Experiments on pattern recognition, nonlinear channel equalization, and complex function approximation verify that the hybrid-direction CL-BFGS algorithm achieves better performance than several existing algorithms.

12.
A fast parameter learning algorithm for fuzzy neural networks
A new fast parameter learning algorithm for fuzzy neural networks is proposed. With some special treatment, all parameters can be adjusted by recursive least squares (RLS). Previous learning algorithms adjusted the centers and widths of the fuzzy membership functions by gradient descent, which is prone to local minima and converges slowly; the proposed algorithm overcomes these drawbacks. Simulations verify the effectiveness of the algorithm.
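For reference, the generic RLS recursion the abstract refers to (a sketch of the standard update, not the paper's specific fuzzy-network parameterization, which is what makes all parameters fit this linear-in-the-parameters form):

import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step. theta: parameter vector, P: covariance
    matrix, phi: regressor built from the fuzzy basis functions, y: target,
    lam: forgetting factor."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    e = y - phi @ theta                      # prediction error
    theta = theta + k * e
    P = (P - np.outer(k, Pphi)) / lam
    return theta, P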

13.

This study develops a fuzzy neural network with linguistic teaching signals. The proposed network, which can be applied either as a fuzzy expert system or as a fuzzy controller, is able to process and learn numerical as well as linguistic information. The approach consists of two parts: (1) initial weight generation and (2) an error back-propagation (EBP)-type learning algorithm. In the first part, a genetic algorithm (GA) generates the initial weights of the fuzzy neural network in order to prevent the network from getting stuck in a local minimum. The second part employs the EBP-type learning algorithm for fine-tuning. In addition, unimportant weights are eliminated during the training process. The simulation results not only indicate that the proposed network can accurately learn the relations between fuzzy inputs and fuzzy outputs, but also show that the initial weights from the GA lead to better convergence and that weight elimination does reduce the training error. Moreover, results on a real-world problem show that the proposed network is able to learn the fuzzy IF-THEN rules captured from retailing experts regarding the effect of promotions on sales.

14.
Recently there has been renewed interest in single-hidden-layer neural networks (SHLNNs), owing to their powerful modeling ability and the existence of efficient learning algorithms. A prominent example of such algorithms is the extreme learning machine (ELM), which assigns random values to the lower-layer weights. While ELM can be trained efficiently, it requires many more hidden units than conventional neural networks typically need to achieve matched classification accuracy. A large number of hidden units translates into significantly increased test time, which in practice is more valuable than training time. In this paper, we propose a series of new efficient learning algorithms for SHLNNs. Our algorithms exploit both the structure of SHLNNs and the gradient information over all training epochs, and update the weights in the direction along which the overall squared error decreases the most. Experiments on the MNIST handwritten digit recognition task and the MAGIC gamma telescope dataset show that the proposed algorithms obtain significantly better classification accuracy than ELM when the same number of hidden units is used. To reach the same classification accuracy, our best algorithm requires only 1/16 of the model size, and thus approximately 1/16 of the test time, of ELM. This large advantage is gained at the expense of at most five times the training cost of ELM.
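One way to picture how structure and gradient information can be combined in an SHLNN is the alternating scheme sketched below (our own illustration of the general idea, not the paper's exact algorithm): the output weights are recomputed in closed form, ELM-style, while the hidden weights follow the gradient of the overall squared error.

import numpy as np

def train_shlnn(X, T, n_hidden=50, n_iters=25, lr=1e-3, seed=0):
    """Alternate a pseudo-inverse solve for the upper-layer weights with a
    gradient step on the lower-layer weights (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.1, (X.shape[1], n_hidden))
    for _ in range(n_iters):
        H = np.tanh(X @ W)
        beta = np.linalg.pinv(H) @ T                        # closed-form upper-layer weights
        R = H @ beta - T                                    # residual
        W -= lr * (X.T @ ((R @ beta.T) * (1.0 - H**2)))     # gradient step on lower-layer weights
    H = np.tanh(X @ W)
    return W, np.linalg.pinv(H) @ T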

15.
Convergent on-line algorithms for supervised learning in neural networks
We define online algorithms for neural network training based on the construction of multiple copies of the network, which are trained on different data blocks. It is shown that suitable training algorithms can be defined so that the disagreement between the different copies of the network is asymptotically reduced and convergence toward stationary points of the global error function can be guaranteed. Relevant features of the proposed approach are that the learning rate need not be forced to zero and that real-time learning is permitted.

16.
In this paper, the local-minima-free conditions of outer-supervised feedforward neural networks (FNN) trained with batch-style learning are studied by means of the embedded subspace method. It is proven that if the condition that the number of hidden neurons is not less than the number of training samples is satisfied (a sufficient but not necessary condition), the network necessarily converges to a global minimum with zero cost, and that the condition that the range space of the outer-supervised signal matrix is included in the range space of the hidden output matrix is a necessary and sufficient condition for the error surface to be free of local minima. In addition, when the number of hidden neurons is less than the number of training samples but greater than the number of output neurons, it is demonstrated that the error surface contains only global minima with zero cost if the first-layer weights are adequately selected.

17.
Quantum neural networks (QNNs): inherently fuzzy feedforward neural networks
This paper introduces quantum neural networks (QNNs), a class of feedforward neural networks (FFNNs) inherently capable of estimating the structure of a feature space in the form of fuzzy sets. The hidden units of these networks develop quantized representations of the sample information provided by the training data set in various graded levels of certainty. Unlike other approaches attempting to merge fuzzy logic and neural networks, QNNs can be used in pattern classification problems without any restricting assumptions such as the availability of a priori knowledge or desired membership profile, convexity of classes, a limited number of classes, etc. Experimental results presented here show that QNNs are capable of recognizing structures in data, a property that conventional FFNNs with sigmoidal hidden units lack.

18.
Employing an effective learning process is a critical topic in designing a fuzzy neural network, especially when expert knowledge is not available. This paper presents a genetic algorithm (GA) based learning approach for a specific type of fuzzy neural network. The proposed learning approach consists of three stages. In the first stage, the membership functions of both input and output variables are initialized by determining their centers and widths using a self-organizing algorithm. The second stage employs the proposed GA-based learning algorithm to identify the fuzzy rules, while the final stage tunes the derived structure and parameters using a back-propagation learning algorithm. The capabilities of the proposed GA-based learning approach are evaluated on a well-examined benchmark example, and its effectiveness is analyzed by means of a comparative study with other approaches. The usefulness of the proposed approach is also illustrated in a practical case study where it is used to predict the performance of road traffic control actions. Results from the benchmarking exercise and the case study demonstrate the ability of the proposed three-stage learning approach to identify relevant fuzzy rules from a training data set with higher prediction accuracy than alternative approaches.

19.
This paper proposes a hybrid optimization algorithm which combines the efforts of local search (individual learning) and cellular genetic algorithms (GA) for training recurrent neural nets (RNN). Each RNN weight is encoded as a floating point number, and a concatenation of numbers forms a chromosome. Reproduction takes place locally in a square grid, each grid point representing a chromosome. Lamarckian and Baldwinian (1896) mechanisms for combining cellular GA and learning are compared. Different hill-climbing algorithms are incorporated into the cellular GA. These include the real-time recurrent learning (RTRL) and its simplified versions, and the delta rule. RTRL has been successively simplified by freezing some of the weights to form simplified versions. The delta rule, the simplest form of learning, has been implemented by considering the RNN as feedforward networks. The hybrid algorithms are used to train the RNN to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the more difficult it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer is the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism show an improvement in reducing the number of generations for an optimum network; however, only a few can reduce the actual time taken. Embedding the delta rule in the cellular GA is the fastest method. Learning should not be too extensive.
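A compact sketch of the difference between the two mechanisms compared here (our own illustration; the local-learning routine and fitness function are placeholders): both score a chromosome by the fitness of the weights obtained after a few local-learning steps, but only the Lamarckian variant writes the learned weights back into the genotype.

def evaluate(chromosome, local_learn, fitness, lamarckian=True):
    """Combine local search with a (cellular) GA: Lamarckian inheritance
    copies the learned phenotype into the genotype, Baldwinian does not."""
    learned = local_learn(chromosome)        # e.g. a few RTRL, simplified-RTRL, or delta-rule steps
    score = fitness(learned)                 # fitness of the learned phenotype
    genotype = learned if lamarckian else chromosome
    return genotype, score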

20.
We present two classes of convergent algorithms for learning continuous functions and regressions that are approximated by feedforward networks. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. (1970). The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods (1951). Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
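For intuition, a minimal Robbins-Monro-style scheme for the output-layer-only case (a generic sketch under assumed step sizes and a squared-error criterion, not the paper's exact algorithm): the step sizes η_t = c/t satisfy Σ η_t = ∞ and Σ η_t² < ∞, the standard stochastic-approximation conditions for convergence.

import numpy as np

def robbins_monro_fit(Phi, y, c=1.0):
    """Stochastic-approximation update of output-layer weights. Phi holds the
    fixed hidden-layer (or wavelet / potential-function) features of each
    sample; one pass applies w <- w - eta_t * gradient of the per-sample error."""
    w = np.zeros(Phi.shape[1])
    for t, (phi_t, y_t) in enumerate(zip(Phi, y), start=1):
        eta_t = c / t
        w -= eta_t * (phi_t @ w - y_t) * phi_t    # gradient of 0.5*(phi_t.w - y_t)^2
    return w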
