Similar Documents
20 similar documents found.
1.
A model is introduced for continuous-time dynamic feedback neural networks with supervised learning ability. Modifications to conventional models guarantee that a given desired vector, and its negative, are stored in the network as asymptotically stable equilibrium points. Under these modifications, the output signal of a neuron is multiplied by the square of its associated weight before being supplied as input to another neuron. A simulation of the complete dynamics is then presented for a prototype single neuron with self-feedback and supervised learning; the simulation illustrates the supervised learning capability of the network.
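As a rough illustration of the weight-squared coupling (the scalar form and parameter values below are hypothetical; the paper's exact equations are not reproduced), a forward-Euler simulation of a single self-feedback neuron exhibits two symmetric stable equilibria:

```python
import numpy as np

# Hypothetical single-neuron dynamics with weight-squared self-feedback:
#   dx/dt = -x + w**2 * tanh(x)
# (illustrative form only; not the paper's exact model)
def simulate(w=1.5, x0=0.1, dt=1e-3, steps=20000):
    x = x0
    for _ in range(steps):
        x += dt * (-x + w**2 * np.tanh(x))
    return x  # settles at an asymptotically stable equilibrium when w**2 > 1

print(simulate(x0=+0.1))   # converges to the stored pattern (~ +2.2)
print(simulate(x0=-0.1))   # converges to its negative (~ -2.2)
```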

2.
Robust radial basis function neural networks
Function approximation arises in many applications. The radial basis function (RBF) network is one approach that has shown great promise for such problems because of its fast learning capacity. A traditional RBF network takes Gaussian functions as its basis functions and adopts the least-squares criterion as the objective function. However, it still suffers from two major problems. First, it is difficult to use Gaussian functions to approximate constant values: if a function has nearly constant values over some intervals, the RBF network is inefficient at approximating them. Second, when the training patterns contain large errors, the network interpolates these patterns incorrectly. To cope with these problems, this paper proposes an RBF network based on sequences of sigmoidal functions and a robust objective function. The former replaces the Gaussian basis functions so that constant-valued functions can be approximated accurately, while the latter restrains the influence of large errors. Compared with traditional RBF networks, the proposed network demonstrates the following advantages: (1) better capability of approximating underlying functions; (2) faster learning speed; (3) smaller network size; (4) high robustness to outliers.
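The abstract does not spell out the robust objective; as a stand-in, the sketch below uses a Welsch-style loss whose contribution saturates for large residuals, illustrating how such a criterion restrains outliers relative to least squares:

```python
import numpy as np

# A robust objective that restrains large residuals (Welsch-style stand-in;
# the paper's exact robust criterion is not reproduced here).
def robust_loss(residuals, beta=1.0):
    return np.mean(beta**2 * (1.0 - np.exp(-(residuals / beta) ** 2)))

e = np.array([0.1, 0.2, 5.0])      # 5.0 is an outlier
print(np.mean(e**2))               # least squares: dominated by the outlier
print(robust_loss(e))              # robust loss: outlier's contribution is bounded
```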

3.
Recent studies on human learning reveal that self-regulated learning within a metacognitive framework is the best strategy for efficient learning. Since machine learning algorithms are inspired by the principles of human learning, it is natural to incorporate metacognition when developing efficient machine learning algorithms. In this letter we present a metacognitive learning framework that controls the learning process of a fully complex-valued radial basis function network, referred to as a metacognitive fully complex-valued radial basis function (Mc-FCRBF) network. Mc-FCRBF has two components: a cognitive component containing the FC-RBF network and a metacognitive component that regulates the learning process of FC-RBF. In every epoch, when a sample is presented to Mc-FCRBF, the metacognitive component decides what to learn, when to learn, and how to learn, based on the knowledge already acquired by the FC-RBF network and the new information contained in the sample. The Mc-FCRBF learning algorithm is described in detail, and both its approximation and classification abilities are evaluated on a set of benchmark and practical problems. The results indicate superior approximation and classification performance of Mc-FCRBF compared to existing methods in the literature.
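A schematic of the what/when/how self-regulation logic (the thresholds and the toy model below are hypothetical stand-ins; the paper's criteria operate on the FC-RBF network itself):

```python
class TinyModel:
    """Stand-in cognitive component (the real one is the FC-RBF network)."""
    def __init__(self): self.w = 0.0
    def predict(self, x): return self.w * x
    def update(self, x, y, lr=0.1): self.w += lr * (y - self.predict(x)) * x

def regulate(model, x, y, delete_thr=0.01, learn_thr=0.2):
    # Metacognitive decision: delete / learn / reserve (thresholds hypothetical).
    error = abs(y - model.predict(x))
    if error < delete_thr:
        return "delete"     # sample carries no new information
    if error > learn_thr:
        model.update(x, y)  # how to learn: update the cognitive component now
        return "learn"
    return "reserve"        # when to learn: defer the sample to a later epoch

m = TinyModel()
print(regulate(m, 1.0, 0.5))  # 'learn' on a novel sample
```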

4.
The transfer functions in the hidden layer of radial basis function neural networks (RBFNNs) are Gaussian functions, which respond locally around the kernel centers. In most existing work, the local spatial response of a sample is computed inaccurately because all kernels share the same hyperspherical shape and the kernel parameters are set by experience; the fine structure of the local space is thus ignored during feature extraction. In addition, it is difficult to obtain strong feature-extraction ability at low computational cost. This paper therefore develops a multi-scale RBF kernel learning algorithm and proposes a new multi-layer RBF neural network model. For the samples of each class, the expectation-maximization (EM) algorithm is used to obtain multi-layer nested sub-distribution models with different local response ranges, called multi-scale kernels. The prior probability of each sub-distribution serves as the connection weight between the multi-scale kernels. Finally, feature extraction is implemented by multi-layer kernel subspace embedding. The multi-scale kernel learning model can describe the fine structure of the samples efficiently and accurately, and it is tolerant, to a certain extent, of the choice of the number of kernels. Using the prior probability of each kernel as its weight makes the feature-extraction process satisfy the Bayes rule, which enhances the interpretability of feature extraction in the network. The paper also proves theoretically that the proposed neural network is a generalized version of the original RBFNN. Experimental results show that the proposed method outperforms several state-of-the-art algorithms.
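A minimal sketch of the EM-based kernel construction, assuming one Gaussian mixture per class fitted with scikit-learn (the paper's multi-layer nesting is not reproduced here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: fit a per-class Gaussian mixture with EM to obtain multi-scale
# kernels; the mixture weights serve as prior-probability connection weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))          # toy samples of one class
gm = GaussianMixture(n_components=3, covariance_type="full").fit(X)

centers = gm.means_                    # kernel centers
shapes  = gm.covariances_              # anisotropic kernel shapes (not hyperspheres)
priors  = gm.weights_                  # priors used as connection weights

# Responsibilities = local responses of a sample to each multi-scale kernel
print(gm.predict_proba(X[:1]))
```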

5.
Median radial basis function neural network
A radial basis function (RBF) network is a two-layer neural network in which each hidden unit implements a kernel function. Each kernel is associated with an activation region of the input space, and its output is fed to an output unit. To find the parameters of a network with this structure, we consider two different statistical approaches. The first approach uses classical estimation in the learning stage and is based on the learning vector quantization algorithm and its second-order-statistics extension. After presenting this approach, we introduce the median radial basis function (MRBF) algorithm, based on robust estimation of the hidden-unit parameters. The proposed algorithm employs the marginal median for kernel location estimation and the median of absolute deviations (MAD) for scale estimation. A fast histogram-based implementation of the MRBF algorithm is provided. The theoretical performance of the two training algorithms is evaluated comparatively when estimating the network weights. The network is applied to pattern classification problems and to optical flow segmentation.
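The robust estimators named in the abstract are straightforward to sketch: the marginal median locates a kernel and the median of absolute deviations sets its scale, here on toy data with one gross outlier:

```python
import numpy as np

# Robust kernel-parameter estimators in the spirit of MRBF training:
# marginal median for the center, median absolute deviation for the scale.
def mrbf_kernel_params(X):
    center = np.median(X, axis=0)                   # marginal median
    scale = np.median(np.abs(X - center), axis=0)   # MAD per dimension
    return center, scale

rng = np.random.default_rng(1)
X = rng.normal(loc=2.0, scale=0.5, size=(100, 3))
X[0] = [50, 50, 50]                                 # gross outlier
print(mrbf_kernel_params(X))  # estimates are barely affected by the outlier
```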

6.
A novel supervised learning method is proposed that combines linear discriminant functions with neural networks, resulting in a tree-structured hybrid architecture. Through constructive learning, the binary-tree hierarchy is generated automatically by a controlled growing process for a specific supervised learning task. Unlike a classic decision tree, the linear discriminant functions are employed only at the intermediate levels of the tree, heuristically partitioning a large and complicated task into several smaller and simpler subtasks; these subtasks are then handled by component neural networks at the leaves of the tree. Growing and credit-assignment algorithms are developed to serve the constructive learning of the hybrid architecture. The proposed architecture provides an efficient way to apply existing neural networks (e.g., the multilayer perceptron) to large-scale problems. We applied the method to a universal approximation problem and several benchmark classification problems to evaluate its performance. Simulation results show that the proposed method yields better results and faster training than the multilayer perceptron.
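A minimal two-level instance of the hybrid tree (the toy data, the logistic-regression router, and the MLP leaves below are illustrative choices, not the paper's construction):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Internal node: a linear discriminant splits the task; leaves: small
# component networks solve the resulting subtasks.
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)          # XOR-like task

router = LogisticRegression().fit(X, (X[:, 0] > 0).astype(int))
mask = router.predict(X).astype(bool)
leaves = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X[m], y[m])
          for m in (mask, ~mask)]                # one leaf network per subtask

def predict(x):
    leaf = leaves[0] if router.predict(x)[0] else leaves[1]
    return leaf.predict(x)

print(predict(X[:1]))
```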

7.
The mixed use of different shapes of radial basis functions (RBFs) in radial basis function neural networks (RBFNNs) is investigated in this paper. For this purpose, we propose a generalised version of the standard RBFNN based on the generalised Gaussian distribution. The generalised radial basis function (GRBF) proposed here can reproduce other RBFs by changing a real parameter τ. In the proposed methodology, a hybrid evolutionary algorithm (HEA) is employed to estimate the number of hidden neurons and the centre, type and width of the RBF associated with each radial unit. To test the performance of the proposed methodology, an experimental study is presented on 20 datasets from the UCI repository. The GRBF neural network (GRBFNN) was compared to RBFNNs with Gaussian, Cauchy and inverse multiquadratic RBFs in the hidden layer and to other classifiers, including different RBFNN design methods, support vector machines (SVMs), a sparse probabilistic classifier (sparse multinomial logistic regression, SMLR) and a non-sparse but regularised probabilistic classifier (regularised multinomial logistic regression, RMLR). The GRBFNN models were better than the alternative RBFNNs on almost all datasets, producing the highest mean accuracy rank.
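One common parameterisation of a generalised-Gaussian radial basis (assumed here for illustration; the paper's exact form may differ) shows how a single real parameter τ interpolates between familiar RBF shapes:

```python
import numpy as np

# Generalised RBF based on the generalised Gaussian (one common form,
# assumed for illustration):  phi(r) = exp(-(r / sigma)**tau)
def grbf(r, sigma=1.0, tau=2.0):
    return np.exp(-(np.abs(r) / sigma) ** tau)

r = np.linspace(0, 3, 7)
print(grbf(r, tau=2.0))   # tau = 2 recovers the standard Gaussian RBF
print(grbf(r, tau=1.0))   # tau = 1 gives a Laplacian-like, heavier-tailed RBF
print(grbf(r, tau=8.0))   # large tau approaches a flat-topped, window-like RBF
```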

8.
Learning in neural networks can broadly be divided into two categories: off-line (or batch) learning and online (or incremental) learning. This paper reviews a variety of supervised neural networks with online learning capabilities, focusing on articles published in major indexed journals over the past decade (2003–2013). We examine a number of key neural network architectures, including feedforward neural networks, recurrent neural networks, fuzzy neural networks, and related networks. We exemplify how online learning methodologies are incorporated into these networks and highlight how they are applied to problems in different domains. A summary covering the different network architectures and their applications is presented.

9.
Convergent on-line algorithms for supervised learning in neural networks
We define online algorithms for neural network training based on the construction of multiple copies of the network, each trained on a different data block. It is shown that suitable training algorithms can be defined such that the disagreement between the different copies is asymptotically reduced and convergence toward stationary points of the global error function is guaranteed. Relevant features of the proposed approach are that the learning rate need not be forced to zero and that real-time learning is permitted.
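A toy sketch of the multiple-copies idea on a scalar quadratic problem (the consensus coupling below is an illustrative stand-in for the paper's update rules): each copy descends on its own data block while a coupling term asymptotically reduces the disagreement between copies:

```python
import numpy as np

rng = np.random.default_rng(3)
blocks = [rng.normal(loc=2.0, size=50) for _ in range(3)]  # three data blocks
w = np.zeros(3)                                            # one copy per block

lr, coupling = 0.05, 0.1
for t in range(500):
    # Per-copy gradient of the local loss 0.5 * mean((w_i - block)**2)
    grads = np.array([np.mean(w[i] - b) for i, b in enumerate(blocks)])
    w -= lr * grads + coupling * (w - w.mean())            # consensus term
print(w, "disagreement:", np.ptp(w))                       # copies nearly agree
```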

10.
This article presents a new family of reformulated radial basis function (RBF) neural networks that employ adjustable weighted norms to measure the distance between the training vectors and the centers of the radial basis functions. The reformulated RBF model introduced here incorporates norm weights that can be updated during learning to facilitate the implementation of the desired input-output mapping. Experiments involving classification and function approximation tasks verify that the proposed RBF neural networks outperform conventional RBF neural networks and reformulated RBF neural networks employing fixed Euclidean norms. Reformulated RBF neural networks with adjustable weighted norms are also strong competitors to conventional feedforward neural networks in terms of performance, implementation simplicity, and training speed.
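A minimal sketch of an adjustable weighted norm, assuming a diagonal weighting for illustration; learning the norm weights lets each radial unit stretch or shrink individual input dimensions:

```python
import numpy as np

# RBF response under a weighted norm (diagonal weighting assumed):
#   ||x - c||_w^2 = sum_i (w_i * (x_i - c_i))**2
def weighted_rbf(x, center, norm_weights, width=1.0):
    d2 = np.sum((norm_weights * (x - center)) ** 2)
    return np.exp(-d2 / (2 * width**2))

x, c = np.array([1.0, 2.0]), np.array([0.0, 0.0])
print(weighted_rbf(x, c, np.array([1.0, 1.0])))   # fixed Euclidean norm
print(weighted_rbf(x, c, np.array([2.0, 0.1])))   # learned anisotropic norm
```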

11.
Kernel orthonormalization in radial basis function neural networks
This paper deals with optimizing the computations involved in training radial basis function (RBF) neural networks. The main contribution is a method for calculating the network weights in which the RBF kernels are transformed into an orthonormal set of functions using standard Gram-Schmidt orthogonalization. This significantly reduces the computing time when the adopted RBF training scheme adds one kernel hidden node at a time to improve network performance. Another property of the method is that, after the RBF network weights are computed, the original network structure can be restored. A further strength is that the computing task can be decomposed into a number of parallel subtasks, yielding additional savings in computing time; the weight-calculation technique also has low storage requirements. These features make the method very attractive for hardware implementation. The paper presents a detailed derivation of the proposed weight-calculation procedure and demonstrates its validity on a number of data classification and function approximation problems.
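A small sketch of the key idea, assuming classical Gram-Schmidt on the columns of the design matrix of kernel activations; once the basis is orthonormal, the output weights reduce to inner products with the targets:

```python
import numpy as np

# Orthonormalise kernel activations with classical Gram-Schmidt.
def gram_schmidt(Phi):
    Q = np.zeros_like(Phi, dtype=float)
    for k in range(Phi.shape[1]):
        v = Phi[:, k] - Q[:, :k] @ (Q[:, :k].T @ Phi[:, k])
        Q[:, k] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 1)); y = np.sin(X[:, 0])
centers = np.linspace(-2, 2, 5)
Phi = np.exp(-(X - centers) ** 2)        # Gaussian kernel activations
Q = gram_schmidt(Phi)
w_ortho = Q.T @ y                        # weights in the orthonormal basis
print(np.linalg.norm(y - Q @ w_ortho))   # residual of the fitted network
```

Adding one kernel now only appends one column and one inner product, which is the source of the training-time savings the paper exploits.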

12.
This paper presents an axiomatic approach for constructing radial basis function (RBF) neural networks. The approach yields a broad variety of admissible RBF models, including those employing Gaussian RBFs; the form of the RBFs is determined by a generator function. New RBF models can be developed by selecting generator functions other than the exponential ones that lead to Gaussian RBFs. The paper also proposes a supervised learning algorithm based on gradient descent for training reformulated RBF neural networks constructed with this approach. A sensitivity analysis of the algorithm relates the properties of the RBFs to the convergence of gradient descent learning. Experiments involving a variety of reformulated RBF networks generated by linear and exponential generator functions indicate that gradient descent learning is simple, easily implementable, and produces RBF networks that perform considerably better than conventional RBF models trained by existing algorithms.
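One plausible reading of the generator-function construction (illustrative only; the paper's axiomatic formulation is not reproduced here): the radial response is derived from a generator g applied to the squared distance, with the exponential generator recovering Gaussian RBFs and a linear generator giving an inverse-multiquadric-like RBF:

```python
import numpy as np

# Illustrative construction: phi(v) = 1 / g(v), v = ||x - c||^2
def rbf_from_generator(g, v):
    return 1.0 / g(v)

v = np.linspace(0, 4, 5)                      # squared distances
g_exp = lambda v, s2=1.0: np.exp(v / s2)      # exponential generator
g_lin = lambda v, gamma2=1.0: v + gamma2      # linear generator

print(rbf_from_generator(g_exp, v))  # exp(-v/s2): the Gaussian RBF
print(rbf_from_generator(g_lin, v))  # 1/(v + gamma^2): inverse-multiquadric-like
```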

13.
Robust full Bayesian learning for radial basis networks

14.
Parameter dynamics of RBF networks under gradient algorithms
Analyzing the dynamics of the parameters during neural network learning is valuable for understanding the dynamical behavior of the network and for improving its structure and performance. This paper discusses the dynamics of the hidden-node parameters of an RBF network when a gradient algorithm is used to minimize the sum-of-squared-error loss function, i.e., the possible values the hidden-node parameters take after the algorithm converges. The main conclusions are as follows: if the loss function is nonzero after convergence, each hidden node settles at a weighted cluster center of the sample inputs; if the loss function is zero, redundant hidden nodes in the network shrink, decay, drift outward, or coincide with one another. Further experiments show that, for oversized RBF networks, the shrinking, outward drift, decay, and coincidence of redundant hidden nodes occur frequently.

15.
A supervised learning neural network (SLNN) coprocessor has been investigated and designed to enhance the performance of a digital soft-decision Viterbi decoder used for forward error correction in a digital communication channel with either fading plus additive white Gaussian noise (AWGN) or pure AWGN. The SLNN is designed to cooperate with a phase shift keying (PSK) demodulator, an automatic gain control (AGC) circuit, and a 3-bit quantizer (an analog-to-digital converter). It is trained to learn the best uniform quantization step size Δ_BEST as a function of the mean and the standard deviation of various sets of Gaussian-distributed random variables. The channel cutoff rate R_0 is employed to determine the quantization threshold step size Δ_BEST that minimizes the Viterbi decoder output bit error rate (BER). For a digital communication system with an SLNN coprocessor, consistent and substantial BER performance improvements are observed, ranging from 9% to 25% for a pure AWGN channel and from 25% to 70% for a fading channel. This neural network coprocessor approach can be generalized and applied to any digital signal processing system to reduce the performance losses associated with quantization and/or signal instability.

16.
Robust error measure for supervised neural network learning with outliers
Most supervised neural networks (NNs) are trained by minimizing the mean squared error (MSE) over the training set. In the presence of outliers, the resulting NN model can differ significantly from the underlying system that generates the data. Two different approaches are used to study the mechanism by which outliers affect the resulting models: the influence function and maximum likelihood. The mean log squared error (MLSE) is proposed as an error criterion that can easily be adopted by most supervised learning algorithms. Simulation results indicate that the proposed method is robust against outliers.
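A sketch of a mean log squared error criterion, using one common robust form assumed here for illustration (the paper's exact definition may differ); unlike the MSE, its value grows only logarithmically with an outlier's magnitude:

```python
import numpy as np

# Assumed illustrative form:  MLSE = mean( log(1 + e**2 / 2) )
def mlse(errors):
    return np.mean(np.log1p(errors**2 / 2.0))

e = np.array([0.1, 0.2, 10.0])     # 10.0 is an outlier
print(np.mean(e**2))               # MSE: dominated by the outlier
print(mlse(e))                     # MLSE: outlier's influence grows only slowly
```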

17.
Face recognition with radial basis function (RBF) neural networks
A general and efficient design approach using a radial basis function (RBF) neural classifier to cope with small training sets of high dimension, a problem frequently encountered in face recognition, is presented. To avoid overfitting and reduce the computational burden, face features are first extracted by principal component analysis (PCA). The resulting features are then processed by Fisher's linear discriminant (FLD) technique to obtain lower-dimensional discriminant patterns. A novel paradigm is proposed whereby data information is encapsulated in determining the structure and initial parameters of the RBF neural classifier before learning takes place. A hybrid learning algorithm is used to train the RBF neural network so that the dimension of the search space in the gradient paradigm is drastically reduced. Simulation results on the ORL database show that the system achieves excellent performance in terms of both classification error rates and learning efficiency.
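A compact sketch of the PCA-then-FLD feature pipeline on the Olivetti (ORL) faces, with a nearest-centroid rule standing in for the RBF neural classifier (scikit-learn provides no RBF network; the component counts are illustrative):

```python
from sklearn.datasets import fetch_olivetti_faces  # the ORL face database
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid

# PCA reduces dimension, FLD extracts discriminant patterns, then a simple
# classifier stands in for the RBF neural classifier of the paper.
pipe = make_pipeline(PCA(n_components=60),
                     LinearDiscriminantAnalysis(n_components=30),
                     NearestCentroid())
data = fetch_olivetti_faces()
print(cross_val_score(pipe, data.data, data.target, cv=3).mean())
```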

18.
We have developed a novel pulse-coupled neural network (PCNN) for speech recognition. One advantage of the PCNN is its biologically based dynamic structure, which uses feedback connections. To recall the memorized pattern, a radial basis function (RBF) is incorporated into the proposed PCNN. Simulation results show that the PCNN with an RBF can be useful for phoneme recognition. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.

19.
20.
This paper introduces evolving fuzzy neural networks (EFuNNs) as a means of implementing the evolving connectionist systems (ECOS) paradigm, which aims at building online, adaptive intelligent systems whose structure and functionality both evolve in time. EFuNNs evolve their structure and parameter values through incremental, hybrid supervised/unsupervised, online learning. They can accommodate new input data, including new features and new classes, through local element tuning. New connections and new neurons are created during the operation of the system. EFuNNs can learn spatio-temporal sequences adaptively through one-pass learning and automatically adapt their parameter values as they operate. Fuzzy or crisp rules can be inserted and extracted at any time during EFuNN operation. The characteristics of EFuNNs are illustrated on several case-study data sets for time series prediction and spoken word classification, and their performance is compared with traditional connectionist methods and systems. The applicability of EFuNNs as general-purpose online learning machines is discussed with respect to systems that learn from large databases, life-long learning systems, and online adaptive systems in different areas of engineering.
