Similar Articles
20 similar articles found (search time: 340 ms)
1.
In a pattern classification problem, one trains a classifier to recognize future unseen samples using a training dataset. In practice, one should not expect the trained classifier to correctly recognize samples dissimilar to the training dataset. Therefore, characterizing the generalization capability of a classifier over all unseen samples may not help in improving the classifier's accuracy. The localized generalization error model was proposed to bound from above the generalization mean square error for only those unseen samples similar to the training dataset. This error model is derived from the stochastic sensitivity measure (ST-SM) of the classifier. In this paper, we present the ST-SMs for various Gaussian-based classifiers: radial basis function neural networks and support vector machines. Finally, we compare decision-boundary visualizations in the input space using the training samples that yield the largest sensitivity measures against those using the support vectors.
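The stochastic sensitivity measure described above can be approximated by Monte Carlo simulation: perturb each training input with Gaussian noise and average the squared change in the network output. The sketch below assumes a single-output Gaussian RBF network; all function and parameter names are illustrative, not from the paper.

```python
import math
import random

def gaussian_rbf_output(x, centers, widths, weights):
    """Single-output Gaussian RBF network: sum_j w_j * exp(-||x - c_j||^2 / (2 s_j^2))."""
    out = 0.0
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-d2 / (2.0 * s * s))
    return out

def stochastic_sensitivity(samples, centers, widths, weights,
                           sigma=0.1, n_perturb=200, seed=0):
    """Monte-Carlo estimate of E[(f(x + dx) - f(x))^2] over Gaussian perturbations dx."""
    rng = random.Random(seed)
    total = 0.0
    for x in samples:
        fx = gaussian_rbf_output(x, centers, widths, weights)
        for _ in range(n_perturb):
            xp = [xi + rng.gauss(0.0, sigma) for xi in x]
            total += (gaussian_rbf_output(xp, centers, widths, weights) - fx) ** 2
    return total / (len(samples) * n_perturb)
```

Samples with the largest estimated sensitivity are the ones whose neighborhoods dominate the localized generalization error bound.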

2.
Recent research has linked backpropagation (BP) and radial basis function (RBF) network classifiers, trained by minimizing the standard mean square error (MSE), to two main topics in statistical pattern recognition (SPR), namely the Bayes decision theory and discriminant analysis. However, so far, the establishment of these links has resulted in only a few practical applications for training, using, and evaluating these classifiers. The paper aims at providing more of these applications. It first illustrates that while training a linear output BP network, the explicit utilization of the network discriminant capability leads to an improvement in its classification performance. Then, for linear output BP and RBF networks, the paper defines a new generalization measure that provides information about the closeness of the network classification performance to the optimal performance. The estimation procedure of this measure is described and its use as an efficient criterion for terminating the learning algorithm and choosing the network topology is explained. The paper finally proposes an upper bound on the number of hidden units needed by an RBF network classifier to achieve an arbitrary value of the minimized MSE. Experimental results are presented to validate all proposed applications.

3.
A pattern classification problem usually involves using high-dimensional features that make the classifier very complex and difficult to train. With no feature reduction, both training accuracy and generalization capability will suffer. This paper proposes a novel hybrid filter-wrapper-type feature subset selection methodology using a localized generalization error model. The localized generalization error model for a radial basis function neural network bounds from above the generalization error for unseen samples located within a neighborhood of the training samples. Iteratively, the feature making the smallest contribution to the generalization error bound is removed. Moreover, the novel feature selection method is independent of the sample size and is computationally fast. The experimental results show that the proposed method consistently removes large percentages of features with statistically insignificant loss of testing accuracy for unseen samples. In the experiments for two of the datasets, the classifiers built using feature subsets with 90% of features removed by our proposed approach yield average testing accuracies higher than those trained using the full set of features. Finally, we corroborate the efficacy of the model by using it to predict corporate bankruptcies in the US.

4.
A two-stage learning method for radial basis function neural networks
The key to building an RBF (radial basis function) neural network model lies in determining the hidden-center vectors, the basis-width parameters, and the number of hidden nodes. To design an RBF network that is structurally simple and generalizes well, this paper proposes a new two-stage learning design method. At the lower level, an algorithm combining regularized orthogonal least squares with D-optimal experimental design automatically constructs a parsimonious RBF network model; at the upper level, particle swarm optimization selects the best combination of the three learning parameters of the combined algorithm that affect generalization performance, namely the basis-width parameter, the regularization coefficient, and the D-optimality cost coefficient. Simulation examples demonstrate the effectiveness of the method.

5.
韩丽  史丽萍  徐治皋 《信息与控制》2007,36(5):604-609,615
Various ways of realizing a minimal-structure neural network that meets a given learning-error requirement are analyzed. Rough set theory is introduced into the structural design of neural networks, and a rough-set-based pruning algorithm for RBF neural networks is proposed and compared with existing pruning algorithms. Finally, the algorithm is applied to modeling the dynamic characteristics of superheated steam temperature in a thermal process. Simulation results show that the neural network model based on this algorithm achieves high modeling accuracy and good generalization ability.

6.
In this paper, an objective function for training a functional link network to tolerate multiplicative weight noise is presented. The objective function is similar in form to other regularizer-based functions, consisting of a mean square training error term and a regularizer term. Our study shows that under some mild conditions the derived regularizer is essentially the same as a weight decay regularizer. This explains why applying weight decay can also improve the fault tolerance of a radial basis function (RBF) network under multiplicative weight noise. Based on the objective function, a simple learning algorithm for a functional link network with multiplicative weight noise is derived. Finally, the mean prediction error of the trained network is analyzed. Simulated experiments on two artificial data sets and a real-world application verify the theoretical results.
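The weight-decay objective discussed above has the generic form J(w) = MSE + λ‖w‖². As a minimal sketch (not the paper's algorithm), the snippet below minimizes this objective by plain gradient descent for a linear-in-parameters model such as the output layer of a functional link or RBF network; all names are illustrative.

```python
def weight_decay_objective(weights, X, y, lam):
    """J(w) = mean squared error + lam * ||w||^2 (weight-decay regularizer)."""
    n = len(X)
    mse = sum((sum(w * xi for w, xi in zip(weights, x)) - t) ** 2
              for x, t in zip(X, y)) / n
    return mse + lam * sum(w * w for w in weights)

def train_gd(X, y, lam=0.01, lr=0.1, steps=500):
    """Gradient descent on the regularized objective for a linear-in-parameters model."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for x, t in zip(X, y):
            err = sum(wi * xi for wi, xi in zip(w, x)) - t
            for j, xj in enumerate(x):
                grad[j] += 2.0 * err * xj / n
        for j in range(len(w)):
            grad[j] += 2.0 * lam * w[j]   # decay term pulls weights toward zero
            w[j] -= lr * grad[j]
    return w
```

With λ > 0 the learned weights are shrunk toward zero, which is the mechanism by which weight decay limits the effect of multiplicative weight noise.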

7.

In this article, we propose a methodology for making a radial basis function network (RBFN) robust with respect to additive and multiplicative input noise. This is achieved by properly selecting the centers and widths of the radial basis function (RBF) units in the hidden layer. For this purpose, a set of self-organizing map (SOM) networks is first trained for center selection. To train a SOM network, random Gaussian noise is injected into the samples of each class of the data set. The number of SOM networks equals the number of classes in the data set, and each SOM network is trained separately on the samples of a particular class. The weight vector associated with a unit in the output layer of the SOM network for a class is used as the center of an RBF unit for that class. To determine the widths of the RBF units, the p-nearest-neighbor algorithm is applied class-wise. Proper selection of centers and widths makes the RBFN robust to input perturbations and to outliers in the data set. The weights between the hidden and output layers of the RBFN are obtained by the pseudo-inverse method. To test the robustness of the proposed method under additive and multiplicative noise, ten standard data sets have been used for classification. The proposed method has been compared with three existing methods in which the centers are generated in three ways: randomly, using the k-means algorithm, and based on a SOM network. Simulation results show the superiority of the proposed method, and the Wilcoxon signed-rank test confirms that it is statistically better than those methods.
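The p-nearest-neighbor width rule mentioned above can be sketched as follows, assuming (the abstract does not give the exact formula) that each unit's width is the mean distance from its center to its p nearest neighboring centers; the function name and default p are illustrative.

```python
import math

def pnn_widths(centers, p=2):
    """Width of each RBF unit = mean distance to its p nearest neighboring centers."""
    widths = []
    for i, ci in enumerate(centers):
        dists = sorted(math.dist(ci, cj) for j, cj in enumerate(centers) if j != i)
        p_eff = min(p, len(dists))            # guard against fewer than p neighbors
        widths.append(sum(dists[:p_eff]) / p_eff)
    return widths
```

Applied class-wise, the rule gives wider receptive fields where centers are sparse and narrower ones where they are dense, which is what makes the resulting RBFN less sensitive to input perturbations.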

8.
Recursive least square (RLS) is an efficient approach to neural network training. However, in the classical RLS algorithm, there is no explicit decay in the energy function. This leads to an unsatisfactory generalization ability for the trained networks. In this paper, we propose a generalized RLS (GRLS) model which includes a general decay term in the energy function for the training of feedforward neural networks. In particular, four different weight decay functions are discussed: the quadratic weight decay, the constant weight decay, and the newly proposed multimodal and quartic weight decays. By using the GRLS approach, not only is the generalization ability of the trained networks significantly improved, but more unnecessary weights are pruned, yielding a compact network. Furthermore, the computational complexity of the GRLS remains the same as that of the standard RLS algorithm. The advantages and tradeoffs of using different decay functions are analyzed and then demonstrated with examples. Simulation results show that our approach is able to meet the design goals: improving the generalization ability of the trained network while obtaining a compact network.

9.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, extreme learning machines (ELMs) have become a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons due to the nonoptimal input weights and hidden biases. In this paper, a new model selection method for ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of ELM is derived, which can be calculated with negligible computational cost once ELM training is finished. Furthermore, hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by minimizing this LOO bound and the norm of the output weights simultaneously in order to avoid over-fitting. Experiments on five UCI regression data sets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and the evolutionary ELM.
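The key reason a LOO bound can be cheap for ELM is that the output layer is linear in its parameters, so leave-one-out residuals need no refitting. The paper's ELM-specific bound is not reproduced here; as a general illustration of the same principle, the sketch below applies the standard PRESS identity e_i^loo = e_i / (1 − h_ii) to simple linear regression, where the leverages h_ii have a closed form.

```python
def press_loo_mse(xs, ys):
    """Leave-one-out MSE of simple linear regression via the PRESS identity:
    e_i^loo = e_i / (1 - h_ii), with closed-form leverages
    h_ii = 1/n + (x_i - xbar)^2 / sum_j (x_j - xbar)^2."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    intercept = ybar - slope * xbar
    total = 0.0
    for x, y in zip(xs, ys):
        e = y - (intercept + slope * x)        # ordinary residual from the full fit
        h = 1.0 / n + (x - xbar) ** 2 / sxx    # leverage of this sample
        total += (e / (1.0 - h)) ** 2
    return total / n
```

One pass over the fitted residuals replaces n separate refits, which is why such a criterion can guide hidden-node selection at negligible cost.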

10.
To achieve effective modeling and accurate prediction for the power supply for high-field magnets (PSHFM), we develop a comprehensive design methodology for information-granule-oriented radial basis function (RBF) neural networks. The proposed network comes with a collection of radial basis functions, which are structurally as well as parametrically optimized with the aid of information granulation and a genetic algorithm. The structure of the information-granule-oriented RBF neural network invokes two clustering methods, K-Means and fuzzy C-Means (FCM). The taxonomy of the resulting information granules relates to the format of the activation functions of the receptive fields used in RBF neural networks. The optimization of the network deals with a number of essential parameters as well as the underlying learning mechanisms (e.g., the width of the Gaussian function, the number of nodes in the hidden layer, and the fuzzification coefficient used in the FCM method). During the identification process, we are guided by a weighted objective function (performance index) in which a weight factor is introduced to achieve a sound balance between the approximation and generalization capabilities of the resulting model. The proposed model is applied to modeling the power supply for high-field magnets, where the model is developed in the presence of a limited dataset (the small size of the data being implied by the high cost of acquiring data) as well as strong nonlinear characteristics of the underlying phenomenon. The experimental results show that the proposed network exhibits high accuracy and generalization capabilities.

11.
Robust radial basis function neural networks
Function approximation arises in many applications. The radial basis function (RBF) network is one approach that has shown great promise for such problems because of its fast learning capacity. A traditional RBF network takes Gaussian functions as its basis functions and adopts the least-squares criterion as the objective function. However, it still suffers from two major problems. First, it is difficult to use Gaussian functions to approximate constant values: if a function has nearly constant values in some intervals, the RBF network is inefficient at approximating them. Second, when the training patterns incur large errors, the network interpolates these training patterns incorrectly. To cope with these problems, this paper proposes an RBF network based on sequences of sigmoidal functions and a robust objective function. The former replaces the Gaussian basis functions so that constant-valued functions can be approximated accurately, while the latter restrains the influence of large errors. Compared with traditional RBF networks, the proposed network demonstrates the following advantages: (1) better capability of approximating the underlying functions; (2) faster learning speed; (3) smaller network size; (4) high robustness to outliers.
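The abstract does not specify the robust objective used; a standard choice with the same intent is the Huber loss, which is quadratic for small residuals but only linear for large ones, so outliers contribute bounded gradient. The sketch below is that textbook loss, offered as an assumed stand-in rather than the paper's function.

```python
def huber_loss(residual, delta=1.0):
    """Quadratic for |r| <= delta, linear beyond: limits the influence of outliers."""
    r = abs(residual)
    if r <= delta:
        return 0.5 * r * r
    return delta * (r - delta / 2.0)
```

Summing this over the training residuals instead of squared errors is what keeps a few badly corrupted patterns from dominating the fit.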

12.
Efficient training of RBF neural networks for pattern recognition
The problem of training a radial basis function (RBF) neural network to distinguish two disjoint sets in R^n is considered. The network parameters can be determined by minimizing an error function that measures the degree of success in recognizing a given number of training patterns. In this paper, taking into account the specific feature of classification problems, where the goal is for the network outputs to take values above or below a fixed threshold, we propose an alternative to the classical approach based on the least-squares error function. In particular, the problem is formulated as a system of nonlinear inequalities, and a suitable error function, which depends only on the violated inequalities, is defined. A training algorithm based on this formulation is then presented. Finally, the results obtained by applying the algorithm to two test problems are compared with those obtained using the commonly adopted least-squares error function. The results show the effectiveness of the proposed approach for RBF network training in pattern recognition, mainly in terms of computational time savings.

13.
An algorithm for optimizing the generalization performance of neural networks
With the goal of improving neural network generalization, the concept of a generalization loss rate is proposed, and the trend of the current network error relative to the previous training cycle is analyzed. On this basis, a training objective function based on the generalization loss rate is derived. Combining this new objective function with a neural network training method based on quantum-behaved particle swarm optimization yields a new algorithm for optimizing network generalization performance. Experimental results show that, compared with the same algorithm without the generalization loss rate, both the convergence and the generalization performance of the network are clearly improved.

14.
蒙西, 乔俊飞, 李文静. 《智能系统学报》2018, 13(3): 331-338
To address the difficulty of determining the hidden-layer structure of radial basis function (RBF) neural networks, a network structure design algorithm based on fast density clustering is proposed. The algorithm applies the good clustering properties of fast density clustering to RBF network structure design: the points of highest density are found and used as hidden-layer neurons, thereby determining the number of hidden neurons and their initial parameters. At the same time, the properties of the Gaussian function are exploited to guarantee the activity of every hidden neuron. Finally, an improved second-order algorithm is used to train the network, improving its convergence speed and generalization ability. Simulations on typical nonlinear function approximation and nonlinear dynamic system identification show that the RBF neural network designed by fast density clustering has a compact structure, fast learning, and good generalization ability.
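The density-based center selection described above can be illustrated with a simplified sketch: score each sample by how many neighbors fall within a radius, then greedily keep the densest points that are sufficiently separated as hidden-neuron centers. The exact fast density clustering algorithm of the paper is not reproduced; the radius and separation parameters below are illustrative assumptions.

```python
import math

def density_peak_centers(X, radius=1.0, k=2, min_sep=None):
    """Pick the k densest samples as RBF centers: density = number of neighbors
    within `radius`; greedily keep high-density points at least min_sep apart."""
    if min_sep is None:
        min_sep = radius
    density = [sum(1 for q in X if 0.0 < math.dist(p, q) <= radius) for p in X]
    order = sorted(range(len(X)), key=lambda i: -density[i])
    centers = []
    for i in order:
        if all(math.dist(X[i], c) > min_sep for c in centers):
            centers.append(X[i])
        if len(centers) == k:
            break
    return centers
```

Each selected point then seeds one hidden neuron, giving both the hidden-layer size and its initial center parameters in a single pass.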

15.
Nonlinear system models constructed from radial basis function (RBF) networks can easily be over-fitted due to the noise on the data. While information criteria, such as the final prediction error (FPE), can provide a trade-off between training error and network complexity, the tunable parameters that penalise a large network model are hard to determine and are usually application dependent. This article introduces a new locally regularised, two-stage stepwise construction algorithm for RBF networks. The main objective is to produce a parsimonious network that generalises well over unseen data. This is achieved by utilising Bayesian learning within a two-stage stepwise construction procedure to penalise centres that mainly fit the noise. Specifically, each output layer weight is assigned a hyperparameter, a large value of which forces the associated output layer weight toward zero. Sparsity is achieved by removing irrelevant RBF centres from the network. The efficiency of the original two-stage construction method is retained. Numerical analysis shows that this new method needs only about half of the computation involved in the locally regularised orthogonal least squares (LROLS) alternative. Results from two simulation examples show that the nonlinear system models resulting from this new approach are superior in terms of both sparsity and generalisation capability.

16.
In this paper, an objective function for training a radial basis function (RBF) network to handle single-node open faults is presented. Based on this objective function, we propose a training method whose computational complexity is the same as that of the least mean squares (LMS) method. Simulation results indicate that our method greatly improves the fault tolerance of RBF networks compared with networks trained by the LMS method. Moreover, even if the tuning parameter is misspecified, the performance degradation is not significant.

17.
This paper develops a short-term load forecasting method based on the multiwavelet transform and multiple neural networks. Firstly, a variable-weight combination load forecasting model for power load is proposed and discussed. Secondly, the training data are extracted from power load data through the multiwavelet transform. Lastly, the obtained data are trained through the variable-weight combination model. A BP network, an RBF network, and a wavelet neural network are adopted as the training networks, and the outputs of the three neural networks are fed into a three-layer feedforward neural network for the load forecast. Simulation results show that the accuracy of the proposed combination load forecasting model is higher than that of any single network model and of the three-network combination model without the multiwavelet-transform preprocessing.

18.
The spline weight function neural network overcomes many drawbacks of traditional neural networks (such as BP and RBF), for example local minima and slow convergence. Its topology is simple, and after training the network weights are functions of the input samples, so the network can exactly memorize the trained samples, reflect the information features of the samples well, and attain the global minimum. To overcome the drawbacks of traditional networks in fingerprint recognition, this paper exploits these advantages of the spline weight function neural network and describes its application to fingerprint recognition. First, features are extracted from fingerprint images by principal component analysis; then the spline weight function neural network performs the recognition; finally, Matlab simulations comparing it with other traditional neural networks verify the feasibility of the spline weight function approach for fingerprint recognition and its higher efficiency than traditional neural networks.

19.
In this paper we present and analyze a new structure for designing a radial basis function neural network (RBFNN). In the training phase, the input layer of the RBFNN is augmented with the desired output vector. The generalization phase involves the following steps: 1) identify the cluster to which a previously unseen input vector belongs; 2) augment the input layer with an average of the targets of the input vectors in the identified cluster; and 3) use the augmented network to estimate the unknown target. It is shown that, under some reasonable assumptions, the generalization error function admits an upper bound in terms of the quantization errors minimized when determining the centers of the proposed method over the training set and the difference between training samples and generalization samples in a deterministic setting. When the difference between the training and generalization samples goes to zero, the upper bound can be made arbitrarily small by increasing the number of hidden neurons. Computer simulations verified the effectiveness of the proposed method.

20.
A novel generalized RBF neural network and its training method
A novel generalized RBF neural network model is proposed in which the radial-basis output weights are replaced by weight functions, with higher-order functions taking the place of linear weighting. A learning method for the network is given, and simulations analyze how parameter choices such as the hidden-unit width and the power of the weight function affect approximation accuracy and training time. The results show that, compared with the traditional RBF neural network, this network has good approximation ability and faster computation, and holds broad application prospects in system identification and control.

