Similar Documents
20 similar documents found (search time: 31 ms).
1.
The central problem in training a radial basis function neural network (RBFNN) is the selection of hidden layer neurons, which includes the selection of the centers and widths of those neurons. In this paper, we propose an enhanced swarm intelligence clustering (ESIC) method to select hidden layer neurons, and then train a cosine RBFNN using a gradient descent learning process. We also apply this new method to the classification of deep Web sources. Experimental results show that the average Precision, Recall, and F-measure of our ESIC-based RBFNN classifier are higher than those of BP, Support Vector Machine (SVM), and OLS RBF classifiers on our deep Web source classification problems.
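For readers unfamiliar with the overall pipeline, the following is a minimal sketch of an RBFNN whose hidden neurons come from a clustering step and whose output weights are then fit to the data. It is not the authors' ESIC method: k-means stands in for the swarm intelligence clustering, a Gaussian kernel stands in for the cosine RBF, and a least-squares solve replaces the gradient descent learning process.

```python
# Minimal sketch of an RBF network with cluster-derived hidden neurons (not the ESIC method).
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, Y, n_hidden=10):
    km = KMeans(n_clusters=n_hidden, n_init=10).fit(X)
    centers = km.cluster_centers_
    # width of each neuron: mean distance to the other centers (a common heuristic)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    widths = d.sum(axis=1) / (n_hidden - 1) + 1e-8
    # hidden-layer responses (Gaussian kernel) for all training samples
    H = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
               / (2 * widths ** 2))
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)   # output weights by least squares
    return centers, widths, W

def predict_rbf(X, centers, widths, W):
    H = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
               / (2 * widths ** 2))
    return H @ W
```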

2.
In this work, we propose a self-adaptive radial basis function neural network (RBFNN)-based method for high-speed recognition of human faces. The variations between images of a person under varying pose, facial expression, illumination, etc., can be quite high, so to achieve a high recognition rate it is necessary to consider the structural information lying within these images in the classification process. In the present study, this is realized by modeling each of the training images as a hidden layer neuron in the proposed RBFNN. To classify a facial image, a confidence measure is imposed on the outputs of the hidden layer neurons to reduce the influence of images belonging to other classes. This makes the RBFNN self-adaptive in choosing the subset of hidden layer neurons, lying in the close neighborhood of the input image, that is considered when classifying the input image. The process reduces the computation time at the output layer of the RBFNN by neglecting the ineffective radial basis functions, enabling the proposed method to recognize face images at high speed and within the interframe period of video. The performance of the proposed method has been evaluated in terms of sensitivity and specificity on two popular face recognition databases, the ORL and the UMIST face databases. On the ORL database, the best average sensitivity (recognition) and specificity rates are found to be 97.30 and 99.94%, respectively, using five samples per person in the training set. On the UMIST database, the corresponding rates are 96.36 and 99.81%, respectively, using eight samples per person in the training set. The experimental results indicate that the proposed method outperforms some existing face recognition approaches.

3.
A large number of pages on the Internet are generated dynamically from back-end databases and cannot be found by traditional search engines; these pages are called the deep Web, and most deep Web information is structured. Classifying these structured deep Web databases by domain is a key problem in obtaining deep Web information. To address the high implementation cost and low efficiency of existing deep Web database classification methods, this paper proposes a deep Web database classification algorithm based on the granulation of Web logs, and the classification performance of the method is verified experimentally.

4.
This paper presents a fuzzy hybrid learning algorithm (FHLA) for the radial basis function neural network (RBFNN). The method determines the number of hidden neurons in the RBFNN structure by using cluster validity indices with a majority rule, while the characteristics of the hidden neurons are initialized based on advanced fuzzy clustering. The FHLA combines the gradient method and the linear least-squared method for adjusting the RBF parameters and the neural network connection weights. The RBFNN with the proposed FHLA is used as a classifier in a face recognition system. The inputs to the RBFNN are feature vectors obtained by combining shape information and principal component analysis. The designed RBFNN, while providing faster convergence in the training phase, requires a hidden layer with fewer neurons and is less sensitive to the training and testing patterns. The efficiency of the proposed method is demonstrated on the ORL and Yale face databases, and comparison with other algorithms indicates that the FHLA yields excellent recognition rates in human face recognition.

5.
For classification applications, the role of the hidden layer neurons of a radial basis function (RBF) neural network can be interpreted as a function that maps input patterns from a nonlinearly separable space to a linearly separable space. In the new space, the responses of the hidden layer neurons form new feature vectors, and the discriminative power is then determined by the RBF centers. In the present study, we propose to choose RBF centers based on a Fisher ratio class separability measure, with the objective of achieving maximum discriminative power. We implement this idea using a multistep procedure that combines the Fisher ratio, an orthogonal transform, and a forward selection search method. Our motivation for employing the orthogonal transform is to decouple the correlations among the responses of the hidden layer neurons so that the class separability provided by individual RBF neurons can be evaluated independently. The strengths of our method are twofold: first, it selects a parsimonious network architecture; second, it selects centers that provide large class separation.
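A minimal sketch of the center-selection idea, under simplifying assumptions: candidate centers are the training points themselves, each candidate is scored by the Fisher ratio of its own RBF response, and the paper's orthogonal decorrelation step is omitted, so the greedy selection below evaluates candidates independently rather than jointly.

```python
# Minimal sketch of Fisher-ratio-based greedy selection of RBF centers (simplified).
import numpy as np

def fisher_ratio(response, y):
    classes = np.unique(y)
    means = np.array([response[y == c].mean() for c in classes])
    vars_ = np.array([response[y == c].var() for c in classes])
    return means.var() / (vars_.mean() + 1e-12)   # between-class vs. within-class spread

def select_centers(X, y, width=1.0, n_centers=5):
    selected, candidates = [], list(range(len(X)))
    for _ in range(n_centers):
        scores = []
        for i in candidates:
            # RBF response of candidate center X[i] on all training samples
            r = np.exp(-np.linalg.norm(X - X[i], axis=1) ** 2 / (2 * width ** 2))
            scores.append(fisher_ratio(r, y))
        best = candidates[int(np.argmax(scores))]
        selected.append(best)
        candidates.remove(best)
    return X[selected]
```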

6.
A novel method based on rough sets (RS) and the affinity propagation (AP) clustering algorithm is developed to optimize a radial basis function neural network (RBFNN). First, attribute reduction (AR) based on RS theory is applied as a preprocessor of the RBFNN to eliminate noise and redundant attributes of the datasets while determining the number of neurons in the input layer. Second, an AP clustering algorithm is used to search for the centers and their widths without a priori knowledge about the number of clusters; these parameters are transferred to the RBF units of the RBFNN as the centers and widths of the RBF functions. The weights connecting the hidden layer and output layer are then evaluated and adjusted using the least square method (LSM) according to the output of the RBF units and the desired output. Experimental results show that the proposed method gives the RBFNN a more powerful generalization capability than conventional methods.
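A minimal sketch of the AP-based construction described above, assuming scikit-learn's AffinityPropagation and omitting the rough-set attribute reduction step: AP supplies the number of hidden neurons and their centers, widths are taken from the within-cluster spread, and the output weights are solved by least squares.

```python
# Minimal sketch: AP clustering picks RBF centers, least squares fits output weights.
import numpy as np
from sklearn.cluster import AffinityPropagation

def build_ap_rbf(X, Y):
    ap = AffinityPropagation(random_state=0).fit(X)
    centers = ap.cluster_centers_
    # width of each RBF unit: average distance of its cluster members to the center
    widths = np.array([
        np.linalg.norm(X[ap.labels_ == k] - c, axis=1).mean() + 1e-8
        for k, c in enumerate(centers)
    ])
    H = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
               / (2 * widths ** 2))
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)   # least-square output weights
    return centers, widths, W
```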

7.
Many methods have been used to discriminate magnetizing inrush from internal faults in power transformers. Most of them follow a deterministic approach, i.e. they rely on an index and a fixed threshold. This article proposes two approaches (NNPCA and RBFNN) for power transformer differential protection and addresses the challenging task of distinguishing magnetizing inrush from internal faults. These approaches are based on pattern recognition techniques. In the proposed algorithm, a Neural Network Principal Component Analysis (NNPCA) and a Radial Basis Function Neural Network (RBFNN) are used as classifiers. Principal component analysis is used to preprocess the data from the power system in order to eliminate redundant information and enhance the hidden pattern of the differential current, so as to discriminate internal faults from inrush and over-excitation conditions. The presented algorithm also makes use of the voltage-to-frequency ratio and the amplitude of the differential current for detecting the transformer operating condition. For both proposed cases, the optimal number of neurons has been considered in the neural network architectures, and the effect of hidden layer neurons on the classification accuracy is analyzed. The performance of the FFBPNN (Feed Forward Back Propagation Neural Network), NNPCA, and RBFNN based classifiers is compared with the conventional harmonic restraint method based on the Discrete Fourier Transform (DFT) in distinguishing between magnetizing inrush and internal fault conditions of a power transformer. The algorithm is evaluated using simulations performed with PSCAD/EMTDC and MATLAB. The results confirm that the RBFNN provides faster, more stable, and more reliable recognition of transformer inrush and internal fault conditions.

8.
In this paper, a novel self-adaptive extreme learning machine (ELM) based on affinity propagation (AP) is proposed to optimize the radial basis function neural network (RBFNN). As is well known, the parameters of the original ELM developed by G.-B. Huang are determined randomly, so a set of parameters that is optimal for an RBFNN trained by the ELM algorithm cannot be obtained objectively for different realistic datasets. The AP algorithm automatically produces a set of clustering centers for a given dataset; from its results we obtain the number of clusters and the radius of each cluster, which are then used to initialize the number and widths of the hidden layer neurons in the RBFNN, i.e. the parameters of the coefficient matrix H of the ELM. This avoids the subjective prior knowledge and randomness involved in training an RBFNN. Experimental results show that the proposed method gives the RBFNN a more powerful generalization capability than the conventional ELM.
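For contrast with the AP-based initialization above, here is a minimal sketch of a plain ELM with a Gaussian RBF hidden layer in the spirit of Huang's original formulation: centers and widths are drawn at random and only the output weights are learned, via the Moore-Penrose pseudo-inverse. The function and parameter names are illustrative only.

```python
# Minimal sketch of a random-parameter ELM with an RBF hidden layer (the baseline the
# paper improves on by replacing the random step with AP-derived centers/widths).
import numpy as np

def elm_train(X, T, n_hidden=20, rng=np.random.default_rng(0)):
    centers = X[rng.choice(len(X), n_hidden, replace=False)]   # random centers
    widths = rng.uniform(0.5, 2.0, n_hidden)                   # random widths
    # hidden-layer output matrix H
    H = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
               / (2 * widths ** 2))
    beta = np.linalg.pinv(H) @ T                                # beta = H^+ T
    return centers, widths, beta
```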

9.
An RBF Network Identifier with an Optimal Clustering Algorithm and Its Applications
An RBF neural network is used as the model framework to solve the identification problem for nonlinear systems. For the structure identification of the RBF network, an optimal clustering algorithm is proposed, which determines the number of hidden-layer nodes from the input samples; a new second-order recursive learning algorithm is then used to estimate the parameters and weights of the RBF network. This hybrid algorithm solves the structure and parameter identification problems of the RBF network simultaneously and greatly improves the modeling and prediction accuracy of the network. An application example demonstrates the effectiveness of the proposed scheme.

10.
In this paper we present and analyze a new structure for designing a radial basis function neural network (RBFNN). In the training phase, the input layer of the RBFNN is augmented with the desired output vector. The generalization phase involves the following steps: 1) identify the cluster to which a previously unseen input vector belongs; 2) augment the input layer with an average of the targets of the input vectors in the identified cluster; and 3) use the augmented network to estimate the unknown target. It is shown that, under some reasonable assumptions, the generalization error function admits an upper bound in terms of the quantization errors minimized over the training set when determining the centers of the proposed method, and of the difference between training samples and generalization samples in a deterministic setting. When the difference between the training and generalization samples goes to zero, the upper bound can be made arbitrarily small by increasing the number of hidden neurons. Computer simulations verify the effectiveness of the proposed method.

11.
For the topic classification of Deep Web data sources, we first study the influence of feature terms at different positions on the domain classification of Deep Web query interfaces and propose a rank-weighted feature selection method, RankFW. We then propose a domain-knowledge-dependent quantum self-organizing feature map neural network model, DR-QSOFM, together with its classification algorithm; the model depends on the feature vectors and target vectors to different degrees at different stages of training, so that the winning neurons in the competitive layer...

12.
Sperduti and Starita proposed a new type of neural network which consists of generalized recursive neurons for classification of structures. In this paper, we propose an entropy-based approach for constructing such neural networks for classification of acyclic structured patterns. Given a classification problem, the architecture, i.e., the number of hidden layers and the number of neurons in each hidden layer, and all the values of the link weights associated with the corresponding neural network are automatically determined. Experimental results have shown that the networks constructed by our method can have a better performance, with respect to network size, learning speed, or recognition accuracy, than the networks obtained by other methods.

13.
The generalization error bounds found by current error models, using the number of effective parameters of a classifier and the number of training samples, are usually very loose. These bounds are intended for the entire input space. However, support vector machines (SVM), radial basis function neural networks (RBFNN), and multilayer perceptron neural networks (MLPNN) are local learning machines and treat unseen samples near the training samples as more important. In this paper, we propose a localized generalization error model which bounds from above the generalization error within a neighborhood of the training samples using a stochastic sensitivity measure. It is then used to develop an architecture selection technique that maximizes a classifier's coverage of unseen samples subject to a specified generalization error threshold. Experiments using 17 University of California at Irvine (UCI) data sets show that, in comparison with cross validation (CV), sequential learning, and two other ad hoc methods, our technique consistently yields the best testing classification accuracy with fewer hidden neurons and less training time.
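A minimal sketch of how the stochastic sensitivity term could be estimated by Monte Carlo sampling, assuming unseen samples are modeled as training samples perturbed uniformly within a Q-neighborhood; the closed-form expressions derived in the paper are not reproduced here.

```python
# Minimal sketch: Monte Carlo estimate of output sensitivity within a Q-neighborhood.
import numpy as np

def stochastic_sensitivity(predict, X, q=0.1, n_draws=50, rng=np.random.default_rng(0)):
    """predict: callable mapping an (n, d) array to model outputs; X: training inputs."""
    base = predict(X)
    diffs = []
    for _ in range(n_draws):
        # uniform perturbation inside the Q-neighborhood of each training sample
        X_pert = X + rng.uniform(-q, q, size=X.shape)
        diffs.append((predict(X_pert) - base) ** 2)
    return np.mean(diffs)   # E[(f(x + dx) - f(x))^2] over the neighborhood

# Usage: sens = stochastic_sensitivity(lambda Z: model.predict(Z), X_train)
# Architectures can then be compared by training error plus this sensitivity term.
```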

14.

This paper presents an adaptive technique for obtaining the centers of the hidden layer neurons of a radial basis function neural network (RBFNN) for face recognition. The proposed technique uses the firefly algorithm to obtain the natural sub-clusters of training face images formed by variations in pose, illumination, expression, occlusion, etc. The movement of fireflies in a hyper-dimensional input space is controlled by tuning the parameter gamma (γ) of the firefly algorithm, which plays an important role in maintaining the trade-off between effective search space exploration, firefly convergence, overall computational time, and recognition accuracy. The technique is novel in that it combines the advantages of the evolutionary firefly algorithm and the RBFNN in adaptively evolving the number and centers of hidden neurons. Its strength lies in its fast convergence, improved face recognition performance, reduced feature selection overhead, and algorithmic stability. The technique is validated on the benchmark face databases ORL, Yale, AR, and LFW, and the average face recognition accuracies achieved on these databases outperform some existing face recognition techniques.
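A minimal sketch of the firefly movement rule applied to candidate RBF centers in the input space. Brightness is taken here to be local training-data density, which is an assumption made for illustration (the paper's objective differs); the parameter gamma controls how quickly attractiveness decays with distance, mirroring the trade-off discussed above.

```python
# Minimal sketch of firefly-based evolution of candidate RBF centers (illustrative only).
import numpy as np

def firefly_centers(X, n_fireflies=10, n_iter=50, gamma=1.0, beta0=1.0, alpha=0.1,
                    rng=np.random.default_rng(0)):
    F = X[rng.choice(len(X), n_fireflies, replace=False)].astype(float)  # initial fireflies

    def brightness(p):                       # assumed brightness: local data density
        return np.exp(-np.linalg.norm(X - p, axis=1) ** 2).sum()

    for _ in range(n_iter):
        I = np.array([brightness(p) for p in F])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if I[j] > I[i]:              # move firefly i toward brighter firefly j
                    r2 = np.sum((F[j] - F[i]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    F[i] += beta * (F[j] - F[i]) + alpha * (rng.random(F.shape[1]) - 0.5)
    return F                                 # final positions serve as hidden-neuron centers
```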

15.
The paper is focused on demonstrating the advantages of deep learning approaches over an ordinary shallow neural network through comparative application to image classification on the popular benchmark databases FERET and MNIST. An autoassociative neural network is used as a standalone program realizing nonlinear principal component analysis, extracting the most informative features of the input data before the networks are compared as classifiers. A study of the optimal choice of activation function and of the normalization transformation of the input data allows the efficiency of the autoassociative program to be improved, and a further study of its denoising properties demonstrates high efficiency even on noisy data. Three types of neural networks are compared: a feed-forward neural net with one hidden layer, a deep network with several hidden layers, and a deep belief network with several pretraining layers realized as restricted Boltzmann machines. The number of hidden layers and the number of hidden neurons in them were chosen by a cross-validation procedure to balance network size against classification efficiency. The results of the comparative study demonstrate the clear advantage of deep networks, as well as the denoising power of autoencoders. Both a multiprocessor graphics card and cloud services are used to speed up the calculations. The paper is oriented to specialists in concrete fields of scientific or experimental applications who already have some knowledge of artificial neural networks, probability theory, and numerical methods.

16.
Reducing the dimensionality of a classification problem produces a more computationally efficient system. Since the dimensionality of a classification problem is equivalent to the number of neurons in the first hidden layer of a network, this work shows how to eliminate neurons on that layer and simplify the problem. In cases where the dimensionality cannot be reduced without some degradation in classification performance, we formulate and solve a constrained optimization problem that allows a trade-off between dimensionality and performance. We introduce a novel penalty function and combine it with bilevel optimization to solve the constrained problem. The performance of our method on synthetic and applied problems is superior to other known penalty functions such as weight decay, weight elimination, and Hoyer's function. An example of dimensionality reduction for hyperspectral image classification demonstrates the practicality of the new method. Finally, we show how the method can be extended to multilayer and multiclass neural network problems.

17.
In the era of big data, the vast majority of data do not come from the surface Web, the Web that is interconnected by hyperlinks and indexed by most general-purpose search engines. Instead, troves of valuable data often reside in the deep Web, the Web that is hidden behind query interfaces. Since numerous applications, such as data integration and vertical portals, require deep Web data, various crawling methods have been developed for exhaustively harvesting a deep Web data source at minimal (or near-minimal) cost. Most existing crawling methods assume that all the documents matched by queries are returned. In practice, data sources often return only the top k matches. This makes exhaustive data harvesting difficult: highly ranked documents are returned multiple times, while documents ranked low have little chance of being returned. In this paper, we decompose this problem into two orthogonal sub-problems, the query bias and ranking bias problems, and propose a document-frequency-based crawling method to overcome the ranking bias problem. The rationale of our method is to use queries whose document frequencies lie within a specified range, so as to avoid the combined effect of search ranking and the return limit and to significantly reduce the difficulty of crawling a ranked data source. The method is extensively tested on a variety of datasets and compared with two existing methods. The experimental results demonstrate that our method outperforms the two algorithms by 58% and 90% on average, respectively.
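A minimal sketch of document-frequency-based query selection, assuming a pool of already-harvested documents from which term document frequencies can be estimated: only terms whose document frequency falls inside a target range are issued as queries, so each query matches fewer documents than the return limit and the ranking bias is sidestepped.

```python
# Minimal sketch: pick crawl queries by document frequency within a target range.
from collections import Counter

def pick_queries(harvested_docs, lo=5, hi=100):
    """harvested_docs: list of token lists already downloaded from the source."""
    df = Counter()
    for doc in harvested_docs:
        df.update(set(doc))                  # document frequency, not term frequency
    return [term for term, n in df.items() if lo <= n <= hi]

# Usage: queries = pick_queries(sample_docs, lo=5, hi=100); issue each query, add any
# newly returned documents to the harvested pool, and repeat until the source is
# (near) exhausted.
```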

18.
During training of the incremental extreme learning machine (I-ELM), the input weights and the thresholds of the hidden neurons are obtained randomly, so the output weights of some hidden neurons become too small; such neurons contribute little to the network output and become ineffective. This not only makes the network more complex but also reduces its stability. To address this problem, this paper proposes an improved method that adds a bias to the hidden-layer output of I-ELM (called II-ELM) and proves the existence of this bias. Finally, I-ELM and II-ELM are compared in simulations on classification and regression problems, verifying the effectiveness of II-ELM.
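A minimal sketch of the basic I-ELM loop that the proposed improvement targets: hidden neurons are added one at a time with random parameters, the output weight of each new neuron is computed analytically from the current residual error, and the residual is updated; the II-ELM bias on the hidden-layer output is not shown.

```python
# Minimal sketch of incremental ELM (I-ELM) with sigmoid hidden neurons.
import numpy as np

def i_elm(X, T, max_hidden=50, rng=np.random.default_rng(0)):
    """X: (n, d) input matrix; T: (n,) target vector."""
    n, d = X.shape
    residual = np.asarray(T, dtype=float).copy()
    neurons = []
    for _ in range(max_hidden):
        a = rng.uniform(-1, 1, d)                  # random input weights
        b = rng.uniform(-1, 1)                     # random hidden threshold
        h = 1.0 / (1.0 + np.exp(-(X @ a + b)))     # sigmoid hidden output
        beta = (residual @ h) / (h @ h)            # analytic output weight for this neuron
        residual = residual - beta * h             # update residual error
        neurons.append((a, b, beta))
    return neurons
```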

19.
Construction of Neural Networks for Remote Sensing Image Classification
Radial basis function neural networks and multilayer perceptron neural networks have similar topological structures and are both mainly used for target classification. This paper compares the two models, proposes an effective method for constructing a radial basis function neural network classifier, and applies the constructed classifier to remote sensing image classification experiments, obtaining fairly good results.

20.
Ontology-Based Classification of Deep Web Query Interfaces
Existing classification research mainly concentrates on the classification of text or Web documents, and little work addresses the classification of deep Web query interfaces. A deep Web source consists of query interfaces and query results; given the large number of deep Web sources, classifying their query interfaces is a key step toward classified integration and retrieval of the deep Web. This paper proposes an ontology-based method for classifying deep Web query interfaces, comprising a conceptual model of the classification ontology and the deep Web vector space model (VSM) derived from it. Experiments show that this classification method achieves good results, with an average precision of 91.6% and an average recall of 92.4%.
