Similar Documents
20 similar documents found
1.
Compared with the radial basis function (RBF) neural network, the extreme learning machine (ELM) trains faster and generalizes better, while the affinity propagation (AP) clustering algorithm can determine the number of clusters automatically. This paper therefore proposes a multi-label learning model, ML-AP-RBF-RELM, that combines AP clustering, multi-label RBF (ML-RBF) and regularized ELM (RELM). First, the input layer is mapped with ML-RBF, and AP clustering automatically determines the number of clusters for each label, from which the number of hidden-layer nodes is computed. Next, K-means clustering with the per-label cluster counts determines the centers of the hidden-node RBF functions. Finally, RELM rapidly solves for the connection weights from the hidden layer to the output layer. Experiments show that ML-AP-RBF-RELM performs well.
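
As an illustrative aside, the closed-form output-weight solution that RELM contributes to this model can be sketched as follows; this is a minimal sketch assuming a Gaussian RBF hidden layer and a user-chosen regularization constant C, with the AP and K-means steps omitted and all data, centers and widths as hypothetical placeholders.

import numpy as np

def rbf_hidden_output(X, centers, widths):
    # Gaussian RBF activations: H[i, j] = exp(-||x_i - c_j||^2 / (2 * s_j^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

def relm_output_weights(H, T, C=1.0):
    # Regularized least-squares (ridge) solution: W = (H^T H + I / C)^(-1) H^T T
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)

# Toy usage with random placeholder data (stand-ins for the paper's ML-RBF mapping).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                        # inputs
T = rng.integers(0, 2, size=(100, 3)).astype(float)  # multi-label 0/1 targets
centers = X[rng.choice(100, size=10, replace=False)]
widths = np.full(10, 1.0)
H = rbf_hidden_output(X, centers, widths)
W = relm_output_weights(H, T, C=10.0)
scores = H @ W   # label scores; threshold (e.g. at 0.5) to predict label sets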

2.
A novel method based on rough sets (RS) and the affinity propagation (AP) clustering algorithm is developed to optimize a radial basis function neural network (RBFNN). First, attribute reduction (AR) based on RS theory, as a preprocessor of the RBFNN, is presented to eliminate noise and redundant attributes of datasets while determining the number of neurons in the input layer of the RBFNN. Second, an AP clustering algorithm is proposed to search for the centers and their widths without a priori knowledge about the number of clusters. These parameters are transferred to the RBF units of the RBFNN as the centers and widths of the RBF function. Then the weights connecting the hidden layer and the output layer are evaluated and adjusted using the least squares method (LSM) according to the output of the RBF units and the desired output. Experimental results show that the proposed method has a more powerful generalization capability than conventional methods for an RBFNN.
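
A minimal sketch of the AP-then-least-squares pipeline described above, assuming sklearn's AffinityPropagation for exemplar detection; the rough-set attribute-reduction preprocessing is omitted, and the per-cluster width heuristic (mean member-to-exemplar distance) is an assumption rather than the paper's exact rule.

import numpy as np
from sklearn.cluster import AffinityPropagation

def fit_rbf_via_ap(X, T):
    # AP selects exemplars without a preset cluster count; use them as RBF centers.
    ap = AffinityPropagation(random_state=0).fit(X)
    centers = ap.cluster_centers_
    # Width heuristic (assumed): mean distance of each cluster's members to its exemplar.
    widths = np.array([
        np.linalg.norm(X[ap.labels_ == k] - c, axis=1).mean() + 1e-8
        for k, c in enumerate(centers)
    ])
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * widths ** 2))
    # Least squares for the hidden-to-output weights, as in the paper's LSM step.
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return centers, widths, W

# Example: centers, widths, W = fit_rbf_via_ap(X_train, T_train)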

3.
In this paper, we propose a methodology for training a new model of artificial neural network called the generalized radial basis function (GRBF) neural network. This model is based on the generalized Gaussian distribution, which parametrizes the Gaussian distribution by adding a new parameter τ. The generalized radial basis function allows different radial basis functions to be represented by updating the new parameter τ; for example, when τ=2, the GRBF reduces to the standard Gaussian radial basis function. The model parameters are optimized through a modified version of the extreme learning machine (ELM) algorithm. In the proposed methodology (MELM-GRBF), the centers of each GRBF are taken randomly from the patterns of the training set, and the radius and τ values are determined analytically so that the model fulfils two constraints: locality and coverage. A thorough experimental study is presented to test its overall performance. Fifteen datasets were considered, including binary and multi-class problems, all of them taken from the UCI repository. MELM-GRBF was compared to ELM with sigmoidal, hard-limit, triangular basis and radial basis functions in the hidden layer, and to the ELM-RBF methodology proposed by Huang et al. (2004) [1]. MELM-GRBF obtained better accuracy than the corresponding sigmoidal, hard-limit, triangular basis and radial basis functions for almost all datasets, producing the highest mean accuracy rank when compared with these other basis functions over all datasets.
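
The generalized basis function itself is easy to state in code; the sketch below assumes the form exp(-(||x-c||/σ)^τ) (normalization conventions may differ from the paper's) and shows that τ=2 gives the familiar Gaussian shape.

import numpy as np

def grbf(X, center, sigma, tau):
    # Generalized Gaussian basis: exp(-(||x - c|| / sigma)^tau).
    # tau = 2 recovers the usual Gaussian RBF shape (up to the 1/2-factor convention
    # folded into sigma); smaller tau gives heavier tails, larger tau flatter tops.
    r = np.linalg.norm(X - center, axis=1)
    return np.exp(-(r / sigma) ** tau)

X = np.random.default_rng(1).normal(size=(5, 3))
c = np.zeros(3)
print(grbf(X, c, sigma=1.0, tau=2.0))  # standard Gaussian-like activations
print(grbf(X, c, sigma=1.0, tau=1.0))  # Laplacian-like activations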

4.
An RBF network identifier with an optimal-selection clustering algorithm and its application
Using an RBF neural network as the model framework, this work addresses the identification of nonlinear systems. For the structure identification of the RBF network, an optimal-selection clustering algorithm is proposed that determines the number of hidden-layer nodes from the input samples, and a new second-order recursive learning algorithm is adopted to estimate the parameters and weights of the RBF network. This hybrid algorithm solves the structure and parameter identification problems of the RBF network simultaneously and greatly improves its modeling and prediction accuracy. An application example demonstrates the effectiveness of the proposed scheme.

5.
As a novel clustering method, affinity propagation (AP) clustering can identify high-quality cluster centers by passing messages between data points, but its final cluster number is affected by a user-defined parameter called self-confidence. When a given number of clusters is required due to prior knowledge, AP has to be launched many times until an appropriate setting of self-confidence is found. The K-AP algorithm overcomes this disadvantage by introducing a constraint into the message-passing process so that exactly K clusters are obtained directly. The key to K-AP clustering is constructing a suitable similarity matrix that truly reflects the intrinsic structure of the dataset. In this paper, a density-adaptive similarity measure is designed to describe the relations between data points more reasonably. Meanwhile, to address the difficulties the K-AP algorithm faces on high-dimensional datasets, we use a dimension-reduction method based on spectral graph theory to map the original data points to a low-dimensional eigenspace, and we propose a density-adaptive AP clustering algorithm based on spectral dimension reduction. Experiments show that the proposed algorithm can effectively handle datasets with complex structure and multiple scales while avoiding the singularity problem caused by high-dimensional eigenvectors. Its clustering performance is better than that of the AP and K-AP algorithms.
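
The paper's density-adaptive similarity measure is not spelled out in this abstract, so the sketch below shows one common density-adaptive construction (local scaling by the k-th nearest-neighbour distance) followed by a spectral embedding, only to illustrate the kind of preprocessing described; the function names and the choice k=7 are assumptions.

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.manifold import SpectralEmbedding

def local_scaling_similarity(X, k=7):
    # Density-adaptive local scaling: each point's kernel bandwidth is its distance
    # to its k-th nearest neighbour, so sparse regions get wider kernels than dense
    # ones. The paper's own measure may differ in detail.
    D = cdist(X, X)
    sigma = np.sort(D, axis=1)[:, k]   # k-th nearest-neighbour distance per point
    S = np.exp(-(D ** 2) / (sigma[:, None] * sigma[None, :] + 1e-12))
    np.fill_diagonal(S, 0.0)
    return S

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))        # hypothetical high-dimensional data
S = local_scaling_similarity(X, k=7)
# Map to a low-dimensional eigenspace before K-AP-style clustering, as the paper suggests.
Y = SpectralEmbedding(n_components=3, affinity="precomputed").fit_transform(S)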

6.
Owing to its efficient training, the extreme learning machine (ELM) is widely used for classification and regression, yet its learning performance depends heavily on the choice of input weights. To further improve this performance, the input weights of ELM are studied: exploiting the sparsity of local receptive fields in images, the local-receptive-field idea is applied to the autoencoder-based ELM (ELM-AE), yielding a class-constrained extreme learning machine with local receptive fields (RF-C2ELM). Classification experiments on the MNIST dataset show that, with the same number of hidden nodes, the proposed method achieves higher classification accuracy.
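
As a hedged illustration of the local-receptive-field idea (not the paper's exact RF-C2ELM construction), the sketch below builds sparse random ELM input weights in which each hidden unit is connected to a single random image patch; the patch size, the Gaussian weight distribution and the omission of the ELM-AE learning step are all assumptions.

import numpy as np

def local_receptive_input_weights(img_shape, n_hidden, patch=7, seed=0):
    # Each hidden unit is connected only to one random local patch of the image:
    # its random input-weight vector is zero outside that patch (sparse, local).
    rng = np.random.default_rng(seed)
    H, W = img_shape
    weights = np.zeros((n_hidden, H * W))
    for j in range(n_hidden):
        r = rng.integers(0, H - patch + 1)
        c = rng.integers(0, W - patch + 1)
        mask = np.zeros((H, W))
        mask[r:r + patch, c:c + patch] = rng.normal(size=(patch, patch))
        weights[j] = mask.ravel()
    return weights

W_in = local_receptive_input_weights((28, 28), n_hidden=500)  # e.g. for MNIST images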

7.
Rough k-means clustering describes uncertainty by assigning some objects to more than one cluster, and the rough cluster quality index based on decision theory is applicable to the evaluation of rough clustering. In this paper we analyze rough k-means clustering with respect to the selection of the threshold, the value of the risk for assigning an object, and the uncertainty of objects. According to this analysis, clusters represented as interval sets with lower and upper approximations in rough k-means clustering are not adequate to describe clusters. This paper proposes an interval set clustering based on decision theory. Lower and upper approximations in the proposed algorithm are hierarchical and constructed as outer-level and inner-level approximations. Uncertainty of objects in the outer-level upper approximation is described by the assignment of objects among different clusters, while ambiguity of objects in the inner-level upper approximation is represented by local uniform factors of the objects. In addition, interval set clustering can be improved to obtain a satisfactory clustering result with the optimal number of clusters, as well as optimal parameter values, by taking advantage of the rough cluster quality index in the evaluation of clustering. The experimental results on synthetic and standard data demonstrate how to construct clusters with satisfactory lower and upper approximations in the proposed algorithm, and the experiments with promotional-campaign retail data illustrate the usefulness of interval set clustering for improving rough k-means clustering results.
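
For context, the baseline rough k-means assignment that the paper analyzes can be sketched as below, with zeta standing in for the threshold discussed above; the proposed hierarchical outer-level/inner-level approximations and local uniform factors are not reproduced here.

import numpy as np

def rough_assign(X, centers, zeta=1.3):
    # Rough-k-means-style assignment: an object enters the lower approximation of its
    # nearest cluster only when no other center is nearly as close; if several centers
    # fall within the ratio threshold zeta, the object joins only their upper
    # approximations, which is how the uncertainty of objects is expressed.
    D = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    nearest = D.argmin(axis=1)
    lower = [[] for _ in centers]
    upper = [[] for _ in centers]
    for i, dists in enumerate(D):
        close = np.where(dists <= zeta * dists[nearest[i]])[0]
        for k in close:
            upper[k].append(i)
        if len(close) == 1:
            lower[nearest[i]].append(i)
    return lower, upper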

8.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single-hidden-layer neural network with good generalization capabilities and extremely fast learning capacity. In ELM, the input weights are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for sparse data classification depends critically on three free parameters: the number of hidden neurons, the input weights and the bias values, which need to be chosen optimally. Selecting these parameters for the best performance of ELM involves a complex optimization problem. In this paper, we present a new real-coded genetic algorithm approach called ‘RCGA-ELM’ to select the optimal number of hidden neurons, input weights and bias values, which results in better performance. Two new genetic operators, a ‘network based operator’ and a ‘weight based operator’, are proposed to find a compact network with higher generalization performance. We also present an alternative and less computationally intensive approach called ‘sparse-ELM’, which searches for the best parameters of ELM using K-fold validation. A multi-class human cancer classification problem using microarray gene expression data (which is sparse) is used to evaluate the performance of the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.
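
A minimal sketch of the sparse-ELM idea of selecting ELM parameters by K-fold validation, restricted here to the number of hidden neurons; the candidate list, the sigmoid activation and the mean-squared-error criterion are assumptions, and the genetic-operator machinery of RCGA-ELM is not shown.

import numpy as np
from sklearn.model_selection import KFold

def elm_fit_predict(Xtr, Ttr, Xte, n_hidden, rng):
    # Plain ELM: random input weights and biases, sigmoid hidden layer,
    # least-squares output weights.
    W = rng.normal(size=(Xtr.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    Htr = 1.0 / (1.0 + np.exp(-(Xtr @ W + b)))
    beta, *_ = np.linalg.lstsq(Htr, Ttr, rcond=None)
    Hte = 1.0 / (1.0 + np.exp(-(Xte @ W + b)))
    return Hte @ beta

def select_hidden_neurons(X, T, candidates=(20, 50, 100, 200), n_splits=5):
    # K-fold search over the number of hidden neurons; sparse-ELM also searches the
    # input weights and biases, which is omitted here for brevity.
    rng = np.random.default_rng(0)
    cv_error = {}
    for n_hidden in candidates:
        errs = []
        for tr, te in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
            pred = elm_fit_predict(X[tr], T[tr], X[te], n_hidden, rng)
            errs.append(np.mean((pred - T[te]) ** 2))
        cv_error[n_hidden] = float(np.mean(errs))
    return min(cv_error, key=cv_error.get)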

9.
Huang et al. (2004) proposed an online sequential ELM (OS-ELM) that enables the extreme learning machine (ELM) to train on data one-by-one as well as chunk-by-chunk. OS-ELM is based on a recursive least-squares-type algorithm that uses a constant forgetting factor. In OS-ELM, the parameters of the hidden nodes are randomly selected and the output weights are determined based on the sequentially arriving data. However, OS-ELM with a constant forgetting factor cannot provide satisfactory performance in time-varying or nonstationary environments. Therefore, we propose an algorithm for OS-ELM with an adaptive forgetting factor that maintains good performance in such environments. The proposed algorithm has the following advantages: (1) the adaptive forgetting factor requires minimal additional complexity of O(N), where N is the number of hidden neurons, and (2) the algorithm with the adaptive forgetting factor is comparable to the conventional OS-ELM with an optimal forgetting factor.
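
A minimal sketch of a one-by-one OS-ELM update driven by a recursive least-squares rule with a forgetting factor lam; the paper's adaptive rule for choosing the forgetting factor is not reproduced, so lam is simply passed in, and the sigmoid hidden layer and initialization constants are assumptions.

import numpy as np

class OSELMForgetting:
    # One-by-one OS-ELM update with a (possibly time-varying) forgetting factor lam.
    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))  # fixed random input weights
        self.b = rng.normal(size=n_hidden)
        self.P = np.eye(n_hidden) * 1e4                  # inverse correlation matrix
        self.beta = np.zeros((n_hidden, n_outputs))      # output weights

    def _hidden(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.W + self.b)))

    def update(self, x, t, lam=0.98):
        # Recursive least-squares step: gain, output-weight correction, then P update
        # discounted by the forgetting factor lam.
        h = self._hidden(x)                              # shape (n_hidden,)
        Ph = self.P @ h
        gain = Ph / (lam + h @ Ph)
        self.beta += np.outer(gain, t - h @ self.beta)
        self.P = (self.P - np.outer(gain, Ph)) / lam

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Example: model = OSELMForgetting(n_inputs=4, n_hidden=30, n_outputs=1)
#          model.update(x_t, t_t, lam=0.95)   # called for each arriving sample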

10.
The density-based notion of clustering is used widely due to its easy implementation and its ability to detect arbitrarily shaped clusters in the presence of noisy data points without requiring prior knowledge of the number of clusters. Density-based spatial clustering of applications with noise (DBSCAN) is the first algorithm proposed in the literature that uses the density-based notion for cluster detection. Since most real datasets today contain feature spaces with adjacent nested clusters, DBSCAN is not suitable for detecting adjacent clusters of varying density because it uses two global density parameters: the neighborhood radius N_rad and the minimum number of points in a neighborhood N_pts. The effectiveness of DBSCAN therefore depends on these initial parameter settings: for DBSCAN to work properly, the neighborhood radius must be smaller than the distance between two clusters, otherwise the algorithm merges them and detects them as a single cluster. In this paper: 1) we propose an improved version of the DBSCAN algorithm that detects adjacent clusters of varying density using the concept of neighborhood difference within the density-based approach, without adding much computational complexity to the original DBSCAN algorithm; 2) we validate the experimental results using the space density indexing (SDI) internal cluster measure, recently proposed by one of the authors, to demonstrate the quality of the proposed clustering method. The experimental results also suggest that the proposed method is effective in detecting adjacent nested clusters of varying density.
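
The parameter sensitivity that motivates the paper can be reproduced with a few lines of sklearn; the sketch below only illustrates how a single global eps mishandles adjacent clusters of different density, and does not implement the proposed neighborhood-difference extension or the SDI measure.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Two adjacent blobs of very different density: a single global eps either fragments
# the sparse cluster or merges it with the dense one.
X_dense, _ = make_blobs(n_samples=300, centers=[[0, 0]], cluster_std=0.2, random_state=1)
X_sparse, _ = make_blobs(n_samples=300, centers=[[3, 0]], cluster_std=1.0, random_state=2)
X = np.vstack([X_dense, X_sparse])

for eps in (0.15, 0.5, 1.0):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"eps={eps}: {n_clusters} clusters, {np.sum(labels == -1)} noise points")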

11.
In this paper, we present a modified filtering algorithm (MFA) that uses center variations to speed up the clustering process. Our method first divides clusters into static and active groups. We use the information of cluster displacements to reject unlikely cluster centers for all nodes in the kd-tree, and we reduce the computational complexity of the filtering algorithm (FA) by finding candidates for each node mainly from the set of active cluster centers. Two conditions for determining the set of candidate cluster centers for each node from the active clusters are developed. Our approach differs from the major available algorithm, which passes no information from one stage of iteration to the next. Theoretical analysis shows that our method can reduce the computational complexity of FA, in terms of the number of distance calculations, at each stage of iteration by a factor of FC/AC, where FC and AC are the numbers of total clusters and active clusters, respectively. Compared with FA, our algorithm effectively reduces the computing time and the number of distance calculations. It is noted that the proposed algorithm generates the same clusters as those produced by hard k-means clustering. The superiority of our method is more remarkable when a larger dataset with higher dimension is used.

12.
In this paper, we present a fast global k-means clustering algorithm, referred to as MFGKM, that makes use of the cluster membership and geometrical information of a data point. The algorithm uses a set of inequalities developed in this paper to determine a starting point for the jth cluster center of global k-means clustering. Adopting multiple cluster center selection (MCS) for MFGKM, we also develop another clustering algorithm called MFGKM+MCS. MCS determines more than one starting point for each step of cluster splitting, whereas the available fast and modified global k-means clustering algorithms select one starting point for each cluster split. Our proposed MFGKM obtains the least distortion, while MFGKM+MCS may give the least computing time. Compared to the modified global k-means clustering algorithm, MFGKM reduces the computing time and the number of distance calculations by factors of 3.78-5.55 and 21.13-31.41, respectively, with an average distortion reduction of 5,487 for the Statlog data set. Compared to the fast global k-means clustering algorithm, MFGKM+MCS reduces the computing time by a factor of 5.78-8.70 with an average distortion reduction of 30,564 on the same data set. The performance of the proposed methods is more remarkable when a data set with higher dimension is divided into more clusters.

13.
Because the structure and initial data centers of a radial basis function (RBF) network are difficult to determine objectively, a binary-search density peaks clustering algorithm (TSDPCA) is used to find the data centers and the number of clusters, which serve as the initial parameters and hidden-layer node count of the RBF neural network; gradient descent then optimizes the RBFNN structure and parameters to build a forecasting model. The model is applied to monthly precipitation forecasting in Guangxi to verify its effectiveness. The results show that, compared with the K-RBFNN and OLS-RBFNN models, TSDPCA-RBFNN reduces the mean relative forecast error by 10%-35% and delivers better forecasting performance.
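
For reference, the two density-peaks quantities that underlie center selection can be computed as below; this is the classic formulation with a user-chosen cutoff dc, and the binary-search procedure that gives TSDPCA its name, as well as the subsequent gradient-descent RBFNN training, is not reproduced.

import numpy as np
from scipy.spatial.distance import cdist

def density_peaks(X, dc):
    # Classic density-peaks quantities: rho is the number of neighbours within the
    # cutoff dc, and delta is the distance to the nearest point of higher density.
    # Points with large rho * delta are candidate cluster centers.
    D = cdist(X, X)
    rho = (D < dc).sum(axis=1) - 1
    delta = np.zeros(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if len(higher) else D[i].max()
    return rho, delta

# The points with the largest rho * delta would become the RBF centers, and their
# count the hidden-layer size of the forecasting RBFNN.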

14.
This paper presents a fuzzy hybrid learning algorithm (FHLA) for the radial basis function neural network (RBFNN). The method determines the number of hidden neurons in the RBFNN structure by using cluster validity indices with a majority rule, while the characteristics of the hidden neurons are initialized based on advanced fuzzy clustering. The FHLA combines the gradient method and the linear least-squares method for adjusting the RBF parameters and the neural network connection weights. The RBFNN with the proposed FHLA is used as a classifier in a face recognition system, with feature vectors obtained by combining shape information and principal component analysis as inputs. The designed RBFNN with the proposed FHLA, while converging faster in the training phase, requires a hidden layer with fewer neurons and is less sensitive to the training and testing patterns. The efficiency of the proposed method is demonstrated on the ORL and Yale face databases, and comparison with other algorithms indicates that the FHLA yields an excellent recognition rate in human face recognition.

15.
In this paper, we propose the Dandelion Algorithm (DA), based on the behaviour of dandelion sowing. In DA, dandelion seeds are sown within a range determined by a dynamic radius, and the dandelion has a self-learning ability: it can select a number of excellent seeds to learn from. We compare the proposed algorithm with other existing algorithms, and simulations show that it is superior to them. Moreover, the proposed algorithm can be applied to optimise the extreme learning machine (ELM), yielding very good classification and prediction capability.

16.
APSCAN: A parameter-free algorithm for clustering
DBSCAN is a density-based clustering algorithm and its effectiveness for spatial datasets has been demonstrated in the existing literature. However, DBSCAN has two distinct drawbacks: (i) its clustering performance depends on two specified parameters, the maximum radius of a neighborhood and the minimum number of data points contained in such a neighborhood, which together define a single density; without enough prior knowledge, these two parameters are difficult to determine; (ii) with these two parameters defining a single density, DBSCAN does not perform well on datasets with varying densities. These two issues cause difficulties in applications. To address them in a systematic way, in this paper we propose a novel parameter-free clustering algorithm named APSCAN. First, we utilize the Affinity Propagation (AP) algorithm to detect local densities for a dataset and generate a normalized density list. Second, we combine the first pair of density parameters with every other pair of density parameters in the normalized density list as input parameters for a proposed DDBSCAN (Double-Density-Based SCAN) to produce a set of clustering results; in this way, we obtain different clustering results with varying density parameters derived from the normalized density list. Third, we develop an update rule for the results obtained by running DDBSCAN with different input parameters and synthesize these clustering results into a final result. The proposed APSCAN has two advantages: first, it does not need the two predefined parameters required by DBSCAN; second, it can not only cluster datasets with varying densities but also preserve the nonlinear data structure of such datasets.

17.

In this paper, a new method is proposed to identify a solid oxide fuel cell using an extreme learning machine–Hammerstein model (ELM–Hammerstein). The ELM–Hammerstein model consists of a static ELM neural network followed by a linear dynamic subsystem. First, the structure of the ELM–Hammerstein model is determined from input–output data by the Lipschitz quotient criterion. Then, a generalized ELM algorithm is proposed to estimate the parameters of the ELM–Hammerstein model, including the parameters of the linear dynamic part and the output weights of the ELM. The proposed method obtains accurate identification results with more efficient computation. Simulation results demonstrate its effectiveness.

18.
A novel scheme of neural network model reference adaptive control (NNMRAC) is proposed for arbitrarily complex nonlinear discrete-time systems, i.e., non-minimum-phase, time-delay and minimum-phase systems. An improved nearest-neighbor clustering algorithm using an optimization strategy is introduced as the on-line learning algorithm to regulate the parameters of the RBFNN, which simplifies the neural network structure and accelerates convergence. The clustering radius is regulated automatically to guarantee a reasonable radius. By constructing a pseudo-plant, the direct NNMRAC is also effective for nonlinear non-minimum-phase systems. Simulations show that the control strategy based on direct RBFNN model reference adaptive control not only makes multi-dimensional nonlinear plants track multi-dimensional reference signals quickly, but also endows the control systems with satisfactory robustness.

19.

Composite beams (CBs) consist of concrete slabs joined to steel sections by shear connectors and are highly popular in modern structures such as high-rise buildings and bridges. This study investigates the structural behavior of simply supported CBs in which a concrete slab is joined to a steel beam by headed stud shear connectors. Determining the behavior of a CB through empirical study is costly and can also lead to inaccurate results. AI models and metaheuristic algorithms, such as the genetic algorithm, differential evolution, the firefly algorithm and the cuckoo search algorithm, can be used effectively to solve such difficult optimization problems. This research uses a hybrid extreme learning machine (ELM)–grey wolf optimizer (GWO) to determine the general behavior of the CB. Two models (ELM and GWO) and a hybrid algorithm (GWO–ELM) were developed and the results were compared through the regression parameters of the coefficient of determination (R2) and the root mean square error (RMSE). In the testing phase, GWO (RMSE 2.5057, R2 1.2510), ELM (RMSE 4.52, R2 1.927) and GWO–ELM (RMSE 0.9340, R2 0.9504) demonstrated that the hybrid GWO–ELM performs better than the standalone ELM and GWO models. GWO–ELM determined the general behavior of the CB faster, more accurately and with the lowest error percentages, so the GWO–ELM hybrid is a more reliable model than ELM and GWO in this study.
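
The two regression metrics quoted above can be computed with the standard definitions sketched below (under which R2 cannot exceed 1, so the abstract's R2 values above 1 presumably follow a different convention); y_true and y_pred are hypothetical arrays of measured and predicted CB responses.

import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - residual sum of squares / total sum of squares.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)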


20.
Two new genetic-algorithm-based training algorithms for radial basis function neural networks (GA-Based RBFNN) are introduced, both of which use a genetic algorithm to optimize the cluster centers and network structure of the RBFNN. The first GA-Based RBFNN algorithm encodes all training samples as binary individuals to optimize the selection of RBF centers and the network structure; in the second, the RBFNN trains the hidden-layer centers with a self-growing algorithm and encodes the distance factor ε as a decimal chromosome to optimize the network. Both GA-Based RBFNN algorithms were applied to the spectral resolution of the simultaneous determination of Fe, Mn, Cu and Zn. The results show that, compared with the usual combination of a genetic algorithm and an RBF artificial neural network (GA-RBFNN), in which the GA selects variables and the RBFNN then resolves the data, the proposed GA-Based RBFNN algorithms offer clear advantages in enhancing the generalization ability of the network and improving prediction accuracy. Of the two GA-Based RBFNN algorithms, the second outperforms the first.
