Similar Documents
20 similar documents found.
1.
Advances in Selective Clustering Ensemble Research
Traditional clustering ensemble methods typically combine all generated ensemble members to obtain the final clustering result. In supervised learning, selective classifier ensembles achieve better results; inspired by this, applying the same idea to clustering ensembles is known as selective clustering ensemble. This paper surveys the key techniques of selective clustering ensembles and discusses future research directions.

2.
Du Zhenglin, Li Yun. 《计算机应用》 (Journal of Computer Applications), 2017, 37(3): 866-870
For the new application scenario where both historical data and streaming features are present, an online feature selection algorithm based on group feature selection and streaming features is proposed. In the group feature selection stage on the historical data, clustering ensemble ideas are introduced to compensate for the weaknesses of any single clustering algorithm: k-means is run multiple times to produce a collection of clusterings, which is then combined by hierarchical clustering in the integration stage to obtain the final result. In the online feature selection stage on the streaming features, the feature groups produced by group construction are updated by examining correlations between features, and a feature subset is finally obtained through group transformation. Experimental results show that the proposed algorithm effectively handles online feature selection in this new scenario and achieves good classification performance.
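The base-clustering-plus-consensus stage described above can be sketched in a few lines. This is an illustrative sketch, not the authors' code: plain 2-D points stand in for the feature vectors being grouped, and the k-means implementation, seeds, and the 0.5 merge threshold are assumed parameters.

```python
import numpy as np

def kmeans(X, k, iters=30, seed=0):
    """Plain k-means; returns a label vector."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def coassociation(labelings):
    """Fraction of runs in which each pair of points shares a cluster."""
    labelings = np.asarray(labelings)
    return (labelings[:, :, None] == labelings[:, None, :]).mean(0)

def consensus_groups(C, threshold=0.5):
    """Single-link merge: connected components of the thresholded graph."""
    n = len(C)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if C[i, j] >= threshold:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    remap = {r: g for g, r in enumerate(dict.fromkeys(roots))}
    return [remap[r] for r in roots]

# Two well-separated groups; several k-means runs, then consensus.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
runs = [kmeans(X, 2, seed=s) for s in range(5)]
groups = consensus_groups(coassociation(runs))
```

A single-link merge over the co-association matrix plays the role of the hierarchical integration step; in the paper the same machinery is applied to the features (the transposed data matrix) to form feature groups.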

3.
Singh V  Mukherjee L  Peng J  Xu J 《Machine Learning》2010,79(1-2):177-200
In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0-1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain.
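The notion of ensemble agreement can be made concrete with a simple pair-counting score (a Rand-style measure). This is only the objective being maximized, not the paper's method: the 2D string encoding and the SDP relaxation are beyond a short sketch, and the toy partitions are invented for the demo.

```python
import numpy as np
from itertools import combinations

def pair_agreement(a, b):
    """Fraction of point pairs on which two partitions agree
    (both together or both apart) -- the Rand index."""
    a, b = np.asarray(a), np.asarray(b)
    agree = total = 0
    for i, j in combinations(range(len(a)), 2):
        agree += (a[i] == a[j]) == (b[i] == b[j])
        total += 1
    return agree / total

def ensemble_agreement(candidate, ensemble):
    """Mean pairwise agreement of a candidate consensus with all members."""
    return sum(pair_agreement(candidate, m) for m in ensemble) / len(ensemble)

# Toy ensemble of three partitions of four objects.
members = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]
good = [0, 0, 1, 1]   # matches the ensemble well
bad  = [0, 1, 0, 1]   # contradicts most members
```

A consensus algorithm would search for the `candidate` maximizing `ensemble_agreement`; here `good` unsurprisingly scores higher than `bad`.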

4.
This paper describes a novel feature selection algorithm for unsupervised clustering that combines the clustering ensemble method with the population-based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset achieves the clustering solution most similar to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is first obtained by a clustering ensemble method; then the population-based incremental learning algorithm is adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed unsupervised feature selection algorithm is that it is dimensionality-unbiased. In addition, it leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed algorithm often obtains a better feature subset than other existing unsupervised feature selection algorithms.
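The population-based incremental learning (PBIL) search can be sketched as follows. The toy fitness below simply rewards matching a known-good feature mask; it is a hypothetical stand-in for the paper's fitness, which compares the clustering obtained on a candidate subset with the ensemble consensus. Population size, iteration count, and learning rate are assumed values.

```python
import numpy as np

def pbil(fitness, n_bits, pop=30, iters=100, lr=0.1, seed=0):
    """Population-based incremental learning over bit strings:
    sample a population from a probability vector, then nudge the
    vector toward the best sample found in each generation."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)          # per-bit selection probabilities
    for _ in range(iters):
        samples = (rng.random((pop, n_bits)) < p).astype(int)
        best = samples[np.argmax([fitness(s) for s in samples])]
        p = (1 - lr) * p + lr * best  # move the distribution toward the best
    return (p > 0.5).astype(int)

# Hypothetical fitness: negative Hamming distance to a known "good" mask.
target = np.array([1, 0, 1, 1, 0, 0, 1, 0])
mask = pbil(lambda s: -np.abs(s - target).sum(), len(target))
```

In the paper's setting, `fitness` would re-cluster the data on the candidate subset and score its agreement with the consensus partition.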

5.
Multi-view clustering has become an important extension of ensemble clustering. In multi-view clustering, we apply clustering algorithms to different views of the data to obtain different cluster labels for the same set of objects. These results are then combined in such a manner that the final clustering gives a better result than the individual clustering of each view. Multi-view clustering can be applied at various stages of the clustering paradigm. This paper proposes a novel multi-view clustering algorithm that combines different ensemble techniques. Our approach computes different similarity matrices on the individual views and aggregates these into a combined similarity matrix, which is then used to obtain the final clustering. We tested our approach on several datasets and compared it with other state-of-the-art algorithms. Our results show that the proposed algorithm outperforms several other methods in terms of accuracy while maintaining the overall complexity of the individual approaches.
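The combine-similarities idea can be sketched as follows. The RBF similarity, the 0.5 threshold, and the connected-components step are illustrative stand-ins, under assumed parameters, for whatever per-view similarity and final clustering the paper actually uses.

```python
import numpy as np

def rbf_similarity(X, gamma=1.0):
    """Gaussian similarity matrix for one view."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_clusters(views, threshold=0.5):
    """Average the per-view similarities, then read clusters off the
    thresholded graph as connected components."""
    S = sum(rbf_similarity(v) for v in views) / len(views)
    n = len(S)
    labels = [-1] * n
    cur = 0
    for i in range(n):
        if labels[i] == -1:           # start a new component
            stack, labels[i] = [i], cur
            while stack:
                u = stack.pop()
                for v in range(n):
                    if labels[v] == -1 and S[u, v] >= threshold:
                        labels[v] = cur
                        stack.append(v)
            cur += 1
    return labels

# Two views of the same six objects; both views separate them 3 / 3.
view1 = np.array([[0.0], [0.1], [0.2], [4.0], [4.1], [4.2]])
view2 = np.array([[1.0], [1.1], [0.9], [9.0], [9.1], [8.9]])
labels = combined_clusters([view1, view2])
```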

6.
In recent years, ensemble learning has become a prolific area of study in pattern recognition, based on the assumption that using and combining different learning models on the same problem can lead to better performance than using a single model. This idea of ensemble learning has traditionally been used for classification tasks, but has more recently been adapted to other machine learning tasks such as clustering and feature selection. We propose several feature selection ensemble configurations based on combining rankings of features from individual rankers according to the combination method and threshold value used. The performance of each proposed ensemble configuration was tested on synthetic datasets (to assess the adequacy of the selection), real classical datasets (with more samples than features), and DNA microarray datasets (with more features than samples). Five different classifiers were studied in order to test the suitability of the proposed ensemble configurations and assess the results.
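A minimal sketch of one such configuration, combining rankings by mean rank position and cutting at the top-k features; the feature names and rankings are hypothetical, and mean rank is only one of the combination methods the paper studies.

```python
def aggregate_rankings(rankings, top_k):
    """Combine per-ranker feature orderings (best first) by mean rank
    position and keep the top_k features."""
    features = rankings[0]
    mean_rank = {f: sum(r.index(f) for r in rankings) / len(rankings)
                 for f in features}
    ordered = sorted(features, key=lambda f: mean_rank[f])
    return ordered[:top_k]

# Three hypothetical rankers ordering five features, best first.
r1 = ["f3", "f1", "f4", "f2", "f0"]
r2 = ["f1", "f3", "f4", "f0", "f2"]
r3 = ["f3", "f4", "f1", "f2", "f0"]
selected = aggregate_rankings([r1, r2, r3], top_k=2)
```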

7.
Today, feature selection is an active research area in machine learning. The main idea of feature selection is to choose a subset of the available features by eliminating features with little or no predictive information, as well as redundant features that are strongly correlated. There are many approaches to feature selection, but most of them can only work with crisp data, and so far few approaches can directly handle both crisp and low-quality (imprecise and uncertain) data. We therefore propose a new feature selection method that can handle both crisp and low-quality data. The proposed approach is based on a Fuzzy Random Forest and integrates filter and wrapper methods into a sequential search procedure with improved classification accuracy of the selected features. It consists of the following main steps: (1) scaling and discretization of the feature set, and feature pre-selection using the discretization process (filter); (2) ranking of the pre-selected features using the Fuzzy Decision Trees of a Fuzzy Random Forest ensemble; and (3) wrapper feature selection using a Fuzzy Random Forest ensemble based on cross-validation. The efficiency and effectiveness of this approach are demonstrated through several experiments using both high-dimensional and low-quality datasets. The approach performs well (not only in classification accuracy, but also with respect to the number of features selected) and behaves well both with high-dimensional datasets (microarray datasets) and with low-quality datasets.

8.
An Ensemble Classification Method for Microarray Data
To address the limited classification accuracy of existing ensemble classification methods for microarray data, a Bagging-PCA-SVM method is proposed. The method first applies the Bootstrap technique to repeatedly resample the training set, forming a large number of training subsets; feature selection and principal component analysis are then performed on each subset to remove noisy and redundant genes; finally, support vector machines serve as the classifiers, and majority voting predicts the class of each sample. Tests on three datasets demonstrate the effectiveness and feasibility of the method.
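The Bagging-PCA-vote pipeline can be sketched as follows. To keep the sketch dependency-free, a nearest-centroid model stands in for the SVM base classifier, and the toy data, seeds, and number of components are assumptions, not values from the paper.

```python
import numpy as np

def pca_fit(X, n_comp):
    """Principal components via SVD of the centered data."""
    mu = X.mean(0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_comp].T

def fit_bagged(X, y, n_models=11, n_comp=2, seed=0):
    """Bootstrap -> PCA -> nearest-centroid base classifier
    (nearest-centroid is a stand-in for the paper's SVM)."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=len(X), replace=True)  # bootstrap
        Xb, yb = X[idx], y[idx]
        mu, W = pca_fit(Xb, n_comp)
        Z = (Xb - mu) @ W
        cents = {c: Z[yb == c].mean(0) for c in np.unique(yb)}
        models.append((mu, W, cents))
    return models

def predict_bagged(models, X):
    """Majority vote over the base classifiers."""
    votes = []
    for mu, W, cents in models:
        Z = (X - mu) @ W
        classes = list(cents)
        d = np.stack([((Z - cents[c]) ** 2).sum(1) for c in classes])
        votes.append([classes[i] for i in d.argmin(0)])
    votes = np.array(votes)
    return [np.bincount(votes[:, i]).argmax() for i in range(votes.shape[1])]

# Two well-separated toy classes in place of microarray samples.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(3, 0.3, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
models = fit_bagged(X, y)
preds = predict_bagged(models, X)
```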

9.
Selective ensemble is one of the current research hotspots in machine learning. Because selective ensemble is NP-hard, heuristic methods are usually used to transform it into other problems and obtain approximate optima; since these algorithms differ in starting point and formulation, the large body of existing selective ensemble algorithms appears scattered and unsystematic. To help researchers quickly grasp and apply the latest progress in this area, this paper classifies selective ensemble algorithms into four categories according to the core strategy of the selection process: iterative optimization methods, ranking methods, clustering-based methods, and pattern mining methods. Using 20 common datasets from the UCI repository, typical algorithms are then compared experimentally in terms of predictive performance, selection time, and final ensemble size. Finally, the strengths and weaknesses of each category are summarized and future research priorities for selective ensembles are outlined.

10.
Cluster ensemble first generates a large library of different clustering solutions and then combines them into a more accurate consensus clustering. It is commonly accepted that, for a cluster ensemble to work well, the member partitions should differ from each other while the quality of each partition remains at an acceptable level. Many different strategies have been used to generate base partitions for cluster ensembles. As in ensemble classification, many studies have focused on generating different partitions of the original dataset, i.e., clustering different subsets (e.g., obtained by random sampling) or clustering in different feature spaces (e.g., obtained by random projection). However, little attention has been paid to the diversity and quality of the partitions generated by these two approaches. In this paper, we propose a novel cluster generation method based on random sampling, which uses the nearest neighbor method to fill in the cluster labels of the unsampled points (abbreviated as RS-NN). We evaluate its performance in comparison with k-means ensemble, a typical random projection method (Random Feature Subset, abbreviated as FS), and another random sampling method (Random Sampling based on Nearest Centroid, abbreviated as RS-NC). Experimental results indicate that the FS method always generates more diverse partitions while the RS-NC method generates high-quality partitions. Our proposed method, RS-NN, generates base partitions with a good balance between quality and diversity and achieves significant improvement over the alternative methods. Furthermore, to introduce more diversity, we propose a dual random sampling method that combines the RS-NN and FS methods and achieves higher diversity with good quality on most datasets.
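The RS-NN generation step, clustering a random sample and then filling in the labels of unsampled points from their nearest sampled neighbour, can be sketched as follows. The base clusterer here is a trivial split-at-the-mean stand-in for whatever base clustering algorithm is used, and the sampling fraction is an assumed parameter.

```python
import numpy as np

def rs_nn_partition(X, base_cluster, frac=0.7, seed=0):
    """Random-sampling base partition with nearest-neighbour label fill:
    cluster a random subset, then give every unsampled point the label
    of its nearest sampled point (the RS-NN idea)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    m = max(2, int(frac * n))
    idx = rng.choice(n, size=m, replace=False)   # the random sample
    labels = np.full(n, -1)
    labels[idx] = base_cluster(X[idx])           # cluster the sample only
    for i in range(n):
        if labels[i] == -1:                      # unsampled: copy nearest
            d = ((X[idx] - X[i]) ** 2).sum(1)
            labels[i] = labels[idx[d.argmin()]]
    return labels

# Hypothetical base clusterer for the demo: split 1-D points at their mean.
split_at_mean = lambda S: (S[:, 0] > S[:, 0].mean()).astype(int)

X = np.array([[0.0], [0.2], [0.4], [9.0], [9.2], [9.4]])
labels = rs_nn_partition(X, split_at_mean)
```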

11.
To address the lack of prior information such as class labels in unsupervised clustering, the dependence of base clustering accuracy on the underlying clustering algorithm, and the high space complexity of general clustering ensemble algorithms, a clustering ensemble algorithm based on an improved genetic algorithm (CEIGA) is proposed; and since traditional clustering ensemble algorithms can no longer meet the time requirements of large-scale data processing, a parallel clustering ensemble algorithm based on the improved genetic algorithm (PCEIGA), running on the Hadoop platform in a cloud computing setting, is also proposed. First, the base clusterings produced by the base-clustering generation mechanism are, after cluster-label alignment, gene-encoded as the initial population of the genetic algorithm. Second, the selection operator of the genetic algorithm is improved to guarantee the diversity of the base clusterings; chromosomes then undergo crossover and mutation according to the improved selection operator, and an elitist strategy yields the next generation, guaranteeing the accuracy of the base clusterings. Iterating this loop drives the final clustering ensemble result toward the global optimum and improves accuracy. Two MapReduce stages are designed, with a Combine phase added to reduce node communication and improve running efficiency. Finally, CEIGA and PCEIGA are compared with four state-of-the-art clustering ensemble algorithms on UCI datasets. The experimental results show that CEIGA performs best among them, and that PCEIGA significantly reduces running time and improves efficiency without degrading the accuracy of the clustering results.

12.
An ensemble of clustering solutions or partitions may be generated for a number of reasons. If the data set is very large, clustering may be done on disjoint subsets of tractable size. The data may be distributed across different sites, for which a distributed clustering solution with a final merging of partitions is a natural fit. In this paper, two new approaches to combining partitions, represented by sets of cluster centers, are introduced. The advantage of these approaches is that they provide a final partition of the data comparable to the best existing approaches, yet scale to extremely large data sets; they can be 100,000 times faster while using much less memory. The new algorithms are compared against the best existing cluster-ensemble merging approaches, against clustering all the data at once, and against a clustering algorithm designed for very large data sets. The comparison is done for fuzzy and hard k-means based clustering algorithms. It is shown that the centroid-based ensemble merging algorithms presented here generate partitions of quality comparable to the best label-vector approach or to clustering all the data at once, while providing very large speedups.
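A centroid-based merge can be sketched as follows. Greedy pairwise merging of the pooled centers is a simple stand-in for whichever merging procedure the paper actually evaluates, and the center sets are invented for the demo.

```python
import numpy as np

def merge_centers(center_sets, k):
    """Pool the cluster centers from several partitions and greedily
    merge the two closest centers (weighted average) until k remain."""
    pts = [np.asarray(c, float) for cs in center_sets for c in cs]
    wts = [1.0] * len(pts)
    while len(pts) > k:
        best, bi, bj = None, 0, 0
        for i in range(len(pts)):           # find the closest pair
            for j in range(i + 1, len(pts)):
                d = ((pts[i] - pts[j]) ** 2).sum()
                if best is None or d < best:
                    best, bi, bj = d, i, j
        w = wts[bi] + wts[bj]               # weighted merge
        pts[bi] = (wts[bi] * pts[bi] + wts[bj] * pts[bj]) / w
        wts[bi] = w
        del pts[bj], wts[bj]
    return np.array(pts)

def assign(X, centers):
    """Label each point by its nearest merged center."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

# Centers reported by two partitions of the same data (e.g. two subsets).
sets = [[[0.0, 0.0], [5.0, 5.0]], [[0.2, 0.1], [5.1, 4.9]]]
centers = merge_centers(sets, k=2)
X = np.array([[0.1, 0.1], [4.9, 5.0]])
labels = assign(X, centers)
```

Only the centers, never the raw points, are exchanged and merged, which is what makes the approach scale to very large or distributed data.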

13.
Zheng Jianwei, Li Zhuorong, Wang Wanliang, Chen Wanjun. 《软件学报》 (Journal of Software), 2019, 30(12): 3846-3861
In the era of information explosion, big data processing has become one of the hot research directions at home and abroad. Spectral-analysis algorithms are widely used for their distinctive performance, but, affected by the curse of dimensionality, handling high-dimensional data remains a major challenge for mainstream spectral methods. This paper proposes a clustering model that combines dimension-wise feature selection with a graph Laplacian constraint: a data clustering algorithm with joint Laplacian regularization and adaptive feature learning (LRAFL). The graph Laplacian is learned from adaptive nearest neighbors, and low-dimensional embedding, feature selection, and subspace clustering are unified in a single framework, replacing the two-stage procedure of traditional spectral clustering (graph Laplacian construction followed by spectral analysis). By adding a nonnegative sum-to-one constraint and a low-rank constraint, LRAFL obtains a sparse feature-weight vector and a Laplacian matrix with block-diagonal structure. In addition, an effective optimization method for the model parameters is proposed, with theoretical analyses of convergence, complexity, and the setting of the balance parameter. Experimental results on synthetic data and several public datasets show that LRAFL outperforms existing clustering algorithms in effectiveness, efficiency, and ease of implementation.

14.
Feature selection has always been a critical step in pattern recognition, in which evolutionary algorithms, such as the genetic algorithm (GA), are most commonly used. However, the individual encoding schemes used in various GAs either bias the solution or require a pre-specified number of features, and hence may lead to less accurate results. In this paper, a tribe competition-based genetic algorithm (TCbGA) is proposed for feature selection in pattern classification. The population of individuals is divided into multiple tribes, and the initialization and evolutionary operations are modified to ensure that the number of selected features in each tribe follows a Gaussian distribution, so that each tribe focuses on exploring a specific part of the solution space. Meanwhile, tribe competition is introduced into the evolution process: winning tribes, which produce better individuals, may enlarge their sizes, i.e., gain more individuals to search their parts of the solution space. This algorithm therefore avoids both the bias on solutions and the requirement of a pre-specified number of features. We evaluated our algorithm against several state-of-the-art feature selection approaches on 20 benchmark datasets. Our results suggest that the proposed TCbGA algorithm identifies the optimal feature subset more effectively and produces more accurate pattern classification.
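The tribe initialization idea, in which each tribe's individuals select a Gaussian-distributed number of features around a tribe-specific mean, can be sketched as follows; the tribe means, tribe size, and standard deviation are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def init_tribes(n_features, tribe_means, tribe_size=5, sd=1.0, seed=0):
    """Initialise one population per tribe so that the number of selected
    features in tribe t is drawn from a Gaussian centred at tribe_means[t]."""
    rng = np.random.default_rng(seed)
    tribes = []
    for mean in tribe_means:
        tribe = []
        for _ in range(tribe_size):
            # Gaussian-distributed subset size, clipped to a valid range.
            k = int(np.clip(np.rint(rng.normal(mean, sd)), 1, n_features))
            bits = np.zeros(n_features, dtype=int)
            bits[rng.choice(n_features, size=k, replace=False)] = 1
            tribe.append(bits)
        tribes.append(tribe)
    return tribes

# Three tribes over 20 features, centred on small / medium / large subsets.
tribes = init_tribes(20, tribe_means=[3, 8, 15])
```

Each tribe thus concentrates on a different region of subset sizes, which is what lets the overall search avoid fixing the number of features in advance.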

15.
To address the sensitivity of constraint-score-based feature selection to the composition and cardinality of the pairwise constraints, a dynamic ensemble selection algorithm based on bagging constraint score (BCS-DES) is proposed. The algorithm introduces the bagging constraint score (BCS) into dynamic ensemble selection: the sample space is partitioned into different regions, and a multi-population parallel genetic algorithm selects a locally optimal classifier ensemble for each test sample, thereby improving classification accuracy. Experiments on UCI datasets show that BCS-DES is less affected by the composition and cardinality of the pairwise constraints than existing feature selection algorithms, and performs better.

16.
A Classification Ensemble Algorithm Based on Local Random Subspaces
Classifier ensemble learning is one of the hotspots in current machine learning research. However, with the classical fully random sampling approach it is hard to guarantee the performance of the base classifiers on high-dimensional data. This paper therefore proposes a classification ensemble algorithm based on local random subspaces. The algorithm first applies feature selection to obtain an effective feature ranking, then divides the ranking into several segments and samples features randomly from each segment according to per-segment sampling proportions, thereby improving both the performance and the diversity of the base classifiers. Experiments on 5 UCI datasets and 5 gene datasets show that the method outperforms single classifiers and, in most cases, classical ensemble methods.
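The segmented sampling step can be sketched as follows; the segment count and per-segment proportions are illustrative assumptions, not values from the paper, and the feature names are hypothetical.

```python
import random

def local_random_subspace(ranked_features, n_segments, proportions, seed=0):
    """Split a ranked feature list (best first) into equal segments and
    sample from each segment with its own proportion, so top-ranked
    features are sampled more heavily while some diversity is kept."""
    rng = random.Random(seed)
    seg_len = len(ranked_features) // n_segments
    chosen = []
    for s in range(n_segments):
        seg = ranked_features[s * seg_len:(s + 1) * seg_len]
        k = max(1, round(proportions[s] * len(seg)))
        chosen.extend(rng.sample(seg, k))
    return chosen

# Hypothetical ranking of 12 features, best first; sample 75% of the top
# segment, 50% of the middle, 25% of the tail.
ranked = [f"f{i}" for i in range(12)]
subset = local_random_subspace(ranked, 3, [0.75, 0.5, 0.25])
```

Repeating this sampling with different seeds would yield one feature subspace per base classifier in the ensemble.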

17.
A neural network ensemble based on rough set reducts is proposed to decrease the computational complexity of conventional ensemble feature selection algorithms. First, a dynamic reduction technique combining a genetic algorithm with a resampling method is adopted to obtain reducts with good generalization ability. Second, multiple BP neural networks based on different reducts are built as base classifiers. Following the idea of selective ensembles, the neural network ensemble with the best generalization ability is then found by search strategies. Finally, classification is implemented by combining the predictions of the component networks by voting. The method has been verified in experiments on remote sensing image classification and five UCI datasets. Compared with conventional ensemble feature selection algorithms, it costs less time, has lower computational complexity, and achieves satisfactory classification accuracy.

18.
This paper presents a novel wrapper feature selection algorithm for classification problems, namely the hybrid genetic algorithm (GA)- and extreme learning machine (ELM)-based feature selection algorithm (HGEFS). It uses GA to wrap ELM and search for optimal subsets in the huge feature space, and then a set of subsets is selected to form an ensemble that improves the final prediction accuracy. To prevent GA from being trapped in local optima, we propose a novel and efficient mechanism, specifically designed for feature selection problems, to maintain GA's diversity. To measure each subset's quality fairly and efficiently, we adopt a modified ELM called the error-minimized extreme learning machine (EM-ELM), which automatically determines an appropriate network architecture for each feature subset. Moreover, EM-ELM has good generalization ability and extreme learning speed, which allows us to perform wrapper feature selection in an affordable time; in other words, we simultaneously optimize the feature subset and the classifiers' parameters. After the GA search finishes, to further improve the prediction accuracy and obtain a stable result, we select a set of EM-ELMs from the final population to form the ensemble according to a specific ranking and selection strategy. To verify the performance of HGEFS, empirical comparisons between HGEFS and other feature selection methods are carried out on benchmark datasets. The results reveal that HGEFS is a useful method for feature selection problems and consistently outperforms the other algorithms in the comparison.

19.
Nowadays, microarray gene expression data plays a vital role in tumor classification. However, because only a limited number of tissue samples are available compared to the large number of genes in genomic data, various existing methods have failed to identify a small subset of discriminative genes. To overcome this limitation, in this paper we develop a new hybrid technique for gene selection, called the ensemble multipopulation adaptive genetic algorithm (EMPAGA), that can discard irrelevant genes and classify cancer accurately. The proposed hybrid gene selection algorithm comprises two phases. In the first phase, an ensemble gene selection (EGS) method is used to filter the noisy and redundant genes in high-dimensional datasets by combining multilayer and F-score approaches. Then, an adaptive genetic algorithm based on a multipopulation strategy, with support vector machine and naïve Bayes (NB) classifiers as the fitness function, is applied to select the most relevant genes from the reduced datasets. The performance of the proposed method is evaluated on 10 microarray datasets of various tumors. The comprehensive results and comparisons reveal that EGS has a remarkable impact on the efficacy of the adaptive genetic algorithm with the multipopulation strategy and enhances the proposed approach in terms of convergence rate and solution quality. The experimental results demonstrate the superiority of the proposed method over other standard wrappers regarding classification accuracy and the optimal number of genes.

20.
In classification, feature selection is an important data pre-processing technique, but it is a difficult problem due mainly to the large search space. Particle swarm optimisation (PSO) is an efficient evolutionary computation technique. However, the traditional personal best and global best updating mechanism in PSO limits its performance for feature selection, and the potential of PSO for feature selection has not been fully investigated. This paper proposes three new initialisation strategies and three new personal best and global best updating mechanisms in PSO to develop novel feature selection approaches with the goals of maximising the classification performance, minimising the number of features and reducing the computational time. The proposed initialisation strategies and updating mechanisms are compared with the traditional initialisation and the traditional updating mechanism. Meanwhile, the most promising initialisation strategy and updating mechanism are combined to form a new approach (PSO(4-2)) to address feature selection problems, and it is compared with two traditional feature selection methods and two PSO-based methods. Experiments on twenty benchmark datasets show that PSO with the new initialisation strategies and/or the new updating mechanisms can automatically evolve a feature subset with a smaller number of features and higher classification performance than using all features. PSO(4-2) outperforms the two traditional methods and the two PSO-based algorithms in terms of computational time, the number of features and classification performance.
The superior performance of this algorithm is due mainly to the proposed initialisation strategy, which takes advantage of both forward selection and backward selection to decrease the number of features and the computational time, and to the new updating mechanism, which overcomes the limitations of traditional updating mechanisms by taking the number of features into account, reducing both the number of features and the computational time.
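A minimal binary PSO for feature selection can be sketched as follows, with sigmoid-transformed velocities as bit-flip probabilities. This is the standard binary-PSO scheme, not the paper's PSO(4-2) initialisation or updating variants, and the fitness is a hypothetical stand-in that rewards two "informative" bits while penalizing subset size; inertia and acceleration coefficients are assumed values.

```python
import numpy as np

def binary_pso(fitness, n_bits, n_particles=15, iters=60, seed=0):
    """Binary PSO: real-valued velocities pushed through a sigmoid give
    per-bit selection probabilities; personal and global bests guide
    the swarm."""
    rng = np.random.default_rng(seed)
    pos = (rng.random((n_particles, n_bits)) < 0.5).astype(int)
    vel = np.zeros((n_particles, n_bits))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    g = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))          # sigmoid transfer
        pos = (rng.random(vel.shape) < prob).astype(int)
        fits = np.array([fitness(p) for p in pos])
        improved = fits > pbest_fit                # update personal bests
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fits[improved]
        g = pbest[pbest_fit.argmax()].copy()       # update global best
    return g

# Hypothetical fitness: bits 0 and 3 are "informative" and must be kept;
# every extra selected feature costs one point.
def fitness(bits):
    return 10 * (bits[0] + bits[3]) - bits.sum()

best = binary_pso(fitness, n_bits=8)
```

The paper's contribution replaces the random initialisation and the greedy pbest/gbest updates above with strategies that also account for subset size.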
