Similar Documents
20 similar documents found.
1.
徐涛  王祁 《控制与决策》2007,22(7):783-786
To meet the robustness requirements of pattern-recognition-based fault diagnosis, a fault diagnosis method combining supervised and unsupervised pattern classification is proposed on the basis of feature vectors extracted by wavelet packet decomposition. The energy of each frequency band obtained by wavelet packet decomposition is used as the feature vector; an LVQ neural network serves as the supervised pattern classifier for fault diagnosis; and unsupervised subtractive clustering is applied to identify previously unseen fault patterns. Finally, the algorithm is tested on flow sensor data from power-system pipelines, verifying the practicality and effectiveness of the proposed method.
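A minimal sketch of the band-energy feature extraction step described above, assuming the pywt library for the wavelet packet transform; the wavelet, decomposition depth, and synthetic signal are illustrative choices, not the authors' settings.

```python
import numpy as np
import pywt

def wavelet_packet_band_energies(signal, wavelet="db4", level=3):
    """Decompose a 1-D signal into 2**level frequency bands and
    return the normalized energy of each band as a feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")           # leaf nodes, low to high frequency
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies / energies.sum()                    # normalize so the features sum to 1

# Illustrative usage on a synthetic vibration-like signal
t = np.linspace(0, 1, 1024)
sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
features = wavelet_packet_band_energies(sig)
print(features.shape)   # (8,) -> one energy value per frequency band
```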

2.
This paper describes a novel feature selection algorithm for unsupervised clustering that combines the clustering ensembles method and the population-based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset can achieve the most similar clustering solution to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is first obtained by a clustering ensembles method; then the population-based incremental learning algorithm is adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed unsupervised feature selection algorithm is that it is dimensionality-unbiased. In addition, it leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed unsupervised feature selection algorithm is often able to obtain a better feature subset when compared with other existing unsupervised feature selection algorithms.
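A minimal sketch of this idea, assuming scikit-learn for clustering and the adjusted Rand index as the agreement measure against a reference (ensemble) clustering; the reference labels, learning rate, and population size are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def pbil_feature_selection(X, reference_labels, n_clusters=3,
                           pop_size=20, iters=50, lr=0.1, seed=None):
    """Population-based incremental learning over binary feature masks.
    Fitness of a mask = agreement between k-means on the selected
    features and a reference clustering (e.g., from an ensemble)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    prob = np.full(n_features, 0.5)              # probability of selecting each feature
    best_mask, best_fit = None, -np.inf
    for _ in range(iters):
        masks = rng.random((pop_size, n_features)) < prob
        masks[~masks.any(axis=1), 0] = True      # avoid empty feature subsets
        for mask in masks:
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X[:, mask])
            fit = adjusted_rand_score(reference_labels, labels)
            if fit > best_fit:
                best_fit, best_mask = fit, mask.copy()
        prob = (1 - lr) * prob + lr * best_mask  # shift probabilities toward the best mask
    return best_mask, best_fit
```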

3.
An Unsupervised Feature Selection Method Based on K-Means Clustering
Feature selection is one of the first problems a pattern recognition method must address. Many existing methods deal with feature selection for supervised learning, while feature selection for unsupervised learning has received little attention. Based on two aspects, the influence of each feature on the clustering result and the correlation analysis among features, a feature selection algorithm based on K-means clustering is proposed for the unsupervised feature selection problem.
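A minimal sketch of one way to realize the two criteria above, assuming scikit-learn; scoring each feature by how well the K-means partition separates it (via an F-statistic against the cluster labels) and then pruning highly correlated features are illustrative choices, not the authors' exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import f_classif

def kmeans_feature_selection(X, n_clusters=3, n_keep=10, corr_thresh=0.9):
    """Rank features by how strongly they discriminate the k-means partition,
    then drop features highly correlated with an already kept one."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    scores, _ = f_classif(X, labels)               # influence on the clustering result
    order = np.argsort(scores)[::-1]
    corr = np.abs(np.corrcoef(X, rowvar=False))    # feature-feature correlation
    kept = []
    for j in order:
        if all(corr[j, k] < corr_thresh for k in kept):
            kept.append(j)
        if len(kept) == n_keep:
            break
    return kept
```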

4.
Because feature selection in an unsupervised setting cannot rely on class information, a disagreement measure (DAM) based on fuzzy rough set theory is proposed to quantify the difference in the fuzzy equivalence classes induced by any two feature sets or individual features. On this basis, the DAMUFS unsupervised feature selection algorithm is implemented; it selects feature subsets that carry more information under unsupervised conditions while keeping the redundancy among the selected attributes as small as possible. Experiments compare the classification performance of DAMUFS with several unsupervised and supervised feature selection algorithms on multiple datasets, and the results demonstrate its effectiveness.

5.
6.
Wu  Yue  Wang  Can  Zhang  Yue-qing  Bu  Jia-jun 《浙江大学学报:C卷英文版》2019,20(4):538-553

Feature selection has attracted a great deal of interest over the past decades. By selecting meaningful feature subsets, the performance of learning algorithms can be effectively improved. Because label information is expensive to obtain, unsupervised feature selection methods are more widely used than the supervised ones. The key to unsupervised feature selection is to find features that effectively reflect the underlying data distribution. However, due to the inevitable redundancies and noise in a dataset, the intrinsic data distribution is not best revealed when using all features. To address this issue, we propose a novel unsupervised feature selection algorithm via joint local learning and group sparse regression (JLLGSR). JLLGSR incorporates local learning based clustering with group sparsity regularized regression in a single formulation, and seeks features that respect both the manifold structure and group sparse structure in the data space. An iterative optimization method is developed in which the weights finally converge on the important features and the selected features are able to improve the clustering results. Experiments on multiple real-world datasets (images, voices, and web pages) demonstrate the effectiveness of JLLGSR.
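As an illustration of what a group-sparsity-regularized regression of this kind typically looks like (a generic reconstruction, not the exact JLLGSR objective), one common form is

```latex
\min_{W}\; \lVert X^{\top} W - F \rVert_F^{2} \;+\; \lambda \sum_{i=1}^{d} \lVert w^{i} \rVert_2
```

where X is the d-by-n data matrix, F is an n-by-c target matrix obtained from the local-learning (clustering) step, and w^i is the i-th row of W. The row-wise L2,1 penalty drives whole rows of W toward zero, so features can be ranked by the norm of their corresponding row; the notation here is assumed for illustration.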


7.
Reducing the dimensionality of the data has been a challenging task in data mining and machine learning applications. In these applications, the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of different learning algorithms. Feature selection is one of the dimension reduction techniques, which has been used to allow a better understanding of data and improve the performance of other learning tasks. Although the selection of relevant features has been extensively studied in supervised learning, feature selection in the absence of class labels is still a challenging task. This paper proposes a novel method for unsupervised feature selection, which efficiently selects features in a greedy manner. The paper first defines an effective criterion for unsupervised feature selection that measures the reconstruction error of the data matrix based on the selected subset of features. The paper then presents a novel algorithm for greedily minimizing the reconstruction error based on the features selected so far. The greedy algorithm is based on an efficient recursive formula for calculating the reconstruction error. Experiments on real data sets demonstrate the effectiveness of the proposed algorithm in comparison with the state-of-the-art methods for unsupervised feature selection.
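A rough sketch of greedy selection under a reconstruction-error criterion of this kind, assuming NumPy; it recomputes a least-squares reconstruction from scratch at each step instead of using the paper's recursive update formula, so it illustrates the criterion rather than the efficient algorithm.

```python
import numpy as np

def reconstruction_error(X, selected):
    """Frobenius-norm error of reconstructing all columns of X from the selected ones."""
    S = X[:, selected]
    coeffs, *_ = np.linalg.lstsq(S, X, rcond=None)   # least-squares fit: X ~= S @ coeffs
    return np.linalg.norm(X - S @ coeffs)

def greedy_unsupervised_selection(X, n_select):
    """Greedily add the feature whose inclusion most reduces the reconstruction error."""
    selected = []
    for _ in range(n_select):
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        best = min(candidates, key=lambda j: reconstruction_error(X, selected + [j]))
        selected.append(best)
    return selected
```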

8.
Feature selection (FS) is one of the most important fields in pattern recognition, which aims to pick a subset of relevant and informative features from an original feature set. There are two kinds of FS algorithms depending on the presence of information about dataset class labels: supervised and unsupervised algorithms. Supervised approaches utilize the class labels of the dataset in the process of feature selection. On the other hand, unsupervised algorithms act in the absence of class labels, which makes their task more difficult. In this paper, we propose unsupervised probabilistic feature selection using ant colony optimization (UPFS). The algorithm looks for the optimal feature subset in an iterative process. In this algorithm, we utilize inter-feature information, which shows the similarity between features and leads the algorithm to decreased redundancy in the final set. In each step of the ACO algorithm, to select the next potential feature, we calculate the amount of redundancy between the current feature and all those selected thus far. In addition, we utilize a matrix to hold the ant-related pheromone, which shows the rate of co-presence of every pair of features in solutions. Afterwards, features are ranked based on a probability function extracted from the matrix, and the top m features are returned as the final solution. We compare the performance of UPFS with 15 well-known supervised and unsupervised feature selection methods using different classifiers (support vector machine, naive Bayes, and k-nearest neighbor) on 10 well-known datasets. The experimental results show the efficiency of the proposed method compared to the previous related methods.

9.
A Feature Selection Algorithm for Text Clustering Based on Class Information
Text clustering is an unsupervised learning method; because class information is unavailable, supervised feature selection methods are difficult to apply directly. A class-information-based feature selection algorithm is therefore proposed: on the clustering result of a density-based clustering algorithm, information-gain feature selection is used to re-select the features with the strongest discriminative power. Experiments verify the feasibility and effectiveness of the algorithm.

10.
A Feature-Weighted Clustering Algorithm Framework
高滢  刘大有  徐益 《计算机科学》2008,35(10):152-154
To account for the different contributions of each feature dimension to clustering, and to bring supervised feature evaluation methods into unsupervised classification, a feature-weighted clustering framework is proposed. The framework first clusters the data with some clustering algorithm; then, based on the clustering result, a supervised feature evaluation method learns a weight for each feature dimension; the data are re-clustered with the new feature weights and the weights are learned again, iterating until the algorithm converges or a specified number of iterations is reached. Both distance-based and density-based clustering algorithms in Euclidean space fit the framework. Based on this framework, experiments on several UCI datasets were carried out with fuzzy C-means (FCM) and DBSCAN as the clustering algorithms and with information gain and ReliefF as the feature evaluation methods, verifying the effectiveness of the framework.
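A minimal sketch of such an iterate-and-reweight loop, assuming scikit-learn, with K-means standing in for the clustering step and mutual information standing in for the supervised feature evaluator; both stand-ins and the weight normalization are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif

def feature_weighted_clustering(X, n_clusters=3, n_iter=10):
    """Alternate between clustering the weighted data and re-learning
    feature weights from the current cluster labels."""
    weights = np.ones(X.shape[1])
    labels = None
    for _ in range(n_iter):
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X * weights)
        scores = mutual_info_classif(X, labels)       # supervised evaluation w.r.t. cluster labels
        if scores.sum() == 0:                         # degenerate case: keep previous weights
            break
        weights = scores / scores.sum() * X.shape[1]  # normalize so weights average to 1
    return labels, weights
```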

11.
Attribute reduction is an effective technique for coping with the "curse of dimensionality". Fractal Dimensionality Reduction (FDR) is an unsupervised attribute selection technique that has appeared in recent years; unfortunately, it requires multiple scans of the dataset and therefore handles high-dimensional datasets poorly. Genetic-algorithm-based attribute reduction outperforms traditional attribute selection techniques on high-dimensional data, but cannot be applied in the unsupervised setting. Combining the inherent stochastic parallel search of genetic algorithms with the unsupervised nature of fractal attribute selection, the Genetic Algorithm Based Unsupervised Feature Subset Selection algorithm (GABUFSS) is designed and implemented. Comparative experiments on synthetic and real datasets analyze the performance of GABUFSS and FDR; the results show that GABUFSS outperforms FDR and is able to discover attribute subsets that yield equivalent results.

12.
Feature selection, both for supervised as well as for unsupervised classification, is a relevant problem pursued by researchers for decades. There are multiple benchmark algorithms based on filter, wrapper, and hybrid methods. These algorithms adopt different techniques, varying from traditional search-based techniques to more advanced nature-inspired techniques. In this paper, a hybrid feature selection algorithm using a graph-based technique is proposed. The proposed algorithm uses the concept of a Feature Association Map (FAM) as its underlying foundation. It uses the graph-theoretic principles of minimal vertex cover and maximal independent set to derive the feature subset. The algorithm applies to both supervised and unsupervised classification. Its performance has been compared with several benchmark supervised and unsupervised feature selection algorithms and found to be better. The proposed algorithm is also less computationally expensive and hence takes less execution time on the publicly available datasets used in the experiments, which include high-dimensional datasets.
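A small sketch of the graph-theoretic idea referred to above, assuming NumPy: features are nodes, edges connect feature pairs whose absolute correlation exceeds a threshold, and a greedy maximal independent set keeps one representative from each group of mutually associated features. The threshold and greedy order are illustrative assumptions, not the paper's FAM construction.

```python
import numpy as np

def greedy_independent_feature_set(X, corr_thresh=0.8):
    """Build a feature-association graph from pairwise correlations and
    return a greedy maximal independent set of feature indices."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    n = corr.shape[0]
    adjacency = (corr > corr_thresh) & ~np.eye(n, dtype=bool)  # edge = strongly associated pair
    selected, excluded = [], set()
    for j in range(n):                                         # greedy pass in index order
        if j in excluded:
            continue
        selected.append(j)
        excluded.update(np.flatnonzero(adjacency[j]))          # neighbors can no longer be selected
    return selected
```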

13.
In multi-label learning, dimensionality reduction is an important and challenging task, and feature selection is an efficient dimensionality reduction technique. A label-specific feature selection method for multi-label data is proposed on the basis of neighborhood rough set theory; the method theoretically guarantees that the obtained label-specific features are strongly correlated with the corresponding labels, which improves the reduction. First, the method applies a rough-set reduction algorithm to remove redundant attributes, obtaining label-specific features while keeping the classification ability unchanged. Then, building on the concepts of neighborhood accuracy and neighborhood roughness, the computation of dependency and significance based on neighborhood rough sets is redefined, and the relevant properties of the model are discussed. Finally, a neighborhood-rough-set-based label-specific feature selection model for multi-label data is constructed, and a feature selection algorithm for multi-label classification tasks is implemented. Simulation experiments on several public datasets show that the algorithm is effective.

14.
Simultaneous feature selection and clustering using mixture models
Clustering is a common unsupervised learning technique used to discover group structure in a set of data. While there exist many algorithms for clustering, the important issue of feature selection, that is, what attributes of the data should be used by the clustering algorithms, is rarely touched upon. Feature selection for clustering is difficult because, unlike in supervised learning, there are no class labels for the data and, thus, no obvious criteria to guide the search. Another important problem in clustering is the determination of the number of clusters, which clearly impacts and is influenced by the feature selection issue. In this paper, we propose the concept of feature saliency and introduce an expectation-maximization (EM) algorithm to estimate it, in the context of mixture-based clustering. Due to the introduction of a minimum message length model selection criterion, the saliency of irrelevant features is driven toward zero, which corresponds to performing feature selection. The criterion and algorithm are then extended to simultaneously estimate the feature saliencies and the number of clusters.
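One common way to write such a feature-saliency mixture (a reconstruction of the general form, with notation assumed here rather than taken verbatim from the paper): each feature j has a saliency rho_j, and the density is

```latex
p(\mathbf{x}) \;=\; \sum_{k=1}^{K} \alpha_k \prod_{j=1}^{D}
\Big[\, \rho_j \, p(x_j \mid \theta_{jk}) \;+\; (1-\rho_j)\, q(x_j \mid \lambda_j) \,\Big]
```

where p(x_j | theta_jk) is the cluster-specific density of feature j and q(x_j | lambda_j) is a common density independent of the cluster label. EM estimates the mixing weights, the component parameters, and the saliencies; the minimum message length criterion pushes the saliency of irrelevant features toward zero, which amounts to discarding them.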

15.
This paper describes a general fuzzy min-max (GFMM) neural network, which is a generalization and extension of the fuzzy min-max clustering and classification algorithms of Simpson (1992, 1993). The GFMM method combines supervised and unsupervised learning in a single training algorithm. The fusion of clustering and classification results in an algorithm that can be used for pure clustering, pure classification, or hybrid clustering-classification. It exhibits the property of finding decision boundaries between classes while clustering patterns that cannot be said to belong to any existing class. As in the original algorithms, hyperbox fuzzy sets are used to represent clusters and classes. Learning is usually completed in a few passes and consists of placing and adjusting hyperboxes in the pattern space; this is an expansion-contraction process. The classification results can be crisp or fuzzy. New data can be included without retraining. While retaining all the interesting features of the original algorithms, a number of modifications to their definition have been made in order to accommodate fuzzy input patterns in the form of lower and upper bounds, combine supervised and unsupervised learning, and improve the effectiveness of operations. A detailed account of the GFMM neural network, its comparison with Simpson's fuzzy min-max neural networks, a set of examples, and an application to leakage detection and identification in water distribution systems are given.

16.
A K-Means Clustering Learning Algorithm Based on Multiple Instances
谢红薇  李晓亮 《计算机工程》2009,35(22):179-181
Multiple-instance learning is another machine learning framework alongside supervised learning, unsupervised learning, and reinforcement learning. Combining multiple-instance learning with unsupervised learning, the MIK-means algorithm is proposed on the basis of the traditional unsupervised K-means clustering algorithm; it uses a mixed Hausdorff distance as the similarity measure to cluster the data. Experiments show that the method effectively reveals the intrinsic structure of multiple-instance datasets and achieves better clustering than K-means.
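A minimal sketch of a bag-to-bag Hausdorff distance of the kind mentioned above, assuming NumPy and Euclidean distances between instances; the exact "mixed" variant used in the paper may combine the directed distances differently, so this is only illustrative.

```python
import numpy as np

def directed_hausdorff(A, B):
    """Max over instances in bag A of the distance to the nearest instance in bag B."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return dists.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two bags (arrays of shape (n_i, d))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Illustrative usage: two small bags of 2-D instances
bag1 = np.array([[0.0, 0.0], [1.0, 1.0]])
bag2 = np.array([[0.1, 0.0], [2.0, 2.0]])
print(hausdorff(bag1, bag2))
```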

17.
Feature subset selection is basically an optimization problem of choosing the most important features from various alternatives in order to facilitate classification or mining problems. Although many algorithms have been developed so far, none is considered the best for all situations, and researchers are still trying to come up with better solutions. In this work, a flexible and user-guided feature subset selection algorithm, named FCTFS (Feature Cluster Taxonomy based Feature Selection), is proposed for selecting a suitable feature subset from a large feature set. The proposed algorithm falls under the genre of clustering-based feature selection techniques, in which features are initially clustered according to their intrinsic characteristics following the filter approach. In the second step, the most suitable feature is selected from each cluster to form the final subset following a wrapper approach. The two-stage hybrid process lowers the computational cost of subset selection, especially for large feature sets. One of the main novelties of the proposed approach lies in the process of determining the optimal number of feature clusters. Unlike currently available methods, which mostly employ a trial-and-error approach, the proposed method characterises and quantifies the feature clusters according to the quality of the features inside the clusters and defines a taxonomy of the feature clusters. The selection of individual features from a feature cluster can be done judiciously, considering both relevancy and redundancy according to the user's intention and requirements. The algorithm has been verified by simulation experiments with different benchmark data sets containing from 10 to more than 800 features and compared with other currently used feature selection algorithms. The simulation results prove the superiority of our proposal in terms of model performance, flexibility of use in practical problems, and extendibility to large feature sets. Though the current proposal is verified in the domain of unsupervised classification, it can easily be used in the case of supervised classification.

18.
Because the aging of lead-acid batteries is influenced by many factors, and battery aging experiments are limited by full charge-discharge time and sample size, selecting a representative feature set from small samples is especially important for battery state-of-health (SOH) prediction. Therefore, based on an analysis of battery characteristics, a battery SOH feature selection algorithm combining unsupervised ACCA-FCM with supervised SVM-RFE is proposed. The algorithm uses an improved ant colony clustering algorithm (ACCA) to select effective cluster centers of feature values from the global feature set, overcoming the sensitivity of fuzzy C-means (FCM) to initial cluster centers and its tendency toward local optima, and removes redundant features according to the correlation between features. The SVM-RFE feature ranking algorithm then eliminates non-critical (low-predictive) interfering features, finally yielding a low-dimensional feature subset with maximum relevance to the prediction target and minimum redundancy, while avoiding the full-discharge process without loss of accuracy. Validation with a support vector machine (SVM) based battery SOH prediction model shows that the optimal feature subset composed of early-discharge features can accurately predict the state of health of lead-acid batteries.
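A minimal sketch of the SVM-RFE ranking step mentioned above, assuming scikit-learn; the linear kernel, elimination step size, number of features kept, and the random data are illustrative defaults, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

def svm_rfe_rank(X, y, n_keep=5):
    """Recursive feature elimination with a linear SVM:
    repeatedly drop the feature with the smallest weight magnitude."""
    selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=n_keep, step=1)
    selector.fit(X, y)
    return np.flatnonzero(selector.support_), selector.ranking_

# Illustrative usage with random data and binary labels
X = np.random.randn(60, 12)
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)
kept, ranking = svm_rfe_rank(X, y, n_keep=4)
print(kept)      # indices of retained features
print(ranking)   # 1 = kept, larger numbers were eliminated earlier
```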

19.
In the pattern recognition field, objects are usually represented by multiple features (multimodal features). For example, to characterize a natural scene image, it is essential to extract a set of visual features representing its color, texture, and shape information. However, integrating multimodal features for recognition is challenging because: (1) each feature has its specific statistical property and physical interpretation, (2) a huge number of features may result in the curse of dimensionality (when the data dimension is high, the distances between pairs of objects in the feature space become increasingly similar due to the central limit theorem; this phenomenon negatively affects recognition performance), and (3) some features may be unavailable. To solve these problems, a new multimodal feature selection algorithm, termed Grassmann manifold feature selection (GMFS), is proposed. In particular, by defining a clustering criterion, the multimodal features are transformed into a matrix and further treated as a point on the Grassmann manifold, following Hamm and Lee (Grassmann discriminant analysis: a unifying view on subspace-based learning. In: Proceedings of the 25th international conference on machine learning (ICML), pp. 376-383, Helsinki, Finland, 2008). To deal with unavailable features, the L2-Hausdorff distance, a metric between different-sized matrices, is computed and the kernel is obtained accordingly. Based on the kernel, we propose supervised/unsupervised feature selection algorithms to achieve a physically meaningful embedding of the multimodal features. Experimental results on eight data sets validate the effectiveness of the proposed approach.

20.
Fault diagnosis is crucial to improving the reliability and performance of machinery. Effective feature extraction and clustering analysis can mine useful information from large amounts of raw data and facilitate fault diagnosis. This paper presents a novel intelligent fault diagnosis method based on ant colony clustering analysis. Vibration signals acquired from equipment are decomposed by the wavelet packet transform, after which the sub-band signals are clustered by an ant colony algorithm; each cluster, as a set of data, is analyzed from the perspective of frequency-band patterns to select intrinsic features reflecting the operating condition of the equipment, and a fault diagnosis model is thus established that combines the extracted major features with fault prototypes given by historical data. Based on the established model, the classification process for fault diagnosis is carried out using the Euclidean nearness degree. Furthermore, an improved ant colony clustering algorithm is proposed that adjusts the comparison probability dynamically and detects outliers. Compared with other clustering algorithms, the improved algorithm converges faster, meeting the requirements of real-time analysis, and further improves accuracy. Finally, the effectiveness and feasibility of the proposed method are verified with vibration signals acquired from a rotor test bed.

