Similar Articles
20 similar articles found (search time: 15 ms)
1.
Feature selection, for both supervised and unsupervised classification, is a relevant problem that researchers have pursued for decades. There are multiple benchmark algorithms based on filter, wrapper, and hybrid methods, adopting techniques that range from traditional search-based approaches to more advanced nature-inspired algorithms. In this paper, a hybrid feature selection algorithm using a graph-based technique is proposed. The algorithm builds on the concept of a Feature Association Map (FAM) and uses the graph-theoretic notions of minimal vertex cover and maximal independent set to derive the feature subset. It applies to both supervised and unsupervised classification. The performance of the proposed algorithm has been compared with several benchmark supervised and unsupervised feature selection algorithms and found to be better. The proposed algorithm is also less computationally expensive, and hence took less execution time on the publicly available datasets used in the experiments, which include high-dimensional datasets.
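The abstract gives no implementation details; as an illustration of the maximal-independent-set idea, the sketch below treats features as vertices of an association graph (a stand-in for the paper's Feature Association Map, with a made-up toy graph) and greedily keeps a maximal independent set of mutually non-associated features as the selected subset.

```python
def select_by_independent_set(adjacency):
    """Greedy maximal independent set over a feature-association graph.

    adjacency: dict mapping feature name -> set of associated (redundant)
    feature names. Returns a set of mutually non-associated features.
    """
    selected = set()
    # Visit low-degree vertices first so weakly associated features survive.
    for feat in sorted(adjacency, key=lambda f: len(adjacency[f])):
        if adjacency[feat].isdisjoint(selected):
            selected.add(feat)
    return selected

# Toy feature-association map: f1/f2 redundant, f2/f3 redundant,
# f4 associated with nothing.
fam = {
    "f1": {"f2"},
    "f2": {"f1", "f3"},
    "f3": {"f2"},
    "f4": set(),
}
subset = select_by_independent_set(fam)
```

Any two selected features are non-adjacent, so pairwise-redundant features are never kept together.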

2.
3.
Feature selection is an important preprocessing step for dealing with high-dimensional data. In this paper, we propose a novel unsupervised feature selection method by embedding a subspace learning regularization (i.e., principal component analysis (PCA)) into a sparse feature selection framework. Specifically, we select informative features via the sparse learning framework while simultaneously preserving the principal components (i.e., the maximal variance) of the data, thereby improving the interpretability of the feature selection model. Furthermore, we propose an effective optimization algorithm for the proposed objective function that achieves a stable optimal result with fast convergence. Compared with five state-of-the-art unsupervised feature selection methods on six benchmark and real-world datasets, our method achieved the best classification performance.

4.
Reducing the dimensionality of the data has been a challenging task in data mining and machine learning applications. In these applications, the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of different learning algorithms. Feature selection is one of the dimension reduction techniques, which has been used to allow a better understanding of data and improve the performance of other learning tasks. Although the selection of relevant features has been extensively studied in supervised learning, feature selection in the absence of class labels is still a challenging task. This paper proposes a novel method for unsupervised feature selection, which efficiently selects features in a greedy manner. The paper first defines an effective criterion for unsupervised feature selection that measures the reconstruction error of the data matrix based on the selected subset of features. The paper then presents a novel algorithm for greedily minimizing the reconstruction error based on the features selected so far. The greedy algorithm is based on an efficient recursive formula for calculating the reconstruction error. Experiments on real data sets demonstrate the effectiveness of the proposed algorithm in comparison with the state-of-the-art methods for unsupervised feature selection.
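The paper's recursive reconstruction-error formula is not reproduced in the abstract; the following is a minimal sketch of the greedy idea under simplifying assumptions: each pick maximizes a toy "explained energy" criterion, and the remaining feature columns are deflated against the picked column (a recursive residual update in the same spirit). The data and feature names are illustrative.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def greedy_select(features, k):
    """Greedily pick k feature columns that best reconstruct the rest.

    features: dict name -> column (list of floats). After each pick the
    remaining columns are deflated: the component along the picked column
    is removed, so later picks only account for residual structure.
    """
    feats = {name: list(col) for name, col in features.items()}
    selected = []
    for _ in range(k):
        # Energy of all columns explained by candidate s: sum_f (s.f)^2 / (s.s)
        best = max(
            feats,
            key=lambda n: sum(dot(feats[n], f) ** 2 for f in feats.values())
            / dot(feats[n], feats[n]),
        )
        s = feats.pop(best)
        selected.append(best)
        ss = dot(s, s)
        for name, f in feats.items():  # deflate residual columns
            c = dot(f, s) / ss
            feats[name] = [a - c * b for a, b in zip(f, s)]
    return selected

data = {
    "x1": [1.0, 0.0, 0.0],
    "x2": [1.0, 0.1, 0.0],   # nearly redundant with x1
    "x3": [0.0, 1.0, 1.0],
}
picked = greedy_select(data, 2)
```

With these toy columns the redundant pair x1/x2 contributes only one pick; the second pick covers the remaining direction.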

5.
Qian Youcheng, Yin Xueyan, Gao Wei. Multimedia Tools and Applications, 2019, 78(23): 33593-33615
Multimedia Tools and Applications - Feature selection aims to select the optimal feature subset, which can reduce time complexity, save storage space, and improve the performance of various tasks...

6.
Exploratory data analysis methods are essential for getting insight into data. Identifying the most important variables and detecting quasi-homogenous groups of data are problems of interest in this context. Solving such problems is a difficult task, mainly due to the unsupervised nature of the underlying learning process. Unsupervised feature selection and unsupervised clustering can be successfully approached as optimization problems by means of global optimization heuristics if an appropriate objective function is considered. This paper introduces an objective function capable of efficiently guiding the search for significant features and simultaneously for the respective optimal partitions. Experiments conducted on complex synthetic data suggest that the function we propose is unbiased with respect to both the number of clusters and the number of features.

7.
This paper proposes a novel unsupervised feature selection method that combines self-representation and subspace learning. In this method, we adopt the idea of self-representation and use all the features to represent each feature. A Frobenius-norm regularization is used for feature selection, since it can overcome the over-fitting problem. Locality Preserving Projection (LPP) is used as a regularization term, as it maintains the local adjacency relations between data points during feature space transformation. Further, a low-rank constraint is introduced to find the effective low-dimensional structures of the data, which reduces redundancy. Experimental results on real-world datasets verify that the proposed method selects the most discriminative features and outperforms state-of-the-art unsupervised feature selection methods in terms of classification accuracy, standard deviation, and coefficient of variation.
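As a rough illustration of the self-representation idea only (not the paper's full model: the LPP term and the low-rank constraint are omitted), the sketch below fits X ≈ XW with a Frobenius-norm penalty by plain gradient descent and scores each feature by the norm of its row in W. All data and hyperparameters are made up.

```python
def matmul(A, B):
    cols = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in A]

def fit_self_representation(X, lam=0.1, lr=0.01, steps=200):
    """Fit W so that X ~= X W (every feature represented by all features)
    by gradient descent on ||XW - X||_F^2 + lam * ||W||_F^2."""
    n, d = len(X), len(X[0])
    W = [[0.0] * d for _ in range(d)]
    Xt = [list(col) for col in zip(*X)]  # X^T, d x n
    for _ in range(steps):
        R = matmul(X, W)
        R = [[R[i][j] - X[i][j] for j in range(d)] for i in range(n)]  # XW - X
        G = matmul(Xt, R)  # gradient core: X^T (XW - X)
        W = [[W[i][j] - lr * (2 * G[i][j] + 2 * lam * W[i][j]) for j in range(d)]
             for i in range(d)]
    return W

X = [[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 2.0]]
W = fit_self_representation(X)
# Rank features by the norm of their row in W: how much each feature
# contributes to representing the others.
scores = [sum(w * w for w in row) ** 0.5 for row in W]
```

Features with large row norms are the ones the others lean on, which is the usual selection signal in self-representation models.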

8.
Toward unsupervised correlation preserving discretization
Discretization is a crucial preprocessing technique used for a variety of data warehousing and mining tasks. In this paper, we present a novel PCA-based unsupervised algorithm for the discretization of continuous attributes in multivariate data sets. The algorithm leverages the underlying correlation structure in the data set to obtain the discrete intervals and ensures that the inherent correlations are preserved. Previous efforts on this problem are largely supervised and consider only piecewise correlation among attributes. We consider the correlation among continuous attributes and, at the same time, also take into account the interactions between continuous and categorical attributes. Our approach also extends easily to data sets containing missing values. We demonstrate the efficacy of the approach on real data sets and as a preprocessing step for both classification and frequent itemset mining tasks. We show that the intervals are meaningful and can uncover hidden patterns in data. We also show that large compression factors can be obtained on the discretized data sets. The approach is task independent, i.e., the same discretized data set can be used for different data mining tasks. Thus, the data sets can be discretized, compressed, and stored once and can be used again and again.

9.
To reveal the global structure of the data while preserving its local structure, this paper unifies feature self-representation and graph regularization in a single framework and presents a new unsupervised feature selection (UFS) model and method. The model uses feature self-representation, expressing each feature as a linear combination of the remaining features, to preserve the local structure of the features; an $L_{2,1}$-norm-based graph regularization term preserves the local geometric structure of the data while reducing the influence of noisy data on feature selection; in addition, a low-rank constraint is imposed on the weight matrix to preserve the global structure of the data. Experiments on six public datasets show that the proposed algorithm clearly outperforms the five competing algorithms, demonstrating the effectiveness of the proposed UFS framework.

10.
Dimensionality reduction is an important and challenging task in machine learning and data mining. Feature selection and feature extraction are two commonly used techniques for decreasing the dimensionality of the data and increasing the efficiency of learning algorithms. In particular, feature selection in the absence of class labels, namely unsupervised feature selection, is challenging and interesting. In this paper, we propose a new unsupervised feature selection criterion developed from the viewpoint of subspace learning, treated as a matrix factorization problem. The advantages of this work are four-fold. First, building on matrix factorization, a unified framework is established for feature selection, feature extraction, and clustering. Second, an iterative update algorithm is provided via matrix factorization, which is an efficient way to handle high-dimensional data. Third, an effective method for feature selection with numeric data is put forward that does not rely on a discretization process. Fourth, the new criterion provides a sound foundation for embedding kernel tricks into feature selection; in this regard, an algorithm based on kernel methods is also proposed. The algorithms are compared with four state-of-the-art feature selection methods on six publicly available datasets. Experimental results demonstrate that, in terms of clustering results, the two proposed algorithms outperform the others on almost all of the datasets tested.

11.
Unsupervised feature selection is an important problem, especially for high-dimensional data. Until now, however, it has been scarcely studied, and the existing algorithms cannot provide satisfying performance. Thus, in this paper, we propose a new unsupervised feature selection algorithm using similarity-based feature clustering, Feature Selection-based Feature Clustering (FSFC). FSFC removes redundant features according to the results of feature clustering based on feature similarity. First, it clusters the features according to their similarity, using a new feature clustering algorithm that overcomes the shortcomings of K-means. Second, it selects a representative feature from each cluster, the one that captures most of the information carried by the features in that cluster. The efficiency and effectiveness of FSFC are tested on real-world data sets and compared with two representative unsupervised feature selection algorithms, Feature Selection Using Similarity (FSUS) and Multi-Cluster-based Feature Selection (MCFS), in terms of runtime, feature compression ratio, and the clustering results of K-means. The results show that FSFC not only reduces the feature space in less time but also significantly improves the clustering performance of K-means.
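FSFC's actual clustering algorithm is not specified in the abstract; the sketch below substitutes a simple single-link grouping by absolute Pearson correlation (via union-find), then picks as each cluster's representative the feature most similar on average to its cluster mates. The threshold and data are illustrative assumptions.

```python
def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    du = [a - mu for a in u]
    dv = [b - mv for b in v]
    den = (sum(a * a for a in du) * sum(b * b for b in dv)) ** 0.5
    return sum(a * b for a, b in zip(du, dv)) / den if den else 0.0

def cluster_and_represent(features, threshold=0.9):
    """Group features whose |correlation| >= threshold (single link),
    then keep one representative per cluster."""
    names = list(features)
    parent = {n: n for n in names}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(pearson(features[a], features[b])) >= threshold:
                parent[find(a)] = find(b)
    clusters = {}
    for n in names:
        clusters.setdefault(find(n), []).append(n)
    # Representative: feature most similar, on average, to its cluster mates.
    return sorted(
        max(members, key=lambda m: sum(abs(pearson(features[m], features[o]))
                                       for o in members if o != m))
        for members in clusters.values()
    )

feats = {
    "f1": [1.0, 2.0, 3.0, 4.0],
    "f2": [2.0, 4.0, 6.0, 8.1],   # nearly a rescaled copy of f1
    "f3": [1.0, 3.0, 2.0, 4.0],
}
reps = cluster_and_represent(feats)
```

The redundant pair f1/f2 collapses to a single representative, while the weakly correlated f3 survives on its own.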

12.
Unsupervised feature selection is an indispensable preprocessing step in data mining and pattern recognition tasks, where unlabeled high-dimensional data are ubiquitous. However, most existing feature selection methods ignore the correlation between features and select highly redundant, weakly discriminative features. This paper proposes an unsupervised feature selection method based on joint uncorrelated regression and nonnegative spectral analysis, which selects uncorrelated and discriminative features while adaptively and dynamically determining the similarity relations between data points, thereby obtaining more accurate structure and label information. Moreover, the generalized uncorrelatedness constraint in the model avoids trivial solutions, so the method combines the advantages of both uncorrelated regression and nonnegative spectral clustering. An efficient algorithm for solving the model is also designed, and extensive experiments and analyses on multiple datasets verify the model's superiority.

13.
白鹤翔  王健  李德玉  陈千 《计算机应用》2015,35(8):2355-2359
To address the problem that feature selection on the large-scale unsupervised datasets common in big data is too slow for practical applications, a fast attribute selection algorithm is proposed on the basis of the classical incremental algorithm for absolute reducts in rough set theory. First, the large-scale dataset is treated as a randomly arriving sequence of objects, and the candidate reduct is initialized as the empty set. Then, one object at a time is randomly drawn from the dataset without replacement; each time, the algorithm checks whether the current candidate reduct can discern this object from all objects in the current object set that should be discerned, adds the object to the current object set, and, if some pair cannot be discerned, adds a suitable attribute to the candidate reduct. Finally, if no indiscernible object is found in I consecutive draws, the candidate reduct is taken as the reduct of the large-scale dataset. Experiments on five unsupervised large-scale datasets show that the obtained reduct can discern more than 95% of object pairs, and computing it takes less than 1% of the time required by the discernibility-matrix-based algorithm and the incremental reduction algorithm. In a text topic mining experiment, the topics mined from the reduced dataset were essentially consistent with those mined from the original dataset. Both sets of results show that the method can perform attribute selection on large-scale datasets quickly and effectively.
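The sampling procedure described above can be sketched as follows; the discernibility test, the choice of which attribute to add, and the data are simplified assumptions rather than the paper's exact rough-set formulation.

```python
import random

def fast_reduct(objects, attributes, stop_after=20, seed=0):
    """Sampling-based candidate reduct, per the procedure sketched above.

    objects: list of dicts attribute -> value. Draws objects without
    replacement in random order; whenever the candidate reduct fails to
    discern a pair that the full attribute set discerns, one discerning
    attribute is added. Stops early after `stop_after` consecutive draws
    with no additions.
    """
    rng = random.Random(seed)
    order = list(range(len(objects)))
    rng.shuffle(order)
    reduct, seen, streak = set(), [], 0
    for idx in order:
        x = objects[idx]
        added = False
        for y in seen:
            differs_all = any(x[a] != y[a] for a in attributes)
            differs_red = any(x[a] != y[a] for a in reduct)
            if differs_all and not differs_red:
                # Add the first attribute that discerns this pair.
                reduct.add(next(a for a in attributes if x[a] != y[a]))
                added = True
        seen.append(x)
        streak = 0 if added else streak + 1
        if streak >= stop_after:
            break
    return reduct

# Toy data: attribute "a" discerns everything, "b" is constant noise.
objs = [{"a": i % 3, "b": 0} for i in range(10)]
red = fast_reduct(objs, ["a", "b"])
```

On this toy data only the informative attribute survives, whatever the draw order.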

14.
The optimum finite set of linear observables for discriminating two Gaussian stochastic processes is derived using classical methods and distribution function theory. The results offer a new, accurate information-theoretic strategy and are superior to well-known conventional methods using statistical distance measures.

15.
林筠超  万源 《计算机应用》2021,41(5):1282-1289
Unsupervised feature selection is a hot research topic in machine learning and is extremely important for both dimensionality reduction and classification of high-dimensional data. The similarity between data points can be measured by several different criteria, which makes it difficult to keep the similarity measure consistent across data points; moreover, most existing methods obtain the similarity matrix through neighbor assignment, so its number of connected components is usually not ideal. To address these two problems, the similarity matrix is treated as a variable rather than set in advance, and a graph-structure-based...

16.
17.
Pattern Analysis and Applications - Search-based methods that use matrix- or vector-based representations of the dataset are commonly employed to solve the problem of feature selection. These...

18.
Principal component analysis (PCA) is probably the best-known approach to unsupervised dimensionality reduction. However, the axes of the lower-dimensional space, i.e., the principal components (PCs), are a set of new variables with no clear physical meaning. Thus, interpreting results obtained in the lower-dimensional PCA space and acquiring data for test samples still involve all of the original measurements. To deal with this problem, we develop two algorithms to link the physically meaningless PCs back to a subset of the original measurements. The main idea of the algorithms is to evaluate and select feature subsets based on their capacity to reproduce sample projections on the principal axes. The strength of the new algorithms is that the computational complexity involved is significantly reduced compared with data-structural-similarity-based feature evaluation.

19.
We present an optimization-based unsupervised approach to automatic document summarization. In the proposed approach, text summarization is modeled as a Boolean programming problem. The model attempts to optimize three properties: (1) relevance: the summary should contain informative textual units that are relevant to the user; (2) redundancy: the summary should not contain multiple textual units that convey the same information; and (3) length: the summary is bounded in length. The approach applies to both single- and multi-document summarization. In both tasks, documents are split into sentences during preprocessing, salient sentences are selected from the document(s), and the summary is generated by threading the selected sentences in the order in which they appear in the original document(s). We implemented our model on the multi-document summarization task. When comparing our method to several existing summarization methods on the open DUC2005 and DUC2007 data sets, we found that it improves the summarization results significantly. This is because, first, when extracting summary sentences, the method considers not only the relevance of sentences to the whole sentence collection but also how representative they are of the topic. Second, when generating a summary, the method also deals with repetition of information. The methods were evaluated using the ROUGE-1, ROUGE-2, and ROUGE-SU4 metrics. We also demonstrate that the summarization result depends on the similarity measure: experiments showed that a combination of symmetric and asymmetric similarity measures yields better results than either used separately.
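The abstract describes a Boolean program; an exact solver is beyond a sketch, so the example below uses a simple greedy surrogate for the same objective: repeatedly add the sentence with the best relevance-minus-redundancy gain under a word budget, then emit the chosen sentences in their original order. Sentences, scores, and similarities are made up.

```python
def greedy_summary(sentences, relevance, similarity, budget):
    """Greedy surrogate for the relevance/redundancy/length objective.

    sentences: list of strings; relevance: one score per sentence;
    similarity: dict (i, j) -> float for i < j; budget: max total words.
    """
    chosen, length = [], 0
    while True:
        best, best_gain = None, 0.0
        for i, s in enumerate(sentences):
            if i in chosen or length + len(s.split()) > budget:
                continue
            redundancy = sum(similarity.get((min(i, j), max(i, j)), 0.0)
                             for j in chosen)
            gain = relevance[i] - redundancy
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # nothing left with positive gain within budget
            break
        chosen.append(best)
        length += len(sentences[best].split())
    # Thread selected sentences in original document order.
    return " ".join(sentences[i] for i in sorted(chosen))

sents = ["cats are small pets", "cats are small animals", "dogs bark loudly"]
rel = [1.0, 0.9, 0.8]
sim = {(0, 1): 0.95}   # the first two sentences say the same thing
summary = greedy_summary(sents, rel, sim, budget=8)
```

The near-duplicate sentence is penalized by its similarity to the already-chosen one, so the summary covers both topics instead of repeating one.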

20.
An algebraic approach to feature interactions
The various approaches proposed to provide communication between CAD systems and process planning systems share a major problem: due to geometric interactions among features, there may be several equally valid sets of manufacturable features describing the same part, and different sets of features may differ in their manufacturability. Thus, to produce a good process plan, or in some cases any plan at all, it may be necessary to interpret the part as a different set of features than the one initially obtained from the CAD model. This is addressed using an algebra of features. Given a set of features describing a machinable part, other equally valid interpretations of the part can be produced by performing operations in the algebra, enabling automated process planning systems to examine these interpretations and determine which is most appropriate for use in manufacturing. The feature algebra has been implemented for a restricted domain and integrated with the Protosolid solid modeling system and the EFHA process planning system.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号