Similar Documents
Found 20 similar documents (search time: 31 ms)
1.

In this paper, we propose a new unsupervised feature selection method based on kernel Fisher discriminant analysis and regression learning. Existing feature selection methods rely on either manifold learning or discriminative techniques, each of which has shortcomings. Although some studies show the advantage of two-step methods that benefit from both manifold learning and discriminative techniques, a joint formulation has been shown to be more efficient. To this end, we construct a global discriminant objective term within a clustering framework based on the kernel method. We add a regression learning term to the objective function, which drives the optimization to select a low-dimensional representation of the original dataset. We use the L2,1-norm to impose a sparse structure on the features, which yields more discriminative features. We propose an algorithm to solve the resulting optimization problem, and further discuss its convergence, parameter sensitivity, and computational complexity, as well as its clustering and classification accuracy. To demonstrate the effectiveness of the proposed algorithm, we perform experiments on a range of available datasets and compare the results against state-of-the-art algorithms. The results show that our method outperforms existing state-of-the-art methods in many cases on different datasets, though the improved performance comes at the cost of increased time complexity.

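The L2,1-norm device mentioned in the abstract above is easy to illustrate: the norm sums the L2 norms of the rows of a weight matrix, so penalizing it drives whole rows toward zero, and features whose rows survive are kept. A minimal numpy sketch (the toy matrix `W` and the top-k selection rule are illustrative, not the paper's full optimization):

```python
import numpy as np

def l21_norm(W):
    """L2,1-norm: the sum of the L2 norms of the rows of W."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()

def select_by_row_norm(W, k):
    """Score each feature by the L2 norm of its row in W and return
    the indices of the k highest-scoring features."""
    scores = np.sqrt((W ** 2).sum(axis=1))
    return np.argsort(scores)[::-1][:k]

# Toy weight matrix: 4 features x 2 latent dimensions.
W = np.array([[0.0, 0.1],
              [2.0, 1.0],
              [0.0, 0.0],
              [1.0, 1.0]])
print(select_by_row_norm(W, 2))  # features 1 and 3 have the largest rows
```

In a real model, W would be the learned projection matrix, and rows driven to (near) zero by the L2,1 penalty correspond to discarded features.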

2.
Feature selection is a common dimensionality-reduction technique in machine learning. However, when preserving the local structure of data samples, traditional unsupervised feature selection algorithms ignore the influence of ranking locality on feature selection. Using the triplet-based local structure of the data, this work builds ranking relations among samples and preserves this locality during feature selection, yielding an improved Simultaneous Orthogonal basis Clustering Feature Selection (SOCFS) algorithm based on triplet ranking locality, which selects features that both preserve local structure and are highly discriminative. Experimental results show that, compared with traditional unsupervised feature selection algorithms, the improved SOCFS algorithm achieves better clustering quality and faster convergence.
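The triplet ranking-locality criterion of SOCFS is not spelled out in the abstract; as background, here is a minimal numpy sketch of the classic Laplacian score, a standard way to rate features by how well they preserve local sample structure (smaller score means better preservation; the RBF bandwidth and toy data are illustrative):

```python
import numpy as np

def laplacian_score(X, sigma=1.0):
    """Laplacian score of each feature: ratio of the feature's roughness
    over the sample-similarity graph to its (weighted) variance."""
    n = X.shape[0]
    # RBF similarity between all pairs of samples.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / (2 * sigma ** 2))
    D = np.diag(S.sum(axis=1))
    L = D - S                      # graph Laplacian
    ones = np.ones(n)
    scores = []
    for f in X.T:
        # Center the feature with respect to the degree weighting.
        f_hat = f - (f @ D @ ones) / (ones @ D @ ones) * ones
        scores.append((f_hat @ L @ f_hat) / (f_hat @ D @ f_hat))
    return np.array(scores)

# Two clusters along feature 0; feature 1 is noise within each cluster.
X = np.array([[0.0, 0.0], [0.1, 1.0], [0.2, 0.0],
              [5.0, 1.0], [5.1, 0.0], [5.2, 1.0]])
print(laplacian_score(X))  # the noisy second feature gets the larger score
```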

3.
This paper describes a novel feature selection algorithm for unsupervised clustering that combines the clustering ensembles method with the population-based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset achieves the clustering solution most similar to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is first obtained by a clustering ensembles method; then the population-based incremental learning algorithm is adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed algorithm is that it is dimensionality-unbiased. In addition, it leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed algorithm often obtains a better feature subset than other existing unsupervised feature selection algorithms.
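A common way to summarize an ensemble of clusterings, as consensus-style methods like the one above do, is the co-association matrix: how often each pair of samples lands in the same cluster across base clusterings. A minimal numpy sketch (the toy labelings are hypothetical):

```python
import numpy as np

def coassociation(labelings):
    """Co-association matrix: entry (i, j) is the fraction of base
    clusterings that assign samples i and j to the same cluster."""
    labelings = np.asarray(labelings)   # shape: (n_clusterings, n_samples)
    m, n = labelings.shape
    C = np.zeros((n, n))
    for lab in labelings:
        C += (lab[:, None] == lab[None, :])
    return C / m

# Three base clusterings of four samples.
runs = [[0, 0, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0]]
print(coassociation(runs))
```

Thresholding or clustering this matrix gives a consensus partition that a feature-subset search can then be scored against.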

4.
Simultaneous feature selection and clustering using mixture models
Clustering is a common unsupervised learning technique used to discover group structure in a set of data. While there exist many algorithms for clustering, the important issue of feature selection, that is, what attributes of the data should be used by the clustering algorithms, is rarely touched upon. Feature selection for clustering is difficult because, unlike in supervised learning, there are no class labels for the data and, thus, no obvious criteria to guide the search. Another important problem in clustering is the determination of the number of clusters, which clearly impacts and is influenced by the feature selection issue. In this paper, we propose the concept of feature saliency and introduce an expectation-maximization (EM) algorithm to estimate it, in the context of mixture-based clustering. Due to the introduction of a minimum message length model selection criterion, the saliency of irrelevant features is driven toward zero, which corresponds to performing feature selection. The criterion and algorithm are then extended to simultaneously estimate the feature saliencies and the number of clusters.

5.
Unsupervised feature selection is an indispensable preprocessing step in data mining and pattern recognition tasks, where unlabeled high-dimensional data are ubiquitous. However, most existing feature selection methods ignore the correlations between features and select highly redundant features with low discriminative power. This paper proposes a method named joint uncorrelated regression and nonnegative spectral analysis for unsupervised feature selection, which selects uncorrelated and discriminative features while adaptively and dynamically determining the similarity relations between data points, thereby obtaining more accurate structure and label information. Moreover, the generalized uncorrelatedness constraint in the model avoids trivial solutions, so the method combines the advantages of uncorrelated regression and nonnegative spectral clustering. An efficient algorithm is designed to solve the model, and extensive experiments and analyses on multiple datasets verify its superiority.

6.
Unsupervised feature selection reduces data dimensionality and improves learning performance, and is an important research topic in machine learning and pattern recognition. Unlike most methods, which introduce sparse regularization into the objective function to solve a relaxed problem, this paper proposes an unsupervised feature selection algorithm based on maximum entropy and an l2,0-norm constraint. The l2,0-norm equality constraint has a unique, well-defined meaning, namely the number of selected features, so no regularization parameter needs to be chosen and parameter tuning is avoided. Spectral analysis is incorporated to explore the local geometric structure of the data, and the similarity matrix is constructed adaptively based on the maximum entropy principle. An alternating iterative optimization algorithm based on the augmented Lagrangian method is designed to solve the model. Comparative experiments with several other unsupervised feature selection algorithms on four real datasets verify the effectiveness of the proposed algorithm.

7.
Feature selection is a very important dimensionality-reduction step when processing high-dimensional data. The low-rank representation model can reveal the global structure of data and has some discriminative ability, while the sparse representation model can reveal the essential structure of data through few connections. By introducing a sparsity constraint into the low-rank representation model, this work constructs a low-rank sparse representation model that learns a low-rank sparse similarity matrix between data points, and proposes a low-rank sparse scoring mechanism based on this matrix for unsupervised feature selection. Clustering and classification experiments on the selected features across different databases, compared with traditional feature selection algorithms, demonstrate the effectiveness of the low-rank feature selection algorithm.

8.
Feature selection is an important preprocessing step for dealing with high-dimensional data. In this paper, we propose a novel unsupervised feature selection method that embeds a subspace learning regularization, i.e., principal component analysis (PCA), into a sparse feature selection framework. Specifically, we select informative features via the sparse learning framework while preserving the principal components (i.e., the maximal variance) of the data, thereby improving the interpretability of the feature selection model. Furthermore, we propose an effective optimization algorithm for the objective function that achieves a stable optimal result with fast convergence. Compared with five state-of-the-art unsupervised feature selection methods on six benchmark and real-world datasets, our proposed method achieved the best classification performance.
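The variance-preservation term in this kind of model is the standard PCA objective. As background, a numpy sketch of PCA via SVD that reports the fraction of total variance the top-k components retain (this is the generic PCA computation, not the paper's joint model):

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components via SVD and report
    the fraction of total variance those components preserve."""
    Xc = X - X.mean(axis=0)                     # center the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2                                # variance per component
    return Xc @ Vt[:k].T, var[:k].sum() / var.sum()

# Nearly collinear toy data: one component captures almost everything.
X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [4.0, 8.1]])
Z, ratio = pca(X, 1)
print(ratio)
```

In the joint model described above, this maximal-variance criterion acts as a regularizer on the sparse selection matrix rather than as a separate projection step.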

9.
The number of features extracted in real-world applications has grown dramatically, while machine learning methods lose performance in this scenario, so feature reduction is required. In fault diagnosis for rotating machinery in particular, the number of extracted features is sizable, in order to collect all the available information from several monitored signals. Several approaches reduce the data using supervised or unsupervised strategies; the supervised ones are the most reliable, but their main disadvantage is that the fault condition must be known beforehand. This work proposes a new unsupervised algorithm for feature selection based on attribute clustering and rough set theory. Rough set theory is used to compute similarities between features through the relative dependency. The clustering approach combines distance-based classification with prototype-based clustering to group similar features, without requiring the number of clusters as an input. Additionally, the algorithm has an evolving property that allows dynamic adjustment of the cluster structure during the clustering process, even when a new set of attributes feeds the algorithm. This gives the algorithm an incremental learning property, avoiding retraining. These properties define the main contribution and significance of the proposed algorithm. Two fault diagnosis problems of fault severity classification, in gears and bearings, are studied to test the algorithm. Classification results show that the proposed algorithm selects adequate features as accurately as other feature selection and reduction approaches.

10.
Zhong Zhi, Chen Long. Multimedia Tools and Applications, 2019, 78(23): 33339-33356

For many machine learning and data mining tasks in the information explosion environment, one is often confronted with very high-dimensional heterogeneous data, and demand has grown for new methods that select discriminative and valuable features beneficial to classification and clustering. In this paper, we propose a novel feature selection method that jointly maps original data from the input space to a kernel space and conducts both subspace learning (via locality preserving projection) and feature selection (via a sparsity constraint). Specifically, the nonlinear relationships in the data are explored adequately by mapping the data from the original low-dimensional space to the kernel space. Meanwhile, the subspace learning technique is leveraged to preserve the local structure information of the ambient space. Finally, by restricting the sparsity of the coefficient matrix, the weights of some features are driven to zero; as a result, redundant and irrelevant features are eliminated and the method selects informative and distinguishing features. Experimental comparisons with several state-of-the-art methods demonstrate that the proposed method outperforms them on clustering tasks.


11.
Text clustering is an appropriate method for partitioning a huge number of text documents into groups. Document size affects text clustering by decreasing its performance, and text documents contain sparse and uninformative features that reduce the performance of the underlying clustering algorithm and increase computational time. Feature selection is a fundamental unsupervised learning technique used to select a new subset of informative text features, improving the performance of text clustering and reducing computational time. This paper proposes a hybrid of the particle swarm optimization algorithm and genetic operators for the feature selection problem. K-means clustering is used to evaluate the effectiveness of the obtained feature subsets. Experiments were conducted on eight common text datasets with varied characteristics. The results show that the proposed hybrid algorithm (H-FSPSOTC) improved the performance of the clustering algorithm by generating a new subset of more informative features, and it compares favorably with the other algorithms published in the literature. Finally, the feature selection technique helps the clustering algorithm obtain accurate clusters.
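H-FSPSOTC itself adds genetic operators, which the abstract does not detail; as background, a minimal plain binary PSO sketch for feature-subset search, where each particle is a 0/1 mask and velocities are squashed through a sigmoid into bit-flip probabilities. The particle count, coefficients, and the toy objective are illustrative, not the paper's settings:

```python
import math
import random

def binary_pso(score, n_feats, n_particles=10, iters=30, seed=0):
    """Binary PSO maximizing score(mask); each particle is a 0/1 mask."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(n_particles)]
    vel = [[0.0] * n_feats for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_s = [score(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_s[i])
    gbest, gbest_s = pbest[g][:], pbest_s[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_feats):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                # Sigmoid of velocity gives the probability of setting the bit.
                pos[i][d] = 1 if rng.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            s = score(pos[i])
            if s > pbest_s[i]:
                pbest[i], pbest_s[i] = pos[i][:], s
                if s > gbest_s:
                    gbest, gbest_s = pos[i][:], s
    return gbest, gbest_s

# Hypothetical objective: reward masks close to a known target subset
# (a real run would score masks by k-means clustering quality instead).
target = [1, 0, 1, 0, 0]
best, best_s = binary_pso(lambda m: -sum(a != b for a, b in zip(m, target)), 5)
print(best, best_s)
```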

12.
Feature selection removes irrelevant and redundant features to find a compact representation of the original features with good generalization ability. However, noise and outliers in the data increase the rank of the learned coefficient matrix, preventing the algorithm from capturing the true low-rank structure of high-dimensional data. This work therefore uses the Schatten-p norm to approximate the rank-minimization problem and feature self-representation to reconstruct the coefficient matrix in unsupervised feature selection, establishing an unsupervised feature selection algorithm based on the Schatten-p norm and feature self-representation (SPSR), which is solved within the framework of the augmented Lagrange multiplier method and the alternating direction method of multipliers. Experimental comparisons with classical unsupervised feature selection algorithms on six public datasets show that SPSR achieves higher clustering accuracy and can effectively identify representative feature subsets.

13.

Classical linear discriminant analysis (LDA) has been applied successfully in machine learning and pattern recognition, and many variants of LDA have been proposed. However, traditional LDA has several disadvantages. First, although features chosen by feature selection have good interpretability, LDA performs poorly at feature selection. Second, the original data contain many redundant features and noisy samples, but LDA is not robust to noisy data and outliers. Last, LDA utilizes only the global discriminant information, without considering the local discriminant structure. To overcome these problems, we present a robust sparse manifold discriminant analysis (RSMDA) method. In RSMDA, introducing the L2,1 norm allows the most discriminant features to be selected for discriminant analysis, while the local manifold structure is used to capture the local discriminant information of the original data. Owing to the L2,1 constraint and the local discriminant information, the proposed method is highly robust to noisy data and has the potential to outperform other methods. Extensive experiments on different data sets demonstrate the effectiveness of RSMDA.


14.
许航, 张师超, 吴兆江, 李佳烨. 软件学报 (Journal of Software), 2021, 32(11): 3440-3451
Regularized feature selection algorithms are not very effective at reducing the influence of noisy data, and the local structure of the sample space is rarely considered: after samples are mapped into the feature subspace, the relations between them are inconsistent with those in the original space, so the results of downstream data mining algorithms are unsatisfactory. This paper proposes a noise-robust feature selection method that effectively addresses these two shortcomings of traditional algorithms. The method first adopts a self-paced learning training scheme, which greatly reduces the probability that outliers enter training and also speeds up model convergence. It then performs embedded feature selection with a regression learner carrying an l2,1 regularization term, which both induces sparse solutions and alleviates overfitting, making the model more robust. Finally, it incorporates the technique of locality preserving projection, converting its projection matrix into the regression parameter matrix of the model, so that the original local structure among samples is preserved during feature selection. The algorithm is evaluated on a series of benchmark datasets, and the experimental results on aCC and aRMSE demonstrate the effectiveness of the proposed feature selection method.

15.
Extreme learning machine (ELM), as a new learning approach, has shown good generalization performance in regression and classification applications. Clustering analysis is an important tool for exploring the structure of data and has been employed in many disciplines and applications. In this paper, we present a method that projects input data into a high-dimensional feature space via ELM and then performs unsupervised clustering with the artificial bee colony (ABC) algorithm. While the ELM projection facilitates the separability of clusters, a metaheuristic such as the ABC algorithm overcomes the dependence on cluster-center initialization and the convergence to local minima suffered by conventional algorithms such as K-means. The proposed ELM-ABC algorithm is tested on 12 benchmark data sets. The experimental results show that it can effectively improve the quality of clustering.
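The ELM projection step above amounts to a fixed random hidden layer: random input weights and biases followed by a nonlinearity, with no training of that layer. A minimal numpy sketch (the hidden size and sigmoid activation are illustrative; the ABC clustering stage is omitted):

```python
import numpy as np

def elm_map(X, n_hidden=50, seed=0):
    """ELM-style random feature map: fixed random weights and biases,
    followed by a sigmoid activation."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # untrained input weights
    b = rng.normal(size=n_hidden)                # untrained biases
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

X = np.array([[1.0, 2.0], [3.0, 4.0]])
H = elm_map(X, n_hidden=5)
print(H.shape)
```

Clustering then operates on the rows of `H` instead of the raw inputs, where clusters tend to be more separable.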

16.
Most graph-based unsupervised feature selection methods replace the nonconvex l2,0-norm constraint with l2,1-norm sparse regularization of the projection matrix. However, l2,1 regularization selects features one by one according to their scores, ignoring the correlations between features. This paper therefore proposes a graph-optimized unsupervised group feature selection method based on l2,0-norm sparsity and fuzzy similarity, which performs graph learning and feature selection simultaneously. In graph learning, a similarity matrix with exactly the desired number of connected components is learned. In feature selection, the number of nonzero rows of the projection matrix is constrained, achieving group feature selection. To handle the nonconvex l2,0-norm constraint, a feature selection vector with elements 0 or 1 is introduced, converting the l2,0-norm constraint into a 0-1 integer program, whose discrete 0-1 constraint is then relaxed into two continuous constraints for solving. Finally, a fuzzy similarity factor is introduced to extend the method and learn a more accurate graph structure. Experiments on real datasets show the effectiveness of the proposed method.

17.
In research on computer-aided diagnosis of neuropsychiatric disorders, professionals are needed to provide diagnosis-level semantic annotations for samples, which costs a great deal of time and effort, so conducting auxiliary diagnosis research in an unsupervised manner is of great significance. This paper proposes an unsupervised feature selection method based on adaptive sparse structure learning for the auxiliary diagnosis of schizophrenia and Alzheimer's disease. Sparse representation and the manifold structure of the data are learned simultaneously in a unified framework, in which a generalized norm models the reconstruction error of sparse learning and the manifold structure of the dataset is updated iteratively, addressing the insufficient robustness of traditional feature selection methods. Experiments on two public datasets for schizophrenia and Alzheimer's disease demonstrate the effectiveness of the method in classifying neuropsychiatric disorders.

18.
Lu Haohan, Chen Hongmei, Li Tianrui, Chen Hao, Luo Chuan. Applied Intelligence, 2022, 52(10): 11652-11671

Data in the domain of multi-label learning are usually high-dimensional, which makes computation very costly. As an important dimension reduction technology, feature selection has attracted the attention of many researchers, and the imbalance of data labels is another factor that complicates multi-label learning. To tackle these problems, we propose a new multi-label feature selection algorithm named IMRFS, which combines manifold learning and label imbalance. First, to preserve the manifold structure among samples, a Laplacian graph is used to construct the manifold regularization. In addition, the local manifold structure of each label is considered in order to find the correlations between labels, and the imbalanced distribution of labels is embedded into the manifold structure of the labels. Furthermore, to ensure the robustness and sparsity of the IMRFS method, the L2,1-norm is applied to the loss function and the sparse regularization term simultaneously. We then adopt an iterative strategy to optimize the objective function of IMRFS. Finally, comparison results on multiple datasets show the effectiveness of the IMRFS method.


19.
In this paper, we propose a novel approach to simultaneous localized feature selection and model detection for unsupervised learning. In our approach, local feature saliency, together with the other parameters of Gaussian mixtures, is estimated by Bayesian variational learning. Experiments on both synthetic and real-world data sets demonstrate that our approach is superior to both global feature selection and subspace clustering methods.

20.
Redundant features in high-dimensional data harm the training efficiency and generalization ability of machine learning. To improve pattern recognition accuracy and reduce computational complexity, an unsupervised feature selection method based on the regularized mutual representation (RMR) property is proposed. First, the correlations between features are exploited to build a mathematical model for unsupervised feature selection constrained by the Frobenius norm. A divide-and-conquer ridge regression optimization algorithm is then designed to optimize the model efficiently. Finally, the importance of each feature is evaluated from the optimal solution of the model, and a representative feature subset of the original data is selected. On clustering accuracy, RMR improves on the Laplacian method by 7 percentage points, on nonnegative discriminative feature selection (NDFS) by 7 percentage points, on regularized self-representation (RSR) by 6 percentage points, and on self-representation feature selection (SR_FS) by 3 percentage points; on data redundancy rate, RMR is lower than Laplacian by 10 percentage points, NDFS by 7 percentage points, RSR by 3 percentage points, and SR_FS by 2 percentage points. The experimental results show that RMR effectively selects important features, reduces data redundancy, and improves sample clustering accuracy.
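The ridge-regression core of this kind of self-representation model has a closed form, which can be sketched as follows. This is a generic mutual-representation score with a Frobenius-norm penalty, not the paper's exact RMR model or its divide-and-conquer solver; the toy data and `lam` value are illustrative:

```python
import numpy as np

def mutual_representation_scores(X, lam=1.0):
    """Reconstruct X from itself with a Frobenius-regularized coefficient
    matrix W = (X^T X + lam * I)^{-1} X^T X, then rate feature j by the
    L2 norm of row j of W (its contribution to representing the others)."""
    d = X.shape[1]
    G = X.T @ X
    W = np.linalg.solve(G + lam * np.eye(d), G)   # closed-form ridge solution
    return np.linalg.norm(W, axis=1)

# Feature 0 carries a strong signal; feature 1 is near zero everywhere.
X = np.array([[1.0, 0.01], [2.0, 0.02], [3.0, 0.01], [4.0, 0.02]])
print(mutual_representation_scores(X))
```

Features with large row norms dominate the reconstruction of the rest and are kept; near-zero rows mark features the others can do without.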


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号