20 similar documents found (search time: 109 ms)
1.
2.
3.
Addressing the proliferation of unlabeled high-dimensional data, this work studies unsupervised feature selection in machine learning. An unsupervised feature selection algorithm combining a self-representation similarity matrix with manifold learning is proposed. First, a similarity matrix is constructed from the self-representation property of the data and, following the manifold-learning idea that a low-dimensional manifold can represent the structure of high-dimensional data, an unsupervised feature selection optimization model incorporating manifold learning is established. Second, to ensure that more useful and sparser features are selected, the model is constrained with the l2,1-norm, which makes features compete with one another and eliminates redundancy. The model is then solved by alternating iteration over the variables, and the convergence of the algorithm is proved. Finally, comparative experiments against several other unsupervised feature selection algorithms on four datasets demonstrate the effectiveness of the proposed algorithm.
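As an illustrative sketch only (not the paper's implementation), the l2,1-regularized self-representation core can be written with the standard iteratively reweighted update; `lam`, the iteration count, and the toy data are assumptions:

```python
import numpy as np

def l21_self_representation_scores(X, lam=0.1, n_iter=30, eps=1e-8):
    """Solve min_W ||X - XW||_F^2 + lam * ||W||_{2,1} by iterative
    reweighting: W = (X^T X + lam * D)^{-1} X^T X with
    D_ii = 1 / (2 * ||W_i||_2).  Row norms of W score the features;
    the l2,1 penalty drives whole rows toward zero, so features
    compete and redundant ones are suppressed."""
    d = X.shape[1]
    G = X.T @ X
    D = np.eye(d)
    for _ in range(n_iter):
        W = np.linalg.solve(G + lam * D, G)      # closed-form reweighted step
        row_norms = np.linalg.norm(W, axis=1) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
    return np.linalg.norm(W, axis=1)             # per-feature importance

# Toy data: three correlated features plus one pure-noise feature.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
X = np.hstack([base, 2 * base, rng.normal(size=(100, 1)), -base])
scores = l21_self_representation_scores(X)
```

The reweighting shrinks rows of W belonging to features that other features can already reconstruct, which is exactly the competition-and-redundancy-elimination effect the abstract describes.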
4.
Web image annotation based on Hessian semi-supervised feature selection  Total citations: 1 (self-citations: 0, other citations: 1)
This work studies semi-supervised feature selection, using both labeled and unlabeled images to improve web image annotation. A new semi-supervised feature selection method based on the second-order Hessian energy is proposed; it preserves local topological structure better and has stronger inference ability, thereby overcoming the drawbacks of graph-Laplacian-based semi-supervised learning. The proposed algorithm is applied to web image annotation and evaluated on two large-scale web image databases. The results show that Hessian semi-supervised feature selection outperforms its Laplacian counterpart and is well suited to large-scale web image annotation.
5.
To address feature redundancy in high-dimensional unlabeled data, an unsupervised feature selection method based on feature regularized sparse association (FRSA) is proposed. A feature selection model is built in which a Frobenius-norm loss term captures the association between features and an L1 sparsity regularizer is imposed on the feature weight matrix. A divide-and-conquer iterative shrinkage-thresholding algorithm is designed to optimize the objective. Each feature's importance is evaluated from its weight, and representative features are selected. Comparative experiments with commonly used unsupervised feature selection methods on six standard datasets of different types show that the proposed method outperforms the other unsupervised feature selection methods.
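The shrinkage-thresholding step at the heart of such solvers can be sketched as plain ISTA (a simplification; the paper's divide-and-conquer variant and its exact objective are not reproduced here, and `lam` is an assumed value):

```python
import numpy as np

def ista_feature_weights(X, lam=0.05, n_iter=200):
    """Plain ISTA for min_W ||X - XW||_F^2 + lam * ||W||_1.
    Each iteration takes a gradient step on the smooth Frobenius loss,
    then applies soft-thresholding, the proximal operator of the L1
    penalty.  Features are scored by the weight mass of their rows."""
    d = X.shape[1]
    G = X.T @ X
    step = 1.0 / (2.0 * np.linalg.norm(G, 2))   # 1 / Lipschitz const of gradient
    W = np.zeros((d, d))
    for _ in range(n_iter):
        grad = 2.0 * (G @ W - G)                # gradient of ||X - XW||_F^2
        Z = W - step * grad
        W = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)  # soft-threshold
    return np.abs(W).sum(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 6))
weights = ista_feature_weights(X)
```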
6.
Most existing multi-view unsupervised feature selection methods share the following problems: the sample similarity matrix, view weight matrix, and feature weight matrix are usually predefined, so they cannot capture the true structure of the data or reflect the importance of different views and features, and consequently fail to select useful features. To address this, view weights and feature weights are first learned adaptively on top of multi-view fuzzy C-means clustering, performing feature selection while preserving clustering performance. The sample similarity matrix is then learned adaptively under a Laplacian rank constraint, yielding an Adaptive-Learning-based Multi-view Unsupervised Feature Selection (ALMUFS) method. Finally, an alternating iterative optimization algorithm is designed to solve the objective, and the method is compared with six unsupervised feature selection baselines on eight real datasets. The results show that ALMUFS outperforms the other methods in clustering accuracy and F-measure, improving on Adaptive Collaborative Similarity Learning (ACSL) by 8.99 and 11.87 percentage points on average, and on ASVM (Adaptive Similarity and View Weight) by 11.09 and 13.21 percentage points on average, confirming the feasibility and effectiveness of the proposed method.
7.
A feature selection algorithm for text clustering based on class information  Total citations: 2 (self-citations: 0, other citations: 2)
Text clustering is an unsupervised learning method, and the lack of class information makes it hard to apply supervised feature selection directly. A class-information-based feature selection algorithm is therefore proposed: on the clustering result of a density-based clustering algorithm, it reselects the most discriminative features using information-gain feature selection. Experiments verify the feasibility and effectiveness of the algorithm.
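The key move above is to treat cluster labels as pseudo-classes so that a supervised criterion becomes usable. A minimal sketch of the information-gain ranking step (the toy term vectors are invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(class; feature) for a binary term-presence feature, with the
    pseudo-class labels supplied by a prior clustering step."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        sub = [l for f, l in zip(feature, labels) if f == v]
        gain -= len(sub) / n * entropy(sub)
    return gain

# Cluster assignments stand in for class labels, as in the method above.
labels = [0, 0, 0, 1, 1, 1]
term_a = [1, 1, 1, 0, 0, 0]   # perfectly separates the clusters
term_b = [1, 0, 1, 1, 0, 1]   # uninformative about the clusters
print(information_gain(term_a, labels))  # 1.0
print(information_gain(term_b, labels))  # ~0.0 (up to float error)
```

Ranking features by this gain and keeping the top ones is the reselection step the abstract describes.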
8.
9.
Feature selection is one of the key problems in pattern recognition and data mining; it removes redundant and irrelevant features from a dataset to improve learning performance. Based on the max-relevance min-redundancy criterion, a new semi-supervised feature selection method based on relevance and redundancy analysis (S2R2) is proposed, independent of any classification algorithm. The method first analyzes and extends an unsupervised relevance measure, then combines it with information gain to design a semi-supervised measure of feature relevance and redundancy that effectively identifies and removes irrelevant and redundant features. Finally, it builds the feature subset greedily with an incremental search, avoiding an exponentially large solution space and improving efficiency. A fast filter version, FS2R2, is also proposed to better handle large-scale feature selection. Experimental results on several standard datasets demonstrate the effectiveness and superiority of the proposed methods.
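The greedy max-relevance min-redundancy core can be sketched as follows. This is the generic mRMR skeleton, not S2R2 itself: the paper additionally mixes in an unsupervised relevance measure, which is omitted here, and the toy data is invented:

```python
import math
from collections import Counter

def mutual_info(a, b):
    """I(A;B) in bits for two discrete sequences, from empirical joint counts."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum(c / n * math.log2(c * n / (pa[x] * pb[y]))
               for (x, y), c in pab.items())

def mrmr_select(features, labels, k):
    """Greedy incremental search: at each step add the feature maximizing
    relevance to the labels minus mean redundancy with the chosen set,
    avoiding an exponential search over all subsets."""
    selected, remaining = [], list(range(len(features)))
    while len(selected) < k and remaining:
        def score(j):
            rel = mutual_info(features[j], labels)
            red = (sum(mutual_info(features[j], features[i]) for i in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

labels = [0, 0, 1, 1, 0, 0, 1, 1]
f0 = [0, 0, 1, 1, 0, 0, 1, 0]   # noisy copy of the labels
f1 = list(f0)                    # exact duplicate of f0 (redundant)
f2 = [0, 0, 1, 0, 0, 0, 1, 1]   # different noisy copy (complementary)
selected = mrmr_select([f0, f1, f2], labels, 2)
```

The redundancy term makes the selector skip the duplicate `f1` in favor of the complementary `f2`, which is the behavior the criterion is designed for.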
10.
吴剑 《计算机工程与应用》2011,47(26):79-82
To improve the detection speed and effectiveness of intrusion detection systems, an unsupervised intrusion detection method based on feature selection combined with a genetic algorithm is proposed. An improved genetic algorithm serves as the search strategy, while K-means clusters the data projected onto the candidate features; the ratio of between-cluster scatter to within-cluster scatter evaluates each feature subset, so that the optimal feature subset can be found and used for unsupervised intrusion detection. Experimental results show that, by solving the feature selection problem for intrusion detection, the method performs better than unsupervised intrusion detection without feature selection.
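The scatter-ratio fitness used to rate a candidate subset can be sketched directly (a simplified single-ratio form; the paper's exact ratio and the toy data here are assumptions):

```python
def scatter_ratio(X, labels, subset):
    """Feature-subset fitness: between-cluster scatter divided by
    within-cluster scatter, computed on the chosen feature columns only.
    Larger values mean tighter, better-separated clusters."""
    def mean(rows):
        return [sum(r[j] for r in rows) / len(rows) for j in subset]
    clusters = {}
    for row, lab in zip(X, labels):
        clusters.setdefault(lab, []).append(row)
    overall = mean(X)
    within = between = 0.0
    for rows in clusters.values():
        c = mean(rows)
        within += sum((r[j] - c[i]) ** 2
                      for r in rows for i, j in enumerate(subset))
        between += len(rows) * sum((ci - oi) ** 2
                                   for ci, oi in zip(c, overall))
    return between / within if within else float("inf")

# Two clusters separated in features 0 and 1; feature 2 is noise.
X = [[0, 0, 5], [0, 1, 0], [1, 0, 5], [5, 5, 0], [5, 6, 5], [6, 5, 0]]
labels = [0, 0, 0, 1, 1, 1]
```

A GA would evolve bit-strings over the features (here, K-means would supply `labels` for each candidate) and use this ratio as the fitness to be maximized.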
11.
Hou Yuexian, Zhang Peng, Yan Tingxu, Li Wenjie, Song Dawei. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(3): 348-364
A fundamental goal of unsupervised feature selection is denoising: identifying and reducing noisy features that are not discriminative. Because no information about the real classes is available, denoising is challenging. Noisy features can distort a reasonable distance metric and produce unreasonable feature spaces, i.e., feature spaces in which common clustering algorithms cannot effectively find the real classes. To overcome this problem, we start from a primary observation: the relevance of features is intrinsic and independent of any metric scaling of the feature space. This implies that feature selection should be invariant, at least to some extent, with respect to metric scaling. In this paper, we clarify the necessity of considering metric invariance in unsupervised feature selection and propose a novel model incorporating it, motivated by the following observation: if the statistic that guides the unsupervised feature selection process is invariant under metric scaling, the solution of the model will also be invariant. Hence, if a metric-invariant model can distinguish discriminative features from noisy ones in a reasonable feature space, it will also work on an unreasonable counterpart obtained from the reasonable one by metric scaling. A theoretical justification of the metric invariance of the proposed model is given, and an empirical evaluation demonstrates its promising performance.
12.
Attribute reduction is an effective technique against the "curse of dimensionality". Fractal Dimensionality Reduction (FDR) is a recent unsupervised attribute selection technique, but unfortunately it requires multiple scans of the dataset and therefore struggles with high-dimensional data. Genetic-algorithm-based attribute reduction outperforms traditional attribute selection on high-dimensional data, but it cannot be applied in the unsupervised setting. Combining the inherent randomized parallel search of genetic algorithms with the unsupervised nature of fractal attribute selection, an unsupervised fractal attribute subset selection algorithm based on a genetic algorithm, GABUFSS (Genetic Algorithm Based Unsupervised Feature Subset Selection), is designed and implemented. Comparative experiments on synthetic and real datasets analyze the performance of GABUFSS against FDR; the results show that GABUFSS generally outperforms FDR and can discover attribute subsets with equivalent results.
13.
Most graph-based unsupervised feature selection methods replace the non-convex l2,0-norm constraint with l2,1-norm sparse regularization of the projection matrix, but l2,1 regularization selects features one by one according to their scores, ignoring correlations among features. This paper therefore proposes a graph-optimized unsupervised group feature selection method based on l2,0-norm sparsity and fuzzy similarity, performing graph learning and feature selection simultaneously. In graph learning, a similarity matrix with exactly the desired number of connected components is learned; in feature selection, the number of nonzero rows of the projection matrix is constrained, achieving group feature selection. To handle the non-convex l2,0-norm constraint, a feature selection vector with entries of 0 or 1 is introduced, turning the l2,0 constraint into a 0-1 integer program, and the discrete 0-1 integer constraint is then relaxed into two continuous constraints for solving. Finally, a fuzzy similarity factor is introduced to extend the method and learn a more accurate graph structure. Experiments on real datasets show the effectiveness of the method.
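One common way to realize the relaxation mentioned above (the abstract does not spell out the exact constraints, so this is an assumed standard form) is the box constraint 0 <= delta_i <= 1 together with the equality sum_i delta_i * (1 - delta_i) = 0, which jointly force every entry to 0 or 1. A tiny checker illustrating the equivalence:

```python
def satisfies_relaxed_constraints(delta, tol=1e-9):
    """delta is binary (every entry exactly 0 or 1) if and only if it
    satisfies the two continuous constraints: the box 0 <= delta_i <= 1
    and the equality sum_i delta_i * (1 - delta_i) = 0, since each
    summand is nonnegative on the box and vanishes only at 0 or 1."""
    in_box = all(0.0 <= x <= 1.0 for x in delta)
    return in_box and abs(sum(x * (1.0 - x) for x in delta)) <= tol

print(satisfies_relaxed_constraints([1.0, 0.0, 1.0]))  # True
print(satisfies_relaxed_constraints([0.5, 1.0, 0.0]))  # False
```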
14.
This paper proposes a novel unsupervised feature selection method that jointly performs self-representation and subspace learning. Adopting the idea of self-representation, all features are used to represent each feature. A Frobenius-norm regularization is used for feature selection because it mitigates over-fitting, and Locality Preserving Projection (LPP) serves as a regularization term that maintains local adjacency relations among the data during feature-space transformation. A low-rank constraint is further introduced to find effective low-dimensional structures in the data and reduce redundancy. Experimental results on real-world datasets verify that the proposed method selects the most discriminative features and outperforms state-of-the-art unsupervised feature selection methods in classification accuracy, standard deviation, and coefficient of variation.
15.
This paper describes a novel feature selection algorithm for unsupervised clustering that combines the clustering ensembles method with the population-based incremental learning (PBIL) algorithm. The main idea is to search for a feature subset such that a clustering algorithm trained on it achieves the solution most similar to one obtained by ensemble learning: a clustering solution is first produced by a clustering ensembles method, and PBIL is then used to find the feature subset that best fits this solution. One advantage of the proposed algorithm is that it is dimensionality-unbiased; in addition, it leverages the consensus across multiple clustering solutions. Experimental results on several real datasets demonstrate that the proposed algorithm often obtains a better feature subset than other existing unsupervised feature selection algorithms.
16.
Feature selection removes irrelevant and redundant features to find a compact representation of the original features with good generalization ability; however, noise and outliers in the data increase the rank of the learned coefficient matrix, preventing the algorithm from capturing the true low-rank structure of high-dimensional data. Therefore, using the Schatten-p norm to approximate the rank minimization problem and feature self-representation to reconstruct the coefficient matrix of the unsupervised feature selection problem, an unsupervised feature selection algorithm based on the Schatten-p norm and feature self-representation (SPSR) is established and solved within the augmented Lagrange multiplier and alternating direction method of multipliers (ADMM) framework. Experiments comparing SPSR with classical unsupervised feature selection algorithms on six public datasets show that SPSR achieves higher clustering accuracy and can effectively identify representative feature subsets.
17.
Dimensionality reduction is an important and challenging task in machine learning and data mining. Feature selection and feature extraction are two commonly used techniques for reducing the dimensionality of data and increasing the efficiency of learning algorithms; feature selection in the absence of class labels, namely unsupervised feature selection, is particularly challenging and interesting. This paper proposes a new unsupervised feature selection criterion developed from the viewpoint of subspace learning, treated as a matrix factorization problem. The advantages of this work are four-fold. First, building on matrix factorization, a unified framework is established for feature selection, feature extraction, and clustering. Second, an iterative update algorithm is provided via matrix factorization, an efficient technique for high-dimensional data. Third, an effective feature selection method for numeric data is put forward that does not rely on a discretization process. Fourth, the new criterion provides a sound foundation for embedding kernel tricks into feature selection, and an algorithm based on kernel methods is also proposed. The algorithms are compared with four state-of-the-art feature selection methods on six publicly available datasets; in terms of clustering results, the two proposed algorithms outperform the others on almost all datasets tested.
18.
Many unsupervised feature selection methods exist for single-valued information systems, but few for interval-valued information systems. For interval-ordered information systems, this paper proposes a fuzzy dominance relation and, based on it, extends fuzzy rank information entropy and fuzzy rank mutual information to evaluate the importance of features. Combined with an unsupervised maximal-information minimal-redundancy (UmIMR) criterion that considers both information content and redundancy, an unsupervised feature selection method is constructed. Experiments demonstrate the effectiveness of the proposed method.
19.