Similar Documents

20 similar documents found
1.
Unsupervised feature selection attempts to select a small number of discriminative features from original high-dimensional data and preserve the intrinsic data structure without using data labels. As an unsupervised learning task, most previous methods use a coefficient matrix for feature reconstruction or feature projection, and a similarity graph is widely used to regularize the preservation of the intrinsic structure of the original data in the new feature space. However, a similarity gr...

2.
Lei  Cong  Zhu  Xiaofeng 《Multimedia Tools and Applications》2018,77(22):29605-29622
Multimedia Tools and Applications - Feature self-representation has become the backbone of unsupervised feature selection, since it is almost insensitive to noisy data. However, feature selection...

3.
Elghazel  Haytham  Aussem  Alex 《Machine Learning》2015,98(1-2):157-180
Machine Learning - In this paper, we show that the way internal estimates are used to measure variable importance in Random Forests is also applicable to feature selection in unsupervised...

4.
He  Jinrong  Bi  Yingzhou  Ding  Lixin  Li  Zhaokui  Wang  Shenwen 《Neural computing & applications》2017,28(10):3047-3059

Feature selection has received much attention from researchers, owing to its ability to overcome the curse of dimensionality, reduce computational costs, improve the performance of subsequent classification algorithms, and yield results with better interpretability. To remove redundant and noisy features from the original feature set, we define a local density and a discriminant distance for each feature vector: the local density measures the representative ability of each feature vector, and the discriminant distance measures the redundancy and similarity between features. Based on these two quantities, a decision graph score is proposed as the evaluation criterion for unsupervised feature selection. The method is intuitive and simple, and its performance is evaluated in data classification experiments. Statistical tests on the average classification accuracies over 16 real-life datasets show that the proposed method obtains better or comparable discriminative feature selection ability in 98% of cases, compared with state-of-the-art methods.

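The decision graph score lends itself to a compact illustration. The following is a minimal NumPy sketch, not the authors' exact formulation: since the abstract gives no formulas, it assumes a Gaussian-kernel local density over pairwise distances between feature vectors and takes the discriminant distance of a feature as its distance to the nearest feature of higher density (as in density-peaks clustering); a high score marks a feature that is both representative and non-redundant.

```python
import numpy as np

def decision_graph_scores(X, sigma=1.0):
    """Score features by local density * discriminant distance.

    X: (n_samples, n_features) data matrix; each column is a feature vector.
    Returns one score per feature; higher = more representative, less redundant.
    """
    F = X.T                                                      # treat each feature as a point
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)    # pairwise feature distances
    rho = np.exp(-(d / sigma) ** 2).sum(axis=1)                  # local density (Gaussian kernel)
    delta = np.zeros(len(F))
    for i in range(len(F)):
        denser = np.where(rho > rho[i])[0]                       # features with higher density
        # discriminant distance: distance to the nearest denser feature,
        # or the maximum distance if this feature is the densest one
        delta[i] = d[i, denser].min() if len(denser) else d[i].max()
    return rho * delta                                           # decision graph score

X = np.random.rand(100, 20)
selected = np.argsort(decision_graph_scores(X))[::-1][:5]        # top-5 features
```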

5.
Objective: Feature dimensionality reduction is a hot research topic in machine learning. Existing low-rank sparsity-preserving projection methods ignore the information loss between the original data space and the reduced low-dimensional space, and they cannot effectively handle the case of a small amount of labeled data alongside a large amount of unlabeled data. To address these two problems, a semi-supervised feature selection method based on low-rank sparse graph embedding (LRSE) is proposed. Method: LRSE consists of two steps. First, low-rank sparse representations are learned separately from the labeled and the unlabeled data, making full use of both. Second, the objective function jointly accounts for the information difference between the data before and after dimensionality reduction and for the preservation of structural information during the reduction: minimizing an information-loss term retains as much of the useful information in the data as possible, and embedding the low-rank sparse graph, which captures both the global structure and the intrinsic geometric structure of the data, into the low-dimensional space preserves the structural information of the original data space, so that more discriminative features can be selected. Results: The method was tested on six public datasets, using KNN classification on the reduced data to verify its classification accuracy, and compared experimentally with existing dimensionality reduction algorithms. It improved classification accuracy throughout and achieved the highest accuracy on five of the six datasets: on Wine, 11.19% higher than the second-best algorithm, robust unsupervised feature selection (RUFS); on Breast, 0.57% higher than RUFS; on Orlraws10P, 1% higher than the second-best algorithm, multi-cluster feature selection (MCFS); on Coil20, 1.07% higher than MCFS; and on Orl64, 2.5% higher than MCFS. Conclusion: The proposed semi-supervised feature selection algorithm based on low-rank sparse graph embedding allows the reduced data to retain as much of the information in the original data as possible and effectively handles the case of few labeled samples and many unlabeled samples. Experimental results show that it classifies better than existing algorithms. Note that because the method assumes all features lie on a linear manifold, it is applicable only to data on linear manifolds.

6.
Wu  Yue  Wang  Can  Zhang  Yue-qing  Bu  Jia-jun 《浙江大学学报:C卷英文版》2019,20(4):538-553

Feature selection has attracted a great deal of interest over the past decades. By selecting meaningful feature subsets, the performance of learning algorithms can be effectively improved. Because label information is expensive to obtain, unsupervised feature selection methods are more widely used than the supervised ones. The key to unsupervised feature selection is to find features that effectively reflect the underlying data distribution. However, due to the inevitable redundancies and noise in a dataset, the intrinsic data distribution is not best revealed when using all features. To address this issue, we propose a novel unsupervised feature selection algorithm via joint local learning and group sparse regression (JLLGSR). JLLGSR incorporates local learning based clustering with group sparsity regularized regression in a single formulation, and seeks features that respect both the manifold structure and group sparse structure in the data space. An iterative optimization method is developed in which the weights finally converge on the important features and the selected features are able to improve the clustering results. Experiments on multiple real-world datasets (images, voices, and web pages) demonstrate the effectiveness of JLLGSR.


7.
To improve the discriminative power of unsupervised embedding learning for image features, an unsupervised learning method based on deep clustering is proposed. By clustering the embedded features of the images, pseudo-class information among the images is obtained; the network model is then optimized by minimizing a clustering loss, so that it learns highly discriminative image features. Image retrieval performance on three standard datasets demonstrates the effectiveness of the method, which outperforms most current methods.
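A minimal sketch of the training loop this abstract describes, with hypothetical stand-ins for the unspecified parts: a toy PyTorch encoder, k-means over the current embeddings to produce pseudo-class labels, and cross-entropy through a linear head as the clustering loss.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128))  # toy embedding network
head = nn.Linear(128, 10)                                           # classifier over 10 pseudo-classes
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

images = torch.rand(256, 3, 32, 32)  # stand-in for an unlabeled image set

for epoch in range(5):
    # 1) cluster the current embeddings to obtain pseudo-class labels
    with torch.no_grad():
        emb = encoder(images)
    pseudo = torch.as_tensor(
        KMeans(n_clusters=10, n_init=10).fit_predict(emb.numpy()), dtype=torch.long)
    # 2) minimize the clustering loss (cross-entropy against the pseudo-labels)
    loss = nn.functional.cross_entropy(head(encoder(images)), pseudo)
    opt.zero_grad(); loss.backward(); opt.step()
```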

8.
龚永红  郑威  吴林  谭马龙  余浩 《计算机应用》2018,38(10):2856-2861
To address the problem that existing feature selection algorithms treat every sample equally and ignore differences between samples, so that the learned model cannot avoid the influence of noisy samples, an unsupervised feature selection algorithm incorporating self-paced learning theory (UFS-SPL) is proposed. First, an important subset of samples is selected automatically to train a robust initial feature selection model; then, less important samples are gradually and automatically introduced to improve the model's generalization ability, finally yielding a feature selection model that resists noise interference while being both robust and generalizable. Compared with convex semi-supervised multi-label feature selection (CSFS), regularized self-representation (RSR), and coupled dictionary learning for unsupervised feature selection (CDLFS) on real datasets, UFS-SPL improves clustering accuracy, mutual information, and purity by 12.06%, 10.54%, and 10.5% on average, respectively. The experimental results show that UFS-SPL can effectively reduce the influence of irrelevant information in datasets.
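The self-paced schedule itself is easy to sketch apart from the specific feature selection model. In the hedged sketch below, a hard self-paced regularizer admits only samples whose current loss falls below an age parameter lambda, which grows each round so that training starts from "easy" (likely clean) samples and gradually takes in harder ones; the inner model is a placeholder ridge-regularized self-representation fit, not the UFS-SPL objective.

```python
import numpy as np

def self_paced_weights(losses, lam):
    # hard self-paced regularizer: admit a sample only if its loss < lambda
    return (losses < lam).astype(float)

def spl_feature_model(X, rounds=5, lam=None, growth=1.5, alpha=1.0):
    """Alternate between fitting on 'easy' samples and admitting harder ones."""
    n, d = X.shape
    v = np.ones(n)                                        # start with all samples
    for _ in range(rounds):
        V = np.diag(v)
        # weighted ridge solution of min ||V^(1/2)(X - XW)||_F^2 + alpha*||W||_F^2
        W = np.linalg.solve(X.T @ V @ X + alpha * np.eye(d), X.T @ V @ X)
        losses = np.linalg.norm(X - X @ W, axis=1) ** 2   # per-sample reconstruction loss
        if lam is None:
            lam = np.median(losses)                       # initial age: admit the easier half
        v = self_paced_weights(losses, lam)               # re-select easy samples
        lam *= growth                                     # grow the age parameter each round
    return W, v

W, v = spl_feature_model(np.random.rand(80, 10))
scores = np.linalg.norm(W, axis=1)                        # row norms of W rank the features
```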

9.
Wang  Changpeng  Zhang  Jiangshe  Song  Xueli  Wu  Tianjun 《Multimedia Tools and Applications》2020,79(39-40):29179-29198
Multimedia Tools and Applications - Face clustering aims to group the face images without any label information into clusters, and has recently attracted considerable attention in machine learning...

10.
Multimedia Tools and Applications - Transfer learning is proposed to solve a general problem in practical applications faced by traditional machine learning methods, that is, the training and test...

11.
Zheng  Wei  Zhu  Xiaofeng  Zhu  Yonghua  Hu  Rongyao  Lei  Cong 《Multimedia Tools and Applications》2018,77(22):29739-29755

Previous spectral feature selection methods generate the similarity graph while ignoring the negative effects of noise and redundancy in the original feature space, and ignore the association between graph matrix learning and feature selection, so they easily produce suboptimal results. To address these issues, this paper joins graph learning and feature selection in one framework to obtain optimal selection performance. More specifically, we use a least-squares loss function and an ℓ2,1-norm regularization to remove the effect of noisy and redundant features, and use the resulting local correlations among the features to dynamically learn a graph matrix from a low-dimensional space of the original data. Experimental results on real datasets show that our method outperforms state-of-the-art feature selection methods for classification tasks.

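The ℓ2,1-regularized least-squares component has a standard iteratively reweighted closed-form solver, sketched below for the generic problem min_W ||XW - Y||_F^2 + gamma*||W||_{2,1} (this omits the paper's joint graph learning, and Y is assumed to be some target or pseudo-label matrix). Rows of W with large norms indicate selected features.

```python
import numpy as np

def l21_regression(X, Y, gamma=1.0, iters=30, eps=1e-8):
    """Solve min_W ||XW - Y||_F^2 + gamma * ||W||_{2,1} by iterative reweighting."""
    n, d = X.shape
    D = np.eye(d)                                    # reweighting matrix, updated each iteration
    for _ in range(iters):
        W = np.linalg.solve(X.T @ X + gamma * D, X.T @ Y)
        row_norms = np.linalg.norm(W, axis=1)
        D = np.diag(1.0 / (2.0 * row_norms + eps))   # standard ||W||_{2,1} reweighting
    return W

X = np.random.rand(100, 30)
Y = np.random.rand(100, 5)                           # e.g., pseudo-labels from spectral clustering
W = l21_regression(X, Y, gamma=0.5)
top = np.argsort(np.linalg.norm(W, axis=1))[::-1][:10]  # 10 highest-scoring features
```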

12.
Unsupervised feature selection using feature similarity
In this article, we describe an unsupervised feature selection algorithm suitable for data sets large in both dimension and size. The method is based on measuring similarity between features, whereby redundancy among them is removed. This requires no search and is therefore fast. A new feature similarity measure, called the maximum information compression index, is introduced. The algorithm is generic in nature and is capable of multiscale representation of data sets. Its superiority, in terms of speed and performance, is established extensively over various real-life data sets of different sizes and dimensions. It is also demonstrated how redundancy and information loss in feature selection can be quantified with an entropy measure.
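The maximum information compression index for a pair of features is, in closed form, the smallest eigenvalue of their 2x2 covariance matrix: it is zero exactly when one feature is a linear function of the other (fully redundant, compressible without loss) and grows as the pair becomes less correlated. A short sketch:

```python
import numpy as np

def mici(x, y):
    """Maximum information compression index: smallest eigenvalue of cov([x, y])."""
    cov = np.cov(x, y)                    # 2x2 covariance matrix of the feature pair
    return np.linalg.eigvalsh(cov)[0]     # eigvalsh returns eigenvalues in ascending order

x = np.random.rand(500)
print(mici(x, 2 * x + 3))                 # ~0: fully redundant pair
print(mici(x, np.random.rand(500)))       # larger: nearly unrelated features
```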

13.
Multi-label feature selection is feature selection for multi-label data and an important means of improving the performance of multi-label classifiers. A manifold-based constraint Laplacian score method for multi-label feature selection (Manifold-based Constraint Laplacian Score, M-CLS) is proposed. The method defines a Laplacian score in each of two spaces: in the feature space, the adjacency matrix is improved using the similarity of the logical class labels, yielding a constrained Laplacian score; in the label space, manifold learning maps the logical class labels to real values, yielding a Laplacian score over the real-valued label space. The product of the two scores serves as the final feature evaluation criterion. Experimental results show that the proposed method outperforms several multi-label feature selection methods.
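For reference, the classical unconstrained Laplacian score that M-CLS builds on can be computed as follows; this standard formulation (He et al.) uses a k-NN heat-kernel graph and none of the paper's label-space constraints. Smaller scores indicate features that better preserve the graph's local structure.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_scores(X, k=5, sigma=1.0):
    """Classical Laplacian score per feature (smaller = better locality preservation)."""
    n = X.shape[0]
    # symmetric k-NN graph with heat-kernel weights
    A = kneighbors_graph(X, k, mode='distance').toarray()
    S = np.where(A > 0, np.exp(-A**2 / sigma), 0.0)
    S = np.maximum(S, S.T)
    D = np.diag(S.sum(axis=1))
    L = D - S                                        # graph Laplacian
    ones = np.ones(n)
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ D @ ones) / (ones @ D @ ones)   # remove the degree-weighted mean
        scores[r] = (f @ L @ f) / (f @ D @ f)
    return scores
```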

14.
15.
何杜博  孙胜祥  梁新  谢力  张侃 《控制与决策》2024,39(7):2295-2304
To address feature selection in multi-target regression, a multi-target feature selection algorithm based on adaptive graph learning is proposed. It considers three relational structures within a single framework: between input features and target outputs, among the different target outputs, and among the samples, and performs feature selection based on this structural information. First, a low-rank constraint is imposed on the coefficient matrix of a conventional sparse regression model, and low-rank learning is used to decouple and learn the correlations among features and the dependencies among targets. Then, an adaptive graph learning term based on the local structure of the samples is constructed, making full use of the similarity structure among samples for feature selection. Further, a structure-matrix optimization term based on output correlations is introduced so that the model more fully accounts for the correlations among targets. Finally, an alternating optimization algorithm is proposed to solve the objective function, and its convergence is proved theoretically. Experiments on public datasets show that the proposed method has better performance and applicability than existing mainstream multi-target feature selection methods.

16.
Dimensionality reduction has attracted extensive attention in machine learning. It usually comes in two types: feature selection and subspace learning. Many researchers have demonstrated that dimensionality reduction is meaningful for real applications; unfortunately, a large portion of this work uses feature selection and subspace learning independently. This paper explores a novel supervised feature selection algorithm that takes subspace learning into account. Specifically, it employs an ℓ2,1-norm and an ℓ2,p-norm regularizer, respectively, to conduct sample denoising and feature selection by exploring the correlation structure of the data. It then uses two constraints (i.e., a hypergraph and a low-rank constraint) to capture the local structure and the global structure of the data, respectively. Finally, an alternating optimization framework iteratively optimizes each parameter while fixing the others until the algorithm converges. Extensive experiments show that the new supervised feature selection method achieves strong results on the eighteen public data sets.

17.
With the emergence of large amounts of unlabeled high-dimensional data, unsupervised feature selection in machine learning is studied. An unsupervised feature selection algorithm combining a self-representation similarity matrix with manifold learning is proposed. First, a similarity matrix is constructed from the self-representation property of the data, and, following the manifold-learning idea that a low-dimensional manifold can represent the structure of high-dimensional data, an unsupervised feature selection optimization model that accounts for manifold learning is built. Second, to ensure that more useful and sparser features are selected, the model is constrained with the ℓ2,1 norm, making features compete with one another and eliminating redundancy. The optimization model is then solved by alternating iteration over the variables, and the convergence of the algorithm is proved. Finally, comparative experiments against several other unsupervised feature selection algorithms on four datasets demonstrate the effectiveness of the proposed algorithm.

18.
Computational Visual Media - As a kind of weaker supervisory information, pairwise constraints can be exploited to guide the data analysis process, such as data clustering. This paper formulates...

19.
Zhang  Leyuan  Li  Yangding  Zhang  Jilian  Li  Pengqing  Li  Jiaye 《Multimedia Tools and Applications》2019,78(23):33319-33337

High-dimensional data often exhibit non-linearity, low rank, and feature redundancy, which pose great difficulties for further research. Therefore, a low-rank unsupervised feature selection algorithm based on kernel functions is proposed. First, each feature is projected into a high-dimensional kernel space by the kernel function to overcome linear inseparability in the low-dimensional space. At the same time, a self-representation form is introduced into the residual term, and low-rank and sparsity constraints are imposed on the coefficient matrix. Finally, a sparse regularization factor on the coefficient vectors of the kernel matrix is introduced to implement feature selection. In this algorithm, the kernel matrix resolves linear inseparability, the low-rank constraint accounts for the global information of the data, and the self-representation form determines the importance of features. Experiments show that, compared with other algorithms, classification after feature selection with this algorithm achieves good results.


20.
This paper describes a novel feature selection algorithm for unsupervised clustering that combines the clustering ensembles method with the population-based incremental learning algorithm. The main idea is to search for a subset of all features such that a clustering algorithm trained on this feature subset achieves the clustering solution most similar to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is first obtained by a clustering ensembles method, and the population-based incremental learning algorithm is then adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed algorithm is that it is dimensionality-unbiased; in addition, it leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed unsupervised feature selection algorithm often obtains a better feature subset than other existing unsupervised feature selection algorithms.
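The population-based incremental learning (PBIL) search over feature subsets can be sketched generically: a probability vector over features generates candidate binary subsets, each candidate is scored by how closely clustering on it reproduces a reference solution, and the vector is nudged toward the best candidate. The sketch below substitutes k-means and the adjusted Rand index for the paper's unspecified clustering ensemble and similarity measure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def pbil_feature_search(X, ref_labels, gens=30, pop=20, lr=0.1, k=3):
    """PBIL over binary feature-inclusion vectors.

    Fitness: agreement (ARI) between clustering on the candidate subset
    and the reference clustering `ref_labels` (e.g., an ensemble consensus).
    """
    rng = np.random.default_rng(0)
    d = X.shape[1]
    p = np.full(d, 0.5)                       # inclusion probability per feature
    best, best_fit = None, -np.inf
    for _ in range(gens):
        cands = rng.random((pop, d)) < p      # sample candidate subsets
        fits = []
        for m in cands:
            if not m.any():
                fits.append(-np.inf); continue
            labels = KMeans(n_clusters=k, n_init=5).fit_predict(X[:, m])
            fits.append(adjusted_rand_score(ref_labels, labels))
        winner = cands[int(np.argmax(fits))]
        p = (1 - lr) * p + lr * winner        # PBIL update toward the best candidate
        if max(fits) > best_fit:
            best, best_fit = winner, max(fits)
    return best

X = np.random.rand(60, 8)
ref = KMeans(n_clusters=3, n_init=10).fit_predict(X)  # stand-in for an ensemble solution
mask = pbil_feature_search(X, ref)
```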
