Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Semi-supervised dimensionality reduction has attracted increasing attention in the big-data era. Many algorithms have been developed that use a small number of pairwise constraints to achieve performance comparable to that of fully supervised methods. However, one challenging problem for semi-supervised approaches is the appropriate choice of the constraint set, including its cardinality and composition, which to a large extent determines the performance of the resulting algorithm. In this work, we address this problem by incorporating ensemble subspaces and active learning into dimensionality reduction, and propose a new semi-supervised dimensionality reduction method based on global and local scatter with active constraint selection. Unlike traditional methods that select the supervised information in a single subspace, we select pairwise constraints in ensemble subspaces, where a novel active learning algorithm with both exploration and filtering generates informative pairwise constraints. The automatic constraint selection approach proposed in this paper can be generalized to any constraint-based semi-supervised learning algorithm. Comparative experiments on four face databases validate the effectiveness of the proposed method.
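The "exploration" step of active constraint selection can be illustrated with a farthest-first traversal over the data: each new query point is the one farthest from everything queried so far, so the resulting constraint queries spread across the data. This is only a minimal sketch of one plausible exploration heuristic, not the authors' algorithm; the function name and the pairing rule are our own.

```python
import random

def farthest_first_pairs(X, n_pairs, seed=0):
    """Pick candidate pairwise-constraint queries by farthest-first traversal:
    each new point is the one farthest (in squared Euclidean distance) from
    all points chosen so far, so queries spread across the data."""
    rng = random.Random(seed)
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    chosen = [rng.randrange(len(X))]
    while len(chosen) < n_pairs + 1:
        # farthest point from the already-chosen set
        far = max(range(len(X)),
                  key=lambda i: min(dist(X[i], X[j]) for j in chosen))
        chosen.append(far)
    # pair each new point with its nearest earlier choice -> one query
    pairs = []
    for k in range(1, len(chosen)):
        i = chosen[k]
        j = min(chosen[:k], key=lambda j: dist(X[i], X[j]))
        pairs.append((i, j))
    return pairs
```

A real active learner would then filter these candidate pairs by expected informativeness before asking an oracle for must-link/cannot-link labels.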

2.
Dimensionality reduction is an essential data preprocessing technique for large-scale and streaming data classification tasks. It can improve both the efficiency and the effectiveness of classifiers. Traditional dimensionality reduction approaches fall into two categories: feature extraction and feature selection. Techniques in the feature extraction category are typically more effective than those in the feature selection category, but they may break down when processing large-scale data sets or data streams because of their high computational complexity. Feature selection approaches, in turn, mostly rely on greedy strategies and hence are not guaranteed to be optimal with respect to the chosen criteria. In this paper, we give an overview of widely used feature extraction and selection algorithms under a unified framework. Moreover, we propose two novel dimensionality reduction algorithms based on the orthogonal centroid algorithm (OC): an incremental OC (IOC) algorithm for feature extraction, and an orthogonal centroid feature selection (OCFS) method that provides optimal solutions according to the OC criterion. Both are designed under the same optimization criterion. Experiments on the Reuters Corpus Volume 1 data set and several public large-scale text data sets show that the two algorithms compare favorably with other state-of-the-art algorithms in both effectiveness and efficiency.
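As a rough illustration of the centroid idea behind OCFS, the sketch below scores each feature by the class-weighted squared deviation of class centroids from the global centroid, then keeps the top scorers. This is an assumed form of the criterion (up to constants), not the paper's exact derivation.

```python
def ocfs_scores(X, y):
    """Score each feature by the weighted squared deviation of class
    centroids from the global centroid; higher scores indicate features
    along which the class centroids are more spread out."""
    n, d = len(X), len(X[0])
    global_mean = [sum(row[j] for row in X) / n for j in range(d)]
    scores = [0.0] * d
    for c in set(y):
        rows = [X[i] for i in range(n) if y[i] == c]
        nc = len(rows)
        cm = [sum(r[j] for r in rows) / nc for j in range(d)]
        for j in range(d):
            scores[j] += (nc / n) * (cm[j] - global_mean[j]) ** 2
    return scores
```

Feature selection then amounts to keeping the indices with the largest scores, which needs only one pass over the data.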

3.
Facial expression recognition generally requires that faces be described in terms of a set of measurable features. The selection and quality of the features representing each face have a considerable bearing on the success of subsequent facial expression classification. Feature selection is the process of choosing a subset of features in order to increase classifier efficiency and allow higher classification accuracy. Many current dimensionality reduction techniques used for facial expression recognition involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. In this paper, we present a feature selection methodology based on the nondominated sorting genetic algorithm II (NSGA-II), one of the latest genetic algorithms for solving multiobjective optimization problems with high accuracy. In the proposed feature selection process, NSGA-II optimizes a vector of feature weights that increases discrimination by means of class separation. The proposed methodology is evaluated on the 3D facial expression database BU-3DFE. Classification results validate the effectiveness and flexibility of the proposed approach when compared with results reported in the literature under the same experimental settings.

4.
Multi-label learning deals with data associated with a set of labels simultaneously. Dimensionality reduction is an important but challenging task in multi-label learning, and feature selection is an efficient dimensionality reduction technique that searches for an optimal feature subset preserving the most relevant information. In this paper, we propose an effective feature evaluation criterion for multi-label feature selection, called the neighborhood relationship preserving score. The criterion is inspired by similarity preservation, which is widely used in single-label feature selection, and evaluates each feature subset by measuring its capability to preserve the neighborhood relationship among samples. Unlike similarity preservation, we consider the order of sample similarities, which expresses the neighborhood relationship among samples better than pairwise similarity alone. Based on this criterion, we design a ranking algorithm and a greedy algorithm for the feature selection problem. The proposed algorithms are validated on six publicly available data sets from a machine learning repository, and the experimental results demonstrate their superiority over the compared state-of-the-art methods.
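The idea of preserving the *order* of sample similarities can be sketched by comparing the ranking of all pairwise distances before and after feature selection. The Spearman correlation of distance orders below (assuming no ties) is an illustrative stand-in for the paper's score, not its exact definition.

```python
import itertools

def euclid2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def rank(values):
    """Rank positions of values in ascending order (no tie handling)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for pos, idx in enumerate(order):
        r[idx] = pos
    return r

def order_preserving_score(X, feats):
    """Spearman correlation between the ordering of pairwise distances in
    the full space and in the subspace spanned by `feats`; a subset that
    keeps the order of sample similarities scores close to 1."""
    pairs = list(itertools.combinations(range(len(X)), 2))
    d_full = [euclid2(X[i], X[j]) for i, j in pairs]
    Xs = [[row[f] for f in feats] for row in X]
    d_sub = [euclid2(Xs[i], Xs[j]) for i, j in pairs]
    rf, rs = rank(d_full), rank(d_sub)
    m = len(pairs)
    return 1 - 6 * sum((a - b) ** 2 for a, b in zip(rf, rs)) / (m * (m ** 2 - 1))
```

A ranking algorithm would score single features this way, while a greedy algorithm would grow the subset by whichever feature most improves the score.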

5.
In multi-label learning, dimensionality reduction is an important and challenging task, and feature selection is an efficient dimensionality reduction technique. Building on neighborhood rough set theory, this paper proposes a label-specific feature selection method for multi-label data. The method theoretically guarantees that the selected label-specific features are strongly correlated with the corresponding labels, thereby improving the reduction results. First, the method applies a rough-set reduction algorithm to remove redundant attributes, obtaining label-specific features while keeping the classification ability unchanged. Then, based on the concepts of neighborhood accuracy and neighborhood roughness, it redefines the computation of dependency and significance under neighborhood rough sets and investigates the properties of this model. Finally, it constructs a neighborhood-rough-set-based label-specific feature selection model and implements a feature selection algorithm for multi-label classification. Simulation experiments on several public data sets demonstrate the effectiveness of the algorithm.
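A minimal sketch of the neighborhood rough-set dependency degree that such methods build on: a sample lies in the positive region if its delta-neighborhood (computed over the selected features) is pure in the label. The Chebyshev neighborhood below is an assumption for illustration; the paper's exact definitions differ.

```python
def neighborhood_dependency(X, y, feats, delta):
    """Dependency of the labels on the features `feats` under a neighborhood
    rough set: the fraction of samples whose delta-neighborhood (Chebyshev
    distance over the selected features) is label-pure."""
    def dist(i, j):
        return max(abs(X[i][f] - X[j][f]) for f in feats)
    n = len(X)
    consistent = 0
    for i in range(n):
        neigh = [j for j in range(n) if dist(i, j) <= delta]
        if all(y[j] == y[i] for j in neigh):
            consistent += 1
    return consistent / n
```

A feature subset with dependency 1 determines the label everywhere within the neighborhood granularity; reduction seeks the smallest such subset.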

6.
7.
An efficient filter feature selection (FS) method, the SVM-FuzCoC approach, is proposed in this paper, achieving a satisfactory trade-off between classification accuracy and dimensionality reduction. The method also has reasonably low computational requirements, even in high-dimensional feature spaces. To assess the quality of features, we introduce a local fuzzy evaluation measure based on the fuzzy membership degree of each pattern in its class; this measure reveals the adequacy of the data coverage provided by each feature. The required membership grades are determined via a novel fuzzy-output kernel-based support vector machine applied to single features. Based on a fuzzy complementary criterion (FuzCoC), the FS procedure iteratively selects the feature with the maximum additional contribution relative to the information content provided by the previously selected features. This search strategy yields small subsets of powerful and complementary features, alleviating the feature redundancy problem. We also devise SVM-FuzCoC variants by employing seven other methods for deriving fuzzy degrees from SVM outputs, based on probabilistic or fuzzy criteria. Our method is compared with a set of existing FS methods in terms of performance, dimensionality reduction, and computational speed via a comprehensive experimental setup that includes synthetic and real-world datasets.
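The fuzzy-complementary selection loop can be sketched as follows, assuming the per-feature membership degrees have already been produced (here they come from an unspecified source rather than the paper's fuzzy-output SVM). Each step adds the feature whose memberships contribute the most coverage beyond what is already covered.

```python
def fuzcoc_select(mu, n_feats):
    """Greedy fuzzy-complementary selection: at each step pick the feature
    whose membership degrees add the most coverage *beyond* what already-
    selected features provide (additional contribution = positive margins).

    mu: dict feature -> list of membership degrees, one per pattern."""
    n = len(next(iter(mu.values())))
    covered = [0.0] * n
    selected = []
    while len(selected) < n_feats:
        def gain(f):
            return sum(max(0.0, m - c) for m, c in zip(mu[f], covered))
        best = max((f for f in mu if f not in selected), key=gain)
        if gain(best) <= 0:  # no feature adds information: stop early
            break
        selected.append(best)
        covered = [max(c, m) for c, m in zip(covered, mu[best])]
    return selected
```

Note how a feature that is individually strong but redundant with an earlier pick contributes no additional coverage and is skipped, which is exactly the redundancy-alleviating behavior described above.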

8.
Rapid advances in hyperspectral sensing technology have made it possible to collect remote-sensing data in hundreds of bands. However, the data-analysis methods that have been successfully applied to multispectral data are often limited in achieving satisfactory results for hyperspectral data. The major problem is the high dimensionality, which deteriorates classification due to the Hughes phenomenon. To avoid this problem, a large number of feature-reduction algorithms have been proposed. Based on the concept of multiple classifiers, we propose a new scheme for the feature selection procedure: instead of performing feature selection over all classes jointly, we perform it for each class separately, so that a different feature subset is selected for each class in the first step. Once the feature subsets are selected, a Bayesian classifier is trained on each subset, and a combination mechanism is used to combine the outputs of these classifiers. Experiments are carried out on an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data set. Encouraging results have been obtained in terms of classification accuracy, suggesting the effectiveness of the proposed algorithms.
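A toy version of the per-class scheme: select a separate feature subset for each class, then combine per-class scores at prediction time. The mean-difference separation measure and nearest-centroid combiner below are simplifications standing in for the paper's Bayesian classifiers.

```python
def per_class_feature_subsets(X, y, k):
    """For each class pick the k features that best separate it from the
    rest (|class mean - rest mean|, a stand-in for any per-class score)."""
    d = len(X[0])
    subsets = {}
    for c in sorted(set(y)):
        ins = [X[i] for i in range(len(X)) if y[i] == c]
        outs = [X[i] for i in range(len(X)) if y[i] != c]
        def sep(j):
            mi = sum(r[j] for r in ins) / len(ins)
            mo = sum(r[j] for r in outs) / len(outs)
            return abs(mi - mo)
        subsets[c] = sorted(range(d), key=sep, reverse=True)[:k]
    return subsets

def combined_predict(X, y, subsets, query):
    """Combine per-class nearest-centroid scores: each class is scored only
    on its own feature subset; the smallest distance wins."""
    best, best_d = None, float('inf')
    for c, feats in subsets.items():
        ins = [X[i] for i in range(len(X)) if y[i] == c]
        centroid = [sum(r[j] for r in ins) / len(ins) for j in feats]
        dist = sum((query[j] - m) ** 2 for j, m in zip(feats, centroid))
        if dist < best_d:
            best, best_d = c, dist
    return best
```

Comparing raw distances across different subsets is a crude combination rule; the paper's probabilistic outputs make the per-class scores properly comparable.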

9.
Multi-source information fusion can effectively improve the accuracy of equipment fault diagnosis, but it requires many kinds of fault-feature data acquired by different sensors. Using all of these features for diagnosis leads to heavy computation and poor real-time performance. To address this, evidence theory is combined with rough set theory: an attribute reduction theorem based on belief intervals and a corresponding fault-feature (attribute) reduction method are proposed, so that fast diagnosis can be performed using only the important features retained after reduction. Feature data are discretized using random fuzzy variables and K-means clustering, core attributes are obtained by compressing a binary matrix, and the belief interval of each attribute serves as the selection criterion during iterative reduction, adding important attributes to the core until the final reduct is obtained. Finally, a feature-fusion diagnosis experiment on a motor rotor, compared with classical rough-set reduction methods, verifies the effectiveness of the proposed method.

10.
11.
A new unsupervised feature selection algorithm is developed based on the concept of the shared-nearest-neighbor distance between pattern pairs. A multi-objective framework is employed to preserve sample similarity while reducing the dimensionality of the feature space. A reduced set of samples, chosen to preserve sample similarity, lessens the effect of outliers on the feature selection procedure while also decreasing computational complexity. Experimental results on six publicly available data sets demonstrate the effectiveness of this feature selection strategy, and a comparative study with related methods based on different evaluation indices demonstrates the superiority of the proposed algorithm.
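The shared-nearest-neighbor distance at the core of the method can be sketched directly from the overlap of k-NN lists: two points are close when they share many of the same neighbors. This is a naive, unoptimized version for illustration only.

```python
def knn_lists(X, k):
    """k nearest neighbors (by squared Euclidean distance) of every point."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return [set(sorted((j for j in range(len(X)) if j != i),
                       key=lambda j: d2(X[i], X[j]))[:k])
            for i in range(len(X))]

def snn_distance(X, i, j, k=3):
    """Shared-nearest-neighbor distance: 1 minus the fraction of k-NN
    overlap; small when the two points sit in the same local neighborhood."""
    nn = knn_lists(X, k)
    return 1.0 - len(nn[i] & nn[j]) / k
```

Because it depends on neighborhood overlap rather than raw distance, the SNN distance is less sensitive to varying density and to outliers, which is why it suits the sample-reduction step described above.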

12.
Feature extraction of hyperspectral data based on automatic subspace partitioning (cited 7 times: 0 self-citations, 7 by others)
To address the large data volume and high dimensionality of hyperspectral remote-sensing images, an automatic subspace partitioning method is proposed for reducing hyperspectral image data. The method comprises three steps: data-space partitioning, principal component analysis (PCA) within each subspace, and feature selection based on a class-separability criterion. It exploits the strong local correlation between neighboring bands of hyperspectral images to partition the whole data set into several independent, highly correlated subspaces; PCA is then applied within each subspace for feature extraction, and effective features are selected according to the separability of the ground-cover classes. Finally, land-cover classification is used to verify the method's effectiveness. Experimental results show that the method effectively reduces dimensionality and extracts features from hyperspectral images, and that the extracted features yield better classification accuracy than the existing adaptive subspace decomposition and segmented PCA methods. With this method, when the dimensionality of the hyperspectral data is reduced by 90%, the overall accuracy of a nine-class land-cover classification experiment reaches 80.2%.
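The band-partitioning step can be sketched by scanning the band sequence and splitting wherever the correlation between adjacent bands drops, exploiting the local correlation the abstract mentions. The fixed threshold rule is an assumption for illustration; the paper's partitioning criterion may differ.

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def partition_bands(bands, threshold=0.9):
    """Split a list of spectral bands (each a list of pixel values) into
    contiguous subspaces wherever the correlation between adjacent bands
    falls below `threshold`."""
    groups, current = [], [0]
    for i in range(1, len(bands)):
        if pearson(bands[i - 1], bands[i]) >= threshold:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups
```

PCA would then be run separately inside each returned group, keeping only a few components per subspace.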

13.
Rough set (RS) theory has been a topic of general interest in knowledge discovery and pattern recognition. Machine learning algorithms are known to degrade in performance when faced with many features (or attributes) that are not necessary for rule discovery. Many methods for selecting a subset of features have been proposed, but no single method can handle complex systems with many attributes or features. A hybrid mechanism is therefore proposed that integrates rough sets with an artificial neural network (Rough-ANN) for feature selection in pattern recognition. RS-based attribute reduction, used as a preprocessor, decreases the number of inputs to the neural network and speeds up training; this avoids the sensitivity of rough sets to noise and improves the system's robustness. An RS-based heuristic algorithm is proposed for feature selection, which can quickly and effectively select an optimal subset of features from a large database with many features. The validity of the proposed hybrid recognizer is verified through practical experiments and fault diagnosis in an industrial process.

14.
王晓明, 印莹. Computer Science (《计算机科学》), 2007, 34(8): 171-176
DNA microarray technology makes it possible to monitor the expression levels of thousands of genes simultaneously. Applying traditional clustering algorithms directly to high-dimensional gene expression data suffers from the "curse of dimensionality". Feature transformation and feature selection are two common dimensionality reduction approaches, but the new features produced by the former are hard to interpret with the original domain knowledge, while the latter usually loses information. Moreover, traditional clustering algorithms require user-specified parameters, and different settings strongly affect the results. To address these problems, this paper proposes CIS, a new microarray-data clustering algorithm based on iterative refinement. It uses neither feature transformation nor feature selection, and it determines the clustering parameters automatically. CIS repeatedly uses the latest sample clusters to obtain new gene clusters, then re-clusters the samples with the new gene clusters as features, refining step by step; the final result is easy to interpret and avoids information loss. The method also reduces experimental errors caused by the user's lack of domain knowledge. CIS is applied to two real microarray data sets, and the experimental results confirm its effectiveness.

15.
This paper first briefly analyzes several classical feature selection methods and summarizes their shortcomings. It then introduces the concept of feature concentration, brings discernibility object-pair sets into rough set theory, and proposes an attribute reduction algorithm based on discernibility object-pair sets. Finally, combining this reduction algorithm with feature concentration, it presents a comprehensive feature selection method: feature concentration is first used for preliminary selection, filtering out terms to reduce the sparsity of the feature space, and the proposed attribute reduction algorithm then removes redundancy to obtain a representative feature subset. Experimental results show that the comprehensive method performs well.

16.
In multimedia information retrieval, multimedia data are represented as vectors in high-dimensional space. To search these vectors efficiently, a variety of indexing methods have been proposed. However, the performance of these indexing methods degrades dramatically with increasing dimensionality, a problem known as the dimensionality curse. To resolve it, dimensionality reduction methods have been proposed that map feature vectors in high-dimensional space into vectors in low-dimensional space before the data are indexed. This paper proposes a novel method for dimensionality reduction based on a function that approximates the Euclidean distance using the norm and angle components of a vector. First, we identify the causes of, and discuss basic solutions to, errors in angle approximation during the approximation of the Euclidean distance. Then, we propose a new method for dimensionality reduction that extracts a set of subvectors from a feature vector and maintains only the norm and the approximated angle for every subvector. Because the selection of a good reference vector is crucial for accurate approximation of the angle component, we present criteria for a good reference vector and propose a method for choosing one. We also define a novel distance function using the norm and angle components, and formally prove that it consistently lower-bounds the Euclidean distance, which implies that information retrieval with this function does not incur any false dismissals. Finally, the superiority of the proposed approach is verified via extensive experiments with synthetic and real-life data sets.
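The norm-and-angle lower bound can be demonstrated concretely: knowing only each vector's norm and its angle to a common reference vector, the law of cosines applied to the angle *difference* never overestimates the true Euclidean distance, because the actual angle between two vectors is at least the difference of their angles to the reference. This sketch shows the principle only, not the paper's subvector scheme.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def angle_to(v, ref):
    """Angle between a nonzero vector and the reference vector."""
    c = sum(a * b for a, b in zip(v, ref)) / (norm(v) * norm(ref))
    return math.acos(max(-1.0, min(1.0, c)))

def lower_bound_dist(q, x, ref):
    """Distance estimate from norms and angles to a shared reference:
    since angle(q, x) >= |angle(q, ref) - angle(x, ref)| and cosine is
    decreasing on [0, pi], this value never exceeds the true distance."""
    dtheta = angle_to(q, ref) - angle_to(x, ref)
    d2 = norm(q) ** 2 + norm(x) ** 2 - 2 * norm(q) * norm(x) * math.cos(dtheta)
    return math.sqrt(max(0.0, d2))
```

Because the estimate is a guaranteed lower bound, pruning candidates whose bound already exceeds the current search radius can never discard a true result, which is the no-false-dismissal property the abstract proves.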

17.
This paper studies attribute reduction for big data with high-dimensional attributes. Feature selection and subspace learning are two common approaches to attribute reduction: feature selection is well interpretable, while subspace learning achieves better classification performance, yet the two are usually applied independently. This paper therefore combines them and designs a new feature selection method: it uses two subspace-learning techniques, linear discriminant analysis (LDA) and locality preserving projection (LPP), to account for both the global and the local structure of the data, and adds a sparse regularization term to perform feature selection. Experimental comparisons based on classification accuracy, variance, and coefficient of variation show that, compared with the baseline algorithms, the proposed algorithm selects discriminative attributes more effectively and achieves good classification performance.

18.
Reducing the dimensionality of data is a challenging task in data mining and machine learning applications, where the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of learning algorithms. Feature selection is one of the dimensionality reduction techniques used to allow a better understanding of data and to improve the performance of other learning tasks. Although the selection of relevant features has been extensively studied in supervised learning, feature selection in the absence of class labels remains challenging. This paper proposes a novel method for unsupervised feature selection that selects features efficiently in a greedy manner. It first defines an effective criterion for unsupervised feature selection that measures the reconstruction error of the data matrix based on the selected subset of features, and then presents a novel algorithm that greedily minimizes this reconstruction error using an efficient recursive formula. Experiments on real data sets demonstrate the effectiveness of the proposed algorithm in comparison with state-of-the-art methods for unsupervised feature selection.

19.
With the rapid development of information techniques, the dimensionality of data in many application domains, such as text categorization and bioinformatics, is getting higher and higher. High-dimensional data may cause many problems for traditional learning algorithms in pattern classification, such as overfitting, poor performance, and low efficiency. Feature selection aims at reducing the dimensionality of data and providing discriminative features for pattern learning algorithms. Owing to its effectiveness, feature selection is now gaining increasing attention from a variety of disciplines, and many efforts have been made in this field. In this paper, we propose a new supervised feature selection method that picks important features using information criteria. Unlike other selection methods, ours not only takes into account both maximal relevance to the class labels and minimal redundancy with the selected features, but also works like feature clustering in an agglomerative way. To measure relevance and redundancy exactly, two information criteria, mutual information and the coefficient of relevance, are adopted. Performance evaluations on 12 benchmark data sets show that the proposed method achieves better performance than other popular feature selection methods in most cases.
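A compact sketch of the max-relevance / min-redundancy criterion that such methods build on, using mutual information over discrete features. The agglomerative-clustering aspect and the coefficient of relevance mentioned in the abstract are omitted here.

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Mutual information (in nats) between two discrete variables."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def mrmr_select(features, labels, k):
    """Greedy max-relevance / min-redundancy selection: add the feature
    maximizing MI with the labels minus the mean MI with the features
    already chosen.

    features: dict name -> list of discrete values, one per sample."""
    selected = []
    while len(selected) < k:
        def score(f):
            rel = mutual_info(features[f], labels)
            red = (sum(mutual_info(features[f], features[g]) for g in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        selected.append(max((f for f in features if f not in selected),
                            key=score))
    return selected
```

In the test below, feature `b` duplicates `a` exactly, so despite its high relevance the redundancy penalty pushes the independent feature `c` ahead of it.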

20.
Attribute selection is one of the important problems encountered in pattern recognition, machine learning, data mining, and bioinformatics. It refers to the problem of selecting those input attributes or features that are most effective for predicting the sample categories. Rough set theory has been shown to be successful at selecting relevant and nonredundant attributes from a given data set, but classical rough sets are unable to handle real-valued noisy features. This problem can be addressed by fuzzy-rough sets, a generalization of classical rough sets. A feature selection method is presented here, based on fuzzy-rough sets, that maximizes both the relevance and the significance of the selected features. The paper also presents different feature evaluation criteria for the attribute selection task using fuzzy-rough sets, such as dependency, relevance, redundancy, and significance. The performance of different rough set models is compared with that of existing feature evaluation indices based on the predictive accuracy of the nearest neighbor rule, support vector machines, and decision trees. The effectiveness of the fuzzy-rough-set-based attribute selection method, along with a comparison with existing feature evaluation indices and different rough set models, is demonstrated on a set of benchmark and microarray gene expression data sets.
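The classical rough-set dependency degree that fuzzy-rough sets generalize can be computed from equivalence classes: an object is in the positive region when every object indiscernible from it under the chosen attributes has the same decision value. This is the crisp version only; the fuzzy-rough extension replaces equivalence classes with fuzzy similarity classes.

```python
from collections import defaultdict

def dependency_degree(table, attrs, decision):
    """Classical rough-set dependency: the fraction of objects in the
    positive region, i.e. whose equivalence class under `attrs` is
    consistent in the decision attribute. A value of 1 means `attrs`
    fully determine the decision."""
    blocks = defaultdict(list)
    for row in table:
        blocks[tuple(row[a] for a in attrs)].append(row[decision])
    pos = sum(len(ds) for ds in blocks.values() if len(set(ds)) == 1)
    return pos / len(table)
```

Attribute reduction then searches for a minimal attribute set whose dependency equals that of the full set; relevance and significance criteria rank candidate attributes by how much they change this value.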
