Similar Literature
20 similar documents found
1.
李华  李德玉  王素格  张晶 《计算机应用》2015,35(7):1939-1944
To address the problem that the output kernel functions used in multi-label feature extraction do not accurately characterize the correlation among labels, two new output kernel construction methods are proposed on the basis of fully measuring label correlation. The first method transforms the multi-label data into single-label data and uses label sets to characterize label correlation; a new output kernel is then defined from the perspective of a loss function. The second method measures the pairwise correlation among labels with mutual information and constructs a new output kernel on that basis. Experimental results with two classifiers on three multi-label data sets show that, compared with the multi-label feature extraction method based on the original kernel, the method based on the loss-function output kernel performs best, improving the five evaluation metrics by about 10% on average; on the Yeast data set in particular, the Coverage metric drops by about 30%. The mutual-information-based output kernel ranks second, with an average improvement of about 5%. The results indicate that feature extraction based on the new output kernels extracts features more effectively, further simplifies the learning process of the classifier, and improves its generalization performance.
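As a rough illustration of the second construction only (not the paper's exact formulation), the sketch below estimates pairwise mutual information between binary label columns and assembles it into a label-by-label output kernel matrix; the toy data and the normalization are assumptions.

```python
# Sketch: build a label "output kernel" from pairwise mutual information.
# Illustrative simplification of the idea, not the paper's construction.
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_output_kernel(Y):
    """Y: (n_samples, n_labels) binary label matrix -> (n_labels, n_labels) kernel."""
    q = Y.shape[1]
    K = np.zeros((q, q))
    for i in range(q):
        for j in range(q):
            K[i, j] = mutual_info_score(Y[:, i], Y[:, j])
    # Scale to [0, 1]; the diagonal (self-information) dominates.
    return K / (K.max() + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = rng.integers(0, 2, size=(200, 5))   # toy multi-label matrix
    print(mi_output_kernel(Y))
```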

2.
In the multi-label learning framework, feature selection is an effective means of overcoming the curse of dimensionality and improving multi-label classifiers. A multi-label feature selection algorithm that fuses feature rankings is proposed. The algorithm first granulates the samples adaptively under each label and uses the resulting granules to construct the neighborhood mutual information between features and class labels. The neighborhood mutual information values are then sorted, so that one feature ranking is obtained for each class label. Finally, the independent feature rankings are fused by clustering into a single new ranking. Experimental results on four multi-label data sets and four evaluation metrics show that the proposed algorithm outperforms several popular multi-label dimensionality reduction methods.
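A minimal sketch of the rank-then-fuse idea, with two stand-ins for the paper's components: plain mutual information replaces neighborhood mutual information, and mean-rank (Borda-style) fusion replaces the clustering-based fusion.

```python
# Sketch: one feature ranking per label, fused into a single ranking.
# Plain MI and mean-rank fusion stand in for the paper's neighborhood MI
# and clustering-based fusion.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fused_feature_ranking(X, Y):
    """X: (n, d) features, Y: (n, q) binary labels -> feature indices, best first."""
    d, q = X.shape[1], Y.shape[1]
    ranks = np.zeros((q, d))
    for j in range(q):
        mi = mutual_info_classif(X, Y[:, j], random_state=0)
        # argsort of argsort gives each feature's rank (0 = most informative)
        ranks[j] = np.argsort(np.argsort(-mi))
    fused = ranks.mean(axis=0)            # lower mean rank = better overall
    return np.argsort(fused)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(150, 20))
    Y = (rng.random((150, 4)) < 0.3).astype(int)
    print(fused_feature_ranking(X, Y)[:5])  # top-5 fused features
```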

3.
Multi-label learning deals with data associated with a set of labels simultaneously. Dimensionality reduction is an important but challenging task in multi-label learning, and feature selection is an efficient dimensionality reduction technique that searches for an optimal feature subset preserving the most relevant information. In this paper, we propose an effective feature evaluation criterion for multi-label feature selection, called the neighborhood relationship preserving score. This criterion is inspired by similarity preservation, which is widely used in single-label feature selection. It evaluates each feature subset by measuring its capability to preserve the neighborhood relationship among samples. Unlike similarity preservation, we address the order of sample similarities, which expresses the neighborhood relationship among samples better than pairwise sample similarity alone. With this criterion, we design one ranking algorithm and one greedy algorithm for the feature selection problem. The proposed algorithms are validated on six publicly available data sets from a machine learning repository. Experimental results demonstrate their superiority over the compared state-of-the-art methods.
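One simplified reading of the criterion (my interpretation, not the authors' code): score a feature subset by how well the ordering of each sample's similarities in the feature subspace agrees with the ordering induced by the label space, here via Spearman rank correlation; the label-space reference and the averaging are assumptions.

```python
# Sketch: score a feature subset by how well it preserves the *order* of
# sample similarities defined in the label space (one possible reading of
# the neighborhood-relationship-preserving idea, not the authors' code).
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

def order_preserving_score(X, Y, feature_idx):
    Dx = cdist(X[:, feature_idx], X[:, feature_idx])   # distances in the subspace
    Dy = cdist(Y, Y, metric="hamming")                  # distances in the label space
    n = X.shape[0]
    scores = []
    for i in range(n):
        mask = np.arange(n) != i
        rho, _ = spearmanr(Dx[i, mask], Dy[i, mask])    # agreement of orderings
        if not np.isnan(rho):
            scores.append(rho)
    return float(np.mean(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 15))
    Y = (rng.random((100, 4)) < 0.3).astype(int)
    print(order_preserving_score(X, Y, feature_idx=[0, 3, 7]))
```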

4.
In classification problems with hierarchical label structures, the target function must assign labels that are hierarchically organized, and it can be used either for single-label (one label per instance) or multi-label (more than one label per instance) classification. In parallel to these developments, semi-supervised learning has emerged as a solution to the problems found in the standard supervised learning procedure used by most classification algorithms: it combines labelled and unlabelled data during the training phase. Some semi-supervised methods have been proposed for single-label classification, but very little work has been done in the context of multi-label hierarchical classification. Therefore, this paper proposes a new method for supervised hierarchical multi-label classification, called HMC-RAkEL. Additionally, we propose the use of semi-supervised learning, namely self-training, in hierarchical multi-label classification, leading to three new methods, called HMC-SSBR, HMC-SSLP and HMC-SSRAkEL. In order to validate the feasibility of these methods, an empirical analysis is conducted, comparing the proposed methods with their corresponding supervised versions. The main aim of this analysis is to observe whether the semi-supervised methods proposed in this paper achieve performance similar to that of the corresponding supervised versions.

5.
In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods: the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the label set, and the final labels for each example are determined by aggregating the predictions of all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependency by themselves. An experimental study using decision trees, a kernel method and Naïve Bayes as base learners shows the potential of the proposed approach to improve multi-label classification performance.
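The binary relevance decomposition itself is easy to sketch; below is a minimal version with logistic regression as an arbitrary base learner. It implements only the baseline decomposition, not the paper's extension that lets the classifiers discover label dependency.

```python
# Sketch: plain binary relevance -- one independent binary classifier per label.
# The paper's dependency-discovery extension is NOT implemented here.
import numpy as np
from sklearn.linear_model import LogisticRegression

class BinaryRelevance:
    def __init__(self):
        self.models = []

    def fit(self, X, Y):
        # One binary classifier per label column.
        self.models = [LogisticRegression(max_iter=1000).fit(X, Y[:, j])
                       for j in range(Y.shape[1])]
        return self

    def predict(self, X):
        # Stack each binary classifier's prediction into a label matrix.
        return np.column_stack([m.predict(X) for m in self.models])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 10))
    Y = (X[:, :3] > 0).astype(int)          # 3 toy labels tied to 3 features
    print(BinaryRelevance().fit(X, Y).predict(X[:5]))
```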

6.
Nowadays, multi-label classification methods are of increasing interest in areas such as text categorization, image annotation and protein function classification. Due to the correlation among labels, traditional single-label classification methods are not directly applicable to the multi-label classification problem. This paper presents two novel multi-label classification algorithms based on variable precision neighborhood rough sets, called multi-label classification using rough sets (MLRS) and MLRS using local correlation (MLRS-LC). The proposed algorithms consider two important factors that affect the accuracy of prediction, namely the correlation among labels and the uncertainty within the mapping between the feature space and the label space. MLRS takes a global view of label correlation, while MLRS-LC deals with label correlation at the local level. Given a new instance, MLRS determines its location and then computes the label probabilities according to that location. MLRS-LC first identifies the topic of the instance and then computes the probability of the instance belonging to each class within that topic. A series of experiments on seven multi-label datasets shows that MLRS and MLRS-LC achieve promising performance when compared with some well-known multi-label learning algorithms.

7.
In multi-label learning, each sample can be tagged with several labels, so the framework has broader applicability and attracts more attention than single-label learning, and label correlation can be exploited to improve algorithm performance. Traditional feature selection algorithms are no longer suitable in this setting: on the one hand, their evaluation criteria are designed for a single label, whereas multi-label learning optimizes multiple labels simultaneously; on the other hand, correlation information exists among different labels. It is therefore desirable to design a feature selection algorithm that handles multi-label problems and can extract and exploit the correlation information among labels. By designing an optimal objective loss function, a multi-label feature selection algorithm based on the exponential loss margin is proposed. Through sample similarity, the algorithm fuses the information of the feature space and the label space, is independent of any specific classification algorithm or transformation strategy, and achieves classification performance superior to other feature selection algorithms. The correctness and good performance of the proposed algorithm are verified on real-world data sets.

8.
吕佳 《计算机应用》2012,32(12):3308-3310
Semi-supervised multi-label classification problems are usually decomposed into several single-label semi-supervised binary classification problems, which ignores the intrinsic relations among classes. To address this, a semi-supervised multi-label classification method based on local learning is proposed. Instead of solving several single-label semi-supervised binary classification problems, the method takes a holistic approach: using a graph-based formulation, it introduces a sample-based local learning regularization term and a class-based Laplacian regularization term to build a regularization framework for the problem. Experimental results show that the proposed algorithm achieves high recall and precision.

9.
In traditional single-label mining, each sample belongs to exactly one label and the labels are mutually exclusive. In multi-label learning, a sample may correspond to multiple labels and the labels are often correlated, so the study of label correlation has gradually become a hot topic in multi-label learning research. First, to suit the big-data environment, the traditional association rule mining algorithm Apriori is parallelized into a Hadoop-based algorithm, Apriori_ING, in which each node independently generates candidate itemsets, prunes them and counts supports, fully exploiting parallelization. Label sets are then generated from the frequent itemsets and association rules produced by Apriori_ING, and an inference-engine-based label set generation algorithm, IETG, is proposed. Finally, the label sets are applied to multi-label learning, yielding the multi-label learning algorithm FreLP. FreLP uses association rules to generate label sets, decomposes the original label set into several subsets, and then trains classifiers with the LP algorithm. FreLP is compared with existing multi-label learning algorithms; the results show that the proposed algorithm achieves better results under different evaluation metrics.
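The LP step at the core of FreLP can be illustrated with a small sketch: each label subset's combination of values is encoded as a single multi-class target and one multi-class classifier is trained per subset. How the subsets are found (Apriori_ING / IETG) is not reproduced here; the subsets below are hand-picked, and the decision tree is an arbitrary base learner.

```python
# Sketch: the Label Powerset (LP) step applied to each label subset.
# The association-rule machinery that produces the subsets is not shown.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def lp_fit_predict(X_train, Y_train, X_test, label_subsets):
    """Train one multi-class classifier per label subset; return predicted labels."""
    n_test, q = X_test.shape[0], Y_train.shape[1]
    Y_pred = np.zeros((n_test, q), dtype=int)
    for subset in label_subsets:
        # Encode each combination of label values in the subset as one class.
        combos = ["".join(map(str, row)) for row in Y_train[:, subset]]
        clf = DecisionTreeClassifier(random_state=0).fit(X_train, combos)
        for i, combo in enumerate(clf.predict(X_test)):
            Y_pred[i, subset] = [int(c) for c in combo]
    return Y_pred

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.normal(size=(300, 8))
    Y = (X[:, :4] > 0).astype(int)
    print(lp_fit_predict(X[:250], Y[:250], X[250:255], [[0, 1], [2, 3]]))
```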

10.
Multi-label learning is an effective framework for learning with objects that have multiple semantic labels, and has been successfully applied to many real-world tasks. In contrast with traditional single-label learning, the cost of labeling a multi-label example is rather high, so it becomes an important task to train an effective multi-label learning model with as few labeled examples as possible. Active learning, which actively selects the most valuable data to query their labels, is the most important approach to reducing labeling cost. In this paper, we propose a novel approach MADM for batch mode multi-label active learning. On one hand, MADM exploits representativeness and diversity in both the feature and label space by matching the distribution between labeled and unlabeled data. On the other hand, it tends to query predicted positive instances, which are expected to be more informative than negative ones. Experiments on benchmark datasets demonstrate that the proposed approach can reduce the labeling cost significantly.
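One ingredient of this kind of batch selection, measuring how well a candidate labeled pool matches the unlabeled distribution, can be sketched with a kernel maximum mean discrepancy (MMD); the RBF kernel, its bandwidth, and this use of the score are my assumptions, not MADM's actual objective.

```python
# Sketch: RBF-kernel MMD between a candidate batch and the unlabeled pool,
# one possible way to score "representativeness" in batch-mode selection.
# This is not MADM's actual objective, only an illustration.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd2(A, B, gamma=0.5):
    """Squared MMD between sample sets A and B under an RBF kernel."""
    return (rbf_kernel(A, A, gamma).mean()
            + rbf_kernel(B, B, gamma).mean()
            - 2.0 * rbf_kernel(A, B, gamma).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    unlabeled = rng.normal(size=(500, 5))
    batch_a = unlabeled[:20]                       # drawn from the same pool
    batch_b = rng.normal(loc=3.0, size=(20, 5))    # shifted, unrepresentative batch
    print(mmd2(batch_a, unlabeled), mmd2(batch_b, unlabeled))  # first is smaller
```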

11.
In multi-label learning, dimensionality reduction is an important and challenging task, and feature selection is an efficient dimensionality reduction technique. A multi-label label-specific feature selection method based on neighborhood rough sets is proposed; it theoretically guarantees that the selected label-specific features are strongly correlated with the corresponding labels, thereby improving the reduction. First, the method applies the reduction algorithm of rough set theory to remove redundant attributes and obtain each label's specific features while keeping the classification ability unchanged. Then, based on the concepts of neighborhood accuracy and neighborhood roughness, the computation of dependency and significance under neighborhood rough sets is redefined, and the relevant properties of the model are discussed. Finally, a neighborhood-rough-set-based multi-label label-specific feature selection model is constructed and a feature selection algorithm for multi-label classification tasks is implemented. Simulation experiments on several public data sets show that the algorithm is effective.

12.
Directly applying single-label classification methods to multi-label learning problems substantially limits both performance and speed due to the imbalance, dependence and high dimensionality of the given label matrix. Existing methods either ignore these three problems or reduce one at the price of aggravating another. In this paper, we propose a {0,1} label matrix compression and recovery method termed "compressed labeling (CL)" to simultaneously solve, or at least reduce, these three problems. CL first compresses the original label matrix to improve balance and independence by preserving the signs of its Gaussian random projections. Afterward, we directly apply popular binary classification methods (e.g., support vector machines) to each new label. A fast recovery algorithm is developed to recover the original labels from the predicted new labels. In the recovery algorithm, a "labelset distilling method" is designed to extract distilled labelsets (DLs), i.e., the frequently appearing label subsets, from the original labels via recursive clustering and subtraction. Given a distilled and an original label vector, we discover that the signs of their random projections have an explicit joint distribution that can be quickly computed from a geometric inference. Based on this observation, the original label vector is exactly determined after performing a series of Kullback-Leibler divergence based hypothesis tests on the distribution of the new labels. CL significantly improves the balance of the training samples and reduces the dependence between different labels. Moreover, it accelerates the learning process by training fewer binary classifiers for the compressed labels, and makes use of label dependence via DL-based tests. Theoretically, we prove recovery bounds for CL, which verify its effectiveness for label compression and the improvement in multi-label classification performance brought by the label correlations preserved in DLs. We show the effectiveness, efficiency and robustness of CL via 5 groups of experiments on 21 datasets from text classification, image annotation, scene classification, music categorization, genomics and web page classification.
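The compression step itself is easy to sketch: project the {0,1} label matrix with a Gaussian random matrix and keep only the signs. The recovery machinery (distilled labelsets, KL-divergence hypothesis tests) is far more involved and is not shown; the toy sizes below are arbitrary.

```python
# Sketch: the label-compression step of CL -- signs of Gaussian random
# projections of the {0,1} label matrix. The recovery stages are not shown.
import numpy as np

def compress_labels(Y, k, seed=0):
    """Y: (n, q) binary label matrix -> (n, k) matrix of sign-compressed labels."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(Y.shape[1], k))     # Gaussian random projection matrix
    return np.sign(Y @ G), G                 # keep G for the recovery stage

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    Y = (rng.random((6, 10)) < 0.3).astype(int)
    Z, G = compress_labels(Y, k=4)
    print(Z)   # each column becomes one (more balanced) binary learning problem
```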

13.
In this paper, we propose a novel framework for multi-label classification, which directly models the dependencies among labels using a Bayesian network. Each node of the Bayesian network represents a label, and the links and conditional probabilities capture the probabilistic dependencies among multiple labels. We employ our Bayesian network structure learning method, which is guaranteed to find the globally optimal structure, independent of the initial structure. After structure learning, maximum likelihood estimation is used to learn the conditional probabilities among nodes. Any current multi-label classifier can be employed to obtain the measurements of the labels. Then, using the learned Bayesian network, the true labels are inferred by combining the relationships among labels with the label estimates obtained from a current multi-labeling method. We further extend the proposed multi-label classification method to deal with incomplete label assignments; the structural Expectation-Maximization algorithm is adopted for both structure and parameter learning. Experimental results on two benchmark multi-label databases show that our approach can effectively capture the co-occurrence and mutual-exclusion relations among labels. The relations modeled by our approach are more flexible than the pairwise relations or fixed label subsets captured by current multi-label learning methods. Thus, our approach improves performance over current multi-label classifiers. Furthermore, our approach demonstrates its robustness to incomplete multi-label classification.

14.
Multi-label data are ubiquitous in the real world, and multi-label feature selection is an important preprocessing step in multi-label learning. Several multi-label feature selection algorithms have been proposed on the basis of the fuzzy rough set model, but most of them ignore the co-occurrence property among labels. To address this, the similarity between samples under the label set is evaluated through the co-occurrence relations among sample labels, the fuzzy mutual information between features and labels is defined from this relation, and a multi-label feature selection algorithm, LC-FS, is designed by combining the maximum-relevance and minimum-redundancy principle. Experiments on five public data sets demonstrate the effectiveness of the proposed algorithm.

15.
In multi-label classification, a label may be determined only by some of its own specific attributes, which are called label-specific features. Using label-specific features for multi-label classification can effectively prevent useless features from degrading the constructed classification model. However, label-specific feature algorithms extract important features only from the label perspective and ignore extracting important labels from the feature perspective; in fact, if certain labels are attended to in advance from the feature perspective, their specific features are easier to obtain. Based on this, the paper proposes a…

16.
gMLC: a multi-label feature selection framework for graph classification
Graph classification is of critical importance in a wide variety of applications, e.g., drug activity prediction and toxicology analysis. Current research on graph classification focuses on single-label settings. However, in many applications, each graph can be assigned a set of multiple labels simultaneously, and extracting good features using the multiple labels of the graphs becomes an important step before graph classification. In this paper, we study the problem of multi-label feature selection for graph classification and propose a novel solution, called gMLC, to efficiently search for optimal subgraph features for graph objects with multiple labels. Different from existing feature selection methods in vector spaces, which assume the feature set is given, we perform multi-label feature selection for graph data in a progressive way, together with the subgraph feature mining process. We derive an evaluation criterion to estimate the dependence between subgraph features and the multiple labels of graphs. Then, a branch-and-bound algorithm is proposed to efficiently search for optimal subgraph features by judiciously pruning the subgraph search space using the multiple labels. Empirical studies demonstrate that our feature selection approach can effectively boost multi-label graph classification performance and is more efficient thanks to the label-based pruning of the subgraph search space.

17.
In multi-label learning, discovering and exploiting the dependencies among labels can improve the performance of learning algorithms. Building on the classifier chain model, a targeted multi-label classification algorithm is proposed. The algorithm first quantifies the degree of dependency among labels and constructs an explicit tree-structured dependency among them, which reduces the randomness of the dependencies used in the classifier chain algorithm and generalizes the linear dependency into a tree dependency. To fully exploit the mutual dependencies among labels, ensemble learning is used to further learn and combine multiple different label dependency trees. Experimental results show that, compared with classifier chains and other algorithms, the proposed algorithm achieves better classification performance after ensemble learning and learns the dependencies among labels more effectively.
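A minimal sketch of building an explicit tree over labels from pairwise dependency, assuming mutual information as the dependency measure and a maximum spanning tree as the structure; the chain training along the tree and the ensemble stage are omitted.

```python
# Sketch: build a tree-structured label dependency from pairwise mutual
# information (maximum spanning tree). Chain training and ensembling omitted.
import numpy as np
from sklearn.metrics import mutual_info_score
from scipy.sparse.csgraph import minimum_spanning_tree

def label_dependency_tree(Y):
    """Return (label_i, label_j) edges of a maximum-MI spanning tree."""
    q = Y.shape[1]
    mi = np.zeros((q, q))
    for i in range(q):
        for j in range(i + 1, q):
            mi[i, j] = mutual_info_score(Y[:, i], Y[:, j])
    # Maximum spanning tree = minimum spanning tree on negated weights.
    tree = minimum_spanning_tree(-mi).toarray()
    return [(i, j) for i in range(q) for j in range(q) if tree[i, j] != 0]

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    base = (rng.random((300, 1)) < 0.5).astype(int)
    Y = np.hstack([base, base ^ (rng.random((300, 1)) < 0.1),  # correlated pair
                   (rng.random((300, 2)) < 0.4).astype(int)])
    print(label_dependency_tree(Y))
```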

18.
In multi-label classification, each sample can be associated with several label classes simultaneously, and some of the labels may be correlated; making full use of these label correlations can improve classification performance. This paper therefore mines label correlations with frequent itemsets of labels and proposes a feature selection algorithm that improves the neighborhood-rough-set-based multi-label attribute reduction algorithm; the training samples are further clustered by feature similarity, and label correlations on the local samples are combined for attribute reduction and classification. Experiments on five multi-label classification data sets verify the effectiveness of the proposed algorithm.

19.
Wang  Min  Feng  Tingting  Shan  Zhaohui  Min  Fan 《Applied Intelligence》2022,52(10):11131-11146

In multi-label learning, each instance is simultaneously associated with multiple class labels. A large number of labels in an application exacerbates the problem of label scarcity. An interesting issue concerns how to query as few labels as possible while obtaining satisfactory classification accuracy. For this purpose, we propose the attribute and label distribution driven multi-label active learning (MCAL) algorithm. MCAL considers the characteristics of both attributes and labels to enable the selection of critical instances based on different measures. Representativeness is measured by the probability density function obtained by non-parametric estimation, while informativeness is measured by the bilateral softmax predicted entropy. Diversity is measured by the distance metric among instances, and richness is measured by the number of softmax predicted labels. We describe experiments performed on eight benchmark datasets and eleven real Yahoo webpage datasets. The results verify the effectiveness of MCAL and its superiority over state-of-the-art multi-label algorithms and multi-label active learning algorithms.
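The informativeness side of such a strategy, querying the instances whose predicted label distribution is most uncertain, can be sketched as below; the probability model and batch size are assumptions, and MCAL's density, diversity and richness terms are not reproduced.

```python
# Sketch: pick the unlabeled instances with the highest predicted-label
# entropy. Only the "informativeness" ingredient is shown; MCAL's density,
# diversity and richness measures are not reproduced.
import numpy as np

def query_by_entropy(P, batch_size=5):
    """P: (n_unlabeled, n_labels) predicted probabilities of label relevance."""
    eps = 1e-12
    ent = -(P * np.log(P + eps) + (1 - P) * np.log(1 - P + eps)).sum(axis=1)
    return np.argsort(-ent)[:batch_size]       # indices of the most uncertain

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    P = rng.random((50, 6))                     # stand-in classifier outputs
    print(query_by_entropy(P))
```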


20.
Feature selection has long been an important problem in machine learning and data mining. In multi-label learning tasks, each sample in a data set is associated with multiple labels, and the labels are usually correlated with one another. To reduce feature dimensionality and improve classification performance in multi-label high-dimensional data analysis, researchers have proposed multi-label feature selection methods. This paper systematically reviews the research progress of multi-label feature selection. After introducing multi-label classification and its evaluation criteria, the three categories of multi-label feature selection methods, namely filter, wrapper and embedded algorithms, are analyzed in detail, and prospects for future research on multi-label feature selection are given.
