Similar Articles
20 similar articles found.
1.
赵静  韩京宇  钱龙  毛毅 《计算机应用》2022,42(6):1892-1897
Electrocardiogram (ECG) data usually cover multiple conditions, making ECG diagnosis a typical multi-label classification problem. Among multi-label classification methods, the RAKEL algorithm randomly decomposes the label set into several subsets of size k and trains an LP (Label Powerset) classifier on each. However, because label correlations are not fully considered, some label combinations in the LP classifiers end up with very few corresponding samples, which hurts predictive performance. To fully exploit label correlations, a Bayesian-network-based RAKEL algorithm, BN-RAKEL, is proposed. First, a Bayesian network is used to discover correlations between labels and determine candidate label subsets. Then, for each label, an information-gain-based feature selection algorithm determines its optimal feature space, and the similarity between the optimal feature spaces of the labels in each candidate subset is used to measure how strongly they are correlated, yielding the final strongly correlated label subsets. Finally, an LP classifier is trained on the optimal feature space of each label subset. On a real ECG dataset, compared with Multi-Label K-Nearest Neighbor (ML-KNN), RAKEL, CC, and the FP-Growth-based RAKEL algorithm FI-RAKEL, the proposed algorithm improves recall and F-score by at least 3.6 and 2.3 percentage points, respectively. The experimental results show that BN-RAKEL has good predictive performance and effectively improves the accuracy of ECG diagnosis.
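To make the LP (Label Powerset) step concrete, the sketch below is a hypothetical illustration of a plain RAKEL-style ensemble, not the paper's BN-RAKEL implementation: random k-labelsets are drawn, each subset's label combination is encoded as one multi-class target, and per-label votes are averaged. It assumes numpy and scikit-learn are available.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression

# Toy multi-label data: Y is an (n_samples, n_labels) binary matrix.
X, Y = make_multilabel_classification(n_samples=200, n_features=20,
                                      n_classes=6, n_labels=2, random_state=0)
rng = np.random.default_rng(0)
k, n_models = 3, 4
votes = np.zeros(Y.shape, dtype=float)
counts = np.zeros(Y.shape[1], dtype=float)

for _ in range(n_models):
    subset = rng.choice(Y.shape[1], size=k, replace=False)  # random k-labelset
    # LP step: encode each combination of the k labels as one multi-class target.
    powers = 2 ** np.arange(k)
    y_lp = Y[:, subset] @ powers
    clf = LogisticRegression(max_iter=1000).fit(X, y_lp)
    pred = clf.predict(X)
    # Decode the multi-class prediction back into k binary labels and vote.
    decoded = (pred[:, None] // powers) % 2
    votes[:, subset] += decoded
    counts[subset] += 1

Y_hat = (votes / np.maximum(counts, 1)) > 0.5  # majority vote per label
```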

2.
The goal in multi-label classification is to tag a data point with the subset of relevant labels from a pre-specified set. Given a set of L labels, a data point can be tagged with any of the 2^L possible subsets. The main challenge therefore lies in optimising over this exponentially large label space subject to label correlations. Our objective, in this paper, is to design efficient algorithms for multi-label classification when the labels are densely correlated. In particular, we are interested in the zero-shot learning scenario where the label correlations on the training set might be significantly different from those on the test set. We propose a max-margin formulation where we model prior label correlations but do not incorporate pairwise label interaction terms in the prediction function. We show that the problem complexity can be reduced from exponential to linear while modelling dense pairwise prior label correlations. By incorporating relevant correlation priors we can handle mismatches between the training and test set statistics. Our proposed formulation generalises the effective 1-vs-All method and we provide a principled interpretation of the 1-vs-All technique. We develop efficient optimisation algorithms for our proposed formulation. We adapt the Sequential Minimal Optimisation (SMO) algorithm to multi-label classification and show that, with some book-keeping, we can reduce the training time from being super-quadratic to almost linear in the number of labels. Furthermore, by effectively re-utilizing the kernel cache and jointly optimising over all variables, we can be orders of magnitude faster than the competing state-of-the-art algorithms. We also design a specialised algorithm for linear kernels based on dual co-ordinate ascent with shrinkage that lets us effortlessly train on a million points with a hundred labels.
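For reference, the 1-vs-All baseline that this formulation generalises can be sketched in a few lines with scikit-learn's linear SVM. This is an assumed illustration of the baseline only, not the authors' SMO or dual coordinate ascent solver.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# One independent max-margin (linear SVM) classifier per label:
# the 1-vs-All baseline that the correlation-aware formulation generalises.
X, Y = make_multilabel_classification(n_samples=300, n_features=30,
                                      n_classes=10, random_state=0)
model = OneVsRestClassifier(LinearSVC(C=1.0, max_iter=5000)).fit(X, Y)
Y_hat = model.predict(X)  # binary indicator matrix, one column per label
```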

3.
In classification problems with hierarchical label structures, the target function must assign labels that are hierarchically organized, and it can be used either for single-label (one label per instance) or multi-label (more than one label per instance) classification problems. In parallel to these developments, the idea of semi-supervised learning has emerged as a solution to the problems found in a standard supervised learning procedure (used in most classification algorithms). It combines labelled and unlabelled data during the training phase. Some semi-supervised methods have been proposed for single-label classification. However, very little effort has been made in the context of multi-label hierarchical classification. Therefore, this paper proposes a new method for supervised hierarchical multi-label classification, called HMC-RAkEL. Additionally, we propose the use of semi-supervised learning, self-training, in hierarchical multi-label classification, leading to three new methods, called HMC-SSBR, HMC-SSLP and HMC-SSRAkEL. In order to validate the feasibility of these methods, an empirical analysis is conducted, comparing the proposed methods with their corresponding supervised versions. The main aim of this analysis is to observe whether the semi-supervised methods proposed in this paper perform similarly to their supervised counterparts.

4.
Liu  Haiyang  Wang  Zhihai  Sun  Yange 《Neural computing & applications》2020,32(22):16763-16774

Exploiting dependencies between labels is key to improving the performance of multi-label classification. In this paper, we divide the methods for exploiting label dependence into two groups according to how the problem is transformed: label grouping methods and feature space extending methods. For feature space extending methods, the common problem is how to measure the dependencies between labels and select proper labels to add to the original feature space. Therefore, we propose a ReliefF-based pruning model for multi-label classification (ReliefF-based stacking, RFS). RFS measures the dependencies between labels from a feature selection perspective and then adds the most relevant labels to the original feature space. Experimental results on 9 multi-label benchmark datasets show that RFS is more effective than other advanced multi-label classification algorithms.
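The feature-space-extending idea can be sketched as generic label stacking. This is an assumed simplification: RFS would additionally use ReliefF to prune which label estimates are appended, a step omitted here.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression

X, Y = make_multilabel_classification(n_samples=300, n_features=25,
                                      n_classes=6, random_state=0)
# Level 1: one classifier per label on the original features.
level1 = [LogisticRegression(max_iter=1000).fit(X, Y[:, j])
          for j in range(Y.shape[1])]
meta = np.column_stack([clf.predict_proba(X)[:, 1] for clf in level1])

# Level 2: extend the feature space with the label estimates.
# RFS would instead keep only the labels judged most relevant by ReliefF.
X_ext = np.hstack([X, meta])
level2 = [LogisticRegression(max_iter=1000).fit(X_ext, Y[:, j])
          for j in range(Y.shape[1])]
Y_hat = np.column_stack([clf.predict(X_ext) for clf in level2])
```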


5.
The RAkEL (Random k-labelsets) algorithm randomly selects label subsets from the original label set and trains the corresponding multi-label sub-classifiers with the LP (Label Powerset) algorithm. Because the labels are chosen at random, the predictive performance of the LP sub-classifiers suffers. This paper selects label pairs based on label co-occurrence to train the LP classifiers and proposes the PwRakel (Pairwise Random k-labelsets) algorithm. By mining label correlations to extend the training set, the algorithm effectively improves classification performance. Experimental results show that the proposed algorithm achieves higher classification accuracy than RAkEL and other algorithms.
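A minimal sketch of the co-occurrence signal PwRakel relies on (hypothetical code, not the paper's implementation) ranks label pairs by how often they appear together; the top pairs would then seed the pairwise LP classifiers.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification

# Rank label pairs by co-occurrence frequency.
X, Y = make_multilabel_classification(n_samples=300, n_features=20,
                                      n_classes=8, random_state=0)
co = Y.T @ Y                   # co[i, j] = number of samples carrying both labels
np.fill_diagonal(co, 0)
i, j = np.triu_indices(Y.shape[1], k=1)
order = np.argsort(-co[i, j])  # most frequently co-occurring pairs first
top_pairs = list(zip(i[order[:5]], j[order[:5]]))
print(top_pairs)               # candidate pairs for the pairwise LP classifiers
```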

6.
Wang  Min  Feng  Tingting  Shan  Zhaohui  Min  Fan 《Applied Intelligence》2022,52(10):11131-11146

In multi-label learning, each instance is simultaneously associated with multiple class labels. A large number of labels in an application exacerbates the problem of label scarcity. An interesting issue concerns how to query as few labels as possible while obtaining satisfactory classification accuracy. For this purpose, we propose the attribute and label distribution driven multi-label active learning (MCAL) algorithm. MCAL considers the characteristics of both attributes and labels to enable the selection of critical instances based on different measures. Representativeness is measured by the probability density function obtained by non-parametric estimation, while informativeness is measured by the bilateral softmax predicted entropy. Diversity is measured by the distance metric among instances, and richness is measured by the number of softmax predicted labels. We describe experiments performed on eight benchmark datasets and eleven real Yahoo webpage datasets. The results verify the effectiveness of MCAL and its superiority over state-of-the-art multi-label algorithms and multi-label active learning algorithms.


7.
In existing feature selection algorithms, a common strategy is to use relevance criteria to select the features most strongly correlated with the label set. This strategy is not necessarily optimal, because features weakly correlated with the label set may be the key features that determine certain class labels. Based on this assumption, a multi-label feature selection algorithm based on local subspaces is proposed. The algorithm first uses the mutual information between features and the label set to obtain a feature sequence ranked from high to low importance, then divides the ranked feature space into several local subspaces and sets a sampling ratio in each subspace to select features with low redundancy, and finally merges the feature subsets of the subspaces into a reasonable overall feature subset. Experiments on 6 datasets and 4 evaluation metrics show that the proposed algorithm outperforms several general multi-label feature selection algorithms.
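The importance ranking and subspace sampling can be approximated as follows. This is an assumed sketch using scikit-learn's mutual_info_classif averaged over labels; the subspace count and sampling ratios are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.feature_selection import mutual_info_classif

# Rank features by their average mutual information with the labels.
X, Y = make_multilabel_classification(n_samples=300, n_features=40,
                                      n_classes=6, random_state=0)
mi = np.mean([mutual_info_classif(X, Y[:, j], random_state=0)
              for j in range(Y.shape[1])], axis=0)
ranking = np.argsort(-mi)          # feature indices, most important first

# Split the ranking into local subspaces and sample more heavily from the top.
subspaces = np.array_split(ranking, 4)
ratios = [0.6, 0.4, 0.2, 0.1]
selected = np.concatenate([s[: max(1, int(len(s) * r))]
                           for s, r in zip(subspaces, ratios)])
```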

8.
In the multi-label learning framework, feature selection is an effective means of alleviating the curse of dimensionality and improving multi-label classifiers. A multi-label feature selection algorithm that fuses feature rankings is proposed. The algorithm first granulates the samples adaptively under each label in order to construct the neighborhood mutual information between features and class labels. Second, the resulting neighborhood mutual information is ranked, so that a feature ranking is obtained for every class label. Finally, the independent feature rankings are fused by clustering into a single new feature ranking. Experimental results on 4 multi-label datasets and 4 evaluation metrics show that the proposed algorithm outperforms several currently popular multi-label dimensionality reduction methods.

9.
Multi-label classification exhibits several challenges not present in the binary case. The labels may be interdependent, so that the presence of a certain label affects the probability of other labels' presence. Thus, exploiting dependencies among the labels could be beneficial for the classifier's predictive performance. Surprisingly, only a few of the existing algorithms address this issue directly by identifying dependent labels explicitly from the dataset. In this paper we propose new approaches for identifying and modeling existing dependencies between labels. One principal contribution of this work is a theoretical confirmation of the reduction in sample complexity that is gained from unconditional dependence. Additionally, we develop methods for identifying conditionally and unconditionally dependent label pairs; clustering them into several mutually exclusive subsets; and finally, performing multi-label classification incorporating the discovered dependencies. We compare these two notions of label dependence (conditional and unconditional) and evaluate their performance on various benchmark and artificial datasets. We also compare and analyze labels identified as dependent by each of the methods. Moreover, we define an ensemble framework for the new methods and compare it to existing ensemble methods. An empirical comparison of the new approaches to existing baseline and state-of-the-art methods on 12 benchmark datasets demonstrates that in many cases the proposed single-classifier and ensemble methods outperform many multi-label classification algorithms. Perhaps surprisingly, we discover that the weaker notion of unconditional dependence plays the decisive role.

10.
Fisher Score (FS) is a fast and efficient measure of a feature's discriminative power, but the traditional FS can neither be applied directly to multi-label learning nor handle the deviation between the computed class center and the true class center caused by extreme samples. An FS-based multi-label feature selection algorithm combining center shifting and label-set correlation is proposed. For each class under each label, the extreme points are located, and new samples are screened by the distance from the extreme point to the class center multiplied by a radius coefficient, yielding a more densely distributed sample set on which the FS score of each feature is computed. The algorithm traverses every label in the label set of all samples and, during the traversal, adaptively assigns label weights to samples that carry more labels, obtaining the average FS score of each feature. Features are then ranked by their FS scores and filtered to produce the target subset, completing feature selection. Parameter analysis and performance comparisons on 5 metrics over 8 public multi-label text datasets show that the algorithm is effective and robust, outperforming multi-label feature selection algorithms such as MLNB, MLRF, PMU, and MLACO on most metrics.
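A minimal version of the underlying Fisher Score ranking, averaged over labels, is sketched below. The paper's center shifting and label weighting are omitted; this is an assumed illustration only.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification

def fisher_score(X, y):
    """Classical Fisher Score of each feature for a binary label y."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in (0, 1):
        Xc = X[y == c]
        if len(Xc) == 0:
            continue
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

# Average the per-label scores to obtain one ranking for the multi-label problem.
X, Y = make_multilabel_classification(n_samples=300, n_features=30,
                                      n_classes=5, random_state=0)
scores = np.mean([fisher_score(X, Y[:, j]) for j in range(Y.shape[1])], axis=0)
top_features = np.argsort(-scores)[:10]
```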

11.
Currently a consensus on multi-label classification is to exploit label correlations for performance improvement. Many approaches build one classifier for each label based on the one-versus-all strategy, and integrate classifiers by enforcing a regularization term on the global weights to exploit label correlations. However, this strategy might be suboptimal since only part of the global weights may support that assumption. This paper proposes clustered intrinsic label correlations for multi-label classification (CILC), which extends the traditional support vector machine to the multi-label setting. The predictive function of each classifier consists of two components: one is the common information shared by all labels, and the other is a label-specific component that depends strongly on the corresponding label. The label-specific component, representing the intrinsic label correlations, is regularized by a clustered structure assumption. The appealing features of the proposed method are that it separates the common information and the label-specific information of the labels and utilizes clustered structures among labels represented by the label-specific parts. Practical multi-label classification problems such as text categorization, image annotation and sentiment analysis can be solved directly by the proposed CILC method. Experiments across five data sets validate the effectiveness of CILC, compared with six well-established multi-label classification algorithms.

12.
Deep-learning-based multi-label text classification methods suffer from two main shortcomings: they do not learn textual information at multiple granularities, and they do not exploit the constraint relations between labels. To address these problems, a multi-label text classification method enhanced by multi-granularity information relations is proposed. First, texts and labels are embedded into the same space via joint embedding, and a pretrained BERT model is used to obtain latent vector representations of texts and labels. Then, three multi-granularity enhancement modules are built: a document-level shallow label-attention classification module, a word-level deep label-attention classification module, and an auxiliary module that matches constraint relations between labels. The first two modules perform multi-granularity learning on the shared feature representations: shallow interaction between document-level text information and label information, and deep interaction between word-level text information and label information. The auxiliary module improves classification performance by learning the relations between labels. Finally, the proposed method is compared with current mainstream multi-label text classification algorithms on 3 representative datasets. The results show that it achieves the best performance on the main metrics Micro-F1, Macro-F1, nDCG@k, and P@k.

13.
张要  马盈仓  朱恒东  李恒  陈程 《计算机工程》2022,48(3):90-99+106
Multi-label feature selection algorithms usually assume a certain relation between the data and the labels, and solve the feature selection problem on the basis of that relation under regularization constraints; the relation, however, may also be a combination of two or more relations. To describe the relation between data and labels accurately and remove irrelevant and redundant features, a multi-label feature selection algorithm, FSML, based on a logistic regression model and the label manifold structure is proposed. The loss function of the logistic regression model is used to learn the regression coefficient matrix, the label manifold structure is used to learn the weight matrix of the data features, and the L2,1-norm flexibly combines the coefficient matrix with the weight matrix, constraining their sparsity and achieving multi-label feature selection. Experimental results on classical multi-label datasets show that, compared with feature selection algorithms such as CMLS and SCLS, FSML performs well on 5 evaluation metrics, namely Hamming loss, ranking loss, one-error, coverage, and average precision, and describes the relation between data and labels more accurately.

14.
Feature selection for multi-label naive Bayes classification
In multi-label learning, the training set is made up of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances. In this paper, this learning problem is addressed by using a method called Mlnb which adapts the traditional naive Bayes classifiers to deal with multi-label instances. Feature selection mechanisms are incorporated into Mlnb to improve its performance. Firstly, feature extraction techniques based on principal component analysis are applied to remove irrelevant and redundant features. After that, feature subset selection techniques based on genetic algorithms are used to choose the most appropriate subset of features for prediction. Experiments on synthetic and real-world data show that Mlnb achieves comparable performance to other well-established multi-label learning algorithms.
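A rough stand-in for the Mlnb pipeline is sketched below: PCA for the redundancy-removal stage and one Gaussian naive Bayes model per label. This is an assumed simplification; the genetic-algorithm subset search is omitted.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# PCA drops redundant directions, then one naive Bayes classifier per label.
X, Y = make_multilabel_classification(n_samples=300, n_features=40,
                                      n_classes=6, random_state=0)
model = make_pipeline(PCA(n_components=10),
                      OneVsRestClassifier(GaussianNB())).fit(X, Y)
Y_hat = model.predict(X)
```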

15.
Nowadays, multi-label classification methods are of increasing interest in areas such as text categorization, image annotation and protein function classification. Due to the correlation among the labels, traditional single-label classification methods are not directly applicable to the multi-label classification problem. This paper presents two novel multi-label classification algorithms based on variable precision neighborhood rough sets, called multi-label classification using rough sets (MLRS) and MLRS using local correlation (MLRS-LC). The proposed algorithms consider two important factors that affect the accuracy of prediction, namely the correlation among the labels and the uncertainty that exists within the mapping between the feature space and the label space. MLRS takes a global view of label correlation, while MLRS-LC deals with label correlation at the local level. Given a new instance, MLRS determines its location and then computes the label probabilities accordingly, whereas MLRS-LC first identifies the instance's topic and then computes the probability of the instance belonging to each class within that topic. A series of experiments on seven multi-label datasets shows that MLRS and MLRS-LC achieve promising performance when compared with some well-known multi-label learning algorithms.

16.
吴磊  张敏灵 《软件学报》2014,25(9):1992-2001
In the multi-label learning framework, each object is described by a single instance (feature vector) yet is associated with multiple class labels. A common strategy in existing multi-label learning algorithms is to use the same feature set to predict all class labels. This strategy is not necessarily optimal, because each label may have its own distinctive characteristics. Based on this assumption, the multi-label learning algorithm LIFT, which models label-specific features, has been proposed. LIFT consists of two steps: label-specific feature construction and classification model training. LIFT first performs clustering analysis on the positive and negative instances of each label to construct that label's specific features, and then trains a binary classification model for each label on its specific features. While keeping LIFT's classification model training procedure, this work examines three other mechanisms for constructing multi-label label-specific features, yielding three variants of LIFT: LIFT-MDDM, LIFT-INSDIF, and LIFT-MLF. Two groups of experiments on 12 datasets verify the influence of label-specific features on the performance of multi-label learning systems and the effectiveness of the label-specific feature construction method adopted by LIFT.
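The LIFT-style construction of label-specific features can be sketched as follows. This is a simplified, assumed illustration: k-means on each label's positive and negative instances, with distances to the cluster centers as the new representation; the ratio parameter is illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_multilabel_classification
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.svm import SVC

# For each label: cluster its positive and negative instances, then represent
# every instance by its distances to the cluster centers and train a binary SVM.
X, Y = make_multilabel_classification(n_samples=300, n_features=20,
                                      n_classes=5, random_state=0)
ratio = 0.1
classifiers, center_sets = [], []
for j in range(Y.shape[1]):
    pos, neg = X[Y[:, j] == 1], X[Y[:, j] == 0]
    m = max(1, int(ratio * min(len(pos), len(neg))))
    centers = np.vstack([
        KMeans(n_clusters=m, n_init=10, random_state=0).fit(pos).cluster_centers_,
        KMeans(n_clusters=m, n_init=10, random_state=0).fit(neg).cluster_centers_,
    ])
    Z = euclidean_distances(X, centers)          # label-specific representation
    classifiers.append(SVC().fit(Z, Y[:, j]))
    center_sets.append(centers)

Y_hat = np.column_stack([clf.predict(euclidean_distances(X, c))
                         for clf, c in zip(classifiers, center_sets)])
```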

17.
In multi-label learning, feature selection is an effective means of handling high-dimensional data and improving classification performance. However, most existing feature selection algorithms assume that the label distribution is roughly balanced and rarely consider imbalanced label distributions. To address this problem, this paper proposes a multi-label feature selection algorithm with weakening marginal labels (WML). The frequency ratio of positive to negative instances under each label is computed as that label's weight; the weighting then weakens marginal labels and incorporates label-space information into the feature selection process, yielding a more effective feature ranking and improving how precisely the labels describe the samples. Experimental results on several datasets show that the algorithm has clear advantages, and stability analysis and statistical hypothesis tests further demonstrate its effectiveness and rationality.

18.
张志浩  林耀进  卢舜  郭晨  王晨曦 《计算机应用》2021,41(10):2849-2857
Multi-label feature selection has been widely applied in areas such as image classification and disease diagnosis. In practice, however, some labels in the label space are often missing, which destroys the structure and correlations among labels and makes it difficult for learning algorithms to select important features accurately. To address this problem, a multi-label feature selection algorithm based on label-specific features under missing labels (MFSLML) is proposed. First, sparse learning is used to obtain the label-specific features of each class label; at the same time, a linear regression model is built to map label-specific features to labels, which is used to recover the missing labels. Finally, experiments are conducted on 7 datasets with 4 evaluation metrics. The results show that, compared with advanced multi-label feature selection algorithms such as the max-dependency and min-redundancy based algorithm (MDMR) and the feature-interaction based algorithm (MFML), MFSLML improves average precision by 4.61 to 5.5 percentage points, indicating better classification performance.

19.
In multi-label learning systems, each sample is associated with multiple class labels yet is described by a single feature vector. Most existing multi-label classification algorithms adopt the common strategy of using the same feature set to predict all class labels, which is not the best choice because each label may be most strongly correlated with its own distinctive features. To address this problem, IML-kNN, a k-nearest-neighbor multi-label classification algorithm incorporating label-specific features, is proposed. The feature vectors of the multi-label data are first preprocessed to construct, for each class label, the features most discriminative for that label; classification is then performed on these features with an improved ML-kNN algorithm. Experimental results show that IML-kNN clearly outperforms ML-kNN and three other commonly used multi-label classification algorithms on the yeast and image datasets.
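As a point of comparison, a plain k-NN multi-label baseline looks like this (assumed sketch; ML-kNN refines it with a Bayesian prior/posterior over neighbour label counts, and IML-kNN additionally builds label-specific features before the k-NN step).

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neighbors import KNeighborsClassifier

# One k-NN vote per label over the shared feature space.
X, Y = make_multilabel_classification(n_samples=300, n_features=20,
                                      n_classes=5, random_state=0)
knn = OneVsRestClassifier(KNeighborsClassifier(n_neighbors=10)).fit(X, Y)
Y_hat = knn.predict(X)
```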

20.
Traditional multi-label classification algorithms are based on predicting binary labels, but binary labels only indicate whether a sample has a given class, carry little semantic information, and cannot fully express label semantics. To fully mine the semantic information of the label space, a multi-label classification algorithm based on non-negative matrix factorization and sparse representation (MLNS) is proposed. The algorithm combines non-negative matrix factorization with sparse representation to convert binary labels into real-valued labels, enriching label semantics and improving classification. First, non-negative matrix factorization is applied to the label space to obtain a latent label semantic space, which is combined with the original feature space to form a new feature space. Then, sparse coding on this feature space yields the global similarity relations among samples. Finally, these similarity relations are used to reconstruct the binary label vectors, converting binary labels into real-valued labels. The proposed algorithm is compared with MLBGM, ML2, LIFT, and MLRWKNN on 5 standard multi-label datasets with 5 evaluation metrics. The results show that MLNS outperforms the compared multi-label classification algorithms, ranking first in 50% of the cases, in the top two in 76% of the cases, and in the top three in all cases.
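The first MLNS step, factorizing the label space and appending the latent factors to the features, can be sketched as follows. This is an assumed illustration with scikit-learn's NMF; the subsequent sparse-coding and label-reconstruction steps are omitted, and the component count is illustrative.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.decomposition import NMF

# Factorize the binary label matrix to obtain a latent label-semantic space
# and append it to the original features.
X, Y = make_multilabel_classification(n_samples=300, n_features=20,
                                      n_classes=8, random_state=0)
nmf = NMF(n_components=4, init='nndsvda', max_iter=500, random_state=0)
W = nmf.fit_transform(Y.astype(float))  # latent semantic coordinates per sample
X_new = np.hstack([X, W])               # combined space for the later sparse-coding step
```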

