Found 20 similar documents; search took 0 ms
1.
Applied Intelligence - Complex objects can be represented as multiple modal features and associated with multiple labels. The major challenge of complex object classification is how to jointly...
2.
A common approach to solving multi-label learning problems is to use problem transformation methods and dichotomizing classifiers, as in the pair-wise decomposition strategy. One problem with this strategy is that making a prediction requires querying a quadratic number of binary classifiers, which can be quite time consuming, especially in learning problems with a large number of labels. To tackle this problem, we propose a Two Stage Architecture (TSA) for efficient multi-label learning. We analyze three implementations of this architecture: the Two Stage Voting Method (TSVM), the Two Stage Classifier Chain Method (TSCCM) and the Two Stage Pruned Classifier Chain Method (TSPCCM). Eight different real-world datasets are used to evaluate the performance of the proposed methods. The performance of our approaches is compared with that of two algorithm adaptation methods (Multi-Label k-NN and Multi-Label C4.5) and five problem transformation methods (Binary Relevance, Classifier Chain, Calibrated Label Ranking with majority voting, the Quick Weighted method for pair-wise multi-label learning and the Label Powerset method). The results suggest that TSCCM and TSPCCM outperform the competing algorithms in terms of predictive accuracy, while TSVM has comparable predictive performance. In terms of testing speed, all three methods outperform the pair-wise methods for multi-label learning.
3.
Image annotation is posed as a multi-class classification problem. Pursuing higher accuracy is a permanent but never stale challenge in the field of image annotation. To further improve annotation accuracy, we propose a multi-view multi-label (MVML) learning algorithm that takes multiple features (i.e., views) and ensemble learning into account simultaneously. By doing so, we make full use of the complementarity among the views and among the base learners of the ensemble, leading to higher annotation accuracy. With respect to the different distributions of positive and negative training examples, we propose two versions of MVML: a Boosting version and a Bagging version. The former suits learning over balanced examples while the latter applies to the opposite scenario. Besides, the weights of the base learners are evaluated on validation data instead of training data, which improves the generalization ability of the final ensemble classifiers. The experimental results show that MVML is superior to a single-view ensemble SVM.
4.
As an effective method for mining latent information between labels, label correlation is widely adopted to model multi-label learning algorithms. Most existing multi-label algorithms ignore that the correlation between labels may be asymmetric, although asymmetric correlation commonly exists in real-world scenarios. To tackle this problem, a multi-label learning algorithm with asymmetric label correlation (ACML, Asymmetry Label Correlation for Multi-Label Learning) is proposed in this paper. First, we measure the adjacency between labels to construct the label adjacency matrix. Then, cosine similarity is used to construct the label correlation matrix. Finally, we constrain the label correlation matrix with the label adjacency matrix, so that asymmetric label correlation is modeled for multi-label learning. Experiments on multiple multi-label benchmark datasets show that ACML has clear advantages over the comparison algorithms, and the results of statistical hypothesis testing further illustrate the effectiveness of the proposed algorithm.
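The three-step construction this abstract describes (label adjacency, cosine similarity, and their combination) can be sketched in a few lines. The conditional-probability adjacency and the element-wise constraint below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def label_correlation(Y, eps=1e-12):
    """Y: (n_samples, n_labels) binary label matrix."""
    # Adjacency: P(label j | label i), an asymmetric co-occurrence measure.
    co = Y.T @ Y                             # label co-occurrence counts
    counts = np.maximum(co.diagonal(), 1)    # occurrences of each label
    adjacency = co / counts[:, None]         # row i conditions on label i
    # Cosine similarity between label columns (symmetric).
    norms = np.linalg.norm(Y, axis=0) + eps
    cosine = co / np.outer(norms, norms)
    # Constrain the symmetric cosine matrix with the asymmetric adjacency,
    # yielding an asymmetric label correlation matrix.
    return cosine * adjacency

Y = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 1],
              [0, 0, 1]])
C = label_correlation(Y)
```

Because the adjacency factor is conditional, `C[i, j]` generally differs from `C[j, i]`, which is precisely the asymmetry the abstract argues for.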
5.
Traditional machine learning mainly addresses single-label learning, where each sample carries exactly one label. In bioinformatics, a gene usually has at least one function, i.e., at least one label; compared with traditional methods, multi-label learning can identify the functions of biologically related gene groups more effectively. Current research focuses mainly on supervised multi-label learning algorithms. However, semi-supervised multi-label learning, which learns from both labeled and unlabeled gene expression data, remains an open problem. This paper proposes an effective semi-supervised multi-label learning algorithm for gene function analysis, SML_SVM. First, SML_SVM transforms the semi-supervised multi-label learning problem into a semi-supervised single-label learning problem using the PT4 method; then it estimates the labels of unlabeled samples with the maximum a posteriori (MAP) principle and the k-nearest-neighbor method; finally, it solves the single-label learning problem with an SVM. Experiments on the yeast gene data and the genbase protein data show that SML_SVM outperforms both PT4-based MLSVM and self-training MLSVM.
6.
In classification problems with hierarchical label structures, the target function must assign labels that are hierarchically organized, and it can be used either for single-label (one label per instance) or multi-label (more than one label per instance) classification problems. In parallel to these developments, semi-supervised learning has emerged as a solution to the problems of the standard supervised learning procedure used by most classification algorithms: it combines labelled and unlabelled data during the training phase. Some semi-supervised methods have been proposed for single-label classification. However, very little work has been done in the context of hierarchical multi-label classification. Therefore, this paper proposes a new method for supervised hierarchical multi-label classification, called HMC-RAkEL. Additionally, we propose the use of semi-supervised learning, specifically self-training, in hierarchical multi-label classification, leading to three new methods called HMC-SSBR, HMC-SSLP and HMC-SSRAkEL. To validate the feasibility of these methods, an empirical analysis is conducted comparing the proposed methods with their corresponding supervised versions. The main aim of this analysis is to observe whether the semi-supervised methods proposed in this paper perform similarly to their corresponding supervised versions.
7.
Machine Learning - We introduce a Gaussian process latent factor model for multi-label classification that can capture correlations among class labels by using a small set of latent Gaussian...
8.
Directly applying single-label classification methods to multi-label learning problems substantially limits both performance and speed due to the imbalance, dependence and high dimensionality of the given label matrix. Existing methods either ignore these three problems or reduce one at the price of aggravating another. In this paper, we propose a {0,1} label matrix compression and recovery method termed "compressed labeling (CL)" to simultaneously solve, or at least reduce, these three problems. CL first compresses the original label matrix to improve balance and independence by preserving the signs of its Gaussian random projections. Afterward, we directly utilize popular binary classification methods (e.g., support vector machines) for each new label. A fast recovery algorithm is developed to recover the original labels from the predicted new labels. In the recovery algorithm, a "labelset distilling method" is designed to extract distilled labelsets (DLs), i.e., the frequently appearing label subsets in the original labels, via recursive clustering and subtraction. Given a distilled and an original label vector, we discover that the signs of their random projections have an explicit joint distribution that can be quickly computed from a geometric inference. Based on this observation, the original label vector is exactly determined after performing a series of Kullback-Leibler divergence based hypothesis tests on the distribution of the new labels. CL significantly improves the balance of the training samples and reduces the dependence between different labels. Moreover, it accelerates the learning process by training fewer binary classifiers for compressed labels, and makes use of label dependence via DL-based tests. Theoretically, we prove recovery bounds for CL, which verify the effectiveness of CL for label compression and the improvement in multi-label classification performance brought by the label correlations preserved in DLs.
We show the effectiveness, efficiency and robustness of CL via 5 groups of experiments on 21 datasets from text classification, image annotation, scene classification, music categorization, genomics and web page classification.
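The compression step described above — preserving the signs of Gaussian random projections of the {0,1} label matrix — admits a minimal sketch. The projection dimension `k` and the seeding are illustrative assumptions, and the distilled-labelset recovery stage is omitted:

```python
import numpy as np

def compress_labels(Y, k, seed=0):
    """Compress an (n, L) {0,1} label matrix into an (n, k) sign matrix."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[1], k))  # Gaussian projection matrix
    # Keep only the signs of the projections; each of the k sign columns
    # becomes a new (more balanced, less dependent) binary learning target.
    return np.sign(Y @ A), A

Y = np.array([[1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1],
              [1, 1, 0, 1, 0]])
Z, A = compress_labels(Y, k=4)
```

One binary classifier would then be trained per column of `Z` instead of per original label, which is where the speedup for large label sets comes from.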
9.
Multi-instance multi-label learning (MIML) is a newly proposed framework in which multi-label problems are investigated by representing each sample with multiple feature vectors called instances. In this framework, the multi-label learning task becomes learning a many-to-many relationship, which also offers a way to explain why a given sample has certain class labels. The connections between instances and labels, as well as the correlations among labels, are equally crucial information for MIML, yet existing MIML algorithms can rarely exploit them simultaneously. In this paper, a new MIML algorithm based on Gaussian processes is proposed. The basic idea is to assume, for each label, a latent function with a Gaussian process prior in the instance space, and to infer the predictive probability of labels by integrating over the uncertainties in these functions using the Bayesian approach, so that the connection between instances and every label can be exploited by defining a likelihood function, and the correlations among labels can be identified via the covariance matrix of the latent functions. Moreover, since different relationships between instances and labels can be captured by defining different likelihood functions, the algorithm can deal with problems under various multi-instance assumptions. Experimental results on several benchmark data sets show that the proposed algorithm is valid and achieves performance superior to existing ones.
10.
In multi-label learning, each instance is simultaneously associated with multiple class labels. A large number of labels in an application exacerbates the problem of label scarcity. An interesting issue concerns how to query as few labels as possible while obtaining satisfactory classification accuracy. For this purpose, we propose the attribute and label distribution driven multi-label active learning (MCAL) algorithm. MCAL considers the characteristics of both attributes and labels to enable the selection of critical instances based on different measures. Representativeness is measured by the probability density function obtained by non-parametric estimation, while informativeness is measured by the bilateral softmax predicted entropy. Diversity is measured by the distance metric among instances, and richness is measured by the number of softmax predicted labels. We describe experiments performed on eight benchmark datasets and eleven real Yahoo webpage datasets. The results verify the effectiveness of MCAL and its superiority over state-of-the-art multi-label algorithms and multi-label active learning algorithms.
11.
Label-feature-based multi-label classification algorithms cluster the positive and negative example sets of each label, compute the distances between examples and the cluster centers to construct a label-specific feature subset for each label, and generate a new training set on which conventional binary classifiers are trained. Because these algorithms weight all examples equally when constructing the feature subsets, they ignore the correlations among examples. This paper proposes an improved multi-label classification algorithm that uses a weighting scheme to make the generated feature subsets more accurate, which helps improve classification accuracy. Experiments show that the improved algorithm outperforms other commonly used multi-label classification algorithms.
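The feature-subset construction underlying this family of algorithms (cluster each label's positive and negative example sets, then use distances to the cluster centers as new features) can be sketched as follows. The plain k-means and the variable names are assumptions for illustration, and the weighting scheme the paper proposes is omitted:

```python
import numpy as np

def kmeans_centers(P, k, iters=20, seed=0):
    """Plain k-means on the rows of P, returning the k cluster centers."""
    rng = np.random.default_rng(seed)
    centers = P[rng.choice(len(P), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(P[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = P[assign == j].mean(axis=0)
    return centers

def label_specific_features(X, y, k=2, seed=0):
    """New features for one label: each example's distances to the cluster
    centers of that label's positive and negative example sets."""
    pos = kmeans_centers(X[y == 1], min(k, int((y == 1).sum())), seed=seed)
    neg = kmeans_centers(X[y == 0], min(k, int((y == 0).sum())), seed=seed)
    centers = np.vstack([pos, neg])
    return np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)

X = np.array([[0.0, 0.0], [0.0, 1.0], [0.2, 0.5],
              [5.0, 5.0], [5.0, 6.0], [5.2, 5.5]])
y = np.array([1, 1, 1, 0, 0, 0])
F = label_specific_features(X, y)
```

A conventional binary classifier for this label would then be trained on `F` rather than on `X`; repeating the construction per label yields one tailored feature space per label.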
12.
Neural Computing and Applications - For multi-label learning, extracting specific features from instances under the supervision of class labels is meaningful, and the "purified"...
13.
Multi-label learning has attracted much attention. However, data continuously generated by sensors, network access and similar sources, that is, data streams, bring challenges such as real-time processing, limited memory and single-pass constraints. Several learning algorithms have been proposed for offline multi-label classification, but little research has developed dynamic multi-label incremental learning models based on cascading schemes. Deep forest can perform representation learning layer by layer without relying on backpropagation. Using this cascading scheme, this paper proposes a multi-label data stream deep forest (VDSDF) learning algorithm based on a cascaded Very Fast Decision Tree (VFDT) forest, which can receive examples successively, perform incremental learning and adapt to concept drift. Experimental results show that the proposed VDSDF algorithm, as an incremental classifier, is more competitive than batch classification algorithms on multiple indicators. Moreover, in dynamic stream scenarios, VDSDF adapts to concept drift better than the comparison algorithms.
14.
In this paper, we address two important issues in the video concept detection problem: the insufficiency of labeled videos and the multiple labeling issue. Most existing solutions merely handle the two issues separately. We propose an integrated approach to handle them together, by presenting an effective transductive multi-label classification approach that simultaneously models the labeling consistency between the visually similar videos and the multi-label interdependence for each video. We compare the performance between the proposed approach and several representative transductive and supervised multi-label classification approaches for the video concept detection task over the widely used TRECVID data set. The comparative results demonstrate the superiority of the proposed approach.
16.
In multi-instance multi-label learning (MIML), each example is not only represented by multiple instances but also associated with multiple class labels. Several learning frameworks, such as the traditional supervised learning, can be regarded as degenerated versions of MIML. Therefore, an intuitive way to solve MIML problem is to identify its equivalence in its degenerated versions. However, this identification process would make useful information encoded in training examples get lost and thus impair the learning algorithm's performance. In this paper, RBF neural networks are adapted to learn from MIML examples. Connections between instances and labels are directly exploited in the process of first layer clustering and second layer optimization. The proposed method demonstrates superior performance on two real-world MIML tasks.
17.
Multi-label learning has received significant attention in the research community over the past few years: this has resulted in the development of a variety of multi-label learning methods. In this paper, we present an extensive experimental comparison of 12 multi-label learning methods using 16 evaluation measures over 11 benchmark datasets. We selected the competing methods based on their previous usage by the community, the representation of different groups of methods and the variety of basic underlying machine learning methods. Similarly, we selected the evaluation measures to be able to assess the behavior of the methods from a variety of view-points. In order to make conclusions independent from the application domain, we use 11 datasets from different domains. Furthermore, we compare the methods by their efficiency in terms of time needed to learn a classifier and time needed to produce a prediction for an unseen example. We analyze the results from the experiments using Friedman and Nemenyi tests for assessing the statistical significance of differences in performance. The results of the analysis show that for multi-label classification the best performing methods overall are random forests of predictive clustering trees (RF-PCT) and hierarchy of multi-label classifiers (HOMER), followed by binary relevance (BR) and classifier chains (CC). Furthermore, RF-PCT exhibited the best performance according to all measures for multi-label ranking. The recommendation from this study is that when new methods for multi-label learning are proposed, they should be compared to RF-PCT and HOMER using multiple evaluation measures.
18.
In a multi-label learning framework, each instance may belong to multiple labels simultaneously. Classification accuracy can be improved significantly by exploiting various correlations, such as label correlations, feature correlations, or the correlations between features and labels. Few studies address how to combine feature and label correlations, and those that do deal mostly with complete data sets. However, missing labels often occur because of cost or technical limitations in the data acquisition process. The few label-completion algorithms currently suited to missing multi-label learning ignore noise interference in the feature space. At the same time, the threshold of the discriminant function often affects the classification results, especially for labels near the threshold. All these factors make it difficult to handle missing labels using label correlations. Therefore, we propose a non-equilibrium missing multi-label learning algorithm based on a two-level autoencoder. First, label density is introduced to enlarge the classification margin of the label space. Then, a supplementary label matrix is augmented from the missing label matrix with the non-equilibrium label completion method. Finally, considering feature-space noise, a two-level kernel extreme learning machine autoencoder is constructed to exploit feature information and label correlations. The effectiveness of the proposed algorithm is verified by extensive experiments on both missing-label and complete-label data sets, and statistical hypothesis testing further validates the approach.
19.
The goal in multi-label classification is to tag a data point with the subset of relevant labels from a pre-specified set. Given a set of L labels, a data point can be tagged with any of the 2^L possible subsets. The main challenge therefore lies in optimising over this exponentially large label space subject to label correlations. Our objective, in this paper, is to design efficient algorithms for multi-label classification when the labels are densely correlated. In particular, we are interested in the zero-shot learning scenario where the label correlations on the training set might be significantly different from those on the test set. We propose a max-margin formulation where we model prior label correlations but do not incorporate pairwise label interaction terms in the prediction function. We show that the problem complexity can be reduced from exponential to linear while modelling dense pairwise prior label correlations. By incorporating relevant correlation priors we can handle mismatches between the training and test set statistics. Our proposed formulation generalises the effective 1-vs-All method and we provide a principled interpretation of the 1-vs-All technique. We develop efficient optimisation algorithms for our proposed formulation. We adapt the Sequential Minimal Optimisation (SMO) algorithm to multi-label classification and show that, with some book-keeping, we can reduce the training time from being super-quadratic to almost linear in the number of labels. Furthermore, by effectively re-utilizing the kernel cache and jointly optimising over all variables, we can be orders of magnitude faster than the competing state-of-the-art algorithms. We also design a specialised algorithm for linear kernels based on dual co-ordinate ascent with shrinkage that lets us effortlessly train on a million points with a hundred labels.
20.
Multi-instance multi-label learning (MIML) is a recently proposed machine learning framework designed to handle ambiguity, in which an object is represented by a set of instances and associated with a set of class labels. E-MIMLSVM~+ is a classic degeneration-based classification algorithm within this framework, but it cannot learn from unlabeled samples, which limits its generalization ability. We improve the algorithm with a semi-supervised support vector machine: the improved algorithm learns from a small number of labeled samples together with a large number of unlabeled ones, which helps uncover the hidden structure within the sample set and reveal its true distribution. Comparative experiments show that the improved algorithm effectively improves the generalization performance of the classifier.