Similar Documents
A total of 20 similar documents were retrieved.
1.
In this paper, we investigate the performance of statistical, mathematical programming and heuristic linear models for cost-sensitive classification. In particular, we use five cost-sensitive techniques: Fisher's discriminant analysis (DA), asymmetric misclassification cost mixed integer programming (AMC-MIP), cost-sensitive support vector machine (CS-SVM), a hybrid support vector machine and mixed integer programming (SVMIP), and a heuristic cost-sensitive genetic algorithm (CGA). Using simulated datasets of varying group overlaps, data distributions and class biases, and real-world datasets from the financial and medical domains, we compare the performance of the five techniques based on overall holdout-sample misclassification cost. The results of our experiments on simulated datasets indicate that when group overlap is low and the data distribution is exponential, DA appears to provide superior performance. For all other situations with simulated datasets, CS-SVM provides superior performance. For the real-world datasets from the financial domain, CGA and AMC-MIP hold a slight edge over the two SVM-based classifiers. However, for the medical domains with mixed continuous and discrete attributes, the SVM classifiers perform better than the heuristic (CGA) and AMC-MIP classifiers. The SVMIP model is both the most computationally inefficient and the poorest performing model.
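The comparison criterion used above, overall holdout-sample misclassification cost, is easy to compute once a cost matrix is fixed. A minimal sketch in Python follows; the cost values, dataset, and the class-weighted SVM standing in for a cost-sensitive SVM are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def total_misclassification_cost(y_true, y_pred, cost):
    """Sum cost[true, pred] over the holdout sample; diagonal entries are zero."""
    return float(sum(cost[t, p] for t, p in zip(y_true, y_pred)))

# Assumed asymmetric cost matrix: misclassifying class 1 as 0 is five times as costly.
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])

X, y = make_classification(n_samples=600, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One common way to approximate a cost-sensitive SVM: weight the costly class more heavily.
clf = SVC(class_weight={0: 1.0, 1: 5.0}).fit(X_tr, y_tr)
print(total_misclassification_cost(y_te, clf.predict(X_te), cost))
```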

2.
3.
唐诗淇  文益民  秦一休 《软件学报》2017,28(11):2940-2960
In recent years, transfer learning has received growing attention. Existing online transfer learning algorithms generally transfer knowledge from a single source domain; however, when the similarity between the source domain and the target domain is low, effective transfer is difficult. To address this, a multi-source online transfer learning method based on local classification accuracy, LC-MSOTL, is proposed. LC-MSOTL stores classifiers from multiple source domains; for each newly arriving sample it computes the distances to the existing target-domain samples and the classification accuracy of each source-domain classifier on the sample's nearest neighbors, selects the source classifier with the highest local accuracy, and combines it with the target-domain classifier through weighting, thereby transferring knowledge from multiple source domains to the target domain. Experimental results on artificial and real-world datasets show that LC-MSOTL achieves effective selective transfer from multiple source domains and attains higher classification accuracy than the single-source online transfer learning algorithm OTL.
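The selection step described above can be sketched in a few lines: pick the source classifier that is most accurate on the new sample's nearest target-domain neighbours, then combine its vote with the target classifier's. The classifier interface, the value of k, and the equal-weight combination below are illustrative assumptions rather than the exact LC-MSOTL formulation.

```python
import numpy as np

def select_source_by_local_accuracy(x_new, X_target, y_target, source_clfs, k=5):
    """Return the source classifier most accurate on x_new's k nearest target samples."""
    dists = np.linalg.norm(X_target - x_new, axis=1)
    nn = np.argsort(dists)[:k]
    local_acc = [np.mean(clf.predict(X_target[nn]) == y_target[nn]) for clf in source_clfs]
    return source_clfs[int(np.argmax(local_acc))]

def combined_predict(x_new, target_clf, source_clf, alpha=0.5):
    """Weighted vote of the target classifier and the selected source classifier."""
    votes = {}
    for w, clf in ((alpha, target_clf), (1.0 - alpha, source_clf)):
        label = clf.predict(x_new.reshape(1, -1))[0]
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```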

4.
周胜  刘三民 《计算机工程》2020,46(5):139-143,149
To address concept drift and noise in data stream classification, a multi-source transfer learning method based on sample certainty is proposed. The method stores classifiers trained on multiple source domains and, for each sample in a target-domain data block, computes the class posterior probability and the sample-certainty value of every source-domain classifier. Source-domain classifiers whose sample-certainty values satisfy the current threshold are then combined online with the target-domain classifier, transferring knowledge from multiple source domains to the target domain. Experimental results show that the method effectively removes the adverse influence of noisy data streams on uncertain classifiers and achieves higher classification accuracy and better noise robustness than a multi-source transfer learning method based on accuracy-driven ensemble selection.
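One plausible reading of the certainty test above is to treat a source classifier's certainty on a sample as the largest class posterior it assigns, and to let only sufficiently certain classifiers join the online ensemble. A sketch under that assumption; the threshold value, classifier interface, and posterior averaging are illustrative.

```python
import numpy as np

def ensemble_predict(x, target_clf, source_clfs, threshold=0.7):
    """Average the posteriors of the target classifier and the sufficiently certain source classifiers."""
    x = x.reshape(1, -1)
    members = [target_clf.predict_proba(x)[0]]
    for clf in source_clfs:
        proba = clf.predict_proba(x)[0]
        if proba.max() >= threshold:   # sample-certainty test (assumed form)
            members.append(proba)
    return int(np.argmax(np.mean(members, axis=0)))
```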

5.
This paper presents a method for combining domain knowledge and machine learning (CDKML) for classifier generation and online adaptation. The method exploits the advantages of domain knowledge and machine learning as complementary information sources. Whereas machine learning may discover patterns in interest domains that are too subtle for humans to detect, domain knowledge may contain information on a domain not present in the available domain dataset. CDKML has three steps. First, prior domain knowledge is enriched with relevant patterns obtained by machine learning to create an initial classifier. Second, genetic algorithms refine the classifier. Third, the classifier is adapted online on the basis of user feedback using the Markov decision process. CDKML was applied in fall detection. Tests showed that the classifiers developed by CDKML have better performance than machine-learning classifiers generated on a training dataset that does not adequately represent all real-life cases of the learned concept. The accuracy of the initial classifier was 10 percentage points higher than that of the best machine-learning classifier, and the refinement added a further 3 percentage points. The online adaptation improved the accuracy of the refined classifier by an additional 15 percentage points.

6.
A Heterogeneous Transductive Transfer Learning Algorithm
杨柳  景丽萍  于剑 《软件学报》2015,26(11):2762-2780
When the target domain has only a small amount of labeled data, learning performance suffers, while related source domains may contain labeled data. Transfer learning addresses this situation by applying knowledge learned on a source domain that is different from, but related to, the target domain. In practical applications such as text-to-image and cross-language transfer learning, the feature spaces of the source and target domains differ; this is heterogeneous transfer learning. The focus here is on using labeled source-domain data to improve learning on unlabeled target-domain data, i.e., heterogeneous transductive transfer learning. Because the source and target feature spaces differ, a key problem in heterogeneous transfer learning is learning a mapping function from the source domain to the target domain. This paper proposes learning the mapping function by unsupervised matching of the source and target feature spaces. The learned mapping re-represents source-domain data in the target domain, so the re-represented labeled source data can be transferred to the target domain. Standard machine learning methods (for example, support vector machines) can then be used to train a classifier to predict labels for the unlabeled target-domain data. A probabilistic interpretation is given to show that the approach is robust to some noise in the data, and a sample-complexity bound, i.e., the number of samples needed to find the mapping function, is derived. Experimental results on four real-world databases demonstrate the effectiveness of the method.

7.
State-of-the-art statistical NLP systems for a variety of tasks learn from labeled training data that is often domain specific. However, there may be multiple domains or sources of interest on which the system must perform. For example, a spam filtering system must give high quality predictions for many users, each of whom receives emails from different sources and may make slightly different decisions about what is or is not spam. Rather than learning separate models for each domain, we explore systems that learn across multiple domains. We develop a new multi-domain online learning framework based on parameter combination from multiple classifiers. Our algorithms draw from multi-task learning and domain adaptation to adapt multiple source domain classifiers to a new target domain, learn across multiple similar domains, and learn across a large number of disparate domains. We evaluate our algorithms on two popular NLP domain adaptation tasks: sentiment classification and spam filtering.
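The parameter-combination idea can be illustrated with a toy online learner: each domain keeps its own weight vector, a shared vector is maintained across domains, and a domain's prediction uses a mix of the two. The fixed mixing coefficient and the perceptron-style mistake-driven update below are illustrative assumptions, not the algorithms evaluated in the paper.

```python
import numpy as np

class MultiDomainPerceptron:
    """Toy multi-domain linear learner: predictions mix shared and per-domain weights."""

    def __init__(self, n_features, beta=0.5):
        self.shared = np.zeros(n_features)
        self.per_domain = {}
        self.beta = beta  # weight on the shared parameters (assumed fixed)

    def _w(self, domain, n):
        if domain not in self.per_domain:
            self.per_domain[domain] = np.zeros(n)
        return self.beta * self.shared + (1 - self.beta) * self.per_domain[domain]

    def predict(self, x, domain):
        return 1 if self._w(domain, x.size) @ x >= 0 else -1

    def update(self, x, y, domain):
        if self.predict(x, domain) != y:   # mistake-driven update of both parameter sets
            self.shared += y * x
            self.per_domain[domain] += y * x
```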

8.
A Study of Approaches to Hypertext Categorization
Hypertext poses new research challenges for text classification. Hyperlinks, HTML tags, category labels distributed over linked documents, and meta data extracted from related Web sites all provide rich information for classifying hypertext documents. How to appropriately represent that information and automatically learn statistical patterns for solving hypertext classification problems is an open question. This paper seeks a principled approach to providing the answers. Specifically, we define five hypertext regularities which may (or may not) hold in a particular application domain, and whose presence (or absence) may significantly influence the optimal design of a classifier. Using three hypertext datasets and three well-known learning algorithms (Naive Bayes, Nearest Neighbor, and First Order Inductive Learner), we examine these regularities in different domains, and compare alternative ways to exploit them. Our results show that the identification of hypertext regularities in the data and the selection of appropriate representations for hypertext in particular domains are crucial, but seldom obvious, in real-world problems. We find that adding the words in the linked neighborhood to the page having those links (both inlinks and outlinks) was helpful for all our classifiers on one data set, but more harmful than helpful for two out of the three classifiers on the remaining datasets. We also observed that extracting meta data from related Web sites was extremely useful for improving classification accuracy in some of those domains. Finally, the relative performance of the classifiers being tested provided insights into their strengths and limitations for solving classification problems involving diverse and often noisy Web pages.
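The neighbourhood-expansion heuristic mentioned above can be written down in a few lines: append the words of each page's inlinked and outlinked neighbours to its own text before building the usual bag-of-words representation. The data structures below (a dict of page texts and a dict of link sets) are illustrative assumptions, not the paper's data format.

```python
from sklearn.feature_extraction.text import CountVectorizer

def expand_with_neighbors(pages, links):
    """pages: {page_id: text}; links: {page_id: set of linked page_ids (in- and outlinks)}.
    Returns texts with each page's linked-neighbourhood words appended."""
    return {pid: text + " " + " ".join(pages[n] for n in links.get(pid, ()) if n in pages)
            for pid, text in pages.items()}

pages = {"a": "learning classifiers", "b": "hypertext categorization", "c": "naive bayes"}
links = {"a": {"b"}, "b": {"a", "c"}, "c": set()}
X = CountVectorizer().fit_transform(expand_with_neighbors(pages, links).values())
```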

9.
Engineering activities often produce considerable documentation as a by-product of the development process. Due to their complexity, technical analysts can benefit from text processing techniques able to identify concepts of interest and analyze deficiencies of the documents in an automated fashion. In practice, text sentences from the documentation are usually transformed to a vector space model, which is suitable for traditional machine learning classifiers. However, such transformations suffer from problems of synonyms and ambiguity that cause classification mistakes. For alleviating these problems, there has been a growing interest in the semantic enrichment of text. Unfortunately, using general-purpose thesauri and encyclopedias to enrich technical documents belonging to a given domain (e.g. requirements engineering) often introduces noise and does not improve classification. In this work, we aim at boosting text classification by exploiting information about semantic roles. We have explored this approach when building a multi-label classifier for identifying special concepts, called domain actions, in textual software requirements. After evaluating various combinations of semantic roles and text classification algorithms, we found that this kind of semantically-enriched data leads to improvements of up to 18% in both precision and recall, when compared to non-enriched data. Our enrichment strategy based on semantic roles also allowed classifiers to reach acceptable accuracy levels with small training sets. Moreover, semantic roles outperformed Wikipedia- and WordNET-based enrichments, which failed to boost requirements classification with several techniques. These results drove the development of two requirements tools, which we successfully applied in the processing of textual use cases.
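As a toy illustration of role-based enrichment (not the authors' pipeline): if each requirement sentence arrives as (token, semantic role) pairs from some semantic role labeller, the roles can be turned into extra features next to the plain words before vectorisation. The role inventory and the `role=token` feature format below are assumptions made for the example.

```python
from sklearn.feature_extraction.text import CountVectorizer

def enrich_with_roles(tagged_sentences):
    """tagged_sentences: list of [(token, role), ...]; returns strings with role-tagged tokens added."""
    docs = []
    for sent in tagged_sentences:
        words = [tok for tok, _ in sent]
        role_feats = [f"{role}={tok}" for tok, role in sent if role != "O"]
        docs.append(" ".join(words + role_feats))
    return docs

tagged = [[("the", "O"), ("user", "AGENT"), ("updates", "ACTION"), ("the", "O"), ("record", "THEME")]]
X = CountVectorizer(token_pattern=r"\S+").fit_transform(enrich_with_roles(tagged))
```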

10.
A Cost-Sensitive AdaBoost Algorithm for Multi-Class Classification Problems
付忠良 《自动化学报》2011,37(8):973-983
To address the cost-aggregation problem that arises when multi-class cost-sensitive classification is converted into binary cost-sensitive classification, a cost-sensitive AdaBoost algorithm that can be applied directly to multi-class problems is constructed. The algorithm has a procedure and error estimate similar to those of real (continuous) AdaBoost. When all costs are equal, it reduces to a new multi-class real AdaBoost algorithm that guarantees the training error decreases as the number of trained classifiers grows, without directly requiring the individual classifiers to be mutually independent; the independence condition can instead be ensured through the algorithm's rules, whereas the derivation of existing multi-class real AdaBoost algorithms requires mutual independence of the classifiers. Experimental data show that the algorithm genuinely biases classification results toward classes with smaller misclassification costs; in particular, when the costs of misclassifying each class into the other classes are unbalanced but the average costs are equal, existing multi-class cost-sensitive learning algorithms fail, while the new method still achieves the minimum misclassification cost. The approach offers a new line of thinking for further research on ensemble learning algorithms and yields an easy-to-implement AdaBoost algorithm for multi-label classification that approximately minimizes the classification error rate.
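A much simpler way to make multi-class boosting respond to unequal costs, shown here purely as an illustration and not as the algorithm derived in the paper, is to give each training example an initial weight proportional to the average cost of misclassifying its class and to pass those weights to a standard multi-class AdaBoost implementation. The cost matrix and dataset are assumed values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Illustrative 3-class cost matrix: cost[i, j] is the cost of predicting j for true class i.
cost = np.array([[0, 1, 1],
                 [4, 0, 4],
                 [1, 1, 0]], dtype=float)

X, y = make_classification(n_samples=300, n_classes=3, n_informative=4, random_state=0)
class_cost = cost.sum(axis=1) / (cost.shape[1] - 1)   # mean cost of misclassifying class i
sample_weight = class_cost[y]
sample_weight /= sample_weight.sum()

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y, sample_weight=sample_weight)             # costly classes start with more weight
```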

11.
We address the sequence classification problem using a probabilistic model based on hidden Markov models (HMMs). In contrast to commonly-used likelihood-based learning methods such as the joint/conditional maximum likelihood estimator, we introduce a discriminative learning algorithm that focuses on class margin maximization. Our approach has two main advantages: (i) As an extension of support vector machines (SVMs) to sequential, non-Euclidean data, the approach inherits benefits of margin-based classifiers, such as the provable generalization error bounds. (ii) Unlike many algorithms based on non-parametric estimation of similarity measures that enforce weak constraints on the data domain, our approach utilizes the HMM’s latent Markov structure to regularize the model in the high-dimensional sequence space. We demonstrate significant improvements in classification performance of the proposed method in an extensive set of evaluations on time-series sequence data that frequently appear in data mining and computer vision domains.

12.
Class overlap degree refers to the extent to which data from different classes intermingle; its quantitative indicators fall into two categories, geometry/statistics-based and information-theory-based, and they are used to measure how difficult the data are to classify. Practical classification tasks involve large amounts of imbalanced data, and the huge disparity in sample counts between majority and minority classes makes classification extremely difficult. This paper uses an experimental approach to verify the effectiveness of class-overlap indicators for guiding imbalanced-data classification, so as to reduce or even avoid the heavy computational cost of blind trial and error. First, for two-class problems, verification experiments are designed on simulated imbalanced data with different imbalance ratios, class-boundary shapes, feature types, and probability distributions to study the effectiveness of the class overlap degree. Second, based on the experimental study, the influence of data imbalance on the class overlap degree is analyzed to identify effective ways of using the class overlap degree to guide imbalanced classification. Finally, the practical effect of using the class overlap degree to guide imbalanced classification is verified on real imbalanced data. The experimental results show that class-overlap indicators that are robust to the data imbalance ratio can effectively guide classifier selection for imbalanced data.
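As an illustration of a geometry/statistics-based overlap indicator of the kind discussed above, the sketch below computes the maximum Fisher's discriminant ratio over features for a two-class dataset, one commonly used data-complexity measure; choosing this particular indicator is an assumption for the example, not necessarily one of the indicators studied in the paper.

```python
import numpy as np

def max_fisher_ratio(X, y):
    """Maximum Fisher's discriminant ratio over features for a two-class problem.
    Larger values mean less class overlap along at least one feature."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return float(np.max(num / den))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(1.5, 1, (20, 2))])  # imbalanced toy data
y = np.array([0] * 200 + [1] * 20)
print(max_fisher_ratio(X, y))
```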

13.
Many real-world applications reveal difficulties in learning classifiers from imbalanced data. Although several methods for improving classifiers have been introduced, the identification of conditions for the efficient use of a particular method is still an open research problem. It is also worthwhile to study the nature of imbalanced data, characteristics of the minority class distribution and their influence on classification performance. However, current studies on imbalanced data difficulty factors have been mainly done with artificial datasets and their conclusions are not easily applicable to real-world problems, also because the methods for their identification are not sufficiently developed. In our paper, we capture difficulties of class distribution in real datasets by considering four types of minority class examples: safe, borderline, rare and outliers. First, we confirm their occurrence in real data by exploring multidimensional visualizations of selected datasets. Then, we introduce a method for the identification of these types of examples, which is based on analyzing the class distribution in a local neighbourhood of the considered example. Two ways of modeling this neighbourhood are presented: with k-nearest examples and with kernel functions. Experiments with artificial datasets show that these methods are able to re-discover simulated types of examples. Further contributions of this paper include a comprehensive experimental study with 26 real-world imbalanced datasets, where (1) we identify new data characteristics based on the analysis of types of minority examples; and (2) we demonstrate that considering the results of this analysis allows us to differentiate the classification performance of popular classifiers and pre-processing methods and to evaluate their areas of competence. Finally, we highlight directions for exploiting the results of our analysis in developing new algorithms for learning classifiers and pre-processing methods.
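The k-nearest-neighbour variant of the identification method can be sketched directly: label each minority example as safe, borderline, rare, or outlier according to how many of its k nearest neighbours share its class. The thresholds below follow the common k = 5 split into 4-5, 2-3, 1 and 0 same-class neighbours; treat them as an assumed reading of the method rather than the paper's exact parameters.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def minority_example_types(X, y, minority_label, k=5):
    """Categorise minority examples by the number of same-class examples among their k neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because the point itself is returned
    types = {}
    for i in np.where(y == minority_label)[0]:
        _, idx = nn.kneighbors(X[i].reshape(1, -1))
        same = int(np.sum(y[idx[0][1:]] == minority_label))
        types[i] = ("safe" if same >= 4 else
                    "borderline" if same >= 2 else
                    "rare" if same == 1 else "outlier")
    return types
```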

14.
Many real-world problems require multilabel classification, in which each training instance is associated with a set of labels. There are many existing learning algorithms for multilabel classification; however, these algorithms assume implicit negativity, where missing labels in the training data are automatically assumed to be negative. Additionally, many of the existing algorithms do not handle incremental learning in which new labels could be encountered later in the learning process. A novel multilabel adaptation of the backpropagation algorithm is proposed that does not assume implicit negativity. In addition, this algorithm can, using a naïve Bayesian approach, infer missing labels in the training data. This algorithm can also be trained incrementally as it dynamically considers new labels. This solution is compared with existing multilabel algorithms using data sets from multiple domains, and the performance is measured with standard multilabel evaluation metrics. It is shown that our algorithm improves classification performance for all metrics by an overall average of 7.4% when at least 40% of the labels are missing from the training data and improves by 18.4% when at least 90% of the labels are missing.
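The key departure from implicit negativity, ignoring missing labels rather than treating them as negative, can be illustrated by masking the per-label loss during backpropagation. The sketch below uses a single sigmoid output layer in NumPy; the network shape and the use of NaN to mark missing labels are illustrative assumptions, and the naïve Bayesian label inference is not shown.

```python
import numpy as np

def masked_bce_grad(y_true, y_pred):
    """Gradient of binary cross-entropy w.r.t. logits; NaN labels (missing) contribute nothing."""
    mask = ~np.isnan(y_true)
    return np.where(mask, y_pred - np.nan_to_num(y_true), 0.0)

def train_step(W, b, X, Y, lr=0.1):
    """One gradient step of a single-layer multilabel model with a masked loss."""
    probs = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    g = masked_bce_grad(Y, probs) / X.shape[0]
    return W - lr * X.T @ g, b - lr * g.sum(axis=0)

X = np.random.randn(8, 4)
Y = np.array([[1, np.nan, 0]] * 8, dtype=float)    # second label missing, not assumed negative
W, b = np.zeros((4, 3)), np.zeros(3)
W, b = train_step(W, b, X, Y)
```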

15.
Recently, Learning Classifier Systems (LCS) and particularly XCS have arisen as promising methods for classification tasks and data mining. This paper investigates two models of accuracy-based learning classifier systems on different types of classification problems. Departing from XCS, we analyze the evolution of a complete action map as a knowledge representation. We propose an alternative, UCS, which evolves a best action map more efficiently. We also investigate how the fitness pressure guides the search towards accurate classifiers. While XCS bases fitness on a reinforcement learning scheme, UCS defines fitness from a supervised learning scheme. We find significant differences in how the fitness pressure leads towards accuracy, and suggest the use of a supervised approach especially for multi-class problems and problems with unbalanced classes. We also investigate the complexity factors which arise in each type of accuracy-based LCS. We provide a model of the learning complexity of LCS which is based on the representative examples given to the system. The results and observations are also extended to a set of real-world classification problems, where accuracy-based LCS are shown to perform competitively with respect to other learning algorithms. The work presents an extended analysis of accuracy-based LCS, gives insight into the understanding of LCS dynamics, and suggests open issues for further improvement of LCS on classification tasks.

16.
汪云云  孙顾威  赵国祥  薛晖 《软件学报》2022,33(4):1170-1182
Unsupervised domain adaptation (UDA) aims to use a source domain with abundant labeled data to help a target domain without any label information to learn. UDA usually assumes that the data distributions of the source and target domains differ but that the two domains share the same class label space. In real open-world learning scenarios, however, the label spaces of the two domains may well differ; in the extreme case, the classes of the two domains do not intersect, that is, the classes in the target domain...

17.
Data produced in many real-world domains are often multi-class and imbalanced. In multi-class imbalanced classification, problems such as class overlap, noise, and multiple minority classes degrade classifier performance, and effectively solving the multi-class imbalance problem has become an important research topic in machine learning and data mining. Based on the recent literature on multi-class imbalanced classification methods, this paper analyzes and summarizes the field from two perspectives, data preprocessing and algorithm-level classification methods, and analyzes all of the algorithms in detail in terms of their advantages, disadvantages, and datasets. For data preprocessing, oversampling, undersampling, hybrid sampling, and feature selection methods are introduced, and the performance of algorithms that use the same datasets is compared. Algorithm-level classification methods are introduced and analyzed from three aspects: base-classifier optimization, ensemble learning, and multi-class decomposition techniques. Finally, future research directions for multi-class imbalanced data classification are summarized.

18.
Transfer learning aims to enhance performance in a target domain by exploiting useful information from auxiliary or source domains when the labeled data in the target domain are insufficient or difficult to acquire. In some real-world applications, the data of the source domain are provided in advance, but the data of the target domain may arrive in a streaming fashion. This kind of problem is known as online transfer learning. In practice, there can be several source domains that are related to the target domain. The performance of online transfer learning is highly associated with the selected source domains, and simply combining the source domains may lead to unsatisfactory performance. In this paper, we seek to promote classification performance in a target domain by leveraging labeled data from multiple source domains in an online setting. To achieve this, we propose a new online transfer learning algorithm that merges and leverages the classifiers of the source and target domains with an ensemble method. The mistake bound of the proposed algorithm is analyzed, and comprehensive experiments on three real-world data sets illustrate that our algorithm outperforms the compared baseline algorithms.
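The merge-and-leverage step can be sketched as a weighted ensemble whose member weights are decayed multiplicatively whenever a member errs on the incoming example, Hedge-style, while the target-domain classifier is updated online and the source classifiers stay fixed. The weighting rule and classifier interface are illustrative assumptions, not the paper's exact algorithm or the one covered by its mistake bound.

```python
import numpy as np

class OnlineTransferEnsemble:
    """Weighted vote of fixed source classifiers and one online target classifier."""

    def __init__(self, source_clfs, target_clf, decay=0.9):
        self.members = list(source_clfs) + [target_clf]
        self.weights = np.ones(len(self.members))
        self.target_clf = target_clf    # assumed to expose partial_fit (e.g. an SGD-based model)
        self.decay = decay

    def predict(self, x):
        votes = {}
        for w, clf in zip(self.weights, self.members):
            label = clf.predict(x.reshape(1, -1))[0]
            votes[label] = votes.get(label, 0.0) + w
        return max(votes, key=votes.get)

    def update(self, x, y):
        for i, clf in enumerate(self.members):
            if clf.predict(x.reshape(1, -1))[0] != y:
                self.weights[i] *= self.decay          # penalise members that erred
        self.weights /= self.weights.sum()
        self.target_clf.partial_fit(x.reshape(1, -1), [y])  # online update of the target model
```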

19.
This paper addresses the classification problem for applications with extensive amounts of data and a large number of features. The learning system developed utilizes a hierarchical multiple classifier scheme and is flexible, efficient, highly accurate and of low cost. The system has several novel features: (1) it uses a graph-theoretic clustering algorithm to group the training data into possibly overlapping clusters, each representing a dense region in the data space; (2) component classifiers trained on these dense regions are specialists whose probabilistic outputs are gated inputs to a super-classifier, and only those classifiers whose training clusters are most related to an unknown data instance send their outputs to the super-classifier; and (3) sub-class labelling is used to improve the classification of super-classes. The learning system achieves the goals of reducing the training cost and increasing the prediction accuracy compared to other multiple classifier algorithms. The system was tested on three large sets of data, two from the medical diagnosis domain and one from a forest cover classification problem. The results are superior to those obtained by several other learning algorithms.
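The gating idea, in which only the specialists whose training clusters are closest to an unknown instance feed the super-classifier, can be approximated in a few lines. The sketch below substitutes k-means for the graph-theoretic clustering and simply averages the probabilistic outputs of the nearest specialists, so both choices are assumptions made for brevity rather than the authors' design.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def fit_specialists(X, y, n_clusters=3):
    """Cluster the training data and train one specialist per cluster.
    Assumes every cluster contains examples of at least two classes."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    experts = [LogisticRegression(max_iter=1000).fit(X[km.labels_ == c], y[km.labels_ == c])
               for c in range(n_clusters)]
    return km, experts

def gated_predict_proba(x, km, experts, n_active=2):
    """Average the probabilistic outputs of the specialists whose centroids are nearest to x."""
    d = np.linalg.norm(km.cluster_centers_ - x, axis=1)
    active = np.argsort(d)[:n_active]
    return np.mean([experts[c].predict_proba(x.reshape(1, -1))[0] for c in active], axis=0)
```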

20.
Automated classification is usually not adjusted to specialized domains due to a lack of suitable data collections and insufficient characterization of the domain-specific content and its effect on the classification process. This work describes an approach for the automated multiclass classification of content components used in technical communication based on a vector space model. We show that differences in the form and substance of content components require an adaptation of document-based classification methods and validate our assumptions with multiple real-world data sets in two languages. As a result, we propose general adaptations of feature selection and token weighting, as well as new ideas for the measurement of classifier confidence and the semantic weighting of XML-based training data. We introduce several potential applications of our method and provide a prototypical implementation. Our contribution beyond the state of the art is a dedicated procedure model for the automated classification of content components in technical communication, which outperforms current document-centered or domain-agnostic approaches.
