Similar Documents
 20 similar documents found.
1.
A single technique cannot effectively solve multi-class classification problems. To address this, a basic probability assignment (BPA) output method based on one-vs-rest support vector machines (SVM) is proposed and combined with a Dempster-Shafer (D-S) evidence combination method based on the confidence maximum-entropy model, yielding a multi-class classification model based on SVM probability outputs and evidence theory. Simulation results on three UCI benchmark datasets show that the method achieves higher classification accuracy than the traditional one-vs-rest and one-vs-one hard-output methods and is an effective approach to multi-class classification.
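Item 1 fuses one-vs-rest SVM probability outputs through Dempster's rule of combination. Below is a minimal sketch of the rule itself, assuming BPAs are given as dictionaries mapping frozensets of class labels to masses; how the paper derives the BPAs from SVM outputs is not reproduced here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two basic probability assignments (BPAs) with Dempster's rule.

    Each BPA maps frozensets of class labels (focal elements) to masses
    that sum to 1.  Mass falling on empty intersections is the conflict K,
    which is renormalized away by the factor 1 / (1 - K).
    """
    fused, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: m / (1.0 - conflict) for s, m in fused.items()}

# Two toy BPAs over the frame {a, b}; Theta is the whole frame.
A, B, Theta = frozenset('a'), frozenset('b'), frozenset('ab')
m1 = {A: 0.6, B: 0.1, Theta: 0.3}
m2 = {A: 0.5, B: 0.2, Theta: 0.3}
fused = dempster_combine(m1, m2)
```

Here the conflict mass is K = 0.6·0.2 + 0.1·0.5 = 0.17, so all agreeing products are divided by 0.83.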

2.
Advances in Multi-class Twin Support Vector Machines (Cited: 3; self: 0, others: 3)
Twin support vector machines have attracted wide attention for their simple model, fast training, and excellent performance. The algorithm was originally proposed for binary classification and cannot be applied directly to the multi-class problems common in practice. Recently, researchers have worked on extending the binary twin support vector machine to the multi-class setting and have proposed a variety of multi-class twin support vector machines; this line of research has made considerable progress. This paper reviews the development of multi-class twin support vector machines, organizes them into a reasonable taxonomy, and analyzes the theory and geometric meaning of each type. Based on the organization of their component sub-classifiers, multi-class twin support vector machines are divided into those based on the "one-vs-rest" strategy, the "one-vs-one" strategy, the "one-vs-one-vs-rest" strategy, binary-tree structures, and the "many-vs-one" strategy. The training procedure of multi-class twin support vector machines based on directed acyclic graphs resembles that of the "one-vs-one" strategy, but their decision procedure has its own particular advantages and disadvantages, so they are treated as a separate sixth class. The paper analyzes and summarizes the algorithmic ideas and theoretical foundations of these six types and compares their classification performance experimentally. This work establishes connections and comparisons among the various multi-class twin support vector machines, enabling newcomers to quickly grasp the essential differences between them, and offers guidance for choosing a suitable multi-class twin support vector machine in practical applications.

3.
Application of the Least Squares Support Vector Machine to Snoring Diagnosis (Cited: 1; self: 0, others: 1)
The support vector machine is one of the important methods in data mining and machine learning, and the least squares support vector machine (LS-SVM) is an important extension of the SVM learning algorithm with a clear advantage in training speed. This paper introduces the LS-SVM into the existing multi-class SVM algorithms (one-vs-one, one-vs-rest, error-correcting output codes, and minimum output coding) and applies them to the diagnosis and prediction of snoring-related sleep disorders, achieving good results.
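The training-speed advantage cited in item 3 comes from the fact that an LS-SVM replaces the SVM's quadratic program with a single linear system. A toy sketch of that idea, assuming the simplest function-estimation form of LS-SVM on ±1 labels with an RBF kernel (the paper's multi-class coding schemes are not reproduced, and the hand-rolled Gaussian-elimination solver merely stands in for a linear-algebra library):

```python
import math

def rbf(x, z, sigma=1.0):
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Solve A v = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (M[r][n] - sum(M[r][c] * v[c] for c in range(r + 1, n))) / M[r][r]
    return v

def lssvm_train(xs, ys, gamma=10.0):
    """LS-SVM dual: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        row = [1.0] + [rbf(xs[i], xs[j]) for j in range(n)]
        row[i + 1] += 1.0 / gamma          # ridge term I / gamma on the kernel diagonal
        A.append(row)
    sol = solve(A, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: 1 if sum(a * rbf(x, xi) for a, xi in zip(alpha, xs)) + b >= 0 else -1

xs, ys = [-2.0, -1.0, 1.0, 2.0], [-1, -1, 1, 1]
predict = lssvm_train(xs, ys)
```

Because every training point contributes an equality constraint, training is one linear solve in n + 1 unknowns rather than a constrained quadratic program.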

4.
To address common problems that arise when combining conflicting evidence in information fusion, such as general conflict, one-vote veto, and poor robustness, there are two classes of improvement strategies: one modifies the Dempster-Shafer (DS) combination rule, the other modifies the evidence source model. This paper proposes a model-modification method under the closed-world assumption. The Jousselme distance function is introduced to quantify the correlation between focal-element attributes and between bodies of evidence, from which the support degree of each piece of evidence is computed. A reference evidence is obtained by weighted averaging of the support degrees; it is used to assess the uncertainty of each original piece of evidence and to obtain magnitude and direction similarities between each original evidence and the reference. On this basis, a dynamic similarity-correction model is established; the DS combination rule is then used to combine the evidence, and the average of multiple combination results from the dynamic correction model is taken as the final result. Simulation experiments verify the effectiveness and reasonableness of the proposed method.
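The support degrees in item 4 start from the Jousselme distance, d(m1, m2) = sqrt(0.5 · (m1 − m2)ᵀ D (m1 − m2)) with D(A, B) = |A ∩ B| / |A ∪ B|. A small sketch of that formula, assuming BPAs are dictionaries keyed by frozensets of labels (the rest of the paper's correction pipeline is not reproduced):

```python
import math

def jousselme_distance(m1, m2):
    """d(m1, m2) = sqrt(0.5 * diff^T D diff), D[A][B] = |A∩B| / |A∪B|."""
    focal = sorted(set(m1) | set(m2), key=sorted)   # fixed ordering of focal elements
    diff = [m1.get(f, 0.0) - m2.get(f, 0.0) for f in focal]
    acc = 0.0
    for i, a in enumerate(focal):
        for j, b in enumerate(focal):
            jaccard = len(a & b) / len(a | b)
            acc += diff[i] * jaccard * diff[j]
    return math.sqrt(0.5 * max(acc, 0.0))

A, B = frozenset('a'), frozenset('b')
d_same = jousselme_distance({A: 0.7, B: 0.3}, {A: 0.7, B: 0.3})
d_opposed = jousselme_distance({A: 1.0}, {B: 1.0})
```

Identical BPAs are at distance 0, and two singletons in total disagreement are at the maximum distance 1, which is why the distance is a convenient basis for support weighting.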

5.
In power systems, using image recognition to read digital instruments that lack a data interface helps raise the level of automation and supports safe operation. This paper describes the image-processing pipeline and the method for recognizing the values displayed by digital instruments, explains the basic principles of the support vector machine, and solves the ten-class digit recognition problem by combining multiple binary classifiers under both one-vs-rest and one-vs-one strategies; the two resulting multi-class classifiers are then used to recognize instrument readings. Finally, the recognition results of the SVM approach are compared with those of other methods. Experimental results show that the SVM approach achieves a higher recognition rate.

6.
Research on Multi-class Support Vector Machine Classifiers (Cited: 4; self: 0, others: 4)
Building on a description of the binary support vector machine classifier, this paper focuses on one-vs-rest and one-vs-one classifier combination schemes for the M-class problem. For the unclassifiable regions that arise in these combination schemes, it discusses solutions such as introducing fuzzy membership functions and constructing directed acyclic graphs, and compares the classification performance of the different combination schemes on standard benchmark datasets.

7.
Fusion-based Multi-class Support Vector Machines (Cited: 2; self: 1, others: 1)
A support vector machine handles two-class problems; it can be extended to the multi-class case via the "one-vs-one" and "one-vs-rest" schemes. This paper proposes a method for constructing a multi-class SVM by fusing two-class SVMs, using the maximum, minimum, product, mean, and median rules, majority voting, and various decision-template fusion methods. The method is applied to facial expression recognition on the JAFFE database of Japanese female facial expressions, and the results demonstrate its effectiveness.
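The fixed fusion rules listed in item 7 all reduce to combining per-classifier class-probability vectors column by column. A generic sketch of those rules (max, min, product, mean, median, and majority vote), independent of how the underlying two-class SVMs produce the probabilities:

```python
import math
from statistics import median

def fuse(prob_rows, rule):
    """Fuse per-classifier probability vectors into a predicted class index.

    prob_rows[i][c] is classifier i's probability for class c.
    """
    n_classes = len(prob_rows[0])
    if rule == "vote":
        votes = [max(range(n_classes), key=row.__getitem__) for row in prob_rows]
        return max(range(n_classes), key=votes.count)
    combiners = {
        "max": max,
        "min": min,
        "mean": lambda col: sum(col) / len(col),
        "median": median,
        "product": lambda col: math.prod(col),
    }
    scores = [combiners[rule]([row[c] for row in prob_rows]) for c in range(n_classes)]
    return max(range(n_classes), key=scores.__getitem__)

probs = [[0.7, 0.3], [0.6, 0.4], [0.4, 0.6]]

# The rules can disagree: one very confident classifier outvoted by two weak ones.
probs2 = [[0.9, 0.1], [0.4, 0.6], [0.45, 0.55]]
```

On `probs2`, majority voting follows the two weakly confident classifiers while the mean rule follows the single confident one, which is exactly the kind of behavioral difference such comparisons measure.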

8.
Han Hu, Ren En'en. 《计算机工程与设计》 (Computer Engineering and Design), 2007, 28(18): 4454-4455, 4458
Multi-class classification with support vector machines is generally solved by combining several two-class classifiers, and how to combine them is the key to the method. This paper proposes an improved SVM decision-tree multi-class model that determines the tree structure from an inter-class separability measure: classes with higher separability are placed higher in the tree, so that easily separated classes are split off as early as possible. A set of experiments demonstrates the effectiveness of the model.
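The tree structure in item 8 depends only on the separability ordering. A toy sketch, assuming a class's separability is measured as the distance from its centroid to the nearest other centroid, one of several reasonable choices and not necessarily the paper's exact measure, with the most separable class split off first:

```python
def centroid(points):
    dim = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dim)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def peel_order(class_points):
    """Order classes for an SVM decision tree: most separable class first.

    class_points maps a label to its training points; a class's
    separability is its centroid's distance to the nearest remaining
    centroid, so well-isolated classes are peeled off early.
    """
    remaining = dict(class_points)
    order = []
    while len(remaining) > 1:
        cents = {c: centroid(pts) for c, pts in remaining.items()}
        sep = {c: min(dist(cents[c], cents[o]) for o in cents if o != c)
               for c in cents}
        best = max(sep, key=sep.get)
        order.append(best)
        del remaining[best]
    order.extend(remaining)
    return order

data = {
    "A": [(0.0,), (0.2,)],
    "B": [(1.0,), (1.2,)],
    "C": [(10.0,), (10.2,)],
}
```

Each position in the returned order corresponds to one node of the decision tree: a binary SVM separating that class from everything still remaining.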

9.
A New Study of Multi-class SVM Classification Algorithms (Cited: 2; self: 1, others: 1)
The support vector machine was originally proposed for two-class problems, and extending it to the multi-class case is one of the current hot topics in SVM research. This paper analyzes the decomposition-reconstruction approach to multi-class SVM classification in depth, discussing in detail the two key factors that affect classifier performance, namely the decomposition strategy and the combination strategy, and verifies this view experimentally. Finally, SVM multi-class methods including the M-ary SVM and the fuzzy SVM are compared experimentally.

10.
Jin Hongbin, Lan Jiangqiao, Gao Xiao. 《计算机应用》 (Journal of Computer Applications), 2010, 30(10): 2588-2591
To address the shortcomings of the Dempster combination rule in Dempster-Shafer theory (DST) when handling highly conflicting evidence, a two-level combination method is proposed. The method treats high and low conflict differently: the first level applies the PCR6 rule from DSm theory (DSmT) to resolve potentially high-conflict evidence, while the second level applies the Dempster rule to retain good convergence speed and computational performance, so that evidence with any degree of conflict is handled reasonably and effectively. A worked example verifies the effectiveness of the method.

11.
Ensemble learning is attracting much attention in the pattern recognition and machine learning communities for its good generalization. Both theoretical and experimental research shows that combining a set of accurate and diverse classifiers leads to a powerful classification system. This paper proposes FS-PP-EROS, an algorithm for the selective ensemble of rough subspaces. Rough-set-based attribute reduction is used to generate a set of reducts, and each reduct then trains a base classifier. An accuracy-guided forward search and post-pruning strategy selects a subset of the base classifiers to construct an efficient and effective ensemble system. The experiments show that the accuracy of ensembles built with the accuracy-guided forward search first increases, reaches a maximum, and then decreases as base classifiers are added sequentially; the base classifiers added after the maximum are deleted. Experimental results show that the proposed ensembles outperform bagging and random subspace methods in both accuracy and ensemble size. FS-PP-EROS maintains or improves classification accuracy with very few base classifiers, yielding a powerful and compact classification system.
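The forward search with post-pruning described in item 11 is easy to state concretely. A sketch under the assumption that base classifiers are represented only by their prediction vectors on a validation set (the rough-set reduct generation itself is not reproduced): classifiers are added greedily by ensemble accuracy, and everything added after the accuracy peak is pruned.

```python
def vote(pred_rows, i):
    """Majority vote of the selected classifiers on sample i."""
    labels = [row[i] for row in pred_rows]
    return max(set(labels), key=labels.count)

def accuracy(pred_rows, y):
    hits = sum(vote(pred_rows, i) == y[i] for i in range(len(y)))
    return hits / len(y)

def forward_select_prune(all_preds, y):
    """Accuracy-guided forward search, then prune past the accuracy peak."""
    chosen, pool, acc_trace = [], list(range(len(all_preds))), []
    while pool:
        best = max(pool,
                   key=lambda j: accuracy([all_preds[k] for k in chosen + [j]], y))
        chosen.append(best)
        pool.remove(best)
        acc_trace.append(accuracy([all_preds[k] for k in chosen], y))
    peak = acc_trace.index(max(acc_trace))   # first maximum of the trace
    return chosen[:peak + 1]

y = [1, 0, 1, 0]
preds = [
    [1, 0, 1, 0],   # agrees with y everywhere
    [1, 0, 1, 1],
    [0, 0, 1, 0],
]
selected = forward_select_prune(preds, y)
```

With one perfect base classifier the accuracy trace peaks immediately, so the pruning step keeps only that single classifier — the "compact ensemble" behavior the abstract describes.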

12.
Improving the accuracy of machine learning algorithms is vital in designing high-performance computer-aided diagnosis (CADx) systems. Research has shown that the performance of a base classifier can be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensembles of 30 machine learning algorithms and evaluate their classification performance on Parkinson's, diabetes, and heart disease datasets from the literature. First, the feature dimension of the three datasets is reduced with the correlation-based feature selection (CFS) algorithm. Second, the classification performance of the 30 machine learning algorithms is measured on the three datasets. Third, 30 classifier ensembles are constructed with the RF algorithm to assess the performance of the respective classifiers on the same disease data. All experiments use leave-one-out validation, and the 60 algorithms are evaluated with three metrics: classification accuracy (ACC), kappa error (KE), and area under the receiver operating characteristic (ROC) curve (AUC). The base classifiers achieved average accuracies of 72.15%, 77.52%, and 84.43% on the diabetes, heart, and Parkinson's datasets, respectively, while the RF ensembles achieved 74.47%, 80.49%, and 87.13%. RF, a recently proposed classifier ensemble algorithm, can thus be used to improve the accuracy of a wide range of machine learning algorithms when designing advanced CADx systems.

13.
Ensemble classification combines several weak classifiers according to some rule and can effectively improve classification performance. In the combination, the weak classifiers usually differ in how much they contribute to the final decision. The extreme learning machine (ELM) is a recently proposed algorithm for training single-hidden-layer feedforward neural networks. Using ELMs as base classifiers, this paper proposes a weighted ELM ensemble based on differential evolution, in which the differential evolution algorithm optimizes the weight of each base classifier. Experimental results show that, compared with simple-voting and AdaBoost ensembles, the method achieves higher classification accuracy and better generalization.
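The use of differential evolution in item 13 can be sketched generically. Assuming base-classifier predictions on a validation set are given, a DE/rand/1/bin loop evolves a weight vector scored by weighted-vote accuracy; the ELM base learners themselves are not reproduced, and the population size, F, and CR below are illustrative.

```python
import random

def weighted_vote_accuracy(w, preds, y):
    """Accuracy of weighted voting: classifier k adds w[k] to its predicted label."""
    hits = 0
    for i, yi in enumerate(y):
        scores = {}
        for k, row in enumerate(preds):
            scores[row[i]] = scores.get(row[i], 0.0) + w[k]
        if max(scores, key=scores.get) == yi:
            hits += 1
    return hits / len(y)

def de_optimize(preds, y, pop_size=12, gens=40, F=0.5, CR=0.9, seed=0):
    rng = random.Random(seed)
    n = len(preds)
    # Seed the population with uniform weights so DE can only improve on simple voting.
    pop = [[1.0] * n] + [[rng.random() for _ in range(n)] for _ in range(pop_size - 1)]
    fit = [weighted_vote_accuracy(w, preds, y) for w in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # DE/rand/1/bin: mutate three random members, crossover per dimension.
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if rng.random() < CR or d == rng.randrange(n) else pop[i][d]
                     for d in range(n)]
            f = weighted_vote_accuracy(trial, preds, y)
            if f >= fit[i]:                 # greedy selection keeps improvements
                pop[i], fit[i] = trial, f
    best = max(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

y = [1, 0, 1, 0, 1]
preds = [
    [1, 0, 1, 0, 1],   # strong base learner
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 1],
]
weights, best_acc = de_optimize(preds, y)
```

Because the initial population contains the uniform-weight vector and selection is greedy, the returned accuracy never falls below that of simple voting.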

14.

Dementia is one of the leading causes of severe cognitive decline; it induces memory loss and impairs the daily life of millions of people worldwide. In this work, we consider the classification of dementia from magnetic resonance (MR) imaging and clinical data with machine learning models. We apply univariate feature selection as a filter-based feature selection step in MR data pre-processing. Bagged decision trees are also implemented to estimate the features important for achieving good classification accuracy. Several ensemble-learning approaches, namely gradient boosting (GB), extreme gradient boosting (XGB), voting-based, and random forest (RF) classifiers, are considered for the diagnosis of dementia. Moreover, we propose voting-based classifiers that train on an ensemble of basic machine learning models such as the extra-trees classifier, RF, GB, and XGB; this voting-based approach is one of the main contributions. The performance of the different classifiers is evaluated in terms of precision, accuracy, recall, and F1 score, and the receiver operating characteristic curve (ROC) and area under the ROC curve (AUC) are used as additional comparison metrics. Experimental results show that the voting-based classifiers often outperform RF, GB, and XGB in precision, recall, and accuracy, indicating their promise for differentiating dementia from imaging and clinical data.

15.
In machine learning, a combination of classifiers, known as an ensemble classifier, often outperforms individual ones. While many ensemble approaches exist, it remains difficult to find a suitable ensemble configuration for a particular dataset. This paper proposes a novel ensemble construction method that uses PSO-generated weights to create an ensemble of classifiers with better accuracy for intrusion detection. The local unimodal sampling (LUS) method is used as a meta-optimizer to find better behavioral parameters for PSO. For our empirical study, we took five random subsets of the well-known KDD99 dataset. Ensemble classifiers are created with the new approach as well as the weighted majority algorithm (WMA). Our experimental results suggest that the new approach can generate ensembles that outperform WMA in classification accuracy.

16.
The ability to predict a student's performance could be useful in many ways in university-level distance learning. Students' marks in a few written assignments can constitute the training set for a supervised machine learning algorithm. With the explosive increase of data and information, incremental learning ability has become more and more important for machine learning approaches. Online algorithms try to forget irrelevant information instead of synthesizing all available information (as opposed to classic batch learning algorithms). Combining classifiers has been proposed as a new direction for improving classification accuracy; however, most ensemble algorithms operate in batch mode. This paper therefore proposes an online ensemble of classifiers that combines an incremental version of Naive Bayes, 1-NN, and the WINNOW algorithm using the voting methodology. Among other significant conclusions, the proposed algorithm was found to be the most appropriate for constructing a software support tool.

17.
Ensemble learning has attracted considerable attention owing to its good generalization performance. The main issues in constructing a powerful ensemble are training a set of diverse and accurate base classifiers and combining them effectively. The ensemble margin, computed as the difference between the number of votes received by the correct class and the largest number of votes received by any other class, is widely used to explain the success of ensemble learning; however, this definition does not consider the classification confidence of the base classifiers. In this work, we explore the influence of base-classifier confidence in ensemble learning and reach some interesting conclusions. First, we extend the definition of the ensemble margin to incorporate the classification confidence of the base classifiers. Then, an optimization objective is designed to compute the weights of the base classifiers by minimizing the margin-induced classification loss. Several strategies for using the confidences and weights are tried. We observe that confidence-weighted voting is better than simple voting when all base classifiers are used, and that ensemble pruning can further improve the performance of a weighted-voting ensemble. We also compare the proposed fusion technique with some classical algorithms; the experimental results confirm the effectiveness of confidence-weighted voting.
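The central object of item 17, a margin extended with classifier confidence, can be written down directly. A sketch assuming each base classifier emits a (label, confidence) pair and carries a weight; the margin of a sample is the normalized gap between the score of the true class and that of the best other class (the paper's weight-optimization objective is not reproduced):

```python
def scores(outputs, weights):
    """outputs: list of (label, confidence) pairs; score[c] = sum of w_k * conf_k."""
    s = {}
    for (label, conf), w in zip(outputs, weights):
        s[label] = s.get(label, 0.0) + w * conf
    return s

def confidence_margin(outputs, weights, true_label):
    """(score of true class - best other class) / total weighted confidence."""
    s = scores(outputs, weights)
    total = sum(w * conf for (_, conf), w in zip(outputs, weights))
    others = [v for c, v in s.items() if c != true_label]
    gap = s.get(true_label, 0.0) - (max(others) if others else 0.0)
    return gap / total

outputs = [("cat", 0.9), ("cat", 0.6), ("dog", 0.8)]
weights = [1.0, 1.0, 1.0]
m = confidence_margin(outputs, weights, "cat")
```

With simple voting the margin here would be (2 − 1) / 3; weighting by confidence shrinks it to (1.5 − 0.8) / 2.3, reflecting that the dissenting classifier is quite confident.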

18.
Combining random subspaces with an ensemble of kernel extreme learning machines (KELMs), a new hyperspectral remote-sensing image classification method is proposed. First, the random subspace method generates several equally sized feature subsets from the full feature set of the hyperspectral image data; then a KELM is trained on each subset to obtain the base classifiers; finally, the outputs of all base classifiers are combined by voting to produce the classification result. Experiments on hyperspectral remote-sensing datasets show that the method improves classification and that its overall accuracy is higher than that of the kernel extreme learning machine and random forest methods.

19.
The kernel matching pursuit classifier (KMPC), a novel classification machine in pattern recognition, has a notable advantage in classification problems owing to the sparsity of its solution. Unfortunately, its performance falls far short of the theoretically expected level. Ensemble methods are learning algorithms that construct a collection of individual classifiers that are independent yet accurate, and then classify a new data point by taking a vote over their predictions; in this way, classifier performance can be improved greatly. In this paper, after a thorough investigation of the principles of KMPC and ensemble methods, we develop the theory of the KMPC ensemble and show how to construct it. Experiments on artificial and UCI data show that the KMPC ensemble combines the advantages of KMPC with those of ensemble methods and improves classification performance remarkably.

20.
With the increase of data and information, incremental learning ability is becoming more and more important for machine learning approaches. Online algorithms try to forget irrelevant information instead of synthesizing all available information (as opposed to classic batch learning algorithms). Combining classifiers has been proposed as a new direction for improving classification accuracy; however, most ensemble algorithms operate in batch mode. For this reason, we propose an incremental ensemble that combines, via the voting methodology, five classifiers that can operate incrementally: Naive Bayes, Averaged One-Dependence Estimators (AODE), 3-Nearest Neighbors, Non-Nested Generalised Exemplars (NNGE), and the K* algorithm. We performed a large-scale comparison of the proposed ensemble with other state-of-the-art algorithms on several datasets, and the proposed method produced better accuracy in most cases.
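The ensemble in item 20 only requires that each member support a per-example update. A sketch of that contract with one concrete member, an incremental categorical Naive Bayes that simply maintains counts; the other four algorithms are omitted, and the Laplace smoothing is an illustrative choice.

```python
from collections import defaultdict

class IncrementalNB:
    """Categorical Naive Bayes trained one example at a time."""
    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feat_counts = defaultdict(int)   # (class, position, value) -> count
        self.values = defaultdict(set)        # position -> values seen so far

    def learn(self, x, y):
        self.class_counts[y] += 1
        for pos, v in enumerate(x):
            self.feat_counts[(y, pos, v)] += 1
            self.values[pos].add(v)

    def predict(self, x):
        total = sum(self.class_counts.values())
        best, best_p = None, -1.0
        for c, nc in self.class_counts.items():
            p = nc / total
            for pos, v in enumerate(x):
                # Laplace-smoothed conditional probability P(v | c)
                p *= (self.feat_counts[(c, pos, v)] + 1) / (nc + len(self.values[pos]))
            if p > best_p:
                best, best_p = c, p
        return best

class VotingEnsemble:
    """Incremental ensemble: every member learns each example as it arrives."""
    def __init__(self, members):
        self.members = members

    def learn(self, x, y):
        for m in self.members:
            m.learn(x, y)

    def predict(self, x):
        votes = [m.predict(x) for m in self.members]
        return max(set(votes), key=votes.count)

ens = VotingEnsemble([IncrementalNB()])   # a real ensemble would mix five algorithms
stream = [(("sunny", "hot"), "no"), (("rainy", "mild"), "yes"),
          (("sunny", "mild"), "no"), (("rainy", "hot"), "yes")]
for x, y in stream:
    ens.learn(x, y)
```

Because `learn` touches only counters, the model never revisits past examples, which is exactly what distinguishes this design from a batch ensemble.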


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号