Similar Documents
20 similar documents found (search time: 31 ms)
1.
Lazy Learning of Bayesian Rules
The naive Bayesian classifier provides a simple and effective approach to classifier learning, but its attribute independence assumption is often violated in the real world. A number of approaches have sought to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree, and generates a local naive Bayesian classifier at each leaf. The tests leading to a leaf can alleviate attribute inter-dependencies for the local naive Bayesian classifier. However, Bayesian tree learning still suffers from the small disjunct problem of tree learning. While inferred Bayesian trees demonstrate low average prediction error rates, there is reason to believe that error rates will be higher for those leaves with few training examples. This paper proposes the application of lazy learning techniques to Bayesian tree induction and presents the resulting lazy Bayesian rule learning algorithm, called LBR. This algorithm can be justified by a variant of Bayes theorem which supports a weaker conditional attribute independence assumption than is required by naive Bayes. For each test example, it builds a most appropriate rule with a local naive Bayesian classifier as its consequent. It is demonstrated that the computational requirements of LBR are reasonable in a wide cross-section of natural domains. Experiments with these domains show that, on average, this new algorithm obtains lower error rates significantly more often than the reverse in comparison to a naive Bayesian classifier, C4.5, a Bayesian tree learning algorithm, a constructive Bayesian classifier that eliminates attributes and constructs new attributes using Cartesian products of existing nominal attributes, and a lazy decision tree learning algorithm. It also outperforms, although the result is not statistically significant, a selective naive Bayesian classifier.
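A minimal Python sketch of the lazy-rule idea, assuming discrete attributes: for each test instance, attributes are greedily moved out of the naive Bayes model and into a rule antecedent that restricts the training subset. Real LBR uses leave-one-out error and a significance test to decide when to grow the rule; this sketch uses plain training error on the subset to stay short.

```python
import math
from collections import Counter

def nb_log_posteriors(X, y, x, skip=frozenset()):
    """Naive Bayes with Laplace smoothing over discrete attributes,
    skipping attributes already fixed by the rule antecedent."""
    n = len(y)
    scores = {}
    for c, nc in Counter(y).items():
        s = math.log(nc / n)
        for a, v in enumerate(x):
            if a in skip:
                continue
            match = sum(1 for xi, yi in zip(X, y) if yi == c and xi[a] == v)
            n_vals = len({xi[a] for xi in X})
            s += math.log((match + 1) / (nc + n_vals))
        scores[c] = s
    return scores

def lbr_classify(X, y, x):
    """Grow a rule antecedent for the single test instance x, then answer
    with the local naive Bayes model trained on the matching subset."""
    def error(pairs, skip):
        Xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
        wrong = 0
        for xi, yi in pairs:
            scores = nb_log_posteriors(Xs, ys, xi, skip)
            if max(scores, key=scores.get) != yi:
                wrong += 1
        return wrong / len(pairs)

    antecedent, subset = set(), list(zip(X, y))
    best = error(subset, antecedent)
    improved = True
    while improved:
        improved = False
        for a in range(len(x)):
            if a in antecedent:
                continue
            cand = [(xi, yi) for xi, yi in subset if xi[a] == x[a]]
            if len(cand) < 5 or len({yi for _, yi in cand}) < 2:
                continue                # too few examples or only one class
            e = error(cand, antecedent | {a})
            if e < best:
                antecedent.add(a)
                subset, best, improved = cand, e, True
                break
    Xs, ys = [p[0] for p in subset], [p[1] for p in subset]
    post = nb_log_posteriors(Xs, ys, x, antecedent)
    return max(post, key=post.get)
```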

2.
3.
The two-way-decision TAN classifier has a relatively high error rate when handling uncertain data. To address this, a new three-way-decision extension of the TAN Bayesian classifier (3WD-TAN) is proposed. First, a TAN Bayesian classification model is built, and prior probabilities and class-conditional probabilities are used to estimate the conditional probabilities needed by the three-way decision. Next, the 3WD-TAN classifier is constructed, with three-way classification rules for its positive, negative, and boundary regions; by exploiting the boundary region's strength in handling uncertain data, it corrects, to some extent, the classification errors made by the traditional TAN Bayesian classifier. Finally, comparative experiments on five UCI data sets against the NB, TAN, and SETAN algorithms show that 3WD-TAN achieves high accuracy and recall and is applicable to classification problems on data sets of different sizes.
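The three-way split itself is simple to state. The sketch below routes a class posterior from any base model (such as a TAN classifier) into the positive, negative, or boundary region; the thresholds alpha and beta are illustrative values, not the paper's.

```python
def three_way_decide(p_pos, alpha=0.7, beta=0.3):
    """Three-way decision rule on a class posterior: accept into the
    positive region, reject into the negative region, or defer to the
    boundary region where uncertain cases get further treatment.
    alpha and beta are illustrative thresholds, not the paper's values."""
    if p_pos >= alpha:
        return "POS"    # confident acceptance
    if p_pos <= beta:
        return "NEG"    # confident rejection
    return "BND"        # uncertain: handled by the boundary-region rules

# e.g. a posterior produced by a TAN model:
print(three_way_decide(0.55))   # -> "BND"
```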

4.
5.
A Restrictive Double-Level Bayesian Classification Model
The naive Bayes classification model is a simple and effective classification method, but its attribute-independence assumption prevents it from expressing dependencies among attribute variables, which hurts its classification performance. By analysing the classification principle of Bayesian models and a variant form of Bayes' theorem, a new classification model based on Bayes' theorem, DLBAN (double-level Bayesian network augmented naive Bayes), is proposed. The model establishes dependencies among attributes by selecting key attributes. The method is experimentally compared with the naive Bayes classifier and the TAN (tree augmented naive Bayes) classifier; the results show that DLBAN achieves higher classification accuracy on most data sets.

6.
Bauer, Eric; Kohavi, Ron. Machine Learning, 1999, 36(1-2): 105-139.
Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced variance of unstable methods, while boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves differently than AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates in conjunction with no-pruning are used, as well as when the data was backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances, emphasizing not only hard areas but also outliers and noise.
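A sketch of how such a bias/variance decomposition of 0-1 loss can be estimated empirically. This follows the Domingos-style decomposition (main-prediction bias plus disagreement variance) rather than the paper's exact definitions, and uses bootstrap replicates of the training set; the dataset and base learner are placeholders.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def bias_variance_01(make_model, X, y, X_test, y_test, rounds=50, seed=0):
    """Estimate a Domingos-style 0-1 loss decomposition over bootstrap
    replicates: 'bias' is the error of the main (majority) prediction,
    'variance' the mean disagreement with that main prediction."""
    rng = np.random.default_rng(seed)
    preds = np.empty((rounds, len(X_test)), dtype=y.dtype)
    for r in range(rounds):
        idx = rng.integers(0, len(X), len(X))          # bootstrap replicate
        preds[r] = make_model().fit(X[idx], y[idx]).predict(X_test)
    # main prediction = most frequent label per test point
    main = np.array([np.bincount(col).argmax() for col in preds.T])
    bias = (main != y_test).mean()
    variance = (preds != main).mean()
    return bias, variance

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
print(bias_variance_01(DecisionTreeClassifier, X_tr, y_tr, X_te, y_te))
```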

7.
Research on Multi-Layer Combined Classifiers
To improve the accuracy of supervised classification, this paper starts from the structure of combined classifiers and proposes a horizontal multi-layer combination model, analysing its mode of operation and combination properties. Each layer of the model contains one classifier, and each classifier's input and output together serve as the input of the next layer. We combine the simple (naive) Bayes method with a BP neural network into a two-layer classifier. Experimental results show that this form of combination effectively improves the classification accuracy of either method used alone.
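A minimal sketch of the layered scheme, assuming sklearn's GaussianNB and MLPClassifier stand in for the paper's naive Bayes and BP network, with a toy dataset in place of the paper's data: the second layer sees the first layer's input and output together.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Layer 1: naive Bayes. Its input and output together feed layer 2,
# so the BP network sees the original features plus the NB posteriors.
nb = GaussianNB().fit(X_tr, y_tr)
aug = lambda Z: np.hstack([Z, nb.predict_proba(Z)])

# Layer 2: a BP neural network on the augmented representation.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(aug(X_tr), y_tr)
print("two-layer accuracy:", mlp.score(aug(X_te), y_te))
```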

8.
The conditional-independence assumption at the heart of the naive Bayes classifier is a strong one, and its effect varies across data sets, so it has become the starting point of many improved algorithms; some studies, however, have noted that violating the assumption does not harm the classifier as much as expected. Starting instead from reducing the estimation error of the posterior probability, this paper proposes a conditional-entropy-matching semi-naive Bayes classifier. Experiments show that the method effectively improves the performance of the naive Bayes classifier.

9.
This paper presents a novel way to speed up the evaluation time of a boosting classifier. We make a shallow (flat) network deep (hierarchical) by growing a tree from decision regions of a given boosting classifier. The tree provides many short paths for speeding up while preserving the reasonably smooth decision regions of the boosting classifier for good generalisation. For converting a boosting classifier into a decision tree, we formulate a Boolean optimisation problem, which has been previously studied for circuit design but limited to a small number of binary variables. In this work, a novel optimisation method is proposed for, firstly, several tens of variables, i.e. weak-learners of a boosting classifier, and then any larger number of weak-learners by using a two-stage cascade. Experiments on the synthetic and face image data sets show that the obtained tree achieves a significant speed-up over both a standard boosting classifier and Fast-exit, a previously described method for speeding up boosting classification, at the same accuracy. The proposed method as a general meta-algorithm is also useful for a boosting cascade, where it speeds up individual stage classifiers by different gains. The proposed method is further demonstrated for fast-moving object tracking and segmentation problems.
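The Fast-exit baseline mentioned above rests on a simple observation: evaluation of a boosted score can stop once the weak-learner weight still unseen cannot flip the running sign. A minimal sketch of that idea (the assumption that each weak learner returns ±1, and the exact form of Fast-exit, are mine, not the paper's):

```python
def fast_exit(x, weak_learners, alphas):
    """Early-exit evaluation of a boosted classifier sum_t alpha_t * h_t(x):
    stop as soon as the remaining weight cannot change the sign of the
    running score. Assumes each weak learner returns +1 or -1."""
    remaining = sum(alphas)
    score = 0.0
    for h, a in zip(weak_learners, alphas):
        score += a * h(x)
        remaining -= a
        if abs(score) > remaining:      # outcome already decided
            break
    return 1 if score >= 0 else -1
```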

10.
A Naive Bayes Classification Model Based on Improved Attribute Weighting
A new measure of the correlation between attributes is constructed, and a naive Bayes classification model with improved attribute weighting is proposed. Experiments show that the proposed model clearly outperforms the classification model previously proposed by Zhang Shunzhong et al.
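The paper's correlation measure is its own; as a stand-in, the sketch below weights each discrete attribute by its mutual information with the class and scales that attribute's log-likelihood contribution accordingly.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) for one discrete attribute against the class, used here as
    a stand-in attribute weight (the paper defines its own measure)."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def weighted_nb_predict(X, y, x, weights):
    """Weighted naive Bayes: argmax_c log P(c) + sum_i w_i * log P(x_i | c),
    with Laplace smoothing."""
    n, counts = len(y), Counter(y)
    best_c, best_s = None, -math.inf
    for c, nc in counts.items():
        s = math.log(nc / n)
        for i, v in enumerate(x):
            match = sum(1 for xi, yi in zip(X, y) if yi == c and xi[i] == v)
            vals = len({xi[i] for xi in X})
            s += weights[i] * math.log((match + 1) / (nc + vals))
        if s > best_s:
            best_c, best_s = c, s
    return best_c

# weights computed once from the training data:
# weights = [mutual_information([row[i] for row in X], y)
#            for i in range(len(X[0]))]
```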

11.
In naive Bayes classification, similarity between the same feature value across different classes easily leads to misclassification. To address this, a naive Bayes algorithm based on the gravity model is proposed: the square of the distance variable in the gravity formula is taken as a "similarity distance", and the gravity model is used to characterise the similarity between a feature and the class it belongs to. This mitigates naive Bayes' sensitivity to the conditional-independence assumption and its tendency to treat all features as homogeneous, and it effectively suppresses noise, thereby correcting the prior probabilities and improving classification accuracy. Classification experiments on remote-sensing images show that the gravity-model naive Bayes algorithm is easy to implement, practical to operate, and achieves a higher average classification accuracy.

12.
A Naive Bayes Classifier Based on Multiple Discriminant Analysis
By analysing the classification principle of the naive Bayes classifier and drawing on the strengths of multiple discriminant analysis, a naive Bayes classifier based on multiple discriminant analysis, DANB (Discriminant Analysis Naive Bayesian classifier), is proposed. The method is experimentally compared with the naive Bayesian classifier (NB) and the TAN classifier (Tree Augmented Naive Bayesian classifier); the results show that DANB achieves higher classification accuracy on most data sets.
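A hedged sketch of a DANB-like pipeline, assuming sklearn's LinearDiscriminantAnalysis stands in for the paper's multiple discriminant analysis: project the features with discriminant analysis, then run naive Bayes in the projected space, where class-conditional independence is more plausible. DANB's exact construction is the paper's own.

```python
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = load_wine(return_X_y=True)   # placeholder dataset
danb_like = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                          GaussianNB())
print("5-fold accuracy:", cross_val_score(danb_like, X, y, cv=5).mean())
```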

13.
Classification performance of an ensemble method can be deciphered by studying the bias and variance contribution to its classification error. Statistically, the bias and variance of a single classifier are controlled by the size of the training set and the complexity of the classifier. It has been both theoretically and empirically established that the classification performance (hence bias and variance) of a single classifier can be improved partially by using a suitable ensemble method of the classifier and resampling the original training set. In this paper, we have empirically examined the bias-variance decomposition of three different types of ensemble methods with different training sample sizes consisting of 10% up to a maximum of 63% of the observations from the original training sample. The first ensemble is bagging, the second a boosting-type ensemble named AdaBoost, and the last a bagging-type hybrid ensemble method called bundling. All the ensembles are trained on training samples constructed with small subsampling ratios (SSR) 0.10, 0.20, 0.30, 0.40, 0.50 and bootstrapping. The experiments are all done on 20 UCI Machine Learning repository datasets and designed to find out the optimal training sample size (smaller than the original training sample) for each ensemble, and then find the optimal ensemble with smaller training sets with respect to the bias-variance performance. The bias-variance decomposition of bundling shows that this ensemble method with small subsamples has significantly lower bias and variance than the subsampled and bootstrapped versions of bagging and AdaBoost.
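A minimal sketch of the subsampled-bagging setup the paper trains, assuming decision trees as base learners and integer class labels: each base model sees only an SSR-sized sample drawn without replacement, and predictions are combined by majority vote.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def subsampled_bagging_fit(X, y, ssr=0.3, n_estimators=50, seed=0):
    """Bagging on small subsamples: each tree is trained on ssr * len(X)
    points drawn without replacement (the SSR-based analogue of the
    bootstrap used by standard bagging)."""
    rng = np.random.default_rng(seed)
    m = max(1, int(ssr * len(X)))
    models = []
    for _ in range(n_estimators):
        idx = rng.choice(len(X), size=m, replace=False)
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def vote(models, X_test):
    """Majority vote over the ensemble (assumes integer class labels)."""
    preds = np.stack([m.predict(X_test) for m in models])
    return np.array([np.bincount(col).argmax() for col in preds.T])
```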

14.
The positive unlabeled learning term refers to the binary classification problem in the absence of negative examples. When only positive and unlabeled instances are available, semi-supervised classification algorithms cannot be directly applied, and thus new algorithms are required. One of these positive unlabeled learning algorithms is the positive naive Bayes (PNB), which is an adaptation of the naive Bayes induction algorithm that does not require negative instances. In this work we propose two ways of enhancing this algorithm. On one hand, we have taken the concept behind PNB one step further, proposing a procedure to build more complex Bayesian classifiers in the absence of negative instances. We present a new algorithm (named positive tree augmented naive Bayes, PTAN) to obtain tree augmented naive Bayes models in the positive unlabeled domain. On the other hand, we propose a new Bayesian approach to deal with the a priori probability of the positive class that models the uncertainty over this parameter by means of a Beta distribution. This approach is applied to both PNB and PTAN, resulting in two new algorithms. The four algorithms are empirically compared in positive unlabeled learning problems based on real and synthetic databases. The results obtained in these comparisons suggest that, when the predicting variables are not conditionally independent given the class, the extension of PNB to more complex networks increases the classification performance. They also show that our Bayesian approach to the a priori probability of the positive class can improve the results obtained by PNB and PTAN.
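A sketch of the PNB-style core trick, under the usual PU-learning assumption that the unlabeled sample approximates the marginal P(x): since P(x) = p·P(x|pos) + (1-p)·P(x|neg), the negative-class likelihood can be recovered without any negative examples. The paper's Beta-distribution treatment of the prior p is not shown here.

```python
import numpy as np

def pnb_neg_likelihood(p_x_given_pos, p_x, p):
    """Recover P(x|neg) for one attribute's value distribution from the
    positive-sample estimate P(x|pos), the unlabeled-sample estimate P(x),
    and the positive-class prior p:
        P(x|neg) = (P(x) - p * P(x|pos)) / (1 - p)
    clipped to a small floor, since finite-sample estimates can dip below
    zero, then renormalised over the attribute's values."""
    est = (np.asarray(p_x) - p * np.asarray(p_x_given_pos)) / (1.0 - p)
    est = np.clip(est, 1e-9, None)
    return est / est.sum()

# e.g. value frequencies of one attribute, positives vs unlabeled:
print(pnb_neg_likelihood([0.7, 0.2, 0.1], [0.4, 0.35, 0.25], p=0.3))
```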

15.
This paper uses the event-study methodology to analyse the overall characteristics of stocks recommended by securities analysts, attempts to identify stocks matching these characteristics in order to earn excess returns, and applies a basic Bayes classification method to stock selection. A statistical analysis of the returns of the selected Shanghai A-share stocks shows that, with a suitable choice of the Bayes classifier's parameters, good returns can be obtained. The results indicate that the method is of practical significance and effective.

16.
With the wide spread of the Internet, social networks represented by microblogs generate huge volumes of data, and mining useful information from these data has become an important research direction. Based on the characteristics of microblog text, this paper proposes filtering out noise posts with a joint classifier and then discovering topics with an LDA model. The joint classifier combines naive Bayes, a support vector machine, and a decision tree through a simple voting mechanism; in experiments the joint classifier reaches an accuracy of 87%, showing that this classification approach is both feasible and effective.
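A minimal sketch of the joint classifier, assuming sklearn's hard VotingClassifier implements the simple voting the abstract describes, with TF-IDF features for the short texts; the toy texts and labels are placeholders.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Simple (hard) majority voting over the three base models.
joint = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier([("nb", MultinomialNB()),
                      ("svm", LinearSVC()),
                      ("tree", DecisionTreeClassifier())],
                     voting="hard"))

texts = ["useful post about the topic", "buy now spam spam", "more topic talk"]
labels = [1, 0, 1]                      # 1 = keep, 0 = noise (toy data)
joint.fit(texts, labels)
print(joint.predict(["a topic question", "spam offer"]))
```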

17.

The naive Bayes classifier cannot make effective use of dependency information among attributes, and existing dependency extensions emphasise efficiency, so the classification accuracy of the extended classifiers still leaves room for improvement. To address this, attribute densities are estimated with a Gaussian kernel function with a smoothing parameter, and the naive Bayes classifier's network dependencies are extended by combining a classification-accuracy criterion with greedy selection of each attribute's parent node. Experiments on continuous-attribute classification data from UCI show that the classifier with the extended network dependencies achieves good classification accuracy.
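The density-estimation step is easy to sketch: for a continuous attribute, the class-conditional density p(x|c) is estimated with a Gaussian kernel whose bandwidth is the smoothing parameter the abstract mentions. The parent-selection search is not shown.

```python
import numpy as np

def kde_likelihood(train_vals, x, h=0.5):
    """Gaussian-kernel estimate of the class-conditional density p(x | c)
    for one continuous attribute; h is the smoothing (bandwidth) parameter."""
    u = (x - np.asarray(train_vals, dtype=float)) / h
    return np.exp(-0.5 * u * u).sum() / (len(train_vals) * h * np.sqrt(2 * np.pi))

# per class c and attribute i:
# p(x_i | c) = kde_likelihood(values of attribute i among class-c examples, x_i)
print(kde_likelihood([1.0, 1.2, 0.9, 1.1], 1.05))
```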


18.
A Bayes Classifier Based on the "3σ" Rule
In soft-sensor modelling, the original data set is usually partitioned into classes so that multiple sub-models can be built, which improves estimation accuracy. Exploiting the simplicity and efficiency of the naive Bayes classifier for this partitioning, the continuous class variable is first divided into class ranges, and the continuous attribute variables are then discretised with the 3σ rule from probability theory. To eliminate the influence of interfering samples in the training data, a genetic algorithm selects an optimal subset of the training samples. With the discretisation of continuous variables and the sample selection as preprocessing, a Bayes classifier is built from the preprocessed training samples. Simulation experiments on UCI data sets and an online monitoring data set from a bisphenol-A production process show that the 3σ-rule naive Bayes method with GA-selected samples achieves higher classification accuracy than the other methods.
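A minimal sketch of 3σ-rule discretisation, assuming bin edges at μ ± σ, μ ± 2σ, μ ± 3σ (the paper's exact binning may differ):

```python
import numpy as np

def three_sigma_discretize(values):
    """Discretise a continuous attribute into bins whose edges are
    mu ± sigma, mu ± 2*sigma, mu ± 3*sigma (the '3σ rule')."""
    v = np.asarray(values, dtype=float)
    mu, sigma = v.mean(), v.std()
    edges = mu + sigma * np.array([-3, -2, -1, 1, 2, 3])
    return np.digitize(v, edges)        # bin index 0..6 per value

print(three_sigma_discretize([0.1, 0.2, 0.15, 0.9, 0.18]))
```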

19.
Traditional traffic-classification techniques based on transport-layer ports or on payload signatures suffer from low accuracy and limited applicability. This paper proposes using a tree-augmented naive Bayes (TAN) classifier: a classification model is built from the statistical properties of network flows using the statistically grounded Bayesian method, and the model is then used to classify unknown traffic. Experiments analyse how different weights and data sets of different sizes affect performance, and compare the method with the NB and C4.5 algorithms. The results show that the method achieves good classification performance and high classification accuracy.
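TAN's structure comes from the conditional mutual information between attribute pairs given the class: a maximum spanning tree is built over these edge weights, and each attribute then takes the class plus one tree neighbour as parents. A sketch of that edge weight for discrete flow attributes (the spanning-tree step is summarised in the docstring):

```python
import math
from collections import Counter

def cond_mutual_info(xi, xj, y):
    """I(Xi; Xj | Y): the edge weight TAN uses when building its attribute
    tree (a maximum spanning tree over these weights; each attribute then
    gets the class plus one tree neighbour as parents)."""
    n = len(y)
    c_abz = Counter(zip(xi, xj, y))
    c_az, c_bz, c_z = Counter(zip(xi, y)), Counter(zip(xj, y)), Counter(y)
    total = 0.0
    for (a, b, z), cnt in c_abz.items():
        p_abz = cnt / n
        total += p_abz * math.log(p_abz * (c_z[z] / n)
                                  / ((c_az[(a, z)] / n) * (c_bz[(b, z)] / n)))
    return total
```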

20.
This paper uses the event-study methodology to analyse the overall characteristics of stocks recommended by securities analysts, attempts to identify stocks matching these characteristics in order to earn excess returns, and applies a basic Bayes classification method to stock selection. A statistical analysis of the returns of the selected Shanghai A-share stocks shows that, with a suitable choice of the Bayes classifier's parameters, good returns can be obtained. The results indicate that the method is of practical significance and effective.
