Similar Documents
18 similar documents found (search time: 109 ms)
1.
An Attribute-Weighted Naive Bayes Classification Algorithm   Cited by: 3 (self: 0, other: 3)   Free full-text PDF
Naive Bayes classification is a simple and efficient method, but its attribute independence assumption limits its classification performance. Relaxing this assumption can improve classification results, but usually at a sharply increased computational cost. This paper proposes an attribute-weighted naive Bayes algorithm that improves classifier performance by weighting attributes, with the weights learned directly from the training data. A weight can be interpreted as the degree to which an attribute value influences the posterior probability of a class. Experimental results show that the algorithm is feasible and effective.
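The weighting scheme this abstract describes can be sketched as follows: each attribute's class-conditional likelihood is raised to a per-attribute weight before entering the log-posterior. This is a minimal illustration under assumed names, not the paper's actual learning procedure; the weights are taken as given here, whereas the paper learns them from the training data.

```python
import math
from collections import Counter, defaultdict

def train_counts(data, labels):
    """Collect class counts and per-(attribute, class) value counts."""
    class_counts = Counter(labels)
    cond = defaultdict(Counter)  # (attr_index, class) -> Counter of values
    for x, y in zip(data, labels):
        for i, v in enumerate(x):
            cond[(i, y)][v] += 1
    return class_counts, cond

def weighted_nb_predict(x, class_counts, cond, weights, n_values):
    """Pick argmax_c of log P(c) + sum_i w_i * log P(x_i | c)."""
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for c, cc in class_counts.items():
        score = math.log(cc / total)
        for i, v in enumerate(x):
            # Laplace smoothing over the attribute's n_values[i] values
            p = (cond[(i, c)][v] + 1) / (cc + n_values[i])
            score += weights[i] * math.log(p)
        if score > best_score:
            best, best_score = c, score
    return best
```

With all weights equal to 1 this reduces to standard naive Bayes; weights below 1 dampen an attribute's influence on the posterior.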

2.
The naive Bayes classifier is simple and efficient, but its attribute independence assumption limits its application to real-world data. This paper proposes a new algorithm that avoids the situation where, during preprocessing, noise and the scale of the training set make attribute reduction ineffective and thus degrade classification. It generates several attribute subsets from the training set by random attribute selection, builds a Bayesian classifier on each subset, and then uses a genetic algorithm to select the best ones. Experiments show that the method achieves better classification accuracy than traditional naive Bayes.

3.
The naive Bayes classifier is simple and efficient, but its attribute independence assumption limits its application to real-world data. This paper proposes a new algorithm that, to avoid the direct impact of attribute reduction during preprocessing on classification results, generates several attribute subsets from the training set by random attribute selection, builds a naive Bayes classifier on each subset, and uses a simulated-annealing genetic algorithm to select the best ones. Experiments show that the method performs better than traditional naive Bayes.

4.
Naive Bayes Classification Based on a Genetic Algorithm   Cited by: 1 (self: 0, other: 1)
The naive Bayes classifier is simple and efficient, but its attribute independence assumption limits its application to real-world data. This paper proposes a new algorithm that avoids the situation where, during preprocessing, noise and the scale of the training set make attribute reduction ineffective and thus degrade classification. It generates several attribute subsets from the training set by random attribute selection, builds a Bayesian classifier on each subset, and then uses a genetic algorithm to select the best ones. Experiments show that the method achieves better classification accuracy than traditional naive Bayes.

5.
An Attribute-Weighted Naive Bayes Ensemble Classifier   Cited by: 2 (self: 1, other: 1)
To improve the classification accuracy and generalization ability of the naive Bayes classifier, this paper proposes a weighted ensemble of naive Bayes classifiers based on attribute relevance (WEBNC). Each conditional attribute is weighted according to its correlation with the decision attribute, and AdaBoost is then used to train the attribute-weighted BNCs. The method was tested on 16 UCI benchmark datasets and compared with BNC, Bayesian networks, and AdaBoost-trained BNC; the results show that it achieves higher classification accuracy and better generalization.

6.
Based on the rough-set theory of attribute significance, this paper constructs a mutual-information-based measure of attribute-subset significance and proposes an attribute-relevance-weighted naive Bayes classification algorithm, which relaxes both the attribute-independence and equal-attribute-importance assumptions of naive Bayes. Simulation experiments on several UCI datasets, compared against correlation-based Bayes (CB) and weighted naive Bayes (WNB), demonstrate the algorithm's effectiveness.
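The relevance weight underlying entries like this one is mutual information between an attribute and the class, estimated from co-occurrence counts. A minimal plug-in estimator is sketched below (the function name is mine, and the cited papers may normalize or threshold the score differently):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) = sum over (x,y) of p(x,y) * log( p(x,y) / (p(x) p(y)) ),
    with all probabilities estimated by relative frequencies."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in joint.items():
        # (c/n) / ((px/n)(py/n)) simplifies to c*n / (px*py)
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi
```

An attribute perfectly aligned with a balanced binary class scores log 2 nats; an independent attribute scores 0, so the values can serve directly as (unnormalized) attribute weights.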

7.
The conditional independence assumption limits the effectiveness of naive Bayes, which also lacks flexibility when handling massive data. To address this, an incremental Bayesian classification algorithm based on dynamic reduction is proposed. The algorithm first computes the core attributes of the dataset using (F-λ) generalized dynamic reduction, then builds a classifier from the prior information in the training set to classify test instances, and finally performs selective incremental learning guided by class confidence to strengthen its handling of incremental data. Experimental results show that the algorithm somewhat improves classification on small datasets with few attributes, and clearly improves it on large datasets with many attributes.

8.
李凯  郝丽锋 《计算机工程》2009,35(5):183-184
To address the stability of the naive Bayes model and further improve its performance, ensemble learning is used to overcome the attribute-independence restriction. A naive Bayes ensemble algorithm based on Oracle selection (OSBE) is proposed: the Oracle selection mechanism perturbs the model's stability, the better-performing classifiers are chosen as ensemble members, and their outputs are combined by voting. Experimental results show that the algorithm improves the classification accuracy of the naive Bayes model, and that OSBE outperforms Bagging and AdaBoost ensembles on some datasets.

9.
An Incremental Learning Algorithm for Naive Bayes Classification   Cited by: 1 (self: 0, other: 1)
Naive Bayes (NB) is a simple and effective probabilistic classification method, but it suffers when the training data are incomplete. When new training samples arrive, traditional Bayesian classification must relearn the samples it has already seen, which is time-consuming. An incremental learning algorithm is therefore introduced: building on the existing classifier, it autonomously selects new documents to learn from and revises the classifier accordingly. This paper presents the idea and the concrete algorithm of term-frequency-weighted incremental naive Bayes learning, together with a proof of the algorithm. Analysis shows that, compared with non-incremental Bayesian classification, the extra space and time complexity of the algorithm are both acceptable.
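The core idea of incremental naive Bayes, that the model is just a table of counts, so a new sample can be absorbed without relearning old ones, can be sketched as below. This is a generic illustration with hypothetical names, not the paper's term-frequency-weighted variant or its confidence-based sample selection.

```python
import math
from collections import Counter, defaultdict

class IncrementalNB:
    """Naive Bayes kept as running counts: absorbing a sample updates the
    sufficient statistics in O(#attributes), with no retraining pass over
    previously seen data."""

    def __init__(self):
        self.class_counts = Counter()
        self.cond = defaultdict(Counter)  # (attr_index, class) -> value counts
        self.values = defaultdict(set)    # attr_index -> distinct values seen

    def update(self, x, y):
        """Absorb one new labeled example."""
        self.class_counts[y] += 1
        for i, v in enumerate(x):
            self.cond[(i, y)][v] += 1
            self.values[i].add(v)

    def predict(self, x):
        """Argmax over classes of the Laplace-smoothed log-posterior."""
        total = sum(self.class_counts.values())

        def score(c):
            s = math.log(self.class_counts[c] / total)
            for i, v in enumerate(x):
                p = (self.cond[(i, c)][v] + 1) / (
                    self.class_counts[c] + len(self.values[i]))
                s += math.log(p)
            return s

        return max(self.class_counts, key=score)
```

Because `update` only increments counters, classifying and then learning from a stream of examples costs the same per example no matter how much data came before.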

10.
A Feature-Weighted Naive Bayes Classifier   Cited by: 13 (self: 0, other: 13)
程克非  张聪 《计算机仿真》2006,23(10):92-94,150
Naive Bayes is a widely used classification algorithm with excellent computational efficiency and classification performance. However, because its underlying "naive Bayes assumption" deviates from reality, it can yield poor results on some data. Many existing methods try to strengthen the classifier by relaxing this assumption, but usually at a sharply increased computational cost. This paper uses feature weighting to enhance the naive Bayes classifier. The weighting parameters are derived directly from the data and can be interpreted as the degree to which an attribute influences the computation of a class's posterior probability. Numerical experiments show that the feature-weighted naive Bayes classifier (FWNB) performs comparably to other common classifiers such as tree-augmented naive Bayes (TAN) and naive Bayes trees (NBTree), with average error rates around 17%; in speed, FWNB is close to NB and at least an order of magnitude faster than TAN and NBTree.

11.
Due to being fast, easy to implement and relatively effective, some state-of-the-art naive Bayes text classifiers with the strong assumption of conditional independence among attributes, such as multinomial naive Bayes, complement naive Bayes and the one-versus-all-but-one model, have received a great deal of attention from researchers in the domain of text classification. In this article, we revisit these naive Bayes text classifiers and empirically compare their classification performance on a large number of widely used text classification benchmark datasets. Then, we propose a locally weighted learning approach to these naive Bayes text classifiers. We call our new approach locally weighted naive Bayes text classifiers (LWNBTC). LWNBTC weakens the attribute conditional independence assumption made by these naive Bayes text classifiers by applying the locally weighted learning approach. The experimental results show that our locally weighted versions significantly outperform these state-of-the-art naive Bayes text classifiers in terms of classification accuracy.
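Of the models this entry compares, multinomial naive Bayes is the simplest to state: a document is scored by its class prior plus Laplace-smoothed log-probabilities of its words. A minimal bag-of-words sketch follows (plain standard-library code with hypothetical names; it illustrates the baseline model, not the LWNBTC method itself):

```python
import math
from collections import Counter

def train_multinomial_nb(docs, labels):
    """Multinomial NB training: class priors and per-class word counts."""
    vocab = set(w for d in docs for w in d)
    priors = Counter(labels)                       # class -> document count
    word_counts = {c: Counter() for c in priors}   # class -> word frequencies
    for d, c in zip(docs, labels):
        word_counts[c].update(d)
    return vocab, priors, word_counts

def classify(doc, vocab, priors, word_counts):
    """Argmax over classes of log P(c) + sum_w log P(w | c)."""
    total_docs = sum(priors.values())

    def log_post(c):
        n_c = sum(word_counts[c].values())  # total words seen in class c
        s = math.log(priors[c] / total_docs)
        for w in doc:
            if w in vocab:  # out-of-vocabulary words are simply skipped
                s += math.log((word_counts[c][w] + 1) / (n_c + len(vocab)))
        return s

    return max(priors, key=log_post)
```

Complement naive Bayes differs only in estimating each class's word distribution from the documents of all *other* classes, which helps with skewed class sizes.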

12.
A Naive Bayes Classification Model Based on Improved Attribute Weighting   Cited by: 1 (self: 0, other: 1)   Free full-text PDF
This paper constructs a new measure of inter-attribute correlation and proposes a naive Bayes classification model based on improved attribute weighting. Experiments show that the proposed model clearly outperforms the model proposed by 张舜仲 et al.

13.
In this paper, we describe three Bayesian classifiers for mineral potential mapping: (a) a naive Bayesian classifier that assumes complete conditional independence of input predictor patterns, (b) an augmented naive Bayesian classifier that recognizes and accounts for conditional dependencies amongst input predictor patterns and (c) a selective naive classifier that uses only conditionally independent predictor patterns. We also describe methods for training the classifiers, which involves determining dependencies amongst predictor patterns and estimating conditional probability of each predictor pattern given the target deposit-type. The output of a trained classifier determines the extent to which an input feature vector belongs to either the mineralized class or the barren class and can be mapped to generate a favorability map. The procedures are demonstrated by an application to base metal potential mapping in the Proterozoic Aravalli Province (western India). The results indicate that although the naive Bayesian classifier performs well and shows significant tolerance for the violation of the conditional independence assumption, the augmented naive Bayesian classifier performs better and exhibits finer generalization capability. The results also indicate that the rejection of conditionally dependent predictor patterns degrades the performance of a naive classifier.

14.

The naive Bayes classifier cannot effectively exploit dependency information among attributes, and existing dependence extensions emphasize efficiency, so the classification accuracy of the extended classifiers still needs improvement. To address this, attribute densities are estimated with a Gaussian kernel function with a smoothing parameter, and the network dependence of the naive Bayes classifier is then extended by combining a classification-accuracy criterion with greedy selection of attribute parent nodes. Experiments on UCI datasets with continuous attributes show that the dependence-extended classifier achieves good classification accuracy.

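The Gaussian kernel density estimate with a smoothing (bandwidth) parameter that this entry uses for continuous attributes has the standard Parzen-window form. A sketch with hypothetical names; the bandwidth is fixed by hand here, whereas the paper treats it as a tunable parameter:

```python
import math

def gaussian_kde(sample, x, h):
    """Parzen-window estimate of the density p(x): the average of Gaussian
    kernels of bandwidth h centered on each observed value in `sample`."""
    n = len(sample)
    norm = n * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample) / norm
```

In a naive Bayes classifier over continuous attributes, one such estimate per (attribute, class) pair replaces the discrete conditional probability table.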

15.
Lazy Learning of Bayesian Rules   Cited by: 19 (self: 0, other: 19)
The naive Bayesian classifier provides a simple and effective approach to classifier learning, but its attribute independence assumption is often violated in the real world. A number of approaches have sought to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree, and generates a local naive Bayesian classifier at each leaf. The tests leading to a leaf can alleviate attribute inter-dependencies for the local naive Bayesian classifier. However, Bayesian tree learning still suffers from the small disjunct problem of tree learning. While inferred Bayesian trees demonstrate low average prediction error rates, there is reason to believe that error rates will be higher for those leaves with few training examples. This paper proposes the application of lazy learning techniques to Bayesian tree induction and presents the resulting lazy Bayesian rule learning algorithm, called LBR. This algorithm can be justified by a variant of Bayes theorem which supports a weaker conditional attribute independence assumption than is required by naive Bayes. For each test example, it builds a most appropriate rule with a local naive Bayesian classifier as its consequent. It is demonstrated that the computational requirements of LBR are reasonable in a wide cross-section of natural domains. Experiments with these domains show that, on average, this new algorithm obtains lower error rates significantly more often than the reverse in comparison to a naive Bayesian classifier, C4.5, a Bayesian tree learning algorithm, a constructive Bayesian classifier that eliminates attributes and constructs new attributes using Cartesian products of existing nominal attributes, and a lazy decision tree learning algorithm. It also outperforms, although the result is not statistically significant, a selective naive Bayesian classifier.

16.
Boosted Bayesian network classifiers   Cited by: 2 (self: 0, other: 2)
The use of Bayesian networks for classification problems has received a significant amount of recent attention. Although computationally efficient, the standard maximum likelihood learning method tends to be suboptimal due to the mismatch between its optimization criteria (data likelihood) and the actual goal of classification (label prediction accuracy). Recent approaches to optimizing classification performance during parameter or structure learning show promise, but lack the favorable computational properties of maximum likelihood learning. In this paper we present boosted Bayesian network classifiers, a framework to combine discriminative data-weighting with generative training of intermediate models. We show that boosted Bayesian network classifiers encompass the basic generative models in isolation, but improve their classification performance when the model structure is suboptimal. We also demonstrate that structure learning is beneficial in the construction of boosted Bayesian network classifiers. On a large suite of benchmark data-sets, this approach outperforms generative graphical models such as naive Bayes and TAN in classification accuracy. Boosted Bayesian network classifiers have comparable or better performance in comparison to other discriminatively trained graphical models including ELR and BNC. Furthermore, boosted Bayesian networks require significantly less training time than the ELR and BNC algorithms.

17.
To address three constraints and limitations of the naive Bayes algorithm, an optimized Bayesian algorithm for data with missing values is proposed. The algorithm computes the grey relational degree between every pair of attributes and uses it to merge correlated attributes, delete redundant attributes, and weight the remaining attributes; it then fills in missing values with an improved EM algorithm driven by the grey relational degree, and finally classifies the processed dataset with naive Bayes. Experimental results verify the effectiveness of the optimized algorithm.

18.
Decision-Tree Classification Based on Naive Bayes and the ID3 Algorithm   Cited by: 2 (self: 0, other: 2)   Free full-text PDF
Building on the naive Bayes and ID3 algorithms, an improved decision-tree classification algorithm is proposed. An objective attribute-importance parameter is introduced, a weakened form of the naive Bayes conditional independence assumption is given, and weighted independent information entropy is adopted as the criterion for selecting the splitting attribute. Theoretical analysis and experimental results show that the improved algorithm overcomes, to some extent, ID3's bias toward multi-valued attributes, while maintaining high execution efficiency and classification accuracy.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号