Similar Documents
1.
Classification accuracy is the most important performance metric of a classifier, and feature subset selection is an effective way to improve it. Existing feature subset selection methods, however, mainly target static classifiers; feature subset selection for dynamic classifiers remains little studied. This paper first presents a dynamic naive Bayesian network classifier with continuous attributes and an evaluation criterion for dynamic classification accuracy, builds a feature subset selection method for the dynamic naive Bayesian network classifier on that basis, and reports experiments and analysis on real macroeconomic time-series data.
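As a rough illustration of wrapper-style feature subset selection for a Bayesian classifier, the sketch below greedily adds whichever feature most improves a Gaussian naive Bayes classifier's accuracy. The classifier, the toy data, and scoring on the training set are simplifying assumptions for brevity; this is not the paper's dynamic classifier or its evaluation criterion.

```python
import math

def gaussian_logpdf(x, mu, var):
    # Log-density of a univariate Gaussian.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

class GaussianNB:
    """Minimal Gaussian naive Bayes over lists of feature vectors."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.stats, self.priors = {}, {}
        for c in self.classes:
            rows = [x for x, t in zip(X, y) if t == c]
            self.priors[c] = len(rows) / len(X)
            self.stats[c] = []
            for j in range(len(X[0])):
                col = [r[j] for r in rows]
                mu = sum(col) / len(col)
                var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-6
                self.stats[c].append((mu, var))
        return self

    def predict(self, X):
        out = []
        for x in X:
            scores = {}
            for c in self.classes:
                s = math.log(self.priors[c])
                for xj, (mu, var) in zip(x, self.stats[c]):
                    s += gaussian_logpdf(xj, mu, var)
                scores[c] = s
            out.append(max(scores, key=scores.get))
        return out

def forward_select(X, y, max_feats=None):
    """Greedy wrapper selection: keep adding the feature that raises accuracy."""
    n_feats = len(X[0])
    selected, best_acc = [], -1.0
    while len(selected) < (max_feats or n_feats):
        best_j = None
        for j in range(n_feats):
            if j in selected:
                continue
            cols = selected + [j]
            Xs = [[x[k] for k in cols] for x in X]
            pred = GaussianNB().fit(Xs, y).predict(Xs)
            acc = sum(p == t for p, t in zip(pred, y)) / len(y)
            if acc > best_acc:
                best_acc, best_j = acc, j
        if best_j is None:  # no remaining feature improves accuracy
            break
        selected.append(best_j)
    return selected, best_acc
```

On data where feature 0 separates the classes and feature 1 is noise, the procedure keeps only feature 0.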

2.
Learning and Optimization of Full Bayesian Classifiers with Continuous Attributes (total citations: 2; self: 0; other: 2)
The naive Bayesian classifier with continuous attributes cannot effectively exploit conditional dependence information among attributes, while dependence extensions make it difficult to jointly optimize the estimation of the attributes' conditional joint densities and structure learning. To address this, the paper builds a full Bayesian classifier with continuous attributes and multiple smoothing parameters, estimating the attributes' conditional joint densities with multivariate Gaussian kernel functions, and gives a smoothing-parameter optimization method that combines a classification-accuracy criterion with an exhaustive search over variable-step interval partitions; a dynamic full Bayesian classifier is then constructed by temporal extension. Experiments with continuous-attribute classification data from the UCI machine learning repository and with macroeconomic data show that both optimized classifiers achieve good classification accuracy.
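The Gaussian kernel density estimate at the core of such a classifier can be sketched as follows. The single-point evaluation and the per-attribute bandwidth vector `h` (the "smoothing parameters") are a minimal stand-in for the paper's optimized estimator, not its actual implementation.

```python
import math

def gaussian_kernel_logdensity(x, samples, h):
    """Log of a product-Gaussian-kernel density estimate at point x.

    samples: list of training feature vectors for one class
    h:       one bandwidth (smoothing parameter) per attribute
    """
    n = len(samples)
    total = 0.0
    for s in samples:
        k = 1.0
        for xj, sj, hj in zip(x, s, h):
            # Product of univariate Gaussian kernels across attributes.
            k *= math.exp(-0.5 * ((xj - sj) / hj) ** 2) / (hj * math.sqrt(2 * math.pi))
        total += k
    return math.log(total / n)
```

A class-conditional density of this form, one per class, plus class priors, yields a kernel-based Bayesian classifier; the bandwidths `h` are what the paper tunes against classification accuracy.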

3.
Operational risk data are hard to accumulate and often incomplete, and the naive Bayesian classifier, one of the best classifiers available for small samples, is well suited to operational risk level prediction. Building on naive Bayesian learning and classification with complete data, this paper proposes a learning method for naive Bayesian classifiers with missing data based on a star-shaped structure and Gibbs sampling, which avoids the local optima, information loss, and redundancy introduced by commonly used missing-data treatments.

4.
By analyzing the naive Bayesian classifier and the tree-augmented naive Bayesian (TAN) classifier, a new measure of attribute dependence is proposed and used to improve the construction of the TAN classifier. The resulting classifier (XINTAN) is compared experimentally with the naive Bayesian and TAN classifiers. The results show that it combines the strengths of both and outperforms the TAN classifier.

5.
The conditional independence assumption at the heart of the naive Bayesian classifier is overly strong and behaves differently across data sets, so it has become the starting point for many improved algorithms; some studies, however, report that violating the assumption does not harm the classifier as much as expected. Starting from reducing the estimation error of the posterior probability, this paper proposes a semi-naive Bayesian classifier based on conditional entropy matching. Experiments show that the method effectively improves the performance of the naive Bayesian classifier.

6.
With the rapid growth of information, acquiring and filtering relevant information is becoming ever more important. This paper studies information filtering based on the naive Bayesian algorithm. It first introduces the algorithm's fundamentals, including Bayes' theorem, the naive Bayesian classifier, and the algorithm's strengths and weaknesses. It then discusses applications to information filtering, covering the categories of information filtering, text representation methods, and the construction of a naive-Bayes-based filtering model. Finally, experiments evaluate the method on text classification, comparing different feature representations and other classification algorithms. The results show that naive-Bayes-based information filtering performs well and can effectively classify texts on different topics.
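A minimal multinomial naive Bayes text filter in the spirit of this abstract can be sketched as below; Laplace (add-one) smoothing and whitespace tokenization are assumptions made for brevity, since the paper's exact text representation is not given here.

```python
import math
from collections import Counter, defaultdict

class MultinomialNB:
    """Multinomial naive Bayes text classifier with add-one smoothing."""
    def fit(self, docs, labels):
        self.vocab = {w for d in docs for w in d.split()}
        self.class_docs = Counter(labels)          # documents per class (priors)
        self.word_counts = defaultdict(Counter)    # word counts per class
        for d, c in zip(docs, labels):
            self.word_counts[c].update(d.split())
        self.total = {c: sum(self.word_counts[c].values()) for c in self.class_docs}
        return self

    def predict(self, doc):
        n = sum(self.class_docs.values())
        best, best_score = None, -math.inf
        for c in self.class_docs:
            score = math.log(self.class_docs[c] / n)  # log prior
            for w in doc.split():
                if w not in self.vocab:               # ignore unseen words
                    continue
                score += math.log((self.word_counts[c][w] + 1) /
                                  (self.total[c] + len(self.vocab)))
            if score > best_score:
                best, best_score = c, score
        return best
```

Trained on a few spam/ham examples, the filter routes new messages to the class whose word distribution fits best.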

7.
徐光美, 杨炳儒, 钱榕. 《计算机工程》, 2008, 34(13): 49-50, 53
Many researchers have sought to combine the naive Bayesian method with existing ILP systems, producing a variety of multi-relational naive Bayesian classifiers (MRNBC). This paper presents a general method for the first-order extension of the naive Bayesian classifier. Because relational databases are ubiquitous in practice, an MRNBC that operates directly on database tables, without converting the representation, is the focus of this research; the method is grounded in relational database theory and analyzes the key issues of the first-order extension.

9.
A Naive Bayesian Classifier Based on Multiple Discriminant Analysis (total citations: 4; self: 1; other: 4)
By analyzing the classification principle of the naive Bayesian classifier and drawing on the advantages of multiple discriminant analysis, a Discriminant Analysis Naive Bayesian classifier (DANB) is proposed. Experimental comparison with the naive Bayesian classifier (NB) and the Tree Augmented Naive Bayesian classifier (TAN) shows that DANB achieves higher classification accuracy on most data sets.

10.
秦锋, 任诗流, 程泽凯, 罗慧. 《计算机工程与设计》, 2007, 28(20): 4873-4874, 4877
The naive Bayesian classifier is simple and efficient, but its attribute independence assumption cannot represent the dependencies among attributes found in the real world, which limits its classification performance. Independent component analysis is used to improve naive Bayes: samples are projected into the feature space determined by the independent components, raising the classifier's performance. Experimental results show that this ICA-based naive Bayesian classifier performs well.

11.
This article investigates boosting naive Bayesian classification. It first shows that boosting does not improve the accuracy of the naive Bayesian classifier as much as we expected in a set of natural domains. By analyzing the reason for boosting's weakness, we propose to introduce tree structures into naive Bayesian classification to improve the performance of boosting when working with naive Bayesian classification. The experimental results show that although introducing tree structures into naive Bayesian classification increases the average error of the naive Bayesian classification for individual models, boosting naive Bayesian classifiers with tree structures can achieve significantly lower average error than both the naive Bayesian classifier and boosting the naive Bayesian classifier, providing a method of successfully applying the boosting technique to naive Bayesian classification. A bias and variance analysis confirms our expectation that the naive Bayesian classifier is a stable classifier with low variance and high bias. We show that the boosted naive Bayesian classifier has a strong bias on a linear form, exactly the same as its base learner. Introducing tree structures reduces the bias and increases the variance, and this allows boosting to gain advantage.

12.
Using the "event study" method, this paper analyzes the overall characteristics of stocks recommended by securities analysts, seeks to earn excess returns by identifying stocks that fit those characteristics, and applies a basic Bayesian classification method to stock selection. Statistical analysis of the returns of the selected Shanghai A-share stocks shows that good returns can be obtained by choosing the Bayesian classifier's parameters appropriately, demonstrating that the method is practical and effective.

14.
Our objective here is to provide an extension of the naive Bayesian classifier in a manner that gives us more parameters for matching data. We first describe the naive Bayesian classifier, and then discuss the ordered weighted averaging (OWA) aggregation operators. We introduce a new class of OWA operators which are based on combining the OWA operators with t-norm operators. We show that the naive Bayesian classifier can be seen as a special case of this. We use this to suggest an extended version of the naive Bayesian classifier which involves a weighted summation of products of the probabilities. An algorithm is suggested to obtain the weights associated with this extended naive Bayesian classifier.
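The OWA aggregation this abstract refers to, and the product t-norm under which naive Bayes becomes a special case of product-style aggregation, can be sketched as follows (function names are illustrative, not from the paper):

```python
def owa(values, weights):
    """Ordered weighted average: sort values descending, weight by rank.

    weights must sum to 1; [1,0,...,0] yields max, [0,...,0,1] yields min,
    and uniform weights yield the arithmetic mean.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def product_tnorm(values):
    """Product t-norm: the aggregation naive Bayes applies to the
    per-attribute likelihoods P(x_j | class)."""
    p = 1.0
    for v in values:
        p *= v
    return p
```

The paper's extension effectively blends these two: a weighted summation over products of probabilities rather than the single product naive Bayes uses.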

15.
Lazy Learning of Bayesian Rules (total citations: 19; self: 0; other: 19)
The naive Bayesian classifier provides a simple and effective approach to classifier learning, but its attribute independence assumption is often violated in the real world. A number of approaches have sought to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree, and generates a local naive Bayesian classifier at each leaf. The tests leading to a leaf can alleviate attribute inter-dependencies for the local naive Bayesian classifier. However, Bayesian tree learning still suffers from the small disjunct problem of tree learning. While inferred Bayesian trees demonstrate low average prediction error rates, there is reason to believe that error rates will be higher for those leaves with few training examples. This paper proposes the application of lazy learning techniques to Bayesian tree induction and presents the resulting lazy Bayesian rule learning algorithm, called LBR. This algorithm can be justified by a variant of Bayes' theorem which supports a weaker conditional attribute independence assumption than is required by naive Bayes. For each test example, it builds a most appropriate rule with a local naive Bayesian classifier as its consequent. It is demonstrated that the computational requirements of LBR are reasonable in a wide cross-section of natural domains. Experiments with these domains show that, on average, this new algorithm obtains lower error rates significantly more often than the reverse in comparison to a naive Bayesian classifier, C4.5, a Bayesian tree learning algorithm, a constructive Bayesian classifier that eliminates attributes and constructs new attributes using Cartesian products of existing nominal attributes, and a lazy decision tree learning algorithm. It also outperforms, although the result is not statistically significant, a selective naive Bayesian classifier.

16.
In this paper, we describe three Bayesian classifiers for mineral potential mapping: (a) a naive Bayesian classifier that assumes complete conditional independence of input predictor patterns, (b) an augmented naive Bayesian classifier that recognizes and accounts for conditional dependencies amongst input predictor patterns and (c) a selective naive classifier that uses only conditionally independent predictor patterns. We also describe methods for training the classifiers, which involves determining dependencies amongst predictor patterns and estimating conditional probability of each predictor pattern given the target deposit-type. The output of a trained classifier determines the extent to which an input feature vector belongs to either the mineralized class or the barren class and can be mapped to generate a favorability map. The procedures are demonstrated by an application to base metal potential mapping in the proterozoic Aravalli Province (western India). The results indicate that although the naive Bayesian classifier performs well and shows significant tolerance for the violation of the conditional independence assumption, the augmented naive Bayesian classifier performs better and exhibits finer generalization capability. The results also indicate that the rejection of conditionally dependent predictor patterns degrades the performance of a naive classifier.

17.
Bayesian Network Classifiers (total citations: 154; self: 0; other: 154)
Friedman, Nir; Geiger, Dan; Goldszmidt, Moises. Machine Learning, 1997, 29(2-3): 131-163
Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
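TAN's structure-learning step weights candidate attribute edges by conditional mutual information I(Xi; Xj | C) before building a maximum-weight spanning tree over the attributes. A sketch of that edge-weight computation, using empirical estimates from parallel lists of discrete values (an assumption made for brevity):

```python
import math
from collections import Counter

def cond_mutual_info(xi, xj, y):
    """Empirical conditional mutual information I(Xi; Xj | Y).

    xi, xj: parallel lists of discrete attribute values
    y:      parallel list of class labels
    """
    n = len(y)
    p_abc = Counter(zip(xi, xj, y))  # joint counts (Xi, Xj, Y)
    p_ac = Counter(zip(xi, y))       # marginal counts (Xi, Y)
    p_bc = Counter(zip(xj, y))       # marginal counts (Xj, Y)
    p_c = Counter(y)                 # class counts
    total = 0.0
    for (a, b, c), cnt in p_abc.items():
        pabc = cnt / n
        # I = sum P(a,b,c) * log( P(a,b,c) P(c) / (P(a,c) P(b,c)) )
        total += pabc * math.log(
            (pabc * (p_c[c] / n)) /
            ((p_ac[(a, c)] / n) * (p_bc[(b, c)] / n)))
    return total
```

Identical attributes give I equal to the attribute's entropy; conditionally independent attributes give I of zero, so TAN's spanning tree keeps only the strongest dependencies.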


Copyright©北京勤云科技发展有限公司  京ICP备09084417号