Similar Documents
5 similar documents found
1.
In this paper we present the Dempster-Shafer theory as a framework within which the results of a Bayesian network classifier and a fuzzy logic-based classifier are combined to produce a better final classification. We deal with the case in which the two original classifiers use different classes for the outcome. The problem of different classes is solved by using a superset of finer classes that can be combined to produce the classes of either of the two classifiers. Within the Dempster-Shafer formalism, not only can the problem of differing numbers of classes be solved, but the relative reliability of the classifiers can also be taken into account.
Correspondence and offprint requests to: M. R. Ahmadzadeh, Centre for Vision, Speech and Signal Processing, School of Electronics, Computing and Mathematics, University of Surrey, Guildford, UK
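Dempster's rule of combination, which underpins the fusion described in this abstract, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the refined frame {a, b, c} and the mass values are invented for the example. Note how the superset of finer classes lets two classifiers with different output classes be combined on a common frame.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal
    elements to masses) with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    k = 1.0 - conflict  # normalisation constant
    return {s: v / k for s, v in combined.items()}

# Two classifiers over a common refined frame {a, b, c}: classifier 1
# distinguishes {a} from {b, c}; classifier 2 distinguishes {a, b}
# from {c}. The refined frame makes their outputs combinable.
m1 = {frozenset("a"): 0.7, frozenset("bc"): 0.3}
m2 = {frozenset("ab"): 0.6, frozenset("c"): 0.4}
print(dempster_combine(m1, m2))
```

Reliability of each classifier could be folded in by discounting the masses (shifting some mass to the whole frame) before combination.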

2.
The growing availability of sensor networks brings practical situations in which a large number of classifiers can be used to build a classifier ensemble. In the most general case involving sensor networks, the classifiers are fed with multiple inputs collected at different locations. However, classifier fusion is often studied within an idealized formulation where each classifier is fed with the same point in the feature space and estimates the posterior class probability given this input. We first expand this formulation to situations where classifiers are fed with multiple inputs, demonstrating its relevance to settings involving sensor networks and a large number of classifiers. Following that, we determine the rate of convergence of the classification error of a classifier ensemble for three fusion strategies (average, median and maximum) as the number of classifiers becomes large. As the size of the ensemble increases, the best strategy is defined as the one that yields the fastest convergence of the classification error to zero. The best strategy is analytically shown to depend on the distribution of the individual classification errors: average is best for normal distributions, maximum for uniform distributions, and median for Cauchy distributions. The general effect of heavy-tailedness is also analytically investigated for the average and median strategies. The median strategy is shown to be robust to heavy-tailedness, while the performance of the average strategy degrades as heavy-tailedness becomes more pronounced. The combined effects of bimodality and heavy-tailedness are also investigated as the number of classifiers becomes large.

3.
This article deals with the combination of pattern classifiers with two reject options. Such classifiers operate in two steps and differ in how they manage ambiguity rejection and distance rejection (independently or not). We propose to combine the first steps of these classifiers using concepts from the theory of evidence. We propose intelligent basic probability assignments for the reject classes before applying the combination rule. After combination, a decision rule is proposed for classifying patterns or rejecting them, either for distance or for ambiguity. We emphasize that rejection is not related to a lack of consensus between the classifiers, but to the initial reject options. In the case of ambiguity rejection, a class-selective approach is used. Some illustrative results on artificial and real data are given.
Received: 21 November 2000, Received in revised form: 25 October 2001, Accepted: 26 November 2001
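A post-combination decision rule with the two reject options might take the following shape. The thresholds, class names, and the convention of a dedicated "reject" mass are assumptions made for illustration, not the authors' formulation.

```python
def decide(masses, t_dist=0.5, t_amb=0.1):
    """Decision rule after evidential combination (hypothetical
    thresholds t_dist and t_amb):
    - distance rejection if most mass supports the reject class
      (the pattern lies far from every known class);
    - class-selective ambiguity rejection if the top classes are
      too close to separate;
    - otherwise, assignment to the best-supported class."""
    if masses.get("reject", 0.0) > t_dist:
        return "distance-reject"
    ranked = sorted(((v, c) for c, v in masses.items() if c != "reject"),
                    reverse=True)
    best = ranked[0]
    if best[0] - ranked[1][0] < t_amb:
        # class-selective rejection: keep only the plausible classes
        return {c for v, c in ranked if best[0] - v < t_amb}
    return best[1]

print(decide({"c1": 0.80, "c2": 0.10, "reject": 0.10}))  # assign "c1"
print(decide({"c1": 0.45, "c2": 0.40, "reject": 0.15}))  # ambiguity set
print(decide({"c1": 0.20, "c2": 0.10, "reject": 0.70}))  # distance reject
```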

4.
In the Bayesian probabilistic approach to uncertain reasoning, one basic assumption is that a priori knowledge about the uncertain variable is modeled by a probability distribution. When new evidence representable by a constant set is available, Bayesian conditioning is used to update the a priori knowledge. In the conventional D-S evidence theory, all bodies of evidence about the uncertain variable are imprecise and uncertain, and they are combined by Dempster's rule of combination to achieve a combined body of evidence without considering a priori knowledge. From our point of view, when identifying the true value of an uncertain variable, the Bayesian approach and evidence theory can cooperate in uncertain reasoning. First, all imprecise and uncertain bodies of evidence about the uncertain variable are fused into a combined body of evidence based on the a priori knowledge; then the a posteriori probability distribution is obtained from the a priori distribution by conditioning on the combined evidence. In this paper we first deal with the knowledge-updating problem, where a priori knowledge is represented by a probability distribution and new evidence by a random set. We then review the conditional evidence theory, which resolves the knowledge-combining problem based on a priori probabilistic knowledge. Finally, we discuss the close relationship between the knowledge-updating and knowledge-combining procedures presented in this paper, and show that the a posteriori probability conditioned on the fused body of evidence satisfies the Bayesian parallel combination rule.
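The updating step described above, Bayesian conditioning of a discrete prior on set-valued evidence, can be sketched as follows. The prior and the evidence set are invented for the example; this is not the paper's full random-set machinery.

```python
def condition_on_set(prior, evidence):
    """Bayesian conditioning of a discrete prior on set-valued
    evidence A: P(x | A) = P(x) / P(A) for x in A, and 0 otherwise."""
    p_a = sum(prior[x] for x in evidence)
    if p_a == 0:
        raise ValueError("evidence has zero prior probability")
    return {x: (prior[x] / p_a if x in evidence else 0.0) for x in prior}

prior = {"a": 0.5, "b": 0.3, "c": 0.2}
print(condition_on_set(prior, {"a", "b"}))
# a: 0.5/0.8 = 0.625, b: 0.3/0.8 = 0.375, c: 0.0
```

Conditioning on two set-valued pieces of evidence in sequence gives the same result as conditioning on their intersection, which is the flavour of the parallel combination property the abstract mentions.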

5.
An important issue in text mining is how to make use of multiple pieces of discovered knowledge to improve future decisions. In this paper, we propose a new approach to combining multiple sets of rules for text categorization using Dempster's rule of combination. We develop a boosting-like technique for generating multiple sets of rules based on rough set theory, and model the classification decisions from the multiple rule sets as pieces of evidence that can be combined by Dempster's rule of combination. We apply these methods to 10 of the 20-newsgroups, a benchmark data collection (Baker and McCallum 1998), individually and in combination. Our experimental results show that the performance of the best combination of the multiple rule sets on the 10 groups of the benchmark data is statistically significantly better than that of the best single set of rules. A comparative analysis between the Dempster-Shafer and the majority voting (MV) methods, along with an overfitting study, confirms the advantage and robustness of our approach.
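One simple way to treat each rule set's decision as a piece of evidence is to model it as a simple support function (mass s on the predicted class, mass 1-s on the whole frame) and combine the pieces with Dempster's rule. The classes, mass values, and this modelling choice are illustrative assumptions, not the paper's exact construction.

```python
def combine_simple_support(votes, frame):
    """Combine simple support functions with Dempster's rule.
    Each rule set assigns mass s to its predicted class and 1-s to
    the whole frame (ignorance). votes: list of (predicted_class, s).
    Returns the combined mass function."""
    m = {frozenset(frame): 1.0}
    for cls, s in votes:
        new, conflict = {}, 0.0
        for focal, v in m.items():
            inter = focal & {cls}
            if inter:
                new[inter] = new.get(inter, 0.0) + v * s
            else:
                conflict += v * s            # empty intersection
            new[focal] = new.get(focal, 0.0) + v * (1 - s)
        m = {f: v / (1.0 - conflict) for f, v in new.items()}
    return m

# Three rule sets vote over classes {A, B, C}: two weakly support A,
# one more strongly supports B.
m = combine_simple_support([("A", 0.6), ("A", 0.6), ("B", 0.7)],
                           {"A", "B", "C"})
print(m)
```

Unlike plain majority voting, the combined masses retain graded support and explicit residual ignorance (mass on the whole frame), which is what the paper's Dempster-Shafer versus MV comparison exploits.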
