Similar Documents
Found 20 similar documents (search time: 0 ms)
1.
In this study, we present a nearest neighbour cuckoo search algorithm with probabilistic mutation, called NNCS. In the proposed approach, the nearest neighbour strategy selects guides for generating new solutions, using nearest neighbour solutions instead of the best solution obtained so far. We employ a solution-based and a fitness-based similarity metric, respectively, to select the nearest neighbour solutions. Furthermore, the probabilistic mutation strategy ensures that new solutions learn from their nearest neighbours in only a subset of dimensions. In addition, the nearest neighbour strategy allows the best solution to participate in the search as well. Extensive experiments, carried out on 20 benchmark functions with different properties, demonstrate the gains in effectiveness and efficiency from the nearest neighbour strategy and the probabilistic mutation strategy.
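The abstract describes the strategy only at a high level. As a rough illustration, here is a minimal Python sketch of one NNCS-style update step, assuming a fitness-based neighbour metric and a per-dimension mutation probability; the function name, parameter values, and update rule are illustrative assumptions, not the authors' exact algorithm:

```python
import random

def nncs_step(population, fitness, p_mutate=0.3, k=3):
    """Hypothetical NNCS update: each solution learns from one of its k
    nearest neighbours (here, by fitness similarity) instead of the global
    best, and only a random subset of dimensions is updated
    (probabilistic mutation)."""
    new_pop = []
    for i, x in enumerate(population):
        # fitness-based neighbour selection: k solutions with closest fitness
        others = [j for j in range(len(population)) if j != i]
        others.sort(key=lambda j: abs(fitness[j] - fitness[i]))
        guide = population[random.choice(others[:k])]
        # probabilistic mutation: copy from the guide in partial dimensions only
        child = [g if random.random() < p_mutate else v
                 for v, g in zip(x, guide)]
        new_pop.append(child)
    return new_pop

# toy run on the sphere function
random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(10)]
fit = [sum(v * v for v in x) for x in pop]
pop2 = nncs_step(pop, fit)
```

In a full implementation this step would be embedded in the cuckoo search loop (Lévy flights, abandonment of worst nests), which is omitted here.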

2.
In this paper, a theoretical and experimental analysis of the error-reject trade-off achievable by linearly combining the outputs of an ensemble of classifiers is presented. To this aim, the theoretical framework previously developed by Tumer and Ghosh for the analysis of the simple average rule without the reject option has been extended. Analytical results are provided that allow one to evaluate the improvement in the error-reject trade-off achievable by simply averaging the classifier outputs, under different assumptions about the distributions of the estimation errors affecting the a posteriori probabilities. The conditions under which the weighted average can provide a better error-reject trade-off than the simple average are then determined. From the theoretical results obtained under the assumption of unbiased and uncorrelated estimation errors, simple guidelines for the design of multiple classifier systems using linear combiners are given. Finally, an experimental evaluation and comparison of the error-reject trade-off of the simple and weighted averages is reported for five real data sets. The results show the practical relevance of the proposed guidelines for the design of linear combiners.
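The linear combiner with a reject option that this abstract analyses can be sketched in a few lines. The sketch below assumes each classifier outputs a matrix of posterior estimates (samples x classes); the threshold value and function name are illustrative, and the paper's contribution is the analysis of this trade-off, not this code:

```python
import numpy as np

def average_with_reject(prob_matrices, threshold=0.7, weights=None):
    """Linearly combine per-classifier posterior estimates (simple average
    when weights is None, weighted average otherwise) and reject a sample
    when the combined maximum posterior falls below `threshold`.
    Returns predicted class indices, with -1 marking rejected samples."""
    P = np.average(np.stack(prob_matrices), axis=0, weights=weights)
    labels = P.argmax(axis=1)
    labels[P.max(axis=1) < threshold] = -1  # reject option
    return labels

# two classifiers, three samples, two classes
p1 = np.array([[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]])
p2 = np.array([[0.8, 0.2], [0.45, 0.55], [0.3, 0.7]])
pred = average_with_reject([p1, p2], threshold=0.7)  # middle sample rejected
```

Varying `threshold` traces out the error-reject trade-off curve that the paper studies analytically.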

3.
The problem of classifier combination is considered in the context of the two main fusion scenarios: fusion of opinions based on identical and on distinct representations. We develop a theoretical framework for classifier combination for these two scenarios. For multiple experts using distinct representations we argue that many existing schemes such as the product rule, sum rule, min rule, max rule, majority voting, and weighted combination, can be considered as special cases of compound classification. We then consider the effect of classifier combination in the case of multiple experts using a shared representation, where the aim of fusion is to obtain a better estimate of the appropriate a posteriori class probabilities. We also show that the two theoretical frameworks can be used for devising fusion strategies when the individual experts use features, some of which are shared while the remaining ones are distinct. We show that in both cases (distinct and shared representations), the expert fusion involves the computation of a linear or nonlinear function of the a posteriori class probabilities estimated by the individual experts. Classifier combination can therefore be viewed as a multistage classification process whereby the a posteriori class probabilities generated by the individual classifiers are considered as features for a second stage classification scheme. Most importantly, when the linear or nonlinear combination functions are obtained by training, the distinctions between the two scenarios fade away, and one can view classifier fusion in a unified way.
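The fixed fusion rules named in this abstract are all simple functions of the per-classifier posterior estimates. The sketch below implements several of them for comparison; the function name and toy matrices are illustrative:

```python
import numpy as np

def fuse(prob_matrices, rule="sum"):
    """Fixed fusion rules over per-classifier posterior estimates
    (each matrix is samples x classes). Each rule is a different
    function of the individual a posteriori estimates."""
    P = np.stack(prob_matrices)          # (classifiers, samples, classes)
    if rule == "sum":
        scores = P.sum(axis=0)
    elif rule == "product":
        scores = P.prod(axis=0)
    elif rule == "max":
        scores = P.max(axis=0)
    elif rule == "min":
        scores = P.min(axis=0)
    elif rule == "majority":
        votes = P.argmax(axis=2)         # each classifier's hard decision
        scores = np.stack([(votes == c).sum(axis=0)
                           for c in range(P.shape[2])], axis=1)
    else:
        raise ValueError(rule)
    return scores.argmax(axis=1)

p1 = np.array([[0.6, 0.4], [0.3, 0.7]])
p2 = np.array([[0.7, 0.3], [0.6, 0.4]])
p3 = np.array([[0.3, 0.7], [0.4, 0.6]])
sum_pred = fuse([p1, p2, p3], "sum")
maj_pred = fuse([p1, p2, p3], "majority")
```

The paper's point is that these rules, and trained linear/nonlinear combiners, are all instances of one second-stage classification over the estimated posteriors.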

4.
This paper presents a novel reject rule for support vector classifiers, based on the receiver operating characteristic (ROC) curve. The rule minimises the expected classification cost, defined on the basis of the classification and error costs for the particular application at hand. The rationale of the proposed approach is that the ROC curve of the SVM contains all of the necessary information to find the optimal threshold values that minimise the expected classification cost. To evaluate the effectiveness of the proposed reject rule, a large number of tests have been performed on several data sets and with different kernels. A comparison technique, based on the Wilcoxon rank sum test, has been defined and employed to provide the results at an adequate significance level. The experiments have clearly confirmed the effectiveness of the proposed reject rule.
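The paper derives the optimal thresholds from the ROC curve; as a conceptual stand-in, the sketch below brute-forces a reject band over candidate thresholds on the SVM decision scores, minimising the same expected-cost objective (error cost plus reject cost). It conveys the idea only and is not the authors' method:

```python
import numpy as np
from itertools import combinations

def best_reject_band(scores, labels, c_err=1.0, c_rej=0.2):
    """Choose a reject band (t_lo, t_hi) on SVM decision scores that
    minimises expected cost = c_err * P(error) + c_rej * P(reject).
    Candidate thresholds are the observed scores, i.e. the operating
    points a ROC curve summarises. Labels are in {-1, +1}."""
    cands = np.sort(np.unique(np.concatenate([scores, [-np.inf, np.inf]])))
    best = (np.inf, None)
    for lo, hi in combinations(cands, 2):
        rej = (scores > lo) & (scores < hi)        # samples in the band
        pred = np.where(scores >= hi, 1, -1)       # accept-region decision
        err = ((pred != labels) & ~rej).mean()
        cost = c_err * err + c_rej * rej.mean()
        if cost < best[0]:
            best = (cost, (lo, hi))
    return best

# toy data: two ambiguous samples near the decision boundary
scores = np.array([-2.0, -1.0, -0.1, 0.1, 1.0, 2.0])
labels = np.array([-1, -1, 1, -1, 1, 1])
cost, band = best_reject_band(scores, labels)
```

With a low reject cost, the optimal band rejects the two ambiguous samples rather than misclassify either of them.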

5.
Randomized or probabilistic Hough transform: unified performance evaluation
Rapid computation of the Hough transform is necessary in very many computer vision applications. One of the major approaches for fast Hough transform computation is based on the use of a small random sample of the data set rather than the full set. Two different algorithms within this family are the randomized Hough transform (RHT) and the probabilistic Hough transform (PHT). There have been contradictory views on the relative merits and drawbacks of the RHT and the PHT. In this paper, a unified theoretical framework for analyzing the RHT and the PHT is established. The performance of the two algorithms is characterized both theoretically and experimentally. Clear guidelines for selecting the algorithm that is most suitable for a given application are provided. We show that, when considering the basic algorithms, the RHT is better suited for the analysis of high quality low noise edge images, while for the analysis of noisy low quality images the PHT should be selected.
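The RHT's core idea can be shown compactly for line detection: instead of voting over all parameter cells per edge point, sample a pair of points, solve for the unique line through them, and accumulate that single parameter cell. The sketch below uses a simplified slope-intercept parametrisation (real implementations typically use the (theta, rho) form to handle vertical lines); all names are illustrative:

```python
import random
from collections import Counter

def randomized_hough_line(points, n_samples=1000, quant=1, seed=0):
    """RHT sketch: repeatedly pick a random *pair* of edge points, solve
    for the line through them, and accumulate the quantised (slope,
    intercept) parameters. The accumulator peak is the detected line."""
    rng = random.Random(seed)
    acc = Counter()
    pts = list(points)
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        if x1 == x2:
            continue  # vertical lines skipped in this simplified sketch
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        acc[(round(m / quant) * quant, round(b / quant) * quant)] += 1
    return acc.most_common(1)[0][0]

# points on the line y = 2x + 1, plus two off-line outliers
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 9), (7, 2)]
line = randomized_hough_line(pts)
```

Because pairs drawn from the collinear subset dominate, the true parameters win the accumulator even with outliers present, which is why RHT suits low-noise edge images.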

6.
With the widespread use of social networks, forums, and blogs, customer reviews have emerged as a critical factor in customers' purchase decisions. Since the early 2000s, researchers have focused on automatically categorizing these reviews into polarity levels such as positive, negative, and neutral; this research problem is known as sentiment classification. The objective of this study is to investigate the potential benefit of the multiple classifier systems concept for the Turkish sentiment classification problem and to propose a novel classification technique. A Vote algorithm has been used in conjunction with three classifiers, namely Naive Bayes, Support Vector Machine (SVM), and Bagging. The parameters of the SVM were optimized when it was used as an individual classifier. Experimental results showed that multiple classifier systems increase the performance of individual classifiers on Turkish sentiment classification datasets, and that meta classifiers contribute to the power of these multiple classifier systems. The proposed approach achieved better performance than Naive Bayes, previously reported as the best individual classifier for these datasets, and Support Vector Machines. Multiple classifier systems (MCS) are a good approach for sentiment classification, and parameter optimization of the individual classifiers must be taken into account when developing MCS-based prediction systems.
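The Vote combination step at the heart of this study can be approximated by unweighted majority voting over the component classifiers' predictions. The sketch below uses hypothetical per-review labels standing in for Naive Bayes, SVM, and Bagging outputs; it is a simplification of the Vote algorithm, not the paper's exact configuration:

```python
from collections import Counter

def vote(predictions):
    """Majority vote over the labels produced by several classifiers
    (rows: one prediction list per classifier; columns: documents)."""
    return [Counter(label_row).most_common(1)[0][0]
            for label_row in zip(*predictions)]

# hypothetical per-classifier predictions for four reviews
nb  = ["pos", "neg", "pos", "neu"]
svm = ["pos", "neg", "neg", "neu"]
bag = ["neg", "neg", "pos", "pos"]
combined = vote([nb, svm, bag])
```

With an odd number of diverse classifiers, the combined prediction corrects individual mistakes whenever the majority is right, which is the effect the study measures on Turkish sentiment datasets.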

7.
Yue, Chew Lim. Pattern Recognition, 2002, 35(12): 2823-2832
Combination of multiple classifiers is regarded as an effective strategy for building practical handwritten character recognition systems, and a great deal of research has been reported on combination methods that improve the recognition performance on single characters. In a practical application, however, the recognition performance on a group of characters (such as a postcode or a word) is more significant and more crucial. Motivated by optimizing the recognition performance on postcodes rather than on single characters, this paper presents an approach that combines multiple classifiers so that the combination decision is made at the postcode level rather than at the single-character level; a probabilistic postcode dictionary is utilized as well to improve postcode recognition. The experimental results show that the proposed approach markedly improves postcode recognition performance and outperforms the commonly used methods of combining multiple classifiers at the single-character level. Furthermore, the sorting performance of some particular bins, with respect to postcodes with a low frequency of occurrence, can be improved significantly at the same time.

8.
Multiple fault diagnosis (MFD) is an effective measure for tackling the problems of real shop-floor environments and reducing the total lifetime maintenance cost of a system. It is a well-known computationally complex problem, whose complexity increases exponentially with the number of faults; this warrants the application of heuristic techniques or AI-based optimization tools to diagnose the exact faults in real time. In this research, a rollout strategy-based probabilistic causal model (RSPCM) is proposed to solve graph-based multiple fault diagnosis problems. The rollout strategy is a single-step iterative process, implemented here to improve the efficiency and robustness of the probabilistic causal model. Instead of enumerating all possible combinations of faults, RSPCM collects the faults corresponding to each observed manifestation, which yields better results than other methods. Intensive computational experiments on well-known data sets demonstrate the superiority of the proposed heuristic over earlier approaches in the literature. The experimental results show that the proposed methodology diagnoses the exact faults with minimum fault-isolation time compared to the other approaches.

9.
In this paper we extend de Nicola and Hennessy's testing theory to deal with probabilities. We say that two processes are testing equivalent if the probabilities with which they pass any test are equal. We present three alternative semantic views of our testing equivalence. First, we introduce adequate extensions of acceptance sets (inducing an operational characterization) and acceptance trees (inducing a denotational semantics). We also present a sound and complete axiomatization of our testing equivalence. So, this paper represents a complete study of the adaptation of the classical testing theory for probabilistic processes.

10.
A novel probabilistic fuzzy control system is proposed to treat the congestion avoidance problem in transmission control protocol (TCP) networks. Studies on traffic measurement of TCP networks have shown that the packet traffic exhibits long range dependent properties called self-similarity, which degrades the network performance greatly. The probabilistic fuzzy control (PFC) system is used to handle the complex stochastic features of self-similar traffic and the modeling uncertainties in the network system. A three-dimensional (3-D) membership function (MF) is embedded in the PFC to express and describe the stochastic feature of network traffic. The 3-D MF extends the traditional fuzzy planar mapping and further provides a spatial mapping among "fuzziness-randomness-state". The additional stochastic expression of the 3-D MF gives the PFC an additional degree of freedom to handle the stochastic features of self-similar traffic. Simulation experiments show that the proposed control method achieves superior performance compared to traditional control schemes in a stochastic environment.

11.
PRL: A probabilistic relational language
In this paper, we describe the syntax and semantics for a probabilistic relational language (PRL). PRL is a recasting of recent work in Probabilistic Relational Models (PRMs) into a logic programming framework. We show how to represent varying degrees of complexity in the semantics including attribute uncertainty, structural uncertainty and identity uncertainty. Our approach is similar in spirit to the work in Bayesian Logic Programs (BLPs), and Logical Bayesian Networks (LBNs). However, surprisingly, there are still some important differences in the resulting formalism; for example, we introduce a general notion of aggregates based on the PRM approaches. One of our contributions is that we show how to support richer forms of structural uncertainty in a probabilistic logical language than have been previously described. Our goal in this work is to present a unifying framework that supports all of the types of relational uncertainty yet is based on logic programming formalisms. We also believe that it facilitates understanding the relationship between the frame-based approaches and alternate logic programming approaches, and allows greater transfer of ideas between them. Editors: Hendrik Blockeel, David Jensen and Stefan Kramer. An erratum to this article is available at .

12.
This paper discusses the critical infrastructure protection problem in supply systems facing potential intentional attacks. Considering that the protection of a facility cannot always be successful, we present the r-interdiction median problem with probabilistic protection. Defensive resources are allocated according to the degree of importance of the service facilities. Computational experiments demonstrate the benefits of concentrating resources on a few critical sites, as well as the importance of introducing probabilistic factors. Furthermore, we discuss the problem in a scenario with multiple interdictors. We find that the worst-case interdictions made by multiple interdictors can cause much more serious system impairment than a single interdictor, and that such losses can sometimes be effectively alleviated by adjusting the fortification plans. To solve the problem, we propose an iterated greedy search method that produces good approximations to optimal fortification strategies rather quickly, even for instances of considerable size.
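The iterated greedy search mentioned at the end alternates a destruction phase (drop part of the current fortification plan) with a greedy reconstruction phase. The sketch below illustrates that loop on a toy loss function; the site names, importance values, and loss model are invented for illustration and do not reproduce the paper's interdiction model:

```python
import random

def iterated_greedy(sites, k, loss, iters=200, destroy=2, seed=0):
    """Iterated greedy sketch for choosing k sites to fortify: repeatedly
    drop `destroy` random sites from the plan, rebuild it greedily against
    `loss`, and keep the plan if it is no worse than the best found."""
    rng = random.Random(seed)
    plan = set(rng.sample(sites, k))
    best = (loss(plan), set(plan))
    for _ in range(iters):
        partial = set(rng.sample(sorted(plan), k - destroy))  # destruction
        while len(partial) < k:                               # greedy rebuild
            cand = min((s for s in sites if s not in partial),
                       key=lambda s: loss(partial | {s}))
            partial.add(cand)
        if loss(partial) <= best[0]:
            best = (loss(partial), set(partial))
            plan = partial
    return best

# toy loss: total "importance" left unprotected for an interdictor to destroy
importance = {"A": 10, "B": 7, "C": 5, "D": 2, "E": 1}
loss_fn = lambda protected: sum(v for s, v in importance.items()
                                if s not in protected)
best_loss, best_plan = iterated_greedy(list(importance), 3, loss_fn)
```

In the paper the inner loss evaluation would itself solve a worst-case interdiction subproblem, which is where most of the computational effort goes.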

13.
We explore an approach to developing Datalog parallelization strategies that aims at good expected rather than worst-case performance. To illustrate, we consider a very simple parallelization strategy that applies to all Datalog programs. We prove that this strategy has very good expected performance under an equal distribution of inputs, using an extension of 0-1 laws adapted to this context. The analysis is confirmed by experimental results on randomly generated data.

14.
Evaluating clustering results using classification methods
Existing geometry-based cluster validity indices cannot effectively evaluate clustering results on data sets with different structures. To address this, a method is proposed that evaluates clustering results by means of classification. The method takes the cluster labels assigned to the objects as the known class labels of a classification problem, reclassifies the data set using cross-validation, and measures clustering validity by comparing the differences between the clustering and classification results. The rationale is that a data set whose structure is easy to cluster should also be easy to classify. Experiments and analyses on both synthetic and real data verify the feasibility and effectiveness of the method.
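The evaluation procedure described above can be sketched with scikit-learn, using KMeans and a kNN classifier as stand-ins for whatever clustering and classification algorithms the paper actually used (those choices, and the function name, are assumptions for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def classification_based_cluster_score(X, n_clusters, cv=5, seed=0):
    """Treat cluster assignments as class labels and measure how well a
    cross-validated classifier reproduces them. Structure that is easy to
    cluster should also be easy to classify, so higher accuracy indicates
    a more valid clustering."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X, labels, cv=cv).mean()

# two well-separated blobs should score near 1.0 for k = 2
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)),
               rng.normal(5, 0.3, (30, 2))])
score = classification_based_cluster_score(X, n_clusters=2)
```

Comparing this score across candidate values of `n_clusters` gives a validity curve that does not depend on a specific geometric index.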

15.
16.
In large-scale group decision making (LGDM) problems, a large number of stakeholders take part in the decision process. Every stakeholder utilises a linguistic preference relation (LPR) to represent her/his preference information over the alternatives, and a probabilistic LPR (PLPR) is then established to represent the group preference. However, some stakeholders may provide only partial preference information about the alternatives; thus, a PLPR with incomplete probabilities can be used to manage LGDM problems in complex environments. Based on the defined expected multiplicative consistency of a PLPR, a probability computation model is established via mathematical programming to derive the missing probabilities of the PLPR. In addition, an iterative algorithm for improving consistency is proposed to obtain a PLPR with satisfactory consistency. Finally, a real-world investment decision-making problem with multiple stakeholders is solved to demonstrate the effectiveness of the proposed method.

17.
A nonsupervised parametric learning model using a randomized labeling procedure is discussed. Our model is an extension of Agrawala's model and is applicable even when the probability of occurrence of each category is unknown. Furthermore, the proposed method is computationally feasible for identifying a finite mixture. The learning algorithm for multivariate normal distributions is presented in this paper.

18.
A linear process-algebraic format with data for probabilistic automata
This paper presents a novel linear process-algebraic format for probabilistic automata. The key ingredient is a symbolic transformation of probabilistic process algebra terms that incorporate data into this linear format while preserving strong probabilistic bisimulation. This generalises similar techniques for traditional process algebras with data, and — more importantly — treats data and data-dependent probabilistic choice in a fully symbolic manner, leading to the symbolic analysis of parameterised probabilistic systems. We discuss several reduction techniques that can easily be applied to our models. A validation of our approach on two benchmark leader election protocols shows reductions of more than an order of magnitude.

19.
This paper presents a novel revision of the framework of Hybrid Probabilistic Logic Programming, along with a complete semantics characterization, to enable the encoding of and reasoning about real-world applications. The language of the Hybrid Probabilistic Logic Programs framework is extended to allow the use of non-monotonic negation, and two alternative semantic characterizations are defined: stable probabilistic model semantics and probabilistic well-founded semantics. These semantics generalize the stable model semantics and well-founded semantics of traditional normal logic programs, and they reduce to the semantics of Hybrid Probabilistic Logic Programs for programs without negation. This is the first time that two different semantics for Hybrid Probabilistic Programs with non-monotonic negation, as well as their relationships, are described. This proposal provides the foundational grounds for developing computational methods for implementing the proposed semantics. Furthermore, it makes it clearer how to characterize non-monotonic negation in probabilistic logic programming frameworks for commonsense reasoning. An erratum to this article can be found at .

20.
In many online applications of machine learning, the computational resources available for classification will vary from time to time. Most techniques are designed to operate within the constraints of the minimum expected resources and fail to utilize further resources when they are available. We propose a novel anytime classification algorithm, anytime averaged probabilistic estimators (AAPE), which is capable of delivering strong prediction accuracy with little CPU time and utilizing additional CPU time to increase classification accuracy. The idea is to run an ordered sequence of very efficient Bayesian probabilistic estimators (single improvement steps) until classification time runs out. Theoretical studies and empirical validations reveal that by properly identifying, ordering, invoking and ensembling single improvement steps, AAPE is able to accomplish accurate classification whenever it is interrupted. It is also able to output class probability estimates beyond simple 0/1-loss classifications, as well as adeptly handle incremental learning. Editor: David Page.
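The anytime control loop described here (run increasingly expensive estimators and ensemble their outputs until time runs out) can be sketched generically. AAPE's single improvement steps are approximated below by a list of arbitrary estimator callables, and the incremental averaging is the ensembling step; all names are illustrative assumptions:

```python
import time

def anytime_classify(x, estimators, deadline):
    """Anytime sketch in the spirit of AAPE: run an ordered sequence of
    probability estimators, maintaining a running average of their
    outputs, until the time budget is exhausted; return the estimate
    built so far and the number of estimators actually used."""
    probs, used = None, 0
    for est in estimators:
        if time.monotonic() >= deadline:
            break  # interrupted: return the best ensemble so far
        p = est(x)
        probs = p if probs is None else [(a * used + b) / (used + 1)
                                         for a, b in zip(probs, p)]
        used += 1
    return probs, used

# hypothetical estimators of increasing cost for a 2-class problem
ests = [lambda x: [0.6, 0.4], lambda x: [0.8, 0.2], lambda x: [0.9, 0.1]]
probs, used = anytime_classify(None, ests, deadline=time.monotonic() + 1.0)
```

The key anytime property is that the returned `probs` is valid no matter where the loop is interrupted, and it only improves as more steps complete.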
