Similar documents
 Found 20 similar documents (search time: 15 ms)
1.
In this study, we present a nearest neighbour cuckoo search algorithm with probabilistic mutation, called NNCS. In the proposed approach, the nearest neighbour strategy selects guides for generating new solutions: a solution learns from its nearest neighbour solutions rather than from the best solution obtained so far. Two similarity metrics, one solution-based and one fitness-based, are employed to select the nearest neighbour solutions. Furthermore, the probabilistic mutation strategy restricts each new solution to learning from its nearest neighbours in a subset of dimensions only. In addition, the nearest neighbour strategy allows the best solution to participate in the search as well. Extensive experiments, carried out on 20 benchmark functions with different properties, demonstrate the improvement in effectiveness and efficiency brought by the nearest neighbour strategy and the probabilistic mutation strategy.
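The abstract does not give the update equations, but the neighbour-guided move with partial-dimension mutation can be sketched roughly as follows (the function names, the Euclidean solution-based metric, and the mutation rate `p_mut` are illustrative assumptions, not taken from the paper):

```python
import random

def nearest_neighbour(pop, i):
    """Index of the solution closest to pop[i] under a solution-based
    (Euclidean) similarity metric."""
    dists = [(sum((a - b) ** 2 for a, b in zip(pop[i], pop[j])), j)
             for j in range(len(pop)) if j != i]
    return min(dists)[1]

def nn_step(pop, i, p_mut=0.3, seed=0):
    """Move pop[i] toward its nearest neighbour, but only in a randomly
    chosen subset of dimensions (probabilistic mutation)."""
    rng = random.Random(seed)
    j = nearest_neighbour(pop, i)
    new = list(pop[i])
    for d in range(len(new)):
        if rng.random() < p_mut:  # learn from the neighbour in this dimension only
            new[d] += rng.uniform(0.0, 1.0) * (pop[j][d] - pop[i][d])
    return new
```

The fitness-based variant would simply rank neighbours by fitness difference instead of Euclidean distance.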

2.
In this paper, a theoretical and experimental analysis of the error-reject trade-off achievable by linearly combining the outputs of an ensemble of classifiers is presented. To this aim, the theoretical framework previously developed by Tumer and Ghosh for the analysis of the simple average rule without the reject option has been extended. Analytical results are provided that allow one to evaluate the improvement of the error-reject trade-off achievable by simple averaging of the classifier outputs, under different assumptions about the distributions of the estimation errors affecting the a posteriori probabilities. The conditions under which the weighted average can provide a better error-reject trade-off than the simple average are then determined. From the theoretical results obtained under the assumption of unbiased and uncorrelated estimation errors, simple guidelines for the design of multiple classifier systems using linear combiners are given. Finally, an experimental evaluation and comparison of the error-reject trade-off of the simple and weighted averages is reported for five real data sets. The results show the practical relevance of the proposed guidelines in the design of linear combiners.
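As a rough illustration of a linear combiner with a reject option (a Chow-style threshold on the fused posterior; the threshold value and function name are assumptions, not the paper's rule):

```python
def combine_and_reject(posteriors, t=0.6, weights=None):
    """posteriors: one a-posteriori probability vector per classifier.
    Returns (class_index, False), or (None, True) if the example is rejected
    because the top fused posterior falls below threshold t."""
    k = len(posteriors)
    w = weights or [1.0 / k] * k          # simple average by default
    n_classes = len(posteriors[0])
    fused = [sum(w[i] * posteriors[i][c] for i in range(k))
             for c in range(n_classes)]
    best = max(range(n_classes), key=fused.__getitem__)
    return (None, True) if fused[best] < t else (best, False)
```

Passing non-uniform `weights` gives the weighted average analysed in the paper.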

3.
The problem of classifier combination is considered in the context of the two main fusion scenarios: fusion of opinions based on identical and on distinct representations. We develop a theoretical framework for classifier combination for these two scenarios. For multiple experts using distinct representations we argue that many existing schemes such as the product rule, sum rule, min rule, max rule, majority voting, and weighted combination, can be considered as special cases of compound classification. We then consider the effect of classifier combination in the case of multiple experts using a shared representation where the aim of fusion is to obtain a better estimate of the appropriate a posteriori class probabilities. We also show that the two theoretical frameworks can be used for devising fusion strategies when the individual experts use features some of which are shared and the remaining ones distinct. We show that in both cases (distinct and shared representations), the expert fusion involves the computation of a linear or nonlinear function of the a posteriori class probabilities estimated by the individual experts. Classifier combination can therefore be viewed as a multistage classification process whereby the a posteriori class probabilities generated by the individual classifiers are considered as features for a second stage classification scheme. Most importantly, when the linear or nonlinear combination functions are obtained by training, the distinctions between the two scenarios fade away, and one can view classifier fusion in a unified way.
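The fixed combination rules named above admit very small implementations. A sketch of three of them (each takes one posterior vector per classifier and returns the index of the winning class):

```python
from functools import reduce

def product_rule(posts):
    # multiply per-class posteriors across classifiers
    scores = [reduce(lambda a, b: a * b, col) for col in zip(*posts)]
    return max(range(len(scores)), key=scores.__getitem__)

def sum_rule(posts):
    # add per-class posteriors across classifiers
    scores = [sum(col) for col in zip(*posts)]
    return max(range(len(scores)), key=scores.__getitem__)

def majority_vote(posts):
    # each classifier votes for its own top class
    votes = [max(range(len(p)), key=p.__getitem__) for p in posts]
    return max(set(votes), key=votes.count)
```

The min and max rules are obtained by replacing the column aggregate with `min(col)` or `max(col)`.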

4.
This paper presents a novel reject rule for support vector classifiers, based on the receiver operating characteristic (ROC) curve. The rule minimises the expected classification cost, defined on the basis of the classification and error costs for the particular application at hand. The rationale of the proposed approach is that the ROC curve of the SVM contains all of the necessary information to find the optimal threshold values that minimise the expected classification cost. To evaluate the effectiveness of the proposed reject rule, a large number of tests have been performed on several data sets and with different kernels. A comparison technique, based on the Wilcoxon rank sum test, has been defined and employed to provide the results at an adequate significance level. The experiments have clearly confirmed the effectiveness of the proposed reject rule.
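The paper's exact ROC-based rule is not reproduced in the abstract; a simplified sketch of the underlying idea sweeps reject thresholds on the SVM scores and keeps the band that minimises the expected cost (the symmetric band around zero and the cost values are illustrative assumptions):

```python
def best_reject_band(scores, labels, c_err=1.0, c_rej=0.2):
    """Sweep symmetric reject bands (-t, t) on signed SVM scores and return
    the t minimising expected cost: c_err per error, c_rej per reject.
    labels are +1 / -1."""
    candidates = sorted({abs(s) for s in scores} | {0.0})

    def cost(t):
        total = 0.0
        for s, y in zip(scores, labels):
            if abs(s) < t:                       # inside the band: reject
                total += c_rej
            elif (s >= 0) != (y == 1):           # outside the band: classify
                total += c_err
        return total / len(scores)

    return min(candidates, key=cost)
```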

5.
Yue, Chew Lim. Pattern Recognition, 2002, 35(12): 2823-2832
Combination of multiple classifiers is regarded as an effective strategy for achieving a practical system of handwritten character recognition. A great deal of research on the methods of combining multiple classifiers has been reported to improve the recognition performance of single characters. However, in a practical application, the recognition performance of a group of characters (such as a postcode or a word) is more significant and more crucial. With the motivation of optimizing the recognition performance of postcode rather than that of single characters, this paper presents an approach to combine multiple classifiers in such a way that the combination decision is carried out at the postcode level rather than at the single character level, in which a probabilistic postcode dictionary is utilized as well to improve the postcode recognition ability. It can be seen from the experimental results that the proposed approach markedly improves the postcode recognition performance and outperforms the commonly used methods of combining multiple classifiers at the single character level. Furthermore, the sorting performance of some particular bins with respect to the postcodes with low frequency of occurrence can be improved significantly at the same time.
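A toy sketch of deciding at the postcode level: fuse per-character posteriors by averaging across classifiers, weight whole-code scores by a dictionary prior, and pick the best dictionary entry (the data layout and names are assumptions; the paper's fusion rule may differ):

```python
def best_postcode(char_posts, dictionary):
    """char_posts[pos][clf][digit]: per-position, per-classifier posteriors.
    dictionary: {postcode_string: prior probability}.
    The decision is made over whole codes, not single characters."""
    def score(code):
        s = dictionary[code]                     # probabilistic dictionary prior
        for pos, d in enumerate(code):
            clfs = char_posts[pos]
            s *= sum(p[int(d)] for p in clfs) / len(clfs)  # fused char posterior
        return s
    return max(dictionary, key=score)
```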

6.
Multiple fault diagnosis (MFD) is an effective measure for tackling real shop-floor problems and reducing the total lifetime maintenance cost of a system. It is a well-known computationally complex problem, whose complexity increases exponentially with the number of faults; this warrants the application of heuristic techniques or AI-based optimization tools to diagnose the exact faults in real time. In this research, a rollout strategy-based probabilistic causal model (RSPCM) is proposed to solve graph-based multiple fault diagnosis problems. The rollout strategy is a single-step iterative process, implemented here to improve the efficiency and robustness of the probabilistic causal model. Instead of enumerating all possible combinations of faults, RSPCM collects the faults corresponding to each observed manifestation, which yields better results than competing methods. Intensive computational experiments on well-known data sets demonstrate the superiority of the proposed heuristic over earlier approaches in the literature. From the experimental results it can easily be inferred that the proposed methodology diagnoses the exact fault in minimum fault-isolation time compared to other approaches.

7.
In this paper we extend de Nicola and Hennessy's testing theory to deal with probabilities. We say that two processes are testing equivalent if the probabilities with which they pass any test are equal. We present three alternative semantic views of our testing equivalence. First, we introduce adequate extensions of acceptance sets (inducing an operational characterization) and acceptance trees (inducing a denotational semantics). We also present a sound and complete axiomatization of our testing equivalence. Thus, this paper represents a complete study of the adaptation of classical testing theory to probabilistic processes.

8.
PRL: A probabilistic relational language
In this paper, we describe the syntax and semantics for a probabilistic relational language (PRL). PRL is a recasting of recent work in Probabilistic Relational Models (PRMs) into a logic programming framework. We show how to represent varying degrees of complexity in the semantics including attribute uncertainty, structural uncertainty and identity uncertainty. Our approach is similar in spirit to the work in Bayesian Logic Programs (BLPs), and Logical Bayesian Networks (LBNs). However, surprisingly, there are still some important differences in the resulting formalism; for example, we introduce a general notion of aggregates based on the PRM approaches. One of our contributions is that we show how to support richer forms of structural uncertainty in a probabilistic logical language than have been previously described. Our goal in this work is to present a unifying framework that supports all of the types of relational uncertainty yet is based on logic programming formalisms. We also believe that it facilitates understanding the relationship between the frame-based approaches and alternate logic programming approaches, and allows greater transfer of ideas between them. Editors: Hendrik Blockeel, David Jensen and Stefan Kramer. An erratum to this article is available at .

9.
This paper discusses the critical infrastructure protection problem in supply systems under potential intentional attacks. Considering that the protection of a facility cannot always be successful, we present the r-interdiction median problem with probabilistic protection. Defensive resources are allocated according to the degree of importance of service facilities. Computational experiments demonstrate the benefits brought by centralizing resources at a few critical sites, as well as the importance of introducing probabilistic factors. Furthermore, we discuss the problem in a scenario with multiple interdictors. It is found that the worst-case interdictions made by multiple interdictors may cause much more serious system impairment than a single interdictor, and such losses can sometimes be effectively alleviated by adjusting the fortification plans. To solve the problem, we propose an iterated greedy search method which produces good approximations to optimal fortification strategies rather quickly, even for instances of considerable size.

10.
11.
A nonsupervised parametric learning model using a randomized labeling procedure is discussed. Our model is an extension of Agrawala's model and is applicable even when the probability of occurrence of each category is unknown. Furthermore, the method proposed here is computationally feasible for identifying a finite mixture. The learning algorithm for the multivariate normal distribution is presented in this paper.
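The randomized labeling procedure itself is not specified in the abstract; as a stand-in illustration of unsupervised identification of a finite normal mixture, here is a standard EM sketch for a two-component univariate case (EM is a substitute for the paper's method, shown only to make "identifying a finite mixture" concrete):

```python
import math

def em_two_gaussians(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture to xs by EM.
    Returns (means, variances, mixing weights)."""
    mu = [min(xs), max(xs)]          # crude initialisation
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            ps = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                  / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            z = sum(ps)
            resp.append([p / z for p in ps])
        # M-step: re-estimate parameters from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
            pi[k] = nk / len(xs)
    return mu, var, pi
```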

12.
We explore an approach to developing Datalog parallelization strategies that aims at good expected rather than worst-case performance. To illustrate, we consider a very simple parallelization strategy that applies to all Datalog programs. We prove that this has very good expected performance under equal distribution of inputs. This is done using an extension of 0-1 laws adapted to this context. The analysis is confirmed by experimental results on randomly generated data.

13.
A linear process-algebraic format with data for probabilistic automata
This paper presents a novel linear process-algebraic format for probabilistic automata. The key ingredient is a symbolic transformation of probabilistic process algebra terms that incorporate data into this linear format while preserving strong probabilistic bisimulation. This generalises similar techniques for traditional process algebras with data, and, more importantly, treats data and data-dependent probabilistic choice in a fully symbolic manner, leading to the symbolic analysis of parameterised probabilistic systems. We discuss several reduction techniques that can easily be applied to our models. A validation of our approach on two benchmark leader election protocols shows reductions of more than an order of magnitude.

14.
This paper presents a novel revision of the framework of Hybrid Probabilistic Logic Programming, along with a complete semantics characterization, to enable the encoding of and reasoning about real-world applications. The language of the Hybrid Probabilistic Logic Programs framework is extended to allow the use of non-monotonic negation, and two alternative semantic characterizations are defined: stable probabilistic model semantics and probabilistic well-founded semantics. These semantics generalize the stable model semantics and well-founded semantics of traditional normal logic programs, and they reduce to the semantics of Hybrid Probabilistic Logic Programs for programs without negation. This is the first time that two different semantics for Hybrid Probabilistic Programs with non-monotonic negation, as well as their relationships, are described. This proposal provides the foundational grounds for developing computational methods for implementing the proposed semantics. Furthermore, it clarifies how to characterize non-monotonic negation in probabilistic logic programming frameworks for commonsense reasoning. An erratum to this article can be found at .

15.
Color-based tracking is prone to failure in situations where visually similar targets move in close proximity or occlude each other. To deal with the ambiguities in the visual information, we propose an additional color-independent visual model based on the target's local motion. This model is calculated from the optical flow induced by the target in consecutive images. By modifying a color-based particle filter to account for the target's local motion, the combined color/local-motion-based tracker is constructed. We compare the combined tracker to a purely color-based tracker on a challenging dataset from hand tracking, surveillance and sports. The experiments show that the proposed local-motion model largely resolves situations when the target is occluded by, or moves in front of, a visually similar object.
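The fusion step in such a particle filter can be sketched as a per-particle product of the two likelihoods (the likelihood functions themselves, colour-histogram and optical-flow based in the paper, are abstracted away as callables here):

```python
def fuse_weights(particles, color_lik, motion_lik):
    """Combined color/local-motion weighting: each particle's weight is the
    product of its colour likelihood and its motion likelihood, normalised
    to sum to one."""
    w = [color_lik(p) * motion_lik(p) for p in particles]
    z = sum(w)
    return [x / z for x in w]
```

With identical colour likelihoods (two visually similar targets), the motion term alone disambiguates the particles, which is exactly the failure case the paper targets.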

16.
In many online applications of machine learning, the computational resources available for classification will vary from time to time. Most techniques are designed to operate within the constraints of the minimum expected resources and fail to utilize further resources when they are available. We propose a novel anytime classification algorithm, anytime averaged probabilistic estimators (AAPE), which is capable of delivering strong prediction accuracy with little CPU time and utilizing additional CPU time to increase classification accuracy. The idea is to run an ordered sequence of very efficient Bayesian probabilistic estimators (single improvement steps) until classification time runs out. Theoretical studies and empirical validations reveal that by properly identifying, ordering, invoking and ensembling single improvement steps, AAPE is able to accomplish accurate classification whenever it is interrupted. It is also able to output class probability estimates beyond simple 0/1-loss classifications, as well as adeptly handle incremental learning. Editor: David Page.
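A minimal sketch of the anytime loop described above (the single improvement steps are abstracted as callables returning class distributions; the budget handling and averaging ensemble are assumptions, not AAPE's exact scheme):

```python
import time

def anytime_classify(steps, x, budget_s):
    """Run an ordered sequence of probabilistic estimators until the time
    budget runs out, averaging the class distributions produced so far.
    At least one step always runs, so an answer is available on interrupt."""
    acc, n = None, 0
    deadline = time.monotonic() + budget_s
    for step in steps:
        if n and time.monotonic() >= deadline:
            break                                # interrupted: use what we have
        p = step(x)
        acc = p if acc is None else [a + b for a, b in zip(acc, p)]
        n += 1
    return [a / n for a in acc]
```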

17.
In this paper, we prove that a query plan is safe in tuple independent probabilistic databases if and only if its every answer tuple is tree structured in probabilistic graphical models. We classify hierarchical queries into core and non-core hierarchical queries and show that the existing methods can only generate safe plans for core hierarchical queries. Inspired by the bucket elimination framework, we give the sufficient and necessary conditions for the answer relation of every candidate sub-query to be used as a base relation. Finally, the proposed algorithm generates safe plans for extensional query evaluation on non-boolean hierarchical queries and invokes the SPROUT algorithm [24] for intensional query evaluation on boolean queries. A case study on the TPC-H benchmark reveals that the safe plans of Q7 and Q8 can be evaluated efficiently. Furthermore, extensive experiments show that safe plans generated by the proposed algorithm scale well.

18.
Pervasive applications, such as natural habitat monitoring and location-based services, have attracted considerable research interest. These applications, which deploy many sensor devices to collect data from external environments, often have limited network bandwidth and battery resources. The sensors also cannot record accurate values. The uncertainty of data captured by a sensor should thus be considered for query evaluation. To this end, probabilistic queries, which consider data impreciseness and provide statistical guarantees in answers, have recently been studied.

19.
Several researchers have shown that substantial improvements can be achieved in difficult pattern recognition problems by combining the outputs of multiple neural networks. In this work, we present and test a pattern classification multi-net system based on both supervised and unsupervised learning. Following the ‘divide-and-conquer’ framework, the input space is partitioned into overlapping subspaces and neural networks are subsequently used to solve the respective classification subtasks. Finally, the outputs of individual classifiers are appropriately combined to obtain the final classification decision. Two clustering methods have been applied for input space partitioning and two schemes have been considered for combining the outputs of the multiple classifiers. Experiments on well-known data sets indicate that the multi-net classification system exhibits promising performance compared with the case of single network training, both in terms of error rates and in terms of training speed (especially if the training of the classifiers is done in parallel). Correspondence and offprint requests to: D. Frosyniotis, National Technical University of Athens, Department of Electrical and Computer Engineering, Zographou 157 73, Athens, Greece. E-mail: andreas@cs.ntua.gr
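A rough sketch of the divide-and-conquer combiner: route an input to its nearest (possibly overlapping) regions and average the regional experts' outputs (the nearest-centroid routing and the `overlap` parameter are illustrative assumptions; the paper uses trained clusterings and networks):

```python
def multi_net_predict(x, centroids, experts, overlap=2):
    """Send x to the `overlap` region experts whose centroids are nearest
    and average their class-probability outputs."""
    order = sorted(range(len(centroids)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(x, centroids[i])))
    chosen = order[:overlap]                     # overlapping subspaces
    outs = [experts[i](x) for i in chosen]
    n = len(outs)
    return [sum(col) / n for col in zip(*outs)]  # combine expert outputs
```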

20.
Interval methods have been shown to be efficient, robust and reliable for solving difficult set-membership localization problems. However, they are unsuitable in a probabilistic context, where the approximation of an unbounded probability density function by a set cannot be accepted. This paper proposes a new probabilistic approach which makes it possible to use classical set-membership localization methods that are robust with respect to outliers. The approach is illustrated on two simulated examples.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号