Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
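The sum-product algorithm reviewed above is easiest to see on a chain, where one forward and one backward message pass yield exact marginals. A minimal sketch (toy potentials of our own, not taken from the paper), checked against brute-force enumeration:

```python
import itertools

import numpy as np

# Toy pairwise MRF on a chain x0 - x1 - x2 of binary variables.
# unary[i] is the local evidence at node i; `pair` is the coupling on each edge.
unary = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
pair = np.array([[1.2, 0.4], [0.4, 1.2]])

def brute_force_marginals():
    # Sum the unnormalised joint over all 2^3 configurations.
    p = np.zeros((3, 2))
    for x in itertools.product([0, 1], repeat=3):
        w = unary[0][x[0]] * unary[1][x[1]] * unary[2][x[2]]
        w *= pair[x[0], x[1]] * pair[x[1], x[2]]
        for i in range(3):
            p[i, x[i]] += w
    return p / p.sum(axis=1, keepdims=True)

def sum_product_marginals():
    # On a chain, one forward and one backward sweep are exact.
    fwd = [np.ones(2) for _ in range(3)]  # message arriving at node i from the left
    bwd = [np.ones(2) for _ in range(3)]  # message arriving at node i from the right
    fwd[1] = pair.T @ (unary[0] * fwd[0])
    fwd[2] = pair.T @ (unary[1] * fwd[1])
    bwd[1] = pair @ (unary[2] * bwd[2])
    bwd[0] = pair @ (unary[1] * bwd[1])
    marg = np.array([unary[i] * fwd[i] * bwd[i] for i in range(3)])
    return marg / marg.sum(axis=1, keepdims=True)
```

On tree-structured graphs this message passing is exact; on graphs with cycles the same updates are simply iterated, giving the "loopy" belief propagation approximation the abstract mentions.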

2.
3.
Belief propagation (BP) on cyclic graphs is an efficient algorithm for computing approximate marginal probability distributions over single nodes and neighboring nodes in the graph. However, it does not prescribe a way to compute joint distributions over pairs of distant nodes in the graph. In this article, we propose two new algorithms for approximating these pairwise probabilities, based on the linear response theorem. The first is a propagation algorithm that is shown to converge if BP converges to a stable fixed point. The second algorithm is based on matrix inversion. Applying these ideas to Gaussian random fields, we derive a propagation algorithm for computing the inverse of a matrix.

4.
The probabilistic reasoning scheme of Dempster-Shafer theory provides a remarkably efficient bug identification algorithm for a hierarchical Buggy model. In the particular Buggy model generated by the repair theory of Brown & Van Lehn (1980, A generative theory of bugs in procedural skills, Cognitive Science, 2, 155-192), the schemes of both Shafer & Logan (1987, Implementing Dempster's rule for hierarchical evidence, Artificial Intelligence, 33, 271-298) and Gordon & Shortliffe (1985, A method for managing evidential reasoning in a hierarchical hypothesis space, Artificial Intelligence, 26, 324-357) provide almost identical computational accuracy, although the latter involves a "smallest superset" approximation. If n denotes the number of bugs to be identified, the computational complexity of the two schemes, originally O(n^(4/3)) and O(n^2) respectively, can be improved to O(n) using a simplified top-down calculation scheme: from among all the nodes we first locate the particular "parental" node to which the bug belongs, and then the bug itself among the siblings within that node. On average, about 5-7 problems are adequate to raise the belief function of the bug to the 95% level under these evidential reasoning schemes.
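For readers unfamiliar with the underlying machinery, the flat (non-hierarchical) form of Dempster's rule of combination fits in a short function. Note this is the general rule, not the linear-time hierarchical scheme the abstract describes, and the mass assignments below are purely illustrative:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: intersect focal elements pairwise, multiply masses,
    and renormalise by 1 - K, where K is the mass assigned to empty intersections."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # conflicting evidence
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two mass functions over the frame {a, b, c}.
m1 = {frozenset('a'): 0.6, frozenset('abc'): 0.4}
m2 = {frozenset('b'): 0.3, frozenset('ac'): 0.5, frozenset('abc'): 0.2}
fused = dempster_combine(m1, m2)
```

Repeated application of this rule across problems is what drives the belief in a single bug toward the 95% level the abstract reports.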

5.
Nishiyama, Yu; Kanagawa, Motonobu; Gretton, Arthur; Fukumizu, Kenji. Machine Learning (2020) 109(5): 939-972
Machine Learning - Kernel Bayesian inference is a principled approach to nonparametric inference in probabilistic graphical models, where probabilistic relationships between variables are learned...

6.
Undirected graphical models have been successfully used to jointly model the spatial and the spectral dependencies in earth observing hyperspectral images. They produce less noisy, smooth, and spatially coherent land-cover maps and give top accuracies on many datasets. Moreover, they can easily be combined with other state-of-the-art approaches, such as deep learning. This has made them an essential tool for remote-sensing researchers and practitioners. However, graphical models have not been easily accessible to the larger remote-sensing community as they are not discussed in standard remote-sensing textbooks and not included in the popular remote-sensing software and toolboxes. In this tutorial, we provide a theoretical introduction to Markov random field and conditional random field-based spatial–spectral classification for land-cover mapping, along with a detailed step-by-step practical guide on applying these methods using freely available software. Furthermore, the discussed methods are benchmarked on four public hyperspectral datasets for a fair comparison among themselves and easy comparison with the vast number of methods in the literature that use the same datasets. The source code necessary to reproduce all the results in the paper is published online to make it easier for readers to apply these techniques to different remote-sensing problems.
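As a taste of the spatial-smoothing idea (a sketch with toy weights of our own, not the tutorial's actual pipeline), iterated conditional modes with a Potts-style smoothness prior can clean a noisy two-class label map:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth label image: two regions (left = class 0, right = class 1),
# corrupted by 10% independent label noise.
truth = np.zeros((20, 20), dtype=int)
truth[:, 10:] = 1
noisy = truth ^ (rng.random(truth.shape) < 0.10)

def icm(labels, beta=1.5, iters=5):
    """Iterated conditional modes: greedily set each pixel to the class that
    best agrees with its observation (data term) and its 4-neighbourhood (prior)."""
    lab = labels.copy()
    h, w = lab.shape
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                best, best_score = lab[i, j], -np.inf
                for c in (0, 1):
                    score = 2.0 if noisy[i, j] == c else 0.0  # data term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and lab[ni, nj] == c:
                            score += beta  # smoothness term
                    if score > best_score:
                        best, best_score = c, score
                lab[i, j] = best
    return lab

cleaned = icm(noisy)
```

ICM finds only a local optimum, which is why the tutorial's MRF/CRF pipelines typically use stronger inference (graph cuts, loopy BP), but the energy being minimised has the same data-plus-smoothness form.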

7.
We demonstrate that certain large-clique graph triangulations can be useful for reducing computational requirements when making queries on mixed stochastic/deterministic graphical models. This is counter to the conventional wisdom that triangulations that minimize clique size are always most desirable for use in computing queries on graphical models. Many of these large-clique triangulations are non-minimal and are thus unattainable via the popular elimination algorithm. We introduce ancestral pairs as the basis for novel triangulation heuristics and prove that no more than the addition of edges between ancestral pairs needs to be considered when searching for state space optimal triangulations in such graphs. Empirical results on random and real world graphs are given. We also present an algorithm and correctness proof for determining if a triangulation can be obtained via elimination, and we show that the decision problem associated with finding optimal state space triangulations in this mixed setting is NP-complete.

8.
Innovations in Systems and Software Engineering - We present AQUA, a new probabilistic inference algorithm that operates on probabilistic programs with continuous posterior distributions. AQUA...

9.
Traditionally, gesture-based interaction in virtual environments is composed of either static, posture-based gesture primitives or temporally analyzed dynamic primitives. However, it would be ideal to incorporate both static and dynamic gestures to fully utilize the potential of gesture-based interaction. To that end, we propose a probabilistic framework that incorporates both static and dynamic gesture primitives. We call these primitives Gesture Words (GWords). Using a probabilistic graphical model (PGM), we integrate these heterogeneous GWords and a high-level language model in a coherent fashion. Composite gestures are represented as stochastic paths through the PGM. A gesture is analyzed by finding the path that maximizes the likelihood on the PGM with respect to the video sequence. To facilitate online computation, we propose a greedy algorithm for performing inference on the PGM. The parameters of the PGM can be learned via three different methods: supervised, unsupervised, and hybrid. We have implemented the PGM model for a gesture set of ten GWords with six composite gestures. The experimental results show that the PGM can accurately recognize composite gestures.

10.
11.
Expressing knowledge as expert experience and discovering knowledge implied in data are two important ways for knowledge acquisition. Consistent combination of these two kinds of knowledge has attracted much attention due to the potential applications to knowledge fusion and wide requirements of decision support. In this paper, we focus on the probabilistic modeling of expert experience represented as logical predicate formulas, aiming at the effective fusion of logical and probabilistic knowledge. Taking the qualitative probabilistic network (QPN) as the underlying framework of probabilistic knowledge implied in data, as well as the abstraction of general Bayesian networks (BNs), we construct the probabilistic graphical model for both the given predicate formulas and the ultimate result of knowledge fusion. We first propose the concept and the construction algorithm of the predicate graph (PG) to describe the dependence relations among predicate formulas, and discuss PG's probabilistic semantics correspondingly. We then prove that PG is a probability dependency model and has the same semantics as a general probabilistic graphical model. Consequently, we give the method for fusing PG and QPN. Experimental results show the effectiveness of our methods.

12.
Graphical models, such as Bayesian networks and Markov networks, represent joint distributions over a set of variables by means of a graph. When the graph is singly connected, local propagation rules of the sort proposed by Pearl (1988) are guaranteed to converge to the correct posterior probabilities. Recently a number of researchers have empirically demonstrated good performance of these same local propagation schemes on graphs with loops, but a theoretical understanding of this performance has yet to be achieved. For graphical models with a single loop, we derive an analytical relationship between the probabilities computed using local propagation and the correct marginals. Using this relationship we show a category of graphical models with loops for which local propagation gives rise to provably optimal maximum a posteriori assignments (although the computed marginals will be incorrect). We also show how nodes can use local information in the messages they receive in order to correct their computed marginals. We discuss how these results can be extended to graphical models with multiple loops and show simulation results suggesting that some properties of propagation on single-loop graphs may hold for a larger class of graphs. Specifically we discuss the implication of our results for understanding a class of recently proposed error-correcting codes known as turbo codes.
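The single-loop setting is easy to reproduce: running parallel sum-product updates on a 3-cycle of binary variables converges, and the resulting beliefs come out close to, but not equal to, the exact marginals, consistent with the analysis above. The potentials below are illustrative values of our own:

```python
import itertools

import numpy as np

# Binary pairwise MRF on the single loop 0 - 1 - 2 - 0.
unary = np.array([[0.6, 0.4], [0.5, 0.5], [0.3, 0.7]])
pair = np.array([[1.5, 0.5], [0.5, 1.5]])  # same attractive coupling on every edge

def exact_marginals():
    p = np.zeros((3, 2))
    for x in itertools.product([0, 1], repeat=3):
        w = np.prod([unary[i, x[i]] for i in range(3)])
        w *= pair[x[0], x[1]] * pair[x[1], x[2]] * pair[x[2], x[0]]
        for i in range(3):
            p[i, x[i]] += w
    return p / p.sum(axis=1, keepdims=True)

def loopy_bp_marginals(iters=200):
    # One message per directed edge of the loop; parallel (flooding) updates.
    edges = [(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)]
    msgs = {e: np.ones(2) for e in edges}
    for _ in range(iters):
        new = {}
        for (i, j) in edges:
            k = 3 - i - j  # node i's other neighbour on the 3-cycle
            m = pair.T @ (unary[i] * msgs[(k, i)])
            new[(i, j)] = m / m.sum()
        msgs = new
    marg = np.zeros((3, 2))
    for i in range(3):
        nbrs = [j for j in range(3) if j != i]
        marg[i] = unary[i] * msgs[(nbrs[0], i)] * msgs[(nbrs[1], i)]
    return marg / marg.sum(axis=1, keepdims=True)
```

The gap between the two sets of marginals is exactly the loop-induced overcounting that the paper characterises analytically.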

13.
Deformable shape detection is an important problem in computer vision and pattern recognition. However, standard detectors are typically limited to locating only a few salient landmarks such as landmarks near edges or areas of high contrast, often conveying insufficient shape information. This paper presents a novel statistical pattern recognition approach to locate a dense set of salient and non-salient landmarks in images of a deformable object. We explore the fact that several object classes exhibit a homogeneous structure such that each landmark position provides some information about the position of the other landmarks. In our model, the relationship between all pairs of landmarks is naturally encoded as a probabilistic graph. Dense landmark detections are then obtained with a new sampling algorithm that, given a set of candidate detections, selects the most likely positions as to maximize the probability of the graph. Our experimental results demonstrate accurate, dense landmark detections within and across different databases.

14.
Causality and belief change play an important role in many applications. This paper focuses on the main issues of causality and interventions in possibilistic graphical models. We show that interventions, which are very useful for representing causal relations between events, can be naturally viewed as a belief change process. In particular, interventions can be handled using a possibilistic counterpart of Jeffrey's rule of conditioning under uncertain inputs. This paper also addresses new issues that arise in the revision of graphical models when handling interventions. We first argue that the order in which observations and interventions are introduced is very important. Then we show that in order to correctly handle sequences of observations and interventions, one needs to change the structure of possibilistic networks. Lastly, an efficient procedure for revising possibilistic causal trees is provided.
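For reference, the classical probabilistic form of Jeffrey's rule that the possibilistic counterpart mimics keeps the conditionals P(a | b) fixed while replacing the marginal of b with the uncertain input q(b). A sketch with made-up numbers:

```python
def jeffrey_update(joint, q):
    """Jeffrey's rule: P'(a) = sum_b P(a | b) * q(b), for a joint table joint[a][b]."""
    n_a, n_b = len(joint), len(joint[0])
    # Current marginal of b, needed to form the conditionals P(a | b).
    p_b = [sum(joint[a][b] for a in range(n_a)) for b in range(n_b)]
    return [sum(joint[a][b] / p_b[b] * q[b] for b in range(n_b)) for a in range(n_a)]

# Joint P(a, b) with P(b) = (0.5, 0.5); uncertain input shifts b to q = (0.8, 0.2).
joint = [[0.3, 0.1], [0.2, 0.4]]
updated = jeffrey_update(joint, [0.8, 0.2])
```

The possibilistic version in the paper replaces sums and products with the corresponding possibilistic operations; this probabilistic sketch only conveys the shape of the update.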

15.
EX- and BC-type identification, one-sided error probabilistic inference, and reliable frequency identification on sets of functions are introduced. In particular, we relate the one to the other and characterize one-sided error probabilistic inference as coinciding exactly with reliable frequency identification on any set M. Moreover, we show that reliable EX- and BC-frequency inference forms a new discrete hierarchy with breakpoints 1, 1/2, 1/3, ….

16.
Probabilistic models are commonly used to evaluate quality attributes, such as reliability, availability, safety and performance of software-intensive systems. The accuracy of the evaluation results depends on a number of system properties which have to be estimated, such as environmental factors or system usage. Researchers have tackled this problem by including uncertainties in the probabilistic models and solving them analytically or with simulations. The input parameters are commonly assumed to be normally distributed. Accordingly, reporting the mean and variances of the resulting attributes is usually considered sufficient. However, many of the uncertain factors do not follow normal distributions, and analytical methods to derive objective uncertainties become impractical with increasing complexity of the probabilistic models. In this work, we introduce a simulation-based approach which uses Discrete Time Markov Chains and probabilistic model checking to accommodate a diverse set of parameter range distributions. The number of simulation runs is automatically adjusted to the desired significance level, and the approach reports the desired percentiles of the values that ultimately characterise a specific quality attribute of the system. We include a case study which illustrates the flexibility of this approach using the evaluation of several probabilistic properties.
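The simulation idea can be sketched with a toy Discrete Time Markov Chain. The transition probabilities are made up, and a real analysis would use a probabilistic model checker rather than this plain Monte Carlo loop; the sketch only shows how percentile-based reporting of a quality attribute (here, time to failure) falls out of repeated runs:

```python
import random
import statistics

# Toy DTMC: state 0 = operational, 1 = degraded, 2 = failed (absorbing).
P = {0: [(0, 0.90), (1, 0.08), (2, 0.02)],
     1: [(0, 0.20), (1, 0.70), (2, 0.10)],
     2: [(2, 1.0)]}

def time_to_failure(rng):
    """Simulate one trajectory until absorption, returning the number of steps."""
    state, steps = 0, 0
    while state != 2:
        r, acc = rng.random(), 0.0
        for nxt, p in P[state]:
            acc += p
            if r < acc:
                state = nxt
                break
        steps += 1
    return steps

def simulate(runs=20000, seed=1):
    """Report mean, median and 95th percentile of the time-to-failure samples."""
    rng = random.Random(seed)
    samples = [time_to_failure(rng) for _ in range(runs)]
    q = statistics.quantiles(samples, n=100)
    return {"median": q[49], "p95": q[94], "mean": statistics.mean(samples)}
```

Reporting percentiles rather than a mean and variance is precisely what makes this style of analysis robust to non-normal parameter distributions.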

17.
Many real-world domains exhibit rich relational structure and stochasticity and motivate the development of models that combine predicate logic with probabilities. These models describe probabilistic influences between attributes of objects that are related to each other through known domain relationships. To keep these models succinct, each such influence is considered independent of others, which is called the assumption of "independence of causal influences" (ICI). In this paper, we describe a language that consists of quantified conditional influence statements and captures most relational probabilistic models based on directed graphs. The influences due to different statements are combined using a set of combining rules such as Noisy-OR. We motivate and introduce multi-level combining rules, where the lower level rules combine the influences due to different ground instances of the same statement, and the upper level rules combine the influences due to different statements. We present algorithms and empirical results for parameter learning in the presence of such combining rules. Specifically, we derive and implement algorithms based on gradient descent and expectation maximization for different combining rules and evaluate them on synthetic data and on a real-world task. The results demonstrate that the algorithms are able to learn both the conditional probability distributions of the influence statements and the parameters of the combining rules.
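The Noisy-OR combining rule mentioned above has a simple closed form: the effect fails to occur only if every active cause independently fails to trigger it. A minimal sketch (parameter values are illustrative):

```python
def noisy_or(probs, active):
    """P(effect = 1) when each active cause i independently triggers the
    effect with probability probs[i]; inactive causes contribute nothing."""
    p_fail = 1.0
    for p, on in zip(probs, active):
        if on:
            p_fail *= 1.0 - p  # cause i fails to trigger the effect
    return 1.0 - p_fail

# Two active causes with trigger probabilities 0.8 and 0.5.
p = noisy_or([0.8, 0.5], [True, True])  # 1 - (0.2 * 0.5) = 0.9
```

Because the rule decomposes over causes, its parameters can be learned one per influence statement, which is what makes the gradient-descent and EM algorithms in the paper tractable.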

18.
This article presents an approach for regional categorization in complex natural scenes with undirected graphs. A novel MRF-like model is proposed with spatial constraints in the feature space based on existing directed graphs, and a pseudo-likelihood approximation is introduced for probability inference and parameter estimation. With this approximation, we can deal with the intractability of the potential functions and obtain spatial relations between patches of different classes, providing more information in their co-occurrence matrix. The Receiver-Operating-Characteristic curves in our experiments demonstrate better performance of our proposed method in comparison with directed probabilistic models such as LDA and the constellation model.

19.
The conditional likelihood approach is a sensible choice for a hierarchical logistic regression model or other generalized regression models with binary data. However, its heavy computational burden limits its use, especially for the related mixed-effects model. A modified profile likelihood is used as an accurate approximation to the conditional likelihood, and two methods are then proposed for inference in hierarchical generalized regression models with mixed effects. One is based on a hierarchical likelihood and a Laplace approximation, and the other on a Markov chain Monte Carlo EM algorithm. The methods are applied to a meta-analysis model for trend estimation and to the model for multi-arm trials. A simulation study is conducted to illustrate the performance of the proposed methods.
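The Laplace-approximation idea behind the first method can be illustrated on a one-dimensional toy integral, the Beta normalising constant (our example, not the paper's model): expand the negative log of the integrand to second order around its mode and integrate the resulting Gaussian.

```python
import math

def laplace_log_beta(a, b):
    """Laplace approximation to log B(a, b) = log ∫ θ^(a-1) (1-θ)^(b-1) dθ,
    valid for a, b > 1 so the integrand has an interior mode."""
    mode = (a - 1) / (a + b - 2)  # mode of the integrand
    log_f = (a - 1) * math.log(mode) + (b - 1) * math.log1p(-mode)
    # Curvature of -log f at the mode gives the Gaussian's precision.
    curv = (a - 1) / mode**2 + (b - 1) / (1 - mode)**2
    return log_f + 0.5 * math.log(2 * math.pi / curv)

def exact_log_beta(a, b):
    """Exact value via the Gamma function: B(a, b) = Γ(a)Γ(b)/Γ(a+b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
```

For a = b = 20 the approximation is within about 2% of the exact value; in the hierarchical models of the paper the same expansion is applied to the random-effects integral, dimension by dimension.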

20.
Extreme value methods are widely used in financial applications such as risk analysis, forecasting and pricing models. One of the challenges with their application in finance is accounting for the temporal dependence between the observations, for example the stylised fact that financial time series exhibit volatility clustering. Various approaches have been proposed to capture the dependence. Commonly a two-stage approach is taken, where the volatility dependence is removed using a volatility model like a GARCH (or one of its many incarnations) followed by application of standard extreme value models to the assumed independent residual innovations. This study examines an alternative one-stage approach, which makes parameter estimation and accounting for the associated uncertainties more straightforward than the two-stage approach. The location and scale parameters of the extreme value distribution are defined to follow a conditional autoregressive heteroscedasticity process. Essentially, the model implements GARCH volatility via the extreme value model parameters. Bayesian inference is used and implemented via Markov chain Monte Carlo, to permit all sources of uncertainty to be accounted for. The model is applied to both simulated and empirical data to demonstrate performance in extrapolating the extreme quantiles and quantifying the associated uncertainty.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号