Similar Documents
20 similar documents found (search time: 15 ms)
1.
We give a precise picture of the computational complexity of causal relationships in Pearl's structural models, where we focus on causality between variables, event causality, and probabilistic causality. As for causality between variables, we consider the notions of causal irrelevance, cause, cause in a context, direct cause, and indirect cause. As for event causality, we analyze the complexity of the notions of necessary and possible cause, and of the sophisticated notions of weak and actual cause by Halpern and Pearl. In the course of this, we also prove an open conjecture by Halpern and Pearl, and establish other semantic results. We then analyze the complexity of the probabilistic notions of probabilistic causal irrelevance, likely causes of events, and occurrences of events despite other events. Moreover, we consider decision and optimization problems involving counterfactual formulas. To our knowledge, no complexity aspects of causal relationships in the structural-model approach have been considered so far, and our results shed light on this issue.

2.
Bayesian networks provide the means for representing probabilistic conditional independence. Conditional independence is widely considered beyond the theory of probability as well, with linkages to, e.g., database multi-valued dependencies, and at the higher abstraction level of semi-graphoid models. The rough set framework for data analysis is related to the topic of conditional independence via the notion of a decision reduct, considered within the wider domain of feature selection. Given the probabilistic version of decision reducts, equivalent to data-based Markov boundaries, studies have also been conducted for other criteria of rough-set-based feature selection, e.g., those corresponding to multi-valued dependencies. In this paper, we investigate degrees of approximate conditional dependence, a topic corresponding to well-known notions such as conditional mutual information and polymatroid functions, yet with many practically useful approximate conditional independence models that are unmanageable within the information-theoretic framework. The paper's major contribution lies in extending the means for understanding degrees of approximate conditional dependence, with appropriately generalized semi-graphoid properties formulated and the mathematical soundness of the Bayesian-network-like representation of approximate conditional independence statements thoroughly proved. As an additional contribution, we provide a case study of an approximate conditional independence model that would not be manageable without the above-mentioned extensions.

3.
In this paper we analyze two recent conditional interpretations of defaults, one based on probabilities, and the other on models. We study what makes them equivalent, explore their limitations, and develop suitable extensions. The resulting framework ties together a number of important notions in default reasoning, such as high probabilities and model preference, default priorities and argument systems, and independence assumptions and minimality considerations. I want to thank D. Nute and J. Pearl for comments on an earlier version of this paper.

4.
Building on causal conditional logic, we further improve the causal-theory semantics and present an extended action theory that uniformly expresses action execution and causal implication. This allows the independence and non-independence of interacting actions to be treated as a whole within the logical language itself, making the handling of concurrent actions, and of independence and non-independence relations between actions, more natural and more formal.

5.

A causal rule between two variables, X → Y, captures the relationship that the presence of X causes the appearance of Y. Because of its usefulness (compared to association rules), techniques for mining causal rules are beginning to be developed. However, existing methods (such as the LCD and CU-path algorithms) are limited to mining causal rules among simple variables, and are inadequate for discovering and representing causal rules among multi-valued variables. In this paper, we propose that the causality between variables X and Y be represented in the form X → Y with a conditional probability matrix M_Y|X. We also propose a new approach to discover causality in large databases based on partitioning. The approach partitions the items into item variables by decomposing "bad" item variables and composing "not-good" item variables. In particular, we establish a method to optimize causal rules that merges the "useless" information in the conditional probability matrices of extracted causal rules.
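To make the matrix representation concrete, here is a minimal sketch, not the paper's own algorithm, of estimating a conditional probability matrix M_Y|X from paired observations of two discrete variables (the function name and shape conventions are illustrative):

```python
import numpy as np

def conditional_probability_matrix(x_samples, y_samples, nx, ny):
    """Estimate M_Y|X, where entry [i, j] = P(Y = j | X = i),
    from paired observations of two discrete variables taking
    values in range(nx) and range(ny)."""
    counts = np.zeros((nx, ny))
    for i, j in zip(x_samples, y_samples):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # avoid dividing by zero for unobserved values of X
    return counts / np.where(row_sums == 0, 1, row_sums)
```

Each row i is the full distribution of Y given X = i, which is what lets a single rule X → Y carry causal information about multi-valued variables rather than a single probability.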

6.
7.
Is the common cause principle merely one of a set of useful heuristics for discovering causal relations, or is it rather a piece of heavy-duty metaphysics, capable of grounding the direction of causation itself? Since the principle was introduced in Reichenbach's groundbreaking work The Direction of Time (1956), there have been a series of attempts to pursue the latter program—to take the probabilistic relationships constitutive of the principle of the common cause and use them to ground the direction of causation. These attempts have not all explicitly appealed to the principle as originally formulated; it has also appeared in the guise of independence conditions, counterfactual overdetermination, and, in the causal modelling literature, as the causal Markov condition. In this paper, I identify a set of difficulties for grounding the asymmetry of causation on the principle and its descendants. The first difficulty, concerning what I call the vertical placement of causation, consists of a tension between considerations that drive towards the macroscopic scale, and considerations that drive towards the microscopic scale—the worry is that these considerations cannot both be comfortably accommodated. The second difficulty consists of a novel potential counterexample to the principle based on the familiar Einstein Podolsky Rosen (EPR) correlations in quantum mechanics.

8.
A Bayesian network is a probabilistic representation for uncertain relationships, which has proven to be useful for modeling real-world problems. When there are many potential causes of a given effect, however, both probability assessment and inference using a Bayesian network can be difficult. In this paper, we describe causal independence, a collection of conditional independence assertions and functional relationships that are often appropriate to apply to the representation of the uncertain interactions between causes and effect. We show how the use of causal independence in a Bayesian network can greatly simplify probability assessment as well as probabilistic inference.
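The canonical example of a causal independence model is the noisy-OR gate. The sketch below is illustrative, not the paper's general formulation; it shows why assessment shrinks from a table with 2^n entries to n per-cause link probabilities:

```python
def noisy_or(cause_states, link_probs, leak=0.0):
    """P(effect present | cause states) under a noisy-OR model.

    cause_states -- 0/1 indicators for each potential cause
    link_probs   -- p_i, the probability that cause i alone produces
                    the effect when it is present
    leak         -- probability the effect appears with no cause active
    """
    p_absent = 1.0 - leak
    for active, p in zip(cause_states, link_probs):
        if active:
            p_absent *= 1.0 - p  # each active cause fails independently
    return 1.0 - p_absent
```

With two active causes of strength 0.8 and 0.5, the effect occurs with probability 1 - (1 - 0.8)(1 - 0.5) = 0.9; the full conditional table never has to be elicited.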

9.
A formal framework for the analysis of execution traces collected from distributed systems at run-time is presented. We introduce the notions of event and message traces to capture the consistency of causal dependencies between the elements of a trace. We formulate an approach to property testing where a partially ordered execution trace is modeled by a collection of communicating automata. We prove that the model exactly characterizes the causality relation between the events/messages in the observed trace and discuss the implementation of this approach in SDL, where ObjectGEODE is used to verify properties using model-checking techniques. Finally, we illustrate the approach with industrial case studies. Received May 2004, Revised February 2005, Accepted April 2005 by J. Derrick, M. Harman and R. M. Hierons

10.
Causal independence modelling is a well-known method for reducing the size of probability tables, simplifying the probabilistic inference and explaining the underlying mechanisms in Bayesian networks. Recently, a generalization of the widely-used noisy OR and noisy AND models, causal independence models based on symmetric Boolean functions, was proposed. In this paper, we study the problem of learning the parameters in these models, further referred to as symmetric causal independence models. We present a computationally efficient EM algorithm to learn parameters in symmetric causal independence models, where the computational scheme of the Poisson binomial distribution is used to compute the conditional probabilities in the E-step. We study computational complexity and convergence of the developed algorithm. The presented EM algorithm allows us to assess the practical usefulness of symmetric causal independence models. In the assessment, the models are applied to a classification task; they perform competitively with state-of-the-art classifiers.
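The Poisson binomial distribution used in the E-step is the distribution of the number of successes among independent, non-identically distributed Bernoulli trials. A minimal sketch of its standard dynamic-programming computation (illustrative; the paper's EM wiring is not reproduced here):

```python
def poisson_binomial_pmf(probs):
    """PMF of the number of successes in independent Bernoulli trials
    with success probabilities `probs`, via the standard O(n^2)
    dynamic-programming recursion."""
    pmf = [1.0]  # zero trials: zero successes with probability 1
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            nxt[k] += mass * (1.0 - p)  # this trial fails
            nxt[k + 1] += mass * p      # this trial succeeds
        pmf = nxt
    return pmf
```

Because a symmetric causal independence model depends only on how many hidden causes fire, this single pmf is exactly the quantity the conditional probabilities in the E-step are built from.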

11.
Constraint-based search methods, which are a major approach to learning Bayesian networks, are expected to be effective in causal discovery tasks. However, such methods often suffer from the impracticality of classical hypothesis testing for conditional independence when the sample size is insufficiently large. We present a new conditional independence (CI) testing method that is designed to be effective for small samples. Our method uses the minimum free energy principle, which originates from thermodynamics, with the "Data Temperature" assumption recently proposed by us. This CI method incorporates the maximum entropy principle and converges to classical hypothesis tests in asymptotic regions. In our experiments using repository datasets (Alarm/Insurance/Hailfinder/Barley/Mildew), the results show that our method improves the learning performance of the well-known PC algorithm in terms of edge-reversed errors in addition to extra/missing-edge errors.
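For context, the classical hypothesis test that such CI methods converge to asymptotically is the G-test of conditional independence on discrete data. A hedged sketch (the paper's minimum-free-energy correction is not shown; function name and sample format are illustrative):

```python
import math
from collections import Counter

def g_statistic_ci(samples):
    """Classical G statistic for testing X independent of Y given Z,
    from a list of (x, y, z) observations.  In practice it is compared
    against a chi-squared threshold with (|X|-1)(|Y|-1)|Z| degrees of
    freedom."""
    nxyz = Counter(samples)
    nxz = Counter((x, z) for x, _, z in samples)
    nyz = Counter((y, z) for _, y, z in samples)
    nz = Counter(z for _, _, z in samples)
    g = 0.0
    for (x, y, z), n in nxyz.items():
        g += 2.0 * n * math.log(n * nz[z] / (nxz[(x, z)] * nyz[(y, z)]))
    return g
```

The small-sample weakness the abstract targets is visible here: with few samples per (x, y, z) cell, the counts entering the logarithm are noisy and the chi-squared approximation degrades.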

12.
We show how Bayesian belief networks (BNs) can be used to model common temporal knowledge. Two approaches to their structuring are proposed. The first leads to BNs with nodes representing states of a process and times spent in such states, and with a graphical structure reflecting the conditional independence assumptions of a Markovian process. The second approach leads to BNs whose topology represents a conditional independence structure between event times. Once the required distributional specifications are stored within the nodes of a BN, it becomes a powerful inference machine capable, for example, of reasoning backwards in time. We discuss the computational difficulties associated with the propagation algorithms necessary to perform these inferences, and the reasons why we chose to adopt Monte Carlo-based propagation algorithms. Two improvements to existing Monte Carlo algorithms are proposed: an enhancement based on the principle of importance sampling, and a combined technique that exploits both forward and Markov sampling. Finally, we consider Petri nets, a very interesting and general representation of temporal knowledge. A combined approach is proposed, in which the user structures temporal knowledge in the Petri net formalism. The obtained Petri net is then automatically translated into an equivalent BN for probability propagation. Inferred conclusions may finally be explained with the aid of Petri nets again.

13.
Estimating conditional dependence between two random variables given the knowledge of a third random variable is essential in neuroscientific applications to understand the causal architecture of a distributed network. However, existing methods of assessing conditional dependence, such as the conditional mutual information, are computationally expensive, involve free parameters, and are difficult to understand in the context of realizations. In this letter, we discuss a novel approach to this problem and develop a computationally simple and parameter-free estimator. The difference between the proposed approach and the existing ones is that the former expresses conditional dependence in terms of a finite set of realizations, whereas the latter use random variables, which are not available in practice. We call this approach conditional association, since it is based on a generalization of the concept of association to arbitrary metric spaces. We also discuss a novel and computationally efficient approach of generating surrogate data for evaluating the significance of the acquired association value.

14.
Stochastic independence is an idealized relationship located at one end of a continuum of values measuring degrees of dependence. In modeling real-world systems, we are often not interested in the distinction between exact independence and any degree of dependence, but between weak, ignorable dependence and strong, substantial dependence. Good models capture significant deviance from independence and neglect approximate independence or dependence weaker than a noise threshold. This intuition is applied to learning the structure of Bayes nets from data. We determine the conditional posterior probabilities of structures given that the degree of dependence at each of their nodes exceeds a critical noise level. Deviance from independence is measured by mutual information. The arc probabilities are set equal to the probability that the mutual information provided by the neighbors of a node exceeds a given threshold. A χ2 approximation for the probability density function of mutual information is used. A large number of network structures, in which the arc probabilities are evaluated, is generated by stochastic simulation. Finally, the probabilities of whole graph structures are obtained by combining the individual arc probabilities with the number of possible construction sequences compatible with the partial ordering of the graph. While selecting models with large deviance from independence results in simple networks with few but important links, selecting models with small deviance results in highly connected networks also containing less important links.
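The measurement step can be sketched as follows (illustrative only; the thresholding machinery and the χ2 density are not reproduced): empirical mutual information from a contingency table, together with the standard fact that 2N·MI is asymptotically χ2-distributed under independence, which is what makes a noise threshold on dependence well-defined.

```python
import math

def mutual_information(table):
    """Empirical mutual information (in nats) of a 2-D contingency
    table of counts.  Under independence, 2 * N * MI is approximately
    chi-squared with (rows - 1) * (cols - 1) degrees of freedom."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    mi = 0.0
    for i, row in enumerate(table):
        for j, nij in enumerate(row):
            if nij:  # skip empty cells (0 * log 0 = 0 by convention)
                mi += (nij / n) * math.log(nij * n / (row_tot[i] * col_tot[j]))
    return mi
```

A perfectly independent table yields MI = 0, while a perfectly dependent 2x2 table yields log 2; an arc survives only if the χ2 tail probability of its MI exceeding the chosen noise level is high.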

15.
In this paper we discuss when and how to use deontic logic in multi-agent systems. Our central question is how to proceed once a norm has been violated or defeated, a key issue of deontic logic applications in multi-agent systems. To bridge the logical analysis of norms in philosophy with applications in agent theory, we propose a practical approach based on violation contexts and independence statements. In particular, we introduce a combination of two traditional deontic logics, which we extend with so-called deontic and factual independence assumptions. We show how different notions of violability and defeasibility can be encoded in the logic by defining different ways in which independence assumptions are derived from the explicit manner of presentation. We also show how our approach can be used to give a new analysis of several notorious paradoxes of deontic logic.

16.
Some propositions add more information to bodies of propositions than do others. We start with intuitive considerations on qualitative comparisons of information added. Central to these are considerations bearing on conjunctions and on negations. We find that we can discern two distinct, incompatible notions of information added. From the comparative notions we pass to quantitative measurement of information added. In this we borrow heavily from the literature on quantitative representations of qualitative, comparative conditional probability. We look at two ways to obtain a quantitative conception of information added. One, the most direct, mirrors Bernard Koopman's construction of conditional probability: by making a strong structural assumption, it leads to a measure that is, transparently, some function of a function P which is, formally, an assignment of conditional probability (in fact, a Popper function). P reverses the information-added order and mislocates the natural zero of the scale, so some transformation of this scale is needed; but the derivation of P falls out so readily that no particular transformation suggests itself. The Cox–Good–Aczél method assumes the existence of a quantitative measure matching the qualitative relation, and builds on the structural constraints to obtain a measure of information that can be rescaled as, formally, an assignment of conditional probability. A classical result of Cantor's, subsequently strengthened by Debreu, goes some way towards justifying the assumption of the existence of a quantitative scale. What the two approaches give us is a pointer towards a novel interpretation of probability as a rescaling of a measure of information added.

17.
In many applications, the use of Bayesian probability theory is problematic: the information needed to perform feasible calculations is unavailable. There are different methodologies for dealing with this problem, e.g., maximum entropy and Dempster-Shafer theory. If one can make independence assumptions, many of the problems disappear, and in fact this is often the method of choice even when it is obviously incorrect. The notion of independence is a 0–1 concept, which implies that human guesses about its validity will not lead to robust systems. In this paper, we propose a fuzzy formulation of this concept. It should lend itself to probabilistic updating formulas by allowing heuristic estimation of the "degree of independence." We show how this can be applied to compute a new notion of conditional probability (we call this "extended conditional probability"). Given information, one typically has the choice of full conditioning (standard dependence) or ignoring the information (standard independence). We list some desiderata for extending this to allow a degree of conditioning. We then show how our formulation of the degree of independence leads to a formula fulfilling these desiderata. After describing this formula, we show how it compares with other possible formulations of parameterized independence. In particular, we compare it to a linear interpolant, to a higher power of a linear interpolant, and to a notion originally presented by Hummel and Manevitz [Tenth Int. Joint Conf. on Artificial Intelligence, 1987]. Interestingly, it turns out that a transformation of the Hummel-Manevitz method and our "fuzzy" method are close approximations of each other. Two examples illustrate how fuzzy independence and extended conditional probability might be applied. The first shows how linguistic probabilities result from treating fuzzy independence as a linguistic variable. The second is an industrial example of troubleshooting on the shop floor.
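One of the baselines the paper compares against, the linear interpolant between full conditioning and full independence, can be sketched directly (the paper's own fuzzy formula differs; this is only the simplest parameterization of "degree of conditioning"):

```python
def extended_conditional(p_a, p_a_given_b, independence_degree):
    """Linear-interpolant form of a degree of conditioning:
    a degree of 1.0 (full independence) ignores the evidence B and
    returns P(A); a degree of 0.0 (full dependence) conditions fully
    and returns P(A|B); intermediate degrees blend the two."""
    lam = independence_degree
    return lam * p_a + (1.0 - lam) * p_a_given_b
```

At the two endpoints this recovers the standard choices of ignoring the information or conditioning on it; the paper's point is that a robust system should be able to live anywhere in between.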

18.
Detecting and characterizing causal interdependencies and couplings between different activated brain areas from functional neuroimage time series measurements of their activity constitutes a significant step toward understanding the process of brain functions. In this letter, we make the simple point that all current statistics used to make inferences about directed influences in functional neuroimage time series are variants of the same underlying quantity. This includes directed transfer entropy, transinformation, Kullback-Leibler formulations, conditional mutual information, and Granger causality. Crucially, in the case of autoregressive modeling, the underlying quantity is the likelihood ratio that compares models with and without directed influences from the past when modeling the influence of one time series on another. This framework is also used to derive the relation between these measures of directed influence and the complexity or the order of directed influence. These results provide a framework for unifying the Kullback-Leibler divergence, Granger causality, and the complexity of directed influence.
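In the autoregressive case, the shared quantity is a likelihood ratio comparing a model of one series with and without the past of the other. A minimal least-squares sketch (function name and lag handling are illustrative assumptions, not the letter's notation):

```python
import numpy as np

def granger_lr_stat(x, y, lag=1):
    """n * log(RSS_restricted / RSS_full): a likelihood-ratio style
    Granger statistic for whether the past of x improves an
    autoregressive prediction of y beyond y's own past."""
    n = len(y)
    target = y[lag:]

    def rss(columns):
        # ordinary least squares with an intercept
        X = np.column_stack([np.ones(len(target))] + columns)
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        return float(resid @ resid)

    y_past = [y[lag - k - 1:n - k - 1] for k in range(lag)]
    x_past = [x[lag - k - 1:n - k - 1] for k in range(lag)]
    return len(target) * np.log(rss(y_past) / rss(y_past + x_past))
```

The same statistic, under Gaussian residuals, is (up to constants) the Kullback-Leibler divergence between the two predictive models, which is the unification the letter makes.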

19.
Learning causal structure from observational data has important practical value. One class of causal structure learning methods rests on the functional causal model assumption and learns the structure by testing the independence between the noise term and the cause variables. However, such methods involve independence tests of high computational complexity, which hurts the practicality and robustness of the structure learning algorithm. We therefore propose a causal structure learning algorithm for linear non-Gaussian models that uses higher-order cumulants as the independence score. The algorithm has two steps. First, a constraint-based method using conditional independence learns the Markov equivalence class of the causal structure. Second, a score based on higher-order cumulants is defined that can judge the independence of two random variables, so the causal structure with the best independence score can be searched for within the Markov equivalence class and returned as the output. The advantages of the algorithm are: (a) compared with kernel-based independence tests, it has lower computational complexity; (b) the score-and-search approach yields the model that best matches the data-generating process, improving robustness. Experimental results show that the higher-order-cumulant method improves the F1 score by 5% on synthetic data and recovers more causal directions on real data.
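As an illustration of how a higher-order cumulant can score independence in the linear non-Gaussian setting (a sketch, not the paper's exact score; the function name is an assumption):

```python
import numpy as np

def cross_cumulant_31(x, y):
    """Fourth-order cross-cumulant cum(x, x, x, y) of centred samples:
    E[x^3 y] - 3 E[x^2] E[x y].  It vanishes when x and y are
    independent, but is generally non-zero for dependent non-Gaussian
    variables, so its magnitude can serve as an independence score
    that needs only sample moments, not kernel computations."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x**3 * y).mean() - 3.0 * (x**2).mean() * (x * y).mean())
```

Scoring each candidate direction in the Markov equivalence class by how close such cumulants of the regression residual and the putative cause are to zero is the kind of moment-based criterion the abstract describes, at far lower cost than a kernel independence test.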

20.
Existing cascade nonlinear additive noise models can infer causal direction in the presence of hidden intermediate variables, but for learning causal networks that contain latent variables and cascaded causal relations they suffer from global structure search and unidentifiable equivalence classes. We design a two-stage causal structure learning algorithm for non-temporal observational data. In the first stage, a basic causal network skeleton is built from the conditional independence relations among the observed variables. In the second stage, based on the cascade nonlinear additive noise model, the causal direction of each adjacent pair in the skeleton is inferred by comparing the marginal likelihood under the competing direction hypotheses. Experimental results show that the algorithm performs well on synthetic causal structure datasets across different numbers of latent variables, average in-degrees, structure dimensions, and sample sizes, and that on real causal structure datasets its F1 score improves by an average of 51% over mainstream causal structure learning algorithms, demonstrating higher accuracy and stronger robustness.
