Similar Documents
20 similar documents retrieved (search time: 31 ms).
1.
Some propositions add more information to bodies of propositions than do others. We start with intuitive considerations on qualitative comparisons of information added. Central to these are considerations bearing on conjunctions and on negations. We find that we can discern two distinct, incompatible, notions of information added. From the comparative notions we pass to quantitative measurement of information added. In this we borrow heavily from the literature on quantitative representations of qualitative, comparative conditional probability. We look at two ways to obtain a quantitative conception of information added. One, the most direct, mirrors Bernard Koopman’s construction of conditional probability: by making a strong structural assumption, it leads to a measure that is, transparently, some function of a function P which is, formally, an assignment of conditional probability (in fact, a Popper function). P reverses the information-added order and mislocates the natural zero of the scale, so some transformation of this scale is needed; but the derivation of P falls out so readily that no particular transformation suggests itself. The Cox–Good–Aczél method assumes the existence of a quantitative measure matching the qualitative relation, and builds on the structural constraints to obtain a measure of information that can be rescaled as, formally, an assignment of conditional probability. A classical result of Cantor’s, subsequently strengthened by Debreu, goes some way towards justifying the assumption of the existence of a quantitative scale. What the two approaches give us is a pointer towards a novel interpretation of probability as a rescaling of a measure of information added.
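One natural rescaling consistent with these remarks (offered purely as an illustration; the abstract deliberately leaves the choice of transformation open) is the negative logarithm of the derived conditional probability:

$$
\operatorname{inf}(A \mid B) \;=\; -\log P(A \mid B).
$$

This is decreasing in $P$, so it undoes the order reversal; it places the natural zero at $P(A \mid B) = 1$, where $A$ adds nothing to $B$; and it makes information added additive over conjunctions, since $-\log P(A \wedge C \mid B) = -\log P(C \mid B) - \log P(A \mid B \wedge C)$ whenever $P$ satisfies the product rule of Popper functions.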

2.
Conditional and composite temporal CSPs
Constraint Satisfaction Problems (CSPs) have been widely used to solve combinatorial problems. In order to deal with dynamic CSPs where the information regarding any possible change is known a priori and can thus be enumerated beforehand, conditional constraints and composite variables have been studied in the past decade. Indeed, these two concepts allow the addition of variables and their related constraints in a dynamic manner during the resolution process. More precisely, a conditional constraint restricts the participation of a variable in a feasible scenario, while a composite variable allows us to express a disjunction of variables of which only one will be added to the problem to solve. In order to deal with a wide variety of real-life applications under temporal constraints, we present in this paper a unique temporal CSP framework including numeric and symbolic temporal information, conditional constraints and composite variables. We call this model a Conditional and Composite Temporal CSP (or CCTCSP). To solve the CCTCSP we propose two methods respectively based on Stochastic Local Search (SLS) and constraint propagation. In order to assess the time efficiency of the proposed solving methods, experimental tests have been conducted on randomly generated CCTCSPs. The results demonstrate the superiority of a variant of the Maintaining Arc Consistency (MAC) technique (which we call MAC+) over the other constraint propagation strategies, Forward Checking (FC) and its variants, for both consistent and inconsistent problems. It has also been shown that, in the case of consistent problems, MAC+ outperforms the SLS method Min Conflict Random Walk (MCRW) for highly constrained CCTCSPs, while both methods have comparable time performance for under- and middle-constrained problems. MCRW is, however, the method of choice for highly constrained CCTCSPs if we decide to trade search time for the quality of the solution returned (number of solved constraints).
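As a concrete illustration of these two ingredients (a hypothetical toy, not the paper's CCTCSP solvers), the sketch below enumerates the scenarios induced by one conditional constraint and one composite variable, then solves each resulting static CSP by brute force:

```python
# Toy brute-force sketch (hypothetical names and constraints, not the paper's
# solvers): a conditional constraint decides when an optional variable becomes
# active, a composite variable stands for exactly one of several alternatives,
# and every resulting scenario is solved as an ordinary CSP.
from itertools import product

domains = {"A": [1, 2, 3], "B": [2, 3], "C": [1, 4]}

def active_vars(composite_choice, a_value):
    """Composite variable X is realised by exactly one of B or C; the
    conditional constraint 'A = 2 activates B' adds B when it fires."""
    active = {"A", composite_choice}
    if a_value == 2:
        active.add("B")
    return active

def satisfies(assignment):
    # Example compatibility constraint over whichever variables are active.
    return sum(assignment.values()) <= 6

solutions = []
for choice in ("B", "C"):                      # resolve the composite variable
    for a in domains["A"]:
        others = sorted(active_vars(choice, a) - {"A"})
        for values in product(*(domains[v] for v in others)):
            assignment = {"A": a, **dict(zip(others, values))}
            if satisfies(assignment):
                solutions.append(assignment)

print(len(solutions), solutions[0])
```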

3.
In this paper an important problem in the domain of term rewriting, the termination of (conditional) rewrite systems, is dealt with. We show that in many applications, well-founded orderings on terms which only make use of syntactic information of a rewrite system R do not suffice for proving termination of R. Indeed, sometimes semantic information is needed to orient a rewrite rule. Therefore we integrate a semantic interpretation of rewrite systems and terms into a well-founded ordering on terms: the notion of semantic ordering is the first main contribution of this paper. The use and usefulness of the semantic ordering in proving termination is illustrated by means of some realistic examples. Furthermore, the concept of semantic information induces a novel approach for proving termination in conditional rewrite systems. The idea is to employ not only semantic information contained in the terms that are to be compared, but also extra (semantic) information contained in the premiss of the conditional equation in which the terms appear. This leads to our second contribution in the termination problem area: the notions of contextual ordering and contextual semantic ordering. The contextual approach allows one to prove termination of conditional rewrite systems where all classical partial orderings would fail.

4.
Probabilistic Default Reasoning with Conditional Constraints
We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in system Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they show behavior similar to reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have similar properties as their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational complexity.

5.
李崇轩, 朱军, 张钹. 《软件学报》 (Journal of Software), 2020, 31(4): 1002-1008
Generative adversarial networks (GANs) can generate realistic images and have therefore been studied extensively in recent years. Notably, the graphical generative adversarial network (graphical-GAN) introduces Bayesian networks into the GAN framework to learn the hidden structure of data in an unsupervised manner. This paper proposes the conditional graphical-GAN, which can exploit coarse-grained supervision in a weakly supervised setting to learn finer and more complex structures. Inference and learning in the conditional graphical-GAN follow an approach similar to that of the graphical-GAN. Two instances of the conditional graphical-GAN are presented. The conditional Gaussian mixture GAN (cGMGAN) can learn fine-grained clusters from mixed data given coarse-grained labels. The conditional state space GAN (cSSGAN) can learn the dynamics of videos containing multiple objects given object labels.

6.
Cost-sensitive learning with conditional Markov networks
There has been a recent, growing interest in classification and link prediction in structured domains. Methods such as conditional random fields and relational Markov networks support flexible mechanisms for modeling correlations due to the link structure. In addition, in many structured domains, there is an interesting structure in the risk or cost function associated with different misclassifications. There is a rich tradition of cost-sensitive learning applied to unstructured (IID) data. Here we propose a general framework which can capture correlations in the link structure and handle structured cost functions. We present two new cost-sensitive structured classifiers based on maximum entropy principles. The first determines the cost-sensitive classification by minimizing the expected cost of misclassification. The second directly determines the cost-sensitive classification without going through a probability estimation step. We contrast these approaches with an approach which employs a standard 0/1-loss structured classifier to estimate class conditional probabilities followed by minimization of the expected cost of misclassification and with a cost-sensitive IID classifier that does not utilize the correlations present in the link structure. We demonstrate the utility of our cost-sensitive structured classifiers with experiments on both synthetic and real-world data.
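To make the baseline "estimate class-conditional probabilities, then minimize expected cost" concrete, here is a minimal numeric sketch (the cost matrix and probabilities are invented for illustration and are not the paper's data or code):

```python
# Minimal sketch of cost-sensitive decision making: pick the prediction that
# minimizes expected misclassification cost under estimated class probabilities.
import numpy as np

# cost[i, j] = cost of predicting class i when the true class is j
cost = np.array([[0.0, 10.0],
                 [1.0,  0.0]])

def cost_sensitive_decision(class_probs, cost):
    """Return the class minimizing expected misclassification cost."""
    expected_cost = cost @ class_probs          # one expected cost per possible prediction
    return int(np.argmin(expected_cost)), expected_cost

probs = np.array([0.8, 0.2])                    # P(true class | x), e.g. from a probabilistic classifier
pred, exp_cost = cost_sensitive_decision(probs, cost)
print(pred, exp_cost)                           # predicts class 1 despite class 0 being more probable
```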

7.
Conditional models for contextual human motion recognition
We describe algorithms for recognizing human motion in monocular video sequences, based on discriminative conditional random fields (CRFs) and maximum entropy Markov models (MEMMs). Existing approaches to this problem typically use generative structures like the hidden Markov model (HMM). Therefore, they have to make simplifying, often unrealistic assumptions on the conditional independence of observations given the motion class labels and cannot accommodate rich overlapping features of the observation or long-term contextual dependencies among observations at multiple timesteps. This makes them prone to myopic failures in recognizing many human motions, because even the transition between simple human activities naturally has temporal segments of ambiguity and overlap. The correct interpretation of these sequences requires more holistic, contextual decisions, where the estimate of an activity at a particular timestep could be constrained by longer windows of observations, prior and even posterior to that timestep. This would not be computationally feasible with an HMM, which requires the enumeration of a number of observation sequences exponential in the size of the context window. In this work we follow a different philosophy: instead of restrictively modeling the complex image generation process (the observation), we work with models that can take it directly as an input, and hence condition on it. Conditional models like the proposed CRFs seamlessly represent contextual dependencies and have computationally attractive properties: they support efficient, exact recognition using dynamic programming, and their parameters can be learned using convex optimization. We introduce conditional graphical models as complementary tools for human motion recognition and present an extensive set of experiments that show not only how these can successfully classify diverse human activities like walking, jumping, running, picking or dancing, but also how they can discriminate among subtle motion styles like normal walks and wander walks.

8.
In nonmonotonic reasoning, a default conditional α ⇒ β has most often been informally interpreted as a defeasible version of a classical conditional, usually the material conditional. There is however an alternative interpretation, in which a default is regarded essentially as a rule, leading from premises to conclusion. In this paper, we present a family of logics based on this alternative interpretation. A general semantic framework under this rule-based interpretation is developed, and associated proof theories for a family of weak conditional logics are specified. Nonmonotonic inference is easily defined in these logics. Interestingly, the logics presented here are weaker than the commonly-accepted base conditional approach for defeasible reasoning. However, this approach resolves problems that have been associated with previous approaches.

9.
In this paper we examine how adaptable learned graphical models are and how they can be used for classification of 3D laser point clouds with different distributions and densities. In order to model the contextual information we use a pairwise conditional random field and an adaptive graph down-sampling method based on voxel grids. As features we apply the rotation-invariant histogram-of-oriented-residuals operator to describe the local point cloud distribution. We validate the approach with data collected from different laser range finders with varying point cloud distribution and density. Our experiments indicate that conditional random field models learned from one dataset can be applied to another dataset without a significant loss of precision.
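A minimal sketch of the voxel-grid down-sampling step mentioned above (the voxel size, the synthetic cloud, and the function name are assumptions; this is not the authors' implementation):

```python
# Voxel-grid down-sampling sketch: points falling into the same voxel are
# collapsed to their centroid, which is the kind of adaptive graph reduction
# the abstract refers to.
import numpy as np

def voxel_downsample(points, voxel_size):
    """points: (N, 3) array; returns one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    centroids = np.stack([np.bincount(inverse, weights=points[:, d]) / counts
                          for d in range(3)], axis=1)
    return centroids

cloud = np.random.rand(10000, 3) * 10.0       # synthetic 10 m x 10 m x 10 m scan
print(voxel_downsample(cloud, voxel_size=0.5).shape)
```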

10.
Gaming Prediction Markets: Equilibrium Strategies with a Market Maker
We study the equilibrium behavior of informed traders interacting with market scoring rule (MSR) market makers. One attractive feature of MSR is that it is myopically incentive compatible: it is optimal for traders to report their true beliefs about the likelihood of an event outcome provided that they ignore the impact of their reports on the profit they might garner from future trades. In this paper, we analyze non-myopic strategies and examine what information structures lead to truthful betting by traders. Specifically, we analyze the behavior of risk-neutral traders with incomplete information playing in a dynamic game. We consider finite-stage and infinite-stage game models. For each model, we study the logarithmic market scoring rule (LMSR) with two different information structures: conditionally independent signals and (unconditionally) independent signals. In the finite-stage model, when signals of traders are independent conditional on the state of the world, truthful betting is a Perfect Bayesian Equilibrium (PBE). Moreover, it is the unique Weak Perfect Bayesian Equilibrium (WPBE) of the game. In contrast, when signals of traders are unconditionally independent, truthful betting is not a WPBE. In the infinite-stage model with unconditionally independent signals, there does not exist an equilibrium in which all information is revealed in a finite amount of time. We propose a simple discounted market scoring rule that reduces the opportunity for bluffing strategies. We show that in any WPBE for the infinite-stage market with discounting, the market price converges to the fully-revealing price, and the rate of convergence can be bounded in terms of the discounting parameter. When signals are conditionally independent, truthful betting is the unique WPBE for the infinite-stage market with and without discounting.
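For reference, the logarithmic market scoring rule analyzed above has a simple closed form. The sketch below uses Hanson's standard cost-function formulation; the liquidity parameter b and the trade are illustrative values, not figures from the paper:

```python
# Logarithmic market scoring rule (LMSR): cost function, instantaneous prices,
# and the payment charged for a trade (difference of costs).
import math

def lmsr_cost(q, b):
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b):
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

b = 100.0
q = [0.0, 0.0]                       # outstanding shares for two outcomes
print(lmsr_prices(q, b))             # [0.5, 0.5] before any trades

q_new = [30.0, 0.0]                  # a trader buys 30 shares of outcome 0
payment = lmsr_cost(q_new, b) - lmsr_cost(q, b)
print(round(payment, 2), [round(p, 3) for p in lmsr_prices(q_new, b)])
```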

11.
An information-theoretic framework for flow visualization
The process of visualization can be seen as a visual communication channel where the input to the channel is the raw data, and the output is the result of a visualization algorithm. From this point of view, we can evaluate the effectiveness of visualization by measuring how much information in the original data is being communicated through the visual communication channel. In this paper, we present an information-theoretic framework for flow visualization with a special focus on streamline generation. In our framework, a vector field is modeled as a distribution of directions from which Shannon's entropy is used to measure the information content in the field. The effectiveness of the streamlines displayed in visualization can be measured by first constructing a new distribution of vectors derived from the existing streamlines, and then comparing this distribution with that of the original data set using the conditional entropy. The conditional entropy between these two distributions indicates how much information in the original data remains hidden after the selected streamlines are displayed. The quality of the visualization can be improved by progressively introducing new streamlines until the conditional entropy converges to a small value. We describe the key components of our framework with detailed analysis, and show that the framework can effectively visualize 2D and 3D flow data.
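The entropy bookkeeping described above can be sketched in a few lines (the binning scheme, the synthetic rotational field, and the noisy field standing in for a streamline-based reconstruction are all assumptions made for illustration):

```python
# Shannon entropy of a 2D vector field's direction distribution, and the
# conditional entropy H(original | reconstructed) used as a measure of how much
# information the chosen streamlines leave unexplained.
import numpy as np

def direction_bins(u, v, n_bins=16):
    angles = np.arctan2(v, u).ravel()                 # directions in [-pi, pi]
    return np.digitize(angles, np.linspace(-np.pi, np.pi, n_bins + 1)[1:-1])

def entropy(labels):
    p = np.bincount(labels) / labels.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def conditional_entropy(x_labels, y_labels):
    # H(X | Y) = H(X, Y) - H(Y), with the joint encoded as a single label
    joint = x_labels * (y_labels.max() + 1) + y_labels
    return entropy(joint) - entropy(y_labels)

# Original field vs. a field reconstructed from a few streamlines (synthetic here)
yy, xx = np.mgrid[0:64, 0:64]
u, v = -yy + 32.0, xx - 32.0                          # a simple rotational field
u_rec = u + np.random.normal(0, 5, u.shape)
v_rec = v + np.random.normal(0, 5, v.shape)

x = direction_bins(u, v)
y = direction_bins(u_rec, v_rec)
print(entropy(x), conditional_entropy(x, y))          # remaining "hidden" information
```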

12.
In domains like bioinformatics, information retrieval and social network analysis, one can find learning tasks where the goal consists of inferring a ranking of objects, conditioned on a particular target object. We present a general kernel framework for learning conditional rankings from various types of relational data, where rankings can be conditioned on unseen data objects. We propose efficient algorithms for conditional ranking by optimizing squared regression and ranking loss functions. We show theoretically that learning with the ranking loss is likely to generalize better than with the regression loss. Further, we prove that symmetry or reciprocity properties of relations can be efficiently enforced in the learned models. Experiments on synthetic and real-world data illustrate that the proposed methods deliver state-of-the-art performance in terms of predictive power and computational efficiency. Moreover, we also show empirically that incorporating symmetry or reciprocity properties can improve the generalization performance.
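The contrast between the two objectives can be made concrete on a toy conditional-ranking example (synthetic numbers; the pairwise squared ranking loss below is one common formulation and is only assumed to reflect the framework's ranking loss):

```python
# Pointwise squared regression loss on relation values versus a pairwise
# ranking loss on score differences, evaluated for one conditioning object.
import numpy as np

def regression_loss(scores, targets):
    return np.mean((scores - targets) ** 2)

def pairwise_ranking_loss(scores, targets):
    # squared error on all pairwise score differences within one query
    ds = scores[:, None] - scores[None, :]
    dt = targets[:, None] - targets[None, :]
    n = scores.size
    return np.sum((ds - dt) ** 2) / (n * (n - 1))

targets = np.array([3.0, 2.0, 1.0, 0.0])       # desired ranking for one query object
scores  = np.array([2.5, 2.6, 0.5, 0.1])       # model scores (one swapped pair)
print(regression_loss(scores, targets), pairwise_ranking_loss(scores, targets))
```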

13.
I. R. Goodman, H. T. Nguyen, and others have proposed the theory of random sets as a unifying paradigm for knowledge-based systems. The more types of ambiguous evidence that can be brought under the random-set umbrella, the more potentially useful the theory becomes as a systematic methodology for comparing and fusing seemingly incongruent kinds of information. Conditional event algebra provides a general framework for dealing with rule-based evidence in a manner consistent with probability theory. This and a previous companion article bring conditional event logic—and through it, rules and iterated rules—under the random set umbrella. In this article, we derive a new conditional event algebra, denoted GNW2, for rules which are contingent on other rules. We show that this logic is the only one which extends the GNW first-degree logic, admits a simple embedding into random sets, and has GNW-like behavior. We show that GNW events are contained in a Boolean subalgebra GNW* of GNW2. Finally, we show how GNW2 can be used to recursively define a GNW-like Boolean logic for iterated rules of any degree. © 1996 John Wiley & Sons, Inc.

14.
We present a technique for synthesizing systolic arrays which have non-uniform data flow governed by control signals. The starting point for the synthesis is an Affine Recurrence Equation—a generalization of the simple recurrences encountered in mathematics. A large class of programs, including most (single and multiple) nested-loop programs, can be described by such recurrences. In this paper we extend our earlier work (Rajopadhye and Fujimoto 1986) in two principal directions. Firstly, we characterize a class of transformations called data pipelining and show that they yield recurrences that have linear conditional expressions governing the computation. Secondly, we discuss the synthesis of systolic arrays that have non-uniform data flow governed by control signals. We show how to derive the control signals in such arrays by applying similar pipelining transformations to these linear conditional expressions. The approach is illustrated by deriving the Guibas-Kung-Thompson architecture for computing the cost of optimal string parenthesization. Supported by a University of Utah Graduate Research Fellowship and NSF grant No. MIP-8802454.

15.
In general, a program is composed of smaller program segments using composition, conditional constructs or loop constructs. We present a theory which enables us to algebraically define and compute the composition of conditional expressions. The conditional expressions are represented using tabular notation. The formal definition of the composition allows us to compute the closed-form representation of the composition of tabular expressions. The presented approach is based on a many-sorted algebra containing an information-preserving composition. This formal definition of composition is then “lifted” to an extended algebra containing tabular expressions. The presented theory provides very compact algorithms and proofs.

16.
Information entropy models for privacy protection and their measurement methods
Quantifying privacy is an important foundation for privacy-preserving techniques, and information entropy, as a means of quantifying information, can naturally be used to address privacy measurement. Based on the communication framework of Shannon's information theory, several information entropy models for privacy protection are proposed to address measurement problems in privacy-preserving systems, including: a basic information entropy model for privacy protection, a privacy-protection entropy model with adversarial attacks, an entropy model incorporating subjective perception, and a privacy-protection entropy model with multiple privacy sources. In these models, the information owner is treated as the sender, the privacy seeker as the receiver, and the privacy disclosure channel as the communication channel. Under these assumptions, information entropy, average mutual information, conditional entropy, and conditional mutual information are introduced to describe, respectively, the privacy measure of the information source, the privacy disclosure measure, and the privacy and disclosure measures in the presence of background knowledge. On this basis, quantitative evaluations of the strength of privacy-preserving methods and of the adversary's attack capability are further proposed, providing support for quantitative risk assessment of privacy disclosure. Finally, for the application scenario of location privacy protection, concrete information entropy models and the measurement and analysis of privacy-preserving mechanisms and attack capability are given. The proposed models and privacy quantification methods provide a feasible theoretical basis for privacy-preserving techniques and for the analysis and assessment of privacy disclosure risks.
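A toy numerical sketch of the quantities these models are built on (the joint distribution is invented for illustration and is not from the paper): the source's privacy is H(X), the leakage through the disclosure channel is the mutual information I(X; Y), and H(X | Y) is what the adversary still lacks after observing Y.

```python
# Entropy-based privacy bookkeeping on a small joint distribution.
import numpy as np

# joint[i, j] = P(secret attribute X = i, observed/disclosed value Y = j)
joint = np.array([[0.30, 0.10],
                  [0.05, 0.55]])

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_x = joint.sum(axis=1)
p_y = joint.sum(axis=0)
H_X = H(p_x)
H_XY = H(joint.ravel())
H_X_given_Y = H_XY - H(p_y)          # chain rule: H(X, Y) = H(Y) + H(X | Y)
leakage = H_X - H_X_given_Y          # mutual information I(X; Y)

print(round(H_X, 3), round(H_X_given_Y, 3), round(leakage, 3))
```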

17.
A covering algorithm is used to process the data and obtain a partition of the universe U. A covering-based conditional information entropy is defined, from which a new measure of attribute significance is derived, and it is proved that, for consistent decision tables, this measure is equivalent to the significance defined algebraically. A reduction algorithm is designed using the new attribute significance as heuristic information, and an algorithm for computing the new conditional information entropy is given. Experimental results show that the reduction algorithm can quickly find an optimal or near-optimal reduct.
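The kind of conditional-entropy-based significance measure described above can be illustrated on a toy decision table (the table and all names below are hypothetical; this is a rough-set-style sketch, not the paper's covering-based algorithm):

```python
# Attribute significance from conditional entropy on a toy decision table:
# sig(a) = H(D | C \ {a}) - H(D | C), where the blocks of the partition of U
# are the rows agreeing on the chosen condition attributes.
from collections import Counter, defaultdict
from math import log2

# rows: (condition attribute values, decision)
table = [((0, 1), "yes"), ((0, 1), "yes"), ((1, 0), "no"),
         ((1, 1), "no"), ((1, 1), "yes"), ((0, 0), "no")]

def conditional_entropy(attr_indices):
    """H(D | attributes): weighted entropy of the decision within each block."""
    blocks = defaultdict(list)
    for cond, dec in table:
        blocks[tuple(cond[i] for i in attr_indices)].append(dec)
    n = len(table)
    h = 0.0
    for decs in blocks.values():
        w = len(decs) / n
        counts = Counter(decs)
        h += w * -sum((c / len(decs)) * log2(c / len(decs)) for c in counts.values())
    return h

full = [0, 1]
for a in full:
    rest = [i for i in full if i != a]
    print(f"sig(attr {a}) =", round(conditional_entropy(rest) - conditional_entropy(full), 3))
```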

18.
Many information systems are used in a problem solving context. Examples are travel planning systems, catalogs in electronic commerce, or agenda planning systems. They can be made more useful by integrating problem-solving capabilities into the information systems. This poses the challenge of scalability: when hundreds of users access a server at the same time, it is important to avoid excessive computational load. We present the concept of SmartClients: lightweight problem-solving agents based on constraint satisfaction which can carry out the computation- and communication-intensive tasks on the user's computer. We present an example of an air travel planning system based on this technology.

19.
We propose the first join tree (JT) propagation architecture that labels the probability information passed between JT nodes in terms of conditional probability tables (CPTs) rather than potentials. By modeling the task of inference involving evidence, we can generate three work schedules that are more time-efficient for LAZY propagation. Our experimental results, involving five real-world or benchmark Bayesian networks (BNs), demonstrate a reasonable improvement over LAZY propagation. Our architecture also models inference not involving evidence. After the CPTs identified by our architecture have been physically constructed, we show that each JT node has a sound, local BN that preserves all conditional independencies of the original BN. Exploiting inference not involving evidence is used to develop an automated procedure for building multiply sectioned BNs. It also allows direct computation techniques to answer localized queries in local BNs, for which the empirical results on a real-world medical BN are promising. Screen shots of our implemented system demonstrate the improvements in semantic knowledge.

20.
View update is the problem of translating update requests against a view into update requests against the base data. In this article, we present a novel approach to generating alternative translations of view updates in relational databases. Using conditional tables to represent relational views, we transform a view update problem into constraint satisfaction problems (CSPs). Solutions to the CSPs correspond to possible translations of the view update. Semantic information to resolve ambiguity can be incrementally added as constraints in the CSPs. The connection between view update and constraint satisfaction makes it possible to apply the rich results of the CSP research to analyze and solve the important problem of view management.
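As an illustration of the translation idea (the schema, the integrity constraint, and the brute-force enumeration below are invented; the article itself works with conditional tables rather than naive search):

```python
# A requested view update becomes constraints on an unknown base tuple; each
# satisfying assignment of the resulting CSP is one candidate translation.
from itertools import product

# Base relation EMP(name, dept); view V = set of departments with an employee.
base = {("ann", "sales"), ("bob", "hr")}
view = {dept for _, dept in base}

# View update request: insert "it" into V by inserting one base tuple.
names = ["ann", "bob", "carl", "dana"]        # CSP variable: name
depts = ["sales", "hr", "it"]                 # CSP variable: dept

def name_is_key(relation):                    # integrity constraint on the base
    return len({n for n, _ in relation}) == len(relation)

translations = []
for name, dept in product(names, depts):
    new_base = base | {(name, dept)}
    satisfies = (
        dept == "it"                                    # realises the requested insert
        and name_is_key(new_base)                       # base integrity preserved
        and {d for _, d in new_base} == view | {"it"}   # no unintended view changes
    )
    if satisfies:
        translations.append((name, dept))

print(translations)   # [('carl', 'it'), ('dana', 'it')] -- alternative translations
```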
