Similar literature (20 records)
1.
An intelligent agent receives many pieces of knowledge from its environment, and merging them into a single, consistent body of knowledge is an important problem. Inspired by the "contraction + expansion" scheme of belief revision, we solve this problem in two steps: first, the received pieces of information are weakened until they are mutually consistent; second, a simple merging operation is applied. This paper focuses on the first step, a model for handling contradictory knowledge based on group belief negotiation. We discuss the model's axiom system and its procedural realization, and an example demonstrates how an information merging operation is carried out under this model.

2.
A credibility-based iterated belief revision method
Belief revision is concerned with how an existing knowledge base should be modified when new information is received. Classical iterated belief revision focuses mainly on the consistency of the revision; it considers neither the unreliability of information in multi-agent systems nor the influence of the revision process itself on the result. The credibility-based iterated belief revision method estimates the credibility of incoming information by means of evidence theory and belief functions, and on that basis selects the optimal maximal consistent subset as the result of the revision. The resulting iterated revision operator is history-dependent: the outcome depends not only on the current belief set and the newly received information, but also on the information the belief set has received in the past.

3.
Numerous belief revision and update semantics have been proposed in the literature in the past few years, but until recently, no work in the belief revision literature has focussed on the problem of implementing these semantics, and little attention has been paid to algorithmic questions. In this paper, we present and analyze our update algorithms built in Immortal, a model-based belief revision system. These algorithms can work for a variety of model-based belief revision semantics proposed to date. We also extend previously proposed semantics to handle updates involving the equality predicate and function symbols and incorporate these extensions in our algorithms. As an example, we discuss the use of belief revision semantics to model the action-augmented envisioning problem in qualitative simulation, and we show the experimental results of running an example simulation in Immortal.

4.
There is now extensive interest in reasoning about moving objects. A probabilistic spatio-temporal (PST) knowledge base (KB) contains atomic statements of the form “Object o is/was/will be in region r at time t with probability in the interval [ℓ,u]”. In this paper, we study mechanisms for belief revision in PST KBs. We propose multiple methods for revising PST KBs. These methods involve finding maximally consistent subsets and maximal cardinality consistent subsets. In addition, there may be applications where the user has doubts about the accuracy of the spatial information, or the temporal aspects, or about the ability to recognize objects in such statements. We study belief revision mechanisms that allow changes to the KB in each of these three components. Finally, there may be doubts about the assignment of probabilities in the KB. Allowing changes to the probability of statements in the KB yields another belief revision mechanism. Each of these belief revision methods may be epistemically desirable for some applications, but not for others. We show that some of these approaches cannot satisfy AGM-style axioms for belief revision under certain conditions. We also perform a detailed complexity analysis of each of these approaches. Simply put, all belief revision methods proposed that satisfy AGM-style axioms turn out to be intractable with the exception of the method that revises beliefs by changing the probabilities (minimally) in the KB. We also propose two hybrids of these basic approaches to revision and analyze the complexity of these hybrid methods.
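The methods above hinge on maximally consistent and maximum-cardinality consistent subsets of the KB. As a rough, hedged illustration of the difference between the two notions (not the paper's algorithm, and with a made-up toy statement format and consistency test), a brute-force sketch in Python:

```python
from itertools import combinations
from typing import Callable, FrozenSet, List

def maximal_consistent_subsets(
    kb: List[str],
    is_consistent: Callable[[FrozenSet[str]], bool],
):
    """Enumerate inclusion-maximal and maximum-cardinality consistent subsets.

    Exponential in |kb|; only meant to make the two notions of 'maximal'
    concrete, not to be an efficient revision procedure.
    """
    consistent = [
        frozenset(s)
        for r in range(len(kb), -1, -1)
        for s in combinations(kb, r)
        if is_consistent(frozenset(s))
    ]
    # Inclusion-maximal: no consistent strict superset exists.
    maximal = [s for s in consistent if not any(s < t for t in consistent)]
    # Maximum-cardinality: the largest of the inclusion-maximal ones.
    best = max(len(s) for s in maximal) if maximal else 0
    max_card = [s for s in maximal if len(s) == best]
    return maximal, max_card

# Toy consistency test: two statements placing the same object at the same
# time in different regions are treated as conflicting.
def is_consistent(stmts: FrozenSet[str]) -> bool:
    keys = [s.split(":")[0] for s in stmts]   # "o1@t5:r2" -> "o1@t5"
    return len(keys) == len(set(keys))

kb = ["o1@t5:r2", "o1@t5:r7", "o2@t5:r1"]
print(maximal_consistent_subsets(kb, is_consistent))
```

Enumerating subsets this way is exponential, which fits the intractability picture the abstract reports for most of the AGM-compliant revision methods.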

5.
Conditional probability is extended so as to be conditioned by an uncertain proposition whose truth value is λ (0 ≤ λ ≤ 1). Using the extended conditional probability, the measures of increased belief and disbelief in a hypothesis resulting from the observation of uncertain evidence are derived from MYCIN's measures based on certain evidence. In comparison with our measures, it is shown that MYCIN's intuitive measures based on uncertain evidence, called the strength of evidence, utilize affirmative information which increases belief in the uncertain evidence but ignore negative information which casts new doubt on the uncertain evidence. An interpretation of the disregard of the negative information is presented from the viewpoint of cognitive psychology. It is pointed out that this disregard of the negative information is reasonable for a model of human inference, but the negative information must also be utilized in order to evaluate a hypothesis correctly, or impartially, on the basis of uncertain evidence. Our measures provide a means for utilizing both the affirmative and negative information on uncertain evidence. It is shown that inference based on the negation of evidence, which is contained in one of our measures, is difficult for an expert. A method for estimating the measure is presented which does not demand the difficult inference from an expert. The significance of the method is explained from the viewpoint of cognitive psychology.
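For orientation, the certain-evidence measures that the paper extends to evidence with truth value λ are usually written as follows; this is the standard textbook form of MYCIN's MB/MD, not the paper's extended definitions.

```latex
\[
\mathrm{MB}(h,e) =
  \begin{cases}
    1 & \text{if } P(h)=1,\\[4pt]
    \dfrac{\max\{P(h\mid e),\,P(h)\} - P(h)}{1 - P(h)} & \text{otherwise,}
  \end{cases}
\qquad
\mathrm{MD}(h,e) =
  \begin{cases}
    1 & \text{if } P(h)=0,\\[4pt]
    \dfrac{P(h) - \min\{P(h\mid e),\,P(h)\}}{P(h)} & \text{otherwise.}
  \end{cases}
\]
```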

6.
We give a logical framework for reasoning with observations at different time points. We call belief extrapolation the process of completing initial belief sets stemming from observations by assuming minimal change. We give a general semantics and we propose several extrapolation operators. We study some key properties verified by these operators and we address computational issues. We study in detail the position of belief extrapolation with respect to revision and update: in particular, belief extrapolation is shown to be a specific form of time-stamped belief revision. Several related lines of work are positioned with respect to belief extrapolation.

7.
This paper applies the Transferable Belief Model (TBM) interpretation of the Dempster-Shafer theory of evidence to estimate parameter distributions for probabilistic structural reliability assessment based on information from previous analyses, expert opinion, or qualitative assessments (i.e., evidence). Treating model parameters as credal variables, the suggested approach constructs a set of least-committed belief functions for each parameter defined on a continuous frame of real numbers that represent beliefs induced by the evidence in the credal state, discounts them based on the relevance and reliability of the supporting evidence, and combines them to obtain belief functions that represent the aggregate state of belief in the true value of each parameter. Within the TBM framework, beliefs held in the credal state can then be transformed to a pignistic state where they are represented by pignistic probability distributions. The value of this approach lies in its ability to leverage results from previous analyses to estimate distributions for use within a probabilistic reliability and risk assessment framework. The proposed methodology is demonstrated in an example problem that estimates the physical vulnerability of a notional office building to blast loading.
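Two of the TBM operations mentioned above, discounting a belief function by the reliability of its source and transforming credal-level masses into a pignistic probability for decision making, follow standard definitions. The sketch below illustrates them on an invented frame and mass assignment; it is not the paper's vulnerability model.

```python
def discount(m, alpha, frame):
    """Shafer discounting: scale every focal mass by (1 - alpha)
    and move the removed mass alpha to the whole frame."""
    md = {A: (1 - alpha) * v for A, v in m.items()}
    md[frame] = md.get(frame, 0.0) + alpha
    return md

def pignistic(m):
    """BetP(x) = sum over focal sets A containing x of m(A) / |A|,
    renormalised if some mass sits on the empty set."""
    betp, conflict = {}, m.get(frozenset(), 0.0)
    for A, v in m.items():
        if not A:
            continue
        for x in A:
            betp[x] = betp.get(x, 0.0) + v / len(A)
    return {x: p / (1 - conflict) for x, p in betp.items()}

frame = frozenset({"low", "medium", "high"})
m = {frozenset({"high"}): 0.5,
     frozenset({"medium", "high"}): 0.3,
     frame: 0.2}
m_disc = discount(m, alpha=0.1, frame=frame)   # source assumed 90% reliable
print(pignistic(m_disc))
```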

8.
From the early development of machines for reasoning and decision making in higher-level information fusion, there has been a need for systematic and reliable evaluation of their performance. Performance evaluation is important for the comparison and assessment of alternative solutions to real-world problems. In this paper we focus on one aspect of performance assessment for reasoning under uncertainty: the accuracy of the resulting belief (prediction or estimate). We propose a framework for assessment based on the assumption that the system under investigation is uncertain only due to stochastic variability (randomness), which is partially known. In this context we formulate a distance measure between the “ground truth” and the output of an automated system for reasoning in the framework of one of the non-additive uncertainty formalisms (such as imprecise probability theory, belief function theory or possibility theory). The proposed assessment framework is demonstrated with a simple numerical example.

9.
The problem tackled in this article consists in associating perceived objects detected at a certain time with known objects detected previously, given uncertain and imprecise information regarding the association of each perceived object with each known object. For instance, this problem can occur during the association step of an obstacle tracking process, especially in the context of vehicle driving aids. A contribution to the modeling of this association problem in the belief function framework is introduced. By interpreting belief functions as weighted opinions according to the Transferable Belief Model semantics, pieces of information regarding the association of known objects and perceived objects can be expressed in a common global space of association, combined by the conjunctive rule of combination, and used for decision making via the pignistic transformation. This approach is validated on real data.
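The conjunctive rule of combination used here is unnormalised, so mass accumulating on the empty set measures the conflict between the combined opinions. The following sketch shows its standard form; the two mass functions are invented association opinions, not the paper's data.

```python
def conjunctive_combination(m1, m2):
    """TBM conjunctive rule: m12(A) = sum over B ∩ C = A of m1(B)·m2(C).
    No renormalisation, so m12(frozenset()) quantifies the conflict."""
    m12 = {}
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            A = B & C
            m12[A] = m12.get(A, 0.0) + v1 * v2
    return m12

# Two sources' opinions about which known object a perceived object matches.
m_sensor = {frozenset({"k1"}): 0.6, frozenset({"k1", "k2"}): 0.4}
m_track  = {frozenset({"k2"}): 0.7, frozenset({"k1", "k2"}): 0.3}
combined = conjunctive_combination(m_sensor, m_track)
print(combined)   # mass on frozenset() is the conflict (0.42 here)
```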

10.
A model management method based on nonmonotonic logic
蓝红兵, 费奇. 《自动化学报》 (Acta Automatica Sinica), 1992, 18(4): 414-420
This paper discusses the representation and propagation of uncertainty, the combination of evidence, and the problem-solving process in model management, and proposes a model management method based on nonmonotonic logic. The uncertainty of a model's structural form is represented as possibility propositions supported by the sets of assumptions that the modeller or domain experts make about unknown or random aspects of the problem structure. Uncertain relations between models are managed by maintaining the truth values (consistency) of the assumption environments and by adjusting degrees of belief, on the basis of conflicts that arise during problem solving or of relevant propositions and secondary judgements supplied by the decision maker.

11.
In this paper, group decision-making combined with uncertainty theory is shown to be a more conclusive framework, by building a bridge between deterministic and indeterministic group decision-making. In the absence of sufficient historical data, the reliability of decisions is determined mainly by experts rather than by prior probability distributions, which easily leads to subjectivity. Belief degrees and uncertainty distributions are therefore used to model individual preferences, and five scenarios of uncertain chance-constrained minimum cost consensus models are discussed from the perspectives of the moderator, individual decision-makers and non-cooperators. Conditions for reaching consensus and analytic formulas for the minimum total cost are derived theoretically. Finally, through an application to carbon quota negotiation, the proposed models are shown to be an extension of the crisp-number and interval preference-based minimum cost consensus models; in other words, the basic conclusions of the traditional models are special cases of the uncertain minimum cost consensus models under different belief degrees.

12.
Learning conditional probability tables in belief networks
1. Introduction. Learning a belief network B comprises learning its structure B_s and learning its conditional probability tables B_p. The causal Markov condition states that if a graph G is the causal graph of a set of random variables X, then G is also the structure of the belief network corresponding to the joint probability distribution of those variables. Following this principle, in practical applications one can make use of domain …
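The abstract is truncated, but the task it describes, estimating a node's conditional probability table from data given the structure, has a standard relative-frequency form. The following Python sketch is a generic illustration under that assumption (with Laplace smoothing) and is not taken from the cited paper.

```python
from collections import Counter
from itertools import product

def learn_cpt(data, child, parents, values):
    """Estimate P(child | parents) from complete data by relative
    frequency with add-one (Laplace) smoothing.

    data   : list of dicts mapping variable name -> observed value
    values : dict mapping variable name -> list of possible values
    returns: dict {parent-value tuple: {child value: probability}}
    """
    counts = Counter()
    for row in data:
        pa = tuple(row[p] for p in parents)
        counts[(pa, row[child])] += 1
    cpt = {}
    for pa in product(*(values[p] for p in parents)):
        total = sum(counts[(pa, v)] for v in values[child])
        k = len(values[child])
        cpt[pa] = {v: (counts[(pa, v)] + 1) / (total + k)
                   for v in values[child]}
    return cpt

data = [{"Rain": "yes", "Sprinkler": "no",  "WetGrass": "yes"},
        {"Rain": "no",  "Sprinkler": "yes", "WetGrass": "yes"},
        {"Rain": "no",  "Sprinkler": "no",  "WetGrass": "no"}]
values = {"Rain": ["yes", "no"], "Sprinkler": ["yes", "no"],
          "WetGrass": ["yes", "no"]}
print(learn_cpt(data, "WetGrass", ["Rain", "Sprinkler"], values))
```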

13.
Dempster–Shafer theory allows belief functions to be constructed from (precise) basic probability assignments. The present paper extends this idea substantially. By considering sets of basic probability assignments, an appealing constructive approach to general interval probability is achieved, which allows for very flexible modelling of uncertain knowledge.
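As background for the interval-probability view, a single basic probability assignment m already induces an interval [Bel(A), Pl(A)] for each event A; the paper's construction works with sets of such assignments. A minimal sketch of the standard Bel/Pl computation on a toy frame:

```python
def bel(m, A):
    """Belief: total mass committed to non-empty subsets of A."""
    return sum(v for B, v in m.items() if B and B <= A)

def pl(m, A):
    """Plausibility: total mass not committed against A."""
    return sum(v for B, v in m.items() if B & A)

frame = frozenset({"a", "b", "c"})
m = {frozenset({"a"}): 0.4, frozenset({"a", "b"}): 0.3, frame: 0.3}
A = frozenset({"a", "b"})
print(bel(m, A), pl(m, A))   # interval [0.7, 1.0] for the event {a, b}
```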

14.
In this paper, we study the Dempster–Shafer theory of evidence in situations of decision making with linguistic information and develop a new aggregation operator, the belief structure generalized linguistic hybrid averaging (BS-GLHA) operator, together with a wide range of particular cases. We develop a new decision-making model with Dempster–Shafer belief structures that uses linguistic information in order to manage uncertain situations that cannot be managed in a probabilistic way. All these approaches are useful for representing the new methods in a more complete way, selecting for each situation the particular case that is closest to our interests in the specific problem analyzed. Finally, a numerical example is used to illustrate the applicability and effectiveness of the proposed method. We point out that the results and decisions depend on the linguistic aggregation operator used in the decision-making process.

15.
Artificial Intelligence, 1987, 31(3): 271-293
Four main results are arrived at in this paper. (1) Closed convex sets of classical probability functions provide a representation of belief that includes the representations provided by Shafer probability mass functions as a special case. (2) The impact of “uncertain evidence” can be (formally) represented by Dempster conditioning, in Shafer's framework. (3) The impact of “uncertain evidence” can be (formally) represented in the framework of convex sets of classical probabilities by classical conditionalization. (4) The probability intervals that result from Dempster-Shafer updating on uncertain evidence are included in (and may be properly included in) the intervals that result from Bayesian updating on uncertain evidence.
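For reference, the two updating mechanisms compared in results (2)-(4) are commonly written as follows (standard forms quoted for orientation; the paper's own notation may differ). The first pair is Dempster conditioning of a belief function on evidence B; the second is the lower/upper envelope obtained by conditioning every distribution in the convex set on B. Here \bar{B} denotes the complement of B.

```latex
\[
\mathrm{Pl}(A \,\|\, B) = \frac{\mathrm{Pl}(A \cap B)}{\mathrm{Pl}(B)},
\qquad
\mathrm{Bel}(A \,\|\, B) = \frac{\mathrm{Bel}(A \cup \bar{B}) - \mathrm{Bel}(\bar{B})}{1 - \mathrm{Bel}(\bar{B})};
\]
\[
\underline{P}(A \mid B) = \frac{\mathrm{Bel}(A \cap B)}{\mathrm{Bel}(A \cap B) + \mathrm{Pl}(\bar{A} \cap B)},
\qquad
\overline{P}(A \mid B) = \frac{\mathrm{Pl}(A \cap B)}{\mathrm{Pl}(A \cap B) + \mathrm{Bel}(\bar{A} \cap B)}.
\]
```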

16.
In this paper we present a new credal classification rule (CCR) based on belief functions to deal with uncertain data. CCR allows objects to belong (with different masses of belief) not only to specific classes, but also to sets of classes called meta-classes, which correspond to the disjunction of several specific classes. Each specific class is characterized by a class center (i.e. prototype), and consists of all the objects that are sufficiently close to the center. The belief that a given object belongs to a specific class is determined from the Mahalanobis distance between the object and the center of the corresponding class. The meta-classes are used to capture the imprecision in the classification of objects that are difficult to classify correctly because of the poor quality of the available attributes. The selection of meta-classes depends on the application and the context, and a measure of the degree of indistinguishability between classes is introduced. In this new CCR approach, the objects assigned to a meta-class should be close to the center of this meta-class, having similar distances to all the involved specific classes' centers, and objects too far from all the others are considered outliers (noise). CCR provides robust credal classification results with a relatively low computational burden. Several experiments using both artificial and real data sets are presented at the end of this paper to evaluate and compare the performance of this CCR method with respect to other classification methods.
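Only the distance computation underlying CCR is sketched below; how distances are converted into masses over specific classes, meta-classes and the outlier class is the paper's contribution and is merely caricatured here by an exponential decay with a made-up parameter.

```python
import numpy as np

def mahalanobis(x, centre, cov):
    """Mahalanobis distance between a sample x and a class centre."""
    d = x - centre
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def class_masses(x, centres, covs, gamma=1.0):
    """Toy assignment: unnormalised mass exp(-gamma * d) per specific
    class, then renormalised; meta-classes and outliers are omitted."""
    raw = {c: np.exp(-gamma * mahalanobis(x, mu, covs[c]))
           for c, mu in centres.items()}
    total = sum(raw.values())
    return {c: v / total for c, v in raw.items()}

centres = {"w1": np.array([0.0, 0.0]), "w2": np.array([3.0, 3.0])}
covs = {c: np.eye(2) for c in centres}
print(class_masses(np.array([1.0, 1.2]), centres, covs))
```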

17.
In the Bayesian probabilistic approach to uncertain reasoning, one basic assumption is that a priori knowledge about the uncertain variable is modeled by a probability distribution. When new evidence representable by a constant set is available, Bayesian conditioning is used to update the a priori knowledge. In conventional D-S evidence theory, all bodies of evidence about the uncertain variable are imprecise and uncertain, and they are combined by Dempster's rule of combination to obtain a combined body of evidence, without considering a priori knowledge. From our point of view, when identifying the true value of an uncertain variable, the Bayesian approach and evidence theory can cooperate in uncertain reasoning: first, all imprecise and uncertain bodies of evidence about the variable are fused into a combined body of evidence on the basis of a priori knowledge; then the a posteriori probability distribution is obtained from the a priori distribution by conditioning on the combined evidence. In this paper we first address the knowledge-updating problem, in which a priori knowledge is represented by a probability distribution and new evidence by a random set. We then review the conditional evidence theory, which resolves the knowledge-combining problem on the basis of a priori probabilistic knowledge. Finally, we discuss the close relationship between the knowledge-updating and knowledge-combining procedures presented in this paper, and show that the a posteriori probability conditioned on the fused body of evidence satisfies the Bayesian parallel combination rule.

18.
John McCarthy's situation calculus has left an enduring mark on artificial intelligence research. This simple yet elegant formalism for modelling and reasoning about dynamic systems is still in common use more than forty years after it was first proposed. The ability to reason about action and change has long been considered a necessary component of any intelligent system. The situation calculus and its numerous extensions, as well as the many competing proposals it has inspired, deal with this problem to some extent. In this paper, we offer a new approach to belief change associated with performing actions that addresses some of the shortcomings of these approaches. In particular, our approach is based on a well-developed theory of action in the situation calculus, extended to deal with belief. Moreover, by augmenting this approach with a notion of plausibility over situations, our account handles nested belief, belief introspection and mistaken belief, and it accommodates belief revision and belief update together with iterated belief change.

19.
We present a system for performing belief revision in a multi-agent environment. The system is called GBR (Genetic Belief Revisor) and is based on a genetic algorithm. In this setting, different individuals are exposed to different experiences. This may happen because the world surrounding an agent changes over time, or because we allow agents to explore different parts of the world. The algorithm permits the exchange of chromosomes between different agents and combines two different evolution strategies, one based on Darwin's and the other on Lamarck's evolutionary theory. The algorithm therefore also includes a Lamarckian operator that changes the memes of an agent in order to improve their fitness. The operator is implemented by means of a belief revision procedure that, by tracing logical derivations, identifies the memes leading to contradiction. Moreover, the algorithm comprises a special crossover mechanism for memes in which a meme can be acquired from another agent only if the other agent has “accessed” the meme, i.e. if an application of the Lamarckian operator has read or modified the meme. Experiments have been performed on the n-queens problem and on a problem of digital circuit diagnosis. In the case of the n-queens problem, the addition of the Lamarckian operator in the single-agent case improves the fitness of the best solution. In both cases the experiments show that the distribution of constraints, even if it may lead to a reduction of the fitness of the best solution, does not produce a significant reduction.
Evelina Lamma, Ph.D.: She is Full Professor at the University of Ferrara. She got her degree in Electrical Engineering at the University of Bologna in 1985, and her Ph.D. in Computer Science in 1990. Her research activity centers on extensions of logic programming languages and artificial intelligence. She was a co-organizer of the 3rd International Workshop on Extensions of Logic Programming ELP92, held in Bologna in February 1992, and of the 6th Italian Congress on Artificial Intelligence, held in Bologna in September 1999. Currently, she teaches Artificial Intelligence and Foundations of Computer Science.
Fabrizio Riguzzi, Ph.D.: He is Assistant Professor at the Department of Engineering of the University of Ferrara, Italy. He received his Laurea from the University of Bologna in 1995 and his Ph.D. from the University of Bologna in 1999. He joined the Department of Engineering of the University of Ferrara in 1999. He has been a visiting researcher at the University of Cyprus and at the New University of Lisbon. His research interests include: data mining (and in particular methods for learning from multirelational data), machine learning, belief revision, genetic algorithms and software engineering.
Luís Moniz Pereira, Ph.D.: He is Full Professor of Computer Science at Departamento de Informática, Universidade Nova de Lisboa, Portugal. He received his Ph.D. in Artificial Intelligence from Brunel University in 1974. He is the director of the Artificial Intelligence Centre (CENTRIA) at Universidade Nova de Lisboa. He was elected Fellow of the European Coordinating Committee for Artificial Intelligence in 2001. He has been a visiting Professor at the U. California at Riverside, USA, the State U. NY at Stony Brook, USA and the U. Bologna, Italy. His research interests include: knowledge representation, reasoning, learning, rational agents and logic programming.
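A deliberately simplified, hypothetical sketch of the Darwinian/Lamarckian mix described above: a plain genetic loop in which, with some probability, an individual is also repaired in place before selection. The bit-string fitness, the repair rule and all parameters are stand-ins; the real GBR operator uses a belief revision procedure that traces logical derivations to choose which memes to change.

```python
import random

def evolve(pop_size=20, genome_len=8, generations=50, p_lamarck=0.3):
    """Toy GA maximising the number of 1-bits, with an optional
    Lamarckian step that directly fixes one 'contradictory' gene."""
    fitness = lambda g: sum(g)
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Lamarckian operator: acquired improvement written back to genome.
        for g in pop:
            if random.random() < p_lamarck and 0 in g:
                g[g.index(0)] = 1
        # Darwinian part: tournament selection, crossover and mutation.
        new_pop = []
        while len(new_pop) < pop_size:
            a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:
                i = random.randrange(genome_len)
                child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

print(evolve())
```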

20.
The theory of evidence proposed by G. Shafer is gaining more and more acceptance in the field of artificial intelligence, for the purpose of managing uncertainty in knowledge bases. One of the crucial problems is combining uncertain pieces of evidence stemming from several sources, whether rules or physical sensors. This paper examines the framework of belief functions in terms of its expressive power for knowledge representation. It is recalled that probability theory and Zadeh's theory of possibility are mathematically encompassed by the theory of evidence, as far as the evaluation of belief is concerned. Empirical and axiomatic foundations of belief functions and possibility measures are investigated. Then the general problem of combining uncertain evidence is addressed, with focus on Dempster's rule of combination. It is pointed out that this rule is not very well adapted to the pooling of conflicting information. Alternative rules are proposed to cope with this problem and deal with specific cases such as nonreliable sources, nonexhaustive sources, inconsistent sources, and dependent sources. It is also indicated that combination rules derived from fuzzy set and possibility theory look more flexible than Dempster's rule because many variants exist, and their numerical stability seems to be better.
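To make the conflict problem concrete, the sketch below contrasts Dempster's (normalised) rule with one simple alternative, Yager's rule, which transfers the conflicting mass to the whole frame instead of renormalising; the input masses reproduce the classic two-source high-conflict example purely as an illustration.

```python
def combine(m1, m2, frame, rule="dempster"):
    """Combine two mass functions on the same frame.
    'dempster' renormalises away the conflict; 'yager' transfers it
    to the whole frame (total ignorance) instead."""
    joint, conflict = {}, 0.0
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            A = B & C
            if A:
                joint[A] = joint.get(A, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    if rule == "dempster":
        return {A: v / (1 - conflict) for A, v in joint.items()}
    joint[frame] = joint.get(frame, 0.0) + conflict
    return joint

frame = frozenset({"a", "b", "c"})
m1 = {frozenset({"a"}): 0.99, frozenset({"b"}): 0.01}
m2 = {frozenset({"c"}): 0.99, frozenset({"b"}): 0.01}
print(combine(m1, m2, frame, "dempster"))   # all mass ends up on {b}
print(combine(m1, m2, frame, "yager"))      # almost all mass on the frame
```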
