Similar Literature
20 similar documents found.
1.
Fuzzy logic can produce inappropriate inferences because some information is ignored in the reasoning process. Neural networks are powerful tools for pattern processing, but are not well suited to the logical reasoning needed to model human knowledge. A neural logic network derived from a modified neural network, however, makes logical reasoning possible. In this paper, we construct a fuzzy inference network by extending the rule-inference network based on an existing neural logic network. The propagation rule used in the existing rule-inference network is modified and applied. To determine the belief value of a proposition in the execution part of the fuzzy rules in a fuzzy inference network, the nodes connected to the proposition to be inferred must be searched. The search costs are compared and evaluated by applying sequential and priority searches over all connected nodes.
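To make the search comparison concrete, here is a minimal sketch, with an invented toy network rather than the paper's neural logic structure, of how a sequential scan and a priority (best-first) search over the nodes connected to a goal proposition can differ in cost:

```python
import heapq

# hypothetical network: proposition -> [(connected rule node, belief value)]
connected = {"goal": [("r1", 0.20), ("r2", 0.90), ("r3", 0.60)]}

def sequential_search(prop, threshold=0.8):
    """Visit connected nodes in stored order; return (belief, nodes visited)."""
    best, visited = 0.0, 0
    for node, belief in connected[prop]:
        visited += 1
        best = max(best, belief)
        if best >= threshold:       # a sufficiently strong rule settles it
            break
    return best, visited

def priority_search(prop, threshold=0.8):
    """Visit connected nodes in descending belief order via a max-heap."""
    heap = [(-belief, node) for node, belief in connected[prop]]
    heapq.heapify(heap)
    best, visited = 0.0, 0
    while heap and best < threshold:
        neg_belief, node = heapq.heappop(heap)
        visited += 1
        best = max(best, -neg_belief)
    return best, visited

print(sequential_search("goal"))   # (0.9, 2): scans r1, then r2
print(priority_search("goal"))     # (0.9, 1): goes straight to r2
```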

2.
3.
In recent years, reasoning about the structure and function of physical systems for the purpose of diagnosis has seen a dramatic increase in activity. Exciting new results concerning modelling issues, diagnostic inference patterns, and inferential power have emerged. A state-of-the-art diagnosis agent now has a considerable toolset at hand. A main obstacle for building large diagnosis systems, however, remains: how can we control when to use which inference pattern or representation? We argue that the actions available to a diagnosis agent can be understood in terms of changes of working hypotheses. The control problem then becomes a belief revision problem: when to adopt or drop beliefs. Our approach proceeds in two steps. First, we adopt the principle of informational economy from Gärdenfors, Knowledge in Flux (MIT Press, 1988) as a kind of law of inertia for diagnostic processes, which helps us identify candidates for revised belief states. In a second step, we employ specific diagnostic knowledge to actually choose the next belief state. We demonstrate the use of our concepts on an example in the domain of ballast tank systems, such as those used in offshore plants.
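A minimal sketch of the two-step control loop, under assumed data structures (belief states as sets of working hypotheses, change measured as symmetric difference), might look like this:

```python
# Step 1 (informational economy) keeps only the candidates that change the
# current belief state least; step 2 chooses among them using a
# domain-specific diagnostic preference score. All names are illustrative.
def revise(current, candidates, diagnostic_score):
    def change(state):                  # size of the symmetric difference
        return len(state ^ current)
    least = min(change(s) for s in candidates)
    inertial = [s for s in candidates if change(s) == least]   # step 1
    return max(inertial, key=diagnostic_score)                 # step 2

current = frozenset({"tank_ok", "valve_ok"})
candidates = [frozenset({"tank_ok", "valve_leak"}),
              frozenset({"tank_rupture", "valve_leak"})]
best = revise(current, candidates, lambda s: -len(s))  # toy preference
print(best)   # the single-fault candidate wins on inertia grounds
```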

4.
Pattern Recognition Letters, 1999, 20(11-13): 1211-1217
Abductive inference in Bayesian belief networks is the process of generating the K most probable configurations given observed evidence. When we are interested in only a subset of the network's variables, the problem is called partial abductive inference. Both problems are NP-hard, so exact computation is not always possible. This paper describes an approximate method based on genetic algorithms for performing partial abductive inference. We have tested the algorithm on the ALARM network, and from the experimental results we conclude that the algorithm presented here is a good tool for this kind of probabilistic reasoning.
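As a rough illustration of the approach (not the authors' exact encoding or genetic operators), a genetic algorithm for partial abduction can evolve configurations of the explanation-set variables, scored by a black-box posterior-probability function:

```python
import random

def genetic_partial_abduction(variables, domains, score, pop=30, gens=50):
    """Individuals are configurations of the explanation-set variables;
    `score(cfg)` stands in for the (unnormalized) P(cfg | evidence)."""
    def random_individual():
        return tuple(random.choice(domains[v]) for v in variables)
    population = [random_individual() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=score, reverse=True)
        parents = population[: pop // 2]               # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(variables))  # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = random.randrange(len(variables))       # point mutation
            child[i] = random.choice(domains[variables[i]])
            children.append(tuple(child))
        population = parents + children
    return sorted(set(population), key=score, reverse=True)[:5]  # K best

variables = ["X", "Y", "Z"]
domains = {"X": [0, 1], "Y": [0, 1], "Z": [0, 1, 2]}
score = lambda cfg: hash(cfg) % 97 / 97.0   # stand-in for P(cfg | evidence)
print(genetic_partial_abduction(variables, domains, score))
```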

5.
黄德根, 张云霞, 林红梅, 邹丽, 刘壮. 软件学报 (Journal of Software), 2020, 31(4): 1063-1078
To alleviate the poor interpretability caused by the "black box" nature of neural networks, this paper proposes a rule inference network model based on the belief-rule-base inference methodology using the evidential reasoning approach (RIMER). The model improves the interpretability of the network through RIMER's belief rules and inference mechanism. First, it is proved that the evidential-reasoning-based inference function admits partial derivatives, which guarantees the feasibility of the algorithm. Then, the network architecture and learning algorithm of the rule inference network are given; the inference process of RIMER serves as the feed-forward pass of the network, preserving interpretability. Gradient descent is used to adjust the parameters of the rule base so as to build a more reasonable belief rule base, and the concept of a "pseudo-gradient" is proposed to reduce the learning complexity. Finally, comparative classification experiments analyze the advantages of the proposed algorithm in accuracy and interpretability. The experimental results show that the rule inference network performs well when the training data set is small, and also achieves satisfactory results as the training data grow.
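For intuition, the following is a simplified sketch of a belief-rule feed-forward pass; it uses plain weighted-sum aggregation in place of RIMER's full evidential reasoning combination, so it only approximates the structure described in the paper:

```python
import numpy as np

def feed_forward(match_degrees, rule_weights, consequent_beliefs):
    """match_degrees: (n_rules,) antecedent matching degrees in [0, 1]
    rule_weights: (n_rules,) learnable rule weights
    consequent_beliefs: (n_rules, n_grades) belief degrees per rule"""
    activation = match_degrees * rule_weights
    activation = activation / activation.sum()   # normalized activations
    return activation @ consequent_beliefs       # output belief distribution

match = np.array([0.7, 0.2, 0.1])
weights = np.array([1.0, 0.8, 0.5])
beliefs = np.array([[0.9, 0.1], [0.3, 0.7], [0.5, 0.5]])
print(feed_forward(match, weights, beliefs))
```

Because every quantity above is a named belief rule or activation weight rather than an opaque hidden unit, each output can be traced back to the rules that produced it, which is the interpretability property the paper is after.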

6.
This paper is based on the premise that legal reasoning involves an evaluation of facts, principles, and legal precedent that are inexact, and that uncertainty-based methods represent a useful approach for modeling this type of reasoning. By applying three different uncertainty-based methods to the same legal reasoning problem, a comparative study can be constructed. The application involves modeling legal reasoning for the assessment of potential liability due to defective product design. The three methods used in this study are a Bayesian belief network, a fuzzy logic system, and an artificial neural network. A common knowledge base is used to implement the three solutions and provide an unbiased framework for evaluation. The problem framework and the construction of the common knowledge base are described. The theoretical background for Bayesian belief networks, fuzzy logic inference, and the multilayer perceptron with backpropagation is discussed. The design, implementation, and results for each of these systems are provided. The fuzzy logic system outperformed the other systems, reproducing the opinion of a skilled attorney in 99 of 100 cases, but it required more effort to construct the rule base. The neural network method also reproduced the expert's opinions very well, and required less effort to develop.

7.
Fuzzy reasoning methods (or approximate reasoning methods) are extensively used in intelligent systems and fuzzy control. In this paper the author discusses how errors in premises affect conclusions in fuzzy reasoning, that is, the robustness of fuzzy reasoning. After reviewing his previous work (1996), he presents robustness results for various implication operators and inference rules. All the robustness results are formulated in terms of δ-equalities of fuzzy sets: two fuzzy sets are said to be δ-equal if they are equal to an extent of δ.
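Following the usual definition in this line of work (an assumption here, since the abstract does not spell it out), the largest δ for which two discrete fuzzy sets are δ-equal is 1 minus the supremum of their pointwise membership differences; a small sketch:

```python
def delta_equality(a, b):
    """a, b: dicts mapping elements of a common universe to memberships.
    Returns the largest delta such that a and b are delta-equal."""
    universe = set(a) | set(b)
    return 1.0 - max(abs(a.get(x, 0.0) - b.get(x, 0.0)) for x in universe)

A = {"x1": 0.8, "x2": 0.4, "x3": 0.1}
B = {"x1": 0.7, "x2": 0.5, "x3": 0.1}
print(delta_equality(A, B))   # 0.9 -> A and B are 0.9-equal
```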

8.
An inquiry into computer understanding
This essay addresses a number of issues centered around the question of what is the best method for representing and reasoning about common sense (sometimes called plausible inference). Drew McDermott has shown that a direct translation of commonsense reasoning into logical form leads to insurmountable difficulties, from which McDermott concluded that we must resort to procedural ad hocery. This paper shows that the difficulties McDermott described result from insisting on using logic as the language of commonsense reasoning. If, instead, (Bayesian) probability is used, none of the technical difficulties found in using logic arise. For example, in probability the problem of referential opacity cannot occur, and nonmonotonic logics (which McDermott showed don't work anyway) are not necessary. The difficulties in applying logic to the real world are shown to arise from the limitations of the truth semantics built into logic; probability substitutes the more reasonable notion of belief. In Bayesian inference, many pieces of evidence are combined to produce an overall measure of belief in a proposition. This is much closer to commonsense patterns of thought than long chains of logical inference to true conclusions. It is also shown that English expressions of the form "IF A THEN B" are best interpreted as conditional probabilities rather than universally quantified expressions. Bayesian inference is applied to a simple example of linguistic information to illustrate the potential of this type of inference for AI. This example also shows how to deal with vague information, which has so far been the province of fuzzy logic. It is further shown that Bayesian inference gives a theoretical basis for inductive inference that is borne out in practice. Rather than insisting that probability is the best language for commonsense reasoning, a major point of this essay is to show that real inference is a complex interaction between probability, logic, and other formal representation and reasoning systems.
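The contrast between material implication and conditional probability is easy to make concrete; in this toy example (with a made-up joint distribution), "IF A THEN B" earns strong but not absolute support:

```python
joint = {  # P(A, B) over the four truth assignments
    (True, True): 0.45, (True, False): 0.05,
    (False, True): 0.20, (False, False): 0.30,
}

def prob(pred):
    """Probability of the event defined by pred(a, b)."""
    return sum(p for (a, b), p in joint.items() if pred(a, b))

# reading "IF A THEN B" as P(B | A) rather than a universal implication
p_b_given_a = prob(lambda a, b: a and b) / prob(lambda a, b: a)
print(p_b_given_a)   # 0.9 -> the rule holds to degree 0.9, not absolutely
```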

9.
Bayesian networks (BNs) and influence diagrams (IDs) are probabilistic graphical models that are widely used for building diagnosis- and decision-support expert systems. Explanation of both the model and the reasoning is important for debugging these models, alleviating users' reluctance to accept their advice, and using them as tutoring systems. This paper describes some explanation options for BNs and IDs that have been implemented in Elvira and how they have been used for building medical models and teaching probabilistic reasoning to pre- and postgraduate students.

10.
This paper demonstrates the relational structure of belief networks by establishing an extended relational data model that can be applied to both belief networks and relational applications. It is shown that a Markov network can be represented as a generalized acyclic join dependency (GAJD), which is equivalent to a set of conflict-free generalized multivalued dependencies (GMVDs). A Markov network can also be characterized by an entropy function, which greatly facilitates the manipulation of GMVDs. These results are extensions of results established in relational theory. It is shown that there exists a complete set of inference rules for the GMVDs, a result that is important from a probabilistic perspective. All of the above results explicitly demonstrate that there is a unified model for relational databases and probabilistic reasoning systems. This is important not only from a theoretical point of view, in that one model has been developed for a number of domains, but also from a practical point of view, in that one system can be implemented for both domains and can take advantage of the performance-enhancing techniques developed in both fields. This paper thereby serves as a theoretical foundation for harmonizing these two important information domains.
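One way to see the entropy characterization at work: a conditional independence X ⊥ Y | Z, the probabilistic counterpart of a multivalued dependency, holds exactly when the conditional mutual information computed from entropies is zero. A sketch, assuming a joint distribution given as a table:

```python
from math import log2

def H(joint, axes):
    """Entropy of the marginal over the given coordinate positions."""
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in axes)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

def cond_mutual_info(joint):   # coordinates of each outcome: (x, y, z)
    # I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z)
    return (H(joint, (0, 2)) + H(joint, (1, 2))
            - H(joint, (2,)) - H(joint, (0, 1, 2)))

# X and Y are independent given Z in this toy joint, so I(X;Y|Z) = 0
joint = {(x, y, z): 0.125 for x in (0, 1) for y in (0, 1) for z in (0, 1)}
print(round(cond_mutual_info(joint), 10))   # 0.0
```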

11.
12.
The importance of explanation in expert systems has been documented from the early days of their development; there is an equally pressing need for explanation in systems that employ a decision-making process based on quantitative reasoning. This is particularly necessary for users who do not have a sophisticated understanding of the formal apparatus that the system employs to reach its decisions. In order to generate meaningful answers to questions asked by such users, an explanation facility must translate the formal structures of the problem-solving system into the concepts with which the user understands the problem domain. Previous work on the explanation of quantitative systems assumes that the user has at least a basic grasp of the formal approach of the problem-solving system. In realistic application situations, however, it is more likely that, for the human user to understand why a mathematically based advice-giving system makes the suggestions it does, the problem-solving rationale of the system must be explained in the user's own terms, which are typically different from those of the mathematical system. To develop an explanation methodology capable of justifying the results of a quantitative system to an uninitiated user, we employ a representation that enables our explanation facility to translate the abstract mathematical relationships that make up a quantitative system into the domain-specific concepts with which a typical user approaches the problem-solving task. In our system, generating explanations therefore involves translating one set of concepts into another. An added feature of this system is that it can provide explanations from two perspectives: that of the quantitative problem-solving system, and that of the human user who is familiar with the domain problem but not with the mathematical approach. We have implemented this approach by creating an explanation facility for a problem in the manufacturing domain; the facility responds to user queries about a scheduling system that uses a mathematically based heuristic to choose jobs for an annealing furnace.
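A toy sketch of the translation idea, with entirely invented variable names and domain phrases: the same scheduling term can be explained from the quantitative perspective or mapped into the user's domain vocabulary:

```python
# hypothetical mapping from heuristic terms to domain-level concepts
concept_map = {
    "c_j": "how urgently job {j} is due",
    "p_j": "how long job {j} will occupy the furnace",
    "w_j": "how important job {j}'s customer is",
}

def explain(term, j, perspective):
    """Explain a scheduling term from one of the two perspectives."""
    if perspective == "quantitative":
        return f"{term} enters the priority index computed for job {j}"
    return concept_map[term].format(j=j)    # user's domain perspective

print(explain("c_j", 7, "quantitative"))
print(explain("c_j", 7, "domain"))
```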

13.
Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the set of inferences that can be drawn from a reasoner's knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it. This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: how can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general, and where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies: a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.
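A minimal sketch of the idea, with invented goal structures and utilities: knowledge goals are queued explicitly, and a limited inference budget is spent best-first:

```python
import heapq

def pursue(goals, budget):
    """goals: list of (utility, description, thunk); run best-first until
    the inference budget is exhausted."""
    heap = [(-u, i, d, t) for i, (u, d, t) in enumerate(goals)]
    heapq.heapify(heap)
    results = []
    while heap and budget > 0:
        _, _, desc, infer = heapq.heappop(heap)
        results.append((desc, infer()))
        budget -= 1                     # each inference costs one unit
    return results

goals = [(0.9, "identify actor's motive", lambda: "revenge"),
         (0.2, "resolve minor anaphora", lambda: "it -> the furnace"),
         (0.7, "explain anomalous symptom", lambda: "drug interaction")]
print(pursue(goals, budget=2))   # spends the budget on the two best goals
```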

14.
A neuron based on weak T-norms and weak S-norms can implement AND, OR, and mixed AND-OR fuzzy logic operations, and it exhibits strong robustness. Applying a neural network composed of such neurons to a fuzzy inference system not only simplifies the network and satisfies the most basic consistency requirement of fuzzy inference, but also controls the extent to which perturbations of the rules affect the inference results during fuzzy reasoning.
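For illustration, here is a sketch of fuzzy AND/OR logic neurons in the standard form; it uses the ordinary min/max pair of norms, where the paper's weak T-norm and weak S-norm would be substituted to obtain its robustness properties:

```python
import numpy as np

t_norm = np.minimum    # fuzzy AND of two membership values
s_norm = np.maximum    # fuzzy OR of two membership values

def and_neuron(inputs, weights):
    """AND neuron: s-norm each input with its weight, then t-norm across."""
    return np.minimum.reduce(s_norm(inputs, weights))

def or_neuron(inputs, weights):
    """OR neuron: t-norm each input with its weight, then s-norm across."""
    return np.maximum.reduce(t_norm(inputs, weights))

x = np.array([0.8, 0.6, 0.9])   # input membership degrees
w = np.array([0.1, 0.3, 0.0])   # connection weights (0 = full relevance
                                # for AND, irrelevance for OR)
print(and_neuron(x, w), or_neuron(x, w))   # 0.6 0.3
```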

15.
The ability to explain the reasoning processes used for problem solving distinguishes the expert system from other decision support systems. The explanation facility is intended to help the user critically analyze the expert system's output. Studies investigating the explanation effect, however, report that the act of explaining an event's occurrence increases the perceived likelihood of the event. The consideration of expert system explanations, therefore, may lead to greater agreement with the system's output rather than the intended critical review of the output. This research examines the explanation effect resulting from the consideration of expert system explanations. The differential impact of considering expert system explanations is compared with the effect of generating written explanations, and the differential impact of positive versus negative explanations is investigated. A hypothetical audit case was administered to 41 practicing auditors. An explanation effect was observed when the expert system explanation was negative, but not when it was positive, and not for either positive or negative self-generated explanations. The most influential explanations, therefore, were those that were generated by the expert system and were negative, or conservative, in direction.

16.
Artificial Intelligence, 1987, 33(2): 173-215
This paper extends the applications of belief network models to include the revision of belief “commitments,” i.e., the categorical acceptance of a subset of hypotheses which, together, constitute the most satisfactory explanation of the evidence at hand. A coherent model of nonmonotonic reasoning is introduced, and distributed algorithms for belief revision are presented. We show that, in singly connected networks, the most satisfactory explanation can be found in linear time by a message-passing algorithm similar to the one used in belief updating. In multiply connected networks, the problem may be exponentially hard but, if the network is sparse, topological considerations can be used to render the interpretation task tractable. In general, finding the most probable combination of hypotheses is no more complex than computing the degree of belief for any individual hypothesis. Applications to circuit and medical diagnosis are illustrated.
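On the simplest singly connected network, a chain, the linear-time revision idea corresponds to max-product (Viterbi-style) message passing; a sketch, analogous to but not identical with the paper's distributed algorithm:

```python
import numpy as np

def mpe_chain(prior, transitions):
    """Most probable joint configuration of a chain X1 - X2 - ... - Xn.
    prior: (k,) P(X1); transitions: list of (k, k) P(X_{t+1} | X_t)."""
    best = np.log(prior)
    back = []
    for T in transitions:
        scores = best[:, None] + np.log(T)   # score of each (prev, next) pair
        back.append(scores.argmax(axis=0))   # best predecessor per next state
        best = scores.max(axis=0)
    states = [int(best.argmax())]
    for bp in reversed(back):                # trace the argmaxes back
        states.append(int(bp[states[-1]]))
    return list(reversed(states))

prior = np.array([0.6, 0.4])
T = np.array([[0.7, 0.3], [0.2, 0.8]])
print(mpe_chain(prior, [T, T]))    # most probable joint assignment: [0, 0, 0]
```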

17.
In complex reasoning tasks it is often the case that there is no single, correct set of conclusions given some initial information. Instead, there may be several such conclusion sets, which we will call belief sets. In the present paper we introduce nonmonotonic belief set operators and selection operators to formalize and analyze structural aspects of reasoning with multiple belief sets. We define and investigate formal properties of belief set operators such as absorption, congruence, supradeductivity, and weak belief monotony. Furthermore, it is shown that for each belief set operator satisfying strong belief cumulativity there exists a largest monotonic logic underlying it, thus generalizing a result for nonmonotonic inference operations. Finally, we study abstract properties of selection operators connected to belief set operators, which are used to choose some of the possible belief sets.

18.
Environmental monitoring is usually based on large volumes of data, while environmental decision making is in general a complex problem that involves a high degree of uncertainty and diverse areas of expertise. Environmental decision-support systems are therefore good candidates for the application of artificial intelligence (AI) techniques. In this paper it is argued that a suitable approach for building these systems is the use of case-based reasoning or analogical reasoning techniques, which offer more adaptability and better explanation facilities than other AI paradigms. As an example, the development stages, architecture, and operational characteristics of the expert system Air Quality Predictor (AIRQUAP), developed to predict air pollution levels in Athens, Greece, are described. AIRQUAP helps users retrieve historical data intelligently and can predict air pollution levels, which is useful for the management of air pollution episodes. The performance of the system is also compared with other techniques used in this class of applications.
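The case-based reasoning step can be sketched as nearest-neighbour retrieval over historical days; the features, cases, and pollutant values below are invented for illustration and are much simpler than AIRQUAP's actual case representation:

```python
import numpy as np

cases = np.array([   # [temperature, wind speed, traffic index] per past day
    [32.0, 1.2, 0.9], [18.0, 5.5, 0.4], [30.0, 2.0, 0.8], [15.0, 6.0, 0.3]])
levels = np.array([180.0, 40.0, 150.0, 35.0])   # observed pollution levels

def predict(today, k=2):
    """Retrieve the k most similar past days and reuse their levels."""
    d = np.linalg.norm(cases - today, axis=1)   # similarity = -distance
    nearest = np.argsort(d)[:k]                 # retrieve k closest cases
    return levels[nearest].mean()               # reuse: average their levels

print(predict(np.array([31.0, 1.5, 0.85])))    # ≈ 165, a high-pollution day
```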

19.
Providing explanations for recommended actions is one of the most important capabilities of expert systems (ESs). The nature of the auditing domain suggests that ESs designed for audit applications should provide an explanation facility. There is little empirical evidence, however, that explanation facilities are in fact useful. This paper investigates the impact of explanations on changes in user beliefs toward ES-generated conclusions. Grounded in a theoretical model of argument, the study utilized a simulated expert system to provide three alternative types of ES explanation: trace, justification, and strategy. Ten expert and ten novice auditors performing an analytical review task evaluated the outputs of the system in a laboratory setting. The results indicate that explanation facilities can make a system's advice more agreeable, and hence acceptable, to auditors, and that justification is the most effective type of ES explanation for bringing about changes in auditor attitudes toward the system. In addition, the results suggest that auditors at different levels of expertise may value each explanation type differently.

20.
A 2-D model for evidential reasoning is proposed, in which the belief function of evidence is represented as a belief density function that can be in continuous or discrete form. A vector form of the mutual dependency relationship of the evidence is considered, and a dependency propagation theorem is proved. This robust method can resolve the conflicts resulting either from mutual dependency among pieces of evidence or from structural dependency in an inference network due to the evidence combination order. Belief conjunction, belief combination, and belief propagation procedures, as well as AND/OR operations of an inference network based on the proposed 2-D model, are all presented, followed by examples demonstrating the advantages of this method over conventional methods.
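For context, the conventional one-dimensional, independence-assuming combination that the 2-D model generalizes is Dempster's rule; a small sketch (this is the baseline the paper improves on, not the proposed model itself):

```python
def dempster(m1, m2):
    """Combine two mass functions given as dicts mapping frozenset focal
    elements to masses, assuming the pieces of evidence are independent."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb          # mass lost to contradiction
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

F = frozenset
m1 = {F({"flu"}): 0.6, F({"flu", "cold"}): 0.4}
m2 = {F({"cold"}): 0.5, F({"flu", "cold"}): 0.5}
print(dempster(m1, m2))   # renormalized after discarding 0.3 conflict mass
```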
