Similar Literature
20 similar documents found.
1.
Logic programming provides a model for rule-based reasoning in expert systems. The advantage of this formal model is that it makes available many results from the semantics and proof theory of first-order predicate logic. A disadvantage is that in expert systems one often wants to use, instead of the usual two truth values, an entire continuum of “uncertainties” in between. That is, instead of the usual “qualitative” deduction, a form of “quantitative” deduction is required. We present an approach to generalizing the Tarskian semantics of Horn clause rules to justify a form of quantitative deduction. Each clause receives a numerical attenuation factor. Herbrand interpretations, which are subsets of the Herbrand base, are generalized to subsets which are fuzzy in the sense of Zadeh. We show that as a result the fixpoint method in the semantics of Horn clause rules can be developed in much the same way for the quantitative case. As for proof theory, the interesting phenomenon is that a proof should be viewed as a two-person game. The value of the game turns out to be the truth value of the atomic formula to be proved, evaluated in the minimal fixpoint of the rule set. The analog of the PROLOG interpreter for quantitative deduction becomes a search of the game tree (= proof tree) using the alpha-beta heuristic well known in game theory.
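As a rough illustration of the max/min evaluation sketched in this abstract (not the authors' formulation; the rule set, predicate names, and attenuation values below are invented), the truth value of an atom can be computed as the maximum over matching clauses of the clause's attenuation factor times the minimum of its body values:

```python
# Sketch of quantitative deduction over attenuated Horn clauses.
# Each clause is (head, body, attenuation); "facts" have an empty body.
# The value of an atom is the max over matching clauses of
# attenuation * min(value of each body atom), as in the max/min game tree.

RULES = [
    ("wet_road",  ["rain"],              0.9),   # hypothetical rule set
    ("accident",  ["wet_road", "speed"], 0.7),
    ("rain",      [],                    0.8),   # facts with initial truth values
    ("speed",     [],                    0.6),
]

def value(atom, depth=0, max_depth=20):
    """Truth value of `atom` in the (approximated) minimal fixpoint."""
    if depth > max_depth:          # crude loop guard instead of a full fixpoint computation
        return 0.0
    best = 0.0
    for head, body, att in RULES:
        if head != atom:
            continue
        body_val = min((value(b, depth + 1, max_depth) for b in body), default=1.0)
        best = max(best, att * body_val)
    return best

print(value("accident"))   # 0.7 * min(0.9 * 0.8, 0.6) = 0.42
```

In game-tree terms, the max corresponds to the prover's choice of clause and the min to the opponent's choice of body literal, which is what makes an alpha-beta style search applicable.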

2.
Association rules have been widely used in many application areas to extract new and useful information, expressed in a comprehensible way for decision makers, from raw data. However, raw data may not always be available; it can be distributed across multiple datasets, and the resulting number of association rules to be inspected is then overwhelming. In the light of these observations, we propose meta-association rules, a new framework for mining association rules over previously discovered rules in multiple databases. Meta-association rules are a new tool that conveys new information from the patterns extracted from multiple datasets and gives a “summarized” representation of the most frequent patterns. We propose and compare two different algorithms, based respectively on crisp rules and fuzzy rules, concluding that fuzzy meta-association rules are suitable for incorporating into the meta-mining procedure the quality assessment provided by the rules obtained in the first step of the process, although this consumes more time than the crisp approach. In addition, fuzzy meta-rules give a more manageable set of rules for subsequent analysis, and they allow the use of fuzzy items to express additional knowledge about the original databases. The proposed framework is illustrated with real-life data about crime incidents in the city of Chicago. Issues such as the differences from traditional approaches are discussed using synthetic data.
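A loose sketch of the meta-mining idea in the crisp case (not the paper's algorithm; the database names, rule labels, and thresholds are hypothetical): rules already discovered in each database are treated as items, and a frequent-itemset pass over those item sets yields candidate meta-association rules.

```python
from itertools import combinations
from collections import Counter

# Rules previously mined per database, each identified by a label such as
# "theft->night" (labels and databases are invented for illustration).
rules_per_db = {
    "district_1": {"theft->night", "assault->weekend"},
    "district_2": {"theft->night", "burglary->vacant"},
    "district_3": {"theft->night", "assault->weekend"},
}

min_support = 2  # a meta-pattern must appear in at least 2 databases

# Count single rules and rule pairs across databases (one Apriori-like pass).
pair_counts = Counter()
single_counts = Counter()
for rules in rules_per_db.values():
    for r in rules:
        single_counts[r] += 1
    for pair in combinations(sorted(rules), 2):
        pair_counts[pair] += 1

# Frequent pairs become candidate (crisp) meta-association rules.
for (a, b), supp in pair_counts.items():
    if supp >= min_support:
        conf = supp / single_counts[a]
        print(f"meta-rule: IF {a} holds THEN {b} also tends to hold "
              f"(support={supp}, confidence={conf:.2f})")
```

The fuzzy variant described in the abstract would additionally carry each rule's quality assessment (e.g., its confidence) as a fuzzy membership degree instead of the 0/1 presence used here.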

3.
This paper discusses extracting heuristic control information from logic programs themselves in order to overcome the incompleteness and inefficiency caused by the purely mechanical control strategies of logic language systems. Several concrete heuristic control rules are given and their correctness is proved. Applying these control rules can greatly improve the running efficiency of the system or improve the semantic properties of logic programs. Finally, a heuristic WAM (denoted HWAM) is presented, and examples show that HWAM is more efficient and more complete than the WAM.

4.
Addressing the fuzziness and dynamics of objects in control systems, and building on the theory of Dynamic Fuzzy Sets and Dynamic Fuzzy Logic systems, this paper presents the concepts underlying a DF control reasoning model, such as DF vectors, DF linguistic variables, DF linguistic rules, and DF implication relations. On this basis it investigates DF reasoning methods based on DF linguistic rules, and finally illustrates the application of these concepts and methods with an example.

5.
Abstract

A new AI programming language (called FUZZY) is introduced which provides a number of facilities for efficiently representing and manipulating fuzzy knowledge. A fuzzy associative net is maintained by the system, and procedures with associated “procedure demons” may be defined for the control of fuzzy processes. Such standard AI language features as a pattern-directed data access and procedure invocation mechanism and a backtrack control structure are also available.

This paper examines some general techniques for representing fuzzy knowledge in FUZZY, including the use of the associative net for the explicit representation of fuzzy sets and fuzzy relations, and the use of “deduce procedures” to implicitly define fuzzy sets, logical combinations of fuzzy sets, linguistic hedges, and fuzzy algorithms. The role of inference in a fuzzy environment is also discussed, and a technique for computing fuzzy inferences in FUZZY is examined.

The programming language FUZZY is implemented in LISP, and is currently running on a UNIVAC 1110 computer.

6.
This paper focuses on the techniques used in an NKRL environment (NKRL = Narrative Knowledge Representation Language) to deal with a general problem affecting the so-called “semantic/conceptual annotation” techniques. The latter, which are mainly ontology-based, aim at “annotating” multimedia documents by representing, in some way, the “inner meaning/deep content” of these documents. For documents of sufficient size, the content modeling operations are separately executed on ‘significant fragments’ of the documents, e.g., “sentences” for natural language texts or “segments” (minimal units for story advancement) in a video context. The general problem above then concerns the possibility of collecting all the partial conceptual representations into a global one. This integration operation must, moreover, be carried out in such a way that the meaning of the full document goes beyond the simple addition of the ‘meanings’ conveyed by the single fragments. In this context, NKRL makes use of second order knowledge representation structures, “completive construction” and “binding occurrences”, for collecting within the conceptual annotation of a whole “narrative” the basic building blocks corresponding to the representation of its composing elementary events. These solutions, of a quite general nature, are discussed in some depth in this paper. The paper also includes a short “state of the art” in the annotation domain and some comparisons with the different methodologies proposed in the past for solving the above ‘integration’ problem.

7.
Recently, a new approach to the design of fuzzy control rules was suggested. The method, referred to as fuzzy Lyapunov synthesis, extends classical Lyapunov synthesis to the domain of “computing with words”, and allows the systematic, instead of heuristic, design and analysis of fuzzy controllers given linguistic information about the plant. In this paper, we use fuzzy Lyapunov synthesis to design and analyze the rule-base of a fuzzy scheduler. Here, too, rather than use heuristics, we can derive the fuzzy rule-base systematically. This suggests that the process of deriving the rules can be automated. Our approach may lead to a novel computing with words algorithm: the input is linguistic information concerning the “plant” and the “control” objective, and the output is a suitable fuzzy rule-base.

8.

Natural language processing techniques have recently contributed more and more to analyzing legal documents, which supports the implementation of laws and rules using computers. Previous approaches to representing a legal sentence are often based on logical patterns that illustrate the relations between concepts in the sentence, which often consist of multiple words. Such representations lack semantic information at the word level. In our work, we aim to tackle this shortcoming by representing legal texts in the form of abstract meaning representation (AMR), a graph-based semantic representation that has recently gained a lot of popularity in the NLP community. We present our study in AMR parsing (producing AMR from natural language) and AMR-to-text generation (producing natural language from AMR) specifically for the legal domain. We also introduce JCivilCode, a human-annotated legal AMR dataset which was created and verified by a group of linguistic and legal experts. We conduct an empirical evaluation of various approaches to parsing and generating AMR on our own dataset and show the current challenges. Based on our observations, we propose a domain adaptation method applied in the training phase and decoding phase of a neural AMR-to-text generation model. Our method improves the quality of text generated from AMR graphs compared to the baseline model. (This work is extended from our two previous papers: “An Empirical Evaluation of AMR Parsing for Legal Documents”, published in the Twelfth International Workshop on Juris-informatics (JURISIN) 2018; and “Legal Text Generation from Abstract Meaning Representation”, published in the 32nd International Conference on Legal Knowledge and Information Systems (JURIX) 2019.)


9.
This article describes the design and implementation of the reasoning engine developed for the interpretation of the FLORIAN rule language. A key feature of the language is that it allows the specification of control knowledge using generalized meta-rules. The user can define how to solve conflicts at the object level, at the meta-level, or at any higher level using meta-i-rules. Object-level rules and generalized meta-i-rules share the same rule format. Several examples of meta-rules and higher-level rules are presented using the rule syntax. The architecture and operation of the rule interpreter are analyzed, describing the main algorithms and abstract data types implementing the reasoning engine.

10.
11.
General methods for understanding a natural language based on the intensive use of rewrite rules and on the existence of several cooperating processes are put forward. The choice of Horn-clause logic as the underlying formalism for semantic representations, together with the employment of unification as a pattern-matching procedure and depth-first search with backtracking, was derived from logic-programming ideas, in particular from the use of PROLOG. Several examples are presented to illustrate how these methods work.

12.
Research on a Fuzzy Expert System Based on Colored Petri Nets
Addressing the uncertainty of knowledge representation and the large number of rules in fuzzy expert systems for substation reactive power control, this paper proposes a knowledge representation and rule acquisition method based on fuzzy colored Petri nets. Exploiting the graphical nature of Petri nets, the method distinguishes the different variables of the fuzzy rule base with different colors and represents the same variable appearing in different rules by that variable's color set, thereby building a fuzzy colored Petri net model. Making full use of the properties of colored Petri nets, the reasoning process is studied in detail and a heuristic search strategy based on colored fuzzy Petri nets is proposed. The method is applied to a fuzzy expert system for substation reactive power control; the results show that fuzzy knowledge representation and acquisition based on colored Petri nets is very effective for large, complex substation fuzzy expert control systems.

13.
A method is presented for executing PROLOG programs which avoids almost all unnecessary occur-checks. The method is based on a dynamic classification of the context in which logical variables occur. No static global analysis of the PROLOG program is required to detect the places where an occur-check has to be made. The presented method also has an important side benefit: it considerably cuts down on the number of memory references during the execution of PROLOG programs. Furthermore, in most cases it avoids “trailing” and “untrailing” of unbound variables altogether. Due to this fact the employed method actually speeds up PROLOG execution. The method is discussed in terms of an actual implementation based on the Warren abstract PROLOG instruction set. However, the method should be applicable to other implementation models as well. No assumptions are made with respect to particular hardware.
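For context, the sketch below shows plain unification with the occur-check as an explicit, optional test, i.e. the operation whose cost the described method avoids in most cases. It is an illustration only; the term encoding (variables as '?'-prefixed strings, compound terms as tuples) is ours, not the paper's.

```python
# Plain unification with an explicit, optional occur-check.
# Terms: variables are strings starting with '?', compound terms are tuples
# whose first element is the functor name.

def walk(t, subst):
    while isinstance(t, str) and t.startswith("?") and t in subst:
        t = subst[t]
    return t

def occurs(var, t, subst):
    t = walk(t, subst)
    if t == var:
        return True
    if isinstance(t, tuple):
        return any(occurs(var, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst, occur_check=True):
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        if occur_check and occurs(a, b, subst):
            return None                      # e.g. ?X against f(?X): no finite unifier
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return unify(b, a, subst, occur_check)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst, occur_check)
            if subst is None:
                return None
        return subst
    return None

print(unify("?X", ("f", "?X"), {}))                     # None with the occur-check
print(unify("?X", ("f", "?X"), {}, occur_check=False))  # circular binding, as in most PROLOGs
```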

14.
Petroleum exploration is an economic activity in which many billions of dollars are invested every year. Despite these enormous investments, it is still considered a classical example of decision-making under uncertainty. In this paper, a new hybrid fuzzy-probabilistic methodology is proposed and the implementation of a software tool for assessing the risk of petroleum prospects is described. The methodology is based on a fuzzy-probabilistic representation of uncertain geological knowledge in which the risk can be seen as a stochastic variable whose probability distribution depends on a codified geological argumentation. The risk of each geological factor is calculated as a fuzzy set through a fuzzy system and then associated with a probability interval. Then the risk of the whole prospect is calculated using simulation and fitted to a beta probability distribution. Finally, historical and direct hydrocarbon indicator data are incorporated in the model. The methodology is implemented in a prototype software tool called RCSUEX (“Certainty Representation of the Exploratory Success”). The results show that the method can be applied to systematize the argumentation and to measure the probability of success of a petroleum accumulation discovery.
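A toy version of the simulation step (not the RCSUEX model; the factor names, probability intervals, and independence assumption are invented for illustration): each geological factor's fuzzy assessment is reduced here to a probability interval, a Monte Carlo run multiplies independent factor draws into a prospect success probability, and a beta distribution is fitted by the method of moments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probability intervals derived from the fuzzy assessment of
# each geological factor (source, reservoir, trap, seal).
factor_intervals = {
    "source":    (0.60, 0.85),
    "reservoir": (0.50, 0.80),
    "trap":      (0.40, 0.70),
    "seal":      (0.55, 0.90),
}

n = 100_000
# Draw each factor's chance of success within its interval and multiply,
# assuming independence, to get the prospect's probability of success.
samples = np.ones(n)
for lo, hi in factor_intervals.values():
    samples *= rng.uniform(lo, hi, size=n)

# Fit a beta distribution by the method of moments.
m, v = samples.mean(), samples.var()
k = m * (1.0 - m) / v - 1.0
alpha, beta = m * k, (1.0 - m) * k
print(f"mean risk = {m:.3f}, fitted Beta(alpha={alpha:.2f}, beta={beta:.2f})")
```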

15.
The semantics of PROLOG programs is usually given in terms of the model theory of first-order logic. However, this does not adequately characterize the computational behavior of PROLOG programs. PROLOG implementations typically use a sequential evaluation strategy based on the textual order of clauses and literals in a program, as well as nonlogical features like cut. In this work we develop a denotational semantics that captures the computational behavior of PROLOG. We present a semantics for “cut-free” PROLOG, which is then extended to PROLOG with cut. For each case we develop a congruence proof that relates the semantics to a standard operational interpreter. As an application of our denotational semantics, we show the correctness of some standard “folk” theorems regarding transformations on PROLOG programs.

16.
After having recalled some well-known shortcomings linked with the Semantic Web approach to the creation of (application-oriented) systems of “rules” – e.g., limited expressiveness, adoption of an Open World Assumption (OWA) paradigm, absence of variables in the original definition of OWL – this paper examines the technical solutions successfully used for implementing advanced reasoning systems according to the NKRL methodology. NKRL (Narrative Knowledge Representation Language) is a conceptual meta-model and a Computer Science environment expressly created to deal, in an ‘intelligent’ and complete way, with complex and content-rich non-fictional ‘narrative’ data sources. These include corporate memory documents, news stories, normative and legal texts, medical records, surveillance videos, actuality photos for newspapers and magazines, etc. In this context, we first expound the need to distinguish between “plain/static” and “structured/dynamic” knowledge and to introduce appropriate (and different) knowledge representation structures for these two types of knowledge. In a structured/dynamic context, we then show how the introduction of “functional roles” – associated with the possibility of making use of n-ary structures – allows us to build up highly ‘expressive’ rules whose “atoms” can directly represent complex situations, actions, etc., without being restricted to the use of binary clauses. In an NKRL context, “functional roles” are primitive symbols interpreted as “relations” – like “subject”, “object”, “source”, “beneficiary”, etc. – that link a semantic predicate with its arguments within an n-ary conceptual formula. Functional roles thus contrast with the “semantic roles” that are equated to ordinary concepts like “student”, to be inserted into the “non-sortal” (no direct instances) branch of a traditional ontology.

17.
The article addresses the problem of reasoning under time constraints with incomplete, vague, and uncertain information. It is based on the idea of Variable Precision Logic (VPL), introduced by Michalski and Winston, which deals with both the problem of reasoning with incomplete information subject to time constraints and the problem of reasoning efficiently with exceptions. It offers mechanisms for handling trade-offs between the precision of inferences and the computational efficiency of deriving them. As an extension of Censored Production Rules (CPRs), which exhibit variable precision in which certainty varies while specificity stays constant, the Hierarchical Censored Production Rules (HCPRs) system of knowledge representation proposed by Bharadwaj and Jain exhibits both variable certainty and variable specificity. Fuzzy Censored Production Rules (FCPRs) are obtained by augmenting an ordinary fuzzy conditional statement “if X is A then Y is B” (or A(x) ⇒ B(y) for short) with an exception condition and are written in the form: “if X is A then Y is B unless Z is C” (or A(x) ⇒ B(y) ∥ C(z)). Such rules are employed in situations in which the fuzzy conditional statement “if X is A then Y is B” holds frequently and the exception condition “Z is C” holds rarely. Thus, using a rule of this type we are free to ignore the exception condition when the resources needed to establish its presence are tight or there simply is no information available as to whether it holds or does not hold. The if…then part of the FCPR thus expresses the important information, while the unless part acts only as a switch that changes the polarity of “Y is B” to “Y is not B” when the assertion “Z is C” holds. Our aim is to show how an ordinary fuzzy production rule, after suitable modifications and augmentation with relevant information, becomes a Fuzzy Hierarchical Censored Production Rule (FHCPR), which in turn makes it possible to resolve many of the problems associated with the usual fuzzy production rule systems. Examples are given to demonstrate the behavior of the proposed schemes. © 1996 John Wiley & Sons, Inc.
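One possible minimal reading of the censored rule “if X is A then Y is B unless Z is C” (a sketch under our own simplifying assumptions, not the FCPR/FHCPR formalism itself): when the exception is unchecked or unknown the rule fires as the plain fuzzy conditional, and when the exception is established it merely flips the polarity of the conclusion. The membership degrees and threshold below are invented.

```python
# Toy evaluation of a fuzzy censored production rule
#   "if X is A then Y is B unless Z is C"
# The exception acts as a switch: if it is known to hold, the polarity of the
# conclusion flips to "Y is not B"; if it is unknown (or too costly to check),
# the rule behaves like the plain fuzzy conditional.

def fcpr(a_of_x, c_of_z=None, exception_threshold=0.5):
    """Return (degree, conclusion) for the censored rule above.

    a_of_x: membership degree of X in A.
    c_of_z: membership degree of Z in C, or None when the exception is not checked.
    """
    if c_of_z is not None and c_of_z >= exception_threshold:
        return a_of_x, "Y is not B"     # exception established: switch polarity
    return a_of_x, "Y is B"             # exception ignored, unknown, or rare

print(fcpr(0.8))              # (0.8, 'Y is B')      - exception not checked
print(fcpr(0.8, c_of_z=0.9))  # (0.8, 'Y is not B')  - exception holds
print(fcpr(0.8, c_of_z=0.2))  # (0.8, 'Y is B')      - exception rare/false
```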

18.
An augmented and/or tree representation of logic programs is presented as the basis for an advanced graphical tracing and debugging facility for PROLOG. An extension of our earlier work on “retrospective zooming”, this representation offers several distinct advantages over existing tracing and debugging facilities: (1) it naturally incorporates traditional and/or trees and Byrd box models (call/exit/fail/redo procedural models) as special cases; (2) it can be run in slow-motion, close-up mode for novices or high-speed, long-distance mode for experts with no attendant conceptual change; (3) it serves as the uniform basis for textbook material, video-based teaching material, and an advanced user interface for experienced PROLOG programmers; (4) it tells the truth about clause head matching and deals correctly with the cut. One of the key insights underlying the work is the realization that it is possible to display an execution space of several thousand nodes in a meaningful way on a modern graphics workstation. By enhancing and/or trees to include “status boxes” rather than simple “nodes”, it is possible to display both a long-distance view of execution and the full details of clause-head matching. Graphical “collapsing” techniques enable the model to deal with user-defined abstractions, higher-order predicates such as setof, and definite-clause grammars. The current implementation runs on modern graphics workstations and is written in PROLOG.
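A minimal data structure conveying the “status box” idea (the names and rendering format are ours, not the paper's): each goal gets a box recording its Byrd-model status, and indentation of the rendered boxes gives the long-distance view of the and/or execution tree.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StatusBox:
    """One 'status box' in an augmented and/or execution tree."""
    goal: str                        # textual form of the goal, e.g. "parent(tom, X)"
    status: str = "call"             # Byrd box model status: call, exit, fail, redo
    children: List["StatusBox"] = field(default_factory=list)

    def render(self, indent=0):
        # Long-distance view: one line per box, indentation gives tree depth.
        print(" " * indent + f"[{self.status:<4}] {self.goal}")
        for child in self.children:
            child.render(indent + 2)

# Hypothetical trace fragment.
root = StatusBox("grandparent(tom, Who)", "exit", [
    StatusBox("parent(tom, X)", "exit"),
    StatusBox("parent(X, Who)", "redo"),
])
root.render()
```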

19.
20.
This article presents a new method for learning and tuning a fuzzy logic controller automatically. A reinforcement learning algorithm and a genetic algorithm are used in conjunction with a multilayer neural network model of a fuzzy logic controller, which can automatically generate the fuzzy control rules and refine the membership functions at the same time to optimize the final system's performance. In particular, the self-learning and self-tuning fuzzy logic controller based on genetic algorithms and a reinforcement learning architecture proposed here, called a Stretched Genetic Reinforcement Fuzzy Logic Controller (SGRFLC), can also learn fuzzy logic control rules even when only weak information, such as a binary “success” or “failure” signal, is available. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. It is shown that the system can solve a fairly difficult control learning problem; more concretely, the task is a cart–pole balancing system, in which a pole is hinged to a movable cart to which a continuously variable control force is applied. © 1997 John Wiley & Sons, Inc.
