Similar Documents
20 similar documents found.
1.
Imperfect domain theories include incomplete, incorrect, inconsistent, and intractable theories, and they limit the learning ability of explanation-based learning systems. Corresponding remedies are proposed, including partial explanation, error detection and correction, experimentation, and approximate theories. Finally, an integrated learning approach is proposed for domain theories that contain several kinds of imperfection at once.

2.
叶风, 权光日, 王熙照. 《计算机学报》, 1999, 22(12): 1233-1238.
A resolution-based specialization theory that is consistent with respect to the background knowledge and the examples is proposed. The theory gives a resolution-based construction of maximally general specialization hypotheses and can serve as a general framework for theory specialization under implication. Building on this theory, the concept of k-general specialization is introduced to address the computability of specialization, and a corresponding specialization algorithm is given. Experiments show that the theory and the algorithm perform first-order theory specialization correctly and efficiently.

3.
Automated Refinement of First-Order Horn-Clause Domain Theories
Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, FORTE (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. FORTE uses these techniques within a hill-climbing framework, guided by a global heuristic. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. FORTE is demonstrated in several domains, including logic programming and qualitative modelling.
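To make the hill-climbing control loop described in the abstract concrete, the following is a minimal illustrative sketch in Python. It is not FORTE's actual code: the `Example` record, the `theory.covers` interface, the accuracy heuristic, and the revision operators are hypothetical stand-ins for the components the abstract names (error identification, an operator library, and a global heuristic).

```python
from collections import namedtuple

# Hypothetical example record: a set of ground facts plus a boolean label.
Example = namedtuple("Example", ["facts", "label"])

def accuracy(theory, examples):
    """Stand-in global heuristic: fraction of examples classified correctly.
    Assumes a hypothetical theory object exposing covers(facts) -> bool."""
    return sum(1 for ex in examples if theory.covers(ex.facts) == ex.label) / len(examples)

def refine(theory, examples, operators):
    """Greedy hill climbing over candidate revisions, in the spirit of the
    loop described above: propose revisions, keep the best one, and stop
    when no revision improves the heuristic."""
    best_score = accuracy(theory, examples)
    while True:
        # Each operator inspects the theory and the examples and returns
        # zero or more revised candidate theories.
        candidates = [rev for op in operators for rev in op(theory, examples)]
        if not candidates:
            return theory
        best = max(candidates, key=lambda t: accuracy(t, examples))
        if accuracy(best, examples) <= best_score:
            return theory  # no candidate improves the heuristic
        theory, best_score = best, accuracy(best, examples)
```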

4.
周光明. 《计算机工程》, 2005, 31(17): 144-145, 148.
This paper describes the architecture and techniques of IFLS, an explanation-based learning system. IFLS overcomes two problems of traditional explanation-based learning: the requirement that the learner possess a perfect domain theory, and the lack of attention to the utility of the learned knowledge.

5.
6.
Research on updating knowledge bases expressed as logic programs has focused on handling conflicts within the knowledge base, but at the cost of rapid growth of the rule base with each update. This paper proposes a "revised logic-program knowledge base update method", based on revision programs, a formal approach for regulating knowledge base updates. The method minimizes the growth of the rule base during updates, avoids redundant work and the loss of knowledge base information, and supports both "replacement updates" and "enrichment updates" at the same time.

7.
In a broad sense, logic is the field of formal languages for knowledge and truth that have a formal semantics. It tends to be difficult to give a narrower definition because very different kinds of logics exist. One of the most fundamental contrasts is between the different methods of assigning semantics. Here two classes can be distinguished: model theoretical semantics based on a foundation of mathematics such as set theory, and proof theoretical semantics based on an inference system possibly formulated within a type theory. Logical frameworks have been developed to cope with the variety of available logics, unifying the underlying ontological notions and providing a meta-theory to reason abstractly about logics. While these have been very successful, they have so far focused on either model or proof theoretical semantics. We contribute to a unified framework by showing how the type/proof theoretical Edinburgh Logical Framework (LF) can be applied to the representation of model theoretical logics. We give a comprehensive formal representation of first-order logic, covering both its proof and its model theoretical semantics as well as its soundness in LF. For the model theory, we have to represent the mathematical foundation itself in LF, and we provide two solutions for that. Firstly, we give a meta-language that is strong enough to represent the model theory while being simple enough to be treated as a fragment of untyped set theory. Secondly, we represent Zermelo-Fraenkel set theory and show how it subsumes our meta-language. Specific models are represented as LF morphisms. All representations are given in and mechanically verified by the Twelf implementation of LF. Moreover, we use the Twelf module system to treat all connectives and quantifiers independently. Thus, individual connectives are available for reuse when representing other logics, and we obtain the first version of a feature library from which logics can be pieced together. Our results and methods are not restricted to first-order logic and scale to a wide variety of logical systems, thus demonstrating the feasibility of comprehensively formalizing large scale representation theorems in a logical framework.

8.
9.
Learning complex action models with quantifiers and logical implications
Automated planning requires action models described using languages such as the Planning Domain Definition Language (PDDL) as input, but building action models from scratch is a very difficult and time-consuming task, even for experts. This is because it is difficult to formally describe all conditions and changes, reflected in the preconditions and effects of action models. In the past, there have been algorithms that can automatically learn simple action models from plan traces. However, there are many cases in the real world where we need more complicated expressions based on universal and existential quantifiers, as well as logical implications in action models to precisely describe the underlying mechanisms of the actions. Such complex action models cannot be learned using many previous algorithms. In this article, we present a novel algorithm called LAMP (Learning Action Models from Plan traces), to learn action models with quantifiers and logical implications from a set of observed plan traces with only partially observed intermediate state information. The LAMP algorithm generates candidate formulas that are passed to a Markov Logic Network (MLN) for selecting the most likely subsets of candidate formulas. The selected subset of formulas is then transformed into learned action models, which can then be tweaked by domain experts to arrive at the final models. We evaluate our approach in four planning domains to demonstrate that LAMP is effective in learning complex action models. We also analyze the human effort saved by using LAMP in helping to create action models through a user study. Finally, we apply LAMP to a real-world application domain for software requirement engineering to help the engineers acquire software requirements and show that LAMP can indeed help experts a great deal in real-world knowledge-engineering applications.
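As an illustration of the kind of expression the abstract refers to (our own hypothetical example, not one taken from the paper), a PDDL-style action with a universally quantified, conditional effect could be written as:

```latex
% Hypothetical action model fragment with a quantified, conditional effect.
\mathit{move}(b, \mathit{from}, \mathit{to}):\quad
\text{precondition: } \mathit{at}(b, \mathit{from}), \qquad
\text{effect: } \mathit{at}(b, \mathit{to}) \wedge
\forall x\, \bigl(\mathit{in}(x, b) \rightarrow \mathit{at}(x, \mathit{to})\bigr)
```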

10.
We study several complexity parameters for first-order formulas and their suitability for first-order learning models. We show that the standard notion of size is not captured by sets of parameters that are used in the literature and thus they cannot give a complete characterization in terms of learnability with polynomial resources. We then identify an alternative notion of size and a simple set of parameters that are useful for first-order Horn expressions. These parameters are the number of clauses in the expression, the maximum number of distinct terms in a clause, and the maximum number of literals in a clause. Matching lower bounds derived using the Vapnik-Chervonenkis dimension complete the picture showing that these parameters are indeed crucial. This work has been partly supported by NSF Grant IIS-0099446. A preliminary version of this paper appeared in the proceedings of the conference on Inductive Logic Programming 2003. Most of this work was done while M.A. was at Tufts University. Editors: Tamás Horváth and Akihiro Yamamoto
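A small illustrative example (ours, not from the paper) of how these parameters are read off a Horn expression: the clause below has three literals and four distinct terms, namely $x$, $y$, $f(x)$, and $g(y)$; for a whole expression one additionally records the number of clauses.

```latex
p(x) \wedge q(f(x), y) \rightarrow r(g(y))
```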

11.
The evolution of logic programming semantics has included the introduction of a new explicit form of negation, besides the older implicit (or default) negation typical of logic programming. The richer language has been shown adequate for a spate of knowledge representation and reasoning forms. The widespread use of such extended programs requires the definition of a correct top-down querying mechanism, much as for Prolog w.r.t. normal programs. One purpose of this paper is to present and exploit an SLDNF-like derivation procedure, SLX, for programs with explicit negation under well-founded semantics (WFSX) and prove its soundness and completeness. (Its soundness w.r.t. the answer-sets semantics is also shown.) Our choice of WFSX as the base semantics is justified by the structural properties it enjoys, which are paramount for top-down query evaluation. Of course, introducing explicit negation requires dealing with contradiction. Consequently, we allow for contradiction to appear, and show moreover how it can be removed by freely changing the truth values of some subset of a set of predefined revisable literals. To achieve this, we introduce a paraconsistent version of WFSX, WFSXp, that allows contradictions and for which our SLX top-down procedure is proven correct as well. This procedure can be used to detect the existence of pairs of complementary literals in WFSXp simply by detecting the violation of integrity rules f ← L, ¬L introduced for each L in the language of the program. Furthermore, integrity constraints of a more general form are allowed, whose violation can likewise be detected by SLX. Removal of contradiction or integrity violation is accomplished by a variant of the SLX procedure that collects, in a formula, the alternative combinations of revisable literals' truth values that ensure the said removal. The formulas, after simplification, can then be satisfied by a number of truth-value changes in the revisable literals, among true, false, and undefined. A notion of minimal change is defined as well that establishes a closeness relation between a program and its revisions. Forthwith, the changes can be enforced by introducing or deleting program rules for the revisable literals. To illustrate the usefulness and originality of our framework, we applied it to obtain a novel logic programming approach, and results, in declarative debugging and model-based diagnosis problems.

12.
This note serves three purposes: (i) we provide a self-contained exposition of the fact that conjunctive queries are not efficiently learnable in the Probably-Approximately-Correct (PAC) model, paying clear attention to the complicating fact that this concept class lacks the polynomial-size fitting property, a property that is tacitly assumed in much of the computational learning theory literature; (ii) we establish a strong negative PAC learnability result that applies to many restricted classes of conjunctive queries (CQs), including acyclic CQs for a wide range of notions of acyclicity; (iii) we show that CQs (and UCQs) are efficiently PAC learnable with membership queries.

13.
ProbLog is a recently introduced probabilistic extension of Prolog (De Raedt et al. in Proceedings of the 20th international joint conference on artificial intelligence, pp. 2468–2473, 2007). A ProbLog program defines a distribution over logic programs by specifying for each clause the probability that it belongs to a randomly sampled program, and these probabilities are mutually independent. The semantics of ProbLog is then defined by the success probability of a query in a randomly sampled program. This paper introduces the theory compression task for ProbLog, which consists of selecting that subset of clauses of a given ProbLog program that maximizes the likelihood w.r.t. a set of positive and negative examples. Experiments in the context of discovering links in real biological networks demonstrate the practical applicability of the approach. Editors: Stephen Muggleton, Ramon Otero, Simon Colton.
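In the ProbLog semantics sketched above, each clause $c_i$ of the program $T$ (with clause set $L_T$) carries a probability $p_i$, a subprogram $L \subseteq L_T$ is sampled by including each clause independently, and the success probability of a query $q$ is the total probability of the sampled subprograms that entail it (notation ours, following the cited De Raedt et al. paper):

```latex
P(L \mid T) \;=\; \prod_{c_i \in L} p_i \prod_{c_i \in L_T \setminus L} (1 - p_i),
\qquad
P_s(q \mid T) \;=\; \sum_{\substack{L \subseteq L_T \\ L \models q}} P(L \mid T)
```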

14.
Inductive logic programming (ILP) involves the synthesis of logic programs from examples. In terms of scientific theory formation, ILP systems define observational predicates in terms of a set of theoretical predicates. However, certain basic theorems indicate that with an inadequate theoretical vocabulary this is not always possible. Predicate invention is the augmentation of a given theoretical vocabulary to allow finite axiomatization of the observational predicates. New theoretical predicates need to be chosen from a well-defined universe of such predicates. In this paper a partial order of utilization is described over such a universe. This ordering is a special case of a logical translation. The notion of utilization allows the definition of an equivalence relationship over new predicates. In a manner analogous to Plotkin, clause refinement is defined relative to given background knowledge and a universe of new predicates. It is shown that relative least clause refinement is defined and unique whenever there exists a relative least general generalization of a set of clauses. Results of a preliminary implementation of this approach are given.

15.
The preliminary design or reconfiguration of modular manufacturing lines is addressed. The modules are multi-spindle units. Each unit executes a subset of operations. The set of all available spindle units and their costs are known. The problem is to select the right spindle units and to arrange them into linear workstations. The objective is to design such a line respecting technological constraints and minimizing the investment cost. In this paper, some efficient formulations of linear integer programming (IP) models and specific techniques to reduce the calculation time are proposed. Experiments were carried out which highlight the performance of the models.
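As a rough sketch of the flavor of such IP formulations (a generic set-covering skeleton, not the models proposed in the paper), let $x_u \in \{0,1\}$ indicate whether spindle unit $u$ of cost $c_u$ is selected, and let $a_{ou} = 1$ if unit $u$ can execute operation $o$; the technological precedence and workstation-assignment constraints of the actual models are omitted here:

```latex
\min \sum_{u} c_u\, x_u
\quad \text{s.t.} \quad
\sum_{u} a_{ou}\, x_u \ge 1 \;\; \text{for every required operation } o,
\qquad x_u \in \{0, 1\}
```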

16.
I show that there is a common order-theoretic structure underlying many of the models for representing beliefs in the literature. After identifying this structure, and studying it in some detail, I argue that it is useful. On the one hand, it can be used to study the relationships between several models for representing beliefs, and I show in particular that the model based on classical propositional logic can be embedded in that based on the theory of coherent lower previsions. On the other hand, it can be used to generalise the coherentist study of belief dynamics (belief expansion and revision) by using an abstract order-theoretic definition of the belief spaces where the dynamics of expansion and revision take place. Interestingly, many of the existing results for expansion and revision in the context of classical propositional logic can still be proven in this much more abstract setting, and therefore remain valid for many other belief models, such as those based on imprecise probabilities.

17.
In this paper, we propose a novel class of wrappers (logic wrappers) inspired by the logic programming paradigm. The developed logic wrappers (L-wrappers) have declarative semantics, and therefore: (i) their specification is decoupled from their implementation, and (ii) they can be generated using inductive logic programming. We also define a convenient way of mapping L-wrappers to XSLT for efficient processing using available XSLT processing engines.

18.
Goldsmith, Judy; Sloan, Robert H.; Turán, György. Machine Learning, 2002, 47(2-3): 257-295.
The theory revision, or concept revision, problem is to correct a given, roughly correct concept. This problem is considered here in the model of learning with equivalence and membership queries. A revision algorithm is considered efficient if the number of queries it makes is polynomial in the revision distance between the initial theory and the target theory, and polylogarithmic in the number of variables and the size of the initial theory. The revision distance is the minimal number of syntactic revision operations, such as the deletion or addition of literals, needed to obtain the target theory from the initial theory. Efficient revision algorithms are given for three classes of disjunctive normal form expressions: monotone k-DNF, monotone m-term DNF and unate two-term DNF. A negative result shows that some monotone DNF formulas are hard to revise.
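For illustration (our example, not the paper's): under the syntactic operations of deleting and adding literals, revising the initial monotone DNF below into the target takes one deletion ($x_2$) and one addition ($x_5$), so the revision distance is 2; an efficient revision algorithm is then allowed a number of queries polynomial in this distance and polylogarithmic in the number of variables and the size of the initial theory.

```latex
\varphi_{\text{initial}} = x_1 x_2 \vee x_3 x_4
\quad\longrightarrow\quad
\psi_{\text{target}} = x_1 \vee x_3 x_4 x_5
```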

19.
Khardon, Roni. Machine Learning, 1999, 37(3): 241-275.
The problem of learning universally quantified function-free first-order Horn expressions is studied. Several models of learning from equivalence and membership queries are considered, including the model where interpretations are examples (Learning from Interpretations), the model where clauses are examples (Learning from Entailment), models where extensional or intensional background knowledge is given to the learner (as done in Inductive Logic Programming), and the model where the reasoning performance of the learner rather than identification is of interest (Learning to Reason). We present learning algorithms for all these tasks for the class of universally quantified function-free Horn expressions. The algorithms are polynomial in the number of predicate symbols in the language and the number of clauses in the target Horn expression but exponential in the arity of predicates and the number of universally quantified variables. We also provide lower bounds for these tasks by way of characterising the VC-dimension of this class of expressions. The exponential dependence on the number of variables is the main gap between the lower and upper bounds.

20.
We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches, abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM), to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples.
