Similar Documents
 20 similar documents found (search time: 15 ms)
1.
One of the main applications of computational techniques to pure mathematics has been the use of computer algebra systems to perform calculations which mathematicians cannot perform by hand. Because the data is produced within the computer algebra system, this becomes an environment for the exploration of new functions, and the data produced is often analysed in order to make conjectures empirically. We add some automation to this discovery process by using the HR theory formation system to make conjectures about Maple functions supplied by the user. HR forms theories by inventing concepts, making conjectures empirically which relate the concepts, and appealing to third-party theorem provers and model generators to prove/disprove the conjectures. It has been used with success in number theory, graph theory and various algebraic domains such as group theory and ring theory. Experience has shown that HR produces too many conjectures which can be easily proven from the definitions of the functions involved. Hence, we use the Otter theorem prover to discard any theorems which can be easily proven, leaving behind the more interesting ones which are empirically plausible but not easily provable. We describe the core functionality of HR which enables it to form a theory, and the additional functionality implemented in order for HR to work with Maple functions. We present two experiments where we have applied HR’s theory formation in number theory. We discuss the modes of operation for the user and provide some of the results produced in this way. We hope to show that using HR, Otter and Maple in this fashion has much potential for the advancement of computer algebra systems.
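For illustration only (this is not the authors' code), the sketch below mimics the filtering step described above: conjectures that hold on the data but that Otter can prove quickly from the definitions are discarded. The helpers `empirically_plausible` and `otter_proves` are hypothetical stand-ins for HR's data checks and for a call to the Otter prover.

```python
def empirically_plausible(conjecture):
    # Placeholder: in HR this checks the conjecture against data computed
    # by Maple; here every conjecture passes, for illustration only.
    return True

def otter_proves(axioms, conjecture, time_limit_s):
    # Placeholder: the real pipeline shells out to Otter with a time limit;
    # here we simply pretend that very short statements are "easy" to prove.
    return len(conjecture) < 20

def filter_conjectures(conjectures, axioms, time_limit_s=10):
    """Keep conjectures that fit the data but are not easily proved."""
    interesting = []
    for conj in conjectures:
        if not empirically_plausible(conj):           # falsified by the data
            continue
        if otter_proves(axioms, conj, time_limit_s):  # provable from definitions
            continue
        interesting.append(conj)                      # plausible but non-trivial
    return interesting

conjectures = ["tau(n) is odd iff n is a square", "1 + 1 = 2"]
print(filter_conjectures(conjectures, axioms=["peano"]))
# keeps the first (an HR-style number-theory observation), drops the trivial one
```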

2.
3.
万新熠  徐轲  曹钦翔 《软件学报》2023,34(8):3549-3573
Discrete mathematics is one of the foundational courses for computer-related majors, and propositional logic, first-order logic and axiomatic set theory are important parts of it. Teaching practice shows that beginners have some difficulty in accurately understanding abstract concepts such as syntax, semantics and reasoning systems. In recent years, some scholars have begun to introduce interactive theorem provers into teaching to help students construct formal proofs and understand logical systems more thoroughly. However, existing theorem provers have a high barrier to entry, and using them directly would increase students' learning burden. In view of this, a prover for ZFC axiomatic set theory aimed at teaching scenarios was developed in Coq. First, the first-order logic reasoning system and ZFC axiomatic set theory were formalized; then, several automated proof tactics for the reasoning rules were developed. Students can use these automated proof tactics to complete formal proofs of theorems in a concise proving environment whose style matches that of a textbook. The tool has been used in teaching the discrete mathematics course for first-year undergraduates; students with no theorem-proving experience were able to quickly complete formal proofs of theorems such as mathematical induction and the Peano arithmetic system, which confirms the practical effectiveness of the tool.

4.
Searching the hypothesis space bounded below by a bottom clause is the basis of several state-of-the-art ILP systems (e.g. Progol, Aleph). These systems use refinement operators together with search heuristics to explore a bounded hypothesis space. It is known that the search space of these systems is limited to a sub-graph of the general subsumption lattice. However, the structure and properties of this sub-graph have not been properly characterised. In this paper we first characterise the hypothesis space considered by the ILP systems which use a bottom clause to constrain the search. In particular, we discuss refinement in Progol as a representative of these ILP systems. Second, we study the lattice structure of this bounded hypothesis space. Third, we give a new analysis of refinement operators, least generalisation and greatest specialisation in the subsumption order relative to a bottom clause. The results of this study are important for a better understanding of the constrained refinement space of ILP systems such as Progol and Aleph, which have proved successful for solving real-world problems (despite being incomplete with respect to the general subsumption order). Moreover, characterising this refinement sub-lattice can lead to more efficient ILP algorithms and operators for searching this particular sub-lattice. For example, it is shown that, unlike for the general subsumption order, efficient least generalisation operators can be designed for the subsumption order relative to a bottom clause.
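As a rough picture of the bounded search space discussed above (a simplification, not Progol's or Aleph's actual refinement operator), the sketch below enumerates clauses whose bodies are subsets of a bottom clause's body literals; the head and literals themselves are made up for the example.

```python
from itertools import combinations

# Hypothetical bottom clause: active(A) :- atom(A,B), bond(B,C), charge(C,neg)
bottom_head = "active(A)"
bottom_body = ["atom(A,B)", "bond(B,C)", "charge(C,neg)"]

def bounded_hypotheses(head, body_literals, max_len=2):
    """Enumerate clauses whose body literals are drawn from the bottom clause,
    i.e. a crude view of the hypothesis space bounded below by that clause."""
    for k in range(max_len + 1):
        for body in combinations(body_literals, k):
            yield f"{head} :- {', '.join(body) if body else 'true'}"

for clause in bounded_hypotheses(bottom_head, bottom_body):
    print(clause)
```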

5.
Hypotheses constructed by inductive logic programming (ILP) systems are finite sets of definite clauses. Top-down ILP systems usually adopt the following greedy clause-at-a-time strategy to construct such a hypothesis: start with the empty set of clauses and repeatedly add the clause that most improves the quality of the set. This paper formulates and analyses an alternative method for constructing hypotheses. The method, called cautious induction, consists of a first stage, which finds a finite set of candidate clauses, and a second stage, which selects a finite subset of these clauses to form a hypothesis. By using a less greedy method in the second stage, cautious induction can find hypotheses of higher quality than can be found with a clause-at-a-time algorithm. We have implemented a top-down, cautious ILP system called CILS. This paper presents CILS and compares it to Progol, a top-down clause-at-a-time ILP system. The sizes of the search spaces confronted by the two systems are analysed and an experiment examines their performance on a series of mutagenesis learning problems. Simon Anthony, BEng.: Simon, perhaps better known as “Mr. Cautious” in Inductive Logic Programming (ILP) circles, completed a BEng in Information Engineering at the University of York in 1995. He remained at York as a research student in the Intelligent Systems Group. Concentrating on ILP, his research interests are Cautious Induction and developing number handling techniques using Constraint Logic Programming. Alan M. Frisch, Ph.D.: He is the Reader in Intelligent Systems at the University of York (UK), and he heads the Intelligent Systems Group in the Department of Computer Science. He was awarded a Ph.D. in Computer Science from the University of Rochester (USA) in 1986 and has held faculty positions at the University of Sussex (UK) and the University of Illinois at Urbana-Champaign (USA). For over 15 years Dr. Frisch has been conducting research on a wide range of topics in the area of automated reasoning, including knowledge retrieval, probabilistic inference, constraint solving, parsing as deduction, inductive logic programming and the integration of constraint solvers into automated deduction systems.
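The two strategies contrasted in this abstract can be pictured with the naive sketch below; it assumes clause coverage is given as a lookup function and is not the CILS or Progol implementation. The made-up example shows the greedy loop needing three clauses where the less greedy subset selection finds two.

```python
from itertools import combinations

def clause_at_a_time(candidates, positives, covers):
    """Greedy strategy: repeatedly add the clause that covers the most
    still-uncovered positive examples (a simplified cover-set loop)."""
    hypothesis, uncovered = [], set(positives)
    while uncovered and candidates:
        best = max(candidates, key=lambda c: len(covers(c) & uncovered))
        if not covers(best) & uncovered:
            break
        hypothesis.append(best)
        uncovered -= covers(best)
    return hypothesis

def cautious_induction(candidates, positives, covers, max_clauses=3):
    """Two-stage strategy sketched naively: stage one keeps all candidates,
    stage two searches small subsets exhaustively and returns the subset
    covering the most positives with the fewest clauses."""
    best, best_score = [], (-1, 0)
    for k in range(1, max_clauses + 1):
        for subset in combinations(candidates, k):
            covered = set().union(*(covers(c) for c in subset)) & set(positives)
            score = (len(covered), -k)
            if score > best_score:
                best, best_score = list(subset), score
    return best

# Tiny made-up example: clauses are names, coverage is a lookup table.
coverage = {"c_top": {1, 2, 3}, "c_mid": {2, 3, 4, 5}, "c_bot": {4, 5, 6}}
covers = lambda c: coverage[c]
print(clause_at_a_time(list(coverage), [1, 2, 3, 4, 5, 6], covers))   # 3 clauses
print(cautious_induction(list(coverage), [1, 2, 3, 4, 5, 6], covers)) # 2 clauses
```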

6.
The classification of mathematical structures plays an important role for research in pure mathematics. It is, however, a meticulous task that can be aided by using automated techniques. Many automated methods concentrate on the quantitative side of classification, like counting isomorphism classes for certain structures with given cardinality. In contrast, we have devised a bootstrapping algorithm that performs qualitative classification by producing classification theorems that describe unique distinguishing properties for isomorphism classes. In order to fully verify the classification it is essential to prove a range of problems, which can become quite challenging for classical automated theorem provers even in the case of relatively small algebraic structures. But since the problems are in a finite domain, employing Boolean satisfiability solving is possible. In this paper we present the application of satisfiability solvers to generate fully verified classification theorems in finite algebra. We explore diverse methods to efficiently encode the arising problems both for Boolean SAT solvers and for solvers with built-in equational theory. We give experimental evidence for their effectiveness, which leads to an improvement of the overall bootstrapping algorithm.

7.
Discrete mathematics is a foundation course for computer-related majors, and propositional logic, first-order logic, and axiomatic set theory are important parts of this course. Teaching practice shows that beginners find it difficult to accurately understand abstract concepts such as syntax, semantics, and reasoning systems. In recent years, some scholars have begun introducing interactive theorem provers into teaching to help students construct formal proofs so that they can understand logic systems more thoroughly. However, directly employing the existing theorem provers will increase students' learning burden since these tools have a high threshold for getting started with them. To address this problem, we develop a prover for Zermelo-Fraenkel set theory with the axiom of choice (ZFC) in Coq for teaching scenarios. Specifically, the first-order logical reasoning system and the axiomatic set theory ZFC are formalized, and several automated proof tactics specific to reasoning rules are then developed. Students can utilize these automated proof tactics to construct formal proofs of theorems in a textbook-style concise proving environment. This tool has been introduced into the teaching of the discrete mathematics course for freshmen. Students with no prior theorem-proving experience can quickly construct formal proofs of theorems including mathematical induction and Peano arithmetic with this tool, which verifies its practical effectiveness.

8.
9.
The acquisition of domain knowledge is one of the important topics in intelligent planning research. Derived rules are a way of representing domain knowledge based on logical reasoning. Building on a combined analysis of action models and derived rules, a strategy for extracting STRIPS domain knowledge based on derived predicates is proposed, and an algorithmic description of the extraction strategy is given. During plan solving, the extracted domain rules can reduce the logical derivation required for derived rules, thereby improving planning efficiency. For any planning domain, using the ext…

10.
Locational reasoning plays an important role in many applications of AI problem-solving systems, yet has remained a relatively unexplored area of research. This paper addresses both theoretical and practical issues relevant to reasoning about locations. We define several theories of location designed for use in various settings, along with a sound and complete belief revision calculus for each that maintains a STRIPS-style database of locational facts. Techniques for the efficient operationalization of the belief revision rules in planning frameworks are presented. These techniques were developed during application of the location theories to several large-scale planning tasks within the Sipe planning framework.

11.
We argue that theorem provers based on domain-dependent knowledge must be able to increase their domain-dependent deductive knowledge if they are to serve as a component of a mathematical reasoning system. The reason for this is that if such systems are not extensible, they would not be able to assimilate and use new deductive knowledge produced by a mathematical reasoning system.

12.
The ways to transform a wide class of machine learning algorithms into processes of plausible reasoning based on known deductive and inductive rules of inference are shown. The employed approach to machine learning problems is based on the concept of a good classification (diagnostic) test for a given set of positive and negative examples. The problem of inferring all good diagnostic tests is that of searching for the best approximations of a given classification (a partition or partitioning) of the given set of examples. The theory of algebraic lattices is used as a mathematical language to construct algorithms for inferring good classification tests. The advantage of the algebraic lattice is that it is given both as a declarative structure, i.e., a structure for knowledge representation, and as a system of dual operations used to generate elements of this structure. In this work, algorithms for inferring good tests are decomposed into subproblems and operations that are the main rules of plausible human inductive and deductive reasoning. The process of plausible reasoning is considered as a sequence of three mental acts: applying a rule of reasoning (inductive or deductive) to obtain a new assertion, refining the boundaries of the reasoning domain, and choosing a new rule of reasoning (deductive or inductive).

13.
《Artificial Intelligence》1986,30(2):117-263
Experimental logic can be viewed as a branch of logic dealing with the actual construction of useful deductive systems and their application to various scientific disciplines. In this paper we describe an experimental deductive system called the SYMbolic EVALuator (i.e. SYMEVAL) which is based on a rather simple, yet startling principle about deduction, namely that deduction is fundamentally a process of replacing expressions by logically equivalent expressions. This principle applies both to logical and domain-dependent axioms and rules. Unlike more well-known logical inference systems which do not satisfy this principle, herein is described a system of logical axioms and rules called the SYMMETRIC LOGIC which is based on this principle. Evidence for this principle is given by proving theorems and performing deduction in the areas of set theory, logic programming, natural language analysis, program verification, automatic complexity analysis, and inductive reasoning.
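A toy illustration of the replacement principle described above (unrelated to SYMEVAL's actual representation): formulas are nested tuples, and two propositional equivalences, double negation and De Morgan's law, are applied as rewrite rules until a fixed point is reached.

```python
def rewrite(expr):
    """Apply equivalences bottom-up until no rule applies."""
    if isinstance(expr, tuple):
        expr = tuple(rewrite(e) for e in expr)
    # double negation:  ~~p  <=>  p
    if isinstance(expr, tuple) and expr[0] == 'not' and \
       isinstance(expr[1], tuple) and expr[1][0] == 'not':
        return rewrite(expr[1][1])
    # De Morgan:  ~(p & q)  <=>  ~p | ~q
    if isinstance(expr, tuple) and expr[0] == 'not' and \
       isinstance(expr[1], tuple) and expr[1][0] == 'and':
        return rewrite(('or', ('not', expr[1][1]), ('not', expr[1][2])))
    return expr

print(rewrite(('not', ('and', ('not', 'p'), 'q'))))   # ('or', 'p', ('not', 'q'))
```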

14.
We propose multicontext systems (MC systems) as a formal framework for the specification of complex reasoning. MC systems provide the ability to structure the specification of “global” reasoning in terms of “local” reasoning subpatterns. Each subpattern is modeled as a deduction in a context, formally defined as an axiomatic formal system. The global reasoning pattern is modeled as a concatenation of contextual deductions via bridge rules, i.e., inference rules that infer a fact in one context from facts asserted in other contexts. Besides the formal framework, in this article we propose a three-layer architecture designed to specify and automatize complex reasoning. At the first level we have object-level contexts (called s-contexts) for domain specifications. Problem-solving principles and, more generally, meta-level knowledge about the application domain are specified in a distinct context, called the Problem-Solving Context (PSC). On top of s-contexts and PSC, we have a further context, called MT, where it is possible to specify strategies to control multicontext reasoning spanning s-contexts and PSC. We show how GETFOL can be used as a computer tool for the implementation of MC systems and for the automatization of multicontext deductions. © 1995 John Wiley & Sons, Inc.
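The bridge-rule mechanism can be pictured with the following sketch, a deliberate simplification rather than GETFOL's machinery: contexts are plain sets of facts, and a bridge rule asserts its conclusion in a target context once its premises hold in the named source contexts. The contexts, facts and rule are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BridgeRule:
    premises: list          # [(context_name, fact), ...] required in source contexts
    target: str             # context in which the conclusion is asserted
    conclusion: str

def saturate(contexts, bridge_rules):
    """Forward-chain bridge rules until no new cross-context fact is added."""
    changed = True
    while changed:
        changed = False
        for rule in bridge_rules:
            if all(fact in contexts[ctx] for ctx, fact in rule.premises) \
               and rule.conclusion not in contexts[rule.target]:
                contexts[rule.target].add(rule.conclusion)
                changed = True
    return contexts

# Hypothetical example: two object-level contexts and a problem-solving context.
contexts = {"s1": {"p(a)"}, "s2": {"q(a)"}, "PSC": set()}
rules = [BridgeRule([("s1", "p(a)"), ("s2", "q(a)")], "PSC", "solved(a)")]
print(saturate(contexts, rules)["PSC"])   # {'solved(a)'}
```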

15.
We describe a flexible approach to automated reasoning, where non-theorems can be automatically altered to produce proved results which are related to the original. This is achieved in the TM system through an interaction of the HR machine learning program, the Otter theorem prover and the Mace model generator. Given a non-theorem, Mace is used to generate examples which support the non-theorem, and examples which falsify it. HR then invents concepts which categorise these examples and TM uses these concepts to modify the original non-theorem into specialised theorems which Otter can prove. The methods employed by TM are inspired by the piecemeal exclusion, strategic withdrawal and counterexample barring methods described in Lakatos's philosophy of mathematics. In addition, TM can also determine which modified theorems are likely to be interesting and which are not. We demonstrate the effectiveness of this approach by modifying non-theorems taken from the TPTP library of first order theorems. We show that, for 98 non-theorems, TM produced meaningful modifications for 81 of them. This work forms part of two larger projects. Firstly, we are working towards a full implementation both of the reasoning and the social interaction notions described by Lakatos. Secondly, we are aiming to show that the combination of reasoning systems such as those used in TM will lead to a new generation of more powerful AI systems.
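Very loosely, the workflow described above might be summarised by the sketch below; `find_models`, `invent_concepts` and `prove` are hypothetical placeholders for Mace, HR and Otter, and the single specialisation scheme shown (restricting the statement to a concept that separates supporting from falsifying models) is only one of the Lakatos-inspired methods mentioned in the abstract.

```python
def modify_non_theorem(axioms, non_theorem, find_models, invent_concepts, prove):
    """Turn a non-theorem into provable specialisations (illustrative sketch)."""
    supporting = find_models(axioms + [non_theorem])              # Mace-style
    falsifying = find_models(axioms + ["~(" + non_theorem + ")"])
    proved = []
    for concept in invent_concepts(supporting, falsifying):       # HR-style
        specialised = f"({concept}) -> ({non_theorem})"
        if prove(axioms, specialised):                            # Otter-style
            proved.append(specialised)
    return proved

# Trivial demo with stand-in callables (everything here is made up):
demo = modify_non_theorem(
    axioms=["group_axioms"],
    non_theorem="x*y = y*x",
    find_models=lambda clauses: ["M1"],               # pretend a model was found
    invent_concepts=lambda sup, fal: ["abelian(G)"],  # pretend a concept was invented
    prove=lambda ax, stmt: True,                      # pretend the prover succeeded
)
print(demo)   # ['(abelian(G)) -> (x*y = y*x)']
```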

16.
Equality is such a fundamental concept in mathematics that, in fact, we seldom explore it in detail, and tend to regard it as trivial. When it is shown to be non-trivial, we are often surprised. As is often the case, the computerization of mathematical computation in computer algebra systems on the one hand, and mathematical reasoning in theorem provers on the other hand, forces us to explore the issue of equality in greater detail.

17.
Reasoning algorithms for a geometric automated reasoning platform that can be continuously extended by users   Total citations: 1 (self-citations: 0, citations by others: 1)
郑焕  张景中 《计算机应用》2011,31(8):2101-2104
Existing geometry theorem provers are not extensible in a sustainable way. This paper proposes a knowledge representation with a general structure and a reasoning algorithm that handles all rules uniformly, and gives a preliminary implementation of a geometric automated reasoning platform that can be continuously extended by its users. The platform allows users to add geometric knowledge, such as geometric objects, predicates and rules, and it can combine several reasoning algorithms, such as forward-chaining search and part of the area method, which makes it better suited to geometry teaching.

18.
Inductive logic programming (ILP) induces concepts from a set of positive examples, a set of negative examples, and background knowledge. ILP has been applied to tasks such as natural language processing, finite element mesh design, network mining, robotics, and drug discovery. These data sets usually contain numerical and multivalued categorical attributes; however, only a few relational learning systems are capable of handling them in an efficient way. In this paper, we present an evolutionary approach, called Grouping and Discretization for Enriching the Background Knowledge (GDEBaK), to deal with numerical and multivalued categorical attributes in ILP. This method uses evolutionary operators to create and test numerical splits and subsets of categorical values in accordance with a fitness function. The best subintervals and subsets are added to the background knowledge before constructing candidate hypotheses. We implemented GDEBaK embedded in Aleph and compared it to lazy discretization in Aleph and discretization in Top-down Induction of Logical Decision Trees (TILDE) systems. The results obtained showed that our method improves accuracy and reduces the number of rules in most cases. Finally, we discuss these results and possible lines for future work.
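A very rough sketch of the evolutionary split search described above (illustrative only; GDEBaK's actual operators, encoding and fitness function are not specified here): a population of candidate numeric thresholds is mutated and selected by a simple separation score, and the winning split is emitted as a background-knowledge predicate. All names and numbers are made up.

```python
import random

def fitness(threshold, positives, negatives):
    """Score a split by how cleanly it separates positive from negative values."""
    tp = sum(v <= threshold for v in positives)
    fp = sum(v <= threshold for v in negatives)
    return tp - fp

def evolve_split(positives, negatives, generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    lo, hi = min(positives + negatives), max(positives + negatives)
    population = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda t: fitness(t, positives, negatives), reverse=True)
        parents = population[: pop_size // 2]
        children = [p + rng.gauss(0, (hi - lo) * 0.05) for p in parents]  # mutation
        population = parents + children
    return max(population, key=lambda t: fitness(t, positives, negatives))

best = evolve_split([1.0, 1.2, 1.4], [2.0, 2.5, 3.0])
print(f"add to background knowledge: below_threshold(X) :- value(X) =< {best:.2f}")
```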

19.
Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly by manipulating first-order rules or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by the ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph's while being generally faster. BCP achieved a statistically significant improvement in accuracy in comparison with RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90% of the features can be achieved with a small loss of accuracy.
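A minimal sketch of the BCP idea as described above, assuming the bottom clauses have already been computed by an ILP engine such as Aleph (the bottom-clause bodies shown here are invented for the example): the union of body literals defines the feature space, and each example becomes a binary vector marking which literals occur in its own bottom clause.

```python
def bcp_vectors(bottom_clause_bodies):
    """Map bottom-clause bodies (sets of literals) to binary feature vectors."""
    features = sorted({lit for body in bottom_clause_bodies for lit in body})
    vectors = [[1 if f in body else 0 for f in features] for body in bottom_clause_bodies]
    return features, vectors

examples = [
    {"atom(A,c)", "bond(A,B)"},       # hypothetical bottom-clause body of example 1
    {"atom(A,c)", "charge(A,neg)"},   # hypothetical bottom-clause body of example 2
]
features, vectors = bcp_vectors(examples)
print(features)   # ['atom(A,c)', 'bond(A,B)', 'charge(A,neg)']
print(vectors)    # [[1, 1, 0], [1, 0, 1]]
```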

20.
It is widely accepted that spatial reasoning plays a central role in artificial intelligence, for it has a wide variety of potential applications, e.g., in robotics, geographical information systems, and medical analysis and diagnosis. While spatial reasoning has been extensively studied at the algebraic level, modal logics for spatial reasoning have received less attention in the literature. In this paper we propose a new modal logic, called spatial propositional neighborhood logic (SpPNL for short) for spatial reasoning through directional relations. We study the expressive power of SpPNL, we show that it is able to express meaningful spatial statements, we prove a representation theorem for abstract spatial frames, and we devise a (non-terminating) sound and complete tableaux-based deduction system for it. Finally, we compare SpPNL with the well-known algebraic spatial reasoning system called rectangle algebra.
