Found 20 similar documents; search took 0 ms.
1.
International Journal of Computer Mathematics, 2012, 89(2): 67-83
This paper is concerned with an algorithm for identifying an unknown regular language from examples of its members and non-members. The algorithm is based on the model inference algorithm given by Shapiro. In our setting, however, the given first-order language for describing a target logic program has countably many unary predicate symbols q0, q1, q2, …. On the other hand, the oracle that gives information about the unknown regular language to the inference algorithm has no interpretation for predicates other than q0. In such a setting, we cannot directly take advantage of the contradiction backtracing algorithm, one of the parts most important to the efficiency of the model inference algorithm. To overcome this disadvantage, we develop a method for indirectly giving an interpretation to predicates other than q0, based on the idea of using the oracle together with a one-to-one mapping from a set of predicates to a set of strings. We then propose a model inference algorithm for regular languages using this method, and discuss its correctness and time complexity.
2.
Lisp applications need to show a reasonable cost-benefit relationship between the expressiveness they offer and their demand for storage and run-time. Drawbacks in efficiency, apparent in Lisp as a dynamically typed programming language, can be avoided by optimizations. Statically inferred type information can be decisive for the success of these optimizations. This paper describes a practical approach to type inference realized in a module and application compiler for EuLisp. The approach is partly related to Milner-style polymorphic type inference, but differs by describing functions with generic type schemes. Dependencies between argument and result types can be expressed more precisely by using generic type schemes of several lines than by using the common one-line type schemes. Generic type schemes contain types of a refined complementary lattice and bounded type variables. Besides standard and defined types, so-called strategic types (e.g. singleton, zero, number-list) are combined into the type lattice. Local, global and control-flow inference using generic type schemes with refined types generates precise typings of defined functions. Due to module compilation, inferred type schemes of exported functions can be stored in export interfaces, so they may be reused when imported elsewhere. This work was supported by the German Federal Ministry for Research and Technology (BMFT) within the joint project APPLY. The partners in this project are the Christian Albrechts University Kiel, the Fraunhofer Institute for Software Engineering and Systems Engineering (ISST), the German National Research Centre for Computer Science (GMD), and VW-GEDAS.
3.
In this paper we describe a new inference rule, called –-match, which is used for finding set instantiations within an automated reasoning program. We have implemented –-match within a theorem prover and have used the system to prove some non-trivial theorems in mathematics, including Cantor's theorem and the correctness of transfinite induction. While not complete, we believe that –-match is a generally useful inference rule for finding set instantiations. One of the major contributions of the –-match rule is the ability to instantiate a term as an incompletely specified set abstraction, and then subsequently elaborate the identity of this set by considering other subgoals in the proof. This elaboration happens as a consequence of the deduction rules of the prover, and requires no additional machinery in the prover.
4.
5.
The Spin model checker and its specification language Promela have been used extensively in industry and academia to check the logical properties of distributed algorithms and protocols. Model checking with Spin involves reasoning about a system via an abstract Promela specification, thus the technique depends critically on the soundness of this specification. Promela includes a rich set of data types including first-class channels, but the language syntax restricts the declaration of channel types so that it is not generally possible to deduce the complete type of a channel directly from its declaration. We present the design and implementation of Etch, an enhanced type checker for Promela, which uses constraint-based type inference to perform strong type checking of Promela specifications, allowing static detection of errors that Spin would not detect until simulation/verification time, or that Spin may miss completely. We discuss theoretical and practical problems associated with designing a type system and type checker for an existing language, and formalise our approach using a Promela-like calculus. To handle subtyping between base types, we present an extension to a standard unification algorithm to solve a system of equality and subtyping constraints, based on bounded substitutions.
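The core of constraint-based type inference is solving the constraints collected from a specification. As a minimal sketch (not Etch's actual algorithm, which additionally solves subtyping constraints via bounded substitutions), equality constraints over channel types can be solved by standard unification; the type encoding and all names here are illustrative:

```python
# Minimal sketch of solving equality constraints by unification,
# in the spirit of constraint-based type inference (illustrative only;
# Etch additionally handles subtyping constraints, omitted here).

def unify(constraints):
    """Solve a list of equality constraints between types.
    Types: 'int', 'bit', ('chan', t) for a channel carrying t,
    or a type variable such as '?a'. Returns a substitution dict.
    (No occurs check, for brevity.)"""
    subst = {}

    def resolve(t):
        # Follow the substitution chain for a type variable.
        while isinstance(t, str) and t.startswith('?') and t in subst:
            t = subst[t]
        return t

    work = list(constraints)
    while work:
        a, b = work.pop()
        a, b = resolve(a), resolve(b)
        if a == b:
            continue
        if isinstance(a, str) and a.startswith('?'):
            subst[a] = b                      # bind variable
        elif isinstance(b, str) and b.startswith('?'):
            subst[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0]:
            work.extend(zip(a[1:], b[1:]))    # decompose constructors
        else:
            raise TypeError(f"cannot unify {a} with {b}")
    return subst

# A channel of unknown element type later used to carry ints:
s = unify([(('chan', '?a'), ('chan', 'int'))])
```

Here the solver infers that the channel's element type must be `int`; an inconsistent constraint set (e.g. `int` against `bit`) raises a type error, which is the static detection the abstract describes.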
6.
Domain experts typically have detailed knowledge of the concepts that are used in their domain; however, they often lack the technical skills needed to translate that knowledge into model-driven engineering (MDE) idioms and technologies. Flexible or bottom-up modelling has been introduced to assist with the involvement of domain experts by promoting the use of simple drawing tools. In traditional MDE the engineering process starts with the definition of a metamodel which is used for the instantiation of models. In bottom-up MDE, example models are defined at the beginning, letting the domain experts and language engineers focus on expressing the concepts rather than spending time on technical details of the metamodelling infrastructure. The metamodel is then created manually or inferred automatically. The flexibility that bottom-up MDE offers comes at the cost of having nodes in the example models left untyped. As a result, concepts that might be important for the definition of the domain may be ignored, while the example models cannot be adequately re-used in future iterations of the language definition process. In this paper, we propose a novel approach that assists in the inference of the types of untyped model elements using Constraint Programming. We evaluate the proposed approach on a number of example models to identify the performance of the prediction mechanism and the benefits it offers. The reduction in the effort needed to complete the missing types reaches up to 91.45% compared to the scenario where the language engineers had to identify and complete the types without guidance.
7.
To maintain the mobility, safety, and operational reliability of armored vehicles, improving the ability to predict the state of health (SOH) of their lead-acid batteries is essential. This paper combines a genetic algorithm with an adaptive neuro-fuzzy inference system and proposes a GA-ANFIS-based method for predicting the SOH of armored-vehicle batteries, analysing in particular the overall workflow and the training process. Considering the operating environment of armored vehicles, altitude and temperature are introduced as model inputs in addition to depth of discharge and output energy. Experimental results in Matlab show that GA-ANFIS reduces the test-data error by 47.6% compared with ANFIS, and the four-input GA-ANFIS reduces it by 51.2% compared with the two-input GA-ANFIS, verifying the effectiveness of the method.
8.
This paper presents an architecture of the inference machine for a rule-based expert system. The paper, structured around the concept of "inference flow graphs", is aimed at incorporating parallelism in antecedent matching to find the firable rules, as well as firing more than one rule simultaneously whenever required. Through this architecture, the number of comparisons required during the antecedent-matching phase is significantly reduced. The flow of inferencing can also proceed in a pipelined manner, resulting in faster inferences.
9.
Akihiro Yamamoto, New Generation Computing, 1999, 17(1): 99-117
We propose in this paper an inference method called Bottom Generalization for Inductive Logic Programming (ILP, for short). We give an inference procedure based on it, and prove that a hypothesis clause H is derived by the procedure from an example E under a background theory B iff H subsumes E relative to B in Plotkin's sense. The theory B can be any clausal theory, and the example E can be any clause which is not implied by B. The derived hypothesis H is a clause, but is not always definite. The result is proved by defining a declarative semantics for arbitrary consistent clausal theories, and showing that SB-resolution, which was originally introduced by Plotkin, gives their complete procedural semantics. We also show that Bottom Generalization is more powerful than both Jung's method based on the V-operator and Saturant Generalization by Rouveirol, but not than Inverse Entailment by Muggleton. At the ILP '97 workshop we called our inference method "Inverse Entailment," but we have renamed it "Bottom Generalization" because we found that it differs from the original definition of Inverse Entailment. The main part of this work was accomplished while the author was visiting the Artificial Intelligence Group, Department of Computer Science, Technical University Darmstadt, Germany. Akihiro Yamamoto, Dr.: He is an Associate Professor of the Division of Electronics and Information Engineering at Hokkaido University. He received the B.S. degree from Kyoto University in 1985, and the M.S. and Dr.Sci. degrees from Kyushu University in 1987 and 1990 respectively. He was a guest researcher at the Oxford University Computing Laboratory, United Kingdom, from January 1996 to March 1996, and at the Department of Computer Science, Technical University Darmstadt, Germany, from June 1996 to May 1997. His present interests include the application of Logic Programming and Theorem Proving to Machine Learning.
10.
11.
Context: Finite State Machine (FSM) inference from execution traces has received a lot of attention over the past few years. Various approaches have been explored, each with different properties for the resulting models, but the lack of standard benchmarks limits the ability to compare the proposed techniques. Evaluation is usually performed on a few case studies, which is useful for assessing the feasibility of an algorithm on particular cases, but fails to demonstrate effectiveness in a broad context. Consequently, understanding the strengths and weaknesses of inference techniques remains a challenging task. Objective: This paper proposes CARE, a general, approach-independent platform for the intensive evaluation of FSM inference techniques. Method: Grounded in a program specification scheme that provides good control over the expected program structures, it allows the production of large benchmarks with well-identified properties. Results: The CARE platform demonstrates the following features: (1) providing a benchmarking mechanism for FSM inference techniques, (2) allowing analysis of existing techniques w.r.t. a class of programs and/or behaviors, and (3) helping users choose the approach best suited to their objective. Moreover, our extensive experiments on different FSM inference techniques highlight that they do not behave in the same manner on every class of program. Characterizing different classes of programs thus helps in understanding the strengths and weaknesses of the studied techniques. Conclusion: The experiments reported in this paper show example use cases that demonstrate the ability of the platform to generate large and diverse sets of programs, which allows meaningful analysis of inference techniques. The analysis strategies the CARE platform offers open new opportunities for program behavior learning, particularly in conjunction with model checking techniques. The CARE platform is available at http://care.lip6.fr.
12.
Design and Implementation of a Core Fingerprint Recognition Algorithm on DSP. Cited by: 6 (self-citations: 0, other citations: 6)
This article describes in detail how to implement a fingerprint recognition algorithm on a DSP system. In view of the characteristics of embedded systems, it focuses on how to simplify and improve the algorithm, and on mixed-language programming. To address the limited on-chip storage, it discusses memory allocation and designs a scheduling method for the algorithm's program that guarantees both run-time efficiency and the space requirements.
13.
Yasubumi Sakakibara, New Generation Computing, 1990, 7(4): 365-380
In this paper we present a new inductive inference algorithm for a class of logic programs called linear monadic logic programs. It has several unique features not found in Shapiro's Model Inference System. It has been proved that a set of trees is rational if and only if it is computed by a linear monadic logic program, and that a rational set of trees is recognized by a tree automaton. Based on these facts, we can reduce the problem of inductive inference of linear monadic logic programs to the problem of inductive inference of tree automata. Furthermore, several efficient inference algorithms for finite automata have been developed. We extend them to an inference algorithm for tree automata and use it to obtain an efficient inductive inference algorithm for linear monadic logic programs. The correctness and time complexity of our algorithm, and several comparisons with the Model Inference System, are presented.
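The reduction above runs through tree automata. As a minimal illustrative sketch (not the paper's inference algorithm itself, and with a hypothetical toy alphabet), a deterministic bottom-up tree automaton can be executed as follows:

```python
# Sketch of a deterministic bottom-up (frontier-to-root) tree automaton:
# trees are tuples (symbol, child, ...), leaves are (symbol,).
# The transition map delta sends (symbol, child_states...) to a state.

def run(tree, delta):
    """Return the state reached at the root, or None if undefined."""
    symbol, children = tree[0], tree[1:]
    child_states = []
    for child in children:
        s = run(child, delta)
        if s is None:
            return None
        child_states.append(s)
    return delta.get((symbol, *child_states))

def accepts(tree, delta, final):
    """A tree is accepted iff its root state is a final state."""
    return run(tree, delta) in final

# Toy automaton recognizing boolean trees over {and, or, t, f}
# that evaluate to true:
delta = {
    ('t',): 1, ('f',): 0,
    ('and', 1, 1): 1, ('and', 1, 0): 0, ('and', 0, 1): 0, ('and', 0, 0): 0,
    ('or', 1, 1): 1, ('or', 1, 0): 1, ('or', 0, 1): 1, ('or', 0, 0): 0,
}
result = accepts(('and', ('t',), ('or', ('f',), ('t',))), delta, {1})
```

States are assigned bottom-up from the leaves, which is exactly the sense in which a rational set of trees is "recognized" by such an automaton.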
14.
Miguel Bugalho et al., Pattern Recognition, 2005, 38(9): 1457-1467
State merging algorithms have emerged as the solution of choice for the problem of inferring regular grammars from labeled samples, a known NP-complete problem of great importance in the grammatical inference area. These methods derive a small deterministic finite automaton from a set of labeled strings (the training set), by merging parts of the acceptor that corresponds to this training set. Experimental and theoretical evidence has shown that the generalization ability exhibited by the resulting automata is highly correlated with the number of states in the final solution. As originally proposed, state merging algorithms do not perform search. This means that they are fast, but also that they are limited by the quality of the heuristics they use to select the states to be merged. Sub-optimal choices lead to automata that have many more states than needed and exhibit poor generalization ability. In this work, we survey the existing approaches that generalize state merging algorithms by using search to explore the tree that represents the space of possible sequences of state mergings. By using heuristically guided search in this space, many possible state merging sequences can be considered, leading to smaller automata and improved generalization ability, at the expense of increased computation time. We present comparisons of existing algorithms showing that, on widely accepted benchmarks, the quality of the derived solutions is improved by applying this type of search. However, we also point out that existing algorithms are not powerful enough to solve the more complex instances of the problem, leaving open the possibility that better and more powerful approaches need to be designed.
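The acceptor these methods all start from can be sketched as follows: a prefix tree acceptor (PTA) built from the positive samples, whose states the merging phase then collapses subject to consistency with the negatives. This is an illustrative sketch, not any of the surveyed algorithms:

```python
# Sketch: the prefix tree acceptor (PTA) that state-merging algorithms
# start from.  Each distinct prefix of a positive sample becomes a state;
# the merging phase then collapses states while rejecting the negatives.

def prefix_tree_acceptor(positives):
    """Build a PTA as (transitions, accepting) from accepted strings."""
    trans, accepting = {}, set()
    next_state = 1            # state 0 is the root (empty prefix)
    for word in positives:
        state = 0
        for ch in word:
            if (state, ch) not in trans:
                trans[(state, ch)] = next_state
                next_state += 1
            state = trans[(state, ch)]
        accepting.add(state)  # the full sample ends in an accepting state
    return trans, accepting

def accepts(trans, accepting, word):
    """Run the (partial) DFA; missing transitions mean rejection."""
    state = 0
    for ch in word:
        if (state, ch) not in trans:
            return False
        state = trans[(state, ch)]
    return state in accepting

trans, acc = prefix_tree_acceptor(["ab", "abb", "b"])
```

The PTA accepts exactly the training set; every merge sequence explored by the search corresponds to a quotient of this automaton, and smaller quotients tend to generalize better.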
15.
Nelson Baloian, Henning Breuer, Wolfram Luther, Journal of Visual Languages and Computing, 2008, 19(6): 652-674
Software visualization and algorithm animation have been tackled almost exclusively from the visual point of view; that is, representation and control occur through the visual channel. This approach has its limitations. To achieve better comprehension, we deal with multimodal interfaces that extend the interaction facilities of the standard systems for data visualization and algorithm animation. The notion of specific concept keyboards is introduced. As a consequence, modern information and learning systems for algorithm animation are enhanced so that control and interaction take place through appropriate interfaces designed and semi-automatically generated for this special purpose. In this paper, we provide some examples and report on a thorough evaluation showing the relevance of this new approach.
16.
17.
Makoto Kobayashi, Software, 1977, 7(5): 585-594
This paper proposes a set of new program restructuring algorithms which can be used to reorganize programs so as to increase their performance under two typical memory management strategies. The new algorithms are based on a recently proposed program behaviour model called the bounded locality intervals model, which allows us to give a precise definition of the localities of a program. The paging activities of a program restructured with the new algorithms under working-set and global LRU-like memory management strategies are simulated to evaluate the new algorithms. Some of them are shown to have quite satisfactory performance.
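The localities that such restructuring tries to tighten can be illustrated with the classical working-set measure W(t, τ): the set of distinct pages referenced in the last τ references. This is a hypothetical sketch of the measure, not the bounded locality intervals model itself:

```python
# Sketch of the working-set measure: W(t, tau) is the set of distinct
# pages touched in the last tau references.  A restructured program with
# tighter locality keeps more references within fewer pages, so its
# working sets stay smaller under a working-set memory policy.

def working_set_sizes(refs, tau):
    """Working-set size after each reference in a page-reference string."""
    sizes = []
    for t in range(len(refs)):
        window = refs[max(0, t - tau + 1): t + 1]
        sizes.append(len(set(window)))
    return sizes

# Hypothetical reference strings for the same logic laid out two ways:
good_layout = [1, 1, 2, 2, 1, 2]   # related code packed onto pages 1-2
bad_layout  = [1, 3, 5, 1, 3, 5]   # the same references scattered
```

With window τ = 3, the packed layout never needs more than 2 resident pages while the scattered one needs 3, which is the effect the restructuring algorithms aim for.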
18.
International Journal of Computer Mathematics, 2012, 89(1-4): 213-229
The problem of determining a minimum independent dominating set is fundamental to both the theory and applications of graphs. Computationally, it belongs to the class of hard combinatorial optimization problems known as NP-hard. In this paper, we develop a backtracking algorithm and a dynamic programming algorithm to determine a minimum independent dominating set. Computational experience with the backtracking algorithm on more than 1000 randomly generated graphs, ranging from 100 to 200 vertices and from 10% to 60% density, has shown that the algorithm is effective.
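A minimal backtracking sketch for the problem might look like the following (illustrative only; the paper's algorithm uses its own pruning scheme and is evaluated on far larger graphs):

```python
# Sketch of backtracking for a minimum independent dominating set:
# a set of pairwise non-adjacent vertices such that every vertex is
# in the set or adjacent to a member of it.

def min_independent_dominating_set(adj):
    """adj: dict vertex -> set of neighbours.  Returns a smallest set."""
    vertices = sorted(adj)
    best = [None]

    def dominated(chosen):
        return all(v in chosen or adj[v] & chosen for v in vertices)

    def search(i, chosen):
        if best[0] is not None and len(chosen) >= len(best[0]):
            return                          # bound: cannot improve
        if i == len(vertices):
            if dominated(chosen):
                best[0] = set(chosen)
            return
        v = vertices[i]
        if not (adj[v] & chosen):           # keep the set independent
            search(i + 1, chosen | {v})     # branch: include v
        search(i + 1, chosen)               # branch: exclude v

    search(0, set())
    return best[0]

# Path a - b - c: the single vertex b dominates everything.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
mids = min_independent_dominating_set(path)
```

The first leaf reached greedily includes every independent vertex, yielding a maximal independent set (always dominating), so the bound is active for the rest of the search.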
19.
20.
To further improve the accuracy of models built by fuzzy systems, a new fuzzy-system algorithm, ANFIS-HC-QPSO, is proposed. A hybrid fuzzy clustering algorithm partitions the input space of the fuzzy system, and each cluster is fitted with a Gaussian function to produce a membership function, completing the initial identification of the ANFIS premise parameters (the membership-function parameters). A quantum-behaved particle swarm optimization (QPSO) algorithm combined with least squares then optimizes the premise parameters until the stopping condition is met, finally yielding the premise and consequent parameters of the ANFIS and hence a satisfactory fuzzy-system model. Experiments show that, compared with traditional algorithms, ANFIS-HC-QPSO enables the fuzzy system to reach higher accuracy with fewer fuzzy rules.
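The premise-parameter initialisation step described above (one Gaussian membership function fitted per cluster) can be sketched as follows; the clustering itself and the QPSO/least-squares refinement are omitted, and all names are illustrative:

```python
# Sketch of initialising ANFIS premise parameters from clusters:
# each cluster of 1-D input samples yields one Gaussian membership
# function, parameterised by the cluster mean and spread.
import math

def gaussian_mf(x, c, sigma):
    """Membership degree of x in a fuzzy set centred at c, width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def init_membership_functions(clusters):
    """clusters: list of lists of samples -> list of (centre, sigma)."""
    params = []
    for pts in clusters:
        c = sum(pts) / len(pts)
        var = sum((p - c) ** 2 for p in pts) / len(pts)
        params.append((c, max(math.sqrt(var), 1e-6)))  # avoid zero width
    return params

# Two hypothetical clusters found in the input space:
mfs = init_membership_functions([[0.9, 1.0, 1.1], [4.8, 5.0, 5.2]])
```

These (centre, sigma) pairs are what the QPSO stage would then treat as the decision variables to refine against the training error.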