Similar Documents (20 results)
1.
Static value analysis is a classical approach for verifying programs with floating-point computations. Value analysis mainly relies on abstract interpretation and over-approximates the possible values of program variables. State-of-the-art tools may, however, compute over-approximations that are rather coarse for some very common program expressions. In this paper, we show that constraint solvers can significantly refine the approximations computed with abstract interpretation tools. More precisely, we introduce a hybrid approach combining abstract interpretation and constraint programming techniques in a single static and automatic analysis. This hybrid approach benefits from the strong points of both abstract interpretation and constraint programming, and is thus more effective than static analyzers and constraint solvers used separately. We compared the efficiency of the system we developed, named rAiCp, with state-of-the-art static analyzers: rAiCp produces substantially more precise approximations and is able to check program properties on both academic and industrial benchmarks.
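As a rough illustration of how constraint reasoning can tighten an abstract-interpretation result, here is a minimal Python sketch of an interval domain refined by backward constraint propagation. The interval arithmetic and the toy program fragment are assumptions chosen for illustration; this is not the rAiCp implementation.

```python
# Minimal sketch: forward interval analysis over-approximates z = x + y, then
# constraint-style backward propagation on the same equation tightens x and y.
# Purely illustrative toy code.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def add(a, b):   # abstract addition: over-approximates x + y
    return Interval(a.lo + b.lo, a.hi + b.hi)

def meet(a, b):  # intersection of two intervals
    return Interval(max(a.lo, b.lo), min(a.hi, b.hi))

# Program fragment:  z = x + y;  assume(z <= 5)   with x, y in [0, 10]
x, y = Interval(0, 10), Interval(0, 10)
z = meet(add(x, y), Interval(float("-inf"), 5))   # forward pass: z in [0, 5]

# Backward propagation on z = x + y, as a constraint solver would do:
x = meet(x, Interval(z.lo - y.hi, z.hi - y.lo))   # x refined to [0, 5]
y = meet(y, Interval(z.lo - x.hi, z.hi - x.lo))   # y refined to [0, 5]
print(x, y, z)                                    # [0, 5] [0, 5] [0, 5]
```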

2.
Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it brings the advantages of logic programming based knowledge representation techniques to constraint programming, and because implementation methods for the stable model semantics for ground (variable-free) programs have advanced significantly in recent years. For a program with variables, these methods need a grounding procedure for generating a variable-free program. As a practical approach to the grounding problem, a subclass of logic programs, domain-restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating built-in predicates and functions often needed in applications. It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems, but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain-restricted programs has been developed. It extends a previous implementation of the stable model semantics, the Smodels system, and is publicly available; it contains, for example, built-in integer arithmetic integrated into stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported.
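For orientation, the following is a tiny guess-and-check sketch of the stable model semantics for a ground normal program: compute the Gelfond-Lifschitz reduct of each candidate atom set and test whether the least model of the reduct equals the candidate. The three-rule program is an illustrative assumption, and the code is not the Smodels implementation.

```python
from itertools import chain, combinations

# Rules are (head, positive_body, negative_body) over ground atoms.
rules = [
    ("p", ["q"], []),   # p :- q.
    ("q", [], ["r"]),   # q :- not r.
    ("r", [], ["q"]),   # r :- not q.
]
atoms = {"p", "q", "r"}

def least_model(positive_rules):
    """Least model of a negation-free program by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body, _ in positive_rules:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body intersects the
    # candidate, then delete the negative literals from the remaining rules.
    reduct = [(h, b, []) for h, b, n in rules if not (set(n) & candidate)]
    return least_model(reduct) == candidate

for guess in chain.from_iterable(combinations(sorted(atoms), k)
                                 for k in range(len(atoms) + 1)):
    if is_stable(set(guess)):
        print("stable model:", set(guess))
# Finds the two stable models of the program: {'r'} and {'p', 'q'}.
```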

3.
We extend logic programming to deal with default reasoning by allowing the explicit representation of exceptions in addition to general rules. To formalise this extension, we modify the answer set semantics of Gelfond and Lifschitz, which allows both classical negation and negation as failure. We also propose a transformation which eliminates exceptions by using negation as failure. The transformed program can be implemented by standard logic programming methods, such as SLDNF. The explicit representation of rules and exceptions has the virtue of greater naturalness of expression; the transformed program, however, is easier to implement.
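To make the idea concrete, here is a toy sketch of compiling a general rule with an explicit exception into a single rule that blocks the exception via negation as failure. This is my own illustrative encoding, not the paper's formal transformation, and the flies/bird/penguin predicates are the standard textbook example rather than anything taken from the paper.

```python
# Toy illustration: a general rule plus explicit exceptions is compiled into
# one rule whose body rules out the exceptional cases with negation as failure.

def compile_rule(head, body, exceptions):
    """Render 'head <- body, unless any exception applies' as a NAF rule."""
    literals = body + [f"not {e}" for e in exceptions]
    return f"{head} :- {', '.join(literals)}."

# General rule: birds fly.  Exception: penguins do not fly.
print(compile_rule("flies(X)", ["bird(X)"], ["penguin(X)"]))
# Output: flies(X) :- bird(X), not penguin(X).
```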

4.
In the current practice of Answer Set Programming (ASP), evaluable functions are represented as special kinds of relations. This often makes the resulting program unnecessarily large when instantiated over a large domain. The extra constraints needed to enforce the relation as a function also make the logic program less transparent. In this paper, we consider adding evaluable functions to answer set logic programs. The class of logic programs that we consider here is that of weight constraint programs, which are widely used in ASP. We propose an answer set semantics for these extended weight constraint programs and define loop completion to characterize the semantics. Computationally, we provide a translation from loop completions of these programs to instances of the Constraint Satisfaction Problem (CSP) and use off-the-shelf CSP solvers to compute the answer sets of these programs. A main advantage of this approach is that global constraints implemented in such CSP solvers become available to ASP. The approach also provides a new encoding for CSP problems in the style of weight constraint programs. We have implemented a prototype system based on these results, and our experiments show that it competes well with state-of-the-art ASP solvers. In addition, we illustrate the utility of global constraints in the ASP context.
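As background on why global constraints are attractive, here is a minimal hand-rolled finite-domain CSP sketch in Python. The instance and the solver are illustrative assumptions, not the paper's translation from loop completions; the point is simply that a single all-different check over the assignment stands in for a quadratic number of pairwise disequality constraints.

```python
# Plain chronological backtracking over finite domains, with an
# all-different "global" constraint checked on partial assignments.

def all_different(assignment):
    vals = list(assignment.values())
    return len(vals) == len(set(vals))

def solve(variables, domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = solve(variables, domains, constraints, assignment)
            if result:
                return result
        del assignment[var]
    return None

# Toy instance: three tasks need pairwise-distinct time slots,
# and task "a" must run before task "c".
variables = ["a", "b", "c"]
domains = {v: [1, 2, 3] for v in variables}
constraints = [
    all_different,
    lambda asg: "a" not in asg or "c" not in asg or asg["a"] < asg["c"],
]
print(solve(variables, domains, constraints))   # e.g. {'a': 1, 'b': 2, 'c': 3}
```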

5.
The concept of mixed computation is briefly described. An equivalence criterion for deterministic multitape automata is established in terms of mixed computations. Some considerations concerning adaptive properties of programs are presented. Translated from Kibernetika, No. 1, pp. 28–30, January–February, 1990.

6.
It is well known that for many non-deterministic programming languages there is no continuous fully abstract fixpoint semantics. This is usually attributed to “problems with continuity”, that is, the assumption that the semantic functions should be continuous supposedly plays a role in the difficulties of giving a fully abstract fixpoint semantics. We show that for a large class of non-deterministic programming languages there is no fully abstract least fixpoint semantics even if one considers arbitrary functions (not necessarily continuous) over arbitrary partial orders (not necessarily complete).

7.
Conclusion. We have considered dynamic models of parallel computations and transformation algorithms for macropipelined programs that increase the internal exchange asynchronism of the parallel components. The models generalize the well-known CSP synchronous exchange paradigm (the rendezvous mechanism) [18] and provide a theoretical justification for the development of a whole range of interconnected semantic models of asynchronous computation with increasing degrees of exchange asynchronism. Some publications ensure asynchronous exchange only by buffering [15, 16]. The dynamic-model approach developed in this study combines buffering with analysis of the data dependences of the exchange operators. It not only reduces losses associated with synchronization of communicating parallel processes, but also ensures automatic resolution of some classes of data exchange deadlocks. Experience with dynamic models as a means of increasing the efficiency of macropipelined programs and multilevel memory in multiprocessor systems [12, 17] can be applied in other programming systems and parallel program design systems of both compiling (S1, S2) and interpreting (S3) type. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 45–65, November–December, 1995.

8.
Multi-tac is a learning system that synthesizes heuristic constraint satisfaction programs. The system takes a library of generic algorithms and heuristics and specializes them for a particular application. We present a detailed case study with three different distributions of a single combinatorial problem, Minimum Maximal Matching, and show that Multi-tac can synthesize programs for these different distributions that perform on par with hand-coded programs and that exceed the performance of some well-known satisfiability algorithms. In synthesizing a program, Multi-tac bases its choice of heuristics on an instance distribution, and we demonstrate that this capability has a significant impact on the results.
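For readers unfamiliar with the benchmark, the sketch below is a brute-force Python reference for Minimum Maximal Matching: find the smallest matching that cannot be extended by any further edge. The path graph is an arbitrary illustrative instance, and this is of course not the heuristic program Multi-tac synthesizes.

```python
from itertools import combinations

def is_matching(edges):
    """No vertex appears in more than one edge."""
    used = [v for e in edges for v in e]
    return len(used) == len(set(used))

def is_maximal(edges, all_edges):
    """Every remaining edge touches a vertex already covered by the matching."""
    covered = {v for e in edges for v in e}
    return all(u in covered or v in covered for u, v in all_edges)

def minimum_maximal_matching(all_edges):
    for k in range(len(all_edges) + 1):           # smallest cardinality first
        for subset in combinations(all_edges, k):
            if is_matching(subset) and is_maximal(subset, all_edges):
                return subset
    return ()

# A path on five vertices: 1-2-3-4-5.
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
print(minimum_maximal_matching(edges))   # ((1, 2), (3, 4)) for this instance
```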

9.
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid-1980s there has been significant progress in the development of parallelizing compilers for logic programming (and, more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce, in a tutorial way, some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, and techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers), among others.

10.
We present a programming language called TCEL (Time-Constrained Event Language), whose semantics are based on time-constrained relationships between observable events. Such a semantics infers only those timing constraints necessary to achieve real-time correctness, without overconstraining the system. Moreover, an optimizing compiler can exploit this looser semantics to help tune the code, so that its worst-case execution time is consistent with its real-time requirements. In this paper we describe such a transformation system, which works in two phases. First, the TCEL source code is translated into an intermediate representation. Then an instruction-scheduling algorithm rearranges selected unobservable operations and synthesizes tasks guaranteed to respect the original event-based constraints.

11.
12.
This work presents an extension of the Tag-Frame resource management system previously developed by the authors. The extended system is able to isolate the consumption of a given goal/clause without incurring extra runtime costs. We believe this feature may help in debugging linear logic programs and specifications in a proof-theoretic setting. This point is illustrated by means of a simple example.

13.
This paper presents an approach to specialising logic programs which is based on abstract interpretation. Program specialisation involves two stages, the construction of an abstract computation tree and a program construction stage. For the tree construction stage, abstract OLDT resolution is defined and used to construct a complete and finite tree corresponding to a given logic program and a goal. In the program construction stage, a specialised program is extracted from this tree. We focus on two logic programming languages: sequential Prolog and Flat Concurrent Prolog. Although the procedural reading of concurrent logic programs is very different from that of sequential programs, the techniques presented provide a uniform approach to the specialisation of both languages. We present the results for Prolog rigorously, and extend them less formally to Flat Concurrent Prolog. There are two main advantages of basing program specialisation on abstract interpretation. Firstly, termination can be ensured by using abstract interpretations over finite domains, while performing a complete flow analysis of the program. Secondly, correctness of the specialised program is closely related to well-defined consistency conditions on the concrete and abstract interpretations.

14.
15.
We present a new static analysis by abstract interpretation to prove automatically the functional correctness of algorithms implementing matrix operations, such as matrix addition, multiplication, general matrix multiplication, inversion, or more generally Basic Linear Algebra Subprograms. In order to do so, we introduce a family of abstract domains parameterized by a set of matrix predicates as well as a numerical domain. We show that our analysis is robust enough to prove the functional correctness of several versions of the same matrix operations, resulting from loop reordering, loop tiling, inverting the iteration order, line swapping, and expression decomposition. We extend our method to enable modular analysis on code fragments manipulating matrices by reference, and show that it results in a significant analysis speedup.
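To illustrate the kind of "several versions of the same operation" such an analysis has to recognize, here is a small Python sanity check comparing a naive and a loop-tiled matrix multiplication. The code, matrix sizes, and tile width are illustrative assumptions; this is not the analyzer described in the abstract.

```python
import random

def matmul_naive(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, tile=2):
    # Same function, different loop structure: blocked (tiled) iteration order.
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, p, tile):
            for kk in range(0, m, tile):
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, p)):
                        for k in range(kk, min(kk + tile, m)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

A = [[random.random() for _ in range(5)] for _ in range(4)]
B = [[random.random() for _ in range(3)] for _ in range(5)]
assert all(abs(x - y) < 1e-9
           for r1, r2 in zip(matmul_naive(A, B), matmul_tiled(A, B))
           for x, y in zip(r1, r2))
print("naive and tiled multiplications agree")
```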

16.
Standard and nonstandard models of Propositional Dynamic Logic differ in their interpretation of loops. In Standard models, a loop is interpreted as the Kleene closure of the interpretation of its loop body; in nonstandard (Loop Invariant) models, a loop is interpreted as a program which preserves invariant assertions over the loop body. In this paper we show that both interpretations are adequate to represent loops in PDL. We demonstrate this in two ways: first, we note that Standard and Loop Invariant models are distinct but not distinguishable within PDL; second, we show that the class of Loop Invariant models is complete with respect to the Segerberg axiomatization of PDL. Since completeness of the class of Loop Invariant models implies completeness of the class of Standard models, Standard models are also complete with respect to this axiomatization. The research reported here was supported in part by NSF Grants MCS77-02474 and MCS80-05387. Most of the results in this paper were announced in “A Completeness Technique for D-axiomatizable Semantics”, presented at the 11th Annual ACM Symposium on the Theory of Computing in May 1979.
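For reference, the Standard-model reading of loops and the two Segerberg loop axioms the abstract alludes to can be written as follows; this is standard PDL material stated only for orientation, not a reconstruction of the paper's proofs.

```latex
% Standard interpretation of a loop: the Kleene closure of the loop body's relation.
\[
  \rho(\alpha^{*}) \;=\; \bigl(\rho(\alpha)\bigr)^{*} \;=\; \bigcup_{n \ge 0} \rho(\alpha)^{n}
\]
% Segerberg's fixpoint and induction axioms for iteration:
\[
  [\alpha^{*}]\varphi \;\leftrightarrow\; \varphi \wedge [\alpha][\alpha^{*}]\varphi
  \qquad\qquad
  \varphi \wedge [\alpha^{*}]\bigl(\varphi \rightarrow [\alpha]\varphi\bigr) \;\rightarrow\; [\alpha^{*}]\varphi
\]
```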

17.
李争名, 杨南粤, 岑健 (Li Zhengming, Yang Nanyue, Cen Jian). 《计算机应用》 (Journal of Computer Applications), 2017, 37(6): 1716-1721.
To improve the discriminative power of a dictionary, a dictionary learning algorithm constrained by an atom-level Fisher discriminant criterion, called AFDDL, is proposed. First, a class-specific dictionary learning algorithm is used to assign a class label to each atom, and the scatter matrices of same-class and different-class atoms are computed. Then, the difference between the traces of the within-class and between-class scatter matrices is used as the discriminative constraint term; this maximizes the differences between atoms of different classes and, while minimizing the differences among same-class atoms, reduces the self-correlation between atoms, so that atoms of one class reconstruct samples of that class as well as possible and the discriminative power of the dictionary is improved. Experiments on the AR, FERET and LFW face databases and the USPS handwritten digit database show that, on all four image databases, the proposed algorithm outperforms label-consistent K-SVD (LC-KSVD), dictionary learning based on local features and label embedding constraints (LCLE-DL), support vector guided dictionary learning (SVGDL) and Fisher discrimination dictionary learning in both recognition rate and training time; on the four databases it also achieves higher recognition rates than sparse representation classification (SRC) and collaborative representation classification (CRC).
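A hedged sketch of the discriminative term described above (the paper's full objective, weights, and exact handling of atom self-correlation are not reproduced here): with each dictionary atom carrying a class label, the constraint added to the reconstruction objective is the difference of scatter-matrix traces.

```latex
% S_W and S_B: within-class and between-class scatter matrices computed over the
% labeled atoms of the dictionary D. Minimizing their trace difference pulls
% same-class atoms together and pushes different-class atoms apart.
\[
  f(D) \;=\; \operatorname{tr}\!\bigl(S_W(D)\bigr) \;-\; \operatorname{tr}\!\bigl(S_B(D)\bigr)
\]
```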

18.
Building on an introduction to the relevant concepts of constraint logic programs, this paper studies a positive-literal prefix power set expansion method for the constraint atoms of simple monotone constraint logic programs, and proves that the expanded normal logic program is equivalent to the original constraint logic program. The principle behind the alternating-fixpoint construction of the well-founded model of normal logic programs is analyzed; a simple monotone constraint logic program is expanded into an equivalent normal logic program, and, taking the least fixpoint of a given operator over the expanded program as the starting point, an alternating-fixpoint well-founded model for simple monotone constraint logic programs is given. The reasonableness of the proposed definition of the well-founded model for simple monotone constraint logic programs is argued, showing that transforming constraint logic programs into normal logic programs is feasible.
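Since the construction hinges on the alternating fixpoint, here is a self-contained Python sketch of computing the well-founded model of a ground normal program that way. The four-rule program is an illustrative assumption, and the code does not perform the paper's expansion of constraint atoms.

```python
# Rules are (head, positive_body, negative_body) over ground atoms.
rules = [
    ("p", [], ["q"]),   # p :- not q.
    ("q", [], ["p"]),   # q :- not p.
    ("r", [], []),      # r.
    ("s", [], ["r"]),   # s :- not r.
]
atoms = {"p", "q", "r", "s"}

def least_model(positive_rules):
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in positive_rules:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def gamma(interp):
    """Least model of the Gelfond-Lifschitz reduct with respect to `interp`."""
    reduct = [(h, b) for h, b, n in rules if not (set(n) & interp)]
    return least_model(reduct)

# Alternating fixpoint: gamma is antimonotone, gamma∘gamma is monotone, and its
# least fixpoint gives the atoms that are true in the well-founded model.
true_atoms, previous = set(), None
while true_atoms != previous:
    previous = true_atoms
    true_atoms = gamma(gamma(true_atoms))
not_false = gamma(true_atoms)
print("true:", true_atoms)                    # {'r'}
print("undefined:", not_false - true_atoms)   # {'p', 'q'}
print("false:", atoms - not_false)            # {'s'}
```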

19.
The Timed Concurrent Constraint programming language (tccp) introduces time aspects into the Concurrent Constraint paradigm. This makes tccp especially appropriate for analyzing timing properties of concurrent systems by model checking. However, even if very compact state representations are obtained thanks to the use of constraints in tccp, large state spaces can still be generated, which may prevent model-checking tools from verifying tccp programs completely. Model checking tccp programs is a difficult task due to the subtleties of the underlying operational semantics, which combines constraints, concurrency, non-determinism and time. Currently, there is no practical model-checking tool that is applicable to tccp. In this work, we introduce an abstract methodology which is based on over- and under-approximating tccp models and which mitigates the state explosion problem that is common to traditional model-checking algorithms. We ascertain the conditions for the correctness of the abstract technique and show that this preliminary abstract semantics does not correctly simulate the suspension behavior, which is a key feature of tccp. Then, we present a refined abstract semantics which correctly models suspension. Finally, we complete our methodology by approximating the temporal properties that must be verified.

20.
We present MODL, a Dynamic Logic and a deductive verification calculus for a core Java-like language that includes multi-threading. The calculus is based on symbolic execution. Even though we currently do not handle non-atomic loops, employing the technique of symmetry reduction allows us to verify systems without limits on state space or thread number. We have instantiated our logic for (restricted) multi-threaded Java programs and implemented the verification calculus within the KeY system. We demonstrate our approach by verifying a central method of the StringBuffer class from the Java standard library in the presence of unbounded concurrency.
