Similar Documents
20 similar documents found.
1.
A rigorous framework for analyzing safe composition of distributed programs is presented. It facilitates specifying notions of safe sequential execution of distributed programs in various models of communication. A notion of sealing is defined, where if a program P is immediately followed by a program Q that seals P, then P will be sealed: it will execute as if it runs in isolation. None of its send or receive actions will match or interact with actions outside P. The applicability of sealing is illustrated by a study of program composition when communication is reliable but not necessarily FIFO. In this model, special care must be taken to ensure that messages do not accidentally overtake one another in the composed program; no program that sends or receives messages can be composed automatically with arbitrary programs without jeopardizing their intended behavior. Safety of composition becomes context-sensitive, and new tools are needed for ensuring it. The investigation of sealing in this model reveals a novel connection between Lamport causality and safe composition. A characterization of sealable programs is given, as well as efficient algorithms for testing whether Q seals P and for constructing a seal for a class of straight-line programs. It is shown that every sealable program can be sealed using O(n) messages. In fact, 3n − 4 messages are necessary and sufficient in the worst case, despite the fact that a sealable program may be open to interference on Ω(n²) channels.

2.
The verified software repository is dedicated to a long-term vision of a future in which all computer systems justify the trust that society increasingly places in them. This would be accompanied by a substantial reduction in the current high costs of programming error, incurred during the design, development, testing, installation, maintenance, evolution, and retirement of computer software. An important technical contribution to this vision will be a verifying compiler: a tool-set that automatically proves that a program will always meet its specification, insofar as this has been formalised, without even needing to run it. This has been a challenge for computing research for over 30 years, but the current state of the art now gives grounds for hope that it may be implemented in the foreseeable future. Achievement of the overall vision will depend also on continued progress of research into dependability and software evolution, as envisaged by the UKCRC Grand Challenge project in dependable systems evolution. The verified software repository is a first step towards the realisation of this long-term vision. It will maintain and develop an evolving collection of state-of-the-art tools, together with a representative portfolio of real programs and specifications on which to test, evaluate, and develop the tools. It will contribute initially to the inter-working of tools, and eventually to their integration. It will promote transfer of the relevant technology to industrial tools and into software engineering practice. It will build on the recognised achievements of practical formal development of safety-critical computer applications, and contribute to an international initiative in verified software, covering theory, tools, and experimental validation.

3.
We describe a Prolog-based approach to the development of language processors (such as preprocessors, frontends, evaluators, and tools for software modification and analysis). The design of the corresponding environment Laptob for prological language processing is outlined. Language processor definitions in Laptob are basically Prolog programs. The programs may contain grammars; that is, we consider logic grammars. The programs can be typed, and they can be higher-order. The adaptation and composition of the logic programs themselves is supported by meta-programming. The environment offers tool support for efficient scanning, testing, and application development based on a make-system. We report on recent and ongoing applications of the Prolog-based approach.

4.
We describe a combination of BDDs and superposition theorem proving, called light-weight theorem proving, and its application to the flexible and efficient automation of the reasoning activity required to debug and verify pointer-manipulating programs. This class of programs is notoriously challenging to reason about, and it is also interesting from a programming point of view, since pointers are an important source of bugs. The implementation of our technique (in a system called haRVey) scales up significantly better than state-of-the-art tools such as E (a superposition prover) and Simplify (a prover based on the Nelson-Oppen scheme for combining decision procedures, which is used in ESC/Java) on a set of proof obligations arising in debugging and verifying C functions that manipulate pointers.
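To see why pointer-manipulating code generates hard proof obligations, consider this invented two-line example (C-style C++, not taken from the paper): the safety of its postcondition hinges on an aliasing fact that a prover must establish.

```cpp
// Invented illustration: concluding that *p == 1 holds after set() requires
// proving that p and q do not alias. If p == q, the second write clobbers
// the first, and the postcondition fails.
void set(int* p, int* q) {
    *p = 1;
    *q = 2;   // proof obligation: p != q, else *p == 2 here
}
```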

5.
Locating potential execution errors in software is gaining more attention due to the economic and social impact of software crashes. For this reason, many software engineers are now in need of automatic debugging tools in their development environments. Fortunately, the work on formal method technologies during the past 25 years has produced a number of techniques and tools that can make the debugging task almost automatic, using standard computer equipment and with a reasonable response time. In particular, verification techniques like model checking that were traditionally employed for formal specifications of the software can now be directly employed for real source code. Due to the maturity of model-checking technology, its application to real software is now a promising and realistic approach to increase software quality. There are already some successful examples of tools for this purpose that mainly work with self-contained programs (programs with no system calls). However, verifying software that uses external functionality provided by the operating system via APIs is currently a challenging trend. In this paper, we propose a method for using the tool SPIN to verify C software systems that use services provided by the operating system through a given API. Our approach consists of building a model of the underlying operating system to be joined with the original C code in order to obtain the input for the model checker SPIN. The whole modeling process is transparent for the C programmer, because it is performed automatically and without special syntactic constraints on the input C code. Regarding verification, we consider optimization techniques suitable for this application domain, and we guarantee that the system only reports potential (non-spurious) errors. We present the applicability of our approach focusing on the verification of distributed software systems that use the Socket API and the TCP/IP network protocol stack for communications. In order to ensure correctness, we define and use a formal semantics of the API to guide the construction of correct models.
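As an illustration of the class of programs targeted (a sketch assuming a POSIX environment; not an example from the paper), every interaction with the operating system in the following C-style client goes through the Socket API, and it is exactly these calls that the method pairs with a model of the OS:

```cpp
// Illustration only: a client of the Socket API of the kind the approach
// verifies by joining the C code with an operating-system model.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int ping(const char* host, unsigned short port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);         // OS service: create socket
    if (fd < 0) return -1;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);

    if (connect(fd, (sockaddr*)&addr, sizeof addr) == 0)  // OS service: connect
        send(fd, "ping", 4, 0);                           // OS service: send
    return close(fd);                                     // OS service: close
}
```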

6.
Object Oriented Tools for Scientific Computing
A set of object-oriented tools is presented which, when combined, yield an efficient parallel finite element program. Special emphasis is given to details within the design of the tools that enhance their efficiency. The experience of the author has shown that the design concepts documented here are crucial for the efficiency of the resulting code, and that they can easily be incorporated into existing object-oriented programs.

7.
The paper proposes an algebraic representation of program modules, called F(p), and illustrates the algorithms that use F(p) to generate program graph models for measurement, documentation, and testing activities. The representation refers to procedural languages, D-structured programs, and one-in/one-out modules, but its definition can be extended to programs structured in terms of an arbitrary set of one-in/one-out legal control structures. Since it is possible to produce F(p) directly from the program code using reverse engineering techniques, the proposed algorithms are of considerable interest for building tools that support the program comprehension phase, a fundamental first step in any maintenance operation.

8.
This paper describes MicroScope, a framework for developing analysis tools for Lisp programs. MicroScope uses a knowledge-intensive approach for program representation and analysis. The analysis tools share a common object-oriented program database and a common Prolog inference engine. The use of Prolog and a declarative representation for programs permits sharing of information and provides high-bandwidth communication between diverse analysis tools. It also supports program specification and debugging activities in the same framework. Extensions to Prolog to support analysis are described, and two tools, the Critic and the Expector, are presented.

9.
Context: Static analysis of source code is a scalable method for discovery of software faults and security vulnerabilities. Techniques for static code analysis have matured in the last decade, and many tools have been developed to support automatic detection.
Objective: This research work is focused on empirical evaluation of the ability of static code analysis tools to detect security vulnerabilities, with an objective to better understand their strengths and shortcomings.
Method: We conducted an experiment which consisted of using the benchmarking test suite Juliet to evaluate three widely used commercial tools for static code analysis. Using a design-of-experiments approach to conduct the analysis and evaluation, and including statistical testing of the results, are unique characteristics of this work. In addition to the controlled experiment, the empirical evaluation included case studies based on three open source programs.
Results: Our experiment showed that 27% of C/C++ vulnerabilities and 11% of Java vulnerabilities were missed by all three tools. Some vulnerabilities were detected by only one tool or by a combination of two tools; 41% of C/C++ and 21% of Java vulnerabilities were detected by all three tools. More importantly, the static code analysis tools did not show a statistically significant difference in their ability to detect security vulnerabilities for either C/C++ or Java. Interestingly, all tools had median and mean per-CWE recall values, as well as overall recall across all CWEs, close to or below 50%, which indicates performance comparable to or worse than random guessing. While for C/C++ vulnerabilities one of the tools had better performance in terms of probability of false alarm than the other two tools, there was no statistically significant difference among the tools' probability of false alarm for Java test cases.
Conclusions: Despite recent advances in methods for static code analysis, the state-of-the-art tools are not very effective in detecting security vulnerabilities.
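For reference, the two metrics being compared are the standard ones; a sketch assuming the usual confusion-matrix convention (the paper's exact operational definitions may differ in detail):

```latex
% Per-CWE recall and probability of false alarm, with TP, FN, FP, TN counted
% over the Juliet test cases of a given CWE; recall at or below 0.5 is what
% the study compares against random guessing.
\[
  \mathrm{recall} = \frac{TP}{TP + FN},
  \qquad
  \mathrm{PFA} = \frac{FP}{FP + TN}
\]
```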

10.
The software model checker Blast
Blast is an automatic verification tool for checking temporal safety properties of C programs. Given a C program and a temporal safety property, Blast either statically proves that the program satisfies the safety property, or provides an execution path that exhibits a violation of the property (or, since the problem is undecidable, does not terminate). Blast constructs, explores, and refines abstractions of the program state space based on lazy predicate abstraction and interpolation-based predicate discovery. This paper gives an introduction to Blast and demonstrates, through two case studies, how it can be applied to program verification and test-case generation. In the first case study, we use Blast to statically prove memory safety for C programs. We use CCured, a type-based memory-safety analyzer, to annotate a program with run-time assertions that check for safe memory operations. Then, we use Blast to remove as many of the run-time checks as possible (by proving that these checks never fail), and to generate execution scenarios that violate the assertions for the remaining run-time checks. In our second case study, we use Blast to automatically generate test suites that guarantee full coverage with respect to a given predicate. Given a C program and a target predicate p, Blast determines the program locations q for which there exists a program execution that reaches q with p true, and automatically generates a set of test vectors that cause such executions. Our experiments show that Blast can provide automated, precise, and scalable analysis for C programs.
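A minimal sketch of the first case study's workflow (invented names, C-style C++; not taken from the paper): instrumentation inserts a run-time check, and the verifier then tries to prove the check can never fail so that it can be removed.

```cpp
#include <cassert>
#include <cstddef>

// Invented example: the assertion stands for an inserted memory-safety
// check. If a verifier proves it can never fail, the check is removed
// from the compiled program; otherwise a violating path is reported.
int sum(const int* a, std::size_t n) {
    assert(n == 0 || a != nullptr);  // run-time check to be discharged statically
    int s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];                   // in bounds by the loop condition
    return s;
}
```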

11.
Proper memory management and pointer usage often prove to be the most difficult concepts for students learning C++ to grasp. Compounding this problem is the fact that the compilers and runtime environments traditionally used to introduce these concepts leave much to be desired with regard to generating meaningful diagnostics to assist students in tracking down and fixing memory-related logical errors. To alleviate this, we have developed Dereferee, an advanced yet thin wrapper around C++ pointers that greatly increases the quality of these runtime diagnostics, with only a small amount of intrusion into the development process. With regard to performance, memory-intensive programs will experience execution times approximately 20–30 times slower when using Dereferee, which is comparable to other similar tools. Furthermore, the library has been designed to be customizable and easily disabled, to transition code from development to production.
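A minimal sketch of the idea behind such a thin pointer wrapper (this is not the Dereferee API; the class and member names are invented): each dereference is checked, so a memory error is reported at the faulty statement rather than as a later crash.

```cpp
#include <stdexcept>

// Invented sketch of a checked-pointer wrapper: dereferencing a null or
// released pointer raises a precise diagnostic instead of corrupting memory.
template <typename T>
class checked_ptr {
    T* ptr_ = nullptr;
public:
    checked_ptr() = default;
    explicit checked_ptr(T* p) : ptr_(p) {}

    T& operator*() const {
        if (ptr_ == nullptr)
            throw std::logic_error("dereference of null or released pointer");
        return *ptr_;
    }
    void release() { delete ptr_; ptr_ = nullptr; }  // later uses are caught
};
```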

12.
A model (consisting of rv-systems), a core programming language (for developing rv-programs), and several specification and analysis techniques appropriate for modeling, programming, and reasoning about interactive computing systems have recently been introduced by Stefanescu using register machines and space-time duality; see [Stefanescu, G. Interactive systems with registers and voices. Fundamenta Informaticae 73 (2006), 285–306. (Early draft, School of Computing, National University of Singapore, July 2004.)]. After that, Dragoi and Stefanescu developed structured programming techniques for rv-systems and their verification; see, e.g., [Dragoi, C., and G. Stefanescu. Structured programming for interactive rv-systems. Institute of Mathematics of the Romanian Academy, IMAR Preprint 9/2006, Bucharest, 2006. Dragoi, C., and G. Stefanescu. Towards a Hoare-like logic for structured rv-programs. Institute of Mathematics of the Romanian Academy, IMAR Preprint 10/2006, Bucharest, 2006. Dragoi, C., and G. Stefanescu. Implementation and verification of ring termination detection protocols using structured rv-programs. Annals of University of Bucharest, Mathematics-Informatics Series, 55 (2006), 129–138. Dragoi, C., and G. Stefanescu. Structured interactive programs with registers and voices and their verification. Draft, Bucharest, January 2007. Dragoi, C., and G. Stefanescu. On compiling structured interactive programs with registers and voices. In: "Proc. SOFSEM 2008," 259–270. LNCS 4910, Springer, 2008.]. In the present paper a kernel programming language AGAPIA v0.1 for interactive systems is introduced. The language contains definitions for complex spatial and temporal data, arithmetic and boolean expressions, modules, and while-programming statements with their temporal, spatial, and spatio-temporal versions. In AGAPIA v0.1 one can write programs for open processes located at various sites, each with its own temporal window of adequate reaction to the environment. The main technical part of the paper describes a typing system for AGAPIA v0.1 programs.

13.
Compile-time metaprograms are programs executed during the compilation of a source file, usually aimed at updating its source code. Even though metaprograms are essentially programs, they are typically treated as exceptional cases, without sharing common practices and development tools. In this direction, we identify a set of primary requirements related to language implementation, metaprogramming features, software engineering support, and programming environments, and we elaborate on addressing these requirements in the implementation of a metaprogramming language. In particular, we introduce the notion of integrated compile-time metaprograms, as coherent programs assembled from specific metacode fragments present in the source code. We show the expressiveness of this programming model and illustrate its advantages through various metaprogram scenarios. Additionally, we present an integrated tool chain supporting full-scale build features and compile-time metaprogram debugging.
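As a familiar analog of computation during compilation (shown in C++ for consistency with the other sketches; the paper's integrated metaprograms go further and splice generated source into the file being compiled, which C++ does not directly support):

```cpp
// C++ constexpr evaluation happens while the source file is compiled;
// a static_assert failure is a compile-time, not a run-time, error.
constexpr unsigned long long fact(unsigned n) {
    return n < 2 ? 1ULL : n * fact(n - 1);
}
static_assert(fact(10) == 3628800ULL, "evaluated during compilation");
```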

14.
The interpretative approach to compilation allows compiling programs by partially evaluating an interpreter w.r.t. a source program. This approach, though very attractive in principle, has not been widely applied in practice, mainly because of the difficulty of finding a partial evaluation strategy that always obtains "quality" compiled programs. In spite of this, in recent work we have provided a proof of concept that, at least for some examples, this approach can be applied to decompile Java bytecode into Prolog. This allows applying existing advanced tools for the analysis of logic programs in order to verify Java bytecode. However, successful partial evaluation of an interpreter for (a realistic subset of) Java bytecode is a rather challenging problem. The aim of this work is to improve the decompilation process above in two respects. First, we would like to obtain quality decompiled programs, i.e., simple and small ones. We refer to this as the effectiveness of the decompilation. Second, we would like the decompilation process to be as efficient as possible, both in terms of time and memory usage, in order to scale up in practice. We refer to this as the efficiency of the decompilation. With this aim, we propose several techniques for improving the partial evaluation strategy. We argue that our experimental results show that we are able to significantly improve both the efficiency and the effectiveness of the decompilation process.
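The interpretative approach is the classic first Futamura projection; schematically (standard notation, not the paper's):

```latex
% First Futamura projection: partially evaluating an interpreter int with
% respect to a source program p yields a compiled program target_p that
% agrees with interpretation on every input d.
\[
  \mathit{target}_p = \mathit{mix}(\mathit{int}, p),
  \qquad
  \forall d:\ [\![\mathit{target}_p]\!](d) = [\![\mathit{int}]\!](p, d)
\]
```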

15.
Knowledge-based programs are programs with explicit tests for knowledge. They have been used successfully in a number of applications. Sanders has pointed out what seems to be a counterintuitive property of knowledge-based programs: roughly speaking, they do not satisfy a certain monotonicity property, while standard programs (ones without tests for knowledge) do. It is shown that there are two ways of defining the monotonicity property, which agree for standard programs. Knowledge-based programs satisfy the first but do not satisfy the second. It is further argued by example that the fact that they do not satisfy the second is actually a feature, not a problem. Moreover, once we allow the more general class of knowledge-based specifications, standard programs do not satisfy the monotonicity property either.

16.
Hoare's logical system for specifying and proving partial correctness properties of sequential programs is generalized to concurrent programs. The basic idea is to define the assertion {P} S {Q} to mean that if execution is begun anywhere in S with P true, then P will remain true until S terminates, and Q will be true if and when S terminates. The predicates P and Q may depend upon program control locations as well as upon the values of variables. A system of inference rules and axiom schemas is given, and a formal correctness proof for a simple program is outlined. We show that by specifying certain requirements for the unimplemented parts, correctness properties can be proved without completely implementing the program. The relation to Pnueli's temporal logic formalism is also discussed.
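One schematic temporal-logic rendering of this generalized meaning (our notation, not the paper's, with in(S) and after(S) as informal control predicates and W the weak until, since S need not terminate):

```latex
% Generalized reading of {P} S {Q}: started anywhere in S with P true,
% P stays true for as long as S runs, and Q holds if and when S terminates.
\[
  \{P\}\; S\; \{Q\} \;\equiv\;
  \bigl( \mathit{in}(S) \wedge P \bigr) \Rightarrow
  \bigl( P \;\mathcal{W}\; (\mathit{after}(S) \wedge Q) \bigr)
\]
```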

17.
Algorithmic differentiation tools can automate the adjoint transformation of parallel message-passing codes [J. Utke, L. Hascoët, P. Heimbach, C. Hill, P. Hovland, and U. Naumann, Toward Adjoinable MPI, in 2009 IEEE International Symposium on Parallel & Distributed Processing, May, IEEE, 2009, pp. 1–8] using the AMPI library. Nevertheless, a non-trivial and manual step after the differentiation is the initialization of the seed and the retrieval of the output values from the differentiated code. Ambiguities in seeding occur in programs where the user is unable to expose the complete program flow, with a single entry and a single exit point, to the AD tool. We present the ambiguities associated with seed initialization and output retrieval for the adjoint transformation of halo and zero-halo partitioned MPI programs. We introduce a general framework to eliminate ambiguities in seeding and retrieval for shared-node reduction over the + and * operators, using a conceptual master-worker model. The model shows the need for new MPI calls for retrieval and eliminates MPI calls for seed initialization. Different implementations for seeding manually assembled adjoints were inferred from the model, namely partial and unique seeding. We successfully applied the seeding techniques to a 3D zero-halo partitioned unstructured compressible discrete adjoint solver and highlight the merits and demerits of each strategy.

18.
In Java, C, or C++, attempts to dereference the null value result in an exception or a segmentation fault. Hence, it is important to identify those program points where this undesired behaviour might occur, or to prove the other program points (and possibly the entire program) safe. To that purpose, null-pointer analysis of computer programs checks or infers non-null annotations for variables and object fields. With few notable exceptions, null-pointer analyses currently use run-time checks, are incorrect, or only verify manually provided annotations. In this paper, we use abstract interpretation to build and prove correct a first flow- and context-sensitive static null-pointer analysis for Java bytecode (and hence Java) which infers non-null annotations. It is based on Boolean formulas, implemented with binary decision diagrams. For better precision, it identifies instance or static fields that remain always non-null after being initialised. Our experiments show this analysis to be faster and more precise than the correct null-pointer analysis by Hubert, Jensen and Pichardie. Moreover, our analysis deals with exceptions, which is not the case for most others; its formulation is theoretically clean and its implementation strong and scalable. We subsequently improve that analysis by using local reasoning about fields that are not always non-null, but happen to hold a non-null value when they are accessed. This is a frequent situation, since programmers typically check a field for non-nullness before accessing it. We conclude with an example of the use of our analyses to infer null-pointer annotations that are more precise than those that other inference tools can achieve.
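The local reasoning in the improved analysis targets the common idiom below (shown in C++ for consistency with the other sketches; the paper analyses Java bytecode): a field that is not always non-null is guarded at each access, so the dereference is safe even though no global non-null annotation holds.

```cpp
// Invented example: next is not always non-null over the object's lifetime,
// but every dereference is dominated by a null test, so it cannot fail.
struct Node {
    Node* next  = nullptr;
    int   value = 0;
};

int next_value(const Node& n) {
    if (n.next != nullptr)
        return n.next->value;  // safe: guarded by the test above
    return 0;
}
```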

19.
Andrew Hume, Software, 1988, 18(11): 1063-1072
Text searching programs such as the UNIX system tools grep and egrep require more than just good algorithms; they need to make efficient use of system resources such as I/O. I describe improving the I/O management in grep and egrep by using a new fast I/O library fio to replace the normal I/O library stdio. I also describe incorporating the Boyer-Moore algorithm into egrep; egrep is now typically 8–10 (for some common patterns 30–40) times faster than grep.
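A compact sketch of the bad-character idea that gives Boyer-Moore its speed, in Horspool's simplified form (C++; egrep's actual implementation differs): on a mismatch the pattern shifts by more than one position, so most text bytes are never examined.

```cpp
#include <array>
#include <string>

// Horspool's simplification of Boyer-Moore: skip[c] says how far the pattern
// may shift when text byte c is aligned with the pattern's last position.
long find_bm(const std::string& text, const std::string& pat) {
    const long n = static_cast<long>(text.size());
    const long m = static_cast<long>(pat.size());
    if (m == 0 || m > n) return m == 0 ? 0 : -1;

    std::array<long, 256> skip;
    skip.fill(m);                                  // bytes absent from the pattern
    for (long i = 0; i < m - 1; ++i)               // rightmost-occurrence rule
        skip[static_cast<unsigned char>(pat[i])] = m - 1 - i;

    for (long i = m - 1; i < n;
         i += skip[static_cast<unsigned char>(text[i])]) {
        long t = i, p = m - 1;
        while (p >= 0 && text[t] == pat[p]) { --t; --p; }  // compare right to left
        if (p < 0) return t + 1;                   // match starts at t + 1
    }
    return -1;
}
```

On typical text, most alignments are rejected with a single comparison followed by a multi-byte shift, which is why longer fixed patterns search faster than short ones.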
