20 matching documents found.
1.
An assertion language for data structures is presented, leading to the following results: formal semantics of operations on data structures are given in terms of the weakest precondition formula for assignment statements; input/output specifications for data-structure manipulating algorithms can be stated with precision; there is a clear relationship between the output specification and intermediate assertions; and knowledge about standard types of data structures can be schematized. These ideas are illustrated on an algorithm to reverse the arcs on a one-way linked list, and on a threaded tree example.
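The weakest-precondition treatment of assignment mentioned in this abstract can be made concrete. The sketch below is our own toy model (predicates as Python functions over a state dict), not the paper's assertion language: wp(x := e, Q) is Q with e substituted for x.

```python
def assign(var, expr):
    """State transformer for the assignment `var := expr`, where `expr`
    is a function of the state (a dict mapping variable names to values)."""
    def step(state):
        new = dict(state)
        new[var] = expr(state)
        return new
    return step

def wp(statement, post):
    """Weakest precondition of a deterministic, terminating statement:
    holds exactly in states from which executing it establishes `post`."""
    return lambda state: post(statement(state))

# wp(x := x + 1, x > 10) is x + 1 > 10, i.e. x > 9:
pre = wp(assign("x", lambda s: s["x"] + 1), lambda s: s["x"] > 10)
assert pre({"x": 10}) and not pre({"x": 9})
```

For deterministic straight-line code, wp reduces to running the statement and testing the postcondition, which is what this executable model exploits.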
2.
A Structured Parallel Programming Framework for Heterogeneous Computing
With the arrival of the artificial-intelligence era, heterogeneous computing plays an increasingly important role in deep learning, scientific computing, and other fields. One current bottleneck in applying heterogeneous computing systems is the lack of efficient software development frameworks: existing frameworks such as OpenCL and CUDA, which target GPUs, DSPs, and FPGAs, are based on C/C++ and traditional parallel programming methods, leading to low development productivity, programs that are hard to reason about and debug, and inflexible handling of cooperation and scheduling among computing devices. This paper proposes a structured parallel programming framework for heterogeneous platforms based on a scripting language. It provides structured parallel programming interfaces, supports mapping computational tasks onto heterogeneous computing devices, and eases reasoning about and verification of parallel programs. A structured scheduling algorithm based on a genetic algorithm is designed and implemented, which exploits the computing power of the heterogeneous system and improves software development efficiency. Experimental results show that the proposed framework achieves speedups of 1.5x to 2.5x over a single processor on a CPU+GPU platform.
3.
Corina S. Păsăreanu Dimitra Giannakopoulou Mihaela Gheorghiu Bobaru Jamieson M. Cobleigh Howard Barringer 《Formal Methods in System Design》2008,32(3):175-205
Assume-guarantee reasoning enables a “divide-and-conquer” approach to the verification of large systems that checks system components separately while using assumptions about each component’s environment. Developing appropriate assumptions used to be a difficult and manual process. Over the past five years, we have developed a framework for performing assume-guarantee verification of systems in an incremental and fully automated fashion. The framework uses an off-the-shelf learning algorithm to compute the assumptions. The assumptions are initially approximate and become more precise by means of counterexamples obtained by model checking components separately. The framework supports different assume-guarantee rules, both symmetric and asymmetric. Moreover, we have recently introduced alphabet refinement, which extends the assumption learning process to also infer assumption alphabets. This refinement technique starts with assumption alphabets that are a subset of the minimal interface between a component and its environment, and adds actions to them as necessary until a given property is shown to hold or to be violated in the system. We have applied the learning framework to a number of case studies that show that compositional verification by learning assumptions can be significantly more scalable than non-compositional verification.
J.M. Cobleigh is currently employed by The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760, USA.
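The asymmetric assume-guarantee rule that such frameworks automate can be illustrated on a deliberately tiny model. In the sketch below (our own simplification, not the authors' labeled-transition-system framework), components, assumptions, and properties are finite sets of traces over one shared alphabet, and parallel composition is set intersection:

```python
def ag_check(M1, M2, A, P):
    """Discharge the asymmetric assume-guarantee rule on a toy model:
    components are finite trace sets over one shared alphabet, and
    composition is intersection. If both premises hold, the conclusion
    <true> M1 || M2 <P> follows without building M1 & M2 directly."""
    premise1 = (M1 & A) <= P   # <A> M1 <P>
    premise2 = M2 <= A         # <true> M2 <A>
    return premise1 and premise2

M1 = {"ab", "ac", "b"}
M2 = {"ab", "b", "c"}
A = {"ab", "b", "c"}     # assumption about M2's possible behaviour
P = {"ab", "b", "ba"}    # property to establish for the composition
assert ag_check(M1, M2, A, P)
assert (M1 & M2) <= P    # the conclusion indeed holds
```

The point of the learned assumption A is that it can be much smaller than M2 itself while still discharging both premises; the learning framework described above constructs such an A automatically.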
4.
A calculus of refinements for program derivations
R. J. R. Back 《Acta Informatica》1988,25(6):593-624
A calculus of program refinements is described, to be used as a tool for the step-by-step derivation of correct programs. A derivation step is considered correct if the new program preserves the total correctness of the old program. This requirement is expressed as a relation of (correct) refinement between nondeterministic program statements. The properties of this relation are studied in detail. The usual sequential statement constructors are shown to be monotone with respect to this relation and it is shown how refinement between statements can be reduced to a proof of total correctness of the refining statement. A special emphasis is put on the correctness of replacement steps, where some component of a program is replaced by another component. A method by which assertions can be added to statements to justify replacements in specific contexts is developed. The paper extends the weakest precondition technique of Dijkstra to proving correctness of larger program derivation steps, thus providing a unified framework for the axiomatic, the stepwise refinement and the transformational approach to program construction and verification.
5.
Formal proofs in mathematics and computer science are being studied because these objects can be verified by a very simple computer program. An important open problem is whether these formal proofs can be generated with an effort not much greater than writing a mathematical paper in, say, LaTeX. Modern systems for proof development make the formalization of reasoning relatively easy. However, formalizing computations in such a manner that the results can be used in formal proofs is not immediate. In this paper we show how to obtain formal proofs of statements such as Prime(61) in the context of Peano arithmetic or (x+1)(x+1) = x^2 + 2x + 1 in the context of rings. We hope that the method will help bridge the gap between the efficient systems of computer algebra and the reliable systems of proof development.
6.
Paul Feautrier 《International journal of parallel programming》1991,20(1):23-53
Given a program written in a simple imperative language (assignment statements, for loops, affine indices and loop limits), this paper presents an algorithm for analyzing the patterns along which values flow as the execution proceeds. For each array or scalar reference, the result is the name and iteration vector of the source statement as a function of the iteration vector of the referencing statement. The paper discusses several applications of the method: conversion of a program to a set of recurrence equations, array and scalar expansion, program verification and parallel program construction.
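The flavor of the analysis result can be shown on a two-loop example. The sketch below hand-derives the source function for one read; the general algorithm computes such closed forms symbolically, and both the example program and the names are ours:

```python
def source_of_read(i, n):
    """Dataflow source for the read a[i-1] at statement S2, iteration i, in:

        for i in range(n): a[i] = f(i)      # S1
        for i in range(n): b[i] = a[i-1]    # S2

    Returns the producing statement and its iteration vector as a closed
    form in the reading iteration (the shape of the paper's result), or
    None when the value flows in from outside the loop nest."""
    if 0 <= i - 1 < n:
        return ("S1", i - 1)
    return None   # a[-1] is never written by the nest

assert source_of_read(0, 10) is None          # uninitialized read
assert source_of_read(5, 10) == ("S1", 4)     # value produced by S1 at i=4
```

Having the source as a function of the reading iteration is what enables the listed applications, e.g. rewriting S2 as the recurrence b[i] = g(f(i-1)) directly.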
7.
A Satisfiability-Based Verification Strategy for Concurrent Software
This paper presents a SAT-based bounded model checking procedure for the linear temporal logic SE-LTL, which avoids the rapid state-space growth of BDD-based methods. For the subset SE-LTL\X, stuttering equivalence is integrated into the bounded model checking procedure, which effectively accelerates verification. Building on this, a verification strategy for concurrent software is proposed that combines three state-space reduction techniques: SAT-based bounded model checking, counterexample-guided abstraction refinement, and compositional reasoning. In this strategy, abstraction and refinement are performed independently on each component, and the model checking process is symbolic. Case studies show that the strategy reduces both verification time and memory requirements.
8.
Alexandros V. Gerbessiotis 《International journal of parallel programming》2010,38(2):159-182
We show how computations such as those involved in American- or European-style option price valuations with the explicit finite difference method can be performed in parallel. Towards this we introduce a latency-tolerant parallel algorithm for performing such computations efficiently that achieves optimal theoretical speedup p, where p is the number of processors of the parallel system. An implementation of the parallel algorithm has been undertaken, and an evaluation of its performance is carried out by an experimental study on a high-latency PC cluster and, at a smaller scale, on a multi-core processor using in addition the SWARM parallel computing framework for multi-core processors. Our implementation of the parallel algorithm is not only architecture but also communication-library independent: the same code works under MPI (both LAM-MPI and Open MPI) and under BSPlib, two library frameworks that facilitate parallel programming. The suitability of our approach to multi-core processors is also established.
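The sequential kernel being parallelized can be sketched as follows. This is the textbook explicit finite-difference scheme for a European call under Black-Scholes, not the authors' code; the grid sizes and parameters are illustrative:

```python
import math

def euro_call_explicit_fd(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                          S_max=200.0, N=100, M=5000):
    """European call priced by the explicit finite-difference method
    (serial version; the paper parallelizes this kind of time-stepping).
    Grid: S_i = i*dS, i = 0..N, stepping backwards in time from expiry.
    Stability requires dt small enough that 1 - dt*(sigma^2*N^2 + r) >= 0."""
    dS = S_max / N
    dt = T / M
    V = [max(i * dS - K, 0.0) for i in range(N + 1)]   # payoff at expiry
    for m in range(M):
        tau = (m + 1) * dt            # time to expiry after this step
        new = V[:]
        for i in range(1, N):
            a = 0.5 * dt * (sigma**2 * i**2 - r * i)
            b = 1.0 - dt * (sigma**2 * i**2 + r)
            c = 0.5 * dt * (sigma**2 * i**2 + r * i)
            new[i] = a * V[i - 1] + b * V[i] + c * V[i + 1]
        new[0] = 0.0                                   # call worthless at S=0
        new[N] = S_max - K * math.exp(-r * tau)        # deep in-the-money
        V = new
    return V[round(S0 / dS)]          # S0 assumed to lie on the grid

price = euro_call_explicit_fd()
```

For S0 = K = 100, r = 0.05, sigma = 0.2, T = 1 the result should be close to the Black-Scholes value of about 10.45. The per-time-step update of all grid points is what the parallel algorithm distributes across processors.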
9.
The notion of partial eigenstructure assignment (PEA) via linear state feedback control in linear multivariable systems is introduced. This notion is a natural extension of eigenstructure assignment and partial eigenvalue assignment. Some theoretical basis for PEA is provided, and a parametric expression for feedback gain matrices achieving PEA is derived. An effective numerical algorithm for PEA tailored to large-scale systems is presented. As an extension of the algorithm, a recursive algorithm for eigenstructure assignment is presented. These algorithms possess the following desired properties: (1) compared to existing methods, the presented algorithms significantly reduce the required computation time via transforming high-dimensional matrix computations into low-dimensional matrix computations; (2) they can be implemented in a parallel fashion. The proposed algorithm for PEA is applied to modal control of large flexible space structure systems.
10.
Binary synchronization has been used extensively in the construction of mathematical models for the verification of embedded systems. Although it allows for the modeling of complex cooperation among many processes in a natural environment, not many tools have been developed to support the modeling capability in this regard. In this article, we first give examples to argue that special algorithms are needed for the efficient verification of systems with complex synchronizations. We then define our models of distributed real-time systems with synchronized cooperation among many processes. We present algorithms for the construction of BDD-like diagrams for the characterization of complex synchronizations among many processes. We present weakest precondition algorithms that take advantage of the just-mentioned BDD-like diagrams for the efficient verification of complex real-time systems. Finally, we report experiments and argue that the techniques could be useful in practice.
11.
Axel Wabenhorst 《Acta Informatica》2003,39(4):233-271
A theory of fairness which supports the specification and development of a wide variety of “fair” systems is developed. The definition of fairness presented is much more general than the standard forms of weak and strong fairness, allowing the uniform treatment of many different kinds of fairness within the same formalism, such as probabilistic behaviour, for example. The semantic definition of fairness consists of a safety condition on finite sequences of actions and a liveness or termination condition on the fair infinite sequences of actions. The definition of the predicate transformer of a fair action system permits the use of the existing framework for program development, including the existing definitions of refinement and data refinement, thus avoiding an ad hoc treatment of fairness. The theory includes results that support the modular development of fair action systems, like monotonicity, adding skips, and data refinement. The weakest precondition and the weakest errorfree precondition are unified, so that in particular a standard action system is a special case of a fair action system. The results are illustrated with the development from specification of an unreliable buffer.
Received: 3 January 2000 / 17 November 2002
12.
This paper describes a verification framework for Hoare-style pre- and post-conditions of programs manipulating balanced tree-like data structures. Since the considered verification problem is undecidable, we appeal to the standard semi-algorithmic approach in which the user has to provide loop invariants, which are then automatically checked together with the program pre- and post-conditions. We specify sets of program states, representing tree-like memory configurations, using Tree Automata with Size Constraints (TASCs). The main advantage of this new class of tree automata is that they recognise tree languages based on arithmetic reasoning about the lengths of various (possibly all) paths in trees, as, e.g., in AVL trees or red-black trees. TASCs are closed under union, intersection, and complement, and their emptiness problem is decidable. Thus we obtain a class of automata which are an interesting theoretical contribution in their own right. Further, we show that, under a few restrictions, one can automatically compute the effect of tree-updating program statements on the set of configurations represented by a TASC, which makes TASCs a practical verification tool. We tried out our approach on the insertion procedure for red-black trees, for which we verified that the output on an arbitrary balanced red-black tree is also a balanced red-black tree.
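The invariant verified for red-black trees, equal black-node counts on all root-to-leaf paths and no red node with a red child, is exactly the kind of path-length constraint a TASC encodes symbolically. The runtime checker below is our own illustration of the property, not the TASC machinery:

```python
class Node:
    def __init__(self, color, left=None, right=None):
        self.color, self.left, self.right = color, left, right  # 'R' or 'B'

def black_height(t):
    """Return the black-height of t if it satisfies the red-black shape
    invariants (empty children count as black leaves; the root may be
    red here), or None if an invariant is violated. Checks: no red node
    has a red child, and every root-to-leaf path carries the same number
    of black nodes - the path constraints a TASC captures symbolically."""
    if t is None:
        return 1
    if t.color == 'R':
        for child in (t.left, t.right):
            if child is not None and child.color == 'R':
                return None                    # red node with red child
    lh, rh = black_height(t.left), black_height(t.right)
    if lh is None or rh is None or lh != rh:
        return None                            # unbalanced black heights
    return lh + (1 if t.color == 'B' else 0)

ok = Node('B', Node('R'), Node('R'))           # valid, black-height 2
bad = Node('B', Node('R', Node('R')), None)    # red child of a red node
assert black_height(ok) == 2
assert black_height(bad) is None
```

What the paper achieves is stronger than such runtime checking: it proves, over the set of all configurations represented by a TASC, that insertion preserves the invariant.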
13.
This paper presents an advanced software system for solving the flexible manufacturing systems (FMS) scheduling in a job-shop environment with routing flexibility, where the assignment of operations to identical parallel machines has to be managed, in addition to the traditional sequencing problem. Two of the most promising heuristics from nature for a wide class of combinatorial optimization problems, genetic algorithms (GA) and ant colony optimization (ACO), share data structures and co-evolve in parallel in order to improve the performance of the constituent algorithms. A modular approach is also adopted in order to obtain an easily scalable parallel evolutionary-ant colony framework. The performance of the proposed framework on properly designed benchmark problems is compared with effective GA and ACO approaches taken as algorithm components.
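The genetic-algorithm half of such a framework can be sketched on the machine-assignment subproblem alone. The toy below (our own encoding and parameters, without the ACO component or co-evolution) assigns jobs to identical parallel machines to minimize makespan:

```python
import random

def ga_schedule(times, machines=2, pop=30, gens=60, seed=1):
    """Toy GA for the machine-assignment part of FMS scheduling:
    a chromosome is a machine index per job; fitness is the makespan
    (load of the most loaded machine), to be minimized."""
    rng = random.Random(seed)
    n = len(times)

    def makespan(chrom):
        load = [0] * machines
        for job, mach in enumerate(chrom):
            load[mach] += times[job]
        return max(load)

    popn = [[rng.randrange(machines) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=makespan)
        survivors = popn[:pop // 2]                 # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]             # one-point crossover
            child[rng.randrange(n)] = rng.randrange(machines)  # mutation
            children.append(child)
        popn = survivors + children
    return makespan(min(popn, key=makespan))

best = ga_schedule([4, 7, 2, 5, 3, 6, 1], machines=2)  # total work = 28
```

On this instance the optimal makespan is 14 (a perfect split of the total work of 28 over two machines), which the GA finds easily given the tiny search space.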
14.
Karam Abd Elkader Orna Grumberg Corina S. Păsăreanu Sharon Shoham 《Formal Aspects of Computing》2018,30(5):571-595
Model checking is a successful approach for verifying hardware and software systems. Despite its success, the technique suffers from the state explosion problem which arises due to the large state space of real-life systems. One solution to the state explosion problem is compositional verification, which aims to decompose the verification of a large system into the more manageable verification of its components. To account for dependencies between components, assume-guarantee reasoning defines rules that break up the global verification of a system into local verification of individual components, using assumptions about the rest of the system. In recent years, compositional techniques have gained significant successes following a breakthrough in the ability to automate assume-guarantee reasoning. However, automation has been restricted to simple acyclic assume-guarantee rules. In this work, we focus on automating circular assume-guarantee reasoning in which the verification of individual components mutually depends on each other. We use a sound and complete circular assume-guarantee rule and we describe how to automatically build the assumptions needed for using the rule. Our algorithm accumulates joint constraints on the assumptions based on (spurious) counterexamples obtained from checking the premises of the rule, and uses a SAT solver to synthesize minimal assumptions that satisfy these constraints. To the best of our knowledge, our work is the first to fully automate circular assume-guarantee reasoning. We implemented our approach and compared it with established non-circular compositional methods that use learning or SAT-based techniques. The experiments show that the assumptions generated for the circular rule are generally smaller, and on the larger examples, we obtain a significant speedup.
15.
This paper defines an algorithm for predicting worst-case and best-case execution times, and determining execution-time constraints of control-flow paths through real-time programs using their partial correctness semantics. The algorithm produces a linear approximation of path traversal conditions, worst-case and best-case execution times and strongest postconditions for timed paths in abstract real-time programs. Also shown are techniques for determining the set of control-flow paths with decidable worst-case and best-case execution times. The approach is based on a weakest liberal precondition semantics and relies on supremum and infimum calculations similar to standard computations from linear programming and Presburger arithmetic. The methodology is applicable to any executable language with a predicate transformer semantics and hence provides a verification basis for both high-level language and assembly code execution-time analysis.
16.
Wim H. Hesselink 《Acta Informatica》1989,26(4):309-332
We develop the semantics of a language with arbitrary atomic statements, unbounded nondeterminacy, and mutual recursion. The semantics is expressed in weakest preconditions and weakest liberal preconditions. Individual states are not mentioned. The predicates on the state space are treated as elements of a distributive lattice. The semantics of recursion is constructed by means of the theorem of Knaster-Tarski. It is proved that the law of the excluded miracle can be preserved, if that is wanted. The universal conjunctivity of the weakest liberal precondition, and the connection between the weakest precondition and the weakest liberal precondition are proved to remain valid. Finally we treat Hoare-triple methods for proving correctness and conditional correctness of programs.
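The Knaster-Tarski construction used here for recursion can be made concrete on a finite state space. The sketch below is our own simplification: a loop (a special case of recursion) over the finite lattice of state sets, with the weakest precondition computed as a least fixpoint:

```python
def wp_loop(states, guard, body_wp, post):
    """Weakest precondition of `while guard: body` over a finite state
    space, as the least fixpoint of
        X  =  (not guard and post)  or  (guard and body_wp(X)),
    iterated upwards from the empty predicate (Knaster-Tarski on the
    lattice of state sets). `body_wp` maps a target set to the set of
    states from which one execution of the body lands in it."""
    X = frozenset()
    while True:
        nxt = frozenset(s for s in states
                        if (not guard(s) and s in post)
                        or (guard(s) and s in body_wp(X)))
        if nxt == X:
            return X
        X = nxt

# Example: states 0..10, loop `while x > 0: x -= 1`, postcondition x == 0.
states = range(11)
guard = lambda x: x > 0
body_wp = lambda target: frozenset(x for x in states if x - 1 in target)
post = frozenset({0})
assert wp_loop(states, guard, body_wp, post) == frozenset(states)
```

Starting the iteration from the empty set yields the *least* fixpoint, which is what makes this a total-correctness wp: states with nonterminating behaviour would never enter X.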
17.
Most existing distributed parallel inference algorithms for Resource Description Framework (RDF) data require launching multiple MapReduce jobs, and some of them handle RDFS/OWL rules whose antecedents contain instance triples inefficiently, lowering overall inference performance. To address this, this paper proposes a distributed parallel inference algorithm for RDF data that incorporates Rete (DRRM). First, a schema-triple list and a rule-marking model are built from the RDF ontology. In the RDFS/OWL inference stage, the alpha and beta phases of the Rete algorithm are implemented with MapReduce. The inference results are then deduplicated, completing one full pass of RDFS/OWL rule inference. Experiments show that the algorithm performs parallel inference over large-scale data correctly and efficiently.
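The rule application being distributed can be illustrated by a naive single-machine fixpoint over two RDFS entailment rules. This is our own sketch of the inference step itself, not the DRRM algorithm or its MapReduce implementation:

```python
def rdfs_closure(triples):
    """Naive forward chaining of two RDFS rules (rdfs11: subClassOf
    transitivity; rdfs9: type propagation through subClassOf) to a
    fixpoint - the kind of rule application the paper distributes by
    matching schema ("pattern") triples against instance triples."""
    SUB, TYPE = "rdfs:subClassOf", "rdf:type"
    facts = set(triples)
    while True:
        new = set()
        for (s, p, o) in facts:
            for (s2, p2, o2) in facts:
                if p == SUB and p2 == SUB and o == s2:
                    new.add((s, SUB, o2))       # rdfs11
                if p == TYPE and p2 == SUB and o == s2:
                    new.add((s, TYPE, o2))      # rdfs9
        if new <= facts:
            return facts                        # fixpoint reached
        facts |= new

kb = {(":alice", "rdf:type", ":Student"),
      (":Student", "rdfs:subClassOf", ":Person"),
      (":Person", "rdfs:subClassOf", ":Agent")}
closed = rdfs_closure(kb)
assert (":alice", "rdf:type", ":Agent") in closed
```

The quadratic join over all fact pairs is exactly what becomes expensive at scale; distributing it (and deduplicating the derived triples) is the point of the MapReduce formulation.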
18.
Image classification and recognition is an important component of computer vision systems, and correct, fast extraction of target feature parameters is a prerequisite for it. To address the heavy computational cost of existing algorithms for extracting geometric feature parameters from images, this paper proposes a fast geometric feature extraction algorithm. The algorithm first encodes each connected region of the image with a vertex chain code, mapping it to a closed region, and then combines the direction codes of the vertex chain code with the discrete Green's theorem, converting surface integrals into curve integrals so that many geometric feature parameters can be computed within a single unified framework. Experiments show that the method computes multiple parameters uniformly, has low computational cost, and is applicable to special regions.
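The boundary-integral idea, turning an area (surface) computation into a sum along the boundary code via the discrete Green's theorem, can be sketched as follows. Note that this illustration uses a 4-direction Freeman chain code rather than the paper's vertex chain code:

```python
def chain_code_area(start, code):
    """Area enclosed by a closed boundary given as a 4-direction Freeman
    chain code (0 = +x, 1 = +y, 2 = -x, 3 = -y), accumulated with the
    discrete Green's theorem / shoelace increment A = sum(x*dy - y*dx)/2.
    The area is obtained from the boundary code alone, with one cheap
    update per boundary step, without filling or scanning the region."""
    moves = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}
    x, y = start
    area2 = 0
    for c in code:
        dx, dy = moves[c]
        area2 += x * dy - y * dx
        x, y = x + dx, y + dy
    assert (x, y) == start, "chain code must be closed"
    return abs(area2) / 2

# A 3x2 rectangle traced counter-clockwise from (0, 0):
assert chain_code_area((0, 0), [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]) == 6.0
```

Other moments (centroid, higher-order moments) admit the same treatment: each becomes a per-step polynomial update along the boundary, which is what gives the method its uniform, low-cost framework.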
19.
Since a static work distribution does not allow for satisfactory speed‐ups of parallel irregular algorithms, there is a need for a dynamic distribution of work and data that can be adapted to the runtime behavior of the algorithm. Task pools are data structures which can distribute tasks dynamically to different processors where each task specifies computations to be performed and provides the data for these computations. This paper discusses the characteristics of task‐based algorithms and describes the implementation of selected types of task pools for shared‐memory multiprocessors. Several task pools have been implemented in C with POSIX threads and in Java. The task pools differ in the data structures to store the tasks, the mechanism to achieve load balance, and the memory manager used to store the tasks. Runtime experiments have been performed on three different shared‐memory systems using a synthetic algorithm, the hierarchical radiosity method, and a volume rendering algorithm. Copyright © 2004 John Wiley & Sons, Ltd.
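A minimal central task pool can be sketched in a few lines. The version below is our own simplification (one shared queue, idle-timeout termination) and shows the dynamic task creation that distinguishes task pools from a static work distribution:

```python
import queue
import threading

results = []
res_lock = threading.Lock()

def count_down(n):
    """Toy irregular task: records n and spawns a follow-up task,
    so the amount of work is only discovered at runtime."""
    with res_lock:
        results.append(n)
    return [(count_down, n - 1)] if n > 0 else []

def run_task_pool(initial_tasks, n_workers=4):
    """Minimal central task pool for shared memory: each worker
    repeatedly takes a (func, arg) task, runs it, and enqueues the
    follow-up tasks it returns. Real task pools (as studied in the
    paper) vary the storage structure, the load-balancing mechanism,
    and the memory manager; termination here is a simple idle timeout."""
    pool = queue.Queue()
    for task in initial_tasks:
        pool.put(task)

    def worker():
        while True:
            try:
                func, arg = pool.get(timeout=0.2)   # idle long enough: done
            except queue.Empty:
                return
            for child in func(arg):
                pool.put(child)                     # dynamic task creation

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

run_task_pool([(count_down, 3), (count_down, 2)])
```

After the run, `results` holds 3, 2, 1, 0 from the first chain and 2, 1, 0 from the second, in some interleaved order. The idle-timeout termination is a simplification; production pools track outstanding tasks explicitly.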
20.
Pascal Cuoq Benjamin Monate Anne Pacalet Virgile Prevosto 《International Journal on Software Tools for Technology Transfer (STTT)》2011,13(5):405-417
We present functional dependencies, a convenient, formal, but high-level specification format for a piece of procedural software (a function). Functional dependencies specify the set of memory locations that may be modified by the function and, for each modified location, the set of memory locations that influence its final value. Verifying that a function respects pre-defined functional dependencies can be tricky: the embedded world uses C and Ada, which have arrays and pointers. Existing systems we know of that manipulate functional dependencies, Caveat and SPARK, are restricted to pointer-free subsets of these languages. This article deals with functional dependencies in a programming language with full aliasing. We show how to use a weakest precondition calculus to generate a verification condition for pre-existing functional dependency requirements. This verification condition can then be checked using automated theorem provers or proof assistants. With our approach, it is possible to verify the specification as it was written beforehand. We assume little about the implementation of the verification condition generator itself. Our study takes place inside the C analysis framework Frama-C, where an experimental implementation of the technique described here has been implemented on top of the WP plug-in in the development version of the tool.