Similar Documents
20 similar documents found (search time: 265 ms).
1.
We present in this paper an approach for mechanically certifying the correctness of systolic algorithms and detail the correctness proof of a systolic algorithm for a dynamic programming (optimal parenthesization) problem.
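For context, the optimal parenthesization problem this paper targets is the classic matrix-chain ordering problem. Below is a minimal sequential dynamic-programming sketch of that problem, included only for orientation; the paper's contribution is a mechanically verified systolic algorithm for it, not this code.

```python
# Sequential dynamic-programming sketch of optimal parenthesization
# (matrix-chain ordering). Illustrates the underlying problem only.

def matrix_chain_cost(dims):
    """dims[i], dims[i+1] are the dimensions of matrix i.
    Returns the minimum number of scalar multiplications."""
    n = len(dims) - 1  # number of matrices
    # cost[i][j] = cheapest way to multiply matrices i..j (inclusive)
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j]
                + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]

# Example: (10x30)(30x5)(5x60) -> 4500 scalar multiplications
print(matrix_chain_cost([10, 30, 5, 60]))
```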

2.
In this paper we present a new algorithm, DBMIN, for managing the buffer pool of a relational database management system. DBMIN is based on a new model of relational query behavior, the query locality set model (QLSM). Like the hot set model, the QLSM has an advantage over the stochastic models due to its ability to predict future reference behavior. However, the QLSM avoids the potential problems of the hot set model by separating the modeling of reference behavior from any particular buffer management algorithm. After introducing the QLSM and describing the DBMIN algorithm, we present a performance evaluation methodology for evaluating buffer management algorithms in a multiuser environment. This methodology employs a hybrid model that combines features of both trace-driven and distribution-driven simulation models. Using this model, the performance of the DBMIN algorithm in a multiuser environment is compared with that of the hot set algorithm and four more traditional buffer replacement algorithms.
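As a rough illustration of the locality-set idea (a simplified sketch, not the paper's full DBMIN policy): each file instance accessed by a query is granted its own set of buffer frames sized to the query's reference pattern, and a query is admitted only if the sum of its requested locality sets fits in the pool. The class and sizing rule below are illustrative assumptions.

```python
# Simplified sketch in the spirit of DBMIN/QLSM admission control.
# The real algorithm derives locality-set sizes from query reference
# patterns and uses per-set local replacement policies.

class BufferPool:
    def __init__(self, total_frames):
        self.total = total_frames
        self.allocated = 0          # frames promised to running queries
        self.sets = {}              # query id -> total frames granted

    def admit(self, qid, locality_sets):
        """locality_sets: {file_instance: frames_needed}. Admit the
        query only if all its locality sets fit in the free frames."""
        need = sum(locality_sets.values())
        if self.allocated + need > self.total:
            return False            # load control: defer the query
        self.sets[qid] = need
        self.allocated += need
        return True

    def finish(self, qid):
        self.allocated -= self.sets.pop(qid)

pool = BufferPool(total_frames=100)
print(pool.admit(1, {"emp_scan": 1, "dept_index": 4}))  # True
print(pool.admit(2, {"big_join": 120}))                 # False: would thrash
```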

3.
The probabilistic guarded-command language (pGCL) contains both demonic and probabilistic non-determinism, which makes it suitable for reasoning about distributed random algorithms. Proofs are based on weakest precondition semantics, using an underlying logic of real- (rather than Boolean-)valued functions. We present a mechanization of the quantitative logic for pGCL using the HOL theorem prover, including a proof that all pGCL commands satisfy the new condition of sublinearity, the quantitative generalization of conjunctivity for standard GCL. The mechanized theory also supports the creation of an automatic proof tool which takes as input an annotated pGCL program and its partial correctness specification, and derives from that a sufficient set of verification conditions. This is employed to verify the partial correctness of the probabilistic voting stage in Rabin's mutual-exclusion algorithm.
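For reference, the sublinearity condition of McIver and Morgan's expectation-transformer semantics states that for all non-negative reals $c_1, c_2, c$ and all expectations (real-valued "post-conditions") $f_1, f_2$:

$$wp.S.(c_1 f_1 + c_2 f_2 \ominus c) \;\geq\; c_1\,(wp.S.f_1) + c_2\,(wp.S.f_2) \ominus c,$$

where $a \ominus b \triangleq \max(a - b, 0)$ is truncated subtraction. Restricted to $\{0,1\}$-valued functions, this specializes to the conjunctivity healthiness condition of standard GCL.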

4.
Certain answers are a widely accepted semantics of query answering over incomplete databases. As their computation is a coNP-hard problem, recent research has focused on developing (polynomial time) evaluation algorithms with correctness guarantees, that is, techniques computing a sound but possibly incomplete set of certain answers. The aim is to make the computation of certain answers feasible in practice, settling for under-approximations. In this paper, we present novel evaluation algorithms with correctness guarantees, which provide better approximations than current techniques while retaining polynomial time data complexity. The central tools of our approach are conditional tables and the conditional evaluation of queries. We propose different strategies to evaluate conditions, leading to different approximation algorithms: more accurate evaluation strategies have higher running times, but they pay off with more certain answers being returned. Thus, our approach offers a suite of approximation algorithms enabling users to choose the technique that best meets their needs in terms of the balance between efficiency and quality of the results.
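A minimal sketch of what "sound but possibly incomplete" means here, under simplifying assumptions (one table with labeled nulls, a single selection predicate); this illustrates under-approximation in general, not the paper's conditional-evaluation algorithms:

```python
# Sound under-approximation of certain answers for a selection query
# over a table with nulls (None). A tuple is reported only if the
# predicate holds under every possible interpretation of its nulls,
# so every returned answer is certain, but some certain answers may
# be missed. Deliberately naive; for illustration only.

def certain_select(rows, col, value):
    answers = []
    for row in rows:
        v = row[col]
        if v is None:
            continue            # unknown: predicate might fail, so skip
        if v == value:
            answers.append(row) # holds under every interpretation
    return answers

rows = [("alice", "paris"), ("bob", None), ("carol", "paris")]
print(certain_select(rows, 1, "paris"))  # bob is (soundly) omitted
```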

5.
In this paper, a design methodology for synthesizing efficient parallel algorithms and VLSI architectures is presented. A design process starts with a problem definition specified in the parallel programming language Crystal and is followed by a series of program transformations in Crystal, each aiming at optimizing the target design for a specific purpose. To illustrate the design methodology, a set of design methods for deriving systolic algorithms and architectures is given and the use of these methods in the design of a dynamic programming solver is described. The design methodology, together with this particular set of design methods, can be viewed as a general theory of systolic designs (or multidimensional pipelines). The fact that Crystal is a general purpose language for parallel programming allows new design methods and synthesis techniques, properties and theorems about problems in specific application domains, and new insights into any given problem to be integrated readily within the existing design framework.

6.
It has been observed by many researchers that systolic arrays are very suitable for certain high-speed computations. Using a formal methodology, we present a design for a single, simple, programmable linear systolic array capable of solving a large number of problems drawn from a variety of applications. The methodology is applicable to problems solvable by sequential algorithms that can be specified as nested for-loops of arbitrary depth. The algorithms of this form that can be computed on the array presented in this paper include 25 algorithms dealing with signal and image processing, algebraic computations, matrix arithmetic, pattern matching, database operations, sorting, and transitive closure. Assuming bounded I/O, for 18 of those algorithms the time and storage complexities are optimal, and therefore no improvement can be expected by using dedicated special-purpose linear systolic arrays designed for individual algorithms. We also describe another design which, using a sufficiently large local memory and allowing data to be preloaded and unloaded, has an optimal processor/time product. An earlier version of this paper was presented at Supercomputing '88. This work was partially supported by ONR under contract N00014-85-K-0046 and by NSF under Grant Number CCR-8906949.

7.
Many critical real-time applications are implemented as time-triggered systems. We present a systematic way to derive such time-triggered implementations from algorithms specified as functional programs (in which form their correctness and fault-tolerance properties can be formally and mechanically verified with relative ease). The functional program is first transformed into an untimed synchronous system and then to its time-triggered implementation. The first step is specific to the algorithm concerned, but the second is generic, and we prove its correctness. This proof has been formalized and mechanically checked with the PVS verification system. The approach provides a methodology that can ease the formal specification and assurance of critical fault-tolerant systems.

8.
Energy-efficient backbone construction is one of the most important objectives in a wireless sensor network (WSN); to construct a more robust backbone, weighted connected dominating sets can be used, where the weight of each node reflects its energy. In this study, we propose algorithms for this purpose, classified as weighted dominating set algorithms and weighted Steiner tree algorithms, which are used together to construct a weighted connected dominating set (WCDS). We provide fully distributed algorithms with semi-asynchronous versions. We describe the design of the algorithms, prove their correctness, analyze their time, message, and space complexities, and provide simulation results in the ns2 environment. We show that the approximation ratio of our algorithms is 3 ln(S), where S is the total weight of the optimum solution. To the best of our knowledge, our algorithms are the first fully distributed and semi-asynchronous WCDS algorithms with a 3 ln(S) approximation ratio. We compare our proposed algorithms with the related work and show that our algorithms select backbones with lower cost and fewer nodes.
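For intuition, the classical centralized greedy heuristic for minimum-weight dominating sets (repeatedly pick the node with the best weight-per-newly-dominated-node ratio) is sketched below; it illustrates the kind of logarithmic approximation factor involved, not the authors' distributed, semi-asynchronous algorithms.

```python
# Centralized greedy heuristic for minimum-weight dominating set,
# shown only to illustrate the ln-style approximation idea behind
# WCDS construction; the paper's algorithms are fully distributed.

def greedy_weighted_dominating_set(adj, weight):
    """adj: {node: set(neighbors)}, weight: {node: positive float}."""
    undominated = set(adj)
    chosen = []
    while undominated:
        # cost-effectiveness: weight per newly dominated node
        def ratio(v):
            newly = len((adj[v] | {v}) & undominated)
            return weight[v] / newly if newly else float("inf")
        best = min(adj, key=ratio)
        chosen.append(best)
        undominated -= adj[best] | {best}
    return chosen

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
weight = {1: 5.0, 2: 1.0, 3: 2.0, 4: 1.0}
print(greedy_weighted_dominating_set(adj, weight))  # [2, 4]
```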

9.
The Timed Concurrent Constraint programming language (tccp) introduces time aspects into the Concurrent Constraint paradigm. This makes tccp especially appropriate for analyzing timing properties of concurrent systems by model checking. However, even if very compact state representations are obtained thanks to the use of constraints in tccp, large state spaces can still be generated, which may prevent model-checking tools from verifying tccp programs completely. Model checking tccp programs is a difficult task due to the subtleties of the underlying operational semantics, which combines constraints, concurrency, non-determinism and time. Currently, there is no practical model-checking tool that is applicable to tccp. In this work, we introduce an abstract methodology which is based on over- and under-approximating tccp models and which mitigates the state explosion problem that is common to traditional model-checking algorithms. We ascertain the conditions for the correctness of the abstract technique and show that this preliminary abstract semantics does not correctly simulate the suspension behavior, which is a key feature of tccp. Then, we present a refined abstract semantics which correctly models suspension. Finally, we complete our methodology by approximating the temporal properties that must be verified.

10.
Let S be a set of n points in a fixed axis-parallel rectangle R ⊆ ℝ², i.e. in the two-dimensional space (2D). Assuming that those points are stored in an R-tree, this paper presents several algorithms for finding the empty rectangle in R with the largest area, sides parallel to the axes of the space, and containing only a query point q. This point cannot be part of S; that is, it is not stored in the R-tree. All algorithms follow the basic idea of discarding part of the points of S, in such a way that the problem can be solved by considering only the remaining points. As a consequence, the algorithms only have to access a very small portion of the nodes (disk blocks) of the R-tree, saving main memory resources and computation time. We provide formal proofs of the correctness of our algorithms and, in order to evaluate their performance, we ran an extensive set of experiments using synthetic and real data. The results demonstrate the efficiency and scalability of our algorithms for different dataset configurations.
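A naive baseline for the underlying geometric problem (enumerate candidate left/right boundaries taken from point x-coordinates or R's edges, then extend vertically as far as the points allow) is sketched below. It runs in roughly O(n³) time and ignores the R-tree entirely; the paper's contribution is doing far better by pruning R-tree nodes.

```python
# Naive baseline: largest empty axis-parallel rectangle inside
# R = (x0, y0, x1, y1) that contains query point q = (qx, qy) and no
# point of S in its interior. Illustration only; assumes general
# position for points at q's exact height.

def largest_empty_rectangle(points, R, q):
    x0, y0, x1, y1 = R
    qx, qy = q
    lefts = [x0] + [px for px, _ in points if px < qx]
    rights = [x1] + [px for px, _ in points if px > qx]
    best = None
    for l in lefts:
        for r in rights:
            inside = [(px, py) for px, py in points if l < px < r]
            if any(py == qy for _, py in inside):
                continue  # a point at q's height blocks this x-range
            top = min([py for _, py in inside if py > qy], default=y1)
            bot = max([py for _, py in inside if py < qy], default=y0)
            area = (r - l) * (top - bot)
            if best is None or area > best[0]:
                best = (area, (l, bot, r, top))
    return best

pts = [(2, 2), (6, 7), (8, 3)]
print(largest_empty_rectangle(pts, (0, 0, 10, 10), (5, 5)))
# -> (48, (0, 2, 6, 10))
```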

11.
E. Bach, following an idea of T. Itoh, has shown how to build a small set of numbers modulo a prime p such that at least one element of this set is a generator of Z/pZ. E. Bach also suggests that at least half of his set should be generators. We show here that a slight variant of this set can indeed be made to contain a ratio of primitive roots as close to 1 as necessary. In particular, we present an asymptotically fast algorithm providing primitive roots of p with probability of correctness greater than 1 − ε, and several O(log^α(p)), α ≤ 5.23, algorithms computing "industrial-strength" primitive roots.
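For background, the standard deterministic test that g is a primitive root modulo a prime p, given the prime factorization of p − 1, checks that g^((p−1)/q) ≢ 1 (mod p) for every prime factor q of p − 1. A short sketch (the point of "industrial-strength" primitive roots is precisely to relax the need for this full factorization, trading it for probabilistic guarantees):

```python
# Standard primitive-root test modulo a prime p, given the prime
# factors of p - 1: g is a primitive root iff g^((p-1)/q) != 1 (mod p)
# for every prime factor q of p - 1. Background only.

def is_primitive_root(g, p, prime_factors_of_p_minus_1):
    return all(pow(g, (p - 1) // q, p) != 1
               for q in prime_factors_of_p_minus_1)

# p = 23, p - 1 = 22 = 2 * 11
print(is_primitive_root(5, 23, [2, 11]))   # True: 5 generates (Z/23Z)*
print(is_primitive_root(2, 23, [2, 11]))   # False: 2 only has order 11
```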

12.
We present a generic scheme for the declarative debugging of programs that are written in rewriting-based languages that are equipped with narrowing. Our aim is to provide an integrated development environment in which it is possible to debug a program and then correct it automatically. Our methodology is based on the combination (in a single framework) of a semantics-based diagnoser that identifies those parts of the code that contain errors and an inductive learner that tries to repair them, once the bugs have been located in the program. We develop our methodology in several steps. First, we associate with our programs a semantics that is based on a (continuous) immediate consequence operator, T_R, which models the answers computed by narrowing and is parametric w.r.t. the evaluation strategy, which can be eager or lazy. Then, we show that, given the intended specification of a program R, it is possible to check the correctness of R by a single step of T_R. In order to develop an effective debugging method, we approximate the computed answers semantics of R and derive a finitely terminating bottom-up abstract diagnosis method, which can be used statically. Finally, a bug-correction program synthesis methodology attempts to correct the erroneous components of the wrong code. We propose a hybrid, top-down (unfolding-based) as well as bottom-up (induction-based), correction approach that is driven by a set of evidence examples which are automatically produced as an outcome by the diagnoser. The resulting program is proven to be correct and complete w.r.t. the considered example sets. Our debugging framework does not require the user to provide error symptoms in advance or to answer difficult questions concerning program correctness. An implementation of our debugging system has been undertaken which demonstrates the workability of our approach.

13.
Context: The design of complex systems demands methodologies for analyzing their correct behaviour, and correct behaviour is often determined by compliance with temporal requirements. Currently, testing is the most widely used technology for validating the correctness of systems. Although several techniques that take time aspects into account have been proposed, most of them require that the tester interact with the system. When this is not possible, a passive testing approach must be applied, in which the tester monitors the behaviour of the system. Objective: The aim of this paper is to propose a methodology for performing passive testing on communicating systems whose components must fulfill temporal restrictions associated with both performance and delays/timeouts. Method: Our framework uses algorithms for checking traces collected from the systems against invariants, which formally represent the most relevant properties that the system must fulfill. To support the feasibility of the methodology, we performed an empirical study on a complex system for automatic recognition of images based on a pipeline architecture, analyzing the correctness of the system's behaviour with respect to a set of invariants. Finally, an experiment based on mutations of the system was conducted to study the detection power of that set of invariants. Results: Different errors were detected and fixed during the development of the system by means of the proposed methodology. The results of the experiments with the mutated versions of the system indicate that the designed set of invariants was more effective at finding errors associated with temporal aspects than those related to communication among components. Conclusion: The proposed technique has been shown to be very useful for analyzing complex timed systems and for finding errors when the tester has no control over their behaviour.
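As a rough illustration of checking a collected trace against a timed invariant (a sketch under assumed trace and invariant forms, not the paper's invariant language): the invariant below states that every occurrence of one event must be followed by another within a given delay.

```python
# Passive-testing sketch: check a timed trace against a response
# invariant "every 'a' is followed by a 'b' within max_delay time
# units". Trace format and invariant shape are illustrative
# assumptions.

def check_response_invariant(trace, a, b, max_delay):
    """trace: list of (timestamp, event) pairs sorted by timestamp.
    Returns timestamps of 'a' events that violate the invariant."""
    violations = []
    for i, (t_a, ev) in enumerate(trace):
        if ev != a:
            continue
        ok = any(ev2 == b and t_a < t2 <= t_a + max_delay
                 for t2, ev2 in trace[i + 1:])
        if not ok:
            violations.append(t_a)
    return violations

trace = [(0.0, "req"), (0.4, "resp"), (1.0, "req"), (3.5, "resp")]
print(check_response_invariant(trace, "req", "resp", max_delay=1.0))
# -> [1.0]: the second request was answered too late
```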

14.
Two important issues in systolic array designs are addressed: How is fault tolerance provided in systolic arrays to enhance the yield of wafer-scale integration implementations? And, how are efficient systolic arrays with two levels of pipelining designed? (The first level refers to the pipelined organization of the array at the cellular level, and the second refers to the pipelined functional units inside the cells.) The fault-tolerant scheme proposed replaces defective cells with clocked delays. This has the distinct characteristic that data can flow through the array with faulty cells at the original clock speed. It is shown that both the defective cells under this fault-tolerant scheme and the second-level pipeline stages can simply be modeled as additional delays in the data paths of “generic” systolic designs. The mathematical notion of a cut is introduced to solve the problem of how to allow for these extra delays while preserving the correctness of the original systolic array designs. The results obtained by applying these techniques are encouraging. When applied to systolic arrays without feedback cycles, the arrays can tolerate large numbers of failures (with the addition of very little hardware) while maintaining the original throughput. Furthermore, all of the pipeline stages in the cells can be kept fully utilized through the addition of a small number of delay registers. However, adding delays to systolic arrays with cycles typically induces a significant decrease in throughput. In response to this, a new class of systolic algorithms has been derived in which the data cycle around a ring of processing cells. The systolic ring architecture has the property that its performance degrades gracefully as cells fail. Use of the cut theory and ring architectures for arrays with feedback gives effective fault-tolerant and two-level pipelining schemes for most systolic arrays. As a side effect of developing the ring architecture approach, several new systolic algorithms have been derived. These algorithms generally require only one-third to one-half of the number of cells used in previous designs to achieve the same throughput. The new systolic algorithms include ones for LU-decomposition, QR-decomposition, and the solution of triangular linear systems.

15.
Massive-scale self-administered networks like Peer-to-Peer and Sensor Networks have data distributed across thousands of participant hosts. These networks are highly dynamic, with short-lived hosts being the norm rather than an exception. In recent years, researchers have investigated best-effort algorithms to efficiently process aggregate queries (e.g., sum, count, average, minimum and maximum) on these networks. Unfortunately, query semantics for best-effort algorithms are ill-defined, making it hard to reason about guarantees associated with the result returned. In this paper, we specify a correctness condition, Single-Site Validity, with respect to which the above algorithms are best-effort. We present a class of algorithms that guarantee validity in dynamic networks. Experiments on real-life and synthetic network topologies validate performance of our algorithms, revealing the hitherto unknown price of validity.
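For background, such aggregates are typically computed in-network over a spanning tree, with each host combining its children's partial results on the way to the query node. A minimal static-network sketch follows; in a dynamic network, hosts leaving mid-query are exactly what makes the result's semantics ill-defined, which the paper's validity-guaranteeing algorithms address.

```python
# In-network aggregation sketch over a spanning tree: each host
# combines its own value with its children's partial aggregates.
# Static topology assumed; illustration only.

def tree_aggregate(tree, values, root, combine):
    def up(node):
        acc = values[node]
        for child in tree.get(node, []):
            acc = combine(acc, up(child))
        return acc
    return up(root)

tree = {"A": ["B", "C"], "B": ["D"]}       # A is the query/sink node
values = {"A": 3, "B": 1, "C": 4, "D": 2}
print(tree_aggregate(tree, values, "A", combine=lambda x, y: x + y))  # 10
```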

16.
We present two case studies which illustrate the use of second-order algebra as a formalism for specification and verification of hardware algorithms. In the first case study we specify a systolic algorithm for convolution and formally verify its correctness using second-order equational logic. The second case study demonstrates the expressive power of second-order algebraic specifications by presenting a non-constructive specification of the Hamming stream problem. A dataflow algorithm for computing the Hamming stream is then specified and the correctness of this algorithm is verified by semantical methods. Both case studies illustrate aspects of the metatheory of second-order equational logic.
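The Hamming stream mentioned here is the classic ordered stream of numbers with no prime factor other than 2, 3, or 5. For orientation, a small imperative sketch of the standard merge-style computation (the dataflow flavor of algorithm the case study specifies), not the paper's second-order algebraic specification:

```python
# Classic Hamming-stream computation: numbers of the form
# 2^i * 3^j * 5^k in increasing order, via three read pointers
# into the stream itself (Dijkstra's merge formulation).

def hamming(n):
    h = [1]
    i2 = i3 = i5 = 0
    while len(h) < n:
        nxt = min(2 * h[i2], 3 * h[i3], 5 * h[i5])
        h.append(nxt)
        if nxt == 2 * h[i2]: i2 += 1
        if nxt == 3 * h[i3]: i3 += 1
        if nxt == 5 * h[i5]: i5 += 1
    return h

print(hamming(10))  # [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
```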

17.
In this paper, we study the partial deduction of updateable logic programs. Partial deduction is a transformation technique that, given a normal program P and a normal goal G, produces from P a residual program P′ w.r.t. G. An updateable program is a normal program to which an update can be applied. An update is a sequence of additions of clauses to a program and deletions of clauses from a program. We present algorithms that, given a normal program P, a normal goal G, a residual program P′ w.r.t. G, and an update t for P, compute the corresponding update t′ such that t′(P′) is a residual program of t(P) w.r.t. G. We prove the correctness of these algorithms. We also describe an application of one of the algorithms to updateable knowledge bases.

18.
Computer Communications, 2007, 30(14-15): 2976-2986
A new class of wireless sensor networks that harvest power from the environment is emerging because of its intrinsic capability of providing unbounded lifetime. While a lot of research has focused on energy-aware routing schemes tailored to battery-operated networks, the problem of optimal routing for energy harvesting wireless sensor networks (EH-WSNs) has never been explored. The objective of routing optimization in this context is not extending network lifetime, but maximizing the workload that can be autonomously sustained by the network. In this work we present a methodology for assessing the energy efficiency of routing algorithms for networks whose nodes drain power from the environment. We first introduce the energetic sustainability problem, then we define the maximum energetically sustainable workload (MESW) as the objective function to be used to drive the optimization of routing algorithms for EH-WSNs. We propose a methodology that makes use of graph algorithms and network simulations for evaluating the MESW starting from a network topology, a routing algorithm, and a distribution of the environmental power available at each node. We present a tool flow implementing the proposed methodology and show comparative results achieved on several routing algorithms. Experimental results highlight that routing strategies that do not take environmental power into account do not provide optimal results in terms of workload sustainability. Using optimal routing algorithms may lead to sizeable enhancements of the maximum sustainable workload. Moreover, optimality strongly depends on environmental power configurations. Since environmental power sources change over time, our results call for a new class of routing algorithms for EH-WSNs that are able to dynamically adapt to time-varying environmental conditions.
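As a toy illustration of the sustainability constraint behind the MESW (units and the per-packet cost model below are illustrative assumptions, not the paper's tool flow): a routing assignment is energetically sustainable only if, at every node, the average power spent sourcing and relaying traffic does not exceed the power harvested there.

```python
# Toy sustainability check for an EH-WSN routing assignment.
# Cost model (per-packet transmit/receive energy) is an assumption;
# the paper evaluates the MESW via graph algorithms and simulation.

def sustainable(harvest, generated, routes, e_tx=1.0, e_rx=0.5):
    """harvest[n]: avg harvested power; generated[n]: packets/s
    sourced at n; routes[n]: next hop (None at the sink)."""
    forwarded = dict(generated)          # packets/s leaving each node
    for n in generated:                  # accumulate relayed traffic
        hop = routes[n]
        while hop is not None:
            forwarded[hop] = forwarded.get(hop, 0) + generated[n]
            hop = routes[hop]
    spend = {n: forwarded.get(n, 0) * e_tx
                + (forwarded.get(n, 0) - generated.get(n, 0)) * e_rx
             for n in harvest}
    return all(spend[n] <= harvest[n] for n in harvest)

harvest = {"s1": 2.0, "s2": 2.0, "relay": 4.0, "sink": 9e9}
generated = {"s1": 1.0, "s2": 1.0, "relay": 0.0, "sink": 0.0}
routes = {"s1": "relay", "s2": "relay", "relay": "sink", "sink": None}
print(sustainable(harvest, generated, routes))  # True
```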

19.
In this paper we present a new algorithm, DBMIN, for managing the buffer pool of a relational database management system. DBMIN is based on a new model of relational query behavior, the query locality set model (QLSM). Like the hot set model, the QLSM has an advantage over the stochastic models due to its ability to predict future reference behavior. However, the QLSM avoids the potential problems of the hot set model by separating the modeling of reference behavior from any particular buffer management algorithm. After introducing the QLSM and describing the DBMIN algorithm, we present a performance evaluation methodology for evaluating buffer management algorithms in a multiuser environment. This methodology employs a hybrid model that combines features of both trace-driven and distribution-driven simulation models. Using this model, the performance of the DBMIN algorithm in a multiuser environment is compared with that of the hot set algorithm and four more traditional buffer replacement algorithms. This research was partially supported by the Department of Energy under Contract No. DE-AC02-81ER10920 and the National Science Foundation under grant MCS82-01870.

20.
State-based models provide a very convenient framework for analyzing, verifying, validating and designing sequential as well as concurrent or distributed algorithms. Each state-based model is considered as an abstraction, which is more or less close to the target algorithmic entity. The problem is then to organize the relationship between an initial abstract state-based model expressing requirements and a final concrete state-based model expressing a structured algorithmic solution. A simulation (or refinement) relation between two state-based models makes it possible to structure these models from an abstract view down to a concrete view. Moreover, state-based models can be extended with assertion languages for expressing correctness properties such as pre/post specifications, safety properties, or even temporal properties. In this work, we review state-based models and play scores for verifying and designing concurrent or distributed algorithms. We choose the Event-B modeling language for expressing state-based models and we show how we can play Event-B scores using Rodin, together with methodological elements that guarantee that the resulting algorithm is correct with respect to the initial requirements. First, we show how annotation-based verification can be handled in the Event-B modeling language and we propose an extension to handle the verification of concurrent programs. In a second step, we show how important the concept of refinement is and how it can be used to found a methodology for designing concurrent programs using the coordination paradigm.
