20 similar documents found; search took 31 milliseconds
1.
Orna Kupferman Moshe Y. Vardi 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(2):224-233
One of the advantages of temporal-logic model-checking tools is their ability to accompany a negative answer to the correctness
query by a counterexample to the satisfaction of the specification in the system. On the other hand, when the answer to the
correctness query is positive, most model-checking tools provide no witness for the satisfaction of the specification. In
the last few years there has been growing awareness of the importance of suspecting that the system or the specification
contains an error even when model checking succeeds. The main justification for such suspicion is the possibility of errors
in the modeling of the system or of the specification. Many such errors can be detected by further automatic reasoning about
the system and the environment. In particular, Beer et al. described a method for the detection of vacuous satisfaction of
temporal logic specifications and the generation of interesting witnesses for the satisfaction of specifications. For example,
verifying a system with respect to the specification ϕ = AG(req → AF grant) (“every request is eventually followed by a grant”), we say that ϕ is satisfied vacuously in systems in which requests are
never sent. An interesting witness for the satisfaction of ϕ is then a computation that satisfies ϕ and contains a request.
Beer et al. considered only specifications of a limited fragment of ACTL, and with a restricted interpretation of vacuity.
In this paper we present a general method for detection of vacuity and generation of interesting witnesses for specifications
in CTL*. Our definition of vacuity is stronger, in the sense that we check whether all the subformulas of the specification affect
its truth value in the system. In addition, we study the advantages and disadvantages of alternative definitions of vacuity,
study the problem of generating linear witnesses and counterexamples for branching temporal logic specifications, and analyze
the complexity of the problem.
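A minimal sketch of the vacuity idea from the abstract's own example: AG(req → AF grant) holds vacuously in any model where no reachable state satisfies req, and an interesting witness would be a run that does contain a request. The Kripke structures, state names, and labels below are illustrative, and this checks only the Beer-et-al.-style vacuity condition for this one formula, not the paper's general CTL* method.

```python
# Sketch only: vacuity of AG(req -> AF grant) over a toy Kripke structure,
# given as a successor map plus per-state atomic-proposition labels.

def reachable(succ, init):
    """All states reachable from `init` via the successor map."""
    seen, stack = set(), [init]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(succ[s])
    return seen

def holds_vacuously(succ, labels, init):
    """AG(req -> AF grant) is vacuous iff no reachable state satisfies req."""
    return not any('req' in labels[s] for s in reachable(succ, init))

# Model A: requests are never sent -- the implication holds vacuously.
succ_a = {0: [1], 1: [0]}
labels_a = {0: set(), 1: set()}

# Model B: a request occurs and is granted -- satisfaction is non-vacuous.
succ_b = {0: [1], 1: [2], 2: [2]}
labels_b = {0: {'req'}, 1: {'grant'}, 2: set()}
```

A real implementation would instead replace each subformula by ⊥/⊤ and re-run the model checker, which is what the paper's stronger definition requires.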
Published online: 22 January 2002
2.
Holger Hermanns Joost-Pieter Katoen Joachim Meyer-Kayser Markus Siegle 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(2):153-172
Markov chains are widely used in the context of the performance and reliability modeling of various systems. Model checking
of such chains with respect to a given (branching) temporal logic formula has been proposed for both discrete [34, 10] and
continuous time settings [7, 12]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov
chains, the Erlangen–Twente Markov Chain Checker E⊢MC², where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and
discuss the structure of the tool. Furthermore, we report on successful applications of the tool to some examples, highlighting
lessons learned during the development and application of E⊢MC².
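The numerical core behind checking time-bounded properties of a continuous-time Markov chain is computing its transient distribution, typically via uniformization. The sketch below does this for a tiny generator matrix; the chain, rates, and truncation depth are illustrative, not anything from the tool itself.

```python
# Sketch: transient distribution of a CTMC at time t via uniformization,
# pi(t) = sum_k Poisson(k; q t) * p0 * P^k  with  P = I + Q/q.
import math

def uniformization(Q, p0, t, K=200):
    n = len(Q)
    q = max(-Q[i][i] for i in range(n)) + 1e-9   # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / q for j in range(n)]
         for i in range(n)]
    pi = list(p0)                                 # p0 * P^k, starting at k = 0
    acc = [0.0] * n
    poisson = math.exp(-q * t)                    # Poisson(0; q t)
    for k in range(K + 1):
        for i in range(n):
            acc[i] += poisson * pi[i]
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        poisson *= q * t / (k + 1)                # next Poisson weight
    return acc

# Two states, 0 --(rate 2.0)--> 1 with 1 absorbing: the probability of
# being in state 1 at time t is analytically 1 - exp(-2 t).
Q = [[-2.0, 2.0], [0.0, 0.0]]
```

A CSL checker layers path-formula semantics (e.g. time-bounded until) on top of exactly this kind of computation.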
Published online: 19 November 2002
Correspondence to: Holger Hermanns
3.
Yonit Kesten Amir Pnueli 《International Journal on Software Tools for Technology Transfer (STTT)》2000,2(4):328-342
In spite of the impressive progress in the development of the two main methods for formal verification of reactive systems
– Symbolic Model Checking and Deductive Verification – these methods are still limited in their ability to handle large systems. It
is generally recognized that the only way these methods can ever scale up is by the extensive use of abstraction and modularization,
which break the task of verifying a large system into several smaller tasks of verifying simpler systems.
In this paper, we review the two main tools of compositionality and abstraction in the framework of linear temporal logic.
We illustrate the application of these two methods for the reduction of an infinite-state system into a finite-state system
that can then be verified using model checking.
The technical contributions contained in this paper are a full formulation of abstraction when applied to a system with both
weak and strong fairness requirements and to a general temporal formula, and a presentation of a compositional framework for
shared variables and its application for forming network invariants.
4.
Flip Korn Alexandros Labrinidis Yannis Kotidis Christos Faloutsos 《The VLDB Journal The International Journal on Very Large Data Bases》2000,8(3-4):254-266
Association Rule Mining algorithms operate on a data matrix (e.g., customers × products) to derive association rules [AIS93b, SA96]. We propose a new paradigm, namely, Ratio Rules, which are quantifiable in that we can measure the “goodness” of a set of discovered rules. We also propose the “guessing
error” as a measure of the “goodness”, that is, the root-mean-square error of the reconstructed values of the cells of the
given matrix, when we pretend that they are unknown. Another contribution is a novel method to guess missing/hidden values
from the Ratio Rules that our method derives. For example, if somebody bought $10 of milk and $3 of bread, our rules can “guess”
the amount spent on butter. Thus, unlike association rules, Ratio Rules can perform a variety of important tasks such as forecasting,
answering “what-if” scenarios, detecting outliers, and visualizing the data. Moreover, we show that we can compute Ratio Rules
in a single pass over the data set with small memory requirements (a few small matrices), in contrast to association rule mining methods
which require multiple passes and/or large memory. Experiments on several real data sets (e.g., basketball and baseball statistics,
biological data) demonstrate that the proposed method: (a) leads to rules that make sense; (b) can find large itemsets in
binary matrices, even in the presence of noise; and (c) consistently achieves a “guessing error” up to 5 times smaller than
that of straightforward column averages.
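The Ratio Rule idea can be sketched with the dominant singular vector of the data matrix: it captures the prevailing per-column ratio, and a hidden cell is guessed by scaling the rule to fit a row's known cells. Power iteration below is a minimal stand-in for a full SVD, and the basket data and 2:1:3 ratio are invented for illustration.

```python
# Sketch: derive one "ratio rule" (first singular vector) by power
# iteration on A^T A, then guess a missing cell by projection.

def first_singular_vector(rows, iters=100):
    n = len(rows[0])
    v = [1.0] * n
    for _ in range(iters):
        Av = [sum(r[j] * v[j] for j in range(n)) for r in rows]
        w = [sum(rows[i][j] * Av[i] for i in range(len(rows)))
             for j in range(n)]                      # w = A^T (A v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def guess_missing(row, missing_j, v):
    """Scale the rule v to fit the known cells, then read off cell j."""
    known = [j for j in range(len(row)) if j != missing_j]
    a = (sum(row[j] * v[j] for j in known) /
         sum(v[j] * v[j] for j in known))
    return a * v[missing_j]

# Toy baskets: spending on milk:bread:butter is roughly 2:1:3 per row.
baskets = [[10.0, 5.0, 15.0], [4.0, 2.0, 6.0], [8.0, 4.0, 12.0]]
```

For a customer who spent $6 on milk and $3 on bread, the rule then “guesses” about $9 on butter, matching the abstract's milk/bread/butter example in spirit.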
Received: March 15, 1999 / Accepted: November 1, 1999
5.
Carlo Combi Giuseppe Pozzi 《The VLDB Journal The International Journal on Very Large Data Bases》2001,9(4):294-311
The granularity of given temporal information is the level of abstraction at which information is expressed. Different units of measure allow
one to represent different granularities. Indeterminacy is often present in temporal information given at different granularities:
temporal indeterminacy is related to incomplete knowledge of when the considered fact happened. Focusing on temporal databases, different granularities
and indeterminacy have to be considered in expressing valid time, i.e., the time at which the information is true in the modeled
reality. In this paper, we propose HMAP (the term is a transliteration of an ancient Greek poetical word meaning “day”), a temporal data model extending the capability
of defining valid times with different granularity and/or with indeterminacy. In HMAP, absolute intervals are explicitly represented by their start, end, and duration: in this way, we can represent valid times such as “in December 1998 for five hours”, “from July 1995, for 15 days”, “from March
1997 to October 15, 1997, between 6 and 6:30 p.m.”. HMAP is based on a three-valued logic, for managing uncertainty in temporal relationships. Formulas involving different temporal
relationships between intervals, instants, and durations can be defined, allowing one to query the database with different
granularities, not necessarily related to that of data. In this paper, we also discuss the complexity of algorithms, allowing
us to evaluate HMAP formulas, and show that the formulas can be expressed as constraint networks falling into the class of simple temporal problems,
which can be solved in polynomial time.
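The three-valued flavor of such temporal reasoning can be sketched directly: when an event is known only to lie somewhere inside an uncertainty interval, “before” evaluates to true, false, or unknown. The encoding as day numbers and the interval endpoints below are illustrative, not HMAP's actual representation.

```python
# Sketch: three-valued "before" over indeterminate instants, each known
# only to lie in [lo, hi].

UNKNOWN = 'unknown'

def before(a, b):
    """a = (lo, hi) and b = (lo, hi): did a happen before b?"""
    a_lo, a_hi = a
    b_lo, b_hi = b
    if a_hi < b_lo:
        return True        # every possible placement of a precedes b
    if a_lo >= b_hi:
        return False       # no possible placement of a precedes b
    return UNKNOWN         # the uncertainty intervals overlap

# "Sometime in December 1998" vs. "sometime in March 1999",
# as day numbers since the start of 1998.
december = (334, 365)
march = (424, 454)
```

Queries at a coarser granularity than the data reduce to exactly these interval comparisons.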
Received 6 August 1998 / Accepted 13 July 2000 Published online: 13 February 2001
6.
Handling message semantics with Generic Broadcast protocols (cited 1 time: 0 self-citations, 1 by others)
Summary. Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,”
that is, message “semantics” is not taken into consideration despite the fact that in several cases semantic information about
messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages
is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic
Broadcast. The paper also presents two algorithms that solve Generic Broadcast.
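The role of the conflict relation can be shown in a few lines: only conflicting messages must be delivered in a common order, so an empty relation yields Reliable Broadcast and the total relation yields Atomic Broadcast, the two special instances noted above. The message format and the “same account” semantics below are invented for illustration.

```python
# Sketch: which message pairs a Generic Broadcast must order, given a
# conflict relation supplied as a plain predicate.

def ordered_pairs(messages, conflicts):
    """Pairs that every process must deliver in the same relative order."""
    return {(m1, m2) for i, m1 in enumerate(messages)
            for m2 in messages[i + 1:] if conflicts(m1, m2)}

msgs = ['deposit(a,10)', 'deposit(b,5)', 'withdraw(a,3)']

def same_account(m1, m2):
    # Toy semantic relation: updates to the same account conflict.
    return m1[m1.index('(') + 1] == m2[m2.index('(') + 1]
```

With `same_account`, only the two operations on account `a` need ordering; the deposit to `b` commutes with both and can be delivered without waiting.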
Received: August 2000 / Accepted: August 2001
7.
Scott D. Stoller 《Distributed Computing》2000,13(2):85-98
Summary. This paper proposes a framework for detecting global state predicates in systems of processes with approximately-synchronized
real-time clocks. Timestamps from these clocks are used to define two orderings on events: “definitely occurred before” and
“possibly occurred before”. These orderings lead naturally to definitions of three distinct detection modalities, i.e., three meanings of “predicate held during a computation”, namely: “possibly held”, “definitely held”, and “definitely held in a specific global state”. This paper defines these modalities and gives efficient algorithms for detecting
them. The algorithms are based on algorithms of Garg and Waldecker, Alagar and Venkatesan, Cooper and Marzullo, and Fromentin
and Raynal. Complexity analysis shows that under reasonable assumptions, these real-time-clock-based detection algorithms
are less expensive than detection algorithms based on Lamport's happened-before ordering. Sample applications are given to
illustrate the benefits of this approach.
Received: January 1999 / Accepted: November 1999
8.
The Z-coordinates of the vertices are completely governed by the Z-coordinates assigned to four selected ones. This allows describing the spatial polygonal mesh with just its 2D projection
plus the heights of four vertices. As a consequence, these projections essentially capture the “spatial meaning” of the given
surface, in the sense that, whatever spatial interpretations are drawn from them, they all exhibit essentially the same shape.
Published online: 5 February 2003
9.
Henry S. Baird Allison L. Coates Richard J. Fateman 《International Journal on Document Analysis and Recognition》2003,5(2-3):158-163
Abstract. We exploit the gap in ability between human and machine vision systems to craft a family of automatic challenges that tell
human and machine users apart via graphical interfaces including Internet browsers. Turing proposed [Tur50] a method whereby
human judges might validate “artificial intelligence” by failing to distinguish between human and machine interlocutors. Stimulated
by the “chat room problem” posed by Udi Manber of Yahoo!, and influenced by the CAPTCHA project [BAL00] of Manuel Blum et
al. of Carnegie-Mellon Univ., we propose a variant of the Turing test using pessimal print: that is, low-quality images of machine-printed text synthesized pseudo-randomly over certain ranges of words, typefaces,
and image degradations. We show experimentally that judicious choice of these ranges can ensure that the images are legible
to human readers but illegible to several of the best present-day optical character recognition (OCR) machines. Our approach
is motivated by a decade of research on performance evaluation of OCR machines [RJN96,RNN99] and on quantitative stochastic
models of document image quality [Bai92,Kan96]. The slow pace of evolution of OCR and other species of machine vision over
many decades [NS96,Pav00] suggests that pessimal print will defy automated attack for many years. Applications include “bot”
barriers and database rationing.
Received: February 14, 2002 / Accepted: March 28, 2002
An expanded version of: A.L. Coates, H.S. Baird, R.J. Fateman (2001) Pessimal Print: a reverse Turing Test. In: Proc.
6th Int. Conf. on Document Analysis and Recognition, Seattle, Wash., USA, September 10–13, pp. 1154–1158
Correspondence to: H. S. Baird
10.
Approximate query mapping: Accounting for translation closeness (cited 2 times: 0 self-citations, 2 by others)
Kevin Chen-Chuan Chang Héctor García-Molina 《The VLDB Journal The International Journal on Very Large Data Bases》2001,10(2-3):155-181
In this paper we present a mechanism for approximately translating Boolean query constraints across heterogeneous information
sources. Achieving the best translation is challenging because sources support different constraints for formulating queries,
and often these constraints cannot be precisely translated. For instance, a query [score>8] might be “perfectly” translated
as [rating>0.8] at some site, but can only be approximated as [grade=A] at another. Unlike other work, our general framework
adopts a customizable “closeness” metric for the translation that combines both precision and recall. Our results show that
for query translation we need to handle interdependencies among both query conjuncts and disjuncts. As the basis, we
identify the essential requirements of a rule system for users to encode the mappings for atomic semantic units. Our algorithm
then translates complex queries by rewriting them in terms of the semantic units. We show that, under practical assumptions,
our algorithm generates the best approximate translations with respect to the closeness metric of choice. We also present
a case study to show how our technique may be applied in practice.
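A customizable closeness metric of the kind described can be sketched over answer sets: each candidate target constraint selects a set of tuples, and the translation is scored by a weighted combination of precision and recall against the set the source constraint would select. The catalog rows, constraints, and equal weighting below are illustrative.

```python
# Sketch: score candidate translations of [score > 8] by precision/recall
# of the tuple sets they select.

def closeness(translated, ideal, alpha=0.5):
    """Weighted combination of precision and recall of the translation."""
    if not translated:
        return 0.0
    hit = len(translated & ideal)
    precision = hit / len(translated)
    recall = hit / len(ideal)
    return alpha * precision + (1 - alpha) * recall

# Toy catalog: (id, score, rating, grade)
rows = [(1, 9, 0.9, 'A'), (2, 8.5, 0.7, 'A'), (3, 7, 0.85, 'B'),
        (4, 9.5, 0.95, 'A')]
ideal = {r[0] for r in rows if r[1] > 8}            # [score > 8]
by_rating = {r[0] for r in rows if r[2] > 0.8}      # [rating > 0.8]
by_grade = {r[0] for r in rows if r[3] == 'A'}      # [grade = A]
```

On this toy data [grade = A] happens to be the perfect translation while [rating > 0.8] is only approximate; at a different site the situation could be reversed, which is exactly why the metric is customizable.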
Received: 15 October 2000 / Accepted: 15 April 2001 Published online: 28 June 2001
11.
Fady Copty Amitai Irron Osnat Weissberg Nathan Kropp Gila Kamhi 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(3):335-348
In this paper, we emphasize the importance of efficient debugging in formal verification and present capabilities that we
have developed in order to aid debugging in Intel’s Formal Verification Environment. We have given the name “Counter-Example
Wizard” to the bundle of capabilities that we have developed to address the needs of the verification engineer in the context
of counter-example diagnosis and rectification. The novel features of the Counter-Example Wizard are the multi-value counter-example annotation, constraint-based debugging, and multiple counter-example generation mechanisms. Our experience with the verification of real-life Intel designs shows that these capabilities complement one
another and can help the verification engineer diagnose and fix a reported failure. We use real-life verification cases to
illustrate how our system solution can significantly reduce the time spent in the loop of model checking, specification, and
design modification.
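One of the listed capabilities, multiple counter-example generation, can be sketched in a toy setting: keep searching after the first failure, excluding inputs already reported, much as a solver-based tool would add blocking constraints. The “design” here is a plain function and the “specification” a predicate; both are invented for illustration.

```python
# Sketch: enumerate several distinct counterexamples to a specification,
# skipping ones already found.

def counterexamples(design, spec, inputs, want=3):
    """Return up to `want` distinct inputs on which the spec fails."""
    found = []
    for x in inputs:
        if not spec(x, design(x)):
            found.append(x)            # report it, then keep searching
            if len(found) == want:
                break
    return found

buggy_abs = lambda x: x if x > 0 else -x - 1   # wrong on non-positives
spec = lambda x, y: y == abs(x)
```

Seeing several failures at once, rather than one per model-checking run, is what shortens the diagnose-and-fix loop the abstract describes.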
Published online: 21 February 2003
12.
We present a shared memory algorithm that allows a set of f+1 processes to “simulate”, in a wait-free manner, a larger system of n processes that may also exhibit up to f stopping failures.
Applying this simulation algorithm to the k-set-agreement problem enables conversion of an arbitrary k-fault-tolerant n-process solution for the k-set-agreement problem into a wait-free (k+1)-process solution for the same problem. Since the (k+1)-process k-set-agreement problem has been shown to have no wait-free solution [5,18,26], this transformation implies that there is no
k-fault-tolerant solution to the n-process k-set-agreement problem, for any n.
More generally, the algorithm satisfies the requirements of a fault-tolerant distributed simulation. The distributed simulation implements a notion of fault-tolerant reducibility between decision problems. This paper defines these notions and gives examples of their application to fundamental distributed
computing problems.
The algorithm is presented and verified in terms of I/O automata. The presentation has a great deal of interesting modularity,
expressed by I/O automaton composition and both forward and backward simulation relations. Composition is used to include
a safe agreement module as a subroutine. Forward and backward simulation relations are used to view the algorithm as implementing a multi-try snapshot strategy.
The main algorithm works in snapshot shared memory systems; a simple modification of the algorithm that works in read/write
shared memory systems is also presented.
Received: February 2001 / Accepted: February 2001
13.
Design and analysis of a video-on-demand server (cited 6 times: 0 self-citations, 6 by others)
The availability of high-speed networks, fast computers and improved storage technology is stimulating interest in the development
of video on-demand services that provide facilities similar to a video cassette player (VCP). In this paper, we present a
design of a video-on-demand (VOD) server, capable of supporting a large number of video requests with complete functionality
of a remote control (as used in VCPs), for each request. In the proposed design, we have used an interleaved storage method
with constrained allocation of video and audio blocks on the disk to provide continuous retrieval. Our storage scheme interleaves
a movie with itself (while satisfying the constraints on video and audio block allocation). This approach minimizes the starting delay and the
buffer requirement at the user end, while ensuring a jitter-free display for every request. In order to minimize the starting
delay and to support more non-concurrent requests, we have proposed the use of multiple disks for the same movie. Since a
disk needs to hold only one movie, an array of inexpensive disks can be used, which reduces the overall cost of the proposed
system. A scheme supported by our disk storage method to provide all the functions of a remote control such as “fast-forwarding”,
“rewinding” (with play “on” or “off”), “pause” and “play” has also been discussed. This scheme handles a user request independent
of others and satisfies it without degrading the quality of service to other users. The server design presented in this paper
achieves the multiple goals of high disk utilization, global buffer optimization, cost-effectiveness and high-quality service
to the users.
14.
Theo C. Ruys Ed Brinksma 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(2):246-259
In this paper we take a closer look at the automated analysis of designs, in particular of verification by model checking.
Model checking tools are increasingly being used for the verification of real-life systems in an industrial context. In addition
to ongoing research aimed at curbing the complexity of dealing with the inherent state space explosion problem – which allows
us to apply these techniques to ever larger systems – attention must now also be paid to the methodology of model checking,
to decide how to use these techniques to their best advantage. Model checking “in the large” causes a substantial proliferation
of interrelated models and model checking sessions that must be carefully managed in order to control the overall verification
process. We show that in order to do this well both notational and tool support are required. We discuss the use of software
configuration management techniques and tools to manage and control the verification trajectory. We present Xspin/Project,
an extension to Xspin, which automatically controls and manages the validation trajectory when using the model checker Spin.
Published online: 18 June 2002
15.
Fast techniques for the optimal smoothing of stored video (cited 3 times: 0 self-citations, 3 by others)
Work-ahead smoothing is a technique whereby a server, transmitting stored compressed video to a client, utilizes client buffer
space to reduce the rate variability of the transmitted stream. The technique requires the server to compute a schedule of
transfer under the constraints that the client buffer neither overflows nor underflows. Recent work established an optimal
off-line algorithm (which minimizes peak, variance and rate variability of the transmitted stream) under the assumptions of
fixed client buffer size, known worst case network jitter, and strict playback of the client video. In this paper, we examine
the practical considerations of heterogeneous and dynamically variable client buffer sizes, variable worst case network jitter
estimates, and client interactivity. These conditions require on-line computation of the optimal transfer schedule. We focus on techniques for reducing on-line computation time. Specifically,
(i) we present an algorithm for precomputing and storing the optimal schedules for all possible client buffer sizes in a compact
manner; (ii) we show that it is theoretically possible to precompute and store compactly the optimal schedules for all possible
estimates of worst case network jitter; (iii) in the context of playback resumption after client interactivity, we show convergence
of the recomputed schedule with the original schedule, implying greatly reduced on-line computation time; and (iv) we propose
and empirically evaluate an “approximation scheme” that produces a schedule close to optimal but takes much less computation
time.
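The buffer constraints behind work-ahead smoothing can be sketched directly: the cumulative amount sent S[t] must stay at or above the cumulative playback demand D[t] (no underflow) and at most B bytes ahead of what has already been consumed (no overflow). The greedy schedule below tracks a constant target rate and clamps at the constraints; it is a feasible schedule, not the optimal one the paper computes, and it assumes every frame fits in the buffer B. Frame sizes are invented.

```python
# Sketch: a feasible work-ahead transmission schedule between the
# underflow and overflow curves of a client buffer of size B.

def smooth(frames, B):
    """Cumulative schedule S with D[t] <= S[t] <= D[t-1] + B at each slot."""
    total = sum(frames)
    rate = total / len(frames)             # constant target rate
    schedule, sent, d_prev, d = [], 0.0, 0.0, 0.0
    for f in frames:
        d += f                             # demand by end of this slot
        lo, hi = d, min(d_prev + B, total)
        sent = min(max(sent + rate, lo), hi)
        schedule.append(sent)
        d_prev = d
    return schedule

frames = [1, 1, 8, 1, 1, 8, 1, 1]          # a bursty toy video
```

The optimal off-line schedule additionally minimizes peak and variance, and the paper's contribution is making its on-line recomputation cheap under changing buffer sizes, jitter estimates, and interactivity.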
16.
Constraint-based deductive model checking (cited 2 times: 0 self-citations, 2 by others)
Giorgio Delzanno Andreas Podelski 《International Journal on Software Tools for Technology Transfer (STTT)》2001,3(3):250-270
We show that constraint logic programming (CLP) can serve as a conceptual basis and as a practical implementation platform
for the model checking of infinite-state systems. CLP programs are logical formulas (built up from constraints) that have
both a logical interpretation and an operational semantics. Our contributions are: (1) a translation of concurrent systems
(imperative programs) into CLP programs with the same operational semantics; and (2) a deductive method for verifying safety
and liveness properties of the systems which is based on the logical interpretation of the CLP programs produced by the translation.
We have implemented the method in a CLP system and verified well-known examples of infinite-state programs over integers,
using linear constraints here as opposed to Presburger arithmetic as in previous solutions.
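The flavor of constraint-based verification can be sketched for a one-counter program: state sets are represented symbolically by constraints (here, a single interval) and safety is checked by iterating pre-images of the bad region. A CLP engine would handle general linear constraints and arbitrary programs; the loop, guard, and bad region below are invented.

```python
# Sketch: backward reachability with interval constraints for
#   x := 0; while x < 10: x := x + 1.

def pre(interval, guard_hi):
    """Pre-image of {x' = x + 1 when x < guard_hi} applied to `interval`."""
    lo, hi = interval
    # x+1 in [lo, hi] and x < guard_hi  =>  x in [lo-1, min(hi-1, guard_hi-1)]
    return (lo - 1, min(hi - 1, guard_hi - 1))

def backward_unsafe(bad, init, guard_hi, steps=100):
    """Does some backward pre-image of the bad region contain `init`?"""
    region = bad
    for _ in range(steps):
        lo, hi = region
        if lo > hi:
            return False               # region became empty: safe
        if lo <= init <= hi:
            return True
        region = pre(region, guard_hi)
    return False
```

With bad region x >= 11 the iteration empties out immediately (the guard stops the loop at x = 10), so the program is safe; with bad region x >= 10 the pre-images march back down to the initial state.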
Published online: 18 July 2001
17.
Robust Algorithms For Generalized Pham Systems (cited 1 time: 0 self-citations, 1 by others)
18.
Yukio Itakura Masaki Hashiyada Toshio Nagashima Shigeo Tsujii 《International Journal of Information Security》2002,1(3):149-160
Individual differences in the repeat counts of short tandem repeats (STRs), short motifs of several bases within deoxyribonucleic
acid (DNA) sequences, can be used as unique DNA information for personal identification (ID). We propose a method to
generate a personal identifier (hereafter referred to as a “DNA personal ID”) by specifying multiple STR locations (called
“loci”) and then sequencing the repeat count information. We also conducted a validation experiment to verify the proposed
principle based on actual DNA data.
We verified that the matching probability of DNA personal IDs becomes exponentially smaller, to about 10^(-n), as n loci are used, and that no correlation exists among the loci.
Next, we considered the various issues that will be encountered when applying DNA personal IDs to information security systems,
such as biometric personal authentication systems.
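The construction can be sketched in a few lines: concatenate the repeat counts observed at the chosen loci, and estimate the matching probability as the product of per-locus match probabilities, which gives the roughly 10^(-n) scaling the experiment verified. The locus names below are standard STR loci, but the counts and the flat 1/10 per-locus probability are illustrative assumptions.

```python
# Sketch: a "DNA personal ID" from STR repeat counts at selected loci,
# plus the back-of-envelope matching probability.

def dna_personal_id(repeat_counts):
    """Join per-locus STR repeat counts into one identifier string."""
    return '-'.join(f'{locus}:{count}'
                    for locus, count in sorted(repeat_counts.items()))

def match_probability(n_loci, per_locus=0.1):
    """Probability two random IDs agree on all loci, assuming
    independent loci that each match with probability `per_locus`."""
    return per_locus ** n_loci

person = {'D3S1358': 15, 'vWA': 17, 'FGA': 22}
```

Sorting the loci makes the identifier canonical, so two laboratories typing the same loci produce the same string.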
Published online: 9 April 2002
19.
Laura M. Haas Michael J. Carey Miron Livny Amit Shukla 《The VLDB Journal The International Journal on Very Large Data Bases》1997,6(3):241-256
In this paper, we re-examine the results of prior work on methods for computing ad hoc joins. We develop a detailed cost model for predicting join algorithm performance, and we use the model to develop cost formulas
for the major ad hoc join methods found in the relational database literature. We show that various pieces of “common wisdom” about join algorithm
performance fail to hold up when analyzed carefully, and we use our detailed cost model to derive optimal buffer allocation schemes for each of the join methods examined here. We show that optimizing their buffer allocations
can lead to large performance improvements, e.g., as much as a 400% improvement in some cases. We also validate our cost model's
predictions by measuring an actual implementation of each join algorithm considered. The results of this work should be directly
useful to implementors of relational query optimizers and query processing systems.
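The kind of cost formula such a model contains can be sketched with the textbook I/O cost of a block nested-loops join as a function of the buffer pages B granted to the outer relation; the page counts and the specific formula are illustrative, not the paper's calibrated model.

```python
# Sketch: I/O cost (in page reads) of a block nested-loops join with
# B buffer pages: read the outer once, and scan the inner once per
# outer chunk of B-2 pages (one page for the inner, one for output).
import math

def block_nested_loops_cost(outer_pages, inner_pages, B):
    chunks = math.ceil(outer_pages / (B - 2))
    return outer_pages + chunks * inner_pages

# 1,000-page outer, 500-page inner: more buffer means fewer inner scans,
# which is why buffer allocation dominates join performance.
```

Quadrupling the buffer here cuts the cost by roughly an order of magnitude, the same effect, in miniature, as the up-to-400% improvements the abstract reports from optimized allocations.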
Edited by M. Adiba. Received May 1993 / Accepted April 1996
20.
Summary. We set out a modal logic for reasoning about multilevel security of probabilistic systems. This logic contains expressions
for time, probability, and knowledge. Making use of the Halpern-Tuttle framework for reasoning about knowledge and probability,
we give a semantics for our logic and prove it is sound. We give two syntactic definitions of perfect multilevel security
and show that their semantic interpretations are equivalent to earlier, independently motivated characterizations. We also
discuss the relations among these characterizations of security and their relative usefulness in security analysis.