20 similar documents found (search time: 20 ms)
1.
Many safety-critical systems that have been considered by the verification community are parameterized by the number of concurrent components in the system, and hence describe an infinite family of systems. Traditional model checking techniques can only be used to verify specific instances of this family. In this paper, we present a technique based on compositional model checking and program analysis for automatic verification of infinite families of systems. The technique views a parameterized system as an expression in a process algebra (CCS) and interprets this expression over a domain of formulas (modal mu-calculus), considering a process as a property transformer. The transformers are constructed using partial model checking techniques. At its core, our technique solves the verification problem by finding the limit of a chain of formulas. We present a widening operation to find such a limit for properties expressible in a subset of modal mu-calculus. We describe the verification of a number of parameterized systems using our technique to demonstrate its utility.
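The chain-limit idea lends itself to a small illustration. Below is a minimal sketch of widening-to-a-limit on the classic interval abstract domain rather than on modal mu-calculus formulas (the domain and all function names are illustrative stand-ins, not the paper's construction): an increasing chain that would never stabilize on its own is forced to converge by widening unstable bounds.

```python
# Illustrative sketch: finding the limit of an increasing chain by widening.
# The paper widens chains of mu-calculus formulas; here the same convergence
# problem is shown on the interval abstract domain instead.

def widen(old, new):
    """Interval widening: jump any unstable bound to +/- infinity."""
    lo = old[0] if new[0] >= old[0] else float("-inf")
    hi = old[1] if new[1] <= old[1] else float("inf")
    return (lo, hi)

def chain_limit(first, step, max_iter=1000):
    """Iterate `step`, widening each time, until the chain stabilizes."""
    current = first
    for _ in range(max_iter):
        nxt = step(current)
        if nxt == current:           # fixpoint reached: this is the limit
            return current
        current = widen(current, nxt)
    raise RuntimeError("no fixpoint within iteration budget")

# Example chain: [0,0], [0,1], [0,2], ... -- widening jumps to [0, inf).
grow = lambda iv: (iv[0], iv[1] + 1)
print(chain_limit((0.0, 0.0), grow))   # -> (0.0, inf)
```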
2.
A symbolic manipulator for automated verification of reactive systems with heterogeneous data types
In this paper, we present the design and implementation of the Composite Symbolic Library, a symbolic manipulator for model checking systems with heterogeneous data types. Our tool provides a common interface for different symbolic representations, such as BDDs, for representing Boolean logic formulas and polyhedral representations for linear arithmetic formulas. Based on this common interface, these data structures are combined using a disjunctive composite representation. We propose several heuristics for efficient manipulation of this composite representation and present experimental results that demonstrate their performance. We used an object-oriented design to implement the Composite Symbolic Library. We imported the CUDD library (a BDD library) and the Omega Library (a linear arithmetic constraint manipulator that uses polyhedral representations) to our tool by writing wrappers around them which conform to our symbolic representation interface. Our tool supports polymorphic verification procedures which dynamically select symbolic representations based on the input specification. Our symbolic representation library can be used as an interface between different symbolic libraries, model checkers, and specification languages. We expect our tool to be useful in integrating different tools and techniques for symbolic model checking, and in comparing their performance.
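As a rough illustration of the common-interface idea, the sketch below (hypothetical class names, not the Composite Symbolic Library's actual API) shows how heterogeneous representations can sit behind one interface and be combined into a disjunctive composite, where each disjunct pairs a Boolean part with an arithmetic part. A trivial set-based representation stands in for the BDD and polyhedra wrappers.

```python
from abc import ABC, abstractmethod

class SymbolicRep(ABC):
    @abstractmethod
    def conjoin(self, other): ...
    @abstractmethod
    def disjoin(self, other): ...
    @abstractmethod
    def is_empty(self): ...

class SetRep(SymbolicRep):
    """Explicit finite sets standing in for BDDs or polyhedra."""
    def __init__(self, elems): self.elems = frozenset(elems)
    def conjoin(self, other): return SetRep(self.elems & other.elems)
    def disjoin(self, other): return SetRep(self.elems | other.elems)
    def is_empty(self): return not self.elems

class CompositeRep(SymbolicRep):
    """Disjunction of conjunctions: [(boolean part, arithmetic part), ...]."""
    def __init__(self, disjuncts): self.disjuncts = list(disjuncts)
    def conjoin(self, other):
        # conjunction distributes pairwise over the two disjunct lists
        kept = [(b1.conjoin(b2), a1.conjoin(a2))
                for b1, a1 in self.disjuncts for b2, a2 in other.disjuncts]
        return CompositeRep([(b, a) for b, a in kept
                             if not b.is_empty() and not a.is_empty()])
    def disjoin(self, other):
        return CompositeRep(self.disjuncts + other.disjuncts)
    def is_empty(self): return not self.disjuncts

p = CompositeRep([(SetRep({"x"}), SetRep({1, 2}))])
q = CompositeRep([(SetRep({"x"}), SetRep({2, 3}))])
print(p.conjoin(q).is_empty())   # False: the disjunct ({"x"}, {2}) survives
```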
3.
In this paper we present HySAT, a bounded model checker for linear hybrid systems, incorporating a tight integration of a DPLL-based pseudo-Boolean SAT solver and a linear programming routine as core engine. In contrast to related tools like MathSAT, ICS, or CVC, our tool exploits the various optimizations that arise naturally in the bounded model checking context, e.g., isomorphic replication of learned conflict clauses or tailored decision strategies, and extends them to the hybrid domain. We demonstrate that those optimizations are crucial to the performance of the tool.
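The bounded model checking loop itself is easy to illustrate. The sketch below uses the z3 solver rather than HySAT's own DPLL+LP engine, and an invented one-variable linear system, purely to show the unroll-and-query structure.

```python
# Bounded model checking in the style the abstract describes, sketched with
# z3 (not HySAT's engine): unroll a linear transition relation k steps and
# ask whether any unrolled state violates the property.

from z3 import Real, Solver, Or, sat

def bmc(k):
    x = [Real(f"x_{i}") for i in range(k + 1)]
    s = Solver()
    s.add(x[0] == 0)                                 # initial condition
    for i in range(k):
        # transition: each step adds between 1 and 2 to x
        s.add(x[i + 1] >= x[i] + 1, x[i + 1] <= x[i] + 2)
    s.add(Or(*[x[i] > 15 for i in range(k + 1)]))    # "bad" states
    return s.check() == sat                          # reachable within k?

print(bmc(7))    # False: 7 steps reach at most x = 14
print(bmc(10))   # True: 10 steps can push x past 15
```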
4.
Centre and Range method for fitting a linear regression model to symbolic interval data
Eufrásio de A. Lima Neto 《Computational statistics & data analysis》2008,52(3):1500-1515
This paper introduces a new approach to fitting a linear regression model to symbolic interval data. Each example in the learning set is described by a feature vector, for which each feature value is an interval. The new method fits a linear regression model on the mid-points and ranges of the interval values assumed by the variables in the learning set. The prediction of the lower and upper bounds of the interval value of the dependent variable is accomplished from its mid-point and range, which are estimated from the fitted linear regression model applied to the mid-point and range of each interval value of the independent variables. The assessment of the proposed prediction method is based on the estimation of the average behaviour of both the root mean square error and the square of the correlation coefficient in the framework of a Monte Carlo experiment. Finally, the approaches presented in this paper are applied to a real data set and their performance is compared.
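A minimal numpy sketch of the centre-and-range idea on synthetic data (all coefficients are illustrative): fit one ordinary least-squares model on interval mid-points and another on ranges, then recombine the two predictions into lower and upper bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
lo = rng.uniform(0, 10, size=(100, 1))            # lower bounds of x
hi = lo + rng.uniform(1, 3, size=(100, 1))        # upper bounds of x
x_mid, x_rng = (lo + hi) / 2, hi - lo
y_mid = 2.0 * x_mid + 1.0 + rng.normal(0, 0.1, x_mid.shape)   # true slope 2
y_rng = 0.5 * x_rng + rng.normal(0, 0.05, x_rng.shape)        # true slope 0.5

def ols(X, y):
    """Least squares with an intercept column; returns (intercept, slope)."""
    X1 = np.hstack([np.ones((len(X), 1)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta.ravel()

b_mid, b_rng = ols(x_mid, y_mid), ols(x_rng, y_rng)

def predict_interval(x_lo, x_hi):
    m = b_mid[0] + b_mid[1] * (x_lo + x_hi) / 2   # predicted mid-point
    r = b_rng[0] + b_rng[1] * (x_hi - x_lo)       # predicted range
    return m - r / 2, m + r / 2                   # lower and upper bounds

print(predict_interval(4.0, 6.0))   # roughly (10.5, 11.5): mid ~11, range ~1
```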
5.
We develop a data structure for maintaining a dynamic multiset that uses bits and O(1) words, in addition to the space required by the n elements stored, supports searches in worst-case time and updates in amortized time. Compared to earlier data structures, we improve the space requirements from O(n) bits to bits, but the running time of updates is amortized, not worst-case.
6.
Selective Quantitative Analysis and Interval Model Checking: Verifying Different Facets of a System
In this work we propose a verification methodology consisting of selective quantitative timing analysis and interval model checking. Our methods can aid not only in determining if a system works correctly, but also in understanding how well the system works. The selective quantitative algorithms compute minimum and maximum delays over a selected subset of system executions. A linear-time temporal logic (LTL) formula is used to select either infinite paths or finite intervals over which the computation is performed. We show how tableaux for LTL formulas can be used for selecting either paths or intervals and also for model checking formulas interpreted over paths or intervals. To demonstrate the usefulness of our methods we have verified a complex and realistic distributed real-time system. Our tool has been able to analyze the system and to compute the response time of the various components. Moreover, we have been able to identify inefficiencies that caused the response time to increase significantly (by about 50%). After changing the design we not only verified that the response time was lower, but were also able to determine the causes for the poor performance of the original model using interval model checking.
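The quantitative core of the method, computing minimum and maximum delays over a set of selected executions, can be sketched in a few lines. The toy version below selects simply "all paths from src to dst" on an acyclic weighted graph, standing in for the paper's LTL-based selection.

```python
# Sketch of selective quantitative analysis on a toy model: minimum and
# maximum accumulated delay over all paths between two events, on an
# acyclic weighted transition graph.

import math

def min_max_delay(edges, src, dst):
    """edges: {state: [(next_state, delay), ...]} -- assumed acyclic."""
    best = {dst: (0, 0)}                   # state -> (min, max) delay to dst

    def solve(s):
        if s in best:
            return best[s]
        lo, hi = math.inf, -math.inf
        for t, d in edges.get(s, []):
            tlo, thi = solve(t)
            lo, hi = min(lo, d + tlo), max(hi, d + thi)
        best[s] = (lo, hi)
        return best[s]

    return solve(src)

g = {"req": [("grant", 2), ("retry", 1)],
     "retry": [("grant", 4)],
     "grant": []}
print(min_max_delay(g, "req", "grant"))   # (2, 5): direct vs. via retry
```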
7.
Sérgio Vale Aguiar Campos Edmund Clarke 《International Journal on Software Tools for Technology Transfer (STTT)》1999,2(3):260-269
The task of checking if a computer system satisfies its timing specifications is extremely important. These systems are often used in critical applications where failure to meet a deadline can have serious or even fatal consequences. This paper presents an efficient method for performing this verification task. In the proposed method a real-time system is modeled by a state-transition graph represented by binary decision diagrams. Efficient symbolic algorithms exhaustively explore the state space to determine whether the system satisfies a given specification. In addition, our approach computes quantitative timing information such as minimum and maximum time delays between given events. These results provide insight into the behavior of the system and assist in the determination of its temporal correctness. The technique evaluates how well the system works or how seriously it fails, as opposed to only whether it works or not. Based on these techniques a verification tool called Verus has been constructed. It has been used in the verification of several industrial real-time systems such as the robotics system described below. This demonstrates that the method proposed is efficient enough to be used in real-world designs. The examples verified show how the information produced can assist in designing more efficient and reliable real-time systems.
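The style of exhaustive symbolic exploration described here can be sketched with explicit Python sets standing in for BDDs (a drastic simplification of what Verus does): a breadth-first fixpoint that reports the minimum number of transitions from the initial states to a target set.

```python
def min_steps(init, target, post):
    """post(S) -> set of successors of every state in S."""
    frontier, reached, steps = set(init), set(init), 0
    while frontier:
        if frontier & target:
            return steps                 # earliest time the target is hit
        frontier = post(frontier) - reached
        reached |= frontier
        steps += 1
    return None                          # target unreachable

# Toy model: a counter modulo 8 that increments by 1 or jumps by 3.
post = lambda S: {(s + 1) % 8 for s in S} | {(s + 3) % 8 for s in S}
print(min_steps({0}, {5}, post))   # -> 3  (e.g. 0 -> 1 -> 2 -> 5)
```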
8.
HYTECH: a model checker for hybrid systems
Thomas A. Henzinger Pei-Hsin Ho Howard Wong-Toi 《International Journal on Software Tools for Technology Transfer (STTT)》1997,1(1-2):110-122
9.
Roberto Barbuti Nicoletta De Francesco Antonella Santone Gigliola Vaglini 《Software》1999,29(12):1123-1147
LOTOS is a formal specification language for concurrent and distributed systems. Basic LOTOS is the version of LOTOS without value‐passing. A widely used approach to the verification of temporal properties is model checking. Often, in this approach the formal specification is translated into a labeled transition system on which formulae expressing properties are checked. A problem with this verification technique is state explosion: concurrent systems are often represented by automata with a prohibitive number of states. In this paper we show how, given a set ρ of actions, it is possible to automatically obtain for a Basic LOTOS program a reduced transition system to which only the arcs labeled by actions in ρ belong. The set ρ of actions plays a fundamental role in conjunction with a temporal logic defined by the authors in a previous paper: selective mu‐calculus. The reduced system with respect to ρ preserves the truth value of all selective mu‐calculus formulae with actions from the set ρ. We act at both syntactic and semantic levels. From a syntactic point of view, we define a set of transformation rules obtaining a smaller program. On the semantic side, we define a non‐standard semantics which dynamically reduces the transition system during generation. We present a tool implementing both the syntactic and the semantic reduction. Copyright © 1999 John Wiley & Sons, Ltd.
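One naive, explicit-state reading of the reduction (an interpretation for illustration only; the paper works syntactically and with a non-standard semantics, avoiding construction of the full system) is: keep only ρ-labelled arcs, letting each kept arc absorb any finite prefix of non-ρ moves.

```python
def reduce_lts(trans, rho):
    """trans: set of (state, action, state). Returns reduced transitions."""
    states = {s for s, _, t in trans} | {t for _, _, t in trans}

    def silent_closure(s):               # states reachable via non-rho arcs
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for p, a, q in trans:
                if p == u and a not in rho and q not in seen:
                    seen.add(q)
                    stack.append(q)
        return seen

    reduced = set()
    for s in states:
        for u in silent_closure(s):
            for p, a, q in trans:
                if p == u and a in rho:
                    reduced.add((s, a, q))   # rho-arc absorbs silent prefix
    return reduced

t = {(0, "tau", 1), (1, "send", 2), (2, "log", 3), (3, "recv", 0)}
print(sorted(reduce_lts(t, {"send", "recv"})))
# -> [(0, 'send', 2), (1, 'send', 2), (2, 'recv', 0), (3, 'recv', 0)]
```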
10.
The Spatio-Temporal Consistency Language (STeC) is a high-level modeling language that deals natively with spatio-temporal behaviour, i.e., behaviour relating to certain locations and time. Such restriction by both locations and time is of prime importance for some types of real-time systems. CCSL is a formal specification language based on logical clocks. It is used to describe some crucial safety properties for real-time systems, due to its powerful expressiveness of logical and chronometric time constraints. We consider a novel verification framework combining STeC and CCSL, with the advantages of addressing spatio-temporal consistency of system behaviour and easily expressing some crucial time constraints. We propose a theory combining these two languages and a method verifying CCSL properties in STeC models. We adopt UPPAAL as the model checking tool and give a simple example to illustrate how to carry out verification in our framework.
11.
The structure of data in computer-based files is information which may not be explicit in the files themselves, but is incorporated in part in the computer software designed to process the files. If a computer-processable file of data is to be processed using a “system” other than the one used to generate the file initially, conversion of the file to another format is normally necessary. A format, called FILEMATCH, is presented which for structures encountered in earth science data, incorporates the structural information in the files themselves, thus providing a medium for interchange of files among a variety of software systems.
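Since the abstract does not give FILEMATCH's actual record layout, the sketch below illustrates only the general principle with a hypothetical format: a header embedded in the file describes the fields, so a reader needs no out-of-band knowledge of the structure.

```python
# Self-describing file sketch (hypothetical layout, not FILEMATCH itself):
# a JSON header lists field names and struct codes, followed by packed rows.

import json, struct

FIELDS = [("station_id", "i"), ("depth_m", "f"), ("porosity", "f")]

def write_selfdescribing(path, rows):
    header = json.dumps({"fields": FIELDS}).encode()
    with open(path, "wb") as f:
        f.write(struct.pack("I", len(header)))   # header length prefix
        f.write(header)
        fmt = "".join(code for _, code in FIELDS)
        for row in rows:
            f.write(struct.pack(fmt, *row))

def read_selfdescribing(path):
    with open(path, "rb") as f:
        (hlen,) = struct.unpack("I", f.read(4))
        fields = json.loads(f.read(hlen))["fields"]
        fmt = "".join(code for _, code in fields)
        size = struct.calcsize(fmt)
        names = [name for name, _ in fields]
        while chunk := f.read(size):             # one packed record at a time
            yield dict(zip(names, struct.unpack(fmt, chunk)))

write_selfdescribing("demo.dat", [(7, 120.5, 0.31), (8, 133.0, 0.28)])
print(list(read_selfdescribing("demo.dat")))
```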
12.
A step toward STEP-compatible engineering data management: the data models of product structure and engineering changes
In an iterative design process, there is a large amount of engineering data to be processed. Well-managed engineering data can ensure the competitiveness of companies in the market. It has been recognized that a product data model is the basis for establishing an engineering database. To fully support complete product data representation over the product life cycle, an international standard for product data representation and exchange, STEP, is applied to model the representation of a product. In this paper, the architecture of an engineering data management (EDM) system is described, which consists of an integrated product database. Six STEP-compatible data models are constructed to demonstrate the integrability of the EDM system using a common data modeling format. These data models are product definition, product structure, shape representation, engineering change, approval, and production scheduling. They are defined according to the integrated resources of STEP/ISO 10303 (Parts 41-44), which support a complete product information representation and a standard data format. Thus, application systems, such as CAD/CAM and MRP systems, can interact with the EDM system by accessing the database based on the STEP data exchange standard.
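As a toy illustration of how such data models relate (hypothetical classes, not the STEP/ISO 10303 schemas themselves), product definitions can form an assembly structure while an engineering change references the definitions it revises together with an approval status.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProductDefinition:
    part_id: str
    revision: str
    children: List["ProductDefinition"] = field(default_factory=list)

@dataclass
class EngineeringChange:
    change_id: str
    affected: List[ProductDefinition]
    approval_status: str = "pending"   # e.g. pending / approved / rejected

wheel = ProductDefinition("WHL-01", "A")
axle = ProductDefinition("AXL-01", "A", children=[wheel])
ec = EngineeringChange("EC-100", affected=[wheel])
ec.approval_status = "approved"
print(ec)
```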
13.
14.
This paper describes an ongoing work in the development of a finite element analysis system, called TopFEM, based on the compact topological data structure, TopS [1], [2]. This new framework was written to take advantage of the topological data structure together with object-oriented programming concepts to handle a variety of finite element problems, spanning from fracture mechanics to topology optimization, in an efficient, but generic fashion. The class organization of the TopFEM system is described and discussed within the context of other frameworks in the literature that share similar ideas, such as GetFEM++, deal.II, FEMOOP and OpenSees. Numerical examples are given to illustrate the capabilities of TopS attached to a finite element framework in the context of fracture mechanics and to establish a benchmark with other implementations that do not make use of a topological data structure.
15.
Gianni Franceschini Roberto Grossi J. Ian Munro Linda Pagli 《Journal of Computer and System Sciences》2004,68(4):788-807
An implicit data structure for the dictionary problem maintains n data values in the first n locations of an array in such a way that it efficiently supports the operations insert, delete and search. No information other than that in O(1) memory cells and in the input data is to be retained; and the only operations performed on the data values (other than reads and writes) are comparisons. This paper describes the implicit B-tree, a new data structure supporting these operations in block transfers like in regular B-trees, under the realistic assumption that a block stores keys, so that reporting r consecutive keys in sorted order has a cost of block transfers. En route a number of space efficient techniques for handling segments of a large array in a memory hierarchy are developed. Being implicit, the proposed data structure occupies exactly ⌈n/B⌉ blocks of memory after each update, where n is the number of keys after each update and B is the number of keys contained in a memory block. In main memory, the time complexity of the operations is , disproving a conjecture of the mid 1980s.
16.
Andrew T. F. Hutt 《Software》1979,9(2):157-169
One of the chief difficulties which needs to be overcome during the early design stages of a system is that of establishing a satisfactory design for that system. From the time it was first conceived, it was apparent that the Relational Data Base Management System is like a compiler insofar as it takes a succession of user requests for information formulated in an applied predicate calculus and translates each one into a series of calls which access an underlying data base and transform data from that data base into the form the user wishes to see. This paper compares the architecture of the Relational Data Base Management System with that of a compiler, and then demonstrates the use of the architecture when processing a language based on an applied predicate calculus. Finally, the paper describes a number of extensions to that architecture which are required to solve the particular problems raised by the data base system.
17.
Tao Luo Yin Liao Guoliang Chen Yunquan Zhang 《International Journal of Parallel, Emergent and Distributed Systems》2016,31(3):233-253
In response to the high demand of big data analytics, several programming models for large and distributed cluster systems have been proposed and implemented, such as MapReduce, Dryad and Pregel. However, compared with high performance computing areas, the basis and principles of the computation and communication behaviour of big data analytics are not well studied. In this paper, we review the current big data computational models DOT and DOT Advanced, and propose a more general and practical model, p-DOT (p-phases DOT). p-DOT is not a simple extension, but has profound significance: in general terms, any big data analytics job execution expressed in the DOT model or the bulk synchronous parallel model can be represented by it; in practical terms, it considers I/O behaviour to evaluate performance overhead. Moreover, we provide a cost function of p-DOT implying that the optimal number of machines is near-linear in the square root of the input size for a fixed algorithm and workload, and show that the processing paradigm of p-DOT is scalable and fault-tolerant. Finally, we demonstrate the effectiveness of the model through several experiments.
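The square-root claim falls out of any cost function that balances a parallelizable term against a per-machine overhead. The sketch below uses an assumed form c(p) = a·n/p + b·p, which is illustrative and not the paper's own cost function.

```python
# Illustrative cost model (assumed coefficients, not from the paper):
# a parallelizable work term a*n/p plus a coordination overhead b*p.

import math

def cost(p, n, a=1.0, b=50.0):
    return a * n / p + b * p

def best_p(n, a=1.0, b=50.0):
    # d/dp (a*n/p + b*p) = 0  =>  p* = sqrt(a*n/b), proportional to sqrt(n)
    return math.sqrt(a * n / b)

for n in [1e6, 4e6, 16e6]:
    p = best_p(n)
    print(f"n={n:.0e}  p*={p:6.0f}  cost={cost(p, n):8.0f}")
# Quadrupling n doubles p*: the optimal machine count grows like sqrt(n).
```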
18.
Adaptive piggybacking: a novel technique for data sharing in video-on-demand storage servers
Recent technology advances have made multimedia on-demand services, such as home entertainment and home-shopping, important to the consumer market. One of the most challenging aspects of this type of service is providing access either instantaneously or within a small and reasonable latency upon request. We consider improvements in the performance of multimedia storage servers through data sharing between requests for popular objects, assuming that the I/O bandwidth is the critical resource in the system. We discuss a novel approach to data sharing, termed adaptive piggybacking, which can be used to reduce the aggregate I/O demand on the multimedia storage server and thus reduce latency for servicing new requests.
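A back-of-the-envelope sketch of the piggybacking arithmetic (the 5% rate deviation and all names are assumed examples, not the paper's parameters): slow the leading stream and speed up the trailing one until their playback positions coincide, after which a single I/O stream can serve both requests.

```python
def merge_point(gap, rate=1.0, delta=0.05):
    """Return (catch-up time in s, playback position at merge in s)."""
    closing_speed = 2 * rate * delta          # leader slowed, follower sped up
    t = gap / closing_speed                   # seconds until positions meet
    merge_pos = (1 - delta) * rate * t + gap  # leader's position at merge
    return t, merge_pos

t, pos = merge_point(gap=30.0)                # second request 30 s behind
print(f"streams merge after {t:.0f} s at playback position {pos:.0f} s")
# -> merge after 300 s at position 315 s; one disk stream serves both after.
```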
19.
Gopal Racherla Sridhar Radhakrishnan 《Pattern recognition》2002,35(10):2303-2309
The data structure that is probably most used in the pattern recognition and image processing of geometric objects is the segment tree and its optimized variant, the “layered segment tree”. In all the versions currently known, except the work of Vaishnavi and Wood described later, these data structures do not operate in real time. Even in the latter scheme, although the structure can be implemented in real time and in an on-line fashion, the operation of insertion involves sorting the representations of the line segments in the tree. In essence, for all the reported algorithms, there is no known strategy to insert the segments one by one, other than the trivial strategy of processing them all together in batched mode. In this paper, we present a strategy in which all the operations done on the tree can be done efficiently. Indeed, by improving this bottleneck, we prove that an arbitrary horizontal segment can be inserted into the data structure without invoking an expensive sorting process. We show that while this is accomplished by maintaining the same space and query complexity of the best-known algorithm, the version presented here is applicable to on-line real-time processing of line segments. The paper thus has applications in all areas of pattern recognition and image processing involving geometric objects.
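For contrast, the sketch below shows the standard segment tree in the setting where one-by-one insertion is already easy, namely when endpoints come from a fixed coordinate grid; the paper's contribution is precisely to achieve on-line insertion without such an assumption or a batch sort.

```python
# Compact segment tree over a fixed grid of size n: inserting one
# horizontal segment touches O(log n) nodes, with no global sort.

class SegmentTree:
    def __init__(self, n):
        self.n = n
        self.ids = [[] for _ in range(4 * n)]   # segment ids per node

    def insert(self, lo, hi, seg_id, node=1, l=0, r=None):
        """Store seg_id on the O(log n) canonical nodes covering [lo, hi)."""
        if r is None:
            r = self.n
        if hi <= l or r <= lo:                  # no overlap with this node
            return
        if lo <= l and r <= hi:                 # node fully covered
            self.ids[node].append(seg_id)
            return
        m = (l + r) // 2
        self.insert(lo, hi, seg_id, 2 * node, l, m)
        self.insert(lo, hi, seg_id, 2 * node + 1, m, r)

    def stab(self, x):
        """All segments whose x-range contains grid point x."""
        out, node, l, r = [], 1, 0, self.n
        while True:
            out += self.ids[node]
            if l + 1 == r:
                return out
            m = (l + r) // 2
            node, l, r = (2 * node, l, m) if x < m else (2 * node + 1, m, r)

t = SegmentTree(8)
t.insert(1, 5, "A")
t.insert(3, 8, "B")
print(t.stab(4))   # -> ['B', 'A']  (both segments cover x = 4)
```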
20.
Elizabeth M. Hashimoto Edwin M.M. Ortega Gilberto A. Paula Mauricio L. Barreto 《Computational statistics & data analysis》2011,55(2):993-1007
In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped into k intervals so that ties are eliminated. Thus, the data modeling is performed by considering discrete lifetime regression models. The model parameters are estimated by using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, termed global influence, and influence measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to those measures, the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed by using the proposed regression models.
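A minimal sketch of a discrete-time (grouped) survival regression with one of the four candidate link functions, the complementary log-log, fitted by maximum likelihood; the data, interval count and single covariate below are synthetic assumptions, not the paper's data set.

```python
# Grouped survival regression sketch: each subject contributes one Bernoulli
# term per interval survived, with event indicator 1 in the failure interval.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
K = 4                                        # number of grouped intervals
x = rng.normal(size=200)                     # one covariate per subject
rows = []                                    # person-interval expansion
for xi in x:
    for j in range(K):
        h = 1 - np.exp(-np.exp(-2.0 + 0.4 * j + 0.8 * xi))  # true hazard
        event = rng.random() < h
        rows.append((j, xi, int(event)))
        if event:
            break
J, X, Y = map(np.array, zip(*rows))

def negloglik(theta):
    gamma, beta = theta[:K], theta[K]        # interval effects + slope
    eta = gamma[J] + beta * X
    h = 1 - np.exp(-np.exp(eta))             # inverse cloglog link
    h = np.clip(h, 1e-10, 1 - 1e-10)
    return -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h))

fit = minimize(negloglik, x0=np.zeros(K + 1), method="BFGS")
print("interval effects:", np.round(fit.x[:K], 2),
      "slope:", round(fit.x[K], 2))          # should be near 0.8
```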