Similar Documents
20 similar documents were retrieved.
1.
The paper presents divergence control methods for epsilon serializability (ESR) in centralized databases. ESR relaxes the strictness of serializability (SR) in transaction processing by allowing limited inconsistency. The bounded inconsistency is maintained automatically by divergence control (DC) methods, much as SR is maintained by concurrency control (CC) mechanisms; however, DC for ESR allows more concurrency than CC for SR. The authors first demonstrate the feasibility of ESR by presenting the design of three representative DC methods: two-phase locking, timestamp ordering, and optimistic approaches. DC methods are designed by systematically enhancing CC algorithms in two stages: extension and relaxation. In the extension stage, a CC algorithm is analyzed to locate the places where it identifies non-SR conflicts between database operations. In the relaxation stage, these non-SR conflicts are relaxed to allow controlled inconsistency. The authors then demonstrate the applicability of ESR by presenting the design of DC methods for other well-known inconsistency specifications, such as absolute value, age, and the total number of nonserializably read data items. In addition, they present a performance study, using an optimistic divergence control algorithm as an example, which shows that a substantial improvement in concurrency can be achieved in ESR by allowing a small amount of inconsistency.
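To make the divergence-control idea concrete, here is a minimal, illustrative Python sketch (not the authors' algorithms): a lock table that keeps update transactions under ordinary two-phase locking but lets a query read through a write lock as long as the inconsistency it imports stays within a bound epsilon. The class name, the `pending_delta` parameter, and the single scalar bound are all simplifying assumptions.

```python
# Illustrative sketch only: a lock manager that relaxes read/write conflicts
# for query transactions, bounding the inconsistency they may observe.
# The bound `epsilon` and the notion of "pending delta" are simplifications
# of the inconsistency specifications discussed in the paper.

class DivergenceControlLock:
    def __init__(self, epsilon):
        self.epsilon = epsilon          # total inconsistency a query may import
        self.write_locks = {}           # item -> pending absolute change (delta)
        self.imported = {}              # query txn id -> inconsistency imported so far

    def acquire_write(self, item, pending_delta):
        # Update transactions still conflict with each other as in strict 2PL.
        if item in self.write_locks:
            return False                # block / retry as usual
        self.write_locks[item] = abs(pending_delta)
        return True

    def release_write(self, item):
        self.write_locks.pop(item, None)

    def acquire_read(self, txn_id, item):
        # A query may read through a write lock if the extra inconsistency it
        # would import keeps its running total within epsilon.
        delta = self.write_locks.get(item, 0.0)
        total = self.imported.get(txn_id, 0.0) + delta
        if total <= self.epsilon:
            self.imported[txn_id] = total
            return True                 # non-SR read, but bounded
        return False                    # fall back to blocking (SR behaviour)


if __name__ == "__main__":
    dc = DivergenceControlLock(epsilon=10.0)
    dc.acquire_write("account_A", pending_delta=4.0)   # update txn in progress
    print(dc.acquire_read("query_1", "account_A"))     # True: within the bound
    dc.acquire_write("account_B", pending_delta=9.0)
    print(dc.acquire_read("query_1", "account_B"))     # False: would exceed epsilon
```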

2.
Concurrent execution of database transactions is desirable from the point of view of speed, but may introduce inconsistencies. A commonly used criterion of correctness of a concurrent execution of transactions is serializability, i.e., the equivalence of the execution to some serial schedule or schedules. In the literature several transaction models have been used and several different notions of serializability have been introduced. In this paper, we investigate the various serializability families in the general transaction model, in the two-step model, and in the restricted two-step model. We also examine these families in the multiversion database model. This research was supported in part by a Japan Society for the Promotion of Science Research Fellowship, in part by a Scientific Research Grant-in-Aid from the Ministry of Education, Science and Culture of Japan, and in part by the Natural Science and Engineering Research Council of Canada under Grant No. A1617. Most of this work was done when the first author was a Visiting Professor in the Department of Applied Mathematics and Physics, Kyoto University, Kyoto, Japan.

3.
An interleaved execution of transactions in a database system is serializable if the effect of the execution is equivalent to that of some serial execution of the transactions. Several notions of serializability have been defined in the literature. They differ with respect to whether the effect is equivalent regarding the values read by the transactions or the final database state, that is, the values read by the fictitious final transaction. We introduce a generalized notion called S-serializability, where the transactions in a subset S read the same values as in some serial execution. We also introduce C-serializability, where the final values of the data items in a subset C are the same as in some serial execution. By combining these two notions, we obtain (S + C)-serializability. We give simple and intuitive characterizations for all these notions of serializability. This research is supported in part by the Natural Sciences and Engineering Research Council of Canada Individual Operating Grant A-3182.
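Restated in notation as a sketch (ours, not necessarily the paper's exact formalization), the three notions read as follows:

```latex
% Our notation, as a sketch: s is a schedule over transaction set T and item set D,
% readsfrom_s(t) is the reads-from relation of t in s, finalwrite_s(x) the last write on x.
\begin{align*}
s \text{ is } S\text{-serializable } (S \subseteq T)
  &\iff \exists\, \text{serial } s' \text{ over } T:\;
        \forall t \in S,\ \mathrm{readsfrom}_{s}(t) = \mathrm{readsfrom}_{s'}(t),\\
s \text{ is } C\text{-serializable } (C \subseteq D)
  &\iff \exists\, \text{serial } s' \text{ over } T:\;
        \forall x \in C,\ \mathrm{finalwrite}_{s}(x) = \mathrm{finalwrite}_{s'}(x),\\
s \text{ is } (S+C)\text{-serializable}
  &\iff \text{one serial } s' \text{ witnesses both conditions simultaneously.}
\end{align*}
% With the fictitious final transaction reading every item, taking S = T recovers
% view serializability and taking C = D recovers final-state serializability.
```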

4.
Snapshot isolation (SI) is commonly used in some commercial DBMSs with a multiversion concurrency control mechanism, since it never blocks read-only transactions. Recent database replication protocols have been designed using SI replicas, where transactions are first executed at a delegate replica and their updates (if any) are propagated to the rest of the replicas at commit time; i.e., they follow the Read One Write All (ROWA) approach. This paper provides a formalization that shows the correctness of abstract protocols which cover these replication proposals. These abstract protocols differ in the properties demanded for achieving a global SI level and in those needed for its generalized SI (GSI) variant, which allows reads from older snapshots. Additionally, we propose two more relaxed properties that also ensure a global GSI level. Thus, some applications can further optimize their performance in a replicated system while still obtaining GSI.
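As a hedged illustration of the certification step such ROWA protocols perform at commit time (and of why SI never blocks read-only transactions), the sketch below implements a first-committer-wins check over write sets; the `Certifier` class and its log are our own simplification, and the paper's protocols, including the GSI variants that certify against possibly older snapshots, are considerably more involved.

```python
# Minimal sketch of snapshot-isolation certification (first-committer-wins):
# a transaction started at snapshot `start_ts` may commit only if no transaction
# that committed after `start_ts` wrote an item in its write set.
# This illustrates the idea; it is not any specific replication protocol.

class Certifier:
    def __init__(self):
        self.commit_ts = 0
        self.committed_log = []   # list of (commit_ts, frozenset(write_set))

    def certify(self, start_ts, write_set):
        write_set = frozenset(write_set)
        if not write_set:                      # read-only: always commits, never blocks
            return True
        for ts, ws in self.committed_log:
            if ts > start_ts and ws & write_set:
                return False                   # write-write conflict with a concurrent txn
        self.commit_ts += 1
        self.committed_log.append((self.commit_ts, write_set))
        return True


if __name__ == "__main__":
    c = Certifier()
    print(c.certify(start_ts=0, write_set={"x"}))   # True, commits at ts 1
    print(c.certify(start_ts=0, write_set={"x"}))   # False, concurrent writer of x
    print(c.certify(start_ts=0, write_set=set()))   # True, read-only transaction
```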

5.
A well-known result by Sethi (1982) states that strict view- and final-state serializability are NP-complete in general, but decidable in polynomial time if useless values are absent. This paper falsifies one of these claims by proving that the absence of useless values is not sufficient to make strict final-state serializability decidable in polynomial time. Moreover, we show that a slightly stronger condition is sufficient, namely the absence of dead values. All claims refer to the two-step transaction model which is used by Bernstein et al. (1979), Kelter (1985, 1986) and Papadimitriou (1979). These results give new insights into the differences between strict view- and final-state serializability and between uselessness and deadness of values.

6.
The multi-level transaction concept provides a powerful tool for structuring activities in multidatabase systems. However, even multi-level serializability is sometimes too restrictive as a correctness criterion, either because of very high concurrency requirements, or because of the practical difficulties of implementing a scheduler in actual production environments. The extended multi-level transaction model presented in this paper supports higher concurrency in cases where higher level operations commute in one direction, but not in the other; i.e., when it is valid to interchange them when they occur in one order in a history, but not when they occur in the other order. We introduce a relaxed correctness criterion based on allowing a bounded number of out-of-order conflicts at each level in the multi-level framework, where the bound can be different for different levels. Finally we discuss the properties of compensation in this framework, developing a theory of compensation which depends only on the semantics of the operations and not on the particular state of the database. We illustrate the use of these concepts in the context of a particular class of practical applications. Recommended by: Tamer Özsu. This work was supported in part by MCC, Bellcore, and by the Texas Advanced Research Program under Grant No. 3652008. Majority of Sheth's work was performed at Bellcore.
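The fragment below is only a toy rendering of the relaxed criterion, under assumed operation semantics: per level, a scheduler tolerates at most a bounded number of conflicting pairs that would have to be reordered against the direction in which they commute. The `commutes_forward` predicate and the bank-style operations are hypothetical, not the paper's model.

```python
# Toy sketch of bounded out-of-order conflicts per level (not the paper's scheduler).
# commutes_forward(a, b) returns True when a-then-b may be reordered to b-then-a.

def commutes_forward(a, b):
    # Hypothetical example semantics: deposits commute with each other, and a
    # withdraw may be moved after a deposit, but not before it.
    if a[0] == "deposit" and b[0] == "deposit":
        return True
    return a[0] == "withdraw" and b[0] == "deposit"

class RelaxedLevelScheduler:
    def __init__(self, bound):
        self.bound = bound                       # per-level tolerance, e.g. {2: 1}
        self.used = {lvl: 0 for lvl in bound}
        self.pending = {lvl: [] for lvl in bound}

    def admit(self, level, op):
        """Admit op at `level` only if the out-of-order conflicts stay within the bound."""
        violations = sum(
            1 for prev in self.pending[level]
            if not commutes_forward(prev, op)    # op cannot be moved ahead of prev
        )
        if self.used[level] + violations > self.bound[level]:
            return False                         # delay or abort: bound would be exceeded
        self.used[level] += violations
        self.pending[level].append(op)
        return True

if __name__ == "__main__":
    s = RelaxedLevelScheduler(bound={2: 1})
    print(s.admit(2, ("deposit", "acct", 20)))   # True: nothing pending yet
    print(s.admit(2, ("withdraw", "acct", 50)))  # True: one conflict, within bound 1
    print(s.admit(2, ("withdraw", "acct", 10)))  # False: would add two more conflicts
```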

7.
To enforce global serializability in a multidatabase environment, the multidatabase transaction manager must take into account the indirect (transitive) conflicts between multidatabase transactions caused by local transactions. Such conflicts are difficult to resolve because the behavior, or even the existence, of local transactions is not known to the multidatabase system. To overcome these difficulties, we propose to incorporate additional data manipulation operations in the subtransactions of each multidatabase transaction. We show that if these operations create direct conflicts between subtransactions at each participating local database system, indirect conflicts can be resolved even if the multidatabase system is not aware of their existence. Based on this approach, we introduce optimistic and conservative multidatabase transaction management methods that require the local database systems to ensure only local serializability. The proposed methods do not violate the autonomy of the local database systems and guarantee global serializability by preventing multidatabase transactions from being serialized in different ways at the participating database systems. Refinements of these methods are also proposed for multidatabase environments in which the participating database systems allow only cascadeless schedules or in which transactions have analogous execution and serialization orders. In particular, we show that forced local conflicts can be eliminated in rigorous local systems, that local cascadelessness simplifies the design of a global scheduler, and that local strictness offers no significant advantage over cascadelessness.
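A minimal sketch of the forced-conflict idea (in the spirit of the well-known ticket technique; the class names and the in-memory "sites" are ours, and the paper's optimistic and conservative methods are more elaborate): every subtransaction reads and increments a dedicated ticket item at its local site, so subtransactions of different global transactions conflict directly there and the local concurrency control alone fixes their relative order.

```python
# Illustrative only: forcing a direct conflict between the subtransactions of
# different global (multidatabase) transactions at each local site by making
# every subtransaction read-and-increment a per-site "ticket" data item.
# The local DBMS only needs to guarantee local serializability; the ticket values
# observed at each site expose the relative serialization order of the globals.

class LocalSite:
    def __init__(self, name):
        self.name = name
        self.ticket = 0          # extra data item added by the multidatabase layer
        self.data = {}

    def run_subtransaction(self, operations):
        observed = self.ticket   # forced conflict: read and update the ticket
        self.ticket += 1
        for key, value in operations:
            self.data[key] = value
        return observed          # reported back to the global transaction manager


def globally_consistent(per_site_tickets):
    """Check that every site observed the global transactions in the same relative order."""
    orders = {
        tuple(txn for txn, _ in sorted(site.items(), key=lambda kv: kv[1]))
        for site in per_site_tickets
    }
    return len(orders) == 1


if __name__ == "__main__":
    bank, broker = LocalSite("bank"), LocalSite("broker")
    # Global transactions G1 and G2 each run one subtransaction at both sites.
    per_site_tickets = [
        {"G1": bank.run_subtransaction([("acct", 100)]),
         "G2": bank.run_subtransaction([("acct", 200)])},
        {"G1": broker.run_subtransaction([("pos", 5)]),
         "G2": broker.run_subtransaction([("pos", 7)])},
    ]
    print(globally_consistent(per_site_tickets))   # True: both sites ordered G1 before G2
```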

8.
A year-long trial has seen a large, lightweight verification problem handled by an ad hoc distributed network of identical solvers. The problem trialled is the semantic analysis of the C code in the Linux kernel to exclude a common deadlock possibility. The aim of the programme behind the experiment is to develop a viable, loosely coupled distributed formal method to which a community of interested part-time helpers on the net can lend their computing cycles as they wish, or to which they can send their own verification problems for solving.

9.
As information systems become more complex, formal methods offer a solution to the increasing problem of ensuring correctness of design and implementation. This paper illustrates the use of mathematical specification to formalise the notion of explanation. This is relevant to any information system which engages in inference. An information system is regarded as a homogeneous function on states of information. An explanation for a conclusion is regarded as a weakening of the theory relating input to output, such that the same conclusion is reached. As an example, the specification is implemented for a simple statistical classifier to assist medical diagnosis.

10.
11.
Word frequencies in text documents can be reasonably described by the Mandelbrot distribution, which has Zipf's Law as a special case. Furthermore, the growth of vocabulary size as a function of text size (its number of words) is described by Heaps' Law. It has been shown that these two experimental laws are related.

In this paper we go a step further and provide a formal derivation of Heaps' Law from the Mandelbrot distribution. We also specify the range of validity within which Heaps' Law applies.
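For orientation, here is a hedged sketch of the standard heuristic linking the two laws (the paper's formal derivation and its statement of the validity range are more careful):

```latex
% Sketch of the standard heuristic only; the constants and the precise validity
% range are what the paper's formal derivation establishes.
\begin{align*}
\text{Mandelbrot:}\quad & p_r = \frac{C}{(r+\rho)^{\alpha}},
   \qquad r = 1,2,\dots \quad (\text{Zipf's Law: } \rho = 0,\ \alpha \approx 1),\\
\text{Expected vocabulary:}\quad & \mathbb{E}[V(n)]
   = \sum_{r \ge 1}\bigl(1-(1-p_r)^{\,n}\bigr)
   \;\approx\; \#\{\, r : n\,p_r \gtrsim 1 \,\}
   \;\approx\; (Cn)^{1/\alpha} - \rho,\\
\text{Heaps' Law:}\quad & V(n) \;\approx\; K\, n^{\beta}
   \quad\text{with}\quad \beta = \tfrac{1}{\alpha}.
\end{align*}
```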


12.
This article presents a formal dialogue game for adjudication dialogues. Existing AI & law models of legal dialogues and argumentation-theoretic models of persuasion are extended with a neutral third party, to give a more realistic account of the adjudicator's role in legal procedures. The main feature of the model is a division into an argumentation phase, where the adversaries plead their case and the adjudicator has a largely mediating role, and a decision phase, where the adjudicator decides the dispute on the basis of the claims, arguments and evidence put forward in the argumentation phase. The model allows for explicit decisions on admissibility of evidence and burden of proof by the adjudicator in the argumentation phase. Adjudication is modelled as putting forward arguments, in particular undercutting and priority arguments, in the decision phase. The model reconciles logical aspects of burden of proof induced by the defeasible nature of arguments with dialogical aspects of burden of proof as something that can be allocated by explicit decisions on legal grounds.

13.
Primeness of nD polynomial matrices is of fundamental importance in multidimensional systems theory. In this paper we define a quantity which describes the "amount of primeness" of a matrix and identify it as the concept of grade in commutative algebra. This enables us to produce a theory which unifies many existing results, such as the Bézout identities and complementation laws, while placing them on a firm algebraic footing. We also present applications to autonomous systems, behavioural minimality of regular systems, and transfer matrix factorization. This work has been sponsored by EPSRC Grant No. GR/K 18504.
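For reference, a sketch of the standard commutative-algebra definition the abstract invokes (our summary, not the paper's formulation):

```latex
% Standard definition, stated as a sketch; the paper develops the precise link to
% zero-, minor- and factor-primeness of nD polynomial matrices.
\begin{align*}
&\text{For a proper ideal } I \subseteq R = k[z_1,\dots,z_n]:\\
&\qquad \operatorname{grade}(I)
   \;=\; \max\{\, \ell : a_1,\dots,a_\ell \in I \text{ is an } R\text{-regular sequence} \,\}
   \;=\; \min\{\, i : \operatorname{Ext}^{i}_{R}(R/I,\,R) \neq 0 \,\}.\\
&\text{For a } p \times q \text{ polynomial matrix } M \text{ of full row rank, one measures }
   \operatorname{grade}\bigl(I_p(M)\bigr),\\
&\text{where } I_p(M) \text{ is the ideal of } p \times p \text{ minors; the larger it is, the ``more prime'' } M \text{ is.}
\end{align*}
```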

14.
15.
In multi-agent systems (MAS), negotiation provides a powerful metaphor for automating the allocation and reallocation of resources. Methods for automated negotiation in MAS include auction-based protocols and alternating offer bargaining protocols. Recently, argumentation-based negotiation has been accepted as a promising alternative to such approaches. Interest-based negotiation (IBN) is a form of argumentation-based negotiation in which agents exchange (1) information about their underlying goals; and (2) alternative ways to achieve these goals. However, the usefulness of IBN has mostly been established in the literature by appeal to intuition or by use of specific examples. In this paper, we propose a new formal model for reasoning about interest-based negotiation protocols. We demonstrate the usefulness of this framework by defining and analysing two different IBN protocols. In particular, we characterise conditions that guarantee their advantage (in the sense of expanding the set of individually rational deals) over the more classic proposal-based approaches to negotiation.

16.
D. Bégay & A. Rauzy, Software, 2001, 31(2): 191-208
In this article, we report a real-life application of so-called 'Formal Methods'. The part of the project we were involved in was to verify that an embedded circuit satisfies a safety property. We describe the circuit as well as the mathematical and computer tools we used. We discuss methodological issues and we present some of the various experiments we performed. Finally, we draw some general conclusions about the practicability of formal verification techniques. Copyright © 2001 John Wiley & Sons, Ltd.

17.
18.
19.
20.
Object-oriented database systems are the focus of current research and development efforts. Yet, there is no commonly accepted object model, nor is it clear whether such a model can be developed. This paper reports on efforts to develop a formal framework that contains most features found in current object-oriented database systems. The framework contains two parts. The first is a structural object model, including concepts such as structured objects, identity, and some form of inheritance. For this model, we explain the distinction between values and (abstract) objects, describe a system as a directed graph, and discuss declarative languages. The second part deals with higher-order concepts, such as classes and functions as data, methods, and inheritance. This part is a sketch, and leaves many issues unresolved. Throughout the paper, the emphasis is on logic-oriented modeling.
