1.
The past few years have seen a dramatic increase in the business use of centralized multi-user microcomputer systems, without concomitant attention to the concurrency control mechanisms those systems employ
to ensure the integrity of the data they manage. This paper examines the state of the art, reviews previous work on the performance of
concurrency control, and proposes independent measures for an empirical study evaluating the efficiency and effectiveness of multi-user microcomputer
concurrency control.
2.
David Lomet Betty Salzberg 《The VLDB Journal The International Journal on Very Large Data Bases》1997,6(3):224-240
Although many suggestions have been made for concurrency in B-trees, few of these have considered recovery as well. We describe an approach which provides high concurrency while preserving
well-formed trees across system crashes. Our approach works for a class of index trees that is a generalization of the B-tree. This class includes some multi-attribute indexes and temporal indexes. Structural changes in an index tree are decomposed
into a sequence of atomic actions, each one leaving the tree well-formed and each working on a separate level of the tree.
All atomic actions on levels of the tree above the leaf level are independent of database transactions, and so are of short
duration. Incomplete structural changes are detected in normal operations and trigger completion.
Edited by A. Reuter. Received August 1995 / accepted July 1996
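The decomposition the abstract describes — structural changes split into atomic actions, each leaving the tree well-formed — can be sketched in miniature. The sketch below is our own illustration, not the paper's algorithm: a split is one atomic action that moves keys to a new sibling reachable via a side link, and posting the separator key to the parent is a separate, later atomic action.

```python
# Illustrative sketch of a B-link-style split as two independent atomic
# actions; names and structure are ours, not the paper's code.

class Node:
    def __init__(self, keys, right=None):
        self.keys = sorted(keys)
        self.right = right        # side link to the next node on this level

def atomic_split(node):
    """Atomic action 1: split a full node, leaving the level well-formed.
    Searchers reaching `node` still find moved keys via the side link."""
    mid = len(node.keys) // 2
    sibling = Node(node.keys[mid:], right=node.right)
    node.keys = node.keys[:mid]
    node.right = sibling
    return sibling, sibling.keys[0]   # separator to post to the parent later

def atomic_post(parent, separator, sibling):
    """Atomic action 2 (independent, may run much later): install the
    separator in the parent. If it never runs, the incomplete structural
    change is detected during normal operation and completed then."""
    parent.keys.append(separator)
    parent.keys.sort()

def search(node, key):
    """Follow side links, so searches work between the two atomic actions."""
    while node.right is not None and key >= node.right.keys[0]:
        node = node.right
    return key in node.keys
```

Because searches follow side links, the tree remains searchable between the two actions, which is what lets each action be short and independent of the enclosing database transaction.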
3.
Building knowledge base management systems
John Mylopoulos Vinay Chaudhri Dimitris Plexousakis Adel Shrufi Thodoros Topaloglou 《The VLDB Journal The International Journal on Very Large Data Bases》1996,5(4):238-263
Advanced applications in fields such as CAD, software engineering, real-time process control, corporate repositories and digital
libraries require the construction, efficient access and management of large, shared knowledge bases. Such knowledge bases
cannot be built using existing tools such as expert system shells, because these do not scale up, nor can they be built in
terms of existing database technology, because such technology does not support the rich representational structure and inference
mechanisms required for knowledge-based systems. This paper proposes a generic architecture for a knowledge base management
system intended for such applications. The architecture assumes an object-oriented knowledge representation language with
an assertional sublanguage used to express constraints and rules. It also provides for general-purpose deductive inference
and special-purpose temporal reasoning. Results reported in the paper address several knowledge base management issues. For
storage management, a new method is proposed for generating a logical schema for a given knowledge base. Query processing
algorithms are offered for semantic and physical query optimization, along with an enhanced cost model for query cost estimation.
On concurrency control, the paper describes a novel concurrency control policy which takes advantage of knowledge base structure
and is shown to outperform two-phase locking for highly structured knowledge bases and update-intensive transactions. Finally,
algorithms for compilation and efficient processing of constraints and rules during knowledge base operations are described.
The paper describes original results, including novel data structures and algorithms, as well as preliminary performance evaluation
data. Based on these results, we conclude that knowledge base management systems which can accommodate large knowledge bases
are feasible.
Edited by Gunter Schlageter and H.-J. Schek.
Received May 19, 1994 / Revised May 26, 1995 / Accepted September 18, 1995
4.
Analysis of locking behavior in three real database systems
Vigyan Singhal Alan Jay Smith 《The VLDB Journal The International Journal on Very Large Data Bases》1997,6(1):40-52
Concurrency control is essential to the correct functioning of a database due to the need for correct, reproducible results.
For this reason, and because concurrency control is a well-formulated problem, there has developed an enormous body of literature
studying the performance of concurrency control algorithms. Most of this literature uses either analytic modeling or random
number-driven simulation, and explicitly or implicitly makes certain assumptions about the behavior of transactions and the
patterns by which they set and unset locks. Because of the difficulty of collecting suitable measurements, there have been
only a few studies which use trace-driven simulation, and still less study directed toward the characterization of concurrency
control behavior of real workloads. In this paper, we present a study of three database workloads, all taken from IBM DB2
relational database systems running commercial applications in a production environment. This study considers topics such
as frequency of locking and unlocking, deadlock and blocking, duration of locks, types of locks, correlations between applications
of lock types, two-phase versus non-two-phase locking, when locks are held and released, etc. In each case, we evaluate the
behavior of the workload relative to the assumptions commonly made in the research literature and discuss the extent to which
those assumptions may or may not lead to erroneous conclusions.
Edited by H. Garcia-Molina. Received April 5, 1994 / Accepted November 1, 1995
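One property the trace study examines — two-phase versus non-two-phase locking — is mechanically checkable from a per-transaction lock trace: a transaction is two-phase iff it acquires no lock after its first release. A minimal sketch, with an assumed trace format of (operation, lock-name) pairs:

```python
def is_two_phase(trace):
    """Check the two-phase property on one transaction's lock trace.
    `trace` is a list of ("lock" | "unlock", lock_name) events; the
    transaction is two-phase iff no "lock" follows the first "unlock"."""
    shrinking = False
    for op, _name in trace:
        if op == "unlock":
            shrinking = True          # shrinking phase has begun
        elif op == "lock" and shrinking:
            return False              # growing after shrinking: violation
    return True
```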
5.
6.
In a distributed system, the concurrency control (CC) method employed has a major impact on the performance of the transaction processing system. This paper surveys the main concurrency control methods, the locking model, and the two-phase locking (2PL) protocol, and proposes a pessimistic, lock-based control method that obeys the 2PL protocol: the integral method. The method both reduces the amount of data transferred over the network and offers good concurrency, and it handles multi-replica concurrency control well. Experimental results show that the integral method achieves better performance than the alternatives.
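The 2PL protocol this entry builds on can be sketched as a toy lock manager; names and structure here are illustrative only, and the paper's integral method adds its own machinery on top of plain 2PL:

```python
# Toy sketch of two-phase locking (2PL): a transaction acquires all its
# locks before releasing any. Illustrative only.

class Transaction:
    def __init__(self, name):
        self.name = name
        self.held = set()
        self.shrinking = False   # set once the first lock is released

class LockManager:
    def __init__(self):
        self.owner = {}          # item -> transaction holding its lock

    def lock(self, txn, item):
        if txn.shrinking:
            raise RuntimeError("2PL violated: lock acquired after unlock")
        if self.owner.get(item, txn) is not txn:
            return False         # conflict: caller must wait or abort
        self.owner[item] = txn
        txn.held.add(item)
        return True

    def unlock(self, txn, item):
        txn.shrinking = True     # transaction enters its shrinking phase
        txn.held.discard(item)
        del self.owner[item]
```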
7.
蒋亚虎 《数字社区&智能家居》2007,(4):24-25
The concurrency control mechanism is an important component of database transaction management and one of the key measures of a database system's capability and performance. The goal of distributed concurrency control is to guarantee the consistency of distributed transactions and of the distributed database, to achieve serializability of distributed transactions, and to give transactions a good degree of concurrency so that the system delivers the efficiency users expect. This paper first discusses the serializability of concurrent transactions in distributed databases and, on that basis, proposes basic methods for distributed database concurrency control.
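Serializability, the property this entry centers on, can be tested with the classical precedence (conflict) graph: add an edge Ti→Tj for each pair of conflicting operations where Ti's comes first; the schedule is conflict-serializable iff the graph is acyclic. A small sketch with an assumed schedule format of (transaction, operation, item) triples:

```python
def conflict_serializable(schedule):
    """schedule: list of (txn, op, item) with op in {"r", "w"}.
    Builds the precedence graph and reports whether it is acyclic."""
    edges = set()
    for i, (ti, oi, xi) in enumerate(schedule):
        for tj, oj, xj in schedule[i + 1:]:
            # two operations conflict if they touch the same item,
            # come from different transactions, and at least one writes
            if ti != tj and xi == xj and "w" in (oi, oj):
                edges.add((ti, tj))
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)

    def cyclic(node, stack, done):
        if node in stack:
            return True
        if node in done:
            return False
        stack.add(node)
        found = any(cyclic(n, stack, done) for n in adj.get(node, ()))
        stack.discard(node)
        done.add(node)
        return found

    return not any(cyclic(n, set(), set()) for n in adj)
```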
9.
Managing database server performance to meet QoS requirements in electronic commerce systems
Patrick Martin Wendy Powley Hoi-Ying Li Keri Romanufa 《International Journal on Digital Libraries》2002,3(4):316-324
The performance of electronic commerce systems has a major impact on their acceptability to users. Different users also demand
different levels of performance from the system, that is, they will have different Quality of Service (QoS) requirements. Electronic commerce systems are the integration of several different types of servers and each server must
contribute to meeting the QoS demands of the users. In this paper we focus on the role, and the performance, of a database server within an electronic commerce system.
We examine the characteristics of the workload placed on a database server by an electronic commerce system and suggest a
range of QoS requirements for the database server based on this analysis of the workload. We argue that a database server
must be able to dynamically reallocate its resources in order to meet the QoS requirements of different transactions as the
workload changes. We describe Quartermaster, which is a system to support dynamic goal-oriented resource management in database
management systems, and discuss how it can be used to help meet the QoS requirements of the electronic commerce database server.
We provide an example of the use of Quartermaster that illustrates how the dynamic reallocation of memory resources can be
used to meet the QoS requirements of a set of transactions similar to transactions found in an electronic commerce workload.
We briefly describe the memory reallocation algorithms used by Quartermaster and present experiments to show the impact of
the reallocations on the performance of the transactions.
Published online: 22 August 2001
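The dynamic reallocation idea can be illustrated with a toy feedback step — not Quartermaster's actual algorithm, which the paper itself describes — that shifts memory from the workload class most comfortably meeting its response-time goal to the class missing its goal the worst:

```python
def reallocate(allocations, response_times, goals, step=1):
    """Toy goal-oriented reallocation step. `allocations` maps each
    workload class to its memory units; `response_times` and `goals`
    give measured and target response times. Moves `step` units from
    the class best meeting its goal to the class missing it the most."""
    miss = {c: response_times[c] / goals[c] for c in goals}
    worst = max(miss, key=miss.get)      # most in need of memory
    best = min(miss, key=miss.get)       # most comfortably within goal
    if miss[worst] > 1.0 and worst != best and allocations[best] > step:
        allocations[best] -= step
        allocations[worst] += step
    return allocations
```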
10.
Multimedia systems must be able to support a certain quality of service (QoS) to satisfy the stringent real-time performance
requirements of their applications. HeiRAT, the Heidelberg Resource Administration Technique, is a comprehensive QoS management
system that was designed and implemented in connection with a distributed multimedia platform for networked PCs and workstations.
HeiRAT includes techniques for QoS negotiation, QoS calculation, resource reservation, and resource scheduling for local and
network resources.
11.
Update propagation strategies to improve freshness in lazy master replicated databases
Esther Pacitti Eric Simon 《The VLDB Journal The International Journal on Very Large Data Bases》2000,8(3-4):305-318
Many distributed database applications need to replicate data to improve data availability and query response time. The two-phase
commit protocol guarantees mutual consistency of replicated data but does not provide good performance. Lazy replication has
been used as an alternative solution in several types of applications such as on-line financial transactions and telecommunication
systems. In this case, mutual consistency is relaxed and the concept of freshness is used to measure the deviation between
replica copies. In this paper, we propose two update propagation strategies that improve freshness. Both of them use immediate
propagation: updates to a primary copy are propagated towards a slave node as soon as they are detected at the master node
without waiting for the commitment of the update transaction. Our performance study shows that our strategies can improve
data freshness by up to five times compared with the deferred approach.
Received April 24, 1998 / Revised June 7, 1999
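The contrast between deferred and immediate propagation can be sketched with a toy master/slave pair; class names are ours, and in-process method calls stand in for the network:

```python
# Toy contrast of the two propagation modes: immediate propagation ships
# each update as soon as it is detected, deferred waits for commit.

class Slave:
    def __init__(self):
        self.log = []
    def receive(self, update):
        self.log.append(update)

class Master:
    def __init__(self, slave):
        self.slave = slave
        self.pending = []

    def write(self, update, immediate):
        if immediate:
            self.slave.receive(update)   # propagate before commit
        else:
            self.pending.append(update)  # deferred: hold until commit

    def commit(self):
        for u in self.pending:
            self.slave.receive(u)
        self.pending = []
```

The freshness gain comes precisely from the pre-commit shipping: under immediate propagation the slave already holds the update while the master's transaction is still running.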
12.
Summary. This paper formulates necessary and sufficient conditions on the information required for enforcing causal ordering in a
distributed system with asynchronous communication. The paper then presents an algorithm for enforcing causal message ordering.
The algorithm allows a process to multicast to arbitrary and dynamically changing process groups. We show that the algorithm
is optimal in the space complexity of the overhead of control information in both messages and message logs. The algorithm
achieves optimality by transmitting the bare minimum causal dependency information specified by the necessity conditions,
and using an encoding scheme to represent and transmit this information. We show that, in general, the space complexity of
causal message ordering in an asynchronous system is , where is the number of nodes in the system. Although the upper bound on space complexity of the overhead of control information
in the algorithm is , the overhead is likely to be much smaller on the average, and is always the least possible.
Received: January 1996 / Accepted: February 1998
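The delivery condition such algorithms enforce can be illustrated with standard vector clocks. Note that the paper's contribution is an encoding that transmits far less than a full vector, so the sketch below shows the property being enforced, not the paper's optimal algorithm:

```python
# Standard vector-clock causal delivery sketch: a message is deliverable
# iff it is the next message from its sender and every message it
# causally depends on has already been delivered locally.

class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.vc = [0] * n
        self.delivered = []

    def send(self):
        self.vc[self.pid] += 1
        return (self.pid, list(self.vc))   # (sender, timestamp)

    def can_deliver(self, sender, vc):
        return (vc[sender] == self.vc[sender] + 1 and
                all(vc[k] <= self.vc[k]
                    for k in range(len(vc)) if k != sender))

    def deliver(self, sender, vc):
        for k in range(len(vc)):
            self.vc[k] = max(self.vc[k], vc[k])
        self.delivered.append((sender, tuple(vc)))
```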
13.
The throughput of a transaction processing system can be improved by decomposing transactions into steps and allowing the steps of concurrent transactions to be interleaved. In some cases all interleavings are assumed to be acceptable; in others certain interleavings are forbidden. In this paper we describe a new concurrency control that guarantees that only acceptable interleavings occur. We describe the implementation of the new control within the CA-Open Ingres™ database management system and experiments that were run to evaluate its effectiveness using the TPC-C™ Benchmark Transactions. The experiments demonstrate up to 80% improvement when lock contention is high, when long running transactions are a part of the transaction suite, and/or when sufficient system resources are present to support the additional concurrency that the new control allows. Finally, we describe a new correctness criterion that is weaker than serializability and yet guarantees that the specifications of all transactions are met. The criterion can be used to determine the acceptable interleavings for a particular application. The specification of these interleavings can serve as input to the new control.
14.
Semantic heterogeneity resolution in federated databases by metadata implantation and stepwise evolution
Goksel Aslan Dennis McLeod 《The VLDB Journal The International Journal on Very Large Data Bases》1999,8(2):120-132
A key aspect of interoperation among data-intensive systems involves the mediation of metadata and ontologies across database
boundaries. One way to achieve such mediation between a local database and a remote database is to fold remote metadata into
the local metadata, thereby creating a common platform through which information sharing and exchange becomes possible. Schema
implantation and semantic evolution, our approach to the metadata folding problem, is a partial database integration scheme
in which remote and local (meta)data are integrated in a stepwise manner over time. We introduce metadata implantation and
stepwise evolution techniques to interrelate database elements in different databases, and to resolve conflicts on the structure
and semantics of database elements (classes, attributes, and individual instances). We employ a semantically rich canonical
data model, and an incremental integration and semantic heterogeneity resolution scheme. In our approach, relationships between
local and remote information units are determined whenever enough knowledge about their semantics is acquired. The metadata
folding problem is solved by implanting remote database elements into the local database, a process that imports remote database
elements into the local database environment, hypothesizes the relevance of local and remote classes, and customizes the organization
of remote metadata. We have implemented a prototype system and demonstrated its use in an experimental neuroscience environment.
Received June 19, 1998 / Accepted April 20, 1999
15.
Ajay D. Kshemkalyani 《Distributed Computing》1998,11(4):169-189
Summary. In a distributed system, high-level actions can be modeled by nonatomic events. This paper proposes causality relations between
distributed nonatomic events and provides efficient testing conditions for the relations. The relations provide a fine-grained
granularity to specify causality relations between distributed nonatomic events. The set of relations between nonatomic events
is complete in first-order predicate logic, using only the causality relation between atomic events. For a pair of distributed
nonatomic events X and Y, the evaluation of any of the causality relations requires integer comparisons, where and , respectively, are the number of nodes on which the two nonatomic events X and Y occur. In this paper, we show that this polynomial complexity of evaluation can be simplified to a linear complexity using
properties of partial orders. Specifically, we show that most relations can be evaluated in integer comparisons, some in integer comparisons, and the others in integer comparisons. During the derivation of the efficient testing conditions, we also define special system execution prefixes
associated with distributed nonatomic events and examine their knowledge-theoretic significance.
Received: July 1997 / Accepted: May 1998
16.
Synchronous Byzantine quorum systems
Rida A. Bazzi 《Distributed Computing》2000,13(1):45-52
Summary. Quorum systems have been used to implement many coordination problems in distributed systems such as mutual exclusion, data
replication, distributed consensus, and commit protocols. Malkhi and Reiter recently proposed quorum systems that can tolerate
Byzantine failures; they called these systems Byzantine quorum systems and gave some examples of such quorum systems. In this
paper, we propose a new definition of Byzantine quorums that is appropriate for synchronous systems. We show how these quorums
can be used for data replication and propose a general construction of synchronous Byzantine quorums using standard quorum
systems. We prove tight lower bounds on the load of synchronous Byzantine quorums for various patterns of failures and we
present synchronous Byzantine quorums that have optimal loads that match the lower bounds for two failure patterns.
Received: June 1998 / Accepted: August 1999
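For background flavor, the classical (asynchronous) masking-quorum condition of Malkhi and Reiter — any two quorums intersect in at least 2f+1 servers, and some quorum avoids any f faulty servers — can be checked directly. The synchronous definition this paper proposes differs, so treat the sketch below only as an illustration of the standard notion it builds on:

```python
from itertools import combinations

def is_masking_quorum_system(quorums, universe, f):
    """Classical Malkhi-Reiter masking-quorum check (background only;
    the paper's synchronous definition is different). Requires that any
    two quorums intersect in >= 2f+1 servers, and that for every set of
    f faulty servers some quorum avoids them all."""
    for q1, q2 in combinations(quorums, 2):
        if len(set(q1) & set(q2)) < 2 * f + 1:
            return False
    for bad in combinations(universe, f):
        if not any(set(q).isdisjoint(bad) for q in quorums):
            return False
    return True
```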
17.
Scene change detection techniques for video database systems
Haitao Jiang Abdelsalam Helal Ahmed K. Elmagarmid Anupam Joshi 《Multimedia Systems》1998,6(3):186-195
Scene change detection (SCD) is one of several fundamental problems in the design of a video database management system (VDBMS).
It is the first step towards the automatic segmentation, annotation, and indexing of video data. SCD is also used in other
aspects of VDBMS, e.g., hierarchical representation and efficient browsing of the video data. In this paper, we provide a
taxonomy that classifies existing SCD algorithms into three categories: full-video-image-based, compressed-video-based, and
model-based algorithms. The capabilities and limitations of the SCD algorithms are discussed in detail. The paper also proposes
a set of criteria for measuring and comparing the performance of various SCD algorithms. We conclude by discussing some important
research directions.
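A minimal example of the first category (full-video-image-based detection) is histogram differencing between consecutive frames. The sketch below is our own illustration, using flat lists of gray values as frames and an assumed cut threshold:

```python
def histogram(frame, bins=4, max_val=256):
    """Gray-level histogram of a frame given as a flat list of pixels."""
    h = [0] * bins
    for p in frame:
        h[p * bins // max_val] += 1
    return h

def scene_changes(frames, threshold):
    """Full-video-image-based SCD sketch: flag a cut at frame i when the
    absolute histogram difference from frame i-1 exceeds the threshold."""
    cuts = []
    for i in range(1, len(frames)):
        d = sum(abs(a - b) for a, b in
                zip(histogram(frames[i - 1]), histogram(frames[i])))
        if d > threshold:
            cuts.append(i)
    return cuts
```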
18.
Semantic integrity support in SQL:1999 and commercial (object-)relational database management systems
Can Türker Michael Gertz 《The VLDB Journal The International Journal on Very Large Data Bases》2001,10(4):241-269
The correctness of the data managed by database systems is vital to any application that utilizes data for business, research,
and decision-making purposes. To guard databases against erroneous data not reflecting real-world data or business rules,
semantic integrity constraints can be specified during database design. Current commercial database management systems provide
various means to implement mechanisms to enforce semantic integrity constraints at database run-time.
In this paper, we give an overview of the semantic integrity support in the most recent SQL-standard SQL:1999, and we show
to what extent the different concepts and language constructs proposed in this standard can be found in major commercial (object-)relational
database management systems. In addition, we discuss general design guidelines that point out how the semantic integrity features
provided by these systems should be utilized in order to implement an effective integrity enforcing subsystem for a database.
Received: 14 August 2000 / Accepted: 9 March 2001 / Published online: 7 June 2001
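Declarative constraints of the kind surveyed can be exercised from Python with SQLite, which implements a subset of the SQL-standard integrity features (NOT NULL, UNIQUE, PRIMARY KEY, CHECK, FOREIGN KEY). Table and column names below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")      # enable referential checks
conn.execute("""
    CREATE TABLE dept (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE
    )""")
conn.execute("""
    CREATE TABLE emp (
        id      INTEGER PRIMARY KEY,
        salary  INTEGER CHECK (salary > 0),   -- attribute-level rule
        dept_id INTEGER REFERENCES dept(id)   -- referential integrity
    )""")
conn.execute("INSERT INTO dept VALUES (1, 'db')")
conn.execute("INSERT INTO emp VALUES (1, 50000, 1)")

def violates(sql):
    """Run a statement; report whether the DBMS rejects it at run-time."""
    try:
        conn.execute(sql)
        return False
    except sqlite3.IntegrityError:
        return True
```

This is exactly the division of labor the paper discusses: the rules are declared once at design time, and the database system enforces them on every update.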
19.
E. Panagos A. Biliris 《The VLDB Journal The International Journal on Very Large Data Bases》1997,6(3):209-223
Client-server object-oriented database management systems differ significantly from traditional centralized systems in terms
of their architecture and the applications they target. In this paper, we present the client-server architecture of the EOS
storage manager and we describe the concurrency control and recovery mechanisms it employs. EOS offers a semi-optimistic locking
scheme based on the multi-granularity two-version two-phase locking protocol. Under this scheme, multiple concurrent readers
are allowed to access a data item while it is being updated by a single writer. Recovery is based on write-ahead redo-only
logging. Log records are generated at the clients and they are shipped to the server during normal execution and at transaction
commit. Transaction rollback is fast because there are no updates that have to be undone, and recovery from system crashes
requires only one scan of the log for installing the changes made by transactions that committed before the crash. We also
present a preliminary performance evaluation of the implementation of the above mechanisms.
Edited by R. King. Received July 1993 / Accepted May 1996
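The semi-optimistic scheme's key behavior — readers continue to see the last committed version while a single writer prepares a new one — can be sketched as a toy model. The real protocol's commit step, where the writer upgrades its lock and waits out current readers, is elided here:

```python
# Toy sketch of a two-version data item: one committed version visible
# to readers, one uncommitted version owned by at most one writer.

class TwoVersionItem:
    def __init__(self, value):
        self.committed = value
        self.uncommitted = None
        self.writer = None

    def read(self, txn):
        return self.committed      # readers never block on the writer

    def write(self, txn, value):
        if self.writer not in (None, txn):
            return False           # only a single writer at a time
        self.writer = txn
        self.uncommitted = value
        return True

    def commit(self, txn):
        # the full protocol's lock upgrade / reader drain happens here
        if self.writer is txn:
            self.committed = self.uncommitted
            self.uncommitted = None
            self.writer = None
```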