Similar Literature
20 similar documents found.
1.
Concurrency control is the activity of synchronizing operations issued by concurrently executing transactions on a shared database. The aim of this control is to provide an execution that has the same effect as a serial (non-interleaved) one. The optimistic concurrency control technique allows transactions to execute without synchronization, relying on commit-time validation to ensure serializability. The effectiveness of optimistic techniques depends on the conflict rate of transactions. Because different systems have different conflict patterns, and these patterns may also change over time, applying the optimistic scheme uniformly to the entire system can degrade performance. In this paper, a novel algorithm is proposed that dynamically selects the optimistic or pessimistic approach based on the conflict rate. The proposed algorithm uses an adaptive resonance theory–based neural network to decide whether to grant a lock and to detect the winner transaction. In addition, the parameters of this neural network are optimized by a modified gravitational search algorithm. Moreover, in real operational environments the writeset (WS) and readset (RS) are known before execution for only a fraction of the transactions, so the proposed algorithm is designed to work with optional knowledge of transactions' WS and RS. Experimental results show that the proposed hybrid concurrency control algorithm yields more than a 35 % reduction in the number of aborts at high transaction rates compared to the strict two-phase locking algorithm used in many commercial database systems. The improvement is 13 % compared to a pure pessimistic approach and more than 31 % compared to a pure optimistic approach.
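The abstract's core mechanism can be illustrated with a minimal sketch: a selector that tracks the recent conflict rate and switches between optimistic and pessimistic concurrency control. A fixed threshold stands in for the paper's ART neural network and gravitational-search tuning; all names and numeric values are illustrative.

```python
# Minimal sketch of a hybrid concurrency control selector, assuming a simple
# conflict-rate threshold in place of the paper's ART neural network and
# gravitational-search tuning. Names and the 0.3 threshold are illustrative.

class HybridCCSelector:
    def __init__(self, threshold=0.3, window=100):
        self.threshold = threshold   # conflict rate above which we turn pessimistic
        self.window = window         # how many recent transactions to consider
        self.outcomes = []           # True = conflicted, False = conflict-free

    def record(self, conflicted: bool) -> None:
        """Record the outcome of one finished transaction."""
        self.outcomes.append(conflicted)
        if len(self.outcomes) > self.window:
            self.outcomes.pop(0)

    def conflict_rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return sum(self.outcomes) / len(self.outcomes)

    def choose_strategy(self) -> str:
        """Pick locking (pessimistic) when conflicts are frequent, else validation."""
        return "pessimistic" if self.conflict_rate() > self.threshold else "optimistic"


selector = HybridCCSelector()
for conflicted in [False, False, True, True, True]:
    selector.record(conflicted)
print(selector.conflict_rate(), selector.choose_strategy())
```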

2.
In a real-time active database, transactions not only carry timing constraints, but their execution may also trigger the execution of other transactions. Traditional concurrency control protocols are not suited to real-time active database systems. This paper studies the transaction execution modes of real-time active databases and proposes a validation-based concurrency control protocol. The protocol uses a dynamic serialization-order adjustment strategy to avoid unnecessary transaction restarts. Simulation experiments compare it with the HP2PL and OCC-TI-WAIT-50 protocols. The results show that the protocol effectively reduces the transaction deadline miss rate and restart rate, outperforming both HP2PL and OCC-TI-WAIT-50.
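As a rough illustration of validation with dynamic serialization-order adjustment (the general idea the abstract describes, not the paper's exact protocol), a validating transaction can try to place conflicting active transactions before or after itself and restart them only when neither order is possible; the sketch below uses illustrative names and exclusive read/write set comparisons only.

```python
# Minimal sketch of validation-phase conflict checking with dynamic adjustment
# of the serialization order. Each active transaction is a dict with 'rs' and
# 'ws' sets plus an 'order' field recording its position relative to the
# validating transaction. Names and data structures are illustrative.

def validate(validating_rs, validating_ws, active_transactions):
    """Return the list of active transactions that must be restarted."""
    to_restart = []
    for t in active_transactions:
        # write-read conflict: the active transaction read what we are writing,
        # so it must be serialized before us ...
        must_precede = bool(validating_ws & t["rs"])
        # read-write (or write-write) conflict: it must be serialized after us.
        must_follow = bool(validating_rs & t["ws"]) or bool(validating_ws & t["ws"])
        if must_precede and must_follow:
            to_restart.append(t)            # no consistent order exists
        elif must_precede:
            t["order"] = "before"           # adjust the order instead of restarting
        elif must_follow:
            t["order"] = "after"
    return to_restart


active = [{"tid": "T2", "rs": {"x"}, "ws": set(), "order": None}]
print(validate({"y"}, {"x"}, active), active[0]["order"])   # [] before
```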

3.
范璧健  庄毅 《计算机科学》2016,43(11):280-283, 290
Concurrency control algorithms guarantee the correctness and consistency of concurrently executing database transactions. To improve the execution efficiency of concurrent transactions, an adaptive concurrency control algorithm based on conflict-rate prediction (ACC-PRC) is proposed. The algorithm divides concurrency control into two phases: information collection and strategy selection. In the information-collection phase, a prior-transaction queue guarantees serializable execution, and a circular conflict queue records the system's transaction-execution state. In the strategy-selection phase, an improved weighted moving average is applied over the circular conflict queue to predict the conflict rate of the next period, and a two-sided threshold decides the concurrency strategy for that period. The proposed algorithm maintains good execution efficiency at high transaction arrival rates while detecting changes in the conflict rate accurately and promptly. Comparative experiments show that the overall performance of ACC-PRC is better than that of the HCC and ADCC algorithms.
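A minimal sketch of the prediction step described here, assuming a plain linearly-weighted moving average over a bounded queue of recent per-period conflict rates and a two-sided threshold; the weights and threshold values are illustrative, not the paper's tuned parameters.

```python
# Minimal sketch of conflict-rate prediction with a weighted moving average over
# a circular queue, plus a two-sided threshold for choosing the next strategy.
# Weights and thresholds are illustrative.

from collections import deque

class ConflictRatePredictor:
    def __init__(self, size=5, low=0.1, high=0.4):
        self.rates = deque(maxlen=size)   # circular queue of recent conflict rates
        self.low, self.high = low, high   # two-sided threshold

    def observe(self, rate: float) -> None:
        self.rates.append(rate)

    def predict(self) -> float:
        """Weighted moving average: more recent periods get larger weights."""
        if not self.rates:
            return 0.0
        weights = range(1, len(self.rates) + 1)
        return sum(w * r for w, r in zip(weights, self.rates)) / sum(weights)

    def next_strategy(self, current: str) -> str:
        p = self.predict()
        if p > self.high:
            return "pessimistic"
        if p < self.low:
            return "optimistic"
        return current                     # between thresholds: keep current strategy


pred = ConflictRatePredictor()
for r in [0.05, 0.1, 0.3, 0.5]:
    pred.observe(r)
print(round(pred.predict(), 3), pred.next_strategy("optimistic"))
```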

4.
Active real-time databases combine timing constraints with active mechanisms, which makes concurrency control of system transactions more complex. The introduction of active rules lets a transaction trigger new transactions under several execution coupling modes; traditional real-time concurrency control strategies cannot effectively schedule transactions with such complex execution patterns, while concurrency control mechanisms designed for active databases do not consider transactions' real-time requirements. By analyzing the real-time requirements of the different coupling modes and the conflict relations between transactions, a new optimistic concurrency control method for active real-time databases is proposed: it evaluates the cascade depth of different transactions and dynamically adjusts the serialization order of conflicting transactions using their execution timing information. Theoretical analysis and experiments show that it reduces the number of unnecessary transaction restarts while guaranteeing serializability, better satisfying the system's real-time requirements.

5.
This paper presents a comparative study of concurrency control algorithms for distributed databases on computer clusters, which emphasize high-availability and high-performance requirements. For this purpose, we have analyzed concurrency control algorithms used in commercial DBMSs, such as the pessimistic locking algorithm, which verifies transaction conflicts early in the execution phase, and the optimistic algorithm, which checks for conflicts after the execution phase. A new algorithm is proposed and implemented in a simulation program. The three algorithms were tested using different configurations. Simulation results showed that the locking algorithm performed better than the optimistic method in the presence of conflicts between transactions, while the optimistic algorithm provided better results in the absence of conflicts. Furthermore, in a distributed database with a certain probability of conflicts, the locking algorithm can be used to guarantee strong consistency and an acceptable level of performance. However, if this probability is negligible, the system performance can be improved by using the optimistic algorithm. The proposed algorithm offers improved performance in numerous cases. As a result, it can be used in a distributed database to guarantee a satisfactory level of performance in the presence of conflicts.

6.
A method for concurrency control in distributed database management systems that increases the level of concurrent execution of transactions, called ordering by serialization numbers (OSN), is proposed. The OSN method works in the certifier model and uses time-interval techniques in conjunction with short-term locks to provide serializability and prevent deadlocks. The scheduler is distributed, and the standard transaction execution policy is assumed, that is, the read and write operations are issued continuously during transaction execution. However, the write operations are copied into the database only when the transaction commits. The amount of concurrency provided by the OSN method is demonstrated by log classification. It is shown that the OSN method provides more concurrency than the basic timestamp ordering and two-phase locking methods and successfully handles some logs that cannot be handled by any of the past methods. The complexity analysis of the algorithm indicates that the method works in a reasonable amount of time.
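The timestamp-interval idea that OSN-style methods build on can be sketched as follows: each transaction keeps an interval of admissible serialization numbers that shrinks on every conflict, and an empty interval forces a restart. This omits the short-term locks, the certifier model, and distribution; names are illustrative.

```python
# Minimal sketch of the timestamp-interval technique: shrink a transaction's
# admissible serialization interval on each conflict; an empty interval means
# no consistent serialization position remains and the transaction restarts.

import math

class IntervalTxn:
    def __init__(self, tid):
        self.tid = tid
        self.lb, self.ub = 0.0, math.inf   # admissible serialization interval

    def serialize_after(self, n: float) -> bool:
        """Constrain this transaction to be serialized after number n."""
        self.lb = max(self.lb, n)
        return self.lb < self.ub           # False means empty interval -> restart

    def serialize_before(self, n: float) -> bool:
        """Constrain this transaction to be serialized before number n."""
        self.ub = min(self.ub, n)
        return self.lb < self.ub


t = IntervalTxn("T1")
print(t.serialize_after(10))    # True: interval is now (10, inf)
print(t.serialize_before(20))   # True: interval is now (10, 20)
print(t.serialize_before(5))    # False: interval became empty, T1 must restart
```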

7.
A heterogeneous distributed database environment integrates a set of autonomous database systems to provide global database functions. A flexible transaction approach has been proposed for heterogeneous distributed database environments. In such an environment, flexible transactions can increase the failure resilience of global transactions by allowing alternate (but in some sense equivalent) executions to be attempted when a local database system fails or some subtransactions of the global transaction abort. We study the impact of compensation, retry, and switching to alternative executions on global concurrency control for the execution of flexible transactions. We propose a new concurrency control criterion for the execution of flexible and local transactions, termed F-serializability, in error-prone heterogeneous distributed database environments. We then present a scheduling protocol that ensures F-serializability on global schedules. We also demonstrate that this scheduler avoids unnecessary aborts and compensation.

8.
J. Xu 《Acta Informatica》1992,29(2):121-160
This paper presents a new model for studying the concurrency vs. computation time tradeoffs involved in on-line multiversion database concurrency control. The basic problem studied in our model is the following. Given: a current database system state, which includes information such as which transaction previously read a version from which other transaction, which transaction has written which versions into the database, and the ordering of versions previously written; and a set of read and write requests of requesting transactions. Question: Does there exist a new database system state in which the requesting transactions can be immediately put into execution (their read and write requests satisfied, or, in the case of predeclared writeset transactions, write requests guaranteed to be satisfied) while preserving consistency under a given set of additional constraints? (The amount of concurrency achieved is defined by the set of additional constraints.) In this paper we derive “limits” of performance achievable by polynomial time concurrency control algorithms. Each limit is characterized by a minimal set of constraints that allow the on-line scheduling problem to be solved in polynomial time. If any one constraint in that minimal set is omitted, although it could increase the amount of concurrency, it would also have the dramatic negative effect of making the scheduling problem NP-complete; whereas if we do not omit any constraint in the minimal set, then the scheduling problem can be solved in polynomial time. With each of these limits, one can construct an efficient scheduling algorithm that achieves an optimal level of concurrency in polynomial computation time according to the constraints defined in the minimal set.

9.
Many applications need to read an entire database in a consistent way. This global-reading of an entire database formulated as a global-read transaction (GRT) is not a trivial issue since it will cause a high degree of interference to other concurrent transactions. Conventional concurrency control protocols are obviously inadequate in handling the long-lived GRT. Previous studies proposed additional tests, namely, the color test and the shade test, to handle conflicts between the GRT and update transactions. However, we discovered that both algorithms can bring about nonserializable schedules of transactions. We propose an enhanced algorithm directly built on the two algorithms to guarantee the serializability of transactions.

10.
Applying semantic knowledge to real-time update of access control policies
Real-time update of access control policies, that is, updating policies while they are in effect and enforcing the changes immediately, is necessary for many security-critical applications. In this paper, we consider real-time update of access control policies in a database system. Updating policies while they are in effect can lead to potential security problems, such as access to database objects by unauthorized users. In this paper, we propose several algorithms that not only prevent such security breaches but also ensure the correctness of execution. The algorithms differ from each other in the degree of concurrency provided and the semantic knowledge used. Of the algorithms presented, the most concurrency is achieved when transactions are decomposed into atomic steps. Once transactions are decomposed, the atomicity, consistency, and isolation properties no longer hold. Since the traditional transaction processing model can no longer be used to ensure the correctness of the execution, we use an alternate semantic-based transaction processing model. To ensure correct behavior, our model requires an application to satisfy a set of necessary properties, namely, semantic atomicity, consistent execution, sensitive transaction isolation, and policy compliance. We show how one can verify an application statically to check for the existence of these properties.

11.
The concurrency control algorithm is a key approach for a database system to guarantee the correctness and efficiency of transaction execution. Thus, substantial effort has been devoted to proposing new concurrency control algorithms in both the database industry and academia. In this paper, we take the lead in summarizing the fundamental ideas of concurrency control algorithms as "ordering-and-verifying". We then redescribe and sort out the existing concurrency control algorithms following the ordering-and-verifying paradigm. On the basis of extensive comparative experiments on an open-source main-memory distributed transaction testbed called 3TS, we systematically investigate the advantages and disadvantages of the mainstream concurrency control algorithms and finally summarize the preferable application scenario for each algorithm to provide valuable references for follow-up research on concurrency control algorithms used in main-memory databases.
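A toy rendering of the ordering-and-verifying paradigm: first assign a tentative commit order, then verify that the order is consistent with the operations actually performed. The verification rule below is a deliberately simplified OCC-style check, meant only to illustrate the paradigm, not any specific algorithm from the survey.

```python
# Minimal sketch of "ordering-and-verifying": an ordering step that assigns a
# tentative commit position, and a verifying step that accepts the order only
# if no earlier committed transaction wrote something this transaction read.

committed = []   # history of (order_number, read_set, write_set)
next_order = 0

def order(read_set, write_set):
    """Ordering step: assign the transaction a tentative position."""
    global next_order
    next_order += 1
    return next_order, set(read_set), set(write_set)

def verify(txn):
    """Verifying step: reject the tentative order if it contradicts the reads."""
    n, rs, ws = txn
    for m, _, committed_ws in committed:
        if m < n and committed_ws & rs:
            return False                    # order inconsistent with reads -> abort
    committed.append(txn)
    return True


t1 = order({"x"}, {"y"})
t2 = order({"y"}, {"x"})
print(verify(t1), verify(t2))               # t2 fails: t1 wrote y, which t2 read
```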

12.
A Mobile Ad-hoc Network (MANET) is a collection of mobile, wireless and battery-powered nodes without any fixed infrastructure. Therefore, it fits well in mission-critical applications such as disaster rescue and military operations. However, when a node runs out of energy, communication may fail and transactions may be aborted if they are time-critical and miss their deadlines. In order to provide timely and correct results for multiple concurrent transactions, energy-efficient database concurrency control (CC) techniques become critical for database systems built for MANET. Due to the characteristics of MANET databases, existing CC algorithms cannot work effectively. In this paper, an energy-efficient CC algorithm is developed for mission-critical MANET databases in a clustered network architecture where nodes are divided into clusters, each of which has a cluster head, responsible for the processing of all nodes in the cluster. The cluster structure is constructed using a novel weighted clustering algorithm, which uses node mobility, remaining energy and workload to group nodes into clusters and select cluster heads. In our CC algorithm, we elect cluster heads to work as coordinating servers to conserve energy and balance energy consumption among servers, and propose an optimistic CC algorithm to offer high concurrency and avoid wasting limited system resources. Besides correctness proof and theoretical analysis, comprehensive simulation experiments were conducted, and simulation results show the superiority of our CC algorithm over existing techniques in terms of transaction abort rate, total energy consumption by all servers, and degree of balancing energy consumption among servers.
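The weighted cluster-head election described above can be sketched as a simple scoring function over mobility, remaining energy, and workload; the weights and the linear form of the score are assumptions for illustration, not the paper's formula.

```python
# Minimal sketch of weighted cluster-head election: score each node from its
# remaining energy (good), mobility (bad), and workload (bad), then pick the
# best-scoring node in the cluster. Weights w1..w3 are illustrative.

def head_score(node, w1=0.4, w2=0.4, w3=0.2):
    """Higher remaining energy is good; higher mobility and workload are bad."""
    return w1 * node["energy"] - w2 * node["mobility"] - w3 * node["workload"]

def elect_cluster_head(cluster):
    """Return the node with the best weighted score in the cluster."""
    return max(cluster, key=head_score)


cluster = [
    {"id": "n1", "energy": 0.9, "mobility": 0.2, "workload": 0.5},
    {"id": "n2", "energy": 0.6, "mobility": 0.1, "workload": 0.2},
    {"id": "n3", "energy": 0.8, "mobility": 0.7, "workload": 0.3},
]
print(elect_cluster_head(cluster)["id"])
```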

13.
In active real-time databases, triggering and triggered transactions execute under several coupling modes, and traditional concurrency control cannot effectively schedule transactions with such complex coupling. Based on an analysis of the real-time requirements of the different coupling modes and the conflict relations between transactions, a new active real-time concurrency control algorithm (ARTCC-CM) is proposed. It adopts a timestamp-interval strategy and, during the validation phase, examines the triggering degree and execution time of conflicting transactions to dynamically adjust the serialization order. Theoretical analysis and experiments show that it reduces unnecessary transaction restarts while guaranteeing serializability and improves system performance.

14.
Serializable concurrency control is appropriate for traditional applications. In real-time databases, however, in order to satisfy transactions' timing constraints (typically deadlines), and considering that local database inconsistencies can be repaired by the next data sampling, the criterion of quasi-consistent serializability has been proposed. Based on this criterion, this paper proposes a new optimistic concurrency control protocol that takes data similarity and transaction characteristics into account, increases the degree of concurrency of transaction execution, and helps real-time transactions meet their timing constraints.

15.
Two-phase locking (2PL) is the concurrency control mechanism that is used in most commercial database systems. In 2PL, for a transaction to access a data item, it has to hold the appropriate lock (read or write) on the data item by issuing a lock request. While the way transactions set their lock requests and the way the requests are granted would certainly affect a system's performance, such aspects have not received much attention in the literature. In this paper, a general transaction-processing model is proposed. In this model, a transaction is comprised of a number of stages, and in each stage the transaction can request to lock one or more data items. Methods for granting transaction requests and scheduling policies for granting blocked transactions are also proposed. A comprehensive simulation model is developed from which the performance of 2PL with our proposals is evaluated. Results indicate that performance models in which transactions request locks on an item-by-item basis and use first-come-first-served (FCFS) scheduling in granting blocked transactions underestimate the performance of 2PL. The performance of 2PL can be greatly improved if locks are requested in stages as dictated by the application. A scheduling policy that uses global information and/or schedules blocked transactions dynamically shows a better performance than the default FCFS.
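A minimal sketch of stage-based lock requests, assuming exclusive locks only: a transaction asks for all locks of its current stage at once and proceeds only if the whole stage can be granted. Deadlock handling, lock modes, and the paper's scheduling policies are omitted; names are illustrative.

```python
# Minimal sketch of staged lock requests under two-phase locking: a whole stage
# of locks is granted atomically or not at all, and everything is released at
# commit/abort. Exclusive locks only; all names are illustrative.

class StagedLockManager:
    def __init__(self):
        self.holders = {}      # data item -> transaction id currently holding it

    def request_stage(self, tid, items):
        """Grant the whole stage atomically, or grant nothing (transaction blocks)."""
        if any(self.holders.get(i) not in (None, tid) for i in items):
            return False                       # some item is held by another transaction
        for i in items:
            self.holders[i] = tid
        return True

    def release_all(self, tid):
        """Two-phase locking: release everything at commit/abort."""
        self.holders = {i: h for i, h in self.holders.items() if h != tid}


lm = StagedLockManager()
print(lm.request_stage("T1", ["a", "b"]))   # True: stage granted
print(lm.request_stage("T2", ["b", "c"]))   # False: b is held by T1, so T2 blocks
lm.release_all("T1")
print(lm.request_stage("T2", ["b", "c"]))   # True after T1 releases its locks
```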

16.
This paper presents a multiversion two-phase locking concurrency control protocol and an effective recovery mechanism for real-time Client/Server database systems. The protocol distinguishes read-only transactions from update transactions. Read-only transactions follow a multiversion timestamp ordering protocol for their read operations, while update transactions execute strict two-phase locking, holding all locks until the transaction ends. Read requests of read-only transactions never fail and never have to wait. Since read operations are more frequent than write operations in typical database systems, this property is of great practical importance. To improve the response time of read-only transactions, the protocol associates each client with a consistent database shadow, and read-only transactions are processed at the client. Update transactions are submitted to the server for execution. When a transaction Ti commits at the server, the system broadcasts information to all clients, and each client automatically constructs its consistent database shadow from the broadcast information. The consistent database shadow is also used for system recovery. Simulation comparisons with the 2V2PL and OCC-TI-WAIT-50 protocols show that this concurrency control protocol not only effectively reduces the transaction deadline miss rate and restart rate, but also improves the response time of read-only transactions and reduces the lock waiting time of high-priority transactions. Its performance is better than that of 2V2PL and OCC-TI-WAIT-50.
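A minimal sketch of the client-side consistent database shadow: the server broadcasts each committed update transaction's writes, and every client applies them to a local shadow that read-only transactions query without blocking. Version management, timestamps, and the use of the shadow for recovery are omitted; names are illustrative.

```python
# Minimal sketch of commit broadcasts feeding per-client database shadows,
# so read-only transactions can be answered locally without waiting for locks.
# All class and field names are illustrative.

class Server:
    def __init__(self):
        self.data = {}
        self.clients = []

    def commit_update_txn(self, writes):
        """Apply an update transaction's writes, then broadcast them to all clients."""
        self.data.update(writes)
        for c in self.clients:
            c.on_broadcast(dict(writes))

class Client:
    def __init__(self, server):
        self.shadow = {}             # consistent database shadow kept at the client
        server.clients.append(self)

    def on_broadcast(self, writes):
        self.shadow.update(writes)   # rebuild the shadow from commit broadcasts

    def read_only_txn(self, keys):
        """Read-only transactions are answered locally and never wait for locks."""
        return {k: self.shadow.get(k) for k in keys}


srv = Server()
cli = Client(srv)
srv.commit_update_txn({"x": 1, "y": 2})
print(cli.read_only_txn(["x", "y"]))
```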

17.
Lam  Kam-Yiu  Hung  Sheung-Lun  Son  Sang H. 《Real-Time Systems》1997,13(2):141-166
The use of Static Two Phase Locking Protocols (S2PL) for concurrency control in real-time database systems (RTDBS) has received little attention in the past. Actually, real-time S2PL (RT-S2PL) protocols do possess some desirable features making them suitable for RTDBS, especially for distributed real-time database systems (DRTDBS) in which remote locking is required and distributed deadlock is possible. In this paper, different RT-S2PL protocols are proposed. They differ in their methods of reducing the blocking time of higher priority transactions. Their performance is studied and compared with a real-time dynamic two phase locking protocol (RT-D2PL), called Hybrid Two Phase Locking (Hb2PL). The impact of different system and workload parameters, such as mean inter-arrival time of transactions, number of remote lock requests of a transaction, communication overhead for sending messages, and database size on their performance has been examined. The performance results indicate that the RT-S2PL protocols are suitable for DRTDBS in which the proportion of local locks of a transaction is small and the communication overhead for locking is high.

18.
A Distributed Real-Time Transaction Commit Protocol
In a distributed real-time database system, the only way to guarantee transaction atomicity is to design and develop a real-time atomic commit protocol. This paper first analyzes in detail the various dependency relations that arise between transactions due to data access conflicts, and on this basis proposes a real-time optimistic atomic commit protocol, the 2SC protocol. The protocol reduces transaction waiting time, increases the degree of transaction concurrency, and can be seamlessly integrated with existing concurrency control protocols while guaranteeing serializability and atomicity. Simulation experiments show that the protocol reduces the number of transactions that miss their deadlines.

19.
Real-time databases are poised to be an important component of complex embedded real-time systems. In real-time databases (as opposed to real-time systems), transactions must satisfy the ACID properties in addition to satisfying the timing constraints specified for each transaction (or task). Although several approaches have been proposed to combine real-time scheduling and database concurrency control methods, to the best of our knowledge, none of them provide a framework for taking into account the dynamic cost associated with aborts, rollbacks, and restarts of transactions. In this paper, we propose a framework in which both static and dynamic costs of transactions can be taken into account. Specifically, we present: i) a method for pre-analyzing transactions based on the notion of branch-points for data accessed up to a branch point and predicting expected data access to be incurred for completing the transaction, ii) a formulation of cost that includes static and dynamic factors for prioritizing transactions, iii) a scheduling algorithm which uses the above two, and iv) simulation of the algorithm for several operating conditions and workload. Our dynamic priority assignment policy (termed the cost conscious approach or CCA) adapts well to fluctuations in the system load without causing excessive numbers of transaction restarts. Our simulations indicate that i) CCA performs better than the EDF-HP algorithm for both soft and firm deadlines, ii) CCA is more fair than EDF-HP, iii) CCA is better than EDF-CR for soft deadline, even though CCA requires and uses less information, and iv) CCA is especially good for disk-resident data.
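As an illustration of mixing static and dynamic cost factors in priority assignment (not the paper's CCA formula), a priority value can trade off slack to the deadline against work already invested plus expected remaining work, so nearly finished transactions are not casually restarted; the weights and field names below are assumptions.

```python
# Minimal sketch of a priority that combines a static factor (time to deadline)
# with dynamic factors (work already done, expected remaining work). The linear
# form and the alpha/beta weights are illustrative only.

import heapq, time

def priority(txn, now, alpha=1.0, beta=0.5):
    """Smaller value = higher priority: urgent transactions and transactions
    with a lot of invested/remaining work are scheduled earlier."""
    slack = txn["deadline"] - now
    invested = txn["work_done"]
    remaining = txn["expected_remaining"]
    return alpha * slack - beta * (invested + remaining)


now = time.time()
txns = [
    {"id": "T1", "deadline": now + 5, "work_done": 4.0, "expected_remaining": 0.5},
    {"id": "T2", "deadline": now + 3, "work_done": 0.2, "expected_remaining": 2.0},
]
queue = [(priority(t, now), t["id"]) for t in txns]
heapq.heapify(queue)
print(heapq.heappop(queue)[1])   # transaction scheduled first under this sketch
```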

20.
Incremental recovery in main memory database systems
Recovery activities, like checkpointing and restart, in traditional database management systems are performed in a quiescent state where no transactions are active. This approach impairs the performance of online transaction processing systems, especially when a large volatile memory is used. An incremental scheme for performing recovery in main memory database systems (MMDBs), in parallel with transaction execution, is presented. A page-based incremental restart algorithm that enables the resumption of transaction processing as soon as the system is up is proposed. Pages are recovered individually and according to the demands of the post-crash transactions. A method for propagating updates from main memory to the backup database on disk is also provided. The emphasis is on decoupling the I/O activities related to the propagation to disk from the forward transaction execution in memory. The authors also construct a high-level recovery manager based on operation logging on top of the page-based algorithms. The proposed algorithms are motivated by the characteristics of large MMDBs, and exploit the technology of nonvolatile RAM.
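A minimal sketch of page-level, on-demand restart: transaction processing resumes immediately, and a page is reloaded from the disk backup and its logged updates replayed only the first time a post-crash transaction touches it. The backup and log structures here are plain dicts and purely illustrative.

```python
# Minimal sketch of incremental, page-based restart: pages are recovered lazily,
# on first access by a post-crash transaction, by reloading the backup image and
# replaying the page's logged committed updates. Structures are illustrative.

class IncrementalRestartDB:
    def __init__(self, backup_pages, redo_log):
        self.backup = backup_pages          # page id -> page image on disk
        self.redo_log = redo_log            # page id -> list of logged updates
        self.memory = {}                    # pages recovered into main memory so far

    def _recover_page(self, pid):
        page = dict(self.backup.get(pid, {}))
        for key, value in self.redo_log.get(pid, []):
            page[key] = value               # replay committed updates for this page
        self.memory[pid] = page

    def read(self, pid, key):
        if pid not in self.memory:          # recover lazily, on first access
            self._recover_page(pid)
        return self.memory[pid].get(key)


db = IncrementalRestartDB(
    backup_pages={"p1": {"x": 1}},
    redo_log={"p1": [("x", 2)]},
)
print(db.read("p1", "x"))   # 2: page p1 was recovered on demand, with redo applied
```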
