Similar Documents
20 similar documents found (search time: 15 ms)
1.
Real-time index concurrency control (Total citations: 2; self-citations: 0; citations by others: 2)
Real-time database systems are expected to rely heavily on indexes to speed up data access and thereby help more transactions meet their deadlines. Accordingly, high-performance index concurrency control (ICC) protocols are required to prevent contention for the index from becoming a bottleneck. We develop real-time variants of a representative set of classical B-tree ICC protocols and, using a detailed simulation model, compare their performance for real-time transactions with firm deadlines. We also present and evaluate a real-time ICC protocol called GUARD-link that augments the classical B-link protocol with a feedback-based admission control mechanism. Both point and range queries, as well as the undos of index-action transactions, are included in the study. The performance metrics used in evaluating the ICC protocols are the percentage of transactions that miss their deadlines and the fairness with respect to transaction type and size. Experimental results show that the performance characteristics of the real-time version of an ICC protocol can be significantly different from the performance of the same protocol in a conventional (non-real-time) database system. In particular, B-link protocols, which are reputed to provide the best overall performance in conventional database systems, perform poorly under heavy real-time loads. The new GUARD-link protocol, however, although based on the B-link approach, delivers the best performance (with respect to all performance metrics) for a variety of real-time transaction workloads, by virtue of its admission control mechanism. GUARD-link provides close to ideal fairness in most environments.
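As a rough illustration of the feedback-based admission control idea described above, the following sketch monitors the recent deadline-miss ratio and admits new index transactions only while that ratio stays below a threshold. The class name, window size, and threshold are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of a feedback-based admission controller in the spirit of
# GUARD-link; thresholds and window size are illustrative assumptions.
class AdmissionController:
    def __init__(self, miss_threshold=0.2, window=100):
        self.miss_threshold = miss_threshold  # tolerated fraction of missed deadlines
        self.window = window                  # number of recently completed transactions tracked
        self.recent = []                      # True = met deadline, False = missed

    def report(self, met_deadline: bool):
        """Feedback from a completed transaction."""
        self.recent.append(met_deadline)
        if len(self.recent) > self.window:
            self.recent.pop(0)

    def admit(self) -> bool:
        """Admit a new index transaction only while the observed miss ratio is low."""
        if not self.recent:
            return True
        miss_ratio = self.recent.count(False) / len(self.recent)
        return miss_ratio <= self.miss_threshold
```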

2.
The broadcast disk technique has often been used to disseminate frequently requested data efficiently to a large population of mobile clients over wireless channels. In broadcast disk environments, a server often broadcasts different data items with differing frequencies to reflect the skewed data access patterns of mobile clients. Previously proposed concurrency control methods for mobile transactions in wireless broadcast environments focus on mobile transactions with uniform data access patterns. These protocols perform poorly in broadcast disk environments where the data access patterns of mobile transactions are skewed. In broadcast disk environments, the length of a broadcast cycle usually becomes large to reflect the skewed data access patterns. This often causes read-only transactions to access old data items rather than the latest data items. Furthermore, updating mobile transactions are frequently aborted and restarted in the final validation stage due to update conflicts on the data items with high access frequencies. This problem increases the average response time of update mobile transactions and wastes uplink communication bandwidth. In this paper, we extend the existing FBOCC concurrency control method to efficiently handle mobile transactions with skewed data access patterns in broadcast disk environments. Our method allows read-only transactions to access more up-to-date data, and reduces the average response time of updating transactions through early aborts and restarts. Our method also reduces the amount of uplink communication bandwidth needed for the final validation of update transactions. We present an in-depth experimental analysis of our method, comparing it with existing concurrency control protocols. Our performance analysis shows that it significantly decreases the average response time and the amount of uplink bandwidth used compared with existing methods.

3.
Acta Informatica - A concurrency control mechanism (or a scheduler) is the component of a database system that safeguards the consistency of the database in the presence of interleaved accesses and...

4.
In this paper, we present a version of the linear hash structure algorithm that increases concurrency using a multi-level transaction model. We exploit the semantics of the linear hash operations at each level of transaction nesting to allow more concurrency. We implement each linear hash operation by a sequence of operations at a lower level of abstraction. Each linear hash operation at the leaf level is a combination of search and read/write operations. We consider locks at both the vertex (page) level and the key (tuple) level to further increase concurrency. As undo-based recovery is not possible with multi-level transactions, we use compensation-based undo to achieve atomicity. We have implemented our model using object-oriented technology and the multithreading paradigm. In our implementation, linear hash operations such as find, insert, delete, split, and merge are implemented as methods and correspond to multi-level transactions.
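A minimal sketch of the two-granularity locking mentioned in the abstract (page-level and key-level), with a compensation log used for logical undo instead of physical undo. The data structures and the insert routine below are our simplification, not the paper's implementation.

```python
import threading

# Illustrative two-level locking for a linear-hash insert; the page latch is held
# only while locating the slot, the key lock only for the actual write.
class LinearHashPage:
    def __init__(self):
        self.latch = threading.Lock()   # short-duration vertex (page) lock
        self.key_locks = {}             # key (tuple) level locks
        self.data = {}

def insert(page: LinearHashPage, key, value, compensation_log: list):
    with page.latch:                    # protect the page structure while finding/creating the slot
        key_lock = page.key_locks.setdefault(key, threading.Lock())
    with key_lock:                      # finer-grained lock for the write itself
        page.data[key] = value
        # record the inverse operation; on abort it is replayed (compensation-based undo)
        compensation_log.append(("delete", key))
```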

5.
包斌, 李亚岗 《计算机应用》2006, 26(1): 220-222
Building on studies of the very high concurrency of B-link trees, a novel scheme is proposed that uses the B-link tree as the database index and combines it with multiversion techniques. The scheme classifies transactions as either read-only or update transactions: read-only transactions need no locks, update transactions need only a small number of locks, and no deadlock can occur. Experiments show that, in a concurrent environment, this scheme can considerably improve database performance and transaction throughput.
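A minimal sketch of the multiversion idea from the abstract: read-only transactions never acquire locks and simply read the newest committed version no later than their start timestamp, while update transactions append new versions at commit. The version-chain layout is an assumption made for illustration.

```python
# Hedged multiversion-read sketch; the version-chain representation is assumed.
class VersionedRecord:
    def __init__(self):
        self.versions = []  # list of (commit_ts, value), ascending by commit_ts

    def read(self, snapshot_ts):
        """Lock-free read for a read-only transaction started at snapshot_ts."""
        visible = [value for ts, value in self.versions if ts <= snapshot_ts]
        return visible[-1] if visible else None

    def write(self, commit_ts, value):
        """Update transactions append a new version at commit time."""
        self.versions.append((commit_ts, value))
```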

6.
A method called optimistic method with dummy locks (ODL) is suggested for concurrency control in distributed databases. It is shown that by using long-term dummy locks, the need for information about the write sets of validated transactions is eliminated and, during the validation test, only the related sites are checked. The transactions to be aborted are immediately recognized before the validation test, reducing the cost of restarts. Usual read and write locks are used as short-term locks during the validation test. The use of short-term locks in the optimistic approach eliminates the need for a system-wide critical section and results in a distributed and parallel validation test. The performance of ODL is compared with strict two-phase locking (2PL) through simulation, and it is found that for low-conflict cases they perform almost the same, but for high-conflict cases, ODL performs better than strict 2PL.
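One way to picture the dummy-lock idea, under our own simplifying assumptions: a transaction leaves long-term dummy marks on the items it reads, so a committing writer can recognize and abort conflicting readers before their validation test. This sketch omits the distributed validation and the short-term read/write locks entirely.

```python
# Hedged sketch of the dummy-lock idea; not the full ODL protocol.
class Item:
    def __init__(self, value=None):
        self.value = value
        self.dummy_locks = set()            # ids of transactions that have read this item

class Transaction:
    def __init__(self, tid):
        self.tid = tid
        self.aborted = False

def optimistic_read(txn, item):
    item.dummy_locks.add(txn.tid)           # long-term dummy lock: never blocks anyone
    return item.value

def validated_write(txn, item, value, active_txns):
    # a committing writer marks conflicting readers for abort before their validation test
    for reader_id in item.dummy_locks - {txn.tid}:
        active_txns[reader_id].aborted = True
    item.value = value
    item.dummy_locks.clear()
```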

7.
Physical data layout is a crucial factor in the performance of queries and updates in large data warehouses. Data layout enhances and complements other performance features such as materialized views and dynamic caching of aggregated results. Prior work has identified that the multidimensional nature of large data warehouses imposes natural restrictions on the query workload. A method based on a “uniform” query class approach has been proposed for data clustering and shown to be optimal. However, we believe that realistic query workloads will exhibit data access skew. For instance, if time is a dimension in the data model, then more queries are likely to focus on the most recent time interval. The query class approach does not adequately model the possibility of multidimensional data access skew. We propose the affinity graph model for capturing workload characteristics in the presence of access skew and describe an efficient algorithm for physical data layout. Our proposed algorithm considers declustering and load balancing issues which are inherent to the multidisk data layout problem. We demonstrate the validity of this approach experimentally.

8.
Restart-oriented concurrency control (CC) methods, such as optimistic CC, outperform blocking-oriented methods, such as standard two-phase locking, in a high data contention environment, but at the cost of processing wasted on restarts. Volatile savepoints are considered in this study as a method to reduce this wasted processing and to improve response time. There is the usual tradeoff between the checkpointing overhead and the processing wasted when a transaction is restarted. Our study shows that in a system where objects are accessed and updated uniformly during the lifetime of transactions, significant improvements in performance at high data conflict levels are attainable only when the checkpointing cost is low. This implies few optimally placed checkpoints per transaction. It is observed that checkpointing may result in a significant improvement in performance when accesses to database hot-spots are deferred to the final steps of transaction execution. The parametric studies reported in this paper are facilitated by closed-form analytic solutions expressing system performance, as well as an iterative solution which takes into account hardware resource contention in addition to data contention.
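The savepoint mechanism can be pictured roughly as below: the transaction periodically copies its volatile state, and on a data conflict it rolls back only to the most recent savepoint rather than restarting from scratch. The step and conflict model is made up for illustration and assumes re-execution eventually succeeds.

```python
class ConflictError(Exception):
    """Raised by a step when a data conflict forces a (partial) rollback."""

def run_with_savepoints(steps, checkpoint_every=3):
    state = {}
    saved_state, saved_index = {}, 0
    i = 0
    while i < len(steps):
        if i % checkpoint_every == 0:
            saved_state, saved_index = dict(state), i   # volatile savepoint (kept in memory only)
        try:
            steps[i](state)                             # each step reads/updates objects
            i += 1
        except ConflictError:
            state, i = dict(saved_state), saved_index   # roll back to the savepoint, not to the start
    return state
```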

9.
This paper presents an adaptive strategy called the K-locking algorithm for concurrency control in database systems. The algorithm integrates an optimistic approach with the K-lock mechanism to control the degree of transaction interference. It is shown that the K-locking strategy is adaptive to changes in transaction parameters and outperforms both an optimistic approach and a pessimistic approach.
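Our reading of the K-lock mechanism, hedged: an object-level lock that at most K transactions may hold simultaneously, so K = 1 behaves like pessimistic locking while a large K approaches a purely optimistic scheme. The counting-semaphore realization below is an assumption for illustration.

```python
import threading

# Illustrative K-lock: at most k concurrent holders per object.
class KLock:
    def __init__(self, k: int):
        self._sem = threading.BoundedSemaphore(k)

    def acquire(self, timeout=None) -> bool:
        # returns False if the lock could not be obtained within the timeout
        return self._sem.acquire(timeout=timeout)

    def release(self):
        self._sem.release()
```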

10.
The paper deals with the foundations of concurrency theory. We show how structurally complex concurrent behaviours can be modelled by relational structures (X, ◊, ⊏), where X is a set (of event occurrences), and ◊ (interpreted as commutativity) and ⊏ (interpreted as weak causality) are binary relations on X. The paper is a continuation of the approach initiated in Gaifman and Pratt (Proceedings of LICS'87, pp 72–85, 1987), Lamport (J ACM 33:313–326, 1986), Abraham et al. (Semantics for Concurrency, Workshops in Computing, Springer, Heidelberg, pp 311–323, 1990) and Janicki and Koutny (Lect Notes Comput Sci 506:59–74, 1991), substantially developed in Janicki and Koutny (Theoretical Computer Science 112:5–52, 1993) and Janicki and Koutny (Acta Informatica 34:367–388, 1997), and recently generalized in Guo and Janicki (Lect Notes Comput Sci 2422:178–191, 2002) and Janicki (Lect Notes Comput Sci 3407:84–98, 2005). For the first time, the full model for the most general case is given.

11.
In this paper, a new method for intelligent robust control design is presented that achieves the best possible convergence rate of the system by utilizing knowledge of the range of the uncertain parameter, thus resulting in enhanced stability and performance. The proposed method is applied to a grid-connected voltage source inverter (VSI) system with uncertainties in the grid impedance. Simulation and experimental results illustrate the efficacy of the proposed scheme. Comparison with existing methods shows that the proposed scheme can provide better reference tracking, stability over a wider uncertainty range, and improved transient and steady-state performance with low implementation cost.

12.
ATP is a transport protocol designed specifically for the characteristics of ad hoc networks. When a route breaks, the ATP source immediately enters a probing state and periodically sends probe packets to the destination, which lowers throughput. EATP is a feedback-based enhancement of ATP: by placing a timer at the ATP source, it limits the number of probe packets the source sends while the route is broken and avoids the problem of multiple duplicate acknowledgments. Performance analysis and simulation results show that, compared with conventional TCP-ELFN, EATP effectively improves the throughput of ad hoc networks and reduces unnecessary energy consumption.
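A hedged sketch of the timer idea attributed to EATP above: after a route break, the source caps how many probe packets it may send per timer interval. Parameter names and default values are assumptions, not values from the paper.

```python
import time

# Illustrative probe limiter; interval and probe budget are assumed values.
class ProbeLimiter:
    def __init__(self, max_probes=3, interval=2.0):
        self.max_probes = max_probes          # probes allowed per interval
        self.interval = interval              # timer length in seconds
        self.sent = 0
        self.window_start = time.monotonic()

    def may_send_probe(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.interval:
            self.window_start, self.sent = now, 0   # timer expired: start a new window
        if self.sent < self.max_probes:
            self.sent += 1
            return True
        return False                                # suppress the probe to save bandwidth and energy
```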

13.
Cautious schedulers, which never resort to rollbacks for the purpose of concurrency control, are investigated. In particular, cautious schedulers are considered for the class WW, consisting of schedules serializable under write-write constraints, and for WRW, a superclass of WW. The cautious WW-scheduler has a number of nice properties, one of which is the existence of a polynomial-time scheduling algorithm. Since cautious WRW-scheduling is, in general, NP-complete, some restrictions are introduced which allow polynomial-time scheduling. All of these cautious schedulers are based on the assumption that transactions predeclare their read and write sets on arrival. Anomalies which occur when transactions modify their read sets or write sets during execution are discussed and countermeasures are proposed.
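The cautious principle can be illustrated with a small write-write test, under our own simplification: a write is granted only if the resulting write-order graph stays acyclic, so the scheduler may delay an operation but never needs to roll a transaction back. This is not the paper's algorithm, only a sketch of the WW constraint.

```python
from collections import defaultdict

# Hedged sketch of a cautious write-write check: delay, never roll back.
class CautiousWWScheduler:
    def __init__(self):
        self.last_writer = {}            # item -> transaction that wrote it last
        self.succ = defaultdict(set)     # write-order graph: txn -> transactions ordered after it

    def _reachable(self, start, target):
        # depth-first search over write-order edges
        stack, seen = [start], set()
        while stack:
            t = stack.pop()
            if t == target:
                return True
            if t not in seen:
                seen.add(t)
                stack.extend(self.succ[t])
        return False

    def grant_write(self, txn, item) -> bool:
        prev = self.last_writer.get(item)
        if prev is not None and prev != txn:
            # adding the edge prev -> txn would close a cycle if txn already precedes prev
            if self._reachable(txn, prev):
                return False             # delay the operation instead of ever rolling back
            self.succ[prev].add(txn)
        self.last_writer[item] = txn
        return True
```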

14.
Implementing workflow concurrency control with SQL (Total citations: 1; self-citations: 0; citations by others: 1)
When workflow technology is applied in information systems, the design of a concurrency control mechanism is a problem that frequently arises. This paper presents a workflow concurrency control mechanism based on separating the data and the tasks in a workflow. While guaranteeing the correctness of the workflow, the concepts of "data constraints" and "task constraints" are introduced to improve workflow performance and reduce the complexity of workflow design, and the mechanism is implemented with the powerful constraint controls of the SQL language in the database.

15.
We consider a finitary procedural programming language (finite data-types, no recursion) extended with parallel composition and binary semaphores. Having first shown that may-equivalence of second-order open terms is undecidable, we set out to find a framework in which decidability can be regained with minimum loss of expressivity. To that end we define an annotated type system that controls the number of concurrent threads created by terms and give a fully abstract game semantics for the notion of equivalence induced by typable terms and contexts. Finally, we show that the semantics of all typable terms, at any order and in the presence of iteration, has a regular-language representation and thus the restricted observational equivalence is decidable.

16.
Concurrency control (CC) algorithms guarantee the correctness and consistency criteria for the concurrent execution of a set of transactions in a database. A precondition seen in many CC algorithms is that the writeset (WS) and readset (RS) of transactions should be known before transaction execution. However, in real operational environments, the WS and RS are known before execution only for a fraction of the transactions. Making knowledge of the WS and RS optional is therefore one of the advantages of the CC algorithm proposed in this paper. If the WS and RS are known before transaction execution, the proposed algorithm will use them to improve concurrency and performance. Furthermore, concurrency control algorithms often use a specific static or dynamic equation when deciding whether to grant a lock or when detecting the winner transaction. The algorithm proposed in this paper uses an adaptive resonance theory (ART)-based neural network for this decision making. A parameter called the health factor (HF) is defined for transactions and is used for comparing transactions and detecting the winner in accessing database objects. HF is calculated using an ART2 neural network. Experimental results show that the proposed neural-based CC (NCC) algorithm increases the level of concurrency by decreasing the number of aborts. The performance of the proposed algorithm is compared with the strict two-phase locking (S2PL) algorithm, which has been used in most commercial database systems. Simulation results show that the performance of the proposed NCC algorithm, in terms of the number of aborts, is better than that of the S2PL algorithm at different transaction rates.
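The health-factor idea can be sketched as follows; note that the paper computes HF with an ART2 neural network, whereas the weighted sum below is only a hypothetical placeholder, and the feature names are our assumptions.

```python
def health_factor(progress: float, locks_held: int, wait_time: float) -> float:
    # placeholder stand-in for the ART2-computed HF; weights and features are assumed
    return 0.6 * progress + 0.3 * locks_held - 0.1 * wait_time

def resolve_conflict(hf_requester: float, hf_holder: float) -> str:
    """The transaction with the higher HF wins access to the object; the other is restarted."""
    if hf_requester > hf_holder:
        return "requester wins, holder restarts"
    return "holder wins, requester restarts"
```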

17.
《Computers in Industry》2007,58(8-9):823-831
In today's manufacturing environment, enterprises with geographically dispersed work groups are not uncommon. A product data management (PDM) system is therefore required for controlling the distribution and maintaining the integrity of product data throughout its entire lifecycle; the efficiency of a PDM system is greatly affected by the concurrency control method it adopts. The paper proposes a concurrency control model for PDM that also caters for version management and product architecture. The paper discusses how granularity and versioning are embedded into a lock-based concurrency control model. The concurrent accessibility of example product data is explained to illustrate how access is adjusted according to the actions taken by the users and the architecture of the corresponding entities.
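A minimal sketch of how granularity and versioning might be embedded in a lock-based model, under assumed data structures: a lock request names an (entity, version) pair, and a lock held on a parent assembly for that version also covers its child parts. This is an illustration, not the paper's model.

```python
# Hedged sketch: hierarchical, version-aware locking for PDM entities.
class PdmLockManager:
    def __init__(self, parent_of):
        self.parent_of = parent_of              # child entity -> parent entity (product structure)
        self.locks = {}                         # (entity, version) -> owning user

    def _ancestors(self, entity):
        while entity is not None:
            yield entity
            entity = self.parent_of.get(entity)

    def lock(self, owner, entity, version) -> bool:
        # deny if this version of the entity, or of any enclosing assembly, is held by someone else
        for e in self._ancestors(entity):
            holder = self.locks.get((e, version))
            if holder not in (None, owner):
                return False
        self.locks[(entity, version)] = owner
        return True
```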

18.
The authors propose a paradigm for developing, describing, and proving the correctness of concurrency control protocols for replicated databases in the presence of failures or communication restrictions. The approach used is to hierarchically divide the problem of achieving one-copy serializability by introducing the notion of a 'group', a higher level of abstraction than transactions. Instead of dealing with the overall problem, the paradigm breaks it into two simpler ones: (1) a local policy for each group that ensures a total order of all transactions in that group; and (2) a global policy that ensures a correct serialization of all groups. The paradigm is used to demonstrate the similarities between several concurrency control protocols by comparing the way they achieve correctness.

19.
An adaptive control scheme is developed for a robot manipulator to track a desired trajectory as closely as possible in spite of a wide range of manipulator motions and parameter uncertainties of links and payload.

The presented control scheme has two components: a nominal control and a variational control. The nominal control, generated from direct calculation of the manipulator dynamics along a desired trajectory, drives the manipulator to a neighbourhood of the trajectory. Then a new adaptive regulation scheme is devised based on the Lyapunov direct method, which generates the variational control that regulates the perturbation in the vicinity of the desired trajectory.

20.
In the recent past, Mir and Nikooghadam presented an enhanced biometrics-based authentication scheme using lightweight symmetric-key primitives for telemedicine networks. The scheme was introduced in response to the earlier biometrics-based authentication system proposed by Yan et al. Mir and Nikooghadam claimed that their scheme withstands all potential attacks while providing user anonymity. Our study and in-depth analysis reveal that Mir and Nikooghadam's authentication scheme is susceptible to a stolen-smart-card attack; moreover, user anonymity can still be violated despite their claim. We have utilized the random oracle model in order to perform the security analysis. The analysis confirms that the proposed scheme is robust enough to provide protection against all potential attacks, especially the stolen-smart-card attack and user anonymity violation. The analysis is further substantiated through the automated verification tool ProVerif. The analysis also shows that the proposed scheme is computationally more efficient than Mir and Nikooghadam's scheme.
