Similar Documents
20 similar documents found (search time: 718 ms)
1.
Many applications need to read an entire database in a consistent way. Reading an entire database, formulated as a global-read transaction (GRT), is not a trivial issue, since it causes a high degree of interference with other concurrent transactions. Conventional concurrency control protocols are clearly inadequate for handling the long-lived GRT. Previous studies proposed additional tests, namely the color test and the shade test, to handle conflicts between the GRT and update transactions. However, we discovered that both algorithms can produce nonserializable schedules of transactions. We propose an enhanced algorithm, built directly on the two earlier ones, that guarantees the serializability of transactions.

2.
For main-memory database systems serving update-intensive applications, the checkpointing technique must satisfy several key requirements: minimal interference with normal transaction processing, the ability to handle access skew, support for fast database recovery, and system availability during recovery. This paper proposes a transaction-consistent partitioned checkpointing technique. A tuple-based dynamic multiversion concurrency control mechanism avoids lock conflicts between read and write transactions and improves system throughput. The checkpoint operation is implemented as a read-only transaction, so under multiversion concurrency control it never blocks normal transaction processing. Because the checkpoint files are transaction-consistent, only redo log records need to be kept, and recovery requires only a single scan of the log file, which speeds up the recovery process. Priority-based partition loading and recovery allows data access requests from new transactions to be satisfied quickly during recovery, ensuring system availability. A two-level version management mechanism combined with dynamic version sharing keeps the space overhead of multiversion management at an acceptable level. Experimental results show that the proposed checkpointing technique achieves 27% higher system throughput than fuzzy checkpointing, while the space overhead of version management remains within an acceptable range, meeting the requirements of high-performance applications.
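The core mechanism described above can be sketched as follows. This is a minimal toy model (class name, locking granularity, and timestamp scheme are assumptions, not the paper's implementation): writers install new tuple versions, and the checkpointer runs as a read-only transaction against a fixed snapshot timestamp, so it never blocks concurrent updates.

```python
import threading

class MVStore:
    """Toy tuple-level multiversion store: writers append versions,
    readers (including the checkpointer) read as of a fixed timestamp."""

    def __init__(self):
        self.versions = {}          # key -> list of (commit_ts, value)
        self.ts = 0
        self.lock = threading.Lock()

    def write(self, key, value):
        with self.lock:             # commit: install a new version
            self.ts += 1
            self.versions.setdefault(key, []).append((self.ts, value))
            return self.ts

    def read(self, key, snapshot_ts):
        # latest version committed at or before the snapshot timestamp
        for commit_ts, value in reversed(self.versions.get(key, [])):
            if commit_ts <= snapshot_ts:
                return value
        return None

    def checkpoint(self):
        """Run as a read-only transaction: fix a snapshot timestamp and
        copy every key's visible version; concurrent writes proceed."""
        with self.lock:
            snap = self.ts
        return {k: self.read(k, snap) for k in list(self.versions)}
```

A write committed after the checkpoint's snapshot timestamp is invisible to the checkpoint copy but visible to later readers, which is what makes the checkpoint file transaction-consistent.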

3.
王盛  董黎刚  李群 《计算机工程》2011,37(5):65-67,70
This paper presents BF-Apriori, an improved Apriori algorithm based on binary numbers and the support distribution of items. The algorithm analyzes the probability distribution of items, sorts the items in each itemset in descending order of probability, and encodes transactions as binary numbers by dimension, reducing the read and storage overhead of the transaction database; slicing and pruning techniques further reduce the time complexity of rule mining. Experimental results show that BF-Apriori reduces storage overhead by about 50% and cuts execution time by a factor of more than four, improving the storage efficiency and speed of data mining.
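The bit-encoding idea can be illustrated with a short sketch. This is a hedged simplification of BF-Apriori (the function name and brute-force candidate enumeration are illustrative assumptions; the paper's slicing and pruning steps are omitted): items are ordered by frequency, each transaction becomes a bit vector, and support counting reduces to bitwise AND.

```python
from itertools import combinations

def mine_frequent(transactions, min_support):
    """Sketch of bit-encoded support counting: sort items by frequency,
    encode each transaction as a bit vector, and count candidate supports
    with bitwise AND instead of set containment tests."""
    freq = {}
    for t in transactions:
        for item in t:
            freq[item] = freq.get(item, 0) + 1
    # frequency-ordered item -> bit position
    order = sorted(freq, key=freq.get, reverse=True)
    bit = {item: i for i, item in enumerate(order)}

    encoded = []
    for t in transactions:
        mask = 0
        for item in t:
            mask |= 1 << bit[item]
        encoded.append(mask)

    def support(items):
        mask = 0
        for item in items:
            mask |= 1 << bit[item]
        # a transaction supports the itemset iff all its bits are present
        return sum(1 for e in encoded if e & mask == mask)

    frequent = {}
    for k in range(1, len(order) + 1):
        found = False
        for cand in combinations(order, k):
            s = support(cand)
            if s >= min_support:
                frequent[frozenset(cand)] = s
                found = True
        if not found:       # no frequent k-itemset: no larger one exists
            break
    return frequent
```

The brute-force enumeration here is exponential and only suitable for tiny inputs; the point is the encoding, which replaces set scans with integer operations.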

4.
包斌  李亚岗 《计算机应用》2006,26(1):220-222
Building on the very high concurrency of B-link trees, this paper proposes a novel scheme that uses the B-link tree as the database index and combines it with multiversioning. Transactions are classified as read-only or update transactions; read-only transactions acquire no locks, update transactions need only a small number of locks, and deadlock cannot occur. Experiments show that in a concurrent environment this scheme significantly improves database performance and transaction throughput.

5.
The design of a database is a rather complex and dynamic process that requires comprehensive knowledge and experience. There exist many manual design tools and techniques, but the step from a schema to an implementation is still a delicate subject. The interactive database design tool Gambit supports the whole process in an optimal way. It is based on an extended relational entity-relationship model. The designer is assisted in outlining and describing data structures and consistency-preserving update transactions. The constraints are formulated using the database programming language Modula/R, which is based upon first-order predicate calculus. The update transactions are generated automatically as Modula/R programs and include all defined integrity constraints. They are collected in so-called data modules that represent the only interface to the database apart from read operations. The prototype facility of Gambit allows the designer to test the design of the database. The results can be used as feedback leading to an improvement of the conceptual schema and the transactions.

6.
Concurrency control is the activity of synchronizing operations issued by concurrently executing transactions on a shared database. The aim of this control is to provide an execution that has the same effect as a serial (non-interleaved) one. The optimistic concurrency control technique allows transactions to execute without synchronization, relying on commit-time validation to ensure serializability. The effectiveness of optimistic techniques depends on the conflict rate of transactions. Since different systems have different conflict patterns, and the patterns may also change over time, applying the optimistic scheme to the entire system degrades performance. In this paper, a novel algorithm is proposed that dynamically selects the optimistic or pessimistic approach based on the conflict rate. The proposed algorithm uses an adaptive resonance theory-based neural network to decide whether to grant a lock or to pick the winner transaction. In addition, the parameters of this neural network are optimized by a modified gravitational search algorithm. Moreover, in real operational environments the writeset (WS) and readset (RS) are known before execution for only a fraction of the transactions, so the proposed algorithm is designed around optional knowledge of the WS and RS. Experimental results show that the proposed hybrid concurrency control algorithm yields more than a 35% reduction in the number of aborts at high transaction rates compared to the strict two-phase locking algorithm used in many commercial database systems. The improvement is 13% compared to a pure-pessimistic approach and more than 31% compared to a pure-optimistic approach.
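The mode-selection core of such a hybrid scheme can be sketched without the neural network or the gravitational search (both are replaced here by a plain threshold on a sliding-window conflict-rate estimate; class name and threshold are assumptions):

```python
class AdaptiveCC:
    """Hedged sketch of hybrid concurrency control: choose optimistic or
    pessimistic execution from a running estimate of the conflict rate."""

    def __init__(self, threshold=0.2, window=100):
        self.threshold = threshold
        self.window = window
        self.outcomes = []          # 1 = conflicting transaction, 0 = clean

    def record(self, conflicted):
        self.outcomes.append(1 if conflicted else 0)
        self.outcomes = self.outcomes[-self.window:]

    def conflict_rate(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def mode(self):
        # low conflict rate: run without locks, validate at commit;
        # high conflict rate: fall back to lock-based execution
        return "optimistic" if self.conflict_rate() < self.threshold else "pessimistic"
```

The paper's contribution is precisely in replacing this fixed threshold with a learned decision function; the sketch only shows where that decision plugs in.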

7.
Concurrency Control and Deadlock Handling for Active Rules (cited 1 time)
In rule-based active database systems, triggered rules usually run as transactions, and the rule coupling modes determine when these concurrent transactions start and their serializable commit order. Based on the lock-inheritance and lock-preemption relationships among concurrent transactions over shared data objects, this paper proposes a concurrency control algorithm and, based on transaction trees (forests), presents an efficient deadlock detection algorithm together with a minimal-cost deadlock recovery algorithm.
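The detection step can be illustrated with a plain waits-for-graph cycle check (the paper operates on transaction trees with lock inheritance; this simplified sketch, with an assumed graph representation, conveys only the core idea that a deadlock is a cycle among waiting transactions):

```python
def find_deadlock(waits_for):
    """Return a cycle in the waits-for graph (as a list of transaction
    IDs ending where it starts), or None if there is no deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in waits_for}
    stack = []

    def visit(t):
        color[t] = GRAY
        stack.append(t)
        for u in waits_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:        # back edge: cycle found
                return stack[stack.index(u):] + [u]
            if color.get(u, WHITE) == WHITE:
                cycle = visit(u)
                if cycle:
                    return cycle
        stack.pop()
        color[t] = BLACK
        return None

    for t in list(waits_for):
        if color[t] == WHITE:
            cycle = visit(t)
            if cycle:
                return cycle
    return None
```

A minimal-cost recovery pass would then choose a victim from the returned cycle, e.g. the transaction whose rollback undoes the least work.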

8.
Data consistency in intermittently connected distributed systems (cited 9 times)
Mobile computing introduces a new form of distributed computation in which communication is most often intermittent, low-bandwidth, or expensive, thus providing only weak connectivity. We present a replication scheme tailored for such environments. Bounded inconsistency is defined by allowing controlled deviation among copies located at weakly connected sites. A dual database interface is proposed that, in addition to read and write operations with the usual semantics, supports weak read and write operations. In contrast to the usual read and write operations that read consistent values and perform permanent updates, weak operations access only local and potentially inconsistent copies and perform updates that are only conditionally committed. Weak operations support disconnected operation, since mobile clients can employ them to continue operating even while disconnected. The extended database interface coupled with bounded inconsistency offers a flexible mechanism for adapting replica consistency to the networking conditions by appropriately balancing the use of weak and normal operations. Adjusting the degree of divergence among copies provides additional support for adaptivity. We present transaction-oriented correctness criteria for the proposed schemes, introduce corresponding serializability-based methods, and outline protocols for their implementation. Then, some practical examples of their applicability are provided. The performance of the scheme is evaluated for a range of networking conditions and varying percentages of weak transactions using an analytical model developed for this purpose.
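The dual interface can be sketched as follows. This is a minimal, single-replica toy (class name, the `max_pending` divergence bound, and the last-writer-wins reconciliation are assumptions, not the paper's protocol): strict operations hit the consistent core copy, weak operations touch only the local copy and are conditionally committed until reconnection.

```python
class WeaklyConnectedReplica:
    """Hedged sketch of the dual read/write interface with bounded
    inconsistency at a weakly connected site."""

    def __init__(self, core, max_pending=3):
        self.core = core               # shared, consistent copy
        self.local = dict(core)        # possibly divergent local copy
        self.pending = []              # conditionally committed weak writes
        self.max_pending = max_pending # bound on allowed divergence

    def strict_read(self, key):
        return self.core[key]

    def weak_read(self, key):
        return self.local[key]         # may return an unreconciled value

    def strict_write(self, key, value):
        self.core[key] = value
        self.local[key] = value

    def weak_write(self, key, value):
        if len(self.pending) >= self.max_pending:
            raise RuntimeError("divergence bound reached; reconcile first")
        self.local[key] = value
        self.pending.append((key, value))

    def reconcile(self):
        """On reconnection, replay weak writes against the core copy
        (last-writer-wins; the paper's criteria are more refined)."""
        for key, value in self.pending:
            self.core[key] = value
        self.pending.clear()
        self.local = dict(self.core)
```

Tuning `max_pending` is the knob the abstract calls "adjusting the degree of divergence": a bound of zero degenerates to fully strict operation.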

9.
In this study, we investigate a different approach to maintaining serializability in real-time database systems (RTDBS) such that concurrency among transactions can be increased. The study is motivated by the dominance of read-only transactions (ROTs) in many real-time applications. Given knowledge about the read/write characteristics of transactions, it can be more efficient and effective to process ROTs separately from update transactions (UTs). In particular, we have devised an independent algorithm to process ROTs, while a conventional concurrency control protocol such as optimistic concurrency control (OCC) can be employed to process UTs. Using a separate algorithm to process ROTs reduces the interference between UTs and ROTs. The undesirable overhead caused by transaction restart and blocking due to concurrency control can be alleviated. Consequently, the timeliness of the system can be improved. The performance of this approach is examined through a series of simulation experiments. The results show that the performance of ROTs in terms of miss rate and restart rate is improved significantly, while the performance of UTs is also improved slightly. As a result, separate processing of ROTs is a viable approach that achieves better performance and resource utilization than using solely the OCC protocol, one of the best-performing protocols in the real-time database literature.

10.
We propose an algorithm for executing transactions in object-oriented databases. The object-oriented database model generalizes the classical model of database concurrency control by permitting accesses to class and instance objects, by permitting arbitrary operations on objects as opposed to traditional read and write operations, and by allowing nested execution of transactions on objects. In this paper, we first develop a uniform methodology for treating both classes and instances. We then develop a two-phase locking protocol with a new relationship between locks called ordered sharing for an object-oriented database. Ordered sharing does not restrict the execution of conflicting operations. Finally, we extend the protocol to handle objects that execute methods on other objects, thus resulting in the nested execution of transactions. The resulting protocol permits more concurrency than other known locking-based protocols.

11.
The purpose of a database intrusion detection system (IDS) is to detect transactions that access data without permission. This paper proposes a novel approach to identifying malicious transactions. The approach concentrates on two aspects of database transactions: (1) dependencies among data items and (2) variations of each individual data item, which can be treated as time-series data. The advantages are threefold. First, dependency rules among data items are extended to detect transactions that read or write data without permission. Second, a novel behaviour-similarity criterion is introduced to reduce the false positive rate of detection. Third, time-series anomaly analysis is conducted to pinpoint intrusion transactions that update data items with unexpected patterns. As a result, the proposed approach is able to track normal transactions and detect malicious ones more effectively than existing approaches.

12.
J. Xu 《Acta Informatica》1992,29(2):121-160
This paper presents a new model for studying the concurrency vs. computation time tradeoffs involved in on-line multiversion database concurrency control. The basic problem studied in our model is the following. Given: a current database system state, which includes information such as which transaction previously read a version from which other transaction, which transaction has written which versions into the database, and the ordering of versions previously written; and a set of read and write requests of requesting transactions. Question: does there exist a new database system state in which the requesting transactions can be immediately put into execution (their read and write requests satisfied, or, in the case of predeclared-writeset transactions, their write requests guaranteed to be satisfied) while preserving consistency under a given set of additional constraints? (The amount of concurrency achieved is defined by the set of additional constraints.) In this paper we derive "limits" of performance achievable by polynomial-time concurrency control algorithms. Each limit is characterized by a minimal set of constraints that allow the on-line scheduling problem to be solved in polynomial time. If any one constraint in that minimal set is omitted, although it could increase the amount of concurrency, it would also have the dramatic negative effect of making the scheduling problem NP-complete; whereas if no constraint in the minimal set is omitted, the scheduling problem can be solved in polynomial time. With each of these limits, one can construct an efficient scheduling algorithm that achieves an optimal level of concurrency in polynomial computation time according to the constraints defined in the minimal set.

13.
The security of computers and their networks is of crucial concern in the world today. One mechanism to safeguard information stored in database systems is an Intrusion Detection System (IDS). The purpose of intrusion detection in database systems is to detect malicious transactions that corrupt data. Recently, researchers have been applying data mining techniques to detect such malicious transactions in database systems. Their approach concentrates on mining data dependencies among data items; transactions not compliant with these data dependencies are identified as malicious. However, the algorithms these approaches use for designing their data dependency miners have limitations. For instance, they need to determine appropriate settings for minimum support and related constraints experimentally, which does not necessarily lead to strong data dependencies. In this paper we propose a new data mining algorithm, called Optimal Data Access Dependency Rule Mining (ODADRM), for designing a data dependency miner for our database IDS. ODADRM is an extension of the k-optimal rule discovery algorithm, adapted to the database intrusion detection domain. ODADRM avoids many limitations of previous data dependency miner algorithms. As a result, our approach is able to track normal transactions and detect malicious ones more effectively than existing approaches.

14.
Even state-of-the-art database protection mechanisms often fail to prevent malicious attacks. Since, in a database environment, the modifications made by one transaction may affect the execution of later transactions, the damage caused by malicious (bad) transactions spreads. Following traditional log-based recovery schemes, one can roll back (undo) the effect of all transactions, both malicious and non-malicious; in such a scenario, even unaffected transactions are rolled back. In this paper, we propose a column dependency-based approach to identify the affected transactions that need to be compensated along with the malicious transactions. To ensure durability, committed non-malicious transactions are then re-executed in a manner that retains database consistency. We present a static recovery algorithm as well as an on-line version of the same and prove their correctness. A detailed performance evaluation of the proposed scheme with the TPC-C benchmark suite is also presented.
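The damage-assessment step can be sketched with a single pass over the log. This is a hedged simplification (the function name, the tuple-based log format, and the column-level granularity rules are assumptions): a transaction is affected if it read a column last written by a malicious or already-affected transaction, and a clean overwrite of a column repairs it.

```python
def affected_transactions(log, malicious):
    """Given a commit-ordered log of (txn_id, read_cols, write_cols)
    tuples and a set of malicious transaction IDs, return the set of
    non-malicious transactions that must be compensated."""
    tainted_cols = set()
    tainted_txns = set()
    for txn, reads, writes in log:
        if txn in malicious or reads & tainted_cols:
            tainted_txns.add(txn)
            tainted_cols |= writes          # its writes spread the damage
        else:
            tainted_cols -= writes          # clean overwrite repairs columns
    return tainted_txns - set(malicious)
```

Only the returned transactions (plus the malicious ones) are undone and re-executed; everything else survives, which is the saving over whole-log rollback.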

15.
This paper presents an efficient scheme for eliminating conflicts between distributed read-only transactions and distributed update transactions, thereby reducing synchronization delays. The scheme makes use of a multiversion mechanism in order to guarantee that distributed read-only transactions see semantically consistent snapshots of the database, that they never have to be rolled back due to their late arrival at retrieval sites, and that they inflict minimal synchronization delays on concurrent update transactions. Proof that the presented scheme guarantees semantic consistency is provided. Two important by-products of this scheme are that recovery from transaction and system failures is greatly simplified, and that database dumps can be taken while leaving the database on-line.

16.
An Improved Incremental Mining Algorithm (cited 1 time)
李春喜  赵雷 《计算机工程》2010,36(24):42-44
The Pre-FUFP algorithm uses the notion of pre-frequent items to handle frequent-pattern-tree updates efficiently, but when a pre-frequent item becomes frequent, it must determine which transactions in the original database contain that item. This paper introduces an index from each pre-frequent item to the identifiers of the original transactions that contain it, so that only those transactions need to be processed, reducing the time consumed in this step; a compressed FP-tree combined with matrix techniques replaces the original FP-growth for mining frequent patterns. Experiments show that the algorithm is substantially faster than Pre-FUFP.
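The proposed index can be sketched as a plain inverted index from pre-frequent items to transaction IDs (function names and the list-of-sets database representation are illustrative assumptions):

```python
def build_tid_index(transactions, pre_frequent):
    """Map each pre-frequent item to the IDs of transactions containing
    it, so that when the item is promoted to frequent only those
    transactions must be revisited."""
    index = {item: [] for item in pre_frequent}
    for tid, t in enumerate(transactions):
        for item in t:
            if item in index:
                index[item].append(tid)
    return index

def transactions_to_rescan(index, promoted_item):
    # instead of scanning the whole database, fetch only the indexed tids
    return index.get(promoted_item, [])
```

The index is built once during the initial scan, so promotion of a pre-frequent item costs only a lookup rather than a full database pass.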

17.
With the development of computer technology, the modern network attack-and-defense landscape grows increasingly severe, and the secure transmission of secret information urgently needs a solution. Covert communication embeds secret information in a carrier and transmits it safely over a covert channel, but traditional covert channels suffer from data corruption, vulnerability to attack, and easy detection, and cannot meet higher security requirements. As a public data platform, a blockchain can embed secret information under the cover of a large volume of transactions; its tamper resistance, anonymity, and decentralization address the problems of traditional covert channels and enable secure covert communication. Existing blockchain covert communication schemes, however, suffer from low communication efficiency and weak security, so communicating securely and efficiently is the key research problem for blockchain covert channels. This paper proposes a blockchain covert communication scheme hidden among normal transactions: a hash algorithm is used to construct a transmission-free codebook, embedding secret information without modifying any transaction data, and elliptic-curve properties allow transactions carrying hidden information to be quickly filtered out of the mass of transactions so that the secret information can be extracted rapidly. The proposed scheme improves the security and efficiency of covert communication and is highly portable. Theoretical analysis shows that an attacker cannot distinguish ordinary transactions from special ones, giving the scheme strong detection resistance and scalability; experiments on the Bitcoin testnet show that the scheme is efficient.
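The "transmission-free codebook" idea can be illustrated with a deliberately simplified toy that is not the paper's actual construction (the elliptic-curve filtering step is omitted, and the chunk size, candidate pool, and hash field are assumptions): both parties agree that the low-order bits of a transaction's hash are its codebook entry, so the sender embeds a secret by broadcasting otherwise-normal transactions whose hashes already encode the right bits, without modifying any transaction data.

```python
import hashlib

def bits_of(tx, k):
    """k low-order bits of the transaction's hash: the codebook entry."""
    digest = hashlib.sha256(tx.encode()).digest()
    return digest[-1] % (1 << k)

def embed(message_bits, candidate_txs, k=2):
    """For each k-bit chunk of the secret (message length assumed to be a
    multiple of k), pick a candidate transaction whose hash encodes it."""
    chosen = []
    for i in range(0, len(message_bits), k):
        chunk = int(message_bits[i:i + k], 2)
        tx = next(t for t in candidate_txs if bits_of(t, k) == chunk)
        chosen.append(tx)
    return chosen

def extract(txs, k=2):
    """The receiver recomputes the hashes; no codebook is transmitted."""
    return "".join(format(bits_of(t, k), "0{}b".format(k)) for t in txs)
```

Because the embedded transactions are drawn unmodified from the sender's normal traffic, an observer who does not know the codebook convention sees only ordinary transactions.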

18.
Existing host-based Intrusion Detection Systems use the operating system log or the application log to detect misuse or anomalous activities. These methods are not sufficient for detecting intrusions in database systems. In this paper, we describe a method for detecting malicious activities in a database management system by using data dependency relationships. Typically, before a data item is updated in the database, some other data items are read or written, and after the update, other data items may also be written. The data items read or written in the course of an update of a data item constitute the read set, prewrite set, and postwrite set for this data item. The proposed method identifies malicious transactions by comparing these sets with the data items read or written in user transactions. We provide mechanisms for finding data dependency relationships among transactions and use Petri nets to model normal data update patterns at the user task level. Using this method, we ascertain more hidden anomalies in the database log. Our simulation on synthetic data reveals that the proposed model can achieve desirable performance when both transaction- and user-task-level intrusion detection methods are employed.

Yi Hu is a PhD candidate in the Computer Science and Computer Engineering Department at the University of Arkansas. His research interests are in database intrusion detection, database damage assessment, data mining, and trust management. He received his BS and MS degrees in Computer Science from Southwest Jiaotong University and the University of Arkansas, respectively. Brajendra Panda received his MS degree in mathematics from Utkal University, India, in 1985 and his PhD degree in computer science from North Dakota State University in 1994. He is currently an associate professor with the Computer Science and Computer Engineering Department at the University of Arkansas. His research interests include database systems, computer security, digital forensics, and information assurance. He has published over 60 research papers in these areas.
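The set-comparison step described in the abstract can be sketched directly (the function name, the profile representation, and the strict-subset rule are assumptions; the paper additionally models update patterns with Petri nets):

```python
def is_malicious(txn_reads, txn_writes, profile):
    """profile maps each updated data item to (read_set, postwrite_set):
    the items normally read before the update and written after it.
    A transaction violating either set for an item it writes is flagged."""
    for item in txn_writes:
        if item not in profile:
            continue
        read_set, postwrite_set = profile[item]
        if not read_set <= txn_reads:            # required pre-reads missing
            return True
        if not postwrite_set <= txn_writes:      # required follow-up writes missing
            return True
    return False
```

For example, if `balance` is normally updated only after reading `rate` and `principal` and is always followed by a write to `audit_log`, a transaction that writes `balance` cold is flagged.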

19.
A Timestamp-Based Multiversion Concurrency Control Protocol for Real-Time Database Systems (cited 3 times)
The timing constraints of a real-time database system include timing constraints on data and on transactions, and a good concurrency control protocol must satisfy both. This paper discusses in detail the characteristics and classification of real-time data and real-time transactions as they relate to concurrency control, extends the multiversion concurrency control mechanism of traditional database systems accordingly, and proposes a timestamp-based multiversion concurrency control protocol for real-time database systems. The protocol imposes no delay on hard real-time transactions and satisfies the timing constraints of real-time transactions and data well; its drawback is that it guarantees only quasi-consistency.
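The non-delaying property of timestamp-ordered multiversion reads can be sketched as follows (class name and clock scheme are assumptions; the paper's real-time scheduling details are omitted): each read is served the latest version whose write timestamp does not exceed the transaction's timestamp, so a read-only hard real-time transaction never waits or restarts.

```python
class TimestampMVCC:
    """Hedged sketch of timestamp-based multiversion reads."""

    def __init__(self):
        self.versions = {}          # key -> list of (write_ts, value), ts order
        self.clock = 0

    def begin(self):
        self.clock += 1
        return self.clock           # transaction timestamp

    def write(self, key, value):
        ts = self.begin()
        self.versions.setdefault(key, []).append((ts, value))
        return ts

    def read(self, key, txn_ts):
        chosen = None
        for write_ts, value in self.versions.get(key, []):
            if write_ts <= txn_ts:
                chosen = value      # versions are appended in ts order
            else:
                break
        return chosen
```

A transaction stamped before a later write simply never sees that write, which is also why only quasi-consistency is guaranteed: readers may observe slightly stale but timestamp-consistent data.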

20.
Scalability and availability in a large-scale distributed database are determined by the consistency strategies used by the transactions. Most big data applications demand consistency and availability at the same time. However, a suitable transaction model that handles the trade-off between availability and consistency is presently lacking. In this article, we propose a hierarchical transaction model that supports multiple consistency levels for data items in a large-scale replicated database. Data items are classified into categories based on their consistency requirements, computed using a data mining algorithm, and then mapped to the appropriate consistency level in the hierarchy. This allows parallel execution of several transactions at each level. The topmost level, called the Serializable (SR) level, follows strong consistency and applies to data items that are both read and updated heavily. The next level, Snapshot Isolation (SI), maps to data items that are mostly read and demand unblocked reads. Data items that are mostly updated do not follow a strictly consistent snapshot and are mapped to the next lower level, called Non-monotonic Snapshot Isolation (NMSI). The lowest level in the hierarchy corresponds to data items for which the ordering of operations does not matter; this level is called the Asynchronous (ASYNC) level. We have tested the proposed transaction model with two different workloads on a test-bed designed following the TPC-C benchmark schema. The performance of the proposed model has been evaluated against other transaction models that support a single consistency policy. The proposed model shows promising results in terms of transaction throughput, commit rate, and average latency.
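The mapping step from access mix to consistency level can be sketched with a toy classifier (the thresholds are invented for illustration; the paper derives the classification with a data mining algorithm):

```python
def consistency_level(read_frac, update_frac):
    """Hedged sketch: classify a data item into SR, SI, NMSI, or ASYNC
    from its observed fractions of read and update accesses."""
    if read_frac > 0.4 and update_frac > 0.4:
        return "SR"        # read and updated heavily: strong consistency
    if read_frac > 0.4:
        return "SI"        # mostly read: unblocked snapshot reads
    if update_frac > 0.4:
        return "NMSI"      # mostly updated: relaxed snapshot
    return "ASYNC"         # ordering of operations does not matter
```

Transactions at each level can then be executed in parallel under that level's protocol, which is where the throughput gain over a single-policy model comes from.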


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号