Similar Documents (20 matches found)
1.
Conventional database systems suffer from poor execution speed due to the disk I/O bottleneck. Recently, with falling memory prices and rapid advances in high-capacity memory chip technology, research on main-memory database systems has gained wide attention. This paper discusses transaction management in a client-server main-memory database environment. Although client-server systems have been widely studied, the recovery technique had previously been investigated only in M. Franklin et al. TR 1081, 1992. Existing recovery techniques transfer generated log records and their data pages from the client to the server while the transaction executes at the client site. This approach increases data transfer time on the network, and global synchronization is not guaranteed. In this paper, the client transfers only the log records of completed transactions to the server, resolving both problems of the current recovery techniques. In addition, because the server manages only the replay of completed log records, a simple recovery algorithm suffices. The client fully exploits system concurrency by handling abort actions itself, and a page-unit recovery technique is proposed to reduce the time required to recover the whole database.
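The commit-time log-shipping idea in this abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation; all class and method names are hypothetical. The client buffers redo records per transaction, resolves aborts locally by discarding the buffer, and ships records to the server only when the transaction completes, so the server replays completed work only.

```python
# Hypothetical sketch: client ships only completed-transaction log records.
class Server:
    def __init__(self):
        self.pages = {}

    def apply(self, redo_records):
        for page, value in redo_records:
            self.pages[page] = value       # redo-only replay

class Client:
    def __init__(self, server):
        self.server = server
        self.buffers = {}                  # txn_id -> list of redo records

    def write(self, txn_id, page, value):
        self.buffers.setdefault(txn_id, []).append((page, value))

    def abort(self, txn_id):
        # Abort is resolved entirely at the client: just drop the buffer.
        self.buffers.pop(txn_id, None)

    def commit(self, txn_id):
        # Only the completed log records cross the network.
        self.server.apply(self.buffers.pop(txn_id, []))

server = Server()
client = Client(server)
client.write("t1", "p1", "A")
client.commit("t1")
client.write("t2", "p1", "B")
client.abort("t2")                         # never reaches the server
print(server.pages)                        # {'p1': 'A'}
```

Because aborted work never crosses the network, transfer time shrinks and the server never needs to undo anything.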

2.
Main-memory databases evolved from traditional disk-based databases: by keeping entire tables in memory, they greatly improve the processing capacity of the database system, which makes them well suited to real-time billing systems for mobile communications. This paper surveys the technical characteristics of main-memory databases and, in view of the requirements of real-time billing in mobile communications, applies main-memory database technology to a real-time billing system, describing the application model of the system as well as the architecture and functional requirements of the main-memory database.

3.
With the expansion of Web sites to include business functions, a user interfaces with e-businesses through an interactive and multistep process, which is often time-consuming. For mobile users accessing the Web over digital cellular networks, the failure of the wireless link, a frequent occurrence, can result in the loss of work accomplished prior to the disruption. This work must then be repeated upon subsequent reconnection, often at significant cost in time and computation. This "disconnection-reconnection-repeat work" cycle may cause mobile clients to incur substantial monetary as well as resource (such as battery power) costs. In this paper, we propose a protocol for "recovering" a user to an appropriate recent interaction state after such a failure. The objective is to minimize the amount of work that needs to be redone upon restart after failure. Whereas classical database recovery focuses on recovering the system, i.e., all transactions, our work considers the problem of recovering a particular user interaction with the system. This recovery problem encompasses several interesting subproblems: (1) modeling user interaction in a way that is useful for recovery, (2) characterizing a user's "recovery state", (3) determining the state to which a user should be recovered, and (4) defining a recovery mechanism. We describe the user interaction with one or more Web sites using intuitive and familiar concepts from database transactions. We call this interaction an Internet transaction (iTX), distinguish this notion from extant transaction models, and develop a model for it, as well as for a user's state on a Web site. Based on the twin foundations of our iTX and state models, we finally describe an effective protocol for recovering users to valid states in Internet interactions.
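The recovery objective above can be illustrated with a minimal sketch, under our own assumptions rather than the paper's actual iTX model: each step of the multistep interaction is logged, some steps are marked as valid recovery points, and after a disconnection the user is restored to the most recent valid state instead of restarting from scratch.

```python
# Hypothetical sketch of recovering a user interaction to its most recent
# valid state; names and the recovery-point flag are illustrative only.
class InternetTransaction:
    def __init__(self):
        self.steps = []                    # (step_name, is_recovery_point)

    def do_step(self, name, recovery_point=False):
        self.steps.append((name, recovery_point))

    def recover(self):
        # Walk backwards to the latest valid state.
        for name, valid in reversed(self.steps):
            if valid:
                return name
        return None                        # nothing valid: restart from scratch

itx = InternetTransaction()
itx.do_step("login", recovery_point=True)
itx.do_step("fill_cart", recovery_point=True)
itx.do_step("enter_payment")               # in flight when the link failed
print(itx.recover())                       # fill_cart
```

Only the work after the last valid state ("enter_payment" here) must be redone, which is the cost the protocol aims to minimize.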

4.
Blockchain technologies are expected to make a significant impact on a variety of industries. However, one issue holding them back is their limited transaction throughput, especially compared to established solutions such as distributed database systems. In this paper, we rearchitect a modern permissioned blockchain system, Hyperledger Fabric, to increase transaction throughput from 3000 to 20 000 transactions per second. We focus on performance bottlenecks beyond the consensus mechanism, and we propose architectural changes that reduce computation and I/O overhead during transaction ordering and validation to greatly improve throughput. Notably, our optimizations are fully plug-and-play and do not require any interface changes to Hyperledger Fabric.

5.
On real-time databases: concurrency control and scheduling
In addition to maintaining database consistency as in conventional databases, real-time database systems must also handle transactions with timing constraints. While transaction response time and throughput are usually used to measure a conventional database system, the percentage of transactions satisfying their deadlines or a time-critical value function is often used to evaluate a real-time database system. Scheduling real-time transactions is far more complex than traditional real-time scheduling in the sense that (1) worst-case execution times are typically hard to estimate, since not only CPU but also I/O requirements are involved; and (2) certain aspects of concurrency control may not integrate well with real-time scheduling. In this paper, we first develop a taxonomy of the underlying design space of concurrency control, including the various techniques for achieving serializability and improving performance. This taxonomy provides us with a foundation for addressing the real-time issues. We then consider the integration of concurrency control with real-time requirements. The implications of using run policies to better utilize real-time scheduling in a database environment are examined. Finally, as timing constraints may be more important than data consistency in certain hard real-time database applications, we also discuss several approaches that exploit the nonserializable semantics of real-time transactions to meet hard deadlines.
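One policy family this survey covers, deadline-driven transaction scheduling, can be sketched as follows. This is a generic earliest-deadline-first toy under our own assumptions, not a scheme from the paper; tardy transactions are discarded, reflecting the point that a transaction finished after its deadline often has no value in a real-time database.

```python
# Illustrative earliest-deadline-first scheduling of transactions; a
# transaction that cannot finish by its deadline is aborted, not run late.
import heapq

def run_edf(transactions, now=0):
    """transactions: list of (deadline, exec_time, name) tuples."""
    heap = list(transactions)
    heapq.heapify(heap)                    # tuples order by deadline first
    completed, missed = [], []
    while heap:
        deadline, exec_time, name = heapq.heappop(heap)
        if now + exec_time <= deadline:
            now += exec_time
            completed.append(name)
        else:
            missed.append(name)            # abort rather than finish late
    return completed, missed

done, late = run_edf([(10, 4, "t1"), (5, 3, "t2"), (9, 5, "t3")])
print(done, late)                          # ['t2', 't3'] ['t1']
```

The survey's deeper point is that such CPU-style scheduling must still be reconciled with lock conflicts and unpredictable I/O, which this sketch deliberately omits.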

6.
With the intensive use of the internet, patient-centric healthcare systems shifted away from paper-based records towards a computerized format. Electronic patient-centric healthcare databases contain information about patients that should be kept available for further reference. Healthcare databases contain valuable data that makes them a target for attackers. Hacking into these systems and publishing their contents online exposes them to a challenge that affects their continuity. Any denial of this service will not be tolerated, since we cannot know when we need to retrieve a patient's record. Denial of service affects the continuity of the healthcare system, which in turn threatens patients' lives, decreases the efficiency of the healthcare system, and increases the operating costs of the attacked healthcare organization. Although many defensive security methods have been devised, malicious transactions may nonetheless penetrate the safeguards and modify critical data in healthcare databases. When a malicious transaction modifies a patient record in a database, the damage may spread to other records through valid transactions. Therefore, recovery techniques are required. The efficiency of the data recovery algorithm is substantial for e-healthcare systems: a patient cannot wait too long for his or her medical history to be recovered so that the correct medication can be prescribed. Nevertheless, fast data recovery requires that an efficient damage assessment process precede the recovery stage, and the damage assessment must be performed as soon as the intrusion detection system detects the malicious activity. The execution time of the recovery process is a crucial performance measure because it is directly proportional to the denial-of-service time of the healthcare system. This paper presents a high-performance damage assessment and recovery algorithm for e-healthcare systems. The algorithm provides fast damage assessment after an attack by a malicious transaction to preserve the availability of the e-healthcare database. Reducing the execution time of recovery is the key target of our algorithm. The proposed algorithm outperforms the existing algorithms: it is about six times faster than the most recently proposed algorithm. In the worst case, the proposed algorithm takes 8.81 ms to discover the damaged part of the database, whereas the fastest recent algorithm requires 50.91 ms. In the best case, the proposed algorithm requires 0.43 ms, which is 86 times faster than the fastest recent work. This is a significant reduction of execution time compared with other available approaches. Saving damage assessment time means shorter denial-of-service periods, which in turn guarantees the continuity of the patient-centric healthcare system.
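The damage-spread phenomenon the abstract describes, bad data propagating through valid transactions, is commonly assessed by following read-write dependencies in the log. The sketch below is our own illustration of that general idea, not the paper's algorithm: starting from the items written by the malicious transaction, any later transaction that read a tainted item becomes affected, and its writes become tainted in turn.

```python
# Illustrative damage assessment over a serialized transaction log.
def assess_damage(log, malicious_txn):
    """log: ordered list of (txn, reads, writes), reads/writes as sets."""
    dirty_items, affected = set(), {malicious_txn}
    for txn, reads, writes in log:
        if txn in affected or reads & dirty_items:
            affected.add(txn)
            dirty_items |= writes          # its writes are now suspect too
    return affected, dirty_items

log = [
    ("m1", set(),      {"bp"}),           # malicious update to blood pressure
    ("t2", {"bp"},     {"dose"}),         # valid txn read the bad value
    ("t3", {"weight"}, {"bmi"}),          # untouched by the damage
]
affected, dirty = assess_damage(log, "m1")
print(sorted(affected), sorted(dirty))     # ['m1', 't2'] ['bp', 'dose']
```

Recovery then only needs to undo and redo the affected transactions, which is why fast, precise assessment directly shortens the denial-of-service window.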

7.
Application recovery in mobile database systems (MDS) is more complex because of the unlimited geographical mobility of mobile units. The mobility of these units makes it tricky to store an application log and access it for recovery. This paper presents an application log management scheme, which uses a mobile-agent-based framework to facilitate seamless logging of application activities for recovery from transaction or system failure. We compare the performance of our scheme with lazy, pessimistic, and frequency-based schemes through simulation and show that, compared to these schemes, our scheme reduces overall recovery time by efficiently managing resources and handoffs.

8.
Traditional database security mechanisms cannot stop attacks and sabotage by legitimate but malicious users. Building on the theory of intrusion tolerance for databases, this paper studies a post-intrusion database recovery system that uses transaction-log-based data recovery. The approach strengthens the security of the database system and, to a degree, improves the techniques for recovering a database system after an intrusion.

9.
This paper designs and implements a main-memory database for the industrial control domain. The concrete implementation of each module is given, with emphasis on a real-time transaction management method and workflow suited to the characteristics of real-time databases. In practical applications the system runs stably and efficiently.

10.
顾进广  罗盼  张智 《电信科学》2012,28(1):47-52
Locking plays an important role in transaction processing for DaaS-oriented XML databases, but existing locking mechanisms leave room for improvement because their lock granularity is too coarse and they do not support mainstream XML query languages. This paper explores a distributed locking mechanism for DaaS-oriented XML databases: each database node implements a fine-grained semantic lock based on a schema-tree view, while a global node coordinates transaction processing across nodes by building a global schema-tree view. The paper closes by comparing the strengths and weaknesses of the proposed mechanism against existing locking mechanisms.

11.
Rapid advances in hardware and wireless communication technology have made the concept of mobile computing a reality. Thus, evolving database technology needs to address the requirements of the future mobile user. The frequent disconnection and migration of the mobile user violate underlying presumptions about connectivity that exist in wired database systems and introduce new issues that affect transaction management. In this paper, we present the PreSerialization (PS) transaction management technique for the mobile multidatabase environment. This technique addresses disconnection and migration and enforces a range of atomicity and isolation criteria. We also develop an analytical model to compare the performance of the PS technique to that of the Kangaroo model.

12.
For a transaction processing system to operate effectively and efficiently in cloud environments, it is important to distribute huge amounts of data while guaranteeing the ACID (atomic, consistent, isolated, and durable) properties. Moreover, database partition and migration tools can help transplant conventional relational database systems to the cloud environment rather than rebuilding a new system. This paper proposes a database distribution management (DBDM) system, which partitions or replicates the data according to the transaction behaviors of the application system. The principal strategy of DBDM is to keep together the data used in a single transaction, thus avoiding massive transmission of records in join operations. The proposed system has been implemented successfully, and preliminary experiments show that the DBDM performs database partition and migration effectively. Also, the DBDM system is modularly designed to adapt to different database management systems (DBMS) or different partition algorithms.

13.
In this paper, we approach the design of ID caching technology (IDCT) for graph databases, with the purpose of accelerating queries on graph database data and avoiding redundant graph database query operations, which consume substantial computing resources. Traditional graph database caching technology (GDCT) needs a large memory to store data and suffers from serious data consistency problems and low cache utilization. To address these issues, we propose a new technology focused on the ID allocation mechanism and high-speed ID queries on graph databases. Specifically, the IDs of query results are cached in memory, and data consistency is achieved through real-time synchronization and cache memory adaptation. In addition, we distinguish complex queries from simple queries to satisfy all query requirements, and design a cache replacement mechanism based on query action time, query count, and memory capacity, further improving performance. Extensive experiments show the superiority of our techniques compared with the traditional query approach of graph databases.
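The replacement mechanism described, eviction driven by query action time, query count, and memory capacity, can be sketched as follows. The scoring formula and all names here are our own illustrative assumptions, not the paper's design: each cached entry carries its last query time and query count, and when capacity is reached the lowest-scoring entry is evicted.

```python
# Hypothetical ID cache with recency+frequency-scored replacement.
class IDCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}                  # query_key -> [ids, last_time, count]

    def get(self, key, now):
        entry = self.entries.get(key)
        if entry:
            entry[1], entry[2] = now, entry[2] + 1
            return entry[0]
        return None                        # cache miss

    def put(self, key, ids, now):
        if key not in self.entries and len(self.entries) >= self.capacity:
            # Evict the entry with the lowest recency+frequency score.
            victim = min(self.entries,
                         key=lambda k: self.entries[k][1] + self.entries[k][2])
            del self.entries[victim]
        self.entries[key] = [ids, now, 1]

cache = IDCache(capacity=2)
cache.put("q1", [1, 2], now=0)
cache.put("q2", [3], now=1)
cache.get("q2", now=2)                     # boosts q2's score
cache.put("q3", [4], now=3)                # evicts the cold entry q1
print(sorted(cache.entries))               # ['q2', 'q3']
```

Caching only result IDs rather than full records keeps entries small, which is the memory advantage the abstract claims over traditional GDCT.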

14.
In distributed database systems, commit protocols are used to ensure transaction atomicity. In the presence of failures, nonblocking commit protocols can guarantee transaction atomicity without blocking transaction execution. A (resilient) decentralized nonblocking commit protocol (RDCP) is proposed for distributed database systems. This protocol is based on the hypercube network topology and is (⌊log2(N)⌋ - 2)-resilient to node failures (N = number of system nodes). The number of messages sent among the N nodes is O(N·log2^2(N)), which is only a factor of log2(N) above the O(N·log2(N)) message-complexity lower bound of decentralized commit protocols. Furthermore, RDCP is an optimistic nonblocking protocol: it aborts the transaction only when some nodes want to abort or some nodes fail before they make local decisions.
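The hypercube structure underlying RDCP can be illustrated with a failure-free toy (the actual protocol is far more involved, since it must tolerate node failures): with N = 2^d nodes, each node exchanges its accumulated commit vote with its neighbor across one hypercube dimension per round, so after d = log2(N) rounds every node holds the global decision, using O(N·log2(N)) messages.

```python
# Illustrative failure-free vote dissemination on a hypercube.
def hypercube_decide(votes):
    """votes[i] is True if node i voted commit; returns per-node decisions."""
    d = (len(votes) - 1).bit_length()      # dimensions, assuming N = 2^d
    decision = list(votes)
    for dim in range(d):
        nxt = decision[:]
        for node in range(len(votes)):
            neighbor = node ^ (1 << dim)   # flip one bit of the node address
            nxt[node] = decision[node] and decision[neighbor]
        decision = nxt
    return decision                        # every node holds the global AND

print(hypercube_decide([True, True, False, True]))
# [False, False, False, False] -- one abort vote aborts everywhere
print(hypercube_decide([True] * 4))        # [True, True, True, True]
```

Each of the N nodes sends one message per dimension, giving the N·log2(N) failure-free message count; RDCP's extra log2(N) factor pays for resilience.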

15.
This paper proposes a storage scheme that stores unstructured data centrally while supporting transactions, and implements the scheme as GSL, an efficient and easy-to-use data storage system. GSL's data storage interface follows the style of a file system interface while also supporting transaction processing. The paper benchmarks GSL against a file system and Oracle BLOB storage; the results show that GSL's storage efficiency matches the file system and exceeds BLOB.

16.
There are few studies of deadlock resolution in the real-time distributed database environment. A specially designed distributed deadlock detection algorithm known as the Enhanced Probe-Based Algorithm (EPBA) was evaluated by the authors and found to be very effective. The present work evaluates the EPBA for the firm real-time distributed database environment. The study also compares its performance with other existing deadlock resolution algorithms, such as the timeouts algorithm and the global sequential locking algorithm. Results indicate that under high data contention and with slack transaction deadlines, the EPBA approach outperforms all the other deadlock resolution methods.

17.
In substation integrated automation systems, real-time data such as telemetry, pulse, and status signals must be processed. Because the power system requires low-latency access to these data, monitoring systems typically use a real-time database for storage management. As the number of data units to be processed and the number of machine nodes grow, monitoring systems increasingly adopt distributed main-memory databases for real-time data storage. A key problem for a distributed main-memory database is how to synchronize data updates across machine nodes. This paper proposes a fast and effective data synchronization approach based on multicast and TCP that supports flexible network topologies, and designs a stable and reliable data transmission mechanism that keeps the data on all nodes of the distributed main-memory database well consistent.

18.
Two tightly coupled multi-computer testbeds, one providing efficient inter-node communications tailored to the application, and the other providing more flexible full connectivity among processors and memories, are used to support validation of design techniques for distributed real-time systems. The testbeds are valuable tools for evaluating, analyzing, and studying the behavior of many algorithms for distributed systems. We have used the testbeds to study the distributed recovery block scheme for handling hardware and software faults. A testbed has also been used to analyze database locking techniques and a fault-tolerant locking protocol for recovery from faults that occur during the updating of replicated copies of files in tightly coupled distributed systems. Testbeds can be configured to represent operating environments and input scenarios more accurately than software simulation. Therefore, testbed-based evaluation provides more accurate results than simulation and yields greater insight into the characteristics and limitations of proposed concepts. This is an important advantage in the complex field of distributed real-time system design evaluation and validation, and testbed-based experimentation is an effective approach to validating system concepts and design techniques for distributed real-time systems.

19.
20.
This paper analyzes the current state of, and problems in, industry testing of transactional databases in the financial sector, and on that basis defines test scenarios and business designs for financial transactional database testing. Based on these financial scenarios, a database testing tool for the financial sector was implemented, providing new guidance and methods for the testing and selection of transactional databases in finance.
