1.
The problems in building a transaction processing system are discussed, and it is shown that the difficulties are a function of specific attributes of the underlying database system. A model of a transaction processing system is presented, and five system dimensions important in classifying transaction processing systems (the process, machine, heterogeneity, data, and site components) are introduced. The specific problems posed by various combinations of system characteristics are analyzed. The evolution of transaction processing systems is described in terms of this framework.
2.
Heterogeneous and autonomous transaction processing
The problems specific to heterogeneous and autonomous transaction processing (HATP) systems are discussed. HATP is divided into three dimensions: distribution, heterogeneity, and autonomy. The authors regard the three dimensions as independent, and they present concrete design and implementation techniques to support this view.
3.
John A. Mills, Journal of Systems Integration, 1993, 3(3-4):351-369
This article integrates an interoperability architecture, the OSCA™ architecture, and a distributed transaction processing protocol, the X/Open® Distributed Transaction Processing model, into a unified model of large-scale interoperability and distributed transaction processing. Applications supporting different business operations are often deployed in heterogeneous environments in which applications are stand-alone islands and operations are fragmented. To achieve integrated operations, however, a loosely coupled system of autonomous applications is required, often bound together via a distributed transaction processing protocol. This article describes a model for this configuration. It proposes that the span of control of a transaction manager defines the transaction environment for a single application. Any two applications need not conform to the same supplier's transaction environment nor reside in the same environment. Interoperability must be provided among applications, since any one application cannot assume that any other application is under the control of the same transaction manager. Requirements are imposed upon the interactions of applications to support interoperability. The interface between transaction managers must be compatible with these requirements. Other distributed architecture standards must define the requirements for release independence, resource independence, accessibility transparency, location transparency, contract interfaces, and a secure environment.
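As an illustration of the coordination shape this model implies, the sketch below shows a transaction manager driving two independent resource managers through the prepare/commit verbs used by the X/Open DTP model. The class and method names are hypothetical; this is not an actual XA binding or the article's OSCA interfaces.

```python
# Hypothetical sketch: a transaction manager drives two independent resource managers
# through prepare/commit, the verbs of the X/Open Distributed Transaction Processing
# model. Illustrative only; not an actual XA binding.

class ResourceManager:
    def __init__(self, name, will_commit=True):
        self.name, self.will_commit = name, will_commit

    def prepare(self, xid):
        return self.will_commit          # vote: ready to commit?

    def commit(self, xid):
        print(f"{self.name}: commit {xid}")

    def rollback(self, xid):
        print(f"{self.name}: rollback {xid}")

def global_commit(xid, resource_managers):
    """Two-phase commit across resource managers owned by different applications."""
    if all(rm.prepare(xid) for rm in resource_managers):   # phase 1: collect votes
        for rm in resource_managers:
            rm.commit(xid)                                  # phase 2: commit everywhere
        return "committed"
    for rm in resource_managers:
        rm.rollback(xid)
    return "rolled back"

print(global_commit("xid-42", [ResourceManager("orders-db"), ResourceManager("billing-db")]))
```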
4.
A transaction processing queue manages a database which is partitioned into N items. Each arriving class-i customer requests to read and write a certain subset of the N items (called the shared and exclusive access sets of class i). Classes i and j are said to conflict if the exclusive access set of either class intersects the other's shared or exclusive access set. No two conflicting classes of customers can be processed simultaneously. All classes arrive according to independent Poisson processes and have general i.i.d. service times. In this paper, we discuss database systems without queuing. We show the insensitivity property of the system, and derive analytical expressions for performance measures such as blocking probabilities, throughput, etc.
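A minimal sketch of the conflict test described in this abstract, assuming each class is modelled by a shared (read) set and an exclusive (write) set; the names S/E and the helper `conflicts` are illustrative, not taken from the paper.

```python
# Hypothetical illustration of class-level conflicts over a database of N items.
# Each class holds a shared (read) set S and an exclusive (write) set E;
# two classes conflict when either one's exclusive set touches the other's accesses.

def conflicts(s_i, e_i, s_j, e_j):
    """Return True if classes i and j cannot be processed simultaneously."""
    return bool(e_i & (s_j | e_j)) or bool(e_j & (s_i | e_i))

# Example with N = 6 items, numbered 0..5.
S1, E1 = {0, 1}, {2}      # class 1 reads items 0,1 and writes item 2
S2, E2 = {3}, {4, 5}      # class 2 reads item 3 and writes items 4,5
S3, E3 = {2, 3}, {5}      # class 3 reads items 2,3 and writes item 5

print(conflicts(S1, E1, S2, E2))  # False: the classes never touch each other's items
print(conflicts(S1, E1, S3, E3))  # True: class 1 writes item 2, which class 3 reads
print(conflicts(S2, E2, S3, E3))  # True: both classes write item 5
```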
5.
Ryan Johnson, Ippokratis Pandis, Anastasia Ailamaki, The VLDB Journal: The International Journal on Very Large Data Bases, 2014, 23(1):1-23
Multicore hardware demands software parallelism. Transaction processing workloads typically exhibit high concurrency and thus provide ample opportunities for parallel execution. Unfortunately, because of the characteristics of the application, transaction processing systems must moderate and coordinate communication between independent agents; it is notoriously difficult to implement high-performing transaction processing systems that incur no communication whatsoever. As a result, transaction processing systems cannot always convert abundant, even embarrassing, request-level parallelism into execution parallelism due to communication bottlenecks. Transaction processing system designers must therefore find ways to achieve scalability while still allowing communication to occur. To this end, we identify three forms of communication in the system (unbounded, fixed, and cooperative) and argue that only the first type poses a fundamental threat to scalability. The other two types tend not to impose obstacles to scalability, though they may reduce single-thread performance. We argue that proper analysis of communication patterns in any software system is a powerful tool for improving the system's scalability. Then, we present and evaluate under a common framework techniques that attack significant sources of unbounded communication during transaction processing and sketch a solution for those that remain. The solutions we present affect fundamental services of any transaction processing engine, such as locking, logging, physical page accesses, and buffer pool frame accesses. They either reduce such communication through caching, downgrade it to a less-threatening type, or eliminate it completely through system design. We find that the latter technique, revisiting the transaction processing architecture, is the most effective. The final design cuts unbounded communication by roughly an order of magnitude compared with the baseline, while exhibiting better scalability on multicore machines.
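The following is a generic illustration, not one of the paper's specific techniques, of downgrading unbounded communication: a counter updated by every thread through one shared location serialises all threads, whereas per-thread slots communicate only at the single point where the total is read.

```python
# Generic illustration (not one of the paper's techniques) of downgrading unbounded
# communication: instead of all threads updating one shared counter, each thread
# updates its own slot, and the slots are combined only when the total is read.

import threading

class ShardedCounter:
    def __init__(self, n_threads):
        self.slots = [0] * n_threads          # one slot per thread: no cross-thread writes

    def add(self, thread_idx, amount=1):
        self.slots[thread_idx] += amount      # thread-local update, no coordination

    def value(self):
        return sum(self.slots)                # the only point where threads' work is combined

counter = ShardedCounter(4)

def worker(idx):
    for _ in range(100_000):
        counter.add(idx)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value())   # 400000
```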
6.
In recent years, a clear trend has emerged where businesses need to provide flexible access to their services so as to increase their usage by a much wider cross-section of users operating over public infrastructures but still within a trusted environment. This trusted environment must be established between all participating users and service provider entities before any transactions are carried out. To meet the challenge of enabling mobile users to work within a trusted environment on any untrusted machine, the notion of a trusted personal device (TPD) has emerged. This paper provides a survey giving a snapshot of the growing body of ongoing work in the area of TPDs and the services they support.
7.
A single real-time transaction concurrency control strategy cannot meet the requirements of a hybrid real-time database in which different types of transactions coexist, because it places special restrictions on transaction performance and on how transactions access data. Targeting the characteristics of the different types of real-time transactions, a new concurrency control algorithm for hybrid real-time transactions is proposed; it applies different concurrency control strategies to different types of real-time transactions and is therefore highly targeted and adaptive. By analyzing the relevant semantics of the data and using a definition of data similarity, the algorithm also reasonably relaxes the serializability correctness criterion; under the premise of giving priority to hard real-time transactions, it increases the proportion of soft real-time transactions that commit successfully as far as possible, improving overall system performance. Simulation results show that the MRTT_CC algorithm performs well.
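A minimal sketch of the dispatch idea only: transactions are routed to different concurrency-control policies according to their real-time class. The policy names below are placeholders; the actual MRTT_CC strategies and its data-similarity relaxation are not described in the abstract and are not modelled.

```python
# Hypothetical sketch: route transactions to a concurrency-control policy by their
# real-time class. The policies shown stand in for whatever MRTT_CC actually uses.

def pessimistic_2pl(txn):
    return f"{txn}: run under two-phase locking (predictable, favours hard deadlines)"

def optimistic_cc(txn):
    return f"{txn}: run optimistically, validate at commit (higher soft-deadline throughput)"

POLICY = {
    "hard-real-time": pessimistic_2pl,
    "soft-real-time": optimistic_cc,
}

def schedule(txn, txn_class):
    return POLICY[txn_class](txn)

print(schedule("T1", "hard-real-time"))
print(schedule("T2", "soft-real-time"))
```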
8.
9.
Chung C. Wang, Computer Standards & Interfaces, 1991, 13(1-3):233-242
This position paper provides a strawman reference model which can be used to compare and reason about transaction management in an Object-Oriented Database system (OODB). The model consists of a collection of characteristics that can be used to compare existing and future features of transaction management in an OODB. Some of the features in this collection are really alternatives to one another; these alternatives are included to help the evaluation process when developing standards.
10.
The transaction processing mechanism of a native XML database (NXD) is the core mechanism guaranteeing normal database operation and is a focus of current research. Based on an analysis of existing transaction processing mechanisms, and drawing on the mature locking theory of relational databases, an XPath-based locking mechanism with four XPL lock modes is proposed; database operations and transactions are given precise definitions, and an example is used to verify and illustrate the mechanism.
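The abstract does not spell out the four XPL lock modes, so the sketch below is a hypothetical simplification: locks are taken on XPath location paths, two locks can only conflict when one path lies on the other's ancestor-descendant axis, and plain READ/WRITE modes stand in for the paper's modes.

```python
# Hypothetical sketch of path-based locking on an XML tree; the four XPL modes from
# the paper are not specified in the abstract, so plain READ/WRITE is assumed here.

def overlaps(path_a, path_b):
    """Two simple location paths can conflict only if one is a prefix of the other."""
    a, b = path_a.rstrip("/"), path_b.rstrip("/")
    return a == b or b.startswith(a + "/") or a.startswith(b + "/")

def compatible(mode_a, mode_b, path_a, path_b):
    """Locks on disjoint subtrees never conflict; otherwise only READ/READ is allowed."""
    if not overlaps(path_a, path_b):
        return True
    return mode_a == "READ" and mode_b == "READ"

print(compatible("READ", "WRITE", "/catalog/book", "/catalog/book/title"))   # False: ancestor/descendant
print(compatible("WRITE", "WRITE", "/catalog/book", "/catalog/journal"))     # True: disjoint subtrees
print(compatible("READ", "READ", "/catalog", "/catalog/book"))               # True: shared reads
```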
11.
In this paper we discuss the issues relating to the evaluation and reporting of security assurance of runtime systems. We first highlight the shortcomings of current initiatives in analyzing, evaluating, and reporting security assurance information. The paper then proposes a set of metrics to help capture and foster a better understanding of the security posture of a system. Our security assurance metric and its reporting depend on whether or not the user of the system has a security background. The evaluation of these metrics is described through the use of theoretical criteria, a tool implementation, and an application to a case study based on an insurance company network.
12.
13.
Sebastian Obermeier, Stefan Böttcher, Martin Hett, Panos K. Chrysanthis, George Samaras, Distributed and Parallel Databases, 2009, 25(3):165-192
Atomic commit protocols for distributed transactions in mobile ad-hoc networks have to consider message delays and network failures. We consider ad-hoc network scenarios in which participants hold embedded databases and offer services to other participants. Services that are composed of several other services can access and manipulate data in physically different databases. In such a scenario, distributed transaction processing can be used to guarantee atomicity and serializability across all databases. However, with problems like message loss, node failure, and network partitioning, mobile environments make it hard to obtain estimates of the duration of a simple message exchange.

In this article, we focus on the problem of setting up reasonable time-outs when guaranteeing atomicity for transaction processing within mobile ad-hoc networks, and we show the effect of setting "wrong" time-outs on transaction throughput and blocking time. Our solution, which does not depend on time-outs, shows better performance in unreliable networks and remarkably reduces the amount of blocking.
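The sketch below is not the authors' protocol; it is a plain two-phase-commit coordinator with a vote timeout, included only to illustrate why a "wrong" timeout hurts: with the long and variable message delays of a mobile ad-hoc network, a short timeout aborts transactions that every participant was in fact willing to commit.

```python
# Hypothetical illustration (not the authors' protocol): a 2PC coordinator that aborts
# when a participant's vote does not arrive before a timeout. The delay distribution
# below is made up purely to show the sensitivity to the chosen timeout value.

import random

def run_2pc(vote_delays, timeout):
    """Return 'commit' if every vote arrives within the timeout, else 'abort'."""
    return "commit" if max(vote_delays) <= timeout else "abort"

random.seed(1)
trials = [[random.uniform(0.1, 3.0) for _ in range(3)] for _ in range(1000)]  # seconds, 3 participants

for timeout in (0.5, 1.0, 2.0, 3.0):
    committed = sum(run_2pc(delays, timeout) == "commit" for delays in trials)
    print(f"timeout={timeout:.1f}s -> {committed / 10:.1f}% of willing transactions commit")
```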
14.
Advanced transaction processing in multilevel secure file stores
Bertino E., Jajodia S., Mancini L., Ray I., IEEE Transactions on Knowledge and Data Engineering, 1998, 10(1):120-135
The concurrency control requirements for transaction processing in a multilevel secure file system are different from those in conventional transaction processing systems. In particular, transactions at different security levels must be coordinated while avoiding both potential timing covert channels and the starvation of transactions at higher security levels. Suppose a transaction at a lower security level attempts to write a data item that is being read by a transaction at a higher security level. On the one hand, a timing covert channel arises if the transaction at the lower security level is either delayed or aborted by the scheduler. On the other hand, the transaction at the higher security level may be subjected to indefinite delay if it is forced to abort repeatedly. This paper extends the classical two-phase locking mechanism to multilevel secure file systems. The scheme presented here prevents potential timing covert channels and avoids the abort of higher-level transactions while nonetheless guaranteeing serializability. The programmer is provided with a powerful set of linguistic constructs that supports exception handling, partial rollback, and forward recovery. Proper use of these constructs can prevent indefinite delay in the completion of a higher-level transaction and allows the programmer to trade off starvation against transaction isolation.
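A hypothetical sketch of the conflict rule discussed above, not the paper's full scheme: the low-level writer is never delayed or aborted because of a high-level reader; instead the high reader's lock is broken and that reader must recover. The paper's linguistic constructs for avoiding starvation of the higher-level transaction are not modelled.

```python
# Hypothetical sketch of the covert-channel-free conflict rule (not the paper's scheme):
# a low-level writer is never blocked by a high-level reader; the high reader's lock
# is broken instead, and the reader is expected to recover (re-read or roll back).

LOW, HIGH = 0, 1

class LockTable:
    def __init__(self):
        self.read_locks = {}   # item -> list of (txn_id, level)

    def read(self, txn_id, level, item):
        self.read_locks.setdefault(item, []).append((txn_id, level))

    def write(self, txn_id, level, item):
        broken = []
        for reader_id, reader_level in self.read_locks.get(item, []):
            if reader_level > level:
                broken.append(reader_id)      # break the high reader's lock, never block the low writer
            elif reader_id != txn_id:
                return ("wait", [])           # ordinary 2PL conflict at the same or a lower level
        self.read_locks[item] = [r for r in self.read_locks.get(item, []) if r[0] not in broken]
        return ("granted", broken)

locks = LockTable()
locks.read("T_high", HIGH, "x")
print(locks.write("T_low", LOW, "x"))   # ('granted', ['T_high']): writer proceeds, high reader notified
```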
15.
H. S. M. Kruijer, Software, 1982, 12(5):445-454
This paper presents a medium-sized operating system written in Concurrent Pascal, thereby describing further experience with this language and giving further indications of its scope. The operating system was developed to support an application in the area that is usually termed ‘commercial’ or ‘administrative’. Both the functional capabilities and the structure of the operating system are described, with emphasis on its facilities for data file management, and its size and performance are given. A secondary theme of the paper is the relationship of the operating system's qualities to the properties and facilities of the programming language Concurrent Pascal used for its development.
16.
There is an ever-increasing demand for more complex transactions and higher throughputs in transaction processing systems, leading to higher degrees of transaction concurrency and, hence, higher data contention. The conventional two-phase locking (2PL) concurrency control (CC) method may, therefore, restrict system throughput to levels inconsistent with the available processing capacity. This is especially a concern in shared-nothing or data-partitioned systems due to the extra latencies for internode communication and a reliable commit protocol. Optimistic CC (OCC) is a possible solution, but currently proposed methods have the disadvantage of repeated transaction restarts. We present a distributed OCC method followed by locking, such that locking is an integral part of distributed validation and two-phase commit. This method ensures at most one re-execution if validation of the optimistic phase fails. Deadlocks, which are possible with 2PL, are prevented by preclaiming locks for the second execution phase; this is done in the same order at all nodes. We outline implementation details and compare the performance of the new OCC method with distributed 2PL through a detailed simulation that incorporates queueing effects at the devices of the computer systems, buffer management, concurrency control, and commit processing. It is shown that for higher data contention levels, the hybrid OCC method allows a much higher maximum transaction throughput than distributed 2PL in systems with high processing capacities. In addition to the comparison of CC methods, the simulation is used to study the effect of varying the number of computer systems with a fixed total processing capacity and the effect of locality of access in each case. We also describe several interesting variants of the proposed OCC method, including methods for handling access variance, i.e., when rerunning a transaction results in accesses to a different set of objects.
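A single-node sketch of the "optimistic first, lock on retry" idea, under the assumption that validation simply re-checks read versions; the distributed validation and its integration with two-phase commit are not modelled here.

```python
# Hypothetical single-node sketch: run the transaction optimistically, validate read
# versions at the end, and if validation fails, preclaim locks in a fixed global order
# and re-execute exactly once. Not the paper's distributed protocol.

import threading

versions = {"a": 0, "b": 0}
values   = {"a": 10, "b": 20}
locks    = {item: threading.Lock() for item in versions}

def run_once(read_set):
    snapshot = {item: versions[item] for item in read_set}
    result = sum(values[item] for item in read_set)              # the transaction's work
    valid = all(versions[item] == snapshot[item] for item in read_set)
    return valid, result

def execute(read_set):
    valid, result = run_once(read_set)                           # optimistic phase
    if valid:
        return result
    ordered = sorted(read_set)                                   # fixed lock order prevents deadlock
    for item in ordered:
        locks[item].acquire()
    try:
        return run_once(read_set)[1]                             # second (and last) execution under locks
    finally:
        for item in reversed(ordered):
            locks[item].release()

print(execute({"a", "b"}))   # 30
```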
17.
Kemme B., Pedone F., Alonso G., Schiper A., Wiesmann M., IEEE Transactions on Knowledge and Data Engineering, 2003, 15(4):1018-1032
Atomic broadcast primitives are often proposed as a mechanism to allow fault-tolerant cooperation between sites in a distributed system. Unfortunately, the delay incurred before a message can be delivered makes it difficult to implement high-performance, scalable applications on top of atomic broadcast primitives. Recently, a new approach has been proposed for atomic broadcast which, based on optimistic assumptions about the communication system, reduces the average delay for message delivery to the application. We develop this idea further and show how applications can take even more advantage of the optimistic assumption by overlapping the coordination phase of the atomic broadcast algorithm with the processing of delivered messages. In particular, we present a replicated database architecture that employs the new atomic broadcast primitive in such a way that communication and transaction processing are fully overlapped, providing high performance without relaxing transaction correctness.
18.
Mobile transaction processing is a basic function of mobile computing systems. However, inherent characteristics of mobile computing systems, such as client mobility, frequent network disconnection, and limited resources, restrict the application of traditional transaction processing techniques in mobile systems; adapting traditional transaction processing methods to the requirements of mobile computing is therefore the key to improving the efficiency of mobile transactions. This paper introduces the concept of mobile transactions, analyzes the characteristics of mobile transactions and the basic requirements of mobile transaction processing, and presents key techniques for handling the mobility, frequent disconnection, and data consistency of mobile transactions.
19.
Increasing the parallelism in transaction processing and maintaining data consistency appear to be two conflicting goals in designing distributed database systems (DDBSs). This problem is especially difficult if the DDBS is serving long-lived transactions (LLTs). A special case of LLTs, called sagas, has been introduced to address this problem. A DDBS with sagas provides high parallelism to transactions by allowing sagas to release their locks as early as possible. However, it is also subject to overhead due to the effort needed to restore data consistency in the case of failure. We conduct a series of simulation studies to compare the performance of LLT systems with and without saga implementation in a faulty environment. The studies show that saga systems outperform their nonsaga counterparts under most conditions, including heavy failure cases. We thus propose an analytical queuing model to investigate the performance behavior of saga systems. This analytical model allows us to quantitatively study the performance penalty of a saga implementation due to the failure recovery overhead. Furthermore, the analytical solution can be used by system administrators to fine-tune the performance of a saga system. The analytical model captures the primary aspects of a saga system, namely data locking, resource contention, and failure recovery. Due to the complicated nature of the analytical modeling, we solve the model approximately for various performance metrics using decomposition methods, and validate the accuracy of the analytical results via simulations.
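A minimal sketch of the saga idea referred to above: each sub-transaction commits (and releases its locks) immediately, and if a later step fails, compensating transactions undo the completed steps in reverse order. The step names are illustrative.

```python
# Hypothetical sketch of a saga: a long-lived transaction split into sub-transactions,
# each with a compensating action that restores consistency if a later step fails.

def run_saga(steps):
    """steps: list of (action, compensation) pairs; each is a zero-argument callable."""
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
        return "committed"
    except Exception:
        for compensation in reversed(done):   # restore consistency after the failure
            compensation()
        return "compensated"

def fail():
    raise RuntimeError("simulated node failure")

log = []
saga = [
    (lambda: log.append("reserve seat"), lambda: log.append("cancel seat")),
    (lambda: log.append("charge card"),  lambda: log.append("refund card")),
    (fail,                               lambda: None),
]
print(run_saga(saga))   # 'compensated'
print(log)              # ['reserve seat', 'charge card', 'refund card', 'cancel seat']
```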
20.
Sebastian Obermeier, Stefan Böttcher, Dominik Kleine, Distributed and Parallel Databases, 2009, 26(2-3):319-351
Transaction processing leads to new challenges in mobile ad-hoc networks, which, in comparison to fixed-wired networks, suffer from problems like node disconnection, message loss, and frequently appearing network partitioning. As the atomic commit protocol is the part of transaction processing in which failures can lead to the most serious data blocking, we have developed a robust and failure-tolerant distributed cross-layer atomic commit protocol called CLCP that uses multiple coordinators. To reduce the number of both failures and messages, our protocol makes use of acknowledgement messages for piggybacking information. We have evaluated our protocol in mobile ad-hoc networks using several mobility models (Random Waypoint, Manhattan, and Attraction Point), and compared CLCP with other atomic commit protocols, i.e., 2PC and Paxos Commit, each implemented in three versions: without sending message acknowledgements, with a Relay Routing technique, and with Nearest Forward Progress Routing. A special feature of our simulation environment is the use of the quasi-unit-disc model, which assumes a non-binary message reception probability and captures real-world behavior much better than the classical unit-disc model often used in theory. Using the quasi-unit-disc model, our evaluation shows the following results: CLCP and "2PC without acknowledgement messages" have a significantly lower energy consumption than the other protocols, and CLCP is able to commit significantly more distributed transactions than all the other atomic commit protocols for each of the mobility models.