Similar Documents
20 similar documents found.
1.
In this paper, we present an approach to global transaction management in workflow environments. The transaction mechanism is based on the well-known notion of compensation, but extended to handle arbitrary process structures (allowing cycles in processes) and safepoints (allowing partial compensation of processes). We present a formal specification of the transaction model and transaction management algorithms in set and graph theory, providing clear, unambiguous transaction semantics. The specification maps straightforwardly to a modular architecture, whose implementation is applied first in a testing environment and then in the prototype of a commercial workflow management system. The modular nature of the resulting system allows easy distribution using middleware technology. The path from abstract semantics specification to concrete, real-world implementation of a workflow transaction mechanism is thus covered in a complete and coherent fashion. As such, this paper provides a complete framework for the application of well-founded transactional workflows. Received: 16 November 1999 / Accepted: 29 August 2001 / Published online: 6 November 2001
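To make the compensation idea concrete, here is a minimal sketch (not the paper's algorithm) of safepoint-based partial compensation: steps executed after the most recent safepoint are compensated in reverse order, while everything up to the safepoint stays in effect. The `Step` type and function names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    compensate: Callable[[], None]   # compensating action that semantically undoes the step
    is_safepoint: bool = False       # partial compensation may stop here

def compensate_to_safepoint(executed: List[Step]) -> List[Step]:
    """Undo completed steps in reverse order, stopping at the last safepoint."""
    remaining = list(executed)
    while remaining:
        step = remaining[-1]
        if step.is_safepoint:
            break                    # state up to the safepoint is preserved
        step.compensate()            # run the compensating action
        remaining.pop()
    return remaining                 # steps that remain in effect
```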

2.
The performance of electronic commerce systems has a major impact on their acceptability to users. Different users also demand different levels of performance from the system; that is, they have different Quality of Service (QoS) requirements. Electronic commerce systems integrate several different types of servers, and each server must contribute to meeting the QoS demands of the users. In this paper we focus on the role, and the performance, of a database server within an electronic commerce system. We examine the characteristics of the workload placed on a database server by an electronic commerce system and, based on this analysis, suggest a range of QoS requirements for the database server. We argue that a database server must be able to dynamically reallocate its resources in order to meet the QoS requirements of different transactions as the workload changes. We describe Quartermaster, a system to support dynamic goal-oriented resource management in database management systems, and discuss how it can be used to help meet the QoS requirements of the electronic commerce database server. We provide an example of the use of Quartermaster that illustrates how dynamic reallocation of memory resources can be used to meet the QoS requirements of a set of transactions similar to those found in an electronic commerce workload. We briefly describe the memory reallocation algorithms used by Quartermaster and present experiments showing the impact of the reallocations on the performance of the transactions. Published online: 22 August 2001
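As a rough illustration of goal-oriented memory reallocation (a sketch of the general idea, not Quartermaster's actual algorithm), the loop below shifts buffer-pool pages from a transaction class that is beating its response-time goal to one that is missing it; all names and the fixed 10% step are assumptions.

```python
def reallocate(pools: dict, observed_ms: dict, goal_ms: dict, step: float = 0.1):
    """pools: class -> buffer pages; observed_ms/goal_ms: class -> response time."""
    # A performance index above 1 means the class is missing its QoS goal.
    index = {c: observed_ms[c] / goal_ms[c] for c in pools}
    worst = max(index, key=index.get)    # neediest class
    best = min(index, key=index.get)     # class with the most headroom
    if index[worst] > 1.0 >= index[best]:
        moved = int(pools[best] * step)  # move a fixed fraction of pages
        pools[best] -= moved
        pools[worst] += moved
    return pools
```

Invoked periodically as the workload changes, such a controller moves toward an allocation in which no class misses its goal while another has slack.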

3.
The class of transaction scheduling mechanisms in which the transaction serialization order can be determined by controlling the transactions' commitment order is defined. This class of transaction management mechanisms is important because it simplifies transaction management in a multidatabase system environment. The notion of analogous execution and serialization orders of transactions is defined, and the concepts of strongly recoverable and rigorous execution schedules are introduced. It is then proven that rigorous schedulers always produce analogous execution and serialization orders. It is shown that systems using rigorous scheduling can be naturally incorporated into hierarchical transaction management mechanisms. Finally, it is proven that several previously proposed multidatabase transaction management mechanisms guarantee global serializability only if all participating database systems produce rigorous schedules.
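The following deliberately crude sketch shows one way commitment order can pin down serialization order: each transaction takes a ticket that fixes its serialization position, and commits are released strictly in ticket order. Class and method names are illustrative, not from the paper.

```python
import threading

class CommitOrderGate:
    def __init__(self):
        self._cond = threading.Condition()
        self._next_ticket = 0    # next serialization position to hand out
        self._next_commit = 0    # ticket currently allowed to commit

    def register(self) -> int:
        with self._cond:
            ticket = self._next_ticket
            self._next_ticket += 1
            return ticket

    def commit(self, ticket: int) -> None:
        with self._cond:
            while ticket != self._next_commit:
                self._cond.wait()       # block until all predecessors commit
            self._next_commit += 1      # this transaction commits now
            self._cond.notify_all()
```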

4.
Client-server object-oriented database management systems differ significantly from traditional centralized systems in terms of their architecture and the applications they target. In this paper, we present the client-server architecture of the EOS storage manager and describe the concurrency control and recovery mechanisms it employs. EOS offers a semi-optimistic locking scheme based on the multi-granularity two-version two-phase locking protocol. Under this scheme, multiple concurrent readers are allowed to access a data item while it is being updated by a single writer. Recovery is based on write-ahead, redo-only logging. Log records are generated at the clients and shipped to the server during normal execution and at transaction commit. Transaction rollback is fast because there are no updates to be undone, and recovery from system crashes requires only one scan of the log to install the changes made by transactions that committed before the crash. We also present a preliminary performance evaluation of the implementation of these mechanisms. Edited by R. King. Received: July 1993 / Accepted: May 1996
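A toy sketch of the two-version idea, simplified to a single data item (real 2V2PL additionally certifies the writer against active readers at commit): readers always see the last committed version while at most one writer prepares a new one.

```python
import threading

class TwoVersionItem:
    def __init__(self, value):
        self._committed = value
        self._pending = None
        self._writer = threading.Lock()   # at most one writer at a time

    def read(self):
        return self._committed            # readers never block on the writer

    def write(self, value):
        self._writer.acquire()            # blocks if another writer is active
        self._pending = value

    def commit(self):
        self._committed = self._pending   # install the new version
        self._pending = None
        self._writer.release()
```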

5.
Secure buffering in firm real-time database systems
Many real-time database applications arise in electronic financial services, safety-critical installations and military systems where enforcing security is crucial to the success of the enterprise. We investigate here the performance implications, in terms of killed transactions, of guaranteeing multi-level secrecy in a real-time database system supporting applications with firm deadlines. In particular, we focus on the buffer management aspects of this issue. Our main contributions are the following. First, we identify the importance and difficulties of providing secure buffer management in the real-time database environment. Second, we present SABRE, a novel buffer management algorithm that provides covert-channel-free security. SABRE employs a fully dynamic one-copy allocation policy for efficient usage of buffer resources. It also incorporates several optimizations for reducing the overall number of killed transactions and for decreasing the unfairness in the distribution of killed transactions across security levels. Third, using a detailed simulation model, the real-time performance of SABRE is evaluated against insecure conventional and real-time buffer management policies for a variety of security-classified transaction workloads and system configurations. Our experiments show that SABRE provides security with only a modest drop in real-time performance. Finally, we evaluate SABRE's performance when augmented with the GUARD adaptive admission control policy. Our experiments show that this combination provides close to ideal fairness for real-time applications that can tolerate covert-channel bandwidths of up to one bit per second (a limit specified in military standards). Received: March 1, 1999 / Accepted: October 1, 1999

6.
To enforce global serializability in a multidatabase environment, the multidatabase transaction manager must take into account the indirect (transitive) conflicts between multidatabase transactions caused by local transactions. Such conflicts are difficult to resolve because the behavior, or even the existence, of local transactions is not known to the multidatabase system. To overcome these difficulties, we propose to incorporate additional data manipulation operations in the subtransactions of each multidatabase transaction. We show that if these operations create direct conflicts between subtransactions at each participating local database system, indirect conflicts can be resolved even if the multidatabase system is not aware of their existence. Based on this approach, we introduce optimistic and conservative multidatabase transaction management methods that require the local database systems to ensure only local serializability. The proposed methods do not violate the autonomy of the local database systems and guarantee global serializability by preventing multidatabase transactions from being serialized in different ways at the participating database systems. Refinements of these methods are also proposed for multidatabase environments where the participating database systems allow schedules that are cascadeless or where transactions have analogous execution and serialization orders. In particular, we show that forced local conflicts can be eliminated in rigorous local systems, that local cascadelessness simplifies the design of a global scheduler, and that local strictness offers no significant advantages over cascadelessness.
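The classic concrete instance of such forced conflicts is a ticket: each subtransaction reads and increments a designated data item at its local site, creating a direct conflict with every other global transaction executing there. A hedged sketch in DB-API style (the table name and placeholder syntax are assumptions):

```python
def take_ticket(conn) -> int:
    """Run inside a subtransaction at one local DBMS; conn is a DB-API connection."""
    cur = conn.cursor()
    cur.execute("SELECT value FROM ticket WHERE id = 1")
    value = cur.fetchone()[0]
    # The update forces a direct write conflict at this site.
    cur.execute("UPDATE ticket SET value = %s WHERE id = 1", (value + 1,))
    return value   # ticket values expose the local serialization order
```

An optimistic global scheduler validates after the fact that ticket values are ordered consistently across sites; a conservative one makes subtransactions take tickets in a prescribed order.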

7.
A multidatabase system is an interconnected collection of autonomous databases, each managed by an autonomous database management system (DBMS). When integrating multiple DBMSs, the key issue is the autonomy of the underlying participants. Much research has been undertaken in the past five years aimed at describing and building an integrated multidatabase system, but to date the term autonomy has only been defined intuitively. This article provides a rigorous definition of autonomy tailored to the multidatabase environment specifically, but applicable to any system environment that involves the collaboration of autonomous participants. The major contribution of this article is a technique that measures autonomy along multiple dimensions, so that the amount of autonomy violated by a particular system design can be quantified as a single numeric value. This has a twofold implication. First, the technique forces researchers to consider autonomy from several different aspects that may not be the central focus of their research, but must be considered because assumptions made regarding one aspect of a system may have implications in other areas. Second, the value can be used as a measure for direct comparison among different systems or proposals. Finally, the article demonstrates the quantification technique's applicability by applying it to several recent multidatabase research efforts.
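As a purely hypothetical illustration of collapsing multi-dimensional autonomy violations into one number (the dimension names, scores, and weights below are examples, not the article's technique):

```python
# Violation of each autonomy dimension scored in [0, 1]; 0 = fully preserved.
violations = {"design": 0.0, "execution": 0.3, "communication": 0.5, "association": 0.1}
weights    = {"design": 0.4, "execution": 0.3, "communication": 0.2, "association": 0.1}

autonomy_violated = sum(weights[d] * violations[d] for d in violations)
print(f"autonomy violated: {autonomy_violated:.2f}")   # prints 0.20 for this example
```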

8.
Overview of multidatabase transaction management
A multidatabase system (MDBS) is a facility that allows users access to data located in multiple autonomous database management systems (DBMSs). In such a system, global transactions are executed under the control of the MDBS. Independently, local transactions are executed under the control of the local DBMSs. Each local DBMS integrated by the MDBS may employ a different transaction management scheme. In addition, each local DBMS has complete control over all transactions (global and local) executing at its site, including the ability to abort at any point any of the transactions executing at its site. Typically, no design or internal DBMS structure changes are allowed in order to accommodate the MDBS. Furthermore, the local DBMSs may not be aware of each other and, as a consequence, cannot coordinate their actions. Thus, traditional techniques for ensuring transaction atomicity and consistency in homogeneous distributed database systems may not be appropriate for an MDBS environment. The objective of this article is to provide a brief review of the most current work in the area of multidatabase transaction management. We first define the problem and argue that multidatabase research will become increasingly important in the coming years. We then outline basic research issues in multidatabase transaction management and review recent results in the area. We conclude with a discussion of open problems and practical implications of this research.

9.
Building knowledge base management systems
Advanced applications in fields such as CAD, software engineering, real-time process control, corporate repositories and digital libraries require the construction, efficient access and management of large, shared knowledge bases. Such knowledge bases cannot be built using existing tools such as expert system shells, because these do not scale up, nor can they be built with existing database technology, because it does not support the rich representational structure and inference mechanisms required for knowledge-based systems. This paper proposes a generic architecture for a knowledge base management system intended for such applications. The architecture assumes an object-oriented knowledge representation language with an assertional sublanguage used to express constraints and rules. It also provides for general-purpose deductive inference and special-purpose temporal reasoning. Results reported in the paper address several knowledge base management issues. For storage management, a new method is proposed for generating a logical schema for a given knowledge base. Query processing algorithms are offered for semantic and physical query optimization, along with an enhanced cost model for query cost estimation. On concurrency control, the paper describes a novel concurrency control policy that takes advantage of knowledge base structure and is shown to outperform two-phase locking for highly structured knowledge bases and update-intensive transactions. Finally, algorithms for compilation and efficient processing of constraints and rules during knowledge base operations are described. The paper presents original results, including novel data structures and algorithms, as well as preliminary performance evaluation data. Based on these results, we conclude that knowledge base management systems which can accommodate large knowledge bases are feasible. Edited by Gunter Schlageter and H.-J. Schek. Received: May 19, 1994 / Revised: May 26, 1995 / Accepted: September 18, 1995

10.
Businesses today are searching for information solutions that enable them to compete in the global marketplace. To minimize risk, these solutions must build on existing investments, permit the best technology to be applied to the problem, and be manageable. Object technology, with its promise of improved productivity and quality in application development, delivers these characteristics, but to date its deployment in commercial business applications has been limited. One possible reason is the absence of the transaction paradigm, widely used in commercial environments and essential for reliable business applications. For object technology to be a serious contender in the construction of these solutions, three things are required:
– Technology for transactional objects. In December 1994, the Object Management Group adopted a specification for an object transaction service (OTS). The OTS specifies mechanisms for defining and manipulating transactions. Though derived from the X/Open distributed transaction processing model, OTS contains additional enhancements specifically designed for the object environment. Similar technology from Microsoft appeared at the end of 1995.
– Methodologies for building new business systems from existing parts. Business process re-engineering is forcing businesses to improve the operations that bring their products to market. Workflow computing, used in conjunction with “object wrappers”, provides tools both to define and to track the execution of business processes that leverage existing applications and infrastructure.
– An execution environment that satisfies the operational needs of the business. Transaction processing (TP) monitor technology, though widely accepted for mainframe transaction processing, has yet to enjoy similar success in the client/server marketplace. Instead the database vendors, with their extensive tool suites, dominate. As object brokers mature, they will require many of the functions of today's TP monitors. Marrying these two technologies can produce a robust execution environment which offers a superior alternative for building and deploying client/server applications.
Edited by Andreas Reuter. Received: February 1995 / Revised: August 1995 / Accepted: May 1996

11.
Abstract. This paper describes the design of a reconfigurable architecture for implementing image processing algorithms. The architecture is a pipeline of small identical processing elements that contain a programmable logic device (FPGA) and dual-port memories. The processing system has been adapted to accelerate the computation of differential algorithms. Log-polar vision selectively reduces the amount of data to be processed and simplifies several vision algorithms, making their implementation possible with few hardware resources. The architecture has been designed with implementation in mind and has been employed in an autonomous platform, which has power consumption, size, and weight restrictions. Two different vision algorithms have been implemented on the reconfigurable pipeline, and some experimental results are shown. Received: 30 March 2001 / Accepted: 11 February 2002
This work has been supported by the Ministerio de Ciencia y Tecnología and FEDER under project TIC2001-3546. Correspondence to: J.A. Boluda
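A brief sketch of the log-polar sampling the paper exploits (ring/sector counts and nearest-neighbour sampling are illustrative choices): pixel density falls off exponentially with eccentricity, which is what shrinks the data volume.

```python
import numpy as np

def log_polar_sample(image: np.ndarray, rings: int = 32, sectors: int = 64) -> np.ndarray:
    h, w = image.shape                        # grayscale image assumed
    cx, cy = w / 2.0, h / 2.0
    r_max = min(cx, cy)
    out = np.zeros((rings, sectors), dtype=image.dtype)
    for u in range(rings):
        r = np.exp(np.log(r_max) * (u + 1) / rings)   # radius grows exponentially
        for v in range(sectors):
            theta = 2.0 * np.pi * v / sectors
            x, y = int(cx + r * np.cos(theta)), int(cy + r * np.sin(theta))
            if 0 <= x < w and 0 <= y < h:
                out[u, v] = image[y, x]       # nearest-neighbour sample
    return out
```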

12.
Summary. Different replication algorithms provide different solutions to the same basic problem. However, there is no precise specification of the problem itself, only of particular classes of solutions, such as active replication and primary-backup. Having a precise specification of the problem would help us better understand the space of possible solutions and possibly devise new ones. We present a formal definition of the problem solved by replication in the form of a correctness criterion called x-ability (exactly-once ability). An x-able service has obligations to its environment and its clients. It must update its environment under exactly-once semantics. Furthermore, it must provide idempotent, non-blocking request processing and deliver consistent results to its clients. We illustrate the value of x-ability through a novel replication protocol that handles non-determinism and external side-effects. The replication protocol is asynchronous in the sense that it may vary, at run-time and according to the asynchrony of the system, between some form of primary-backup and some form of active replication. Received: December 2000 / Accepted: September 2001
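A minimal sketch of the idempotent, exactly-once request processing that x-ability demands (the dictionary stands in for stable storage; names are assumptions): a retried request replays the recorded reply instead of re-executing the side effect.

```python
results = {}   # request id -> recorded reply (stable storage in a real service)

def handle(request_id: str, operation):
    if request_id in results:
        return results[request_id]   # duplicate request: replay, don't re-execute
    reply = operation()              # perform the external side effect once
    results[request_id] = reply      # record the reply before acknowledging
    return reply
```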

13.
Due to their large bandwidth demand and synchronization requirements, multimedia applications generally require very large buffers, which deters potential customers from using multimedia services. We recognize this problem and propose a hierarchical architecture to reduce the buffer size. The architecture can be applied to both 1- and - applications. We establish the architecture by first determining neighbor sets and then applying a grouping algorithm and a renegotiation process. The architecture can also meet the synchronization requirements of multimedia applications. We evaluate its performance through simulations and compare it with that of a direct connection architecture. The results show that the hierarchical architecture reduces the buffer size significantly without serious penalty to the total bandwidth and without introducing extra hot spots.

14.
Much research effort has focused on global serializability, global atomicity, and global deadlocks in multidatabase systems. Surprisingly, however, very few transaction processing models exist that ensure global serializability, global atomicity, and freedom from global deadlocks in a uniform manner. In this paper, we examine previous transaction processing models and propose a new model that generates globally serializable and deadlock-free schedules in failure-prone multidatabase systems. The new model adopts rigid conflict serializability as a correctness criterion for global serializability, and combines an emulated 2PC, criteria for global commitment, and an abort-based multidatabase recovery scheme to preserve global serializability in failure-prone multidatabase systems. In addition, a deadlock-free policy is suggested in which rigid conflict serializability is enforced when each subtransaction, including redo transactions, begins its execution. To support the new model in practice, Rigid Ticket Ordering (RTO) methods are designed. The proposed transaction processing model has the following improvements: (a) it resolves the abnormal direct conflicts identified in this paper, (b) it imposes no restrictions on the execution of local transactions, and (c) it relaxes the restrictions on the execution of global transactions.

15.
The task of checking whether a computer system satisfies its timing specifications is extremely important. These systems are often used in critical applications where failure to meet a deadline can have serious or even fatal consequences. This paper presents an efficient method for performing this verification task. In the proposed method, a real-time system is modeled by a state-transition graph represented by binary decision diagrams. Efficient symbolic algorithms exhaustively explore the state space to determine whether the system satisfies a given specification. In addition, our approach computes quantitative timing information such as minimum and maximum time delays between given events. These results provide insight into the behavior of the system and assist in determining its temporal correctness. The technique evaluates how well the system works or how seriously it fails, as opposed to only whether it works or not. Based on these techniques, a verification tool called Verus has been constructed. It has been used in the verification of several industrial real-time systems, such as the robotics system described below. This demonstrates that the proposed method is efficient enough to be used in real-world designs. The examples verified show how the information produced can assist in designing more efficient and reliable real-time systems.
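The quantitative core of the approach can be sketched with explicit state sets standing in for BDDs (a toy version; Verus operates symbolically): breadth-first image computation yields the minimum number of steps from an initial condition to a target condition.

```python
def min_delay(initial: set, transitions: dict, target) -> int:
    """transitions: state -> iterable of successors; target: state predicate."""
    frontier, seen, steps = set(initial), set(initial), 0
    while frontier:
        if any(target(s) for s in frontier):
            return steps                     # first level intersecting the target
        frontier = {t for s in frontier for t in transitions.get(s, ())} - seen
        seen |= frontier                     # avoid revisiting explored states
        steps += 1
    raise ValueError("target unreachable")
```

A maximum-delay computation follows a similar fixpoint style, tracking how long executions can avoid the target.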

16.
Failure detection and consensus in the crash-recovery model
Summary. We study the problems of failure detection and consensus in asynchronous systems in which processes may crash and recover, and links may lose messages. We first propose new failure detectors that are particularly suitable to the crash-recovery model. We next determine under what conditions stable storage is necessary to solve consensus in this model. Using the new failure detectors, we give two consensus algorithms that match these conditions: one requires stable storage and the other does not. Both algorithms tolerate link failures and are particularly efficient in the runs that are most likely in practice – those with no failures or failure detector mistakes. In such runs, consensus is achieved within 3δ time and with 4n messages, where δ is the maximum message delay and n is the number of processes in the system. Received: May 1998 / Accepted: November 1999
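A toy failure detector suited to crash-recovery (timings and names are assumptions, not the paper's construction): processes gossip an epoch number bumped on every recovery, letting peers distinguish a slow process from one that crashed and recovered.

```python
import time

class CrashRecoveryDetector:
    def __init__(self, timeout_s: float = 2.0):
        self.timeout = timeout_s
        self.last_seen = {}   # peer -> (epoch, arrival time of last heartbeat)

    def on_heartbeat(self, peer: str, epoch: int) -> None:
        self.last_seen[peer] = (epoch, time.monotonic())

    def trusted(self, peer: str) -> bool:
        """A peer is trusted while its heartbeats keep arriving in time."""
        if peer not in self.last_seen:
            return False
        _, arrived = self.last_seen[peer]
        return time.monotonic() - arrived < self.timeout

    def epoch(self, peer: str) -> int:
        """An increased epoch reveals that the peer crashed and recovered."""
        return self.last_seen.get(peer, (0, 0.0))[0]
```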

17.
Image-processing systems, each consisting of massively parallel photodetectors and digital processing elements on a monolithic circuit, are currently being developed by several researchers. Some early-vision-like processing algorithms are installed in these vision systems. However, they are not sufficient for applications, because their output takes the form of pattern information; in order to respond to the input, feature values must be extracted from the pattern. In the present paper, we propose a robust method for extracting feature values associated with images in a massively parallel vision system.
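One concrete example of such feature values, offered here only as an illustration of the idea: global image moments reduce a binary activity pattern to a handful of scalars (area and centroid) and map naturally onto per-pixel parallel hardware.

```python
import numpy as np

def moment_features(pattern: np.ndarray):
    """pattern: 2-D binary array produced by early-vision processing."""
    ys, xs = np.nonzero(pattern)
    area = xs.size                       # zeroth-order moment
    if area == 0:
        return 0, None, None             # empty pattern: no centroid
    return area, xs.mean(), ys.mean()    # first-order moments give the centroid
```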

18.
Information systems are the glue between people and computers. Both the social and business environments are in a continual, some might say chaotic, state of change while computer hardware continues to double its performance about every 18 months. This presents a major challenge for information system developers. The term user-friendly is an old one, but one which has come to take on a multitude of meanings. However, in today’s context we might well take a user-friendly system to be one where the technology fits the user’s cognitive models of the activity in hand. This article looks at the relationship between information systems and the changing demands of their users as the underlying theme for the current issue of Cognition, Technology and Work. People, both as individuals and organisations, change. The functionalist viewpoint, which attempts to freeze and inhibit such change, has failed systems developers on numerous occasions. Responding to, and building on, change in the social environment is still a significant research issue for information systems specialists who need to be able to create living information systems.

19.
This paper attempts a comprehensive study of deadlock detection in distributed database systems. First, the two predominant deadlock models in these systems and the four different distributed deadlock detection approaches are discussed. Afterwards, a new deadlock detection algorithm is presented. The algorithm is based on dynamically creating deadlock detection agents (DDAs), each being responsible for detecting deadlocks in one connected component of the global wait-for-graph (WFG). The DDA scheme is a “self-tuning” system: after an initial warm-up phase, dedicated DDAs are formed for “centers of locality”, i.e., parts of the system where many conflicts occur. A dynamic shift in the locality of the distributed system is responded to by automatically creating new DDAs while obsolete ones terminate. Using a simulation model, we also compare the most competitive representative of each class of algorithms suitable for distributed database systems, and point out their relative strengths and weaknesses. The extensive experiments we carried out indicate that the newly proposed deadlock detection algorithm outperforms the other algorithms in the vast majority of configurations and workloads and, in contrast to all other algorithms, is very robust with respect to differing load and access profiles. Received: December 4, 1997 / Accepted: February 2, 1999
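The core check each DDA performs on its component of the WFG is cycle detection; a compact sketch (the graph encoding and names are illustrative):

```python
def find_cycle(wfg: dict):
    """wfg: transaction -> transactions it waits for. Returns a cycle or None."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wfg}

    def dfs(t, path):
        color[t] = GREY
        path.append(t)
        for u in wfg.get(t, ()):
            if color.get(u, WHITE) == GREY:        # back edge: deadlock found
                return path[path.index(u):] + [u]
            if color.get(u, WHITE) == WHITE:
                cycle = dfs(u, path)
                if cycle:
                    return cycle
        color[t] = BLACK
        path.pop()
        return None

    for t in list(wfg):
        if color[t] == WHITE:
            cycle = dfs(t, [])
            if cycle:
                return cycle
    return None
```

Resolution then breaks the cycle by aborting one transaction on it.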

20.
Effective timestamping in databases
Many existing database applications place various timestamps on their data, rendering temporal values such as dates and times prevalent in database tables. During the past two decades, several dozen temporal data models have appeared, all with timestamps as integral components. The models have used timestamps for encoding two specific temporal aspects of database facts, namely transaction time, when the facts are current in the database, and valid time, when the facts are true in the modeled reality. However, with few exceptions, the assignment of timestamp values has been considered only in the context of individual modification statements. This paper takes the next logical step: it considers the use of timestamping for capturing transaction and valid time in the context of transactions. The paper initially identifies and analyzes several problems with straightforward timestamping, then proceeds to propose a variety of techniques aimed at solving these problems. Timestamping the results of a transaction with the commit time of the transaction is a promising approach, and the paper studies how this timestamping may be done using a spectrum of techniques. While many database facts are valid until now, the current time, this value is absent from the existing temporal types; techniques that address this problem using different substitute values are presented. Using a stratum architecture, the performance of the different proposed techniques is studied. Although querying and modifying time-varying data is accompanied by a number of subtle problems, we present a comprehensive approach that provides application programmers with simple, consistent, and efficient support for modifying bitemporal databases in the context of user transactions. Received: March 11, 1998 / Accepted: July 27, 1999
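A small sketch of commit-time timestamping (the schema and names are assumptions, not the paper's stratum implementation): rows touched by a transaction carry a placeholder that is resolved to the single commit time at commit, and valid-until-now is stored as a distinguished substitute value.

```python
import datetime

UNTIL_CHANGED = datetime.datetime.max    # substitute value standing in for "now"

class Transaction:
    def __init__(self):
        self.pending = []                # rows awaiting the commit timestamp

    def insert(self, table: list, row: dict) -> None:
        row["tt_start"] = None           # transaction time unknown until commit
        row["tt_end"] = UNTIL_CHANGED    # fact is current in the database
        row.setdefault("vt_end", UNTIL_CHANGED)   # valid "until now" by default
        table.append(row)
        self.pending.append(row)

    def commit(self) -> None:
        commit_time = datetime.datetime.now()    # one timestamp for all of the transaction's rows
        for row in self.pending:
            row["tt_start"] = commit_time
        self.pending.clear()
```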
