Similar Documents
20 similar documents found (search time: 31 ms)
1.
ABSTRACT

To maintain database consistency, recovery algorithms have traditionally depended on a complete rollback to a consistent checkpoint. The problem of recovering from committed malicious transactions can instead be solved by determining the dependencies between the transactions in the window of vulnerability. Since the transaction log may grow very large, recovery becomes a complex and time-consuming process. In this paper, we propose an approach that incorporates application-specific information to determine transactional dependencies. The approach is applied to column-based transaction dependency to obtain better performance. The system is implemented at the application layer, where SQL queries are generated. In the recovery phase, we consider only the affected and malicious transactions for rollback and skip the good transactions.
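
A minimal sketch of the selective-rollback idea, assuming transactions are logged in commit order with column-level read and write sets; the log format and names are illustrative, not the paper's implementation:

```python
# Given a log of committed transactions with per-column read/write sets,
# roll back only the malicious transactions and those (transitively)
# affected by them, skipping independent good transactions.

def affected_transactions(log, malicious):
    """log: ordered list of (tid, reads, writes) with column-level sets."""
    dirty_columns = set()          # columns written by bad transactions
    to_undo = set(malicious)
    for tid, reads, writes in log:
        if tid in malicious or reads & dirty_columns:
            to_undo.add(tid)
            dirty_columns |= writes   # its writes are now suspect too
    return to_undo

log = [
    ("T1", {"acct.balance"}, {"acct.balance"}),
    ("T2", {"acct.owner"},   {"acct.owner"}),    # independent: skipped
    ("T3", {"acct.balance"}, {"audit.total"}),   # read from T1: affected
]
print(affected_transactions(log, malicious={"T1"}))  # {'T1', 'T3'}
```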

2.
Database applications often impose temporal dependencies between transactions that must be satisfied to preserve data consistency. The extant correctness criteria used to schedule the execution of concurrent transactions are either time-independent or use strict, difficult-to-satisfy real-time constraints. On one end of the spectrum, serializability completely ignores time. On the other end, deadline-scheduling approaches consider the outcome of each transaction execution correct only if the transaction meets its real-time deadline. In this article, we explore new correctness criteria and scheduling methods that capture temporal transaction dependencies and belong to the broad area between these two extreme approaches. We introduce the concepts of succession dependency and chronological dependency and define correctness criteria under which temporal dependencies between transactions are preserved even if the dependent transactions execute concurrently. We also propose a chronological scheduler that can guarantee that transaction executions satisfy their chronological constraints. The advantages of chronological scheduling over traditional scheduling methods, as well as the main issues in the implementation and performance of the proposed scheduler, are discussed.
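
A hedged sketch of the core chronological-dependency check: a transaction may commit only after every transaction it chronologically depends on has committed, even though the two may execute concurrently. The structure below is illustrative, not the article's scheduler:

```python
class ChronologicalScheduler:
    def __init__(self, depends_on):
        self.depends_on = depends_on   # tid -> set of tids it must follow
        self.committed = set()

    def can_commit(self, tid):
        # all chronological predecessors must already be committed
        return self.depends_on.get(tid, set()) <= self.committed

    def commit(self, tid):
        if not self.can_commit(tid):
            raise RuntimeError(f"{tid} must wait for its predecessors")
        self.committed.add(tid)

sched = ChronologicalScheduler({"T2": {"T1"}})
print(sched.can_commit("T2"))  # False: T1 has not committed yet
sched.commit("T1")
sched.commit("T2")             # now allowed
```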

3.
Context: Dependency management often suffers from labor intensity and complexity in creating and maintaining dependency relations in practice. This is even more critical in distributed development, in which developers are geographically distributed and a wide variety of tools is used. In those settings, different interpretations of software requirements or usage of different terminologies make it challenging to predict change impact.
Objective: (a) to describe a method facilitating change management in geographically distributed software engineering by effective discovery and establishment of dependency links using domain models; (b) to evaluate the effectiveness of the proposed method.
Method: A domain model, providing a common reference point, is used to manage development objects and to automatically support dependency discovery. We propose to associate (annotate) development objects with the concepts from the model. These associations are used to compute dependency among development objects, and are stepwise refined into direct dependency links (i.e., enabling product traceability). To evaluate the method, we conducted a laboratory-based randomized experiment on two real cases. Six participants used an implemented prototype and two comparable tools to perform simulated tasks.
Results: In the paper we elaborate on the proposed method, discussing its functional steps. Results from the experiment show that the method can be effectively used to assist in the discovery of dependency links. Users discovered on average fourteen percent more dependency links than by using the comparable tools.
Conclusions: The proposed method advocates the use of domain models throughout the whole development life-cycle and is apt to facilitate multi-site software engineering. The experimental study and results suggest that the method is effective in the discovery of dependencies among development objects.
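
An illustrative sketch of the concept-based discovery step: development objects annotated with domain-model concepts are treated as dependency candidates when their concept sets overlap. The annotations below are assumptions, not the evaluated prototype:

```python
from itertools import combinations

annotations = {
    "REQ-12":  {"Order", "Payment"},
    "design3": {"Payment", "Invoice"},
    "test_7":  {"Invoice"},
}

def candidate_links(annotations):
    # two objects sharing a domain concept are a candidate dependency,
    # to be stepwise refined into a direct dependency link
    for a, b in combinations(annotations, 2):
        shared = annotations[a] & annotations[b]
        if shared:
            yield (a, b, shared)

for link in candidate_links(annotations):
    print(link)   # e.g. ('REQ-12', 'design3', {'Payment'})
```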

4.
An important CORBA service is the Object Transaction Service (OTS) defined by the OMG, which handles distributed transactions. Given the importance of transaction processing, OTS ought to be implemented as part of the ORB to improve the efficiency of transaction-processing applications. With this in mind, the OMG introduced the interposition technique into OTS. On a methodological basis, this article discusses how to use interposition to implement OTS.
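
A conceptual sketch of what interposition buys: a subordinate coordinator registers itself with the superior (remote) coordinator as a single resource and relays two-phase-commit callbacks to its local resources, saving remote calls. This is a plain-Python illustration of the idea, not the CORBA OTS API:

```python
class SubCoordinator:
    def __init__(self, superior):
        self.local_resources = []
        superior.register_resource(self)   # one remote registration

    def register_resource(self, res):      # local resources register cheaply
        self.local_resources.append(res)

    def prepare(self):
        # relay phase 1 locally; vote commit only if all locals do
        return all(r.prepare() for r in self.local_resources)

    def commit(self):
        for r in self.local_resources:     # relay phase 2 locally
            r.commit()
```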

6.
An important function of many cyber-physical systems (CPS) is to monitor the operating environment closely so that the system can adapt to changing situations effectively. One commonly applied technique is to invoke time-constrained periodic application transactions to check the status of the operating environment. That status is represented by the values of physical entities in the environment, which are maintained as real-time data objects in a real-time database. Unfortunately, meeting the deadlines of application transactions and maintaining the quality of real-time data objects conflict with each other, because they compete for the same computational resources. To address this update and application transaction co-scheduling problem, in this paper we propose a fixed-priority co-scheduling algorithm called periodic co-scheduling (PCS). PCS uses periodic update transactions to maintain the temporal validity of real-time data objects. It judiciously decides the priority order among all the update and application transactions so that the constructed schedule satisfies the deadline constraints of all application transactions and, at the same time, maximizes the quality of the real-time data objects to ensure the correct execution of application transactions. The effectiveness of the algorithm is validated through extensive simulation experiments.
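
A minimal sketch of the co-scheduling setting: update transactions keep data objects temporally valid, so a simple baseline ("half-half") choice sets period = deadline = validity/2, after which all transactions are ordered by a fixed priority (shorter deadline first). PCS itself searches the priority order more cleverly; the baseline and task values below are only for illustration:

```python
def half_half(validity):
    return validity / 2, validity / 2          # (period, deadline)

updates = {"temp": 10.0, "pressure": 6.0}      # data validity intervals
apps = {"control_loop": 8.0}                   # application deadlines

tasks = [(name, half_half(v)[1]) for name, v in updates.items()]
tasks += list(apps.items())
# fixed priorities: shorter deadline = higher priority
for prio, (name, deadline) in enumerate(sorted(tasks, key=lambda t: t[1])):
    print(prio, name, deadline)   # 0 pressure 3.0 / 1 temp 5.0 / 2 control_loop 8.0
```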

7.
Hawk is a language-independent runtime system for writing data-parallel programs using partitioned objects. A partitioned object is a multidimensional array of elements that can be partitioned and distributed by the programmer. The Hawk runtime system uses the user-defined partitioning of objects and a runtime mechanism based on Partition Dependency Graphs (PDGs) to increase the granularity of data transfers and consistency checks to a partition. Hawk further optimizes the execution of parallel operations by prefetching data and overlapping communication with computation.

We first present the partitioned object model. Then, we give an overview of Hawk and describe how it uses PDGs to reduce communication overhead and optimize parallel operations. Finally, we discuss the effectiveness of our optimization technique with two applications written on top of Hawk.
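
An illustrative sketch of a Partition Dependency Graph: each partition of a distributed array records which other partitions a parallel operation reads, so whole partitions can be fetched in one transfer and consistency is checked per partition rather than per element. The graph below is hypothetical:

```python
pdg = {
    "A[0]": [],             # locally owned, no remote reads
    "A[1]": ["A[0]"],       # stencil reads its left neighbour
    "A[2]": ["A[1]"],
}

def prefetch_plan(pdg, local):
    """Partitions to fetch in bulk before computing the local ones."""
    needed = {dep for p in local for dep in pdg[p]}
    return needed - set(local)

print(prefetch_plan(pdg, local=["A[1]", "A[2]"]))   # {'A[0]'}
```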

8.
Information Systems, 2002, 27(4): 245-275
Entity-relationship (ER) schemas include cardinality constraints, which restrict the dependencies among entities within a relationship type. Cardinality constraints have a direct impact on application maintenance, since insertions or deletions of entities or relationships might affect related entities. Indeed, maintenance of a system or of a database can be strengthened to enforce consistency with respect to the cardinality constraints in a schema. Yet, once an ER schema is translated into a logical database schema, or translated within a system, the direct correlation between the cardinality constraints and maintenance transactions is lost, since the components of the ER schema might be decomposed among those of the logical database schema or the target system.

In this paper, a full solution to the enforcement of cardinality constraints in EER schemas is given. We extend the enhanced ER (EER) data model with structure-based update methods that are fully defined by the cardinality constraints. The structure methods are provably terminating and cardinality-faithful, i.e., they do not insert new inconsistencies and can only decrease existing ones. A refined approach towards measuring the cardinality consistency of a database is introduced. The contribution of this paper is in the automatic creation of update methods, and in building the formal basis for proving their correctness.
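
A hedged sketch of a structure-based update method derived from a cardinality constraint: an insertion is refused when it would exceed the maximum cardinality, so the method can never add a new inconsistency. The constraint values and names are illustrative:

```python
MAX_CARD = {("Employee", "worksFor"): 1}   # an employee has <= 1 department

relationships = []   # (entity, role, target) triples

def insert_rel(entity, role, target, etype):
    count = sum(1 for e, r, _ in relationships if e == entity and r == role)
    limit = MAX_CARD.get((etype, role))
    if limit is not None and count >= limit:
        raise ValueError(f"{etype}.{role}: max cardinality {limit} reached")
    relationships.append((entity, role, target))

insert_rel("alice", "worksFor", "R&D", "Employee")
# insert_rel("alice", "worksFor", "Sales", "Employee")  # would raise
```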

9.
10.
The security of computers and their networks is of crucial concern in the world today. One mechanism to safeguard information stored in database systems is an intrusion detection system (IDS). The purpose of intrusion detection in database systems is to detect malicious transactions that corrupt data. Recently, researchers have been working on using data mining techniques to detect such malicious transactions. Their approach concentrates on mining data dependencies among data items; transactions not compliant with these data dependencies are identified as malicious. The algorithms these approaches use for designing their data dependency miners have limitations. For instance, they need to determine appropriate settings for minimum support and related constraints experimentally, which does not necessarily lead to strong data dependencies. In this paper we propose a new data mining algorithm, called Optimal Data Access Dependency Rule Mining (ODADRM), for designing a data dependency miner for our database IDS. ODADRM is an extension of the k-optimal rule discovery algorithm, improved to suit the database intrusion detection domain. ODADRM avoids many limitations of previous data dependency miner algorithms. As a result, our approach is able to track normal transactions and detect malicious ones more effectively than existing approaches.
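
A simplified sketch of the detection side of such a data-dependency IDS: mined rules say "a write to y is normally preceded by a read of x in the same transaction", and transactions violating a rule are flagged. The rule set is hypothetical and merely stands in for ODADRM's mined output:

```python
rules = [("salary", "bonus")]   # read 'salary' before writing 'bonus'

def is_malicious(txn, rules):
    """txn: ordered list of ('r'|'w', item) operations."""
    for must_read, written in rules:
        for i, (op, item) in enumerate(txn):
            if op == "w" and item == written:
                prior_reads = {it for o, it in txn[:i] if o == "r"}
                if must_read not in prior_reads:
                    return True   # write without its supporting read
    return False

print(is_malicious([("w", "bonus")], rules))                    # True
print(is_malicious([("r", "salary"), ("w", "bonus")], rules))   # False
```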

11.
Transactional dependencies play an important role in coordinating and executing the subtransactions in advanced transaction processing models, such as nested transactions and workflow transactions. Researchers have formalized the notion of transactional dependencies and have shown how various advanced transaction models can be expressed using different kinds of dependencies. Incorrect specification of dependencies can result in unpredictable behavior of the advanced transaction, which, in turn, can lead to unavailability of resources and information integrity problems. In this work, we focus on how to correctly specify dependencies in an advanced transaction. We enumerate the different kinds of dependencies that may be present in an advanced transaction and classify them into two broad categories: event ordering and event enforcement dependencies. Different event ordering and event enforcement dependencies in an advanced transaction often interact in subtle ways, resulting in conflicts and redundancies. We describe the different types of conflicts that can arise due to the presence of multiple dependencies and show how such conflicts can be detected. An advanced transaction may also contain redundant dependencies, that is, dependencies that can be logically derived from other dependencies. We show how such extraneous dependencies can be eliminated to obtain an equivalent set of dependencies with the same effect as the original set. Our dependency analysis is done in the context of a generalized advanced transaction model that is capable of expressing different kinds of advanced transactions.
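
A minimal sketch of the two analyses named in the abstract, over event-ordering dependencies modelled as a directed graph: a conflict shows up as a cycle, and a dependency is redundant when it is derivable from the remaining ones. The dependency encoding is an illustrative simplification:

```python
def reachable(edges, src, dst):
    seen, stack = set(), [src]
    while stack:
        n = stack.pop()
        for a, b in edges:
            if a == n and b not in seen:
                seen.add(b); stack.append(b)
    return dst in seen

def conflicts(edges):
    # (a, b) conflicts if b can also reach a via the other dependencies
    return [(a, b) for a, b in edges if reachable(edges - {(a, b)}, b, a)]

def redundant(edges):
    # (a, b) is redundant if it is implied transitively by the others
    return [(a, b) for a, b in edges if reachable(edges - {(a, b)}, a, b)]

deps = {("begin_T1", "begin_T2"), ("begin_T2", "commit_T1"),
        ("begin_T1", "commit_T1")}
print(conflicts(deps))   # []
print(redundant(deps))   # [('begin_T1', 'commit_T1')]
```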

12.
13.
To specify databases completely at the conceptual level, conceptual database specification languages should contain a data definition (sub)language (DDL) for specifying data structures (plus constraints), a data retrieval (sub)language (DRL) for specifying queries, and a (declarative) data manipulation (sub)language (DML) for specifying transactions.

Object Role Modeling (ORM) is a powerful method for designing and querying database models at the conceptual level. By means of verbalization, the application is also described in natural language as used by domain experts, for communication and validation purposes. ORM currently comprises a DDL and a DRL (ConQuer). However, the ORM method does not yet contain an expressive DML for specifying transactions at the conceptual level.

In an earlier paper we designed a syntactic extension of the ORM method with a DML for specifying transactions at the conceptual level in a purely declarative way. For all transactions we proposed syntaxes, verbalizations, and diagrams. However, we did not give a formal semantics then.

The purpose of this paper is to add a clear, formal, and purely declarative semantics to the proposed ORM transactions. The paper also formally defines rollbacks and illustrates everything with examples (including a solution to a well-known transaction specification problem). The extension of ORM with an expressive set of completely declaratively specified transactions makes ORM complete as a database specification method at the conceptual level.
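
A tiny sketch of a declaratively specified transaction with rollback: the transaction is given as sets of fact additions and deletions, applied only if the resulting population satisfies the constraints, and otherwise the database is left untouched. ORM's actual semantics is far richer; everything below is illustrative:

```python
def run_transaction(facts, additions, deletions, constraints):
    new = (facts - deletions) | additions
    if all(c(new) for c in constraints):
        return new            # commit
    return facts              # rollback: original state preserved

facts = {("Person", "alice")}
at_most_one_alice = lambda db: sum(1 for f in db if f[1] == "alice") <= 1
print(run_transaction(facts, {("Employee", "alice")}, set(),
                      [at_most_one_alice]))   # rolls back: constraint fails
```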

14.
15.
A simple semantic or object-based data model is considered, which includes objects and object identifiers, classes and class hierarchies, and attributes ranging over atomic values. Transactions are composed from five basic operators manipulating objects. Preservation of functional and acyclic inclusion dependencies by transactions is studied in this context of semantic databases and update transactions. It is shown to be decidable whether a given transaction preserves a given set of functional dependencies, or acyclic inclusion dependencies, or both. The time complexity of testing preservation (with respect to the sizes of transactions and database schemas) is also discussed. It turns out that the problem is co-NP-complete even in the simplest cases, where there is only one nontrivial dependency and transactions consist only of creations and deletions of objects, implying that the problem is at least co-NP-hard in general.
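
For orientation only, a sketch of the obvious per-state test, checking whether one concrete database state still satisfies a functional dependency after a transaction runs; the paper's result is much stronger, deciding preservation over all states:

```python
def satisfies_fd(rows, lhs, rhs):
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False   # same key, different dependent values
    return True

rows = [{"id": 1, "dept": "A"}, {"id": 2, "dept": "B"}]
txn = lambda rs: rs + [{"id": 1, "dept": "B"}]    # a create operation
print(satisfies_fd(rows, ["id"], ["dept"]))       # True
print(satisfies_fd(txn(rows), ["id"], ["dept"]))  # False: FD violated
```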

16.
Previous work on maintaining the temporal consistency of real-time data objects mainly focuses on real-time database systems in which the transmission delays (jitters) of update jobs are simply ignored. However, this assumption does not hold in distributed real-time systems, where the jitters of update jobs can be large and change unpredictably with time. In this paper, we examine the design problems that arise when the More-Less (ML) approach (Xiong and Ramamritham in Proc. of the IEEE Real-Time Systems Symposium, 1999; IEEE Trans. Comput. 53:567-583, 2004), known to be an efficient scheme for maintaining the temporal consistency of real-time data objects, is applied in a distributed real-time system environment. We propose two new extensions of ML, called Jitter-Based More-Less (JB-ML) and Statistical Jitter-Based More-Less (SJB-ML), to address the jitter problems. JB-ML assumes that the jitter is a constant for each update task, and it provides a deterministic guarantee of the temporal consistency of the real-time data objects. SJB-ML relaxes this restriction and provides a statistical guarantee based on the given QoS requirements of the real-time data objects. We demonstrate through extensive simulation experiments that both JB-ML and SJB-ML are effective approaches and that they significantly outperform ML in terms of improving schedulability.
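
A sketch of a More-Less style assignment with a jitter allowance: for a data object with validity interval V and an assumed constant update jitter J, pick the largest period P and a deadline D with D <= P and P + D <= V - J, i.e. the jitter shrinks the usable window. This is a plausible reading of JB-ML's constraint for illustration, not its exact formulation:

```python
def jitter_more_less(validity, jitter, deadline):
    usable = validity - jitter          # jitter eats into the window
    period = usable - deadline          # largest period satisfying P + D <= V - J
    if not deadline <= period:
        raise ValueError("deadline too large for this validity/jitter")
    return period, deadline

print(jitter_more_less(validity=10.0, jitter=1.0, deadline=3.0))  # (6.0, 3.0)
```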

17.
In this paper, we present a process algebra with a minimal form of semantics for actions given by dependencies. Action dependencies are interpreted in the Mazurkiewicz sense: independent actions should be able to commute, or (from a different perspective) should be unordered, whereas dependent actions are always ordered. In this approach, the process algebra operators are used to describe the conceptual behavioural structure of the system, and the action dependencies determine the minimal necessary orderings and thereby the additional parallelism possible within this structure. In previous work on the semantics of specifications using Mazurkiewicz dependencies, the main interest has been in linear time. We present in this paper a branching-time semantics, both operationally and denotationally. For this purpose, we introduce a process algebra that incorporates, besides some standard operators, an operator for action refinement. For interpreting the operators in the presence of action dependencies, a new concept of partial termination has to be developed. We show consistency of the operational and denotational semantics; furthermore, we give an axiomatisation of bisimilarity which is complete for finite terms. Some small examples demonstrate the flexibility of this process algebra in the design of distributed reactive systems.
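
A small sketch of the Mazurkiewicz interpretation: two action sequences are equivalent if one can be turned into the other by repeatedly swapping adjacent independent actions, while dependent actions stay ordered. The dependency relation below is an arbitrary example:

```python
dependent = {("a", "c"), ("c", "a"), ("b", "c"), ("c", "b")}  # a,b independent

def normal_form(word):
    w = list(word)
    changed = True
    while changed:                      # bubble independent pairs into
        changed = False                 # a canonical (sorted) order
        for i in range(len(w) - 1):
            x, y = w[i], w[i + 1]
            if (x, y) not in dependent and y < x:
                w[i], w[i + 1] = y, x
                changed = True
    return "".join(w)

print(normal_form("ba") == normal_form("ab"))  # True: a,b commute
print(normal_form("ca") == normal_form("ac"))  # False: a,c are dependent
```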

18.
19.
An Object Grammar is a variation on traditional BNF grammars, where the notation is extended to support declarative bidirectional mappings between text and object graphs. The two directions for interpreting Object Grammars are parsing and formatting. Parsing transforms text into an object graph by recognizing syntactic features and creating the corresponding object structure. In the reverse direction, formatting recognizes object graph features and generates an appropriate textual presentation. The key to Object Grammars is the expressive power of the mapping, which decouples the syntactic structure from the graph structure. To handle graphs, Object Grammars support declarative annotations for resolving textual names that refer to arbitrary objects in the graph structure. Predicates on the semantic structure provide additional control over the mapping. Furthermore, Object Grammars are compositional so that languages may be defined in a modular fashion. We have implemented our approach to Object Grammars as one of the foundations of the Ensō system and illustrate the utility of our approach by showing how it enables definition and composition of domain-specific languages (DSLs).
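
A toy illustration of the bidirectional idea: one declarative pattern drives both parsing (text to object) and formatting (object to text). Real Object Grammars compose such rules and resolve cross-references in the graph; this shows only the round trip:

```python
import re

PATTERN = r"(?P<name>\w+) = (?P<value>\d+)"
TEMPLATE = "{name} = {value}"

def parse(text):
    m = re.fullmatch(PATTERN, text)
    return {"name": m["name"], "value": int(m["value"])}

def format_(obj):
    return TEMPLATE.format(**obj)

obj = parse("width = 80")
print(obj)                            # {'name': 'width', 'value': 80}
print(format_(obj) == "width = 80")   # True: the mapping round-trips
```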

20.