20 similar documents found.
1.
Semantic integrity support in SQL:1999 and commercial (object-)relational database management systems (cited: 1; self-citations: 0, others: 1)
Can Türker, Michael Gertz. The VLDB Journal (The International Journal on Very Large Data Bases), 2001, 10(4): 241-269
The correctness of the data managed by database systems is vital to any application that utilizes data for business, research,
and decision-making purposes. To guard databases against erroneous data not reflecting real-world data or business rules,
semantic integrity constraints can be specified during database design. Current commercial database management systems provide
various means to implement mechanisms to enforce semantic integrity constraints at database run-time.
In this paper, we give an overview of the semantic integrity support in the most recent SQL-standard SQL:1999, and we show
to what extent the different concepts and language constructs proposed in this standard can be found in major commercial (object-)relational
database management systems. In addition, we discuss general design guidelines that point out how the semantic integrity features
provided by these systems should be utilized in order to implement an effective integrity enforcing subsystem for a database.
Received: 14 August 2000 / Accepted: 9 March 2001 / Published online: 7 June 2001
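As a concrete illustration of the run-time enforcement discussed above, the following sketch declares a CHECK constraint, one of the SQL:1999 integrity features the paper surveys, and shows the DBMS rejecting a violating row. The schema and the business rule are hypothetical, and SQLite stands in for the commercial systems examined in the paper.

```python
import sqlite3

# Hypothetical schema: a CHECK constraint encodes the business rule
# "salary must be positive" so the DBMS rejects erroneous rows at run-time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        salary REAL CHECK (salary > 0)   -- semantic integrity constraint
    )
""")
conn.execute("INSERT INTO employee VALUES (1, 'Ada', 5000.0)")  # satisfies the rule

rejected = False
try:
    conn.execute("INSERT INTO employee VALUES (2, 'Bob', -10.0)")  # violates CHECK
except sqlite3.IntegrityError:
    rejected = True   # the DBMS refused the row; the constraint was enforced
```

The same rule could equally be declared as a named constraint or an assertion; the point is that the rule lives in the schema, not in application code.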
2.
Concurrency control in hierarchical multidatabase systems (cited: 1; self-citations: 0, others: 1)
Sharad Mehrotra, Henry F. Korth, Avi Silberschatz. The VLDB Journal (The International Journal on Very Large Data Bases), 1997, 6(2): 152-172
Over the past decade, significant research has been done towards developing transaction management algorithms for multidatabase
systems. Most of this work assumes a monolithic architecture of the multidatabase system with a single software module that
follows a single transaction management algorithm to ensure the consistency of data stored in the local databases. This monolithic
architecture is not appropriate in a multidatabase environment where the system spans multiple organizations distributed over geographically distant locations. In this paper, we propose an alternative multidatabase transaction
management architecture, where the system is hierarchical in nature. Hierarchical architecture has consequences on the design
of transaction management algorithms. An implication of the architecture is that the transaction management algorithms followed
by a multidatabase system must be composable; that is, it must be possible to incorporate individual multidatabase systems as elements in a larger multidatabase system.
We present a hierarchical architecture for a multidatabase environment and develop techniques for concurrency control in such
systems.
Edited by R. Sacks-Davis. Received June 27, 1994 / Accepted September 26, 1995
3.
Analysis of locking behavior in three real database systems (cited: 1; self-citations: 0, others: 1)
Vigyan Singhal, Alan Jay Smith. The VLDB Journal (The International Journal on Very Large Data Bases), 1997, 6(1): 40-52
Concurrency control is essential to the correct functioning of a database due to the need for correct, reproducible results.
For this reason, and because concurrency control is a well-formulated problem, an enormous body of literature has developed studying the performance of concurrency control algorithms. Most of this literature uses either analytic modeling or random
number-driven simulation, and explicitly or implicitly makes certain assumptions about the behavior of transactions and the
patterns by which they set and unset locks. Because of the difficulty of collecting suitable measurements, there have been
only a few studies that use trace-driven simulation, and even less study directed toward characterizing the concurrency
control behavior of real workloads. In this paper, we present a study of three database workloads, all taken from IBM DB2
relational database systems running commercial applications in a production environment. This study considers topics such
as frequency of locking and unlocking, deadlock and blocking, duration of locks, types of locks, correlations between applications
of lock types, two-phase versus non-two-phase locking, when locks are held and released, etc. In each case, we evaluate the
behavior of the workload relative to the assumptions commonly made in the research literature and discuss the extent to which
those assumptions may or may not lead to erroneous conclusions.
Edited by H. Garcia-Molina. Received April 5, 1994 / Accepted November 1, 1995
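The two-phase versus non-two-phase distinction examined in the study can be checked mechanically on a lock trace. A minimal sketch, with a made-up trace format of (operation, item) pairs for a single transaction:

```python
def is_two_phase(trace):
    """Check whether a transaction's lock/unlock trace obeys two-phase
    locking: once any lock is released (shrinking phase), no new lock
    may be acquired (growing phase may not resume)."""
    released = False
    for op, _item in trace:
        if op == "unlock":
            released = True
        elif op == "lock" and released:
            return False   # acquired a lock after releasing one: not 2PL
    return True

good = [("lock", "x"), ("lock", "y"), ("unlock", "x"), ("unlock", "y")]
bad  = [("lock", "x"), ("unlock", "x"), ("lock", "y"), ("unlock", "y")]
```

Run over a real trace, a checker like this yields exactly the 2PL/non-2PL classification the paper reports for its production workloads.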
4.
Multimedia systems must be able to support a certain quality of service (QoS) to satisfy the stringent real-time performance
requirements of their applications. HeiRAT, the Heidelberg Resource Administration Technique, is a comprehensive QoS management
system that was designed and implemented in connection with a distributed multimedia platform for networked PCs and workstations.
HeiRAT includes techniques for QoS negotiation, QoS calculation, resource reservation, and resource scheduling for local and
network resources.
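At its core, reservation-based QoS management of the kind described reduces to an admission test against remaining capacity, possibly degrading a request toward its minimum acceptable quality. A toy sketch; the class and method names are illustrative and not HeiRAT's actual API:

```python
class ResourceReserver:
    """Toy admission control for one resource (e.g., link bandwidth):
    admit a new stream only if the sum of reserved rates stays within
    capacity, degrading to the available rate when possible."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = {}   # stream_id -> reserved rate

    def negotiate(self, stream_id, desired, minimum):
        free = self.capacity - sum(self.reserved.values())
        if free >= desired:
            self.reserved[stream_id] = desired   # full quality granted
            return desired
        if free >= minimum:
            self.reserved[stream_id] = free      # degraded but acceptable
            return free
        return None                              # reject: QoS not guaranteeable

link = ResourceReserver(capacity=100)
granted_a = link.negotiate("a", desired=60, minimum=30)  # full rate
granted_b = link.negotiate("b", desired=60, minimum=30)  # degraded
granted_c = link.negotiate("c", desired=60, minimum=30)  # rejected
```

A real system would run such a test per resource (CPU, disk, network) along the path and only admit a stream if every test passes.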
5.
In this paper we develop an evaluation framework for Knowledge Management Systems (KMS). The framework builds on the theoretical
foundations underlying organizational Knowledge Management (KM) to identify key KM activities and the KMS capabilities required
to support each activity. These capabilities are then used to form a benchmark for evaluating KMS. Organizations selecting
KMS can use the framework to identify gaps and overlaps in the extent to which the capabilities provided and utilized by their
current KMS portfolio meet the KM needs of the organization. Other applications of the framework are also discussed.
Brent Furneaux
6.
R. Bělohlávek, V. Novák. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 2002, 7(2): 79-88
Linguistic fuzzy control is introduced in a pure logical framework. Two problems are studied: learning the linguistic rule base from data obtained by monitoring successful control, and learning the linguistic context. The methods are demonstrated by experimental results.
Supported by the grant No. 201/96/0985 of the GAČR and the project VS96037 of MŠMT of the Czech Republic.
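A linguistic rule base of the kind described can be sketched with triangular membership functions for the linguistic terms and weighted-average inference. The rules, terms, and numeric outputs below are invented for illustration; the paper's logical framework and learning methods are considerably richer:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical rule base "IF error IS <term> THEN action IS <value>",
# one (membership function, output) pair per linguistic rule.
rules = [
    (lambda e: tri(e, -2, -1, 0), -1.0),   # error negative -> decrease
    (lambda e: tri(e, -1,  0, 1),  0.0),   # error zero     -> hold
    (lambda e: tri(e,  0,  1, 2),  1.0),   # error positive -> increase
]

def control(error):
    """Weighted-average inference; assumes error lies within the rules'
    joint support so the total firing strength is nonzero."""
    weights = [mu(error) for mu, _ in rules]
    total = sum(weights)
    return sum(w * out for w, (_, out) in zip(weights, rules)) / total
```

Learning, as studied in the paper, would fit the terms and rule outputs from monitored control data rather than hard-coding them as here.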
7.
Managing database server performance to meet QoS requirements in electronic commerce systems (cited: 1; self-citations: 0, others: 1)
Patrick Martin, Wendy Powley, Hoi-Ying Li, Keri Romanufa. International Journal on Digital Libraries, 2002, 3(4): 316-324
The performance of electronic commerce systems has a major impact on their acceptability to users. Different users also demand
different levels of performance from the system, that is, they will have different Quality of Service (QoS) requirements. Electronic commerce systems are the integration of several different types of servers and each server must
contribute to meeting the QoS demands of the users. In this paper we focus on the role, and the performance, of a database server within an electronic commerce system.
We examine the characteristics of the workload placed on a database server by an electronic commerce system and suggest a
range of QoS requirements for the database server based on this analysis of the workload. We argue that a database server
must be able to dynamically reallocate its resources in order to meet the QoS requirements of different transactions as the
workload changes. We describe Quartermaster, which is a system to support dynamic goal-oriented resource management in database
management systems, and discuss how it can be used to help meet the QoS requirements of the electronic commerce database server.
We provide an example of the use of Quartermaster that illustrates how the dynamic reallocation of memory resources can be
used to meet the QoS requirements of a set of transactions similar to transactions found in an electronic commerce workload.
We briefly describe the memory reallocation algorithms used by Quartermaster and present experiments to show the impact of
the reallocations on the performance of the transactions.
Published online: 22 August 2001
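The dynamic reallocation idea can be sketched as a feedback loop that moves buffer memory from transaction classes beating their response-time goals to classes missing them. The field names and the single-increment policy are illustrative, not Quartermaster's actual algorithm:

```python
def reallocate(classes, step=1):
    """One step of goal-oriented memory reallocation: shift a buffer-pool
    increment from the class with the most slack relative to its
    response-time goal to the class violating its goal the worst."""
    worst = max(classes, key=lambda c: c["resp"] / c["goal"])  # most violated
    best  = min(classes, key=lambda c: c["resp"] / c["goal"])  # most slack
    if worst["resp"] > worst["goal"] and best["mem"] > step:
        best["mem"]  -= step   # donor gives up memory
        worst["mem"] += step   # violator receives it
    return classes

classes = [
    {"name": "olap", "goal": 5.0, "resp": 9.0, "mem": 10},  # missing its goal
    {"name": "oltp", "goal": 1.0, "resp": 0.5, "mem": 10},  # exceeding its goal
]
reallocate(classes)
```

Repeating this step as the workload shifts approximates the dynamic behavior the experiments in the paper measure: memory follows the QoS violations.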
8.
Global transaction support for workflow management systems: from formal specification to practical implementation (cited: 6; self-citations: 0, others: 6)
Paul Grefen, Jochem Vonk, Peter Apers. The VLDB Journal (The International Journal on Very Large Data Bases), 2001, 10(4): 316-333
In this paper, we present an approach to global transaction management in workflow environments. The transaction mechanism
is based on the well-known notion of compensation, but extended to handle arbitrary process structures (allowing cycles in processes) and safepoints (allowing partial compensation of processes). We present a formal specification of the transaction
model and transaction management algorithms in set and graph theory, providing clear, unambiguous transaction semantics. The
specification is straightforwardly mapped to a modular architecture, the implementation of which is first applied in a testing
environment, then in the prototype of a commercial workflow management system. The modular nature of the resulting system
allows easy distribution using middleware technology. The path from abstract semantics specification to concrete, real-world
implementation of a workflow transaction mechanism is thus covered in a complete and coherent fashion. As such, this paper
provides a complete framework for the application of well-founded transactional workflows.
Received: 16 November 1999 / Accepted: 29 August 2001 / Published online: 6 November 2001
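The compensation-with-safepoints mechanism can be sketched as follows: on a failure, completed steps are compensated in reverse order until a safepoint is reached, so work before the safepoint survives (partial compensation). The step names and the travel-booking scenario are invented for illustration:

```python
def run_with_compensation(steps, is_safepoint):
    """Run steps in order; on failure, execute compensations in reverse
    until the nearest safepoint, preserving everything before it."""
    done = []                                # (name, compensator) of completed steps
    for name, action, compensate in steps:
        try:
            action()
            done.append((name, compensate))
        except Exception:
            while done and not is_safepoint(done[-1][0]):
                _, comp = done.pop()
                comp()                       # undo in reverse order
            break
    return [name for name, _ in done]        # steps whose effects remain

log = []

def fail():
    raise RuntimeError("card declined")      # hypothetical failing step

steps = [
    ("book_flight", lambda: log.append("flight"), lambda: log.append("cancel_flight")),
    ("book_hotel",  lambda: log.append("hotel"),  lambda: log.append("cancel_hotel")),
    ("charge_card", fail,                         lambda: log.append("refund")),
]
kept = run_with_compensation(steps, is_safepoint=lambda n: n == "book_flight")
```

With "book_flight" marked as a safepoint, the failed payment only rolls back the hotel booking; the flight survives, which is exactly the partial-compensation behavior the paper's safepoints enable.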
9.
Denise J. Ecklund, Vera Goebel, Thomas Plagemann, Earl F. Ecklund Jr. Multimedia Systems, 2002, 8(5): 431-442
In this paper, we present a separable, reusable middleware solution that provides coordinated, end-to-end QoS management
over any type of service component, and can use existing (legacy) QoS management solutions (by using wrappers) in a distributed
multimedia system. Our middleware solution incorporates strategic and tactical QoS managers, and supports protocols and messages
between tactical managers and managed application components, and between QoS managers in the management hierarchy. Strategic
QoS managers take a global view of QoS provided by a set of application components within the manager's policy domain. Tactical
QoS managers provide local control over application components. We introduce the concept of QoS policy domains to scope the
authority of a strategic QoS manager. We describe how the management hierarchy is dynamically configured and reconfigured
based on runtime needs of the application.
10.
Workflow management systems (WfMS) offer a promising technology for the realization of process-centered application systems.
A deficiency of existing WfMS is their inadequate support for dealing with exceptional deviations from the standard procedure.
In the ADEPT project, therefore, we have developed advanced concepts for workflow modeling and execution, which aim to increase flexibility in WfMS. On the one hand, we allow workflow designers to model exceptional execution paths already
at buildtime provided that these deviations are known in advance. On the other hand authorized users may dynamically deviate
from the pre-modeled workflow at runtime as well in order to deal with unforeseen events. In this paper, we focus on forward
and backward jumps needed in this context. We describe sophisticated modeling concepts for capturing deviations in workflow
models already at buildtime, and we show how forward and backward jumps (of different semantics) can be correctly applied
in an ad-hoc manner during runtime as well. We work out basic requirements, facilities, and limitations arising in this context.
Our experiences with applications from different domains have shown that the developed concepts will form a key part of process
flexibility in process-centered information systems.
Received: 6 October 2002 / Accepted: 8 January 2003
Published online: 27 February 2003
This paper is a revised and extended version of [40]. The described work was partially performed in the research project “Scalability
in Adaptive Workflow Management Systems” funded by the Deutsche Forschungsgemeinschaft (DFG).
11.
HweeHwa Pang. Multimedia Systems, 1997, 5(6): 386-399
Multimedia applications that are required to manipulate large collections of objects are becoming increasingly common. Moreover,
the sizes of multimedia objects, which are already huge, are getting even bigger as the resolution of output devices improves.
As a result, many multimedia storage systems are not likely to be able to keep all of their objects disk-resident. Instead,
a majority of the less popular objects have to be off-loaded to tertiary storage to keep costs down. The speed at which objects
can be accessed from tertiary storage is thus an important consideration. In this paper, we propose an adaptive data retrieval
algorithm that employs a combination of staging and direct access in servicing tertiary storage retrieval requests. At retrieval
time, an object that resides in tertiary storage can either be staged to and then played back from disks, or the object can
be accessed directly from the tertiary drives. We show that a simplistic policy that adheres strictly to staging or direct
access does not exploit the full retrieval capacity of both the tertiary library and the secondary storage. To overcome the
problem, we propose a data retrieval algorithm that dynamically chooses between staging and direct access, based on the relative
load on the tertiary versus secondary devices. A series of simulation experiments confirms that the algorithm achieves good
access times over a wide range of workloads and resource configurations. Moreover, the algorithm is very responsive to changing
load conditions.
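The staging-versus-direct-access decision can be sketched as a load-sensitive policy. The utilization inputs and the threshold are illustrative; the paper's algorithm likewise bases the choice on the relative load of the tertiary and secondary devices:

```python
def choose_access(disk_load, tertiary_load, stage_threshold=0.8):
    """Pick an access mode for an object resident on tertiary storage.
    Loads are utilizations in [0, 1]; the threshold is illustrative.
    Stage when the disks have spare capacity and are no busier than the
    tertiary drives; otherwise stream directly from tertiary."""
    if disk_load < stage_threshold and disk_load <= tertiary_load:
        return "stage"    # disks can absorb the object; frees the drive early
    return "direct"       # disks are the relative bottleneck
```

A policy that always returns "stage" or always "direct" corresponds to the simplistic schemes the paper shows cannot exploit the full capacity of both devices.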
12.
Summary. This paper formulates necessary and sufficient conditions on the information required for enforcing causal ordering in a
distributed system with asynchronous communication. The paper then presents an algorithm for enforcing causal message ordering.
The algorithm allows a process to multicast to arbitrary and dynamically changing process groups. We show that the algorithm
is optimal in the space complexity of the overhead of control information in both messages and message logs. The algorithm
achieves optimality by transmitting the bare minimum causal dependency information specified by the necessity conditions,
and using an encoding scheme to represent and transmit this information. We show that, in general, the space complexity of causal message ordering in an asynchronous system is O(n^2), where n is the number of nodes in the system. Although the upper bound on the space complexity of the control-information overhead in the algorithm is O(n^2), the overhead is likely to be much smaller on average, and is always the least possible.
Received: January 1996 / Accepted: February 1998
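A standard way to realize causal message ordering is with vector clocks; a delivery condition in that style is sketched below. Note that this ships full O(n) vectors on every message, whereas the paper's algorithm encodes only the bare minimum dependency information required by its necessity conditions:

```python
def deliverable(msg_vc, sender, local_vc):
    """Causal-delivery test with vector clocks: deliver a message iff it is
    the next one expected from its sender and every message it causally
    depends on has already been delivered locally."""
    if msg_vc[sender] != local_vc[sender] + 1:
        return False                         # not the next message from sender
    return all(msg_vc[k] <= local_vc[k]      # no missing causal dependency
               for k in range(len(msg_vc)) if k != sender)

local = [1, 0, 0]   # this process has delivered one message from process 0
```

A message that fails the test is buffered and retried as the local clock advances, which is how causal ordering is enforced without blocking the sender.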
13.
Integration – supporting multiple application classes with heterogeneous performance requirements – is an emerging trend
in networks, file systems, and operating systems. We evaluate two architectural alternatives – partitioned and integrated
– for designing next-generation file systems. Whereas a partitioned server employs a separate file system for each application
class, an integrated file server multiplexes its resources among all application classes; we evaluate the performance of the
two architectures with respect to sharing of disk bandwidth among the application classes. We show that although the problem
of sharing disk bandwidth in integrated file systems is conceptually similar to that of sharing network link bandwidth in
integrated services networks, the arguments that demonstrate the superiority of integrated services networks over separate
networks are not applicable to file systems. Furthermore, we show that: an integrated server outperforms the partitioned server
in a large operating region and has slightly worse performance in the remaining region; the capacity of an integrated server
is larger than that of the partitioned server; and an integrated server outperforms the partitioned server by a factor of
up to 6 in the presence of bursty workloads.
14.
Byzantine quorum systems (cited: 12; self-citations: 0, others: 12)
Summary. Quorum systems are well-known tools for ensuring the consistency and availability of replicated data despite the benign failure
of data repositories. In this paper we consider the arbitrary (Byzantine) failure of data repositories and present the first
study of quorum system requirements and constructions that ensure data availability and consistency despite these failures.
We also consider the load associated with our quorum systems, i.e., the minimal access probability of the busiest server.
For services subject to arbitrary failures, we demonstrate quorum systems over n servers with a load of O(1/√n), thus meeting the lower bound on load for benignly fault-tolerant quorum systems. We explore several variations of our quorum
systems and extend our constructions to cope with arbitrary client failures.
Received: October 1996 / Accepted: June 1998
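The consistency requirement behind these constructions can be sketched as a pairwise-intersection test: with at most f Byzantine servers, any two quorums must share enough servers that correct, up-to-date replies outvote faulty ones. This is a threshold-style simplification of the paper's general masking-quorum definition:

```python
from itertools import combinations

def is_masking_quorum_system(quorums, f):
    """Check the threshold form of the masking-quorum property: every pair
    of quorums intersects in at least 2f + 1 servers, so the >= f + 1
    correct, up-to-date replies in the intersection outnumber the <= f
    Byzantine ones."""
    return all(len(q1 & q2) >= 2 * f + 1
               for q1, q2 in combinations(quorums, 2))

# n = 5 servers tolerating f = 1: all 4-of-5 subsets qualify, since any
# two of them share at least 3 = 2f + 1 servers.
quorums_4of5 = [set(c) for c in combinations(range(5), 4)]
quorums_3of5 = [set(c) for c in combinations(range(5), 3)]  # pairs may share only 1
```

For benign failures the familiar majority-quorum intersection of a single server suffices; the 2f + 1 overlap is exactly the extra price of arbitrary failures.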
15.
Summary. We set out a modal logic for reasoning about multilevel security of probabilistic systems. This logic contains expressions
for time, probability, and knowledge. Making use of the Halpern-Tuttle framework for reasoning about knowledge and probability,
we give a semantics for our logic and prove it is sound. We give two syntactic definitions of perfect multilevel security
and show that their semantic interpretations are equivalent to earlier, independently motivated characterizations. We also discuss the relations between these characterizations of security and their usefulness in security analysis.
16.
The KMS has been widely implemented in organizations. However, its availability does not guarantee that employees are willing to spend time and effort using it. We explored the use of KMS with an emphasis on social relationships. Specifically, social capital theory was employed to establish the social relationship construct and its three dimensions: tie strength, shared norms, and trust. By studying a company that had implemented a KMS, we explored the dimensions of social relationship and their importance in employees' use of a KMS. A theoretical framework was used to depict the antecedents of employees' usage behavior. Implications for both researchers and practitioners are discussed, especially for companies expecting to exploit knowledge sharing in the Chinese business environment.
17.
E. Panagos, A. Biliris. The VLDB Journal (The International Journal on Very Large Data Bases), 1997, 6(3): 209-223
Client-server object-oriented database management systems differ significantly from traditional centralized systems in terms
of their architecture and the applications they target. In this paper, we present the client-server architecture of the EOS
storage manager and we describe the concurrency control and recovery mechanisms it employs. EOS offers a semi-optimistic locking
scheme based on the multi-granularity two-version two-phase locking protocol. Under this scheme, multiple concurrent readers
are allowed to access a data item while it is being updated by a single writer. Recovery is based on write-ahead redo-only
logging. Log records are generated at the clients and they are shipped to the server during normal execution and at transaction
commit. Transaction rollback is fast because there are no updates that have to be undone, and recovery from system crashes
requires only one scan of the log for installing the changes made by transactions that committed before the crash. We also
present a preliminary performance evaluation of the implementation of the above mechanisms.
Edited by R. King. Received July 1993 / Accepted May 1996
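Redo-only recovery as described, a single forward scan installing only the work of committed transactions with nothing to undo, can be sketched as follows. The log-record format is invented for illustration:

```python
def recover(log):
    """Redo-only crash recovery: one forward scan of the log installs the
    updates of transactions that committed before the crash; updates of
    uncommitted transactions are simply skipped, so no undo pass exists."""
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    db = {}
    for rec in log:
        if rec[0] == "write":
            _, txn, key, value = rec
            if txn in committed:
                db[key] = value          # redo committed work only
    return db

log = [
    ("write", "T1", "x", 1),
    ("write", "T2", "y", 2),
    ("commit", "T1"),
    # crash before T2 commits: its write is ignored, no undo needed
]
```

The sketch also shows why rollback is fast in such a scheme: aborting a transaction merely means never installing its writes.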
18.
We have developed a novel approach to the extraction of cloud base height (CBH) from pairs of whole-sky imagers (WSIs). The
core problem is to spatially register cloud fields from widely separated WSIs; once this registration is complete, triangulation provides the CBH measurements. The wide camera separation and the self-similarity of clouds defeat standard matching algorithms when applied
to static views of the sky. In response, we use optical flow methods that exploit the fact that modern WSIs provide image
sequences. We will describe the algorithm, a confidence metric for its performance, a method to correct the severe projective
effects of the WSI camera, and results on real data.
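Once the two imagers have matched the same cloud feature, the triangulation step reduces to simple geometry. A sketch for the idealized case of a feature lying in the vertical plane through the two cameras; the paper's contribution is the optical-flow registration, not this arithmetic:

```python
import math

def cloud_base_height(baseline_m, zenith1_deg, zenith2_deg):
    """Triangulated height of a matched cloud feature seen from two imagers
    a known baseline apart, at zenith angles theta1 and theta2, with the
    feature between them in the vertical plane of the baseline. The
    horizontal offsets from the two imagers sum to the baseline:
    h*tan(theta1) + h*tan(theta2) = B, hence h = B / (tan t1 + tan t2)."""
    t1 = math.tan(math.radians(zenith1_deg))
    t2 = math.tan(math.radians(zenith2_deg))
    return baseline_m / (t1 + t2)

h = cloud_base_height(2000.0, 45.0, 45.0)   # symmetric case
```

The confidence metric and projective correction described in the paper exist precisely because real matches are noisy and off-plane, so this clean formula only applies after their corrections.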
19.
Synchronous Byzantine quorum systems (cited: 2; self-citations: 0, others: 2)
Rida A. Bazzi. Distributed Computing, 2000, 13(1): 45-52
Summary. Quorum systems have been used to implement many coordination problems in distributed systems such as mutual exclusion, data
replication, distributed consensus, and commit protocols. Malkhi and Reiter recently proposed quorum systems that can tolerate
Byzantine failures; they called these systems Byzantine quorum systems and gave some examples of such quorum systems. In this
paper, we propose a new definition of Byzantine quorums that is appropriate for synchronous systems. We show how these quorums
can be used for data replication and propose a general construction of synchronous Byzantine quorums using standard quorum
systems. We prove tight lower bounds on the load of synchronous Byzantine quorums for various patterns of failures and we
present synchronous Byzantine quorums that have optimal loads that match the lower bounds for two failure patterns.
Received: June 1998 / Accepted: August 1999
20.
Srirangaraj Setlur, Alfred Lawson, Venugopal Govindaraju, Sargur Srihari. International Journal on Document Analysis and Recognition, 2002, 4(3): 154-169
This paper describes the issues involved in the design of a system for evaluating improvements in the performance of a real-time
address recognition system being used by the United States Postal Service for processing mail-piece images. Evaluation of
the performance of recognition systems is normally carried out by measuring the performance of the system on a representative
sample of images. Designing a comprehensive and valid testing scenario is a complex task that requires careful attention.
Sampling the live mail-stream to generate a deck of images representative of the general mail-stream for testing, truthing (generating
reference data on a significant number of images), grading and evaluation, and designing tools to facilitate these functions
are important topics that need to be addressed. This paper describes the efforts of the United States Postal Service and CEDAR
towards developing an infrastructure for sampling, truthing, and testing of mail-stream images.
Received: July 25, 2000 / Revised version: July 31, 2001
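Grading a truthed deck, as described above, amounts to comparing recognizer output against the reference truth for each image and tallying the standard rates. The record format and rate definitions below are illustrative, not the USPS/CEDAR grading specification:

```python
def grade(results, truth):
    """Tally grading statistics for a truthed deck: a result of None means
    the recognizer rejected (declined to finalize) the image; a wrong
    finalization counts as an error, usually the costliest outcome."""
    correct = errors = rejects = 0
    for image_id, decoded in results.items():
        if decoded is None:
            rejects += 1
        elif decoded == truth[image_id]:
            correct += 1
        else:
            errors += 1
    n = len(results)
    return {"accept_rate": (correct + errors) / n,   # fraction finalized
            "error_rate": errors / n,
            "correct_rate": correct / n}

# Hypothetical three-image deck with truthed ZIP codes.
truth   = {"img1": "14260", "img2": "90210", "img3": "30301"}
results = {"img1": "14260", "img2": "90120", "img3": None}
stats = grade(results, truth)
```

Improvements to a recognition system are then judged by how these rates move on a deck sampled to match the general mail-stream, which is why representative sampling and careful truthing dominate the evaluation design.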