1.
R. Braumandl, J. Claussen, A. Kemper, D. Kossmann. The VLDB Journal: The International Journal on Very Large Data Bases, 2000, 8(3-4): 156-177.
Inter-object references are one of the key concepts of object-relational and object-oriented database systems. In this work,
we investigate alternative techniques to implement inter-object references and make the best use of them in query processing,
i.e., in evaluating functional joins. We will give a comprehensive overview and performance evaluation of all known techniques
for simple (single-valued) as well as multi-valued functional joins. Furthermore, we will describe special order-preserving functional-join techniques that are particularly attractive for decision support queries that require ordered results. While
most of the presentation of this paper is focused on object-relational and object-oriented database systems, some of the results
can also be applied to plain relational databases because index nested-loop joins along key/foreign-key relationships, as they are frequently found in relational databases, are just one particular way to
execute a functional join.
Received February 28, 1999 / Accepted September 27, 1999
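
For a concrete picture of the two evaluation strategies the abstract contrasts, here is a small Python sketch; the data layout and function names are invented for illustration and are not taken from the paper.

    # Toy data: orders reference customers through an OID-valued attribute.
    customers = {101: {"oid": 101, "name": "Ada"},
                 102: {"oid": 102, "name": "Grace"}}
    orders = [{"customer_oid": 101, "total": 30},
              {"customer_oid": 102, "total": 45}]

    def functional_join_direct(orders, customers):
        """Pointer-style functional join: dereference each OID directly."""
        for o in orders:
            yield o, customers[o["customer_oid"]]

    def index_nested_loop_join(outer, index, key):
        """Value-based variant: probe a key index for every outer tuple,
        the relational special case mentioned at the end of the abstract."""
        for row in outer:
            match = index.get(row[key])
            if match is not None:
                yield row, match

    for order, cust in functional_join_direct(orders, customers):
        print(cust["name"], order["total"])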
2.
The most common way of designing databases is by means of a conceptual model, such as E/R, without taking into account other
views of the system. New object-oriented design languages, such as UML (Unified Modelling Language), allow the whole system,
including the database schema, to be modelled in a uniform way. Moreover, as UML is an extendable language, it allows for
any necessary introduction of new stereotypes for specific applications. Proposals exist to extend UML with stereotypes for
database design but, unfortunately, they are focused on relational databases. However, new applications require complex objects to be represented in complex relationships, and object-relational databases are more appropriate for these requirements. The
framework of this paper is an Object-Relational Database Design Methodology, which defines new UML stereotypes for Object-Relational
Database Design and proposes some guidelines to translate a UML conceptual schema into an object-relational schema. The guidelines
are based on the SQL:1999 object-relational model and on Oracle8i as a product example.
Initial submission: 22 January 2002 / Revised submission: 10 June 2002
Published online: 7 January 2003
This paper is a revised and extended version of "Extending UML for Object-Relational Database Design", presented at the UML’2001 conference [17].
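
To make the flavour of such translation guidelines concrete, the sketch below maps a UML-like class description to a SQL:1999 structured type plus a typed table; the mapping rule and all names are hypothetical simplifications, not the methodology's actual guidelines.

    def uml_class_to_sql1999(name, attributes):
        """Emit SQL:1999 DDL for a UML class: a structured type and a
        typed table whose rows are objects of that type.
        attributes: list of (attribute_name, sql_type) pairs."""
        attr_ddl = ",\n  ".join(f"{a} {t}" for a, t in attributes)
        return (f"CREATE TYPE {name}_t AS (\n  {attr_ddl}\n) NOT FINAL;\n"
                f"CREATE TABLE {name} OF {name}_t\n"
                f"  (REF IS oid SYSTEM GENERATED);")

    print(uml_class_to_sql1999("Employee",
                               [("name", "VARCHAR(40)"),
                                ("salary", "DECIMAL(9,2)")]))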
3.
Laura M. Haas, Michael J. Carey, Miron Livny, Amit Shukla. The VLDB Journal: The International Journal on Very Large Data Bases, 1997, 6(3): 241-256.
In this paper, we re-examine the results of prior work on methods for computing ad hoc joins. We develop a detailed cost model for predicting join algorithm performance, and we use the model to develop cost formulas
for the major ad hoc join methods found in the relational database literature. We show that various pieces of “common wisdom” about join algorithm
performance fail to hold up when analyzed carefully, and we use our detailed cost model to derive optimal buffer allocation schemes for each of the join methods examined here. We show that optimizing their buffer allocations
can lead to large performance improvements, e.g., as much as a 400% improvement in some cases. We also validate our cost model's
predictions by measuring an actual implementation of each join algorithm considered. The results of this work should be directly
useful to implementors of relational query optimizers and query processing systems.
Edited by M. Adiba. Received May 1993 / Accepted April 1996
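
For orientation, the flavour of such cost formulas can be seen in the standard textbook expression for a block nested-loops join of relations R and S with B buffer pages; this is the usual simplified formula, not the paper's detailed model:

    C_{BNL} = |R| + \left\lceil \frac{|R|}{B-2} \right\rceil \cdot |S|

in page I/Os, where |R| and |S| are page counts. Allocating more buffer frames to the outer relation directly reduces the number of passes over S, which is why buffer allocation alone can change join performance by the large factors the paper reports.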
4.
5.
Semantic heterogeneity resolution in federated databases by metadata implantation and stepwise evolution
Goksel Aslan, Dennis McLeod. The VLDB Journal: The International Journal on Very Large Data Bases, 1999, 8(2): 120-132.
A key aspect of interoperation among data-intensive systems involves the mediation of metadata and ontologies across database
boundaries. One way to achieve such mediation between a local database and a remote database is to fold remote metadata into
the local metadata, thereby creating a common platform through which information sharing and exchange becomes possible. Schema
implantation and semantic evolution, our approach to the metadata folding problem, is a partial database integration scheme
in which remote and local (meta)data are integrated in a stepwise manner over time. We introduce metadata implantation and
stepwise evolution techniques to interrelate database elements in different databases, and to resolve conflicts on the structure
and semantics of database elements (classes, attributes, and individual instances). We employ a semantically rich canonical
data model, and an incremental integration and semantic heterogeneity resolution scheme. In our approach, relationships between
local and remote information units are determined whenever enough knowledge about their semantics is acquired. The metadata
folding problem is solved by implanting remote database elements into the local database, a process that imports remote database
elements into the local database environment, hypothesizes the relevance of local and remote classes, and customizes the organization
of remote metadata. We have implemented a prototype system and demonstrated its use in an experimental neuroscience environment.
Received June 19, 1998 / Accepted April 20, 1999
6.
Managing database server performance to meet QoS requirements in electronic commerce systems
Patrick Martin, Wendy Powley, Hoi-Ying Li, Keri Romanufa. International Journal on Digital Libraries, 2002, 3(4): 316-324.
The performance of electronic commerce systems has a major impact on their acceptability to users. Different users also demand
different levels of performance from the system, that is, they will have different Quality of Service (QoS) requirements. Electronic commerce systems integrate several different types of servers, and each server must
contribute to meeting the QoS demands of the users. In this paper we focus on the role, and the performance, of a database server within an electronic commerce system.
We examine the characteristics of the workload placed on a database server by an electronic commerce system and suggest a
range of QoS requirements for the database server based on this analysis of the workload. We argue that a database server
must be able to dynamically reallocate its resources in order to meet the QoS requirements of different transactions as the
workload changes. We describe Quartermaster, which is a system to support dynamic goal-oriented resource management in database
management systems, and discuss how it can be used to help meet the QoS requirements of the electronic commerce database server.
We provide an example of the use of Quartermaster that illustrates how the dynamic reallocation of memory resources can be
used to meet the QoS requirements of a set of transactions similar to transactions found in an electronic commerce workload.
We briefly describe the memory reallocation algorithms used by Quartermaster and present experiments to show the impact of
the reallocations on the performance of the transactions.
Published online: 22 August 2001
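
A minimal sketch of the goal-oriented idea, with invented workload classes and goals (this is not Quartermaster's actual algorithm): periodically shift buffer memory from the class with the most slack against its response-time goal to the class violating its goal the worst.

    def reallocate_buffer(classes, total_pages, step=32):
        """One reallocation step. classes maps a workload class name to a
        dict with 'pages', 'goal_ms' and 'observed_ms'. Illustrative only."""
        def slack(name):
            return classes[name]["goal_ms"] - classes[name]["observed_ms"]
        donor = max(classes, key=slack)   # most headroom, can give pages up
        needy = min(classes, key=slack)   # worst goal violation
        if donor != needy and slack(needy) < 0 and classes[donor]["pages"] > step:
            classes[donor]["pages"] -= step
            classes[needy]["pages"] += step
        assert sum(c["pages"] for c in classes.values()) == total_pages
        return classes

    pools = {"browse":   {"pages": 512, "goal_ms": 200, "observed_ms": 120},
             "checkout": {"pages": 256, "goal_ms": 100, "observed_ms": 180}}
    print(reallocate_buffer(pools, total_pages=768))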
7.
Carlo Combi, Giuseppe Pozzi. The VLDB Journal: The International Journal on Very Large Data Bases, 2001, 9(4): 294-311.
The granularity of given temporal information is the level of abstraction at which information is expressed. Different units of measure allow
one to represent different granularities. Indeterminacy is often present in temporal information given at different granularities:
temporal indeterminacy is related to incomplete knowledge of when the considered fact happened. Focusing on temporal databases, different granularities
and indeterminacy have to be considered in expressing valid time, i.e., the time at which the information is true in the modeled
reality. In this paper, we propose HMAP (the term is a transliteration of an ancient Greek poetical word meaning “day”), a temporal data model extending the capability of defining valid times with different granularity and/or with indeterminacy. In HMAP, absolute intervals are explicitly represented by their start, end, and duration: in this way, we can represent valid times such as “in December 1998 for five hours”, “from July 1995, for 15 days”, and “from March 1997 to October 15, 1997, between 6 and 6:30 p.m.”. HMAP is based on a three-valued logic for managing uncertainty in temporal relationships. Formulas involving different temporal
relationships between intervals, instants, and durations can be defined, allowing one to query the database with different
granularities, not necessarily related to that of data. In this paper, we also discuss the complexity of algorithms, allowing
us to evaluate HMAP formulas, and show that the formulas can be expressed as constraint networks falling into the class of simple temporal problems,
which can be solved in polynomial time.
Received 6 August 1998 / Accepted 13 July 2000 / Published online: 13 February 2001
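
To make the three-valued flavour concrete, here is an illustrative Python encoding, not the paper's formalism: an indeterminate instant is a pair of chronon bounds, and a before comparison answers True, False, or None for unknown.

    def before(a, b):
        """Three-valued 'a before b' on indeterminate instants, each given
        as (lower, upper) chronon bounds at some granularity."""
        a_lo, a_hi = a
        b_lo, b_hi = b
        if a_hi < b_lo:
            return True    # a certainly ends before b can begin
        if b_hi <= a_lo:
            return False   # b certainly ends by the time a can begin
        return None        # uncertainty ranges overlap: unknown

    # "Sometime in March 1997" vs. "October 15, 1997", as day numbers of 1997.
    march_1997 = (60, 90)
    oct_15 = (288, 288)
    print(before(march_1997, oct_15))      # True
    print(before(oct_15, march_1997))      # False
    print(before((60, 90), (80, 100)))     # None: unknown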
8.
Deadlock detection in distributed database systems: a new algorithm and a comparative performance analysis
Natalija Krivokapić, Alfons Kemper, Ehud Gudes. The VLDB Journal: The International Journal on Very Large Data Bases, 1999, 8(2): 79-100.
This paper attempts a comprehensive study of deadlock detection in distributed database systems. First, the two predominant
deadlock models in these systems and the four different distributed deadlock detection approaches are discussed. Afterwards,
a new deadlock detection algorithm is presented. The algorithm is based on dynamically creating deadlock detection agents (DDAs), each being responsible for detecting deadlocks in one connected component of the global wait-for-graph (WFG). The
DDA scheme is a “self-tuning” system: after an initial warm-up phase, dedicated DDAs will be formed for “centers of locality”,
i.e., parts of the system where many conflicts occur. A dynamic shift in locality of the distributed system will be responded
to by automatically creating new DDAs while the obsolete ones terminate. In this paper, we also compare the most competitive
representative of each class of algorithms suitable for distributed database systems based on a simulation model, and point
out their relative strengths and weaknesses. The extensive experiments we carried out indicate that our newly proposed deadlock
detection algorithm outperforms the other algorithms in the vast majority of configurations and workloads and, in contrast
to all other algorithms, is very robust with respect to differing load and access profiles.
Received December 4, 1997 / Accepted February 2, 1999
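
The core test each DDA performs on its connected component can be pictured as plain cycle detection in the wait-for graph. The self-contained sketch below (an ordinary depth-first search over an invented dict representation) deliberately omits the dynamic creation, merging and termination of agents that the paper's scheme adds.

    def find_deadlock(wfg):
        """Return one cycle in a wait-for graph {txn: set of txns waited on},
        or None if the graph is deadlock-free."""
        WHITE, GREY, BLACK = 0, 1, 2
        color = {t: WHITE for t in wfg}
        stack = []

        def dfs(t):
            color[t] = GREY
            stack.append(t)
            for u in wfg.get(t, ()):
                if color.get(u, WHITE) == GREY:        # back edge: cycle
                    return stack[stack.index(u):] + [u]
                if color.get(u, WHITE) == WHITE:
                    cycle = dfs(u)
                    if cycle:
                        return cycle
            stack.pop()
            color[t] = BLACK
            return None

        for t in list(wfg):
            if color[t] == WHITE:
                cycle = dfs(t)
                if cycle:
                    return cycle
        return None

    print(find_deadlock({"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}))
    # ['T1', 'T2', 'T3', 'T1']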
9.
Location is one of the most important elements of context in ubiquitous computing. In this paper we describe a location model, a spatial-aware communication model and an implementation of the models that exploit location for processing and communicating context. The location model presented describes a location
tree, which contains human-readable semantic and geometric information about an organisation and a structure to describe the
current location of an object or a context. The proposed system is designed to work not only on more powerful devices like handhelds, but also on small computer systems that are embedded into everyday artefacts (making them digital artefacts). Model and design decisions were made on the basis of experiences from three prototype setups with several applications,
which we built from 1998 to 2002. While running these prototypes we collected experiences from designers, implementers and users and formulated them as guidelines in this paper. All the prototype applications heavily use location information for providing their functionality. We found
that location is not only of use as information for the application but also important for communicating context. In this
paper we introduce the concept of spatial-aware communication where data is communicated based on the relative location of
digital artefacts rather than on their identity.
Correspondence to: Michael Beigl, Telecooperation Office (TecO), University of Karlsruhe, Vincenz-Prießnitz-Str. 1, D-76131 Karlsruhe, Germany.
Email: michael@teco.edu
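
An illustrative Python rendering of the idea (hypothetical structures, not the paper's model): a location tree with human-readable names, plus the query that spatial-aware communication needs, namely the smallest location two artefacts share.

    class Location:
        """Node in a location tree: a semantic name plus child locations."""
        def __init__(self, name, parent=None):
            self.name, self.parent, self.children = name, parent, []
            if parent:
                parent.children.append(self)

        def path(self):
            node, parts = self, []
            while node:
                parts.append(node.name)
                node = node.parent
            return "/".join(reversed(parts))

    def smallest_common_location(a, b):
        """Lowest shared ancestor: e.g., deliver data to every artefact
        in the same room, floor or building."""
        ancestors = set()
        node = a
        while node:
            ancestors.add(node)
            node = node.parent
        node = b
        while node and node not in ancestors:
            node = node.parent
        return node

    building = Location("TecO")
    floor1 = Location("floor1", building)
    room101 = Location("room101", floor1)
    room102 = Location("room102", floor1)
    print(room101.path())                                   # TecO/floor1/room101
    print(smallest_common_location(room101, room102).name)  # floor1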
10.
We describe a system which supports dynamic user interaction with multimedia information using content-based hypermedia navigation
techniques, specialising in a technique for navigation of musical content. The model combines the principles of open hypermedia, whereby hypermedia link information is maintained by a link service, with content-based retrieval techniques in which a database is queried based on a feature of the multimedia content; our approach could be described as
‘content-based retrieval of hypermedia links’. The experimental system focuses on temporal media and consists of a set of
component-based navigational hypermedia tools. We propose the use of melodic pitch contours in this context and we present
techniques for storing and querying contours, together with experimental results. Techniques for integrating the contour database
with open hypermedia systems are also discussed.
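
One well-known contour encoding, sketched below in Python, is the Parsons-style up/down/same string; the experimental system's actual contour representation and matching machinery may differ.

    def contour(pitches):
        """Encode a melody as U(p)/D(own)/S(ame) steps between successive
        notes; pitches are MIDI note numbers."""
        return "".join("U" if cur > prev else "D" if cur < prev else "S"
                       for prev, cur in zip(pitches, pitches[1:]))

    def matches(query_pitches, stored_pitches):
        """Substring match of a query contour against a stored contour."""
        return contour(query_pitches) in contour(stored_pitches)

    theme = [67, 67, 67, 63, 65, 65, 65, 62]
    print(contour(theme))                # SSDUSSD
    print(matches([67, 63, 65], theme))  # True: contour DU occurs in the theme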
11.
Effective timestamping in databases
Kristian Torp, Christian S. Jensen, Richard T. Snodgrass. The VLDB Journal: The International Journal on Very Large Data Bases, 2000, 8(3-4): 267-288.
Many existing database applications place various timestamps on their data, rendering temporal values such as dates and times
prevalent in database tables. During the past two decades, several dozen temporal data models have appeared, all with timestamps
being integral components. The models have used timestamps for encoding two specific temporal aspects of database facts, namely
transaction time, when the facts are current in the database, and valid time, when the facts are true in the modeled reality.
However, with few exceptions, the assignment of timestamp values has been considered only in the context of individual modification
statements.
This paper takes the next logical step: It considers the use of timestamping for capturing transaction and valid time in the
context of transactions. The paper initially identifies and analyzes several problems with straightforward timestamping, then
proceeds to propose a variety of techniques aimed at solving these problems. Timestamping the results of a transaction with
the commit time of the transaction is a promising approach. The paper studies how this timestamping may be done using a spectrum
of techniques. While many database facts are valid until now (the current time), this value is absent from the existing temporal types. Techniques that address this problem using different substitute values are presented. Using a stratum architecture, the performance of the different proposed techniques is studied.
Although querying and modifying time-varying data is accompanied by a number of subtle problems, we present a comprehensive
approach that provides application programmers with simple, consistent, and efficient support for modifying bitemporal databases
in the context of user transactions.
Received March 11, 1998 / Accepted July 27, 1999
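
The commit-time approach can be pictured as deferring all timestamp assignment until commit, so that every fact written by one transaction carries the same transaction-time start. A minimal sketch with invented names; the paper studies a spectrum of techniques for realizing this efficiently.

    import time

    class Transaction:
        """Buffer modifications and stamp them with the commit time."""
        def __init__(self, table):
            self.table, self.pending = table, []

        def insert(self, row):
            self.pending.append(dict(row))     # no timestamp yet

        def commit(self):
            commit_ts = time.time()            # one timestamp for the txn
            for row in self.pending:
                row["tt_start"] = commit_ts    # transaction time begins
                row["tt_end"] = None           # stand-in for 'until changed'
                self.table.append(row)
            self.pending.clear()

    table = []
    txn = Transaction(table)
    txn.insert({"emp": "Ada", "salary": 100})
    txn.insert({"emp": "Grace", "salary": 120})
    txn.commit()
    assert table[0]["tt_start"] == table[1]["tt_start"]   # same commit time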
12.
Secure buffering in firm real-time database systems
Binto George, Jayant R. Haritsa. The VLDB Journal: The International Journal on Very Large Data Bases, 2000, 8(3-4): 178-198.
Many real-time database applications arise in electronic financial services, safety-critical installations and military systems
where enforcing security is crucial to the success of the enterprise. We investigate here the performance implications, in terms of killed transactions,
of guaranteeing multi-level secrecy in a real-time database system supporting applications with firm deadlines. In particular, we focus on the buffer management aspects of this issue.
Our main contributions are the following. First, we identify the importance and difficulties of providing secure buffer management
in the real-time database environment. Second, we present SABRE, a novel buffer management algorithm that provides covert-channel-free security. SABRE employs a fully dynamic one-copy allocation policy for efficient usage of buffer resources. It also incorporates
several optimizations for reducing the overall number of killed transactions and for decreasing the unfairness in the distribution
of killed transactions across security levels. Third, using a detailed simulation model, the real-time performance of SABRE
is evaluated against insecure conventional and real-time buffer management policies for a variety of security-classified transaction
workloads and system configurations. Our experiments show that SABRE provides security with only a modest drop in real-time
performance. Finally, we evaluate SABRE's performance when augmented with the GUARD adaptive admission control policy. Our
experiments show that this combination provides close to ideal fairness for real-time applications that can tolerate covert-channel
bandwidths of up to one bit per second (a limit specified in military standards).
Received March 1, 1999 / Accepted October 1, 1999
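
The covert-channel risk can be made concrete with a toy example (entirely hypothetical parameters, not SABRE's model): if a high-security transaction can grab all free frames of a shared buffer pool, a low-security transaction decodes one bit per observation just by watching whether its own allocation succeeds.

    class BufferPool:
        """Toy shared pool: an allocation fails when the pool is full."""
        def __init__(self, frames):
            self.free = frames
        def allocate(self, n):
            if n <= self.free:
                self.free -= n
                return True
            return False
        def release(self, n):
            self.free += n

    def covert_send(pool, bit, frames):
        """HIGH signals one bit by grabbing (or not grabbing) all frames."""
        if bit:
            pool.allocate(frames)

    def covert_receive(pool):
        """LOW decodes the bit from its own allocation outcome."""
        got = pool.allocate(1)
        if got:
            pool.release(1)
        return 0 if got else 1

    pool = BufferPool(frames=8)
    covert_send(pool, bit=1, frames=8)
    print(covert_receive(pool))   # 1: LOW has learned HIGH's bit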
13.
The present paper proposes a methodological framework for the design and evaluation of information technology systems supporting
complex cognitive tasks. The aim of the methodological framework is to permit the design of systems which: (1) address the
cognitive difficulties met by potential users in performing complex problem-solving tasks; (2) improve their potential users’
problem-solving performance; and (3) achieve compatibility with potential users’ competences and working environment. After
a short review of the weaknesses of existing systems intended to support complex cognitive tasks, the theoretical foundations
of the proposed methodology are presented. These are the ergonomic work analysis of French ergonomists, cognitive engineering, cognitive anthropology–ethnomethodology and activity theory. The third section of the paper describes the generic ergonomic model, which constitutes a frame of reference useful for the analyst of the work situation to which the information technology
system is addressed. In the fourth section, the proposed methodology is outlined, and in the fifth a case study demonstrating
an application of the methodology is summarised. In the epilogue, the differences between the proposed methodological framework
and other more conventional approaches are discussed. Finally, directions for future developments of the problem-driven approach
are proposed.
14.
Doug Fang, Shahram Ghandeharizadeh, Dennis McLeod. The VLDB Journal: The International Journal on Very Large Data Bases, 1996, 5(2): 151-165.
An approach and mechanism for the transparent sharing of objects in an environment of interconnected (networked), autonomous database systems is presented. An experimental prototype system has been designed and implemented, and an analysis of its performance conducted. Previous approaches to sharing in this environment typically rely on the use of a global, integrated conceptual database schema; users and applications must pose queries at this new level of abstraction to access remote information. By contrast, our approach provides a mechanism that allows users to import remote objects directly into their local database transparently; access to remote objects is virtually the same as access to local objects. The experimental prototype system that has been designed and implemented is based on the Iris and Omega object-based database management systems; this system supports the sharing of data and meta-data objects (information units) as well as units of behavior. The results of experiments conducted to evaluate the performance of our mechanism demonstrate the feasibility of database transparent object sharing in a federated environment, and provide insight into the performance overhead and tradeoffs involved.
Edited by Georges Gardarin. Received October 29, 1992 / Revised May 4, 1994 / Accepted March 1, 1995
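
The import mechanism's user-visible effect can be sketched with a tiny proxy (hypothetical API; the prototype's actual machinery on Iris and Omega is far richer): attribute access on an imported object looks exactly like access to a local one.

    class RemoteObjectProxy:
        """Local stand-in for an imported remote object; attribute reads
        are forwarded to a fetch callable standing in for remote access."""
        def __init__(self, fetch):
            self._fetch = fetch            # attribute name -> value

        def __getattr__(self, name):       # called for unknown attributes
            return self._fetch(name)

    # A dict plays the role of the remote database here.
    remote_store = {"name": "Widget", "price": 9.99}
    obj = RemoteObjectProxy(remote_store.__getitem__)
    print(obj.name, obj.price)   # reads as if the object were local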
15.
This paper looks from an ethnographic viewpoint at the case of two information systems in a multinational engineering consultancy.
It proposes using the rich findings from ethnographic analysis during requirements discovery. The paper shows how context
– organisational and social – can be taken into account during an information system development process. Socio-technical
approaches are holistic in nature and provide opportunities to produce information systems utilising social science insights,
computer science technical competence and psychological approaches. These approaches provide fact-finding methods that are
appropriate to system participants’ and organisational stakeholders’ needs.
The paper recommends a method of modelling that results in a computerised information system data model that reflects the
conflicting and competing data and multiple perspectives of participants and stakeholders, and that improves interactivity
and conflict management.
16.
Failure detection and consensus in the crash-recovery model
We study the problems of failure detection and consensus in asynchronous systems in which processes may crash and recover,
and links may lose messages. We first propose new failure detectors that are particularly suitable to the crash-recovery model.
We next determine under what conditions stable storage is necessary to solve consensus in this model. Using the new failure
detectors, we give two consensus algorithms that match these conditions: one requires stable storage and the other does not.
Both algorithms tolerate link failures and are particularly efficient in the runs that are most likely in practice – those
with no failures or failure detector mistakes. In such runs, consensus is achieved within 3δ time and with 4n messages, where δ is the maximum message delay and n is the number of processes in the system.
Received May 1998 / Accepted November 1999
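
In symbols, the bound for those runs reads

    T \le 3\delta, \qquad M = 4n,

where δ is the maximum message delay; with n = 7 processes, for example, each consensus instance costs 28 messages.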
17.
Query processing over object views of relational data
Gustav Fahl, Tore Risch. The VLDB Journal: The International Journal on Very Large Data Bases, 1997, 6(4): 261-281.
This paper presents an approach to object view management for relational databases. Such a view mechanism makes it possible for users to transparently work with data in
a relational database as if it was stored in an object-oriented (OO) database. A query against the object view is translated
to one or several queries against the relational database. The results of these queries are then processed to form an answer
to the initial query. The approach is not restricted to a ‘pure’ object view mechanism for the relational data, since the
object view can also store its own data and methods. Therefore it must be possible to process queries that combine local data
residing in the object view with data retrieved from the relational database. We discuss the key issues when object views
of relational databases are developed, namely: how to map relational structures to subtype/supertype hierarchies in the view,
how to represent relational database access in OO query plans, how to provide the concept of object identity in the view,
how to handle the fact that the extension of types in the view depends on the state of the relational database, and how to
process and optimize queries against the object view. The results are based on experiences from a running prototype implementation.
Edited by M.T. Özsu. Received April 12, 1995 / Accepted April 22, 1996
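
A toy sketch of the translation step such a view mechanism performs (the mapping format and names are invented): a simple predicate over a view type is rewritten into SQL against the underlying relational table.

    def translate(view_type, attr, value, mapping):
        """Rewrite 'select objects of view_type where attr = value' into
        SQL. mapping: view type -> (table, {view attribute: column})."""
        table, columns = mapping[view_type]
        return f"SELECT * FROM {table} WHERE {columns[attr]} = :v", {"v": value}

    mapping = {"Person": ("EMP", {"name": "ENAME", "dept": "DEPTNO"})}
    sql, params = translate("Person", "name", "King", mapping)
    print(sql, params)   # SELECT * FROM EMP WHERE ENAME = :v {'v': 'King'}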
18.
R. Braumandl, M. Keidl, A. Kemper, D. Kossmann, A. Kreutz, S. Seltzsam, K. Stocker. The VLDB Journal: The International Journal on Very Large Data Bases, 2001, 10(1): 48-71.
We present the design of ObjectGlobe, a distributed and open query processor for Internet data sources. Today, data is published
on the Internet via Web servers which have, at best, very localized query processing capabilities. The goal of the ObjectGlobe
project is to establish an open marketplace in which data and query processing capabilities can be distributed and used by any kind of Internet application. Furthermore, ObjectGlobe integrates cycle providers (i.e., machines) which carry out query processing operators. The overall picture is to make it possible to execute a query
with – in principle – unrelated query operators, cycle providers, and data sources. Such an infrastructure can serve as enabling
technology for scalable e-commerce applications, e.g., B2B and B2C market places, to be able to integrate data and data processing
operations of a large number of participants. One of the main challenges in the design of such an open system is to ensure
privacy and security. We discuss the ObjectGlobe security requirements, show how basic components such as the optimizer and
runtime system need to be extended, and present the results of performance experiments that assess the additional cost for
secure distributed query processing. Another challenge is quality of service management so that users can constrain the costs
and running times of their queries.
Received 30 October 2000 / Accepted 14 March 2001 / Published online: 7 June 2001
19.
We consider concurrent probabilistic systems, based on probabilistic automata of Segala & Lynch [55], which allow non-deterministic
choice between probability distributions. These systems can be decomposed into a collection of “computation trees” which arise
by resolving the non-deterministic, but not probabilistic, choices. The presence of non-determinism means that certain liveness
properties cannot be established unless fairness is assumed. We introduce a probabilistic branching time logic PBTL, based on the logic TPCTL of Hansson [30] and the logic PCTL of [55], resp. pCTL [14]. The formulas of the logic express properties such as “every request is eventually granted with probability at least
p”. We give three interpretations for PBTL on concurrent probabilistic processes: the first is standard, while in the remaining two interpretations the branching time
quantifiers are taken to range over a certain kind of fair computation trees. We then present a model checking algorithm for
verifying whether a concurrent probabilistic process satisfies a PBTL formula assuming fairness constraints. We also propose adaptations of existing model checking algorithms for pCTL
[4, 14] to obtain procedures for PBTL
under fairness constraints. The techniques developed in this paper have applications in automatic verification of randomized
distributed systems.
Received June 1997 / Accepted May 1998
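
In the usual probabilistic branching-time notation, the quoted property can be written roughly as

    \mathbb{P}_{\ge p}\,[\, \Box(\mathit{request} \rightarrow \Diamond\, \mathit{granted}) \,]

read: with probability at least p, every request is eventually followed by a grant. The concrete PBTL syntax in the paper may differ.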
20.
Peter Muth, Patrick O'Neil, Achim Pick, Gerhard Weikum. The VLDB Journal: The International Journal on Very Large Data Bases, 2000, 8(3-4): 199-221.
Numerous applications such as stock market or medical information systems require that both historical and current data be
logically integrated into a temporal database. The underlying access method must support different forms of “time-travel”
queries, the migration of old record versions onto inexpensive archive media, and high insertion and update rates. This paper
presents an access method for transaction-time temporal data, called the log-structured history data access method (LHAM), which meets these demands. The basic principle of LHAM is to partition the data into successive components based on the timestamps
of the record versions. Components are assigned to different levels of a storage hierarchy, and incoming data is continuously
migrated through the hierarchy. The paper discusses the LHAM concepts, including concurrency control and recovery, our full-fledged
LHAM implementation, and experimental performance results based on this implementation. A detailed comparison with the TSB-tree,
both analytically and based on experiments with real implementations, shows that LHAM is highly superior in terms of insert
performance, while query performance is in almost all cases at least as good as for the TSB-tree; in many cases it is much
better.
Received March 4, 1999 / Accepted September 28, 1999
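
The partitioning principle can be sketched in a few lines of Python (boundaries and component numbering invented for illustration; real LHAM migrates data between components with rolling merges and handles concurrency control and recovery).

    import bisect

    def component_of(timestamp, boundaries):
        """Map a record version's timestamp to a component number:
        0 = most recent (e.g., main memory), higher = older levels of the
        storage hierarchy. boundaries: ascending time splits."""
        return len(boundaries) - bisect.bisect_right(boundaries, timestamp)

    for ts in (250, 150, 50):
        print(ts, "-> component", component_of(ts, [100, 200]))
    # 250 -> component 0 (current data)
    # 150 -> component 1 (older, disk)
    # 50 -> component 2 (oldest, archive media)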