Found 20 similar documents (search time: 140 ms)
1.
Deadlock detection in distributed database systems: a new algorithm and a comparative performance analysis (Cited by: 4; self-citations: 0, others: 4)
Natalija Krivokapić, Alfons Kemper, Ehud Gudes 《The VLDB Journal: The International Journal on Very Large Data Bases》1999,8(2):79-100
This paper attempts a comprehensive study of deadlock detection in distributed database systems. First, the two predominant
deadlock models in these systems and the four different distributed deadlock detection approaches are discussed. Afterwards,
a new deadlock detection algorithm is presented. The algorithm is based on dynamically creating deadlock detection agents (DDAs), each being responsible for detecting deadlocks in one connected component of the global wait-for-graph (WFG). The
DDA scheme is a “self-tuning” system: after an initial warm-up phase, dedicated DDAs will be formed for “centers of locality”,
i.e., parts of the system where many conflicts occur. The system responds to a dynamic shift in locality by automatically
creating new DDAs while obsolete ones terminate. In this paper, we also compare the most competitive
representative of each class of algorithms suitable for distributed database systems based on a simulation model, and point
out their relative strengths and weaknesses. The extensive experiments we carried out indicate that our newly proposed deadlock
detection algorithm outperforms the other algorithms in the vast majority of configurations and workloads and, in contrast
to all other algorithms, is very robust with respect to differing load and access profiles.
Received: December 4, 1997 / Accepted: February 2, 1999
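The common object underlying all four detection approaches is the wait-for-graph. As a minimal illustration (not the paper's DDA algorithm; the graph representation and names below are invented for this sketch), a deadlock in the WFG model is a cycle of transactions, each waiting for a lock held by the next:

```python
# Sketch: detect a cycle in a wait-for-graph (WFG) by depth-first
# search. `wfg` maps each transaction to the set of transactions it
# waits for. Illustrative only; the paper's DDA scheme distributes
# this work over dynamically created agents.
def find_deadlock(wfg):
    """Return one cycle in the WFG as a list of nodes, or None."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {t: WHITE for t in wfg}
    parent = {}

    def dfs(t):
        color[t] = GREY
        for u in wfg.get(t, ()):
            if color.get(u, WHITE) == WHITE:
                parent[u] = t
                cycle = dfs(u)
                if cycle:
                    return cycle
            elif color.get(u) == GREY:     # back edge: cycle found
                cycle, node = [u], t
                while node != u:
                    cycle.append(node)
                    node = parent[node]
                return cycle
        color[t] = BLACK
        return None

    for t in wfg:
        if color[t] == WHITE:
            cycle = dfs(t)
            if cycle:
                return cycle
    return None

# T1 waits for T2, T2 for T3, T3 for T1: a three-transaction deadlock.
cycle = find_deadlock({"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}})
```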
2.
Flip Korn, Alexandros Labrinidis, Yannis Kotidis, Christos Faloutsos 《The VLDB Journal: The International Journal on Very Large Data Bases》2000,8(3-4):254-266
Association Rule Mining algorithms operate on a data matrix (e.g., customers × products) to derive association rules [AIS93b, SA96]. We propose a new paradigm, namely, Ratio Rules, which are quantifiable in that we can measure the “goodness” of a set of discovered rules. We also propose the “guessing
error” as a measure of the “goodness”, that is, the root-mean-square error of the reconstructed values of the cells of the
given matrix, when we pretend that they are unknown. Another contribution is a novel method to guess missing/hidden values
from the Ratio Rules that our method derives. For example, if somebody bought $10 of milk and $3 of bread, our rules can “guess”
the amount spent on butter. Thus, unlike association rules, Ratio Rules can perform a variety of important tasks such as forecasting,
answering “what-if” scenarios, detecting outliers, and visualizing the data. Moreover, we show that we can compute Ratio Rules
in a single pass over the data set with small memory requirements (a few small matrices), in contrast to association rule mining methods
which require multiple passes and/or large memory. Experiments on several real data sets (e.g., basketball and baseball statistics,
biological data) demonstrate that the proposed method: (a) leads to rules that make sense; (b) can find large itemsets in
binary matrices, even in the presence of noise; and (c) consistently achieves a “guessing error” of up to 5 times less than
using straightforward column averages.
Received: March 15, 1999 / Accepted: November 1, 1999
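The guessing-error idea can be sketched as follows, under the simplifying assumption (not the paper's exact protocol) that the rules correspond to a low-rank reconstruction of the data matrix; the synthetic data and the rank-1 choice are illustrative:

```python
import numpy as np

# Sketch of the "guessing error": reconstruct each cell from a rank-1
# approximation of the matrix and compare the root-mean-square error
# with the naive column-average guess. Data is synthetic: spending on
# (bread, milk, butter) follows one dominant ratio, plus noise.
rng = np.random.default_rng(0)
base = rng.uniform(1, 10, size=(100, 1))
X = base @ np.array([[1.0, 2.0, 0.5]]) + rng.normal(0, 0.1, (100, 3))

# Rank-1 reconstruction via SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_hat = s[0] * np.outer(U[:, 0], Vt[0])

rr_error = np.sqrt(np.mean((X - X_hat) ** 2))       # ratio-rule guess
avg_error = np.sqrt(np.mean((X - X.mean(axis=0)) ** 2))  # column means
# rr_error is close to the noise level; avg_error is much larger.
```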
3.
Design and analysis of a video-on-demand server (Cited by: 6; self-citations: 0, others: 6)
The availability of high-speed networks, fast computers and improved storage technology is stimulating interest in the development
of video on-demand services that provide facilities similar to a video cassette player (VCP). In this paper, we present a
design of a video-on-demand (VOD) server, capable of supporting a large number of video requests with complete functionality
of a remote control (as used in VCPs), for each request. In the proposed design, we have used an interleaved storage method
with constrained allocation of video and audio blocks on the disk to provide continuous retrieval. Our storage scheme interleaves
a movie with itself (while satisfying the constraints on video and audio block allocation). This approach minimizes the starting delay and the
buffer requirement at the user end, while ensuring a jitter-free display for every request. In order to minimize the starting
delay and to support more non-concurrent requests, we have proposed the use of multiple disks for the same movie. Since a
disk needs to hold only one movie, an array of inexpensive disks can be used, which reduces the overall cost of the proposed
system. A scheme supported by our disk storage method to provide all the functions of a remote control such as “fast-forwarding”,
“rewinding” (with play “on” or “off”), “pause” and “play” has also been discussed. This scheme handles a user request independent
of others and satisfies it without degrading the quality of service to other users. The server design presented in this paper
achieves the multiple goals of high disk utilization, global buffer optimization, cost-effectiveness and high-quality service
to the users.
4.
Restoring subsampled color images (Cited by: 1; self-citations: 0, others: 1)
In some capturing devices, such as digital cameras, there is only one color sensor at each pixel. Usually, 50% of the pixels
have only a green sensor, 25% only a red sensor, and 25% only a blue sensor. The problem is then to restore the two missing
colors at each pixel – this is called “demosaicing”, because the original samples are usually arranged in a mosaic pattern.
In this short paper, a few demosaicing algorithms are developed and compared. They all incorporate a notion of “smoothness
in chroma space”, by imposing conditions not only on the behavior of each color channel separately, but also on the correlation
between the three channels.
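A minimal sketch of the baseline such algorithms build on, bilinear interpolation of the missing green values (the mask layout and helper are illustrative; the paper's methods additionally exploit the inter-channel correlation mentioned above):

```python
import numpy as np

# Sketch: fill each non-green pixel with the average of its available
# green 4-neighbors (zero-padded at the border). Illustrative only.
def interpolate_green(raw, green_mask):
    g = np.where(green_mask, raw, 0.0)
    cnt = green_mask.astype(float)
    pad_g, pad_c = np.pad(g, 1), np.pad(cnt, 1)
    # Sums over the four axis-aligned neighbors of every pixel.
    nb_sum = (pad_g[:-2, 1:-1] + pad_g[2:, 1:-1] +
              pad_g[1:-1, :-2] + pad_g[1:-1, 2:])
    nb_cnt = (pad_c[:-2, 1:-1] + pad_c[2:, 1:-1] +
              pad_c[1:-1, :-2] + pad_c[1:-1, 2:])
    est = np.divide(nb_sum, nb_cnt, out=np.zeros_like(nb_sum),
                    where=nb_cnt > 0)
    return np.where(green_mask, raw, est)

# Checkerboard green mask (50% of pixels), constant green scene:
# interpolation should reproduce the constant exactly.
mask = (np.indices((4, 4)).sum(axis=0) % 2) == 0
out = interpolate_green(np.full((4, 4), 7.0), mask)
```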
5.
Carlo Combi, Giuseppe Pozzi 《The VLDB Journal: The International Journal on Very Large Data Bases》2001,9(4):294-311
The granularity of given temporal information is the level of abstraction at which information is expressed. Different units of measure allow
one to represent different granularities. Indeterminacy is often present in temporal information given at different granularities:
temporal indeterminacy is related to incomplete knowledge of when the considered fact happened. Focusing on temporal databases, different granularities
and indeterminacy have to be considered in expressing valid time, i.e., the time at which the information is true in the modeled
reality. In this paper, we propose HMAP (The term is the transliteration of an ancient Greek poetical word meaning “day”.), a temporal data model extending the capability
of defining valid times with different granularity and/or with indeterminacy. In HMAP, absolute intervals are explicitly represented by their start, end, and duration: in this way, we can represent valid times as “in December 1998 for five hours”, “from July 1995, for 15 days”, “from March
1997 to October 15, 1997, between 6 and 6:30 p.m.”. HMAP is based on a three-valued logic, for managing uncertainty in temporal relationships. Formulas involving different temporal
relationships between intervals, instants, and durations can be defined, allowing one to query the database with different
granularities, not necessarily related to that of the data. In this paper, we also discuss the complexity of the algorithms for
evaluating HMAP formulas, and show that the formulas can be expressed as constraint networks falling into the class of simple temporal problems,
which can be solved in polynomial time.
Received: 6 August 1998 / Accepted: 13 July 2000 / Published online: 13 February 2001
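The role of the three-valued logic can be illustrated with a toy comparison of indeterminate instants, each known only to lie within a range (this representation is a simplification invented for the sketch, not HMAP's actual model):

```python
# Sketch: an indeterminate instant is a pair (lo, hi) bounding when the
# fact happened. Comparisons return True, False, or None ("unknown").
def before(a, b):
    """Is instant a before instant b?  True / False / None (unknown)."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    if a_hi < b_lo:        # a certainly precedes b
        return True
    if b_hi <= a_lo:       # a certainly does not precede b
        return False
    return None            # the ranges overlap: undetermined

# Days known exactly vs. known only to a range:
assert before((3, 3), (5, 5)) is True    # March 3 before March 5
assert before((5, 5), (3, 3)) is False
assert before((3, 5), (4, 6)) is None    # overlapping uncertainty
```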
6.
Handling message semantics with Generic Broadcast protocols (Cited by: 1; self-citations: 0, others: 1)
Summary. Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,”
that is, message “semantics” is not taken into consideration despite the fact that in several cases semantic information about
messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages
is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic
Broadcast. The paper also presents two algorithms that solve Generic Broadcast.
Received: August 2000 / Accepted: August 2001
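The conflict-relation idea can be illustrated with a small check (not one of the paper's algorithms): two delivery sequences are mutually acceptable Generic Broadcast outcomes exactly when they agree on the relative order of every conflicting pair of messages.

```python
# Sketch: only conflicting messages must be delivered in the same
# relative order at all processes. `conflicts` is a set of frozensets
# of mutually conflicting messages. Illustrative names throughout.
def equivalent(seq_a, seq_b, conflicts):
    if sorted(seq_a) != sorted(seq_b):   # must deliver the same messages
        return False
    def conflict_order(seq):
        return [(m, n) for i, m in enumerate(seq)
                for n in seq[i + 1:] if frozenset((m, n)) in conflicts]
    return conflict_order(seq_a) == conflict_order(seq_b)

# Two writes conflict; reads commute with everything.
conf = {frozenset(("w1", "w2"))}
assert equivalent(["w1", "r1", "w2"], ["r1", "w1", "w2"], conf)
assert not equivalent(["w1", "w2"], ["w2", "w1"], conf)
```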
7.
Xiaodong Wen, Theodore D. Huffmire, Helen H. Hu, Adam Finkelstein 《Multimedia Systems》1999,7(5):350-358
We present several algorithms suitable for analysis of broadcast video. First, we show how wavelet analysis of frames of
video can be used to detect transitions between shots in a video stream, thereby dividing the stream into segments. Next we
describe how each segment can be inserted into a video database using an indexing scheme that involves a wavelet-based “signature.”
Finally, we show that during a subsequent broadcast of a similar or identical video clip, the segment can be found in the
database by quickly searching for the relevant signature. The method is robust against noise and typical variations in the
video stream, even global changes in brightness that can fool histogram-based techniques. In the paper, we compare experimentally
our shot transition mechanism to a color histogram implementation, and also evaluate the effectiveness of our database-searching
scheme. Our algorithms are very efficient and run in realtime on a desktop computer. We describe how this technology could
be employed to construct a “smart VCR” capable of alerting the viewer to the beginning of a specific program or identifying …
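A hypothetical sketch of the signature idea, using one coarse band of a Haar transform with mean removal for brightness invariance (the paper's actual signature construction may differ; data and parameters here are invented):

```python
import numpy as np

def haar_ll(frame):
    """One level of the 2-D Haar transform; return the low-low band."""
    cols = (frame[:, 0::2] + frame[:, 1::2]) / 2.0  # average column pairs
    return (cols[0::2, :] + cols[1::2, :]) / 2.0    # then row pairs

def signature(frame, levels=3):
    f = frame.astype(float)
    for _ in range(levels):
        f = haar_ll(f)
    s = f.ravel()
    return s - s.mean()   # mean removal: robust to global brightness shifts

rng = np.random.default_rng(1)
frame = rng.uniform(0, 255, (64, 64))
rebroadcast = frame + 20.0             # same clip, globally brighter
other = rng.uniform(0, 255, (64, 64))  # unrelated clip

sig = signature(frame)
d_same = np.linalg.norm(signature(rebroadcast) - sig)
d_other = np.linalg.norm(signature(other) - sig)
# d_same stays near zero despite the brightness change that would fool
# a histogram comparison; d_other is large.
```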
8.
We present a new approach to the tracking of very non-rigid patterns of motion, such as water flowing down a stream. The
algorithm is based on a “disturbance map”, which is obtained by linearly subtracting the temporal average of the previous
frames from the new frame. Every local motion creates a disturbance having the form of a wave, with a “head” at the present
position of the motion and a historical “tail” that indicates the previous locations of that motion. These disturbances serve
as loci of attraction for “tracking particles” that are scattered throughout the image. The algorithm is very fast and can
be performed in real time. We provide excellent tracking results on various complex sequences, using both stabilized and moving
cameras, showing a busy ant column, waterfalls, rapids and flowing streams, shoppers in a mall, and cars in a traffic intersection.
Received: 24 June 1997 / Accepted: 30 July 1998
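A minimal sketch of the disturbance-map computation, using an exponential temporal average as the "history" (the averaging scheme and parameters here are illustrative choices, not necessarily the paper's):

```python
import numpy as np

def update(avg, frame, alpha=0.1):
    """Exponential temporal average; returns (new_avg, disturbance map)."""
    disturbance = frame - avg              # new frame minus averaged past
    new_avg = (1 - alpha) * avg + alpha * frame
    return new_avg, disturbance

# A bright spot moving right across a 1-D "image", one pixel per frame.
frames = [np.eye(1, 10, k)[0] * 255 for k in range(5)]
avg = np.zeros(10)
for f in frames:
    avg, dist = update(avg, f)
# The "head" (current position, index 4) dominates the final map, while
# recently vacated positions carry decaying negative "tail" values.
```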
9.
Snakes are active contours that minimize an energy function. We present a new kind of active contours called “Sandwich Snakes”.
They are formed by two snakes, one inside and the other outside of the curve that one is looking for. They have the same number
of particles, which are connected in one-to-one correspondence. At the minimum the two snakes have the same position. We also
present here a multi-scale system, where Sandwich Snakes are adjusted at increasing resolutions, and an interactive tool that
permits one to easily specify the initial position for the Sandwich Snakes. Sandwich Snakes exhibit very good performance in detecting
contours with complex shapes, where the traditional methods fail. They are also very robust with respect to noise.
Received: 29 January 1999 / Accepted: 20 August 2000
10.
Jeannette M. Wing 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(3):261-265
In October 2002 I attended the Ninth Monterey Software Engineering workshop held in Venice, Italy. This year’s theme was titled
“Radical Innovations of Software and Systems Engineering in the Future.” In preparing my talk for the workshop, I thought
hard about what I could possibly say on this topic that would not sound stupid. I certainly thought it would be awfully presumptuous
of me to predict how people will or should be developing software in the future. More easily, I could imagine what the systems
of tomorrow will look like and who will be developing them, though anything I would say would sound like platitudes. I could also state some strong opinions about what matters and what doesn’t in the process of software development. Stating
such attitudes would at least provoke some discussion. Hence, what follows captures some of what I said at the workshop.
Published online: 10 April 2003
11.
Summary. A useless checkpoint is a local checkpoint that cannot be part of a consistent global checkpoint. This paper addresses the following
problem. Given a set of processes that take (basic) local checkpoints in an independent and unknown way, the problem is to design communication-induced checkpointing protocols
that direct processes to take additional local (forced) checkpoints to ensure no local checkpoint is useless.
The paper first proves two properties related to integer timestamps which are associated with each local checkpoint. The first
property is a necessary and sufficient condition that these timestamps must satisfy for no checkpoint to be useless. The second
property provides an easy timestamp-based determination of consistent global checkpoints. Then, a general communication-induced
checkpointing protocol is proposed. This protocol, derived from the two previous properties, actually defines a family of
timestamp-based communication-induced checkpointing protocols. It is shown that several existing checkpointing protocols for
the same problem are particular instances of the general protocol. The design of this general protocol is motivated by the
use of communication-induced checkpointing protocols in “consistent global checkpoint”-based distributed applications such
as the detection of stable or unstable properties and the determination of distributed breakpoints.
Received: July 1997 / Accepted: August 1999
12.
13.
In packet audio applications, packets are buffered at a receiving site and their playout delayed in order to compensate for
variable network delays. In this paper, we consider the problem of adaptively adjusting the playout delay in order to keep
this delay as small as possible, while at the same time avoiding excessive “loss” due to the arrival of packets at the receiver
after their playout time has already passed. The contributions of this paper are twofold. First, given a trace of packet audio
receptions at a receiver, we present efficient algorithms for computing a bound on the achievable performance of any playout delay adjustment algorithm. More precisely, we compute upper and lower bounds (which are shown to be tight for the
range of loss and delay values of interest) on the optimum (minimum) average playout delay for a given number of packet losses
(due to late arrivals) at the receiver for that trace. Second, we present a new adaptive delay adjustment algorithm that tracks
the network delay of recently received packets and efficiently maintains delay percentile information. This information, together
with a “delay spike” detection algorithm based on (but extending) our earlier work, is used to dynamically adjust talkspurt
playout delay. We show that this algorithm outperforms existing delay adjustment algorithms over a number of measured audio
delay traces and performs close to the theoretical optimum over a range of parameter values of interest.
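The percentile-tracking component can be sketched as follows (the window size, quantile, and index rule are illustrative choices invented for this sketch, not the paper's algorithm):

```python
from bisect import insort

class PlayoutEstimator:
    """Track recent packet network delays; pick a percentile of them as
    the playout delay for the next talkspurt. Illustrative only."""
    def __init__(self, window=100, quantile=0.95):
        self.window, self.quantile = window, quantile
        self.recent = []           # delays in arrival order
        self.ordered = []          # the same delays, kept sorted

    def observe(self, delay):
        self.recent.append(delay)
        insort(self.ordered, delay)
        if len(self.recent) > self.window:
            self.ordered.remove(self.recent.pop(0))

    def playout_delay(self):
        i = int(self.quantile * (len(self.ordered) - 1))
        return self.ordered[i]

est = PlayoutEstimator(window=5, quantile=0.8)
for d in [20, 22, 21, 80, 23]:     # milliseconds; 80 is a delay spike
    est.observe(d)
# The 80th-percentile choice (here 23 ms) rides out normal jitter
# without chasing the isolated spike.
```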
14.
We consider concurrent probabilistic systems, based on probabilistic automata of Segala & Lynch [55], which allow non-deterministic
choice between probability distributions. These systems can be decomposed into a collection of “computation trees” which arise
by resolving the non-deterministic, but not probabilistic, choices. The presence of non-determinism means that certain liveness
properties cannot be established unless fairness is assumed. We introduce a probabilistic branching time logic PBTL, based on the logic TPCTL of Hansson [30] and the logics PCTL [55] and pCTL [14]. The formulas of the logic express properties such as “every request is eventually granted with probability at least
p”. We give three interpretations for PBTL on concurrent probabilistic processes: the first is standard, while in the remaining two interpretations the branching time
quantifiers are taken to range over a certain kind of fair computation trees. We then present a model checking algorithm for
verifying whether a concurrent probabilistic process satisfies a PBTL formula assuming fairness constraints. We also propose adaptations of existing model checking algorithms for pCTL
[4, 14] to obtain procedures for PBTL
under fairness constraints. The techniques developed in this paper have applications in automatic verification of randomized
distributed systems.
Received: June 1997 / Accepted: May 1998
15.
Justin E. Harlow III, Franc Brglez 《International Journal on Software Tools for Technology Transfer (STTT)》2001,3(2):193-206
Traditional approaches to the measurement of performance for CAD algorithms involve the use of sets of so-called “benchmark
circuits.” In this paper, we demonstrate that current procedures do not produce results which accurately characterize the
behavior of the algorithms under study. Indeed, we show that the apparent advances in algorithms which are documented by traditional
benchmarking may well be due to chance, and not due to any new properties of the algorithms. As an alternative, we introduce
a new methodology for the characterization of CAD heuristics which employs well-studied design of experiments methods. We
show through numerous examples how such methods can be applied to evaluate the behavior of heuristics used in BDD variable
ordering.
Published online: 15 May 2001
16.
Constantine Stephanidis, Demosthenes Akoumianakis 《Universal Access in the Information Society》2002,1(3):223-226
This paper presents a brief overview of the European Commission-funded Thematic Network (Working Group) “Information Society
for All” (IS4ALL, IST-1999-14101). IS4ALL aims to establish a wide, interdisciplinary and closely collaborating network of
experts to provide the European Health Telematics industry with a comprehensive code of practice on how to appropriate the
benefits of universal design. This paper outlines the project’s main objectives and technical approach in the context of universal
access.
Published online: 9 April 2002
17.
Susanne Graf 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(2):139-141
Preliminary versions of the papers in this special section originally appeared in the proceedings of the 6th edition of the
conference “Tools and Algorithms for the Construction and Analysis of Systems” (TACAS), held in Berlin in early April 2000 as
a constituent event of the European Joint Conferences on Theory and Practice of Software. All papers present tools relevant
in the context of systems validation. The first three focus on extensions or particular applications of model-checking techniques,
whereas the fourth is about integration of design tools with validation tools, in particular theorem provers and model-checkers.
Published online: 24 January 2003
18.
Making a resource or facility “accessible” to persons with disabilities does not necessarily make it “usable” to all such
members of the population. The authors present some valuable lessons learned by National Science Foundation researchers and
educators striving to make engaging activities and inviting curricula inclusive of all students, regardless of ability.
Published online: 9 April 2002
19.
Michiharu Kudo 《International Journal of Information Security》2002,1(2):116-130
Over the years a wide variety of access control models and policies have been proposed, and almost all the models have assumed
“grant the access request or deny it.” They do not provide any mechanism that enables us to bind authorization rules with
required operations such as logging and encryption. We propose the notion of a “provisional action” that tells the user that
his request will be authorized provided he (and/or the system) takes certain actions. The major advantage of our approach
is that arbitrary actions such as cryptographic operations can all coexist in the access control policy rules. We define a
fundamental authorization mechanism and then formalize a provision-based access control model. We also present algorithms
and describe their algorithmic complexity. Finally, we illustrate how provisional access control policy rules can be specified
effectively in practical usage scenarios.
Published online: 22 January 2002
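A toy illustration of provisional authorization (the rule schema, subjects, and action names are invented for this sketch, not the paper's formal model): a decision is returned together with the provisions that must be carried out for the grant to hold.

```python
# Sketch: each rule maps a (subject, object, action) triple to a
# decision plus a list of provisional actions (e.g., logging,
# encryption) that must accompany the grant. Illustrative only.
RULES = [
    (("doctor", "record", "read"), ("grant", ["log_access"])),
    (("nurse",  "record", "read"), ("grant", ["log_access", "encrypt"])),
    (("guest",  "record", "read"), ("deny",  [])),
]

def authorize(subject, obj, action):
    for pattern, (decision, provisions) in RULES:
        if pattern == (subject, obj, action):
            return decision, provisions
    return "deny", []          # default: closed policy

assert authorize("doctor", "record", "read") == ("grant", ["log_access"])
assert authorize("guest", "record", "read")[0] == "deny"
```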
20.
David Gibson, Jon Kleinberg, Prabhakar Raghavan 《The VLDB Journal: The International Journal on Very Large Data Bases》2000,8(3-4):222-236
We describe a novel approach for clustering collections of sets, and its application to the analysis and mining of categorical
data. By “categorical data,” we mean tables with fields that cannot be naturally ordered by a metric – e.g., the names of
producers of automobiles, or the names of products offered by a manufacturer. Our approach is based on an iterative method
for assigning and propagating weights on the categorical values in a table; this facilitates a type of similarity measure
arising from the co-occurrence of values in the dataset. Our techniques can be studied analytically in terms of certain types
of non-linear dynamical systems.
Received: February 15, 1999 / Accepted: August 15, 1999
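The iterative weight-propagation idea can be sketched as follows (the update and normalization rules are illustrative simplifications; the paper's method and its dynamical-systems analysis are more general):

```python
import math

# Sketch: each categorical value carries a weight; in every iteration a
# value's new weight is the summed weight of the values it co-occurs
# with across the rows, after which weights are renormalized. This is
# essentially power iteration on the co-occurrence structure.
def propagate(rows, weights, iterations=20):
    for _ in range(iterations):
        new = {v: 0.0 for v in weights}
        for row in rows:
            for v in row:
                new[v] += sum(weights[u] for u in row if u != v)
        norm = math.sqrt(sum(w * w for w in new.values()))
        weights = {v: w / norm for v, w in new.items()}
    return weights

# Two co-occurrence clusters, {a1, b1} vs {a2, b2}, plus one "bridge"
# row linking a1 and b2.
rows = [("a1", "b1"), ("a1", "b1"), ("a2", "b2"), ("a1", "b2")]
w = propagate(rows, {v: 1.0 for v in ("a1", "b1", "a2", "b2")})
# Values from the denser cluster (a1, b1) end up with the largest weights.
```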