Found 20 similar documents; search time: 765 ms.
1.
Ling Feng Jeffrey Xu Yu Hongjun Lu Jiawei Han 《The VLDB Journal The International Journal on Very Large Data Bases》2002,11(2):153-175
Multidimensional inter-transactional association rules extend the traditional association rules to describe more general
associations among items with multiple properties across transactions. “After McDonald and Burger King open branches, KFC will open a branch two months later and one mile away” is an example of such rules. Since the number of potential inter-transactional association rules tends to be extremely large,
mining inter-transactional associations poses more challenges on efficient processing than mining traditional intra-transactional
associations. In order to make such association rule mining truly practical and computationally tractable, in this study we
present a template model to help users declare the interesting multidimensional inter-transactional associations to be mined. With the guidance of templates, several optimization techniques, i.e., joining, converging, and speeding, are
devised to speed up the discovery of inter-transactional association rules. We show, through a series of experiments on both
synthetic and real-life data sets, that these optimization techniques can yield significant performance benefits.
Edited by M.T. Özsu. Received: February 16, 2001 / Accepted: June 1, 2002 / Published online: September 25, 2002
2.
Carlo Combi Giuseppe Pozzi 《The VLDB Journal The International Journal on Very Large Data Bases》2001,9(4):294-311
The granularity of given temporal information is the level of abstraction at which information is expressed. Different units of measure allow
one to represent different granularities. Indeterminacy is often present in temporal information given at different granularities:
temporal indeterminacy is related to incomplete knowledge of when the considered fact happened. Focusing on temporal databases, different granularities
and indeterminacy have to be considered in expressing valid time, i.e., the time at which the information is true in the modeled
reality. In this paper, we propose HMAP (the term is a transliteration of an ancient Greek poetical word meaning “day”), a temporal data model extending the capability
of defining valid times with different granularity and/or with indeterminacy. In HMAP, absolute intervals are explicitly represented by their start, end, and duration: in this way, we can represent valid times such as “in December 1998 for five hours”, “from July 1995, for 15 days”, “from March
1997 to October 15, 1997, between 6 and 6:30 p.m.”. HMAP is based on a three-valued logic, for managing uncertainty in temporal relationships. Formulas involving different temporal
relationships between intervals, instants, and durations can be defined, allowing one to query the database with different
granularities, not necessarily related to that of data. In this paper, we also discuss the complexity of algorithms, allowing
us to evaluate HMAP formulas, and show that the formulas can be expressed as constraint networks falling into the class of simple temporal problems,
which can be solved in polynomial time.
Received 6 August 1998 / Accepted 13 July 2000 / Published online: 13 February 2001
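The abstract's closing claim, that HMAP formulas reduce to simple temporal problems solvable in polynomial time, can be illustrated with a minimal sketch: an STP is a set of bounds on differences between time points, and its consistency reduces to detecting negative cycles in a distance graph. The function name and dictionary encoding below are illustrative assumptions, not the paper's formalism:

```python
from itertools import product

def stp_consistent(n, constraints):
    """Check consistency of a Simple Temporal Problem over n time points.

    `constraints` maps (i, j) -> (lo, hi), meaning lo <= X_j - X_i <= hi.
    Encoded as a distance graph: edge i->j with weight hi, edge j->i with -lo.
    The STP is consistent iff the graph has no negative cycle.
    """
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), (lo, hi) in constraints.items():
        d[i][j] = min(d[i][j], hi)
        d[j][i] = min(d[j][i], -lo)
    for k, i, j in product(range(n), repeat=3):  # Floyd-Warshall shortest paths
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))   # negative self-loop = inconsistent
```

Floyd-Warshall runs in O(n^3), which is the polynomial-time bound the abstract refers to.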
3.
Michiharu Kudo 《International Journal of Information Security》2002,1(2):116-130
Over the years a wide variety of access control models and policies have been proposed, and almost all the models have assumed
“grant the access request or deny it.” They do not provide any mechanism that enables us to bind authorization rules with
required operations such as logging and encryption. We propose the notion of a “provisional action” that tells the user that
his request will be authorized provided he (and/or the system) takes certain actions. The major advantage of our approach
is that arbitrary actions such as cryptographic operations can all coexist in the access control policy rules. We define a
fundamental authorization mechanism and then formalize a provision-based access control model. We also present algorithms
and describe their algorithmic complexity. Finally, we illustrate how provisional access control policy rules can be specified
effectively in practical usage scenarios.
Published online: 22 January 2002
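The “provisional action” idea, authorization granted on condition that certain actions are taken, can be sketched roughly as rules that return a decision together with a list of required provisions. The rule format and all names below are illustrative assumptions, not the paper's formal model:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    subject: str
    action: str
    target: str
    decision: str           # "grant" or "deny"
    provisions: tuple = ()  # actions (logging, encryption, ...) required with access

def evaluate(rules, subject, action, target):
    """Return (decision, provisions): a plain grant/deny plus any
    provisional actions the matching rule demands."""
    for r in rules:
        if (r.subject, r.action, r.target) == (subject, action, target):
            return r.decision, list(r.provisions)
    return "deny", []       # closed policy: no matching rule means deny

rules = [
    Rule("doctor", "read", "record", "grant", ("log_access", "encrypt_channel")),
    Rule("clerk", "read", "record", "deny"),
]
```

A grant is thus never unconditional when provisions are attached: the caller (or the system) must discharge them for the authorization to hold.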
4.
Design and analysis of a video-on-demand server (cited by 6: 0 self-citations, 6 by others)
The availability of high-speed networks, fast computers and improved storage technology is stimulating interest in the development
of video on-demand services that provide facilities similar to a video cassette player (VCP). In this paper, we present a
design of a video-on-demand (VOD) server, capable of supporting a large number of video requests with complete functionality
of a remote control (as used in VCPs), for each request. In the proposed design, we have used an interleaved storage method
with constrained allocation of video and audio blocks on the disk to provide continuous retrieval. Our storage scheme interleaves
a movie with itself (while satisfying the constraints on video and audio block allocation). This approach minimizes the starting delay and the
buffer requirement at the user end, while ensuring a jitter-free display for every request. In order to minimize the starting
delay and to support more non-concurrent requests, we have proposed the use of multiple disks for the same movie. Since a
disk needs to hold only one movie, an array of inexpensive disks can be used, which reduces the overall cost of the proposed
system. A scheme supported by our disk storage method to provide all the functions of a remote control such as “fast-forwarding”,
“rewinding” (with play “on” or “off”), “pause” and “play” has also been discussed. This scheme handles a user request independent
of others and satisfies it without degrading the quality of service to other users. The server design presented in this paper
achieves the multiple goals of high disk utilization, global buffer optimization, cost-effectiveness and high-quality service
to the users.
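The interleaved storage idea, a movie interleaved with itself so that several staggered playback points can be served in one disk pass, might be sketched as follows. The evenly spaced phase offsets and the function name are my assumptions, not the paper's exact allocation constraints (which also interleave audio with video blocks):

```python
def interleaved_layout(n_blocks, n_phases):
    """Lay out one movie interleaved with itself: disk slot t of each
    cycle stores the blocks needed, at that instant, by n_phases playback
    points staggered n_blocks // n_phases apart.

    Assumes n_phases evenly divides n_blocks for simplicity.
    """
    offset = n_blocks // n_phases
    layout = []
    for t in range(offset):
        for p in range(n_phases):
            layout.append((p * offset + t) % n_blocks)
    return layout
```

With this layout a new non-concurrent request can start at any phase boundary, which is what bounds the worst-case starting delay.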
5.
This paper proposes a new hand-held device called “InfoPoint” that allows appliances to work together over a network. We
have applied the idea of “drag-and-drop” operation as provided in the GUIs of PC and workstation desktop environment. InfoPoint
provides a unified interface that gives different types of appliances “drag-and-drop”-like behaviour for the transfer of data.
Moreover, it can transfer data from/to non-appliances such as pieces of paper. As a result, InfoPoint allows appliances to
work together, in the real-world environment, in terms of data transfer. A prototype of InfoPoint has been implemented and
several experimental applications have been investigated. InfoPoint has shown its applicability in a variety of circumstances.
We believe that the idea proposed in this paper will be a significant technology in the network of the future.
6.
Scott D. Stoller 《Distributed Computing》2000,13(2):85-98
Summary. This paper proposes a framework for detecting global state predicates in systems of processes with approximately-synchronized
real-time clocks. Timestamps from these clocks are used to define two orderings on events: “definitely occurred before” and
“possibly occurred before”. These orderings lead naturally to definitions of three distinct detection modalities, i.e., three meanings of “predicate held during a computation”, namely “possibly held”, “definitely held”, and “definitely held in a specific global state”. This paper defines these modalities and gives efficient algorithms for detecting
them. The algorithms are based on algorithms of Garg and Waldecker, Alagar and Venkatesan, Cooper and Marzullo, and Fromentin
and Raynal. Complexity analysis shows that under reasonable assumptions, these real-time-clock-based detection algorithms
are less expensive than detection algorithms based on Lamport's happened-before ordering. Sample applications are given to
illustrate the benefits of this approach.
Received: January 1999 / Accepted: November 1999
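The two timestamp orderings follow directly from a bound on clock error. Treating each timestamp as accurate to within ±eps of real time gives a rough sketch; this symmetric-uncertainty model is a simplification of the paper's, and the bound value is an assumed parameter:

```python
EPS = 0.005  # assumed bound on clock error at any process (seconds)

def definitely_before(t1, t2, eps=EPS):
    """Event 1 definitely occurred before event 2: even under
    worst-case clock error, event 1's real time precedes event 2's."""
    return t1 + eps < t2 - eps

def possibly_before(t1, t2, eps=EPS):
    """Event 1 possibly occurred before event 2: some admissible
    clock error places event 1's real time before event 2's."""
    return t1 - eps < t2 + eps
```

“Definitely before” is the stronger relation: it implies “possibly before”, but events closer together than the clock uncertainty are only *possibly* ordered, which is what gives rise to the distinct detection modalities.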
7.
Handling message semantics with Generic Broadcast protocols (cited by 1: 0 self-citations, 1 by others)
Summary. Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,”
that is, message “semantics” is not taken into consideration despite the fact that in several cases semantic information about
messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages
is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic
Broadcast. The paper also presents two algorithms that solve Generic Broadcast.
Received: August 2000 / Accepted: August 2001
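The ordering requirement Generic Broadcast relaxes can be stated concretely: two processes' delivery sequences are acceptable as long as they agree on the relative order of every *conflicting* pair, while non-conflicting messages may be delivered in different orders. A small checker sketching that property (names and the pair-set encoding of the conflict relation are mine):

```python
def equivalent_deliveries(seq_a, seq_b, conflict):
    """Check Generic Broadcast's ordering requirement for two delivery
    sequences: every pair of conflicting messages delivered by both
    must appear in the same relative order in both."""
    pos_a = {m: i for i, m in enumerate(seq_a)}
    pos_b = {m: i for i, m in enumerate(seq_b)}
    for m, n in conflict:
        if m in pos_a and n in pos_a and m in pos_b and n in pos_b:
            if (pos_a[m] < pos_a[n]) != (pos_b[m] < pos_b[n]):
                return False          # conflicting pair ordered differently
    return True
```

With an empty conflict relation this degenerates to Reliable Broadcast (any orders allowed); with all pairs conflicting it degenerates to Atomic Broadcast (identical total order), mirroring the special instances the abstract mentions.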
8.
Erik Hollnagel’s body of work in the past three decades has molded much of the current research approach to system safety,
particularly notions of “error”. Hollnagel regards “error” as a dead-end and avoids using the term. This position is consistent
with Rasmussen’s claim that there is no scientifically stable category of human performance that can be described as “error”.
While this systems view is undoubtedly correct, “error” persists. Organizations, especially formal business, political, and
regulatory structures, use “error” as if it were a stable category of human performance. They apply the term to performances
associated with undesired outcomes, tabulate occurrences of “error”, and justify control and sanctions through “error”. Although
a compelling argument can be made for Hollnagel’s view, it is clear that notions of “error” are socially and organizationally
productive. The persistence of “error” in management and regulatory circles reflects its value as a means for social control.
9.
We present a new approach to the tracking of very non-rigid patterns of motion, such as water flowing down a stream. The
algorithm is based on a “disturbance map”, which is obtained by linearly subtracting the temporal average of the previous
frames from the new frame. Every local motion creates a disturbance having the form of a wave, with a “head” at the present
position of the motion and a historical “tail” that indicates the previous locations of that motion. These disturbances serve
as loci of attraction for “tracking particles” that are scattered throughout the image. The algorithm is very fast and can
be performed in real time. We provide excellent tracking results on various complex sequences, using both stabilized and moving
cameras, showing a busy ant column, waterfalls, rapids and flowing streams, shoppers in a mall, and cars in a traffic intersection.
Received: 24 June 1997 / Accepted: 30 July 1998
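The “disturbance map” itself is simple to state: subtract the temporal average of the previous frames from the new frame, so moving regions stand out as a residue wave. A toy one-dimensional sketch, with flat pixel lists standing in for frames (the real method operates on 2-D images and feeds the map to tracking particles):

```python
def disturbance_map(frames, new_frame):
    """Disturbance map: the new frame minus the per-pixel temporal
    average of the previous frames. Static background cancels out;
    motion leaves a non-zero residue ("head" plus historical "tail")."""
    n = len(frames)
    avg = [sum(col) / n for col in zip(*frames)]      # per-pixel temporal mean
    return [p - a for p, a in zip(new_frame, avg)]
```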
10.
Paolo Traverso Piergiorgio Bertoli 《International Journal on Software Tools for Technology Transfer (STTT)》2000,3(1):78-92
We present part of an industrial project where mechanized theorem proving is used for the validation of a translator which
generates safety-critical software. In this project, the mechanized proof is decomposed into two parts: one is done “online”,
at each run of the translator, by a custom prover which checks automatically that the result of each translation meets some
verification conditions; the other is done “offline”, once for all, interactively with a general purpose prover; the offline
proof shows that the verification conditions checked by the online prover are sufficient to guarantee the correctness of each
translation. The provably correct verification conditions can thus be seen as specifications for the online prover. This approach
is called mechanized result verification. This paper describes the project requirements and explains the motivations to formal validation by mechanized result verification,
provides an overview of the formalization of the specifications for the online prover and discusses in detail some issues
we have addressed in the mechanized offline proof.
11.
Andrew Fano 《Personal and Ubiquitous Computing》2001,5(1):12-15
The promise of mobile devices lies not in their capacity to duplicate the capabilities of desktop machines, but rather in
their promise of enabling location-specific tasks. One of the challenges that must be addressed if they are to be used in
this way is how intuitive interfaces for mobile devices can be designed that enable access to location-specific services usable
across locations. We are developing a prototype mobile valet application that presents location-specific services organised
around the tasks associated with a location. The basic elements of the interface exploit commonalities in the way we address
tasks at various locations just as the familiar “file” and “edit” menus in various software applications exploit regularities
in software tasks.
12.
Yukio Itakura Masaki Hashiyada Toshio Nagashima Shigeo Tsujii 《International Journal of Information Security》2002,1(3):149-160
The individual differences in the repeat count of several bases, short tandem repeat (STR), among all of the deoxyribonucleic
acid (DNA) base sequences, can be used as unique DNA information for a personal identification (ID). We propose a method to
generate a personal identifier (hereafter referred to as a “DNA personal ID”) by specifying multiple STR locations (called
“loci”) and then sequencing the repeat count information. We also conducted a validation experiment to verify the proposed
principle based on actual DNA data.
We verified that the matching probability of DNA personal IDs becomes exponentially smaller, to about 10^-n, as n loci are used, and that no correlation exists among the loci.
Next, we considered the various issues that will be encountered when applying DNA personal IDs to information security systems,
such as biometric personal authentication systems.
Published online: 9 April 2002
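The construction described, concatenating repeat counts at chosen loci, and the exponential shrinkage of the matching probability across independent loci, can be sketched as follows. The locus names and the ID string format are illustrative, not the paper's:

```python
def dna_personal_id(str_counts):
    """Build a DNA personal ID by concatenating the short-tandem-repeat
    count at each locus, in a canonical (sorted) locus order."""
    return "-".join(f"{locus}:{count}" for locus, count in sorted(str_counts.items()))

def match_probability(per_locus_p, n_loci):
    """If each independent locus matches by chance with probability
    per_locus_p, the overall chance-match probability is per_locus_p^n,
    i.e., it shrinks exponentially in the number of loci."""
    return per_locus_p ** n_loci
```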
13.
Xiaodong Wen Theodore D. Huffmire Helen H. Hu Adam Finkelstein 《Multimedia Systems》1999,7(5):350-358
We present several algorithms suitable for analysis of broadcast video. First, we show how wavelet analysis of frames of
video can be used to detect transitions between shots in a video stream, thereby dividing the stream into segments. Next we
describe how each segment can be inserted into a video database using an indexing scheme that involves a wavelet-based “signature.”
Finally, we show that during a subsequent broadcast of a similar or identical video clip, the segment can be found in the
database by quickly searching for the relevant signature. The method is robust against noise and typical variations in the
video stream, even global changes in brightness that can fool histogram-based techniques. In the paper, we compare experimentally
our shot transition mechanism to a color histogram implementation, and also evaluate the effectiveness of our database-searching
scheme. Our algorithms are very efficient and run in real time on a desktop computer. We describe how this technology could
be employed to construct a “smart VCR” capable of alerting the viewer to the beginning of a specific program or identifying …
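The signature-and-search scheme can be caricatured in one dimension: a coarse Haar low-pass band, computed after subtracting the mean (so global brightness shifts cancel, the failure mode of histogram methods), serves as the signature, and lookup is nearest-stored-signature within a tolerance. The transform depth, distance metric, and tolerance below are my assumptions, not the paper's:

```python
def haar_signature(pixels, levels=2):
    """Mean-subtracted, repeatedly pairwise-averaged (Haar low-pass)
    band of a frame; invariant to a global brightness offset.
    Assumes len(pixels) is divisible by 2**levels."""
    mean = sum(pixels) / len(pixels)
    band = [p - mean for p in pixels]
    for _ in range(levels):
        band = [(band[i] + band[i + 1]) / 2 for i in range(0, len(band), 2)]
    return band

def match(sig, database, tol=1.0):
    """Return the key of the stored signature closest to `sig`,
    if any lies within `tol` (sum of absolute differences)."""
    best, best_d = None, tol
    for key, stored in database.items():
        d = sum(abs(a - b) for a, b in zip(sig, stored))
        if d < best_d:
            best, best_d = key, d
    return best
```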
14.
Matt Kaufmann 《International Journal on Software Tools for Technology Transfer (STTT)》2000,3(1):13-19
The well-publicized Year 2000 problem provides interesting challenges for the remediation of noncompliant code. This paper
describes some work done at EDS CIO Services, using the ACL2 theorem prover to formally verify correctness of remediation
rules. The rules take into account the possibility of “flag” (non-date) values of date variables. Many of them have been implemented
in an in-house tool, COGEN 2000™, that corrects for noncompliant date-related logic in COBOL programs.
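A typical remediation rule of the kind being verified combines a pivot-year window with protection for “flag” (non-date) values; the particular pivot year and flag set below are illustrative assumptions, not EDS's actual rules:

```python
FLAGS = {0, 99}   # assumed sentinel values that are not real dates
PIVOT = 50        # assumed pivot: two-digit years below it map to 20xx

def remediate_year(yy):
    """Expand a two-digit year field, leaving flag values untouched,
    so comparisons on the expanded field stay correct."""
    if yy in FLAGS:
        return yy                      # preserve non-date sentinel
    return 2000 + yy if yy < PIVOT else 1900 + yy
```

The interesting verification obligation is exactly the interaction shown here: the rule must be correct for genuine dates *and* must not corrupt flag values, which is why flag handling is singled out in the abstract.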
15.
S. Bernardi S. Donatelli A. Horváth 《International Journal on Software Tools for Technology Transfer (STTT)》2001,3(4):417-430
An implementation of compositionality for stochastic well-formed nets (SWN) and, consequently, for generalized stochastic
Petri nets (GSPN) has been recently included in the GreatSPN tool. Given two SWNs and a labelling function for places and
transitions, it is possible to produce a third one as a superposition of places and transitions of equal label. Colour domains
and arc functions of SWNs have to be treated appropriately. The main motivation for this extension was the need to evaluate
a library of fault-tolerant “mechanisms” that have been recently defined, and are now under implementation, in a European
project called TIRAN. The goal of the TIRAN project is to devise a portable software solution to the problem of fault tolerance
in embedded systems, while the goal of the evaluation is to provide evidence of the efficacy of the proposed solution. Modularity
being a natural “must” for the project, we have tried to reflect it in our modelling effort. In this paper, we discuss the
implementation of compositionality in the GreatSPN tool, and we show its use for the modelling of one of the TIRAN mechanisms,
the so-called local voter.
Published online: 24 August 2001
16.
A common need in machine vision is to compute the 3-D rigid body transformation that aligns two sets of points for which correspondence
is known. A comparative analysis is presented here of four popular and efficient algorithms, each of which computes the translational
and rotational components of the transform in closed form, as the solution to a least squares formulation of the problem.
They differ in terms of the transformation representation used and the mathematical derivation of the solution, using respectively
singular value decomposition or eigensystem computation based on the standard representation, and the eigensystem analysis of matrices derived from unit and dual quaternion forms of the transform. This
comparison presents both qualitative and quantitative results of several experiments designed to determine (1) the accuracy
and robustness of each algorithm in the presence of different levels of noise, (2) the stability with respect to degenerate
data sets, and (3) relative computation time of each approach under different conditions. The results indicate that under
“ideal” data conditions (no noise) certain distinctions in accuracy and stability can be seen. But for “typical, real-world”
noise levels, there is no difference in the robustness of the final solutions (contrary to certain previously published results).
Efficiency, in terms of execution time, is found to be highly dependent on the computer system setup.
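Of the four closed-form algorithms compared, the SVD-based solution over the standard representation is the most commonly seen; a compact Kabsch-style sketch of it (my code, not the paper's):

```python
import numpy as np

def align_svd(P, Q):
    """Closed-form least-squares rigid alignment: find rotation R and
    translation t minimizing sum ||R @ p_i + t - q_i||^2 over
    corresponding rows of P and Q."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

The unit- and dual-quaternion variants solve the same least-squares problem via eigensystem analysis of derived 4x4 matrices; as the abstract notes, at realistic noise levels the final solutions are equally robust.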
17.
David Gibson Jon Kleinberg Prabhakar Raghavan 《The VLDB Journal The International Journal on Very Large Data Bases》2000,8(3-4):222-236
We describe a novel approach for clustering collections of sets, and its application to the analysis and mining of categorical
data. By “categorical data,” we mean tables with fields that cannot be naturally ordered by a metric – e.g., the names of
producers of automobiles, or the names of products offered by a manufacturer. Our approach is based on an iterative method
for assigning and propagating weights on the categorical values in a table; this facilitates a type of similarity measure
arising from the co-occurrence of values in the dataset. Our techniques can be studied analytically in terms of certain types
of non-linear dynamical systems.
Received: February 15, 1999 / Accepted: August 15, 1999
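The iterative weight-assignment idea can be sketched crudely: each categorical value repeatedly absorbs the weights of values it co-occurs with in table rows, and the weight vector is renormalized, so values from the same co-occurrence cluster reinforce one another. The combiner (a plain sum) and the normalization below are simplifying assumptions; the paper studies more general non-linear variants of this dynamical system:

```python
def propagate_weights(rows, iterations=20):
    """Iteratively propagate weights over categorical values: a value's
    new weight is the sum, over rows containing it, of its co-occurring
    values' weights, renormalized to unit length each round."""
    values = {v for row in rows for v in row}
    w = {v: 1.0 for v in values}
    for _ in range(iterations):
        new = {v: 0.0 for v in values}
        for row in rows:
            s = sum(w[v] for v in row)
            for v in row:
                new[v] += s - w[v]            # weights of the *other* values in the row
        norm = sum(x * x for x in new.values()) ** 0.5
        w = {v: x / norm for v, x in new.items()}
    return w
```

Values that co-occur frequently end up with the largest weights, yielding the similarity structure used for clustering.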
18.
This paper describes an approach to the problem of articulating multimedia information based on parsing and syntax-directed
translation that uses Relational Grammars. This translation is followed by a constraint-solving mechanism to create the final
layout. Grammatical rules provide the mechanism for mapping from a representation of the content and context of a presentation
to forms that specify the media objects to be realized. These realization forms include sets of spatial and temporal constraints
between elements of the presentation. Individual grammars encapsulate the “look and feel” of a presentation and can be used
as generators of such a style. By making the grammars sensitive to the requirements of the output medium, parsing can introduce
flexibility into the information realization process.
19.
In packet audio applications, packets are buffered at a receiving site and their playout delayed in order to compensate for
variable network delays. In this paper, we consider the problem of adaptively adjusting the playout delay in order to keep
this delay as small as possible, while at the same time avoiding excessive “loss” due to the arrival of packets at the receiver
after their playout time has already passed. The contributions of this paper are twofold. First, given a trace of packet audio
receptions at a receiver, we present efficient algorithms for computing a bound on the achievable performance of any playout delay adjustment algorithm. More precisely, we compute upper and lower bounds (which are shown to be tight for the
range of loss and delay values of interest) on the optimum (minimum) average playout delay for a given number of packet losses
(due to late arrivals) at the receiver for that trace. Second, we present a new adaptive delay adjustment algorithm that tracks
the network delay of recently received packets and efficiently maintains delay percentile information. This information, together
with a “delay spike” detection algorithm based on (but extending) our earlier work, is used to dynamically adjust talkspurt
playout delay. We show that this algorithm outperforms existing delay adjustment algorithms over a number of measured audio
delay traces and performs close to the theoretical optimum over a range of parameter values of interest.
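The percentile-tracking part of such an adjustment algorithm can be sketched with a sliding window of recent packet delays; the window size and percentile below are illustrative parameters, and the paper's delay-spike detection logic is omitted:

```python
from bisect import insort

class PlayoutBuffer:
    """Track recent network delays and set the per-talkspurt playout
    delay to a high percentile of them, trading a small late-packet
    loss rate for low average delay."""

    def __init__(self, window=100, percentile=0.95):
        self.window, self.percentile = window, percentile
        self.recent = []   # arrival order, for window eviction
        self.sorted = []   # same delays kept sorted, for percentile lookup

    def observe(self, delay):
        self.recent.append(delay)
        insort(self.sorted, delay)
        if len(self.recent) > self.window:
            self.sorted.remove(self.recent.pop(0))   # evict oldest sample

    def playout_delay(self):
        # A delay exceeded by only (1 - percentile) of recent packets:
        # packets later than this are counted as lost.
        idx = min(int(self.percentile * len(self.sorted)), len(self.sorted) - 1)
        return self.sorted[idx]
```

Raising the percentile lowers late loss at the cost of delay; the bounds computed in the paper quantify how close such an adaptive rule can get to the per-trace optimum.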