20 matching records found (search time: 15 ms)
1.
Carlo Combi Giuseppe Pozzi 《The VLDB Journal The International Journal on Very Large Data Bases》2001,9(4):294-311
The granularity of given temporal information is the level of abstraction at which information is expressed. Different units of measure allow
one to represent different granularities. Indeterminacy is often present in temporal information given at different granularities:
temporal indeterminacy is related to incomplete knowledge of when the considered fact happened. Focusing on temporal databases, different granularities
and indeterminacy have to be considered in expressing valid time, i.e., the time at which the information is true in the modeled
reality. In this paper, we propose HMAP (The term is the transliteration of an ancient Greek poetical word meaning “day”.), a temporal data model extending the capability
of defining valid times with different granularity and/or with indeterminacy. In HMAP, absolute intervals are explicitly represented by their start, end, and duration: in this way, we can represent valid times such as “in December 1998 for five hours”, “from July 1995, for 15 days”, “from March
1997 to October 15, 1997, between 6 and 6:30 p.m.”. HMAP is based on a three-valued logic, for managing uncertainty in temporal relationships. Formulas involving different temporal
relationships between intervals, instants, and durations can be defined, allowing one to query the database with different
granularities, not necessarily related to that of the data. In this paper, we also discuss the complexity of the algorithms used to evaluate HMAP formulas, and show that the formulas can be expressed as constraint networks falling into the class of simple temporal problems, which can be solved in polynomial time.
Received 6 August 1998 / Accepted 13 July 2000 / Published online: 13 February 2001
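As an aside, the reduction to simple temporal problems (STPs) mentioned in this abstract can be sketched concretely. This is a generic STP consistency check, not the authors' HMAP evaluator, and all names are illustrative: each constraint l ≤ x_j − x_i ≤ u becomes a pair of weighted edges in a distance graph, and the constraint network is consistent iff that graph has no negative cycle, decidable in polynomial time by all-pairs shortest paths.

```python
# Generic simple-temporal-problem (STP) consistency check via
# Floyd-Warshall on the distance graph: a constraint l <= x_j - x_i <= u
# becomes edge i->j with weight u and edge j->i with weight -l.
# The STP is consistent iff the distance graph has no negative cycle.
def stp_consistent(n, constraints):
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, lo, hi in constraints:
        d[i][j] = min(d[i][j], hi)
        d[j][i] = min(d[j][i], -lo)
    for k in range(n):
        for a in range(n):
            for b in range(n):
                if d[a][k] + d[k][b] < d[a][b]:
                    d[a][b] = d[a][k] + d[k][b]
    # A negative entry on the diagonal signals a negative cycle.
    return all(d[i][i] >= 0 for i in range(n))

# x1 occurs 5 to 10 time units after x0: consistent on its own.
ok = stp_consistent(2, [(0, 1, 5, 10)])
# Adding "x0 occurs within [-3, 3] of x1" contradicts the above.
bad = stp_consistent(2, [(0, 1, 5, 10), (1, 0, -3, 3)])
```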
2.
Efficient extraction of primitives from line drawings composed of horizontal and vertical lines   Cited by: 6 (self: 0, others: 6)
The performance of the algorithms for the extraction of primitives for the interpretation of line drawings is usually affected
by the degradation of the information contained in the document due to factors such as low print contrast, defocusing, skew,
etc. In this paper, we propose two algorithms for the extraction of primitives with good performance under degradation.
The application of the algorithms is restricted to line drawings composed of horizontal and vertical lines. The performance
of the algorithms has been evaluated by using a protocol described in the literature.
Received: 6 August 1996 / Accepted: 16 July 1997
3.
I. Laptev H. Mayer T. Lindeberg W. Eckstein C. Steger A. Baumgartner 《Machine Vision and Applications》2000,12(1):23-31
Abstract. We propose a new approach for automatic road extraction from aerial imagery with a model and a strategy mainly based on the
multi-scale detection of roads in combination with geometry-constrained edge extraction using snakes. A main advantage of
our approach is that, for the first time, it allows bridging shadows and partially occluded areas by using the heavily disturbed evidence in the image. Additionally, it has only a few parameters to be adjusted. The road network is constructed after extracting
crossings with varying shape and topology. We show the feasibility of the approach not only by presenting reasonable results
but also by evaluating them quantitatively based on ground truth.
Received: 22 July 1999 / Accepted: 20 March 2000
4.
An accurate license-plate localization algorithm based on texture analysis   Cited by: 1 (self: 1, others: 0)
In implementing a license plate recognition (LPR) system, the most critical steps are extracting the plate image and segmenting the plate's character images. This paper presents a plate-localization method based on the texture features and statistical regularities of the characters in the plate region. Since illumination, complex backgrounds, and similar factors all degrade plate localization, searching for the plate region via the rich texture of the plate characters sidesteps these effects. The algorithm not only removes the influence of illumination and complex backgrounds, but also places few restrictions on the size of the plate in the image, its position, or its skew angle. Experiments show that the algorithm localizes plates accurately and is highly adaptable.
5.
Automatic license-plate localization and extraction based on edges and SVM   Cited by: 5 (self: 1, others: 4)
This paper proposes a license-plate localization and extraction method combining edge information with an SVM. First, candidate plate regions are obtained by coarse filtering on character boundary features; an SVM classifier then separates character regions from non-character regions; finally, the plate is localized and extracted according to plate features. Experiments show that the method achieves good results.
6.
D. Laurent J. Lechtenbörger N. Spyratos G. Vossen 《The VLDB Journal The International Journal on Very Large Data Bases》2001,10(4):295-315
Views over databases have regained attention in the context of data warehouses, which are seen as materialized views. In this setting, efficient view maintenance is an important issue, for which the notion of self-maintainability has been identified as desirable. In this paper, we extend the concept of self-maintainability to (query and update) independence within a formal framework, where independence with respect to arbitrary given sets of queries and updates over the sources
can be guaranteed. To this end we establish an intuitively appealing connection between warehouse independence and view complements. Moreover, we study special kinds of complements, namely monotonic complements, and show how to compute minimal ones in the presence of keys and foreign keys in the underlying databases. Taking advantage
of these complements, an algorithmic approach is proposed for the specification of independent warehouses with respect to
given sets of queries and updates.
Received: 21 November 2000 / Accepted: 1 May 2001 / Published online: 6 September 2001
7.
Sérgio Vale Aguiar Campos Edmund Clarke 《International Journal on Software Tools for Technology Transfer (STTT)》1999,2(3):260-269
The task of checking if a computer system satisfies its timing specifications is extremely important. These systems are often
used in critical applications where failure to meet a deadline can have serious or even fatal consequences. This paper presents
an efficient method for performing this verification task. In the proposed method a real-time system is modeled by a state-transition
graph represented by binary decision diagrams. Efficient symbolic algorithms exhaustively explore the state space to determine
whether the system satisfies a given specification. In addition, our approach computes quantitative timing information such
as minimum and maximum time delays between given events. These results provide insight into the behavior of the system and
assist in the determination of its temporal correctness. The technique evaluates how well the system works or how seriously
it fails, as opposed to only whether it works or not. Based on these techniques a verification tool called Verus has been constructed. It has been used in the verification of several industrial real-time systems such as the robotics system
described below. This demonstrates that the method proposed is efficient enough to be used in real-world designs. The examples
verified show how the information produced can assist in designing more efficient and reliable real-time systems.
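The quantitative timing analysis described above (e.g., minimum delay between events) can be illustrated with an explicit-state sketch. The real tool works symbolically on binary decision diagrams, so this is only the underlying idea; the toy transition graph and names are hypothetical:

```python
from collections import deque

# Illustrative explicit-state sketch (not BDD-based): the minimum delay
# between two events is the length of the shortest transition path from
# any start state to any target state, found by breadth-first search.
def min_delay(transitions, start, target):
    dist = {s: 0 for s in start}
    q = deque(start)
    while q:
        s = q.popleft()
        if s in target:
            return dist[s]  # BFS order guarantees this is minimal
        for t in transitions.get(s, []):
            if t not in dist:
                dist[t] = dist[s] + 1
                q.append(t)
    return None  # target unreachable from the start states

# Toy state-transition graph: two paths of length 2 from state 0 to 3.
g = {0: [1, 2], 1: [3], 2: [3], 3: [0]}
d = min_delay(g, {0}, {3})  # 0 -> 1 -> 3 (or 0 -> 2 -> 3): 2 steps
```

The maximum-delay computation in the paper is the harder direction (it must handle cycles); this sketch covers only the minimum-delay case.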
8.
Fast image retrieval using color-spatial information   Cited by: 1 (self: 0, others: 1)
Beng Chin Ooi Kian-Lee Tan Tat Seng Chua Wynne Hsu 《The VLDB Journal The International Journal on Very Large Data Bases》1998,7(2):115-128
In this paper, we present an image retrieval system that employs both the color and spatial information of images to facilitate
the retrieval process. The basic unit used in our technique is a single-colored cluster, which bounds a homogeneous region of that color in an image. Two clusters from two images are similar if they are of the
same color and overlap in the image space. The number of clusters that can be extracted from an image can be very large, and
it affects the accuracy of retrieval. We study the effect of the number of clusters on retrieval effectiveness to determine
an appropriate value for “optimal” performance. To facilitate efficient retrieval, we also propose a multi-tier indexing
mechanism called the Sequenced Multi-Attribute Tree (SMAT). We implemented a two-tier SMAT, where the first layer is used to prune away clusters that are of different colors,
while the second layer discriminates clusters of different spatial locality. We conducted an experimental study on an image
database consisting of 12,000 images. Our results show the effectiveness of the proposed color-spatial approach, and the efficiency
of the proposed indexing mechanism.
Received August 1, 1997 / Accepted December 9, 1997
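The single-colored-cluster matching idea can be sketched as follows. This is a simplified reading of the abstract, not the paper's SMAT index: the cluster representation (color plus bounding box) and the match-count score are assumptions made for illustration.

```python
# Hypothetical sketch of color-spatial matching: a cluster is a pair
# (color, bounding box); two clusters match if they share a color and
# their boxes overlap in image space, per the similarity rule above.
def boxes_overlap(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def similarity(clusters_q, clusters_db):
    # Count query clusters that have a same-color, overlapping match.
    return sum(
        any(cq[0] == cd[0] and boxes_overlap(cq[1], cd[1]) for cd in clusters_db)
        for cq in clusters_q
    )

q = [("red", (0, 0, 10, 10)), ("blue", (20, 20, 30, 30))]
db = [("red", (5, 5, 15, 15)), ("blue", (40, 40, 50, 50))]
s = similarity(q, db)  # red clusters overlap; blue ones do not -> 1
```

The point of the paper's two-tier index is to avoid the nested loop above: the first tier prunes by color, the second by spatial locality.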
9.
V. Vuori J. Laaksonen E. Oja J. Kangas 《International Journal on Document Analysis and Recognition》2001,3(3):150-159
This paper describes an adaptive recognition system for isolated handwritten characters and the experiments carried out with
it. The characters used in our experiments are alphanumeric characters, including both the upper- and lower-case versions
of the Latin alphabets and three Scandinavian diacriticals. The writers are allowed to use their own natural style of writing.
The recognition system is based on the k-nearest neighbor rule. The six character similarity measures applied by the system are all based on dynamic time warping.
The aim of the first experiments is to choose the best combination of the simple preprocessing and normalization operations
and the dissimilarity measure for a multi-writer system. However, the main focus of the work is on online adaptation. The
purpose of the adaptations is to turn a writer-independent system into a writer-dependent one and to increase recognition performance.
The adaptation is carried out by modifying the prototype set of the classifier according to its recognition performance and
the user's writing style. The ways of adaptation include: (1) adding new prototypes; (2) inactivating confusing prototypes;
and (3) reshaping existing prototypes. The reshaping algorithm is based on Learning Vector Quantization. Four different
adaptation strategies, according to which the modifications of the prototype set are performed, have been studied both offline
and online. Adaptation is carried out in a self-supervised fashion during normal use and thus remains unnoticed by the user.
Received June 30, 1999 / Revised September 29, 2000
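The dynamic time warping (DTW) dissimilarity underlying the six measures can be illustrated for 1-D sequences. The actual system warps 2-D pen trajectories with measure-specific costs; this minimal sketch shows only the standard DP recurrence.

```python
# Minimal dynamic time warping distance between two 1-D sequences:
# d[i][j] is the cheapest alignment cost of a[:i] against b[:j],
# where each matched pair contributes |a[i-1] - b[j-1]|.
def dtw(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the best of: skip in a, skip in b, or match both.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

dist = dtw([1, 2, 3], [1, 2, 2, 3])  # the duplicated 2 is absorbed -> 0.0
```

A k-NN classifier over such distances is then writer-adapted simply by editing the prototype set, which is what makes the adaptation strategies in the paper cheap to apply online.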
10.
Summary. In this paper we introduce and analyze two new cost measures related to the communication overhead and the space requirements
associated with virtual path layouts in ATM networks, that is, the edge congestion and the node congestion. Informally, the edge congestion of a given edge e at an incident node u is defined as the number of VPs terminating at or starting from u and using e, while the node congestion of a node v is defined as the number of VPs having v as an endpoint. We investigate the problem of constructing virtual path layouts that connect a specified root node to all the others in at most h hops and with maximum edge or node congestion c, for two given integers h and c. We first give tight results concerning the time complexity of constructing such layouts for both congestion measures, that is, we exactly determine all the tractable and intractable cases. Then, we provide some combinatorial bounds
for arbitrary networks, together with optimal layouts for specific topologies such as chains, rings and grids.
Received: December 1997 / Accepted: August 2000
11.
Traditional digital particle image velocimetry (DPIV) methods are based on area correlation. Though proven to be time-consuming and error-prone, area correlation has been widely adopted because it is conceptually simple and easy to implement, and also because there are few alternatives. This paper provides a non-correlative, conceptually new, fast, and efficient approach
for DPIV which takes the nature of flow into consideration. An incompressible affine flow model (IAFM) is introduced to describe the flow, incorporating a rational constraint directly into the computation. This IAFM, combined with a modified optical flow method – named total optical flow computation – provides a linear system solution to DPIV. Experimental results on real images demonstrate our method to be a very promising
approach for DPIV.
Received: 23 March 1998 / Accepted: 1 September 1999
12.
Ashish Mehta James Geller Yehoshua Perl Erich Neuhold 《The VLDB Journal The International Journal on Very Large Data Bases》1998,7(1):25-47
A path-method is used as a mechanism in object-oriented databases (OODBs) to retrieve or to update information relevant to one class that
is not stored with that class but with some other class. A path-method is a method which traverses from one class through
a chain of connections between classes and accesses information at another class. However, it is a difficult task for a casual
user or even an application programmer to write path-methods to facilitate queries. This is because it might require comprehensive
knowledge of many classes of the conceptual schema that are not directly involved in the query, and therefore may not even
be included in a user's (incomplete) view about the contents of the database. We have developed a system, called path-method generator (PMG), which generates path-methods automatically according to a user's database-manipulating requests. The PMG offers the
user one of the possible path-methods and the user verifies from his knowledge of the intended purpose of the request whether
that path-method is the desired one. If the path method is rejected, then the user can utilize his now increased knowledge
about the database to request (with additional parameters given) another offer from the PMG. The PMG is based on access weights attached to the connections between classes and precomputed access relevance between every pair of classes of the OODB. Specific rules for access weight assignment and algorithms for computing access
relevance appeared in our previous papers [MGPF92, MGPF93, MGPF96]. In this paper, we present a variety of traversal algorithms
based on access weights and precomputed access relevance. Experiments identify some of these algorithms as very successful
in generating most desired path-methods. The PMG system utilizes these successful algorithms and is thus an efficient tool
for aiding the user with the difficult task of querying and updating a large OODB.
Received July 19, 1993 / Accepted May 16, 1997
13.
Approximate query processing using wavelets   Cited by: 7 (self: 0, others: 7)
Kaushik Chakrabarti Minos Garofalakis Rajeev Rastogi Kyuseok Shim 《The VLDB Journal The International Journal on Very Large Data Bases》2001,10(2-3):199-223
Approximate query processing has emerged as a cost-effective approach for dealing with the huge data volumes and stringent
response-time requirements of today's decision support systems (DSS). Most work in this area, however, has so far been limited
in its query processing scope, typically focusing on specific forms of aggregate queries. Furthermore, conventional approaches
based on sampling or histograms appear to be inherently limited when it comes to approximating the results of complex queries
over high-dimensional DSS data sets. In this paper, we propose the use of multi-dimensional wavelets as an effective tool
for general-purpose approximate query processing in modern, high-dimensional applications. Our approach is based on building
wavelet-coefficient synopses of the data and using these synopses to provide approximate answers to queries. We develop novel query processing algorithms
that operate directly on the wavelet-coefficient synopses of relational tables, allowing us to process arbitrarily complex
queries entirely in the wavelet-coefficient domain. This guarantees extremely fast response times since our approximate query execution engine
can do the bulk of its processing over compact sets of wavelet coefficients, essentially postponing the expansion into relational
tuples until the end-result of the query. We also propose a novel wavelet decomposition algorithm that can build these synopses
in an I/O-efficient manner. Finally, we conduct an extensive experimental study with synthetic as well as real-life data sets
to determine the effectiveness of our wavelet-based approach compared to sampling and histograms. Our results demonstrate
that our techniques: (1) provide approximate answers of better quality than either sampling or histograms; (2) offer query
execution-time speedups of more than two orders of magnitude; and (3) guarantee extremely fast synopsis construction times
that scale linearly with the size of the data.
Received: 7 August 2000 / Accepted: 1 April 2001 / Published online: 7 June 2001
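The wavelet-coefficient synopsis idea can be sketched in one dimension. The paper builds multi-dimensional, I/O-efficient synopses and answers complex queries directly in the coefficient domain; this toy version assumes a power-of-two-length array and simply keeps the k largest-magnitude Haar coefficients, then reconstructs an approximate answer.

```python
# 1-D Haar decomposition: repeatedly replace adjacent pairs by their
# average and half-difference; assumes len(data) is a power of two.
def haar_decompose(data):
    coeffs, avgs = [], list(data)
    while len(avgs) > 1:
        pairs = list(zip(avgs[0::2], avgs[1::2]))
        coeffs = [(a - b) / 2 for a, b in pairs] + coeffs
        avgs = [(a + b) / 2 for a, b in pairs]
    return avgs + coeffs  # [overall average, detail coefficients...]

def haar_reconstruct(coeffs):
    vals = coeffs[:1]
    i = 1
    while i < len(coeffs):
        details = coeffs[i:i + len(vals)]
        i += len(details)
        # Each (average, detail) pair expands back into two values.
        vals = [v for a, d in zip(vals, details) for v in (a + d, a - d)]
    return vals

def synopsis(coeffs, k):
    # Zero out all but the k largest-magnitude coefficients.
    keep = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)[:k]
    return [c if i in keep else 0 for i, c in enumerate(coeffs)]

data = [8, 6, 2, 4]
full = haar_decompose(data)                    # [5, 2, 1, -1]
approx = haar_reconstruct(synopsis(full, 2))   # [7, 7, 3, 3]
```

Even this tiny example shows the trade-off the paper quantifies: the 2-coefficient synopsis preserves the broad shape of the data while halving its size.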
14.
A variational way of deriving the relevant parameters of a cellular neural network (CNN) is introduced. The approach exploits
the CNN spontaneous internal-energy decrease and is applicable when a given problem can be expressed in terms of an optimisation
task. The presented approach is fully mathematical as compared with the typical heuristic search for the correct parameters
in the literature on CNNs. This method is practically employed in recovering information on the three-dimensional structure
of the environment, through the stereo vision problem. A CNN able to find the conjugate points in a stereogram is fully derived
in the proposed framework. Results of computer simulations on several test cases are provided.
Received: 1 August 1997 / Accepted: 29 September 1999
15.
In this paper, we present an efficient approach for supporting fast-scanning (FS) operations in MPEG-based video-on-demand
(VOD) systems. This approach is based on storing multiple, differently encoded versions of the same movie at the server. A
normal version is used for normal playback, while several scan versions are used for FS. Each scan version supports forward and backward FS at a given speedup. The server responds to an FS request
by switching from the normal version to an appropriate scan version. Scanning versions are produced by encoding a sample of
the raw frames using the same GOP pattern of the normal version. When a scanning version is decoded and played back at the
normal frame rate, it gives a perceptual motion speedup. By being able to control the traffic envelopes of the scan versions,
our approach can be integrated into a previously proposed framework for distributing archived, MPEG-coded video streams. FS
operations are supported using little or no extra network bandwidth beyond what is already allocated for normal playback.
Mechanisms for controlling the traffic envelopes of the scan versions are presented. The actions taken by the server and the
client's decoder in response to various types of interactive requests are described in detail. The latency incurred in implementing
various interactive requests is shown to be within an acceptable range. Striping and disk-scheduling strategies for storing
various versions at the server are presented. Issues related to the implementation of our approach are discussed.
16.
Rita Cucchiara 《Machine Vision and Applications》1998,11(1):1-6
The paper presents a genetic algorithm for clustering objects in images based on their visual features. In particular, a
novel solution code (named Boolean Matching Code) and a corresponding reproduction operator (the Single Gene Crossover) are defined specifically for clustering and compared with other standard genetic approaches. The paper describes the
clustering algorithm in detail, in order to show the suitability of the genetic paradigm and underline the importance of
effective tuning of algorithm parameters to the application. The algorithm is evaluated on some test sets and an example of
its application in automated visual inspection is presented.
Received: 6 August 1996 / Accepted: 11 November 1997
17.
Bing Wang 《International Journal on Digital Libraries》1999,2(2-3):91-110
A digital library (DL) consists of a database which contains library information and a user interface which provides a visual
window for users to search relevant information stored in the database. Thus, an abstract structure of a digital library can
be defined as a combination of a special purpose database and a user-friendly interface. This paper addresses one of the fundamental aspects of such
a combination. This is the formal data structure for linking an object-oriented database with hypermedia to support digital
libraries. It is important to establish a formal structure for a digital library in order to efficiently maintain different
types of library information. This article discusses how to build an object-oriented hybrid system to support digital libraries.
In particular, we focus on the discussion of a general purpose data model for digital libraries and the design of the corresponding
hypermedia interface. The significant features of this research are, first, a formalized data model to define a digital library
system structure; second, a practical approach to manage the global schema of a library system; and finally, a design strategy
to integrate hypermedia with databases to support a wide range of application areas.
Received: 15 December 1997 / Revised: June 1999
18.
Ajay D. Kshemkalyani 《Distributed Computing》1998,11(4):169-189
Summary. In a distributed system, high-level actions can be modeled by nonatomic events. This paper proposes causality relations between
distributed nonatomic events and provides efficient testing conditions for the relations. The relations provide a fine-grained
granularity to specify causality relations between distributed nonatomic events. The set of relations between nonatomic events
is complete in first-order predicate logic, using only the causality relation between atomic events. For a pair of distributed
nonatomic events X and Y, the evaluation of any of the causality relations requires a number of integer comparisons that is polynomial in N_X and N_Y, where N_X and N_Y, respectively, are the numbers of nodes on which the two nonatomic events X and Y occur. In this paper, we show that this polynomial complexity of evaluation can be simplified to a linear complexity using properties of partial orders. Specifically, we show that the relations can be evaluated in a number of integer comparisons that is linear in N_X and N_Y. During the derivation of the efficient testing conditions, we also define special system execution prefixes
associated with distributed nonatomic events and examine their knowledge-theoretic significance.
Received: July 1997 / Accepted: May 1998
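The atomic-event causality that the relations above are built from can be sketched with vector clocks. The relation shown here ("every atomic event of X causally precedes every atomic event of Y") is one plausible member of the paper's hierarchy, chosen for illustration; the names and the vector-clock encoding are assumptions of this sketch.

```python
# Atomic happened-before, encoded with vector timestamps: u -> v iff
# u <= v componentwise and u != v.
def happened_before(u, v):
    return all(a <= b for a, b in zip(u, v)) and u != v

# One causality relation between nonatomic events (sets of atomic
# events): X strongly precedes Y iff every atomic event of X
# happened-before every atomic event of Y.
def strongly_precedes(X, Y):
    return all(happened_before(x, y) for x in X for y in Y)

# Atomic events as vector timestamps on a 2-process system.
X = [(1, 0), (0, 1)]
Y = [(2, 2), (3, 2)]
r = strongly_precedes(X, Y)  # every element of X precedes every element of Y
```

The naive test above costs one comparison per pair of atomic events; the paper's contribution is precisely that such relations can be tested far more cheaply using properties of partial orders.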
19.
A. Schulte 《Cognition, Technology & Work》2002,4(3):146-159
This paper describes an approach to cognitive and cooperative operator assistance in the field of tactical flight mission
management. A framework for a generic functional concept is derived from general considerations of human performance and cognitive
engineering. A system built according to these human-centred design principles will be able to keep up with changing situation parameters, in order to provide situationally adapted operator assistance. Such a cognitive assistant system represents an approach to ensuring the highest possible degree of situation awareness of the flight deck crew as well as a
satisfactory workload level. This generic approach to mission management and crew assistance for military aircraft has been
realised in different application domains such as military transport and air-to-ground attack. The Crew Assistant Military Aircraft is a functional prototype for the air transport application. Even applications in the domain of uninhabited aerial vehicles
(UAVs) are within reach. This paper mainly covers one state-of-the-art research and development activity in the domain of combat
aircraft: the TMM – Tactical Mission Management System is an experimental solution for the air-to-ground attack role. The TMM has been implemented as a functional prototype in
the mission avionics experimental cockpit (MAXC), a development flight simulator at ESG and evaluated with German Air Force
pilots as subjects in simulator trials. In these trials, the TMM was compared with a reference cockpit avionics configuration
in terms of task performance, workload, situation awareness and operator acceptance. After giving an overview of the system
concepts this paper reports on the experimental design and results of the simulator trial campaign.
20.
Deadlock detection in distributed database systems: a new algorithm and a comparative performance analysis   Cited by: 4 (self: 0, others: 4)
Natalija Krivokapić Alfons Kemper Ehud Gudes 《The VLDB Journal The International Journal on Very Large Data Bases》1999,8(2):79-100
This paper attempts a comprehensive study of deadlock detection in distributed database systems. First, the two predominant
deadlock models in these systems and the four different distributed deadlock detection approaches are discussed. Afterwards,
a new deadlock detection algorithm is presented. The algorithm is based on dynamically creating deadlock detection agents (DDAs), each being responsible for detecting deadlocks in one connected component of the global wait-for-graph (WFG). The
DDA scheme is a “self-tuning” system: after an initial warm-up phase, dedicated DDAs will be formed for “centers of locality”,
i.e., parts of the system where many conflicts occur. A dynamic shift in locality of the distributed system will be responded
to by automatically creating new DDAs while the obsolete ones terminate. In this paper, we also compare the most competitive
representative of each class of algorithms suitable for distributed database systems based on a simulation model, and point
out their relative strengths and weaknesses. The extensive experiments we carried out indicate that our newly proposed deadlock
detection algorithm outperforms the other algorithms in the vast majority of configurations and workloads and, in contrast
to all other algorithms, is very robust with respect to differing load and access profiles.
Received December 4, 1997 / Accepted February 2, 1999
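The wait-for-graph view underlying the DDA scheme can be sketched with a plain cycle check. Each DDA in the paper watches one connected component of the global WFG; the illustrative code below shows only the core deadlock test a detector would run on its component, not the distributed agent machinery.

```python
# Deadlock test on a wait-for graph (WFG): transaction t waits for each
# transaction in wfg[t]. A deadlock exists iff the WFG has a cycle,
# detected here by depth-first search with three-color marking.
def has_deadlock(wfg):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wfg}

    def dfs(t):
        color[t] = GRAY
        for u in wfg.get(t, []):
            if color.get(u, WHITE) == GRAY:
                return True  # back edge: a WFG cycle means deadlock
            if color.get(u, WHITE) == WHITE and dfs(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and dfs(t) for t in wfg)

# T1 waits for T2, T2 for T3, T3 for T1 -> a cycle, hence deadlock.
dead = has_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]})
free = has_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": []})
```

Restricting each detector to one connected component, as the DDA scheme does, keeps such checks local: a conflict in one "center of locality" never triggers work in unrelated parts of the system.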