Recently, several higher-dimensional generalizations of the Fourier transform using Clifford algebra have been introduced,
including the Clifford-Fourier transform by the authors, defined as an operator exponential with a Clifford algebra-valued
kernel.
In this paper an overview is given of all these generalizations, and an in-depth study of the authors' two-dimensional
Clifford-Fourier transform is presented. In this special two-dimensional case a closed form for the integral kernel may be
obtained, leading to further properties in both the L1 and the L2 context. Furthermore, Clifford-Gabor filters based on this
Clifford-Fourier transform are introduced.
AMS subject classification numbers: 42B10, 30G35
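The operator-exponential definition mentioned in the abstract can be illustrated in the classical case. As a hedged sketch (the authors' normalization and sign conventions may differ), the classical Fourier transform in R^m can itself be written as an operator exponential of the Hermite operator:

```latex
% Sketch only: signs and normalizations are convention-dependent.
% Classical Fourier transform in R^m as an operator exponential:
\[
  \mathcal{F}
  \;=\;
  \exp\!\Big(\tfrac{i\pi m}{4}\Big)\,
  \exp\!\Big(\tfrac{i\pi}{4}\,\big(\Delta - r^{2}\big)\Big),
  \qquad r^{2} = |\underline{x}|^{2},
\]
% where \Delta is the Laplacian on R^m.  The Clifford-Fourier transform
% arises by replacing the scalar operator in the exponent with a Clifford
% algebra-valued one, which makes the integral kernel Clifford algebra-valued.
```

The Hermite functions are eigenfunctions of both sides, which is what makes this representation consistent term by term.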
Fred Brackx received a diploma degree in mathematics from Ghent University, Belgium, in 1970 and a Ph.D. degree in mathematics from the
same university in 1973. Since 1984 he has been a professor of mathematical analysis at Ghent University, and he currently leads
the Clifford Research Group. His main interests are function theory and functional analysis for functions with values in quaternion
and Clifford algebras. The research covers Clifford distributions, generalized Fourier, Radon and Hilbert transforms, orthogonal
polynomials and multi-dimensional wavelets.
Nele De Schepper received a diploma degree in mathematics from Ghent University, Belgium, in 2001. Since then she has held an assistantship at
the Department of Mathematical Analysis of Ghent University and is a member of the Clifford Research Group. Her main interests
are function theory and functional analysis for functions with values in Clifford algebras. The research covers generalized
Fourier transforms, orthogonal polynomials and multi-dimensional wavelets.
Frank Sommen received a diploma degree in mathematics from Ghent University, Belgium, in 1978, a Ph.D. degree in mathematics from the
same university in 1980, and a habilitation degree in mathematical analysis in 1984. From 1978 until 1999 he was at the National
Fund for Scientific Research (Flanders). Since 2000 he has held a research professorship at Ghent University. His main interests
are function theory and functional analysis for functions with values in quaternion and Clifford algebras. The research covers
Clifford distributions, generalized Fourier, Radon and Hilbert transforms, orthogonal polynomials and multi-dimensional wavelets,
algebraic analysis, hyperfunctions and radial algebra.
An important feature of database technology of the nineties is the use of parallelism for speeding up the execution of complex queries. This technology is being tested in several experimental database architectures and a few commercial systems for conventional select-project-join queries. In particular, hash-based fragmentation is used to distribute data to disks under the control of different processors in order to perform selections and joins in parallel. With the development of new query languages, and in particular with the definition of transitive closure queries and of more general logic programming queries, the new dimension of recursion has been added to query processing. Recursive queries are complex; at the same time, their regular structure is particularly suited to parallel execution, and parallelism may yield a large efficiency gain. We survey the approaches to parallel execution of recursive queries that have been presented in the recent literature. We observe that research on parallel execution of recursive queries is separated into two distinct subareas, one focused on the transitive closure of Relational Algebra expressions, the other on the optimization of more general Datalog queries. Though the subareas seem radically different because of the approaches and formalisms used, they have many common features. This is not surprising, because most typical Datalog queries can be solved by means of the transitive closure of simple algebraic expressions. We first analyze the relationship between the transitive closure of expressions in Relational Algebra and Datalog programs. We then review sequential methods for evaluating transitive closure, distinguishing iterative and direct methods. We address the parallelization of these methods by discussing various forms of parallelism. Data fragmentation plays an important role in obtaining parallel execution; we describe hash-based and semantic fragmentation.
Finally, we consider Datalog queries, and present general methods for parallel rule execution; we recognize the similarities between these methods and the methods reviewed previously, when the former are applied to linear Datalog queries. We also provide a quantitative analysis that shows the impact of the initial data distribution on the performance of methods.
Recommended by: Patrick Valduriez
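The iterative methods surveyed above can be made concrete with a small sketch. The following is a minimal, hedged illustration of semi-naive iterative evaluation of transitive closure (the function name and the in-memory set representation are ours, not the paper's); in a parallel setting, the edge relation would additionally be hash-fragmented on the join attribute across processors:

```python
def transitive_closure(edges):
    """Semi-naive iterative evaluation: each round joins only the delta
    (pairs derived in the previous round) with the base edge relation."""
    closure = set(edges)
    delta = set(edges)
    while delta:
        # join newly derived pairs (a, b) with base edges (b, c) on b
        new = {(a, c) for (a, b) in delta for (b2, c) in edges if b == b2}
        delta = new - closure        # keep only genuinely new pairs
        closure |= delta
    return closure

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(edges)))
# → [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Joining only the delta, rather than the whole closure, is what avoids redundant recomputation and guarantees termination once no new pairs appear.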
We introduce a semantic data model to capture the hierarchical, spatial, temporal, and evolutionary semantics of images in pictorial databases. This model mimics the user's conceptual view of the image content, providing the framework and guidelines for preprocessing to extract image features. Based on the model constructs, a spatial evolutionary query language (SEQL), which provides direct image object manipulation capabilities, is presented. With semantic information captured in the model, spatial evolutionary queries are answered efficiently. Using an object-oriented platform, a prototype medical-image management system was implemented at UCLA to demonstrate the feasibility of the proposed approach.
LDL is one of the recently proposed logical query languages that incorporate sets, for data and knowledge base systems. Since LDL programs can simulate negation, they are not monotonic in general. On the other hand, there are monotonic LDL programs. This paper addresses the natural question of “When are the generally nonmonotonic LDL programs monotonic?” and investigates related topics such as useful applications of monotonicity. We discuss four kinds of monotonicity and examine two of them in depth. The first of the two, called “ω-monotonicity”, is shown to be undecidable even when limited to single-stratum programs. The second, called “uniform monotonicity”, is shown to imply ω-monotonicity. We characterize the uniform monotonicity of a program (i) by a relationship between its Bancilhon-Khoshafian semantics and its LDL semantics, and (ii) with a useful property called subset completion independence. Characterization (ii) implies that uniformly monotonic programs can be evaluated more efficiently by discarding dominated facts. Finally, we provide some necessary and/or sufficient syntactic conditions for uniform monotonicity. The conditions pinpoint (a) enumerated set terms, (b) negations of membership and inclusion, and (c) sharing of set terms as the main sources of nonuniform monotonicity.
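The idea of discarding dominated facts can be sketched concretely. In this hedged Python illustration (the function name and representation are ours, not the paper's), a set-valued fact is dominated when its set argument is strictly contained in another fact's set argument, so only the maximal sets need to be kept:

```python
def discard_dominated(facts):
    """Keep only the maximal sets: a fact with set S is dominated by a
    fact with set T whenever S is a proper subset of T.  For uniformly
    monotonic programs, dominated facts can be discarded safely."""
    facts = [frozenset(s) for s in facts]
    # '<' on frozensets is the proper-subset test
    return {s for s in facts if not any(s < t for t in facts)}

kept = discard_dominated([{1}, {1, 2}, {2, 3}, {1, 2, 3}])
print(kept)  # → {frozenset({1, 2, 3})}: every other set is dominated
```

Evaluating over the maximal sets only can shrink the fact base dramatically, which is the efficiency gain the characterization makes available.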
This paper concerns the following problem: given a set of multi-attribute records, a fixed number of buckets and a two-disk system, arrange the records into the buckets and then store the buckets between the disks in such a way that, over all possible orthogonal range queries (ORQs), the disk access concurrency is maximized. We shall adopt the multiple key hashing (MKH) method for arranging records into buckets and use the disk modulo (DM) allocation method for storing buckets onto disks. Since the DM allocation method has been shown to be superior to all other allocation methods for allocating an MKH file onto a two-disk system for answering ORQs, the real issue is how to organize the records into buckets optimally based upon the MKH concept.
A performance formula that can be used to evaluate the average response time, over all possible ORQs, of an MKH file in a two-disk system using the DM allocation method is first presented. Based upon this formula, it is shown that our design problem is related to a notoriously difficult problem, namely the Prime Number Problem. Then a performance lower bound and an efficient algorithm for designing optimal MKH files in certain cases are presented. It is pointed out that in some cases the optimal MKH file for ORQs in a two-disk system using the DM allocation method is identical to the optimal MKH file for ORQs in a single-disk system and the optimal average response time in a two-disk system is slightly greater than one half of that in a single-disk system.
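The MKH/DM combination is easy to sketch. In this hedged illustration (the function names and the use of Python's built-in hash are our assumptions, not the paper's construction), each attribute is hashed into its own coordinate range, the bucket is the resulting coordinate tuple, and the disk modulo rule sends bucket (b1, ..., bk) to disk (b1 + ... + bk) mod 2:

```python
def mkh_bucket(record, dims):
    """Multiple key hashing: hash each attribute value into its own
    coordinate range; the bucket address is the coordinate tuple."""
    return tuple(hash(v) % d for v, d in zip(record, dims))

def dm_disk(bucket, n_disks=2):
    """Disk modulo allocation: bucket (b1, ..., bk) is stored on disk
    (b1 + ... + bk) mod n_disks."""
    return sum(bucket) % n_disks

dims = (3, 4)               # hypothetical file: 3 x 4 = 12 buckets
b = mkh_bucket(("alice", 42), dims)
print(b, "-> disk", dm_disk(b))
```

Note the property this buys for a two-disk system: two buckets that differ by 1 in a single coordinate always land on different disks, so the buckets touched by a range query tend to be spread evenly between the disks.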
The challenge of saturating all phases of pervasive service provision with context-aware functionality lies in coping with
the complexity of maintaining, retrieving and distributing context information. To represent and query context information
efficiently, a sophisticated modelling scheme is needed. To distribute and synchronise context knowledge among various context
repositories across a multitude of administrative domains, streamlined mechanisms are required. This paper elaborates on an
innovative context management framework designed to cope with free-text and location-based context retrieval and efficient
context consistency control. The proposed framework has been incorporated into a multi-functional pervasive services
platform, and most of the mechanisms it employs have been empirically evaluated.
Query processing in data grids is a difficult issue due to the heterogeneous, unpredictable and volatile behavior of grid resources. Applying join operations to remote relations in data grids is a unique and interesting problem; however, to the best of our knowledge, little has been done to date on multi-join query processing in data grids. An approach for processing multi-join queries is proposed in this paper. First, a relation-reduction algorithm for reducing the sizes of operand relations is presented in order to minimize data transmission cost among grid nodes. Second, a method for scheduling computing nodes in data grids is devised to process multi-join queries in parallel. Third, an innovative method is developed to execute join operations efficiently in a pipeline fashion. Finally, a complete algorithm for processing multi-join queries is given. Analytical and experimental results show the effectiveness and efficiency of the proposed approach.
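The relation-reduction step can be illustrated with the classical semijoin idea (the paper's actual algorithm is not given in the abstract; this is a hedged sketch with our own names): ship only the join-column values of the remote relation, then keep the local tuples that can actually join, so that far less data crosses the network between grid nodes:

```python
def semijoin(r, s, r_attr, s_attr):
    """Reduce relation r against s: ship only the join-column values of
    s (a small projection), then keep the r tuples that can actually
    join.  This is the classical way to shrink an operand relation
    before transmitting it to another node."""
    keys = {t[s_attr] for t in s}          # small projection to ship
    return [t for t in r if t[r_attr] in keys]

R = [(1, "a"), (2, "b"), (3, "c")]
S = [(2, "x"), (3, "y"), (4, "z")]
print(semijoin(R, S, 0, 0))  # → [(2, 'b'), (3, 'c')]
```

Only the reduced R then needs to be sent to the node holding S, after which the full join can be completed locally or fed into a pipeline of further joins.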