20 related records found (search time: 15 ms)
1.
We focus on the quantitative and local topological properties of range images. We consider the spaces Mm of m × m high-contrast patches of range images for m = 3, 5, 7, 9, 11. Using computational topological tools to analyze range image patches, we detect that M3 (as well as M9 and M11) has core subsets with the topology of a circle, and that M3, M5, M7, M9 and M11 have subspaces with the topology of a Klein bottle. We also discover that the largest subspace with the Klein bottle's topology shrinks as the patch size increases, which generalizes the results in the paper of H. Adams and G. Carlsson and shows that the properties of optical image patches and range image patches are more similar than those established by Lee et al.
2.
In this paper, we consider the symmetric interior penalty discontinuous Galerkin (SIPG) method with piecewise polynomials of degree r ≥ 1 for a class of quasi-linear elliptic problems in Ω ⊂ ℝ². We propose a two-grid approximation for the SIPG method, which can be thought of as a type of linearization of the nonlinear system using a solution from a coarse finite element space. With this technique, solving a quasi-linear elliptic problem on the fine finite element space is reduced to solving a linear problem on the fine finite element space and solving the quasi-linear elliptic problem on a coarse space. Convergence estimates in a broken H¹-norm are derived to justify the efficiency of the proposed two-grid algorithm. Numerical experiments are provided to confirm our theoretical findings. As a byproduct of the technique used in the analysis, we derive optimal pointwise error estimates of the SIPG method for quasi-linear elliptic problems in ℝ^d, d = 2, 3, and use them to establish the convergence of the two-grid method for problems in Ω ⊂ ℝ³.
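In schematic form, the two-grid idea described above replaces the nonlinear solve on the fine space by a nonlinear solve on a coarse space plus one linearized solve on the fine space; the notation below (nested spaces V_H ⊂ V_h, form A, linearization A′) is assumed for illustration, not taken from the paper:

```latex
\textbf{Step 1 (coarse, nonlinear).}\quad
\text{Find } u_H \in V_H:\; A(u_H;\, v_H) = (f, v_H) \quad \forall\, v_H \in V_H.
\\[4pt]
\textbf{Step 2 (fine, linearized).}\quad
\text{Find } u_h \in V_h:\; A'(u_H)(u_h,\, v_h) = (f, v_h) \quad \forall\, v_h \in V_h.
```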
3.
A new mutation operator, ℳ_ijn, capable of operating on a set of adjacent bits in a single step, is introduced. Its features are examined and compared against those of the classical bit-flip mutation. A simple Evolutionary Algorithm, ℳ–EA, based only on selection and ℳ_ijn, is described. This algorithm is applied to an industrial problem, Inverse Airfoil Design optimization, characterized by the long search times needed to reach satisfying solutions, and its performance is compared against that of a classical binary Genetic Algorithm. The experiments show a noticeable reduction in the time our algorithm needs to reach a solution of acceptable quality, demonstrating the effectiveness of the proposed operator and its superiority to GAs for the problem at hand.
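The abstract does not spell the operator out; under one plausible reading (the names, signature and resampling rule below are our assumptions), ℳ_ijn acts on a run of n adjacent bits starting at position i in one single step, in contrast to independent per-bit flips:

```python
import random

def m_ijn(chromosome, i, n, rng=random):
    """Hypothetical sketch of an adjacent-bits operator: resample the n
    bits starting at index i in one single step (clipped at the end)."""
    mutated = list(chromosome)
    for k in range(i, min(i + n, len(mutated))):
        mutated[k] = rng.randint(0, 1)
    return mutated

def bit_flip(chromosome, p, rng=random):
    """Classical bit-flip mutation: each bit flips independently
    with probability p."""
    return [1 - b if rng.random() < p else b for b in chromosome]
```

A selection-plus-mutation loop in the style of ℳ–EA would then call `m_ijn` once per offspring instead of sweeping the chromosome bit by bit.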
4.
In this paper, we revisit topological-like features in the extended Temperley–Lieb diagrammatical representation for quantum
circuits including the teleportation, dense coding and entanglement swapping. We describe these quantum circuits and derive
characteristic equations for them with the help of topological-like operations. Furthermore, we comment on known diagrammatical
approaches to quantum information phenomena from the perspectives of both tensor categories and topological quantum field
theories. Moreover, we remark on the proposal for categorical quantum physics and information as described by dagger ribbon
categories.
The main results in this paper were presented at the workshop “Cats, Kets and Cloisters” (Computing Laboratory, Oxford University, July 17–23, 2006). Categorical quantum physics and information is proposed as described by dagger ribbon categories.
The functor between Abramsky and Coecke’s categorical quantum mechanics and the extended Temperley–Lieb categorical approach
is recognized as the same type as those defining topological quantum field theories. On the one hand, fundamental objects
in the physical world may be string-like (even brane-like) and satisfy the braid statistics. On the other hand, fundamental
particles at the Planck energy scale (or quasi-particles of many-body systems) may obey the braid statistics and have an effective
(or a new internal) degree of freedom called the “twist spin”. The name “categorical quantum physics and information” hereby
refers to quantum physics and information which can be recast in the language of categories. This is a simple and intuitive generalization of the name “categorical quantum mechanics”. The latter does not yet encompass conformal field theories, topological quantum field theories, quantum gravity and string theories, which have already been described in the categorical framework by different research groups. Besides, the proposal of categorical quantum physics and information is strongly motivated by present studies of quantum information phenomena and theory. It aims to set up a theoretical platform on which both categorical quantum mechanics and the topological quantum computing of Freedman, Larsen and Wang can stand.
5.
We observe that if R := (I, ρ, J) is an incidence structure, viewed as a matrix, then the topological closure of the set of columns is the Stone space of the Boolean algebra generated by the rows. As a consequence, we obtain that the topological closure of the collection of principal initial segments of a poset P is the Stone space of the Boolean algebra Tailalg(P) generated by the collection of principal final segments of P, the so-called tail-algebra of P. Similar results concerning Priestley spaces and distributive lattices are given. A generalization to incidence structures valued by abstract algebras is considered.
6.
A circular-arc model ℳ is a circle C together with a collection 𝒜 of arcs of C. If 𝒜 satisfies the Helly property, then ℳ is a Helly circular-arc model. A (Helly) circular-arc graph is the intersection graph of a (Helly) circular-arc model. Circular-arc graphs and their subclasses have received a great deal of attention in the literature. Linear-time recognition algorithms have been described both for the general class and for some of its subclasses. However, for Helly circular-arc graphs, the best known recognition algorithm is that of Gavril, whose complexity is O(n³). In this article, we describe different characterizations of Helly circular-arc graphs, including a characterization of the class by forbidden induced subgraphs. These characterizations lead to a linear-time algorithm for recognizing graphs of the class. Within the same time bound, the algorithm also produces a certificate for a negative answer by exhibiting a forbidden subgraph.
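As an illustration of the basic objects involved (this is not the recognition algorithm of the article; the angular representation and helper names are our assumptions), a circular-arc model can be stored as (start, end) angles on C, and its intersection graph built by pairwise overlap tests:

```python
def arcs_intersect(a, b, circumference=360.0):
    """Two arcs on the circle share a point iff one arc contains the
    other's start point (any nonempty intersection begins at a start)."""
    def covers(arc, x):
        s, e = arc[0] % circumference, arc[1] % circumference
        x = x % circumference
        if s <= e:
            return s <= x <= e
        return x >= s or x <= e  # arc wraps past the origin of C

    return covers(a, b[0]) or covers(b, a[0])

def intersection_graph(arcs):
    """Adjacency list of the circular-arc graph of the model."""
    n = len(arcs)
    return {i: [j for j in range(n)
                if j != i and arcs_intersect(arcs[i], arcs[j])]
            for i in range(n)}
```

Whether such a model additionally satisfies the Helly property (every pairwise-intersecting subfamily has a common point) is exactly what distinguishes the subclass studied in the article.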
7.
We consider weakly singular integral equations of the first kind on open surface pieces Γ in ℝ³. To obtain approximate solutions we use the h-version Galerkin boundary element method. Furthermore, we introduce two-level additive Schwarz operators for non-overlapping domain decompositions of Γ and estimate the condition numbers of these operators with respect to the mesh size. Based on these operators we derive an a posteriori error estimate for the difference between the exact solution and the Galerkin solution. The estimate also involves the error which comes from an approximate solution of the Galerkin equations. For uniform meshes and under the assumption of a saturation condition, we show reliability and efficiency of our estimate. Based on this estimate we introduce an adaptive multilevel algorithm with easily computable local error indicators which allows direct control of the local refinements. The theoretical results are illustrated by numerical examples for plane and curved surfaces.
Supported by the German Research Foundation (DFG) under grant Ste 238/25-9.
8.
A common requirement in speech technology is to align two different symbolic representations of the same linguistic ‘message’.
For instance, we often need to align letters of words listed in a dictionary with the corresponding phonemes specifying their
pronunciation. As dictionaries become ever bigger, manual alignment becomes less and less tenable, yet automatic alignment
is a hard problem for a language like English. In this paper, we describe the use of a form of the expectation-maximization
(EM) algorithm to learn alignments of English text and phonemes, starting from a variety of initializations. We use the British
English Example Pronunciation (BEEP) dictionary of almost 200,000 words in this work. The quality of alignment is difficult
to determine quantitatively since no ‘gold standard’ correct alignment exists. We evaluate the success of our algorithm indirectly
from the performance of a pronunciation by analogy system using the aligned dictionary data as a knowledge base for inferring
pronunciations. We find excellent performance—the best so far reported in the literature. There is very little dependence
on the start point for alignment, indicating that the EM search space is strongly convex. Since the aligned BEEP dictionary
is a potentially valuable resource, it is made freely available for research use.
9.
Current analyses of complex biological networks focus either on their global statistical connectivity properties (e.g. topological path lengths and node connectivity ranks) or on the statistics of specific local connectivity circuits (motifs). Here we present a different approach – Functional Topology – to enable identification of the hidden topological and geometrical fingerprints of biological computing networks that afford their functioning, the form-function fingerprints. To do so we represent the network structure in terms of three matrices: 1. the topological connectivity matrix – each row i holds the shortest topological path lengths from node i to all other nodes; 2. the topological correlation matrix – element (i, j) is the correlation between the topological connectivity of nodes i and j; and 3. the weighted graph matrix – here the links represent the conductance between nodes, which can be simply one over the geometrical length, the synaptic strengths in the case of neural networks, or another quantity that represents the strengths of the connections. Various methods (e.g. clustering algorithms, random matrix theory, eigenvalue spectra, etc.) can be used to analyze these matrices; here we use the newly developed functional holography approach, which is based on clustering of the matrices following their collective normalization. We illustrate the approach by analyzing networks of different topological and geometrical properties: 1. artificial networks, including random, regular 4-fold and 5-fold lattice, and tree-like structures; 2. cultured neural networks: a single network and a network composed of three linked sub-networks; and 3. a model neural network composed of two overlapping sub-networks. Using these special networks, we demonstrate the method's ability to reveal functional topology features of the networks.
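The first two matrices can be sketched directly from an adjacency matrix; this minimal pure-Python version (function names are ours, and edges are treated as unweighted, length-1 links) uses breadth-first search and row-wise Pearson correlation:

```python
from collections import deque

def topological_connectivity(adj):
    """Row i holds the shortest topological path length from node i to
    every other node (breadth-first search; inf when unreachable)."""
    n = len(adj)
    D = [[float("inf")] * n for _ in range(n)]
    for s in range(n):
        D[s][s] = 0.0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and D[s][v] == float("inf"):
                    D[s][v] = D[s][u] + 1
                    queue.append(v)
    return D

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def topological_correlation(D):
    """Element (i, j): correlation between connectivity rows i and j."""
    n = len(D)
    return [[pearson(D[i], D[j]) for j in range(n)] for i in range(n)]
```

The third (weighted) matrix would replace the length-1 links with conductances, e.g. one over the geometrical length.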
10.
Many software systems are developed in a number of consecutive releases. In each release, not only is new code added but existing code is often modified. In this study we show that the modified code can be an important source of faults. Faults are widely recognized as one of the major cost drivers in software projects. Therefore, we look for methods that improve fault detection in the modified code. We propose and evaluate a number of prediction models that increase the efficiency of fault detection. To build and evaluate our models we use data collected from two large telecommunication systems produced by Ericsson. We evaluate the performance of our models by applying them both to a different release of the system than the one they were built on and to a different system. The performance of our models is compared to the performance of the theoretical best model, to a simple model based on size, and to analyzing the code in random order (not using any model). We find that the use of our models provides a significant improvement over not using any model at all and over using a simple model based on class size. The gain offered by our models corresponds to 38–57% of the theoretical maximum gain.
11.
In 2003, Maurer et al. (IEEE Trans. Pattern Anal. Mach. Intell. 25:265–270, 2003) published a paper describing an algorithm that computes the exact distance transform in linear time (with respect to image size) for rectangular binary images in the k-dimensional space ℝ^k, with distance measured in the L_p-metric for 1 ≤ p ≤ ∞, which includes the Euclidean distance L_2. In this paper we discuss this algorithm from theoretical and practical points of view. On the practical side, we concentrate on its Euclidean distance version, discuss the possible ways of implementing it as a signed distance transform, and experimentally compare the implemented algorithms. We also describe the parallelization of these algorithms and discuss the associated savings in computational time. All these implementations will be made available as part of the CAVASS software system developed and maintained in our group (Grevera et al. in J. Digit. Imaging 20:101–118, 2007). On the theoretical side, we prove that our version of the signed distance transform algorithm, GBDT, returns the exact value of the distance from the geometrically defined object boundary. We provide a complete proof (which was not given in Maurer et al.) that all these algorithms work correctly for the L_p-metric with 1 < p < ∞. We also point out that the precise form of the algorithm from Maurer et al. is not well defined for the L_1 and L_∞ metrics. In addition, we show that the algorithm can be used to find, in linear time, the exact value of the diameter of an object, that is, the largest possible distance between any two of its elements.
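To make the final claim concrete, the diameter in question is simply the largest pairwise distance among the object's elements. A brute-force O(n²) check over foreground pixel coordinates (our own sketch for verification on small inputs, unlike the linear-time method discussed in the paper) is:

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python >= 3.8

def object_diameter(points):
    """Brute-force diameter of a binary object: the largest Euclidean
    distance between any two of its pixels. O(n^2) pairs, so suitable
    only for checking small examples."""
    if len(points) < 2:
        return 0.0
    return max(dist(p, q) for p, q in combinations(points, 2))
```

Replacing `dist` with another L_p distance checks the same claim for other metrics.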
12.
In this paper the notion of autoregressive systems over an integral domain ℛ is introduced, as a generalization of AR-systems over the rings ℝ[s] and ℝ[s, s⁻¹]. The interpretation of the dynamics represented by a matrix over ℛ is fixed by the choice of a module ℳ over ℛ, consisting of all time-trajectories under consideration. In this setup the problem of system equivalence is studied: when do two different AR-representations characterize the same behavior? This problem is solved using a ring extension of ℛ that explicitly depends on the choice of the module ℳ of all time-trajectories. In this way the usual divisibility conditions on the system-defining matrices can be recovered. The results apply to the class of delay-differential systems with (in)commensurable delays. In this particular application, the ring extension of ℛ is characterized explicitly.
Date received: May 13, 1998. Date revised: December 22, 1998.
13.
Linear combinations of translates of a given basis function have long been successfully used to solve scattered data interpolation and approximation problems. We demonstrate how the classical basis function approach can be transferred to the projective space ℙ^{d−1}. To be precise, we use concepts from harmonic analysis to identify positive definite and strictly positive definite zonal functions on ℙ^{d−1}. These can then be applied to solve problems arising in tomography, since the data given there consist of integrals over lines. Here, enhancing known reconstruction techniques with a scattered data interpolant in the “space of lines” naturally leads to reconstruction algorithms well suited to limited angle and limited range tomography. In the medical setting, algorithms for such incomplete data problems are desirable, as using them can limit the radiation dosage.
14.
The increasing demand for higher resolution images and higher frame rate videos will always pose a challenge to computational power when real-time performance is required to solve the stereo-matching problem in 3D reconstruction applications. Therefore, asymptotic analysis is necessary to measure the time and space performance of stereo-matching algorithms regardless of the size of the input and of the computational power available. In this paper, we survey several classic stereo-matching algorithms with regard to time–space complexity, and we report running-time experiments for several algorithms that are consistent with our complexity analysis. We present a new dense stereo-matching algorithm based on a greedy heuristic path computation in disparity space, together with a procedure which improves disparity maps in depth discontinuity regions. This procedure works as a post-processing step for any technique that solves the dense stereo-matching problem. We prove that our algorithm and post-processing procedure have optimal O(n) time–space complexity, where n is the size of a stereo image. Our algorithm performs only a constant number of computations per pixel since it avoids a brute-force search over the disparity range. Hence, our algorithm is faster than “real-time” techniques while producing comparable results when evaluated on ground-truth benchmarks. The correctness of our algorithm is demonstrated with experiments on real and synthetic data.
15.
In this paper, two significant weaknesses of locally linear embedding (LLE) applied to computer vision are addressed: “intrinsic dimension” and “eigenvector meanings”. “Topological embedding” and “multi-resolution nonlinearity capture” are introduced based on mathematical analysis of topological manifolds and LLE. The manifold topological analysis (MTA) method is described and is based on “topological embedding”. MTA is a more robust method to determine the “intrinsic dimension” of a manifold with typical topology, which is important for tracking and perception understanding. The manifold multi-resolution analysis (MMA) method is based on “multi-resolution nonlinearity capture”. MMA defines LLE eigenvectors as features for pattern recognition and dimension reduction. Both MTA and MMA are proved mathematically, and several examples are provided. Applications in 3D object recognition and 3D object viewpoint space partitioning are also described.
16.
In this paper, we introduce a simple and original algorithm to compute a three-dimensional simplicial complex topologically equivalent to a 3D digital object V, according to the 26-adjacency. The use of this adjacency generates issues such as self-intersecting triangles that unnecessarily increase the dimensionality of the associated simplicial complex. To avoid these problems, we present an approach based on a modified Delaunay tetrahedralization of the digital object that preserves its topological characteristics. Considering the resulting complex as an input in algebraic-topological format (fixing a ground ring for the coefficients), we develop propositions that hold regardless of the adjacency considered. The potential applications are related to topological analysis such as thinning, homology computation, topological characterization and control. Moreover, our technique can be extended to higher dimensions.
Jean-Luc Mari received his PhD degree in 2002. He has been an Associate Professor since 2003 in the Department of Computer Science at the Faculté des Sciences de Luminy (University of Marseilles). He is also a member of the Information and System Science Laboratory (LSIS), in the team “Image and Models” (Computer Graphics group). His research interests include geometrical modeling, model representation, implicit and subdivision surfaces, meshes, multiresolution, skeleton-based objects and reconstruction.
Pedro Real received his PhD degree in 1993. He has been an Associate Professor since 1995 in the Department of Applied Mathematics I at the Higher Technical School of Computer Engineering (University of Seville, Spain). He leads the Andalusian research group “Computational Topology and Applied Mathematics”. His research interests include computational algebraic topology, topological analysis of digital images, algebraic pattern recognition and computational algebra.
17.
We give a new account of the relationships among varieties of regular languages, varieties of finite semigroups, and their
characterization in terms of “implicit identities.” Our development, which is essentially topological in character, is based
on the duality (established by Stone) between Boolean algebras and certain topological spaces (which are now called “Stone
spaces”). This duality does not seem to have been recognized in the literature on regular languages, even though it is well
known that the regular languages over a fixed alphabet form a Boolean algebra and that the “implicit operations” with a fixed
number of operands form a Stone space.
This research was partially supported by an NSERC Operating Grant.
18.
Synopsis construction algorithms have been found to be of interest in query optimization, approximate query answering and mining, and over the last few years several good synopsis construction algorithms have been proposed. These algorithms have mostly focused on the running time of synopsis construction vis-à-vis the synopsis quality. However, the space complexity of synopsis construction algorithms has not been investigated as thoroughly. Many of the optimum synopsis construction algorithms are expensive in space. For some of these algorithms the space required to construct the synopsis is significantly larger than the space required to store the input. These algorithms rely on the fact that they require a smaller “working space” and that most of the data can be resident on disc. The large space complexity of synopsis construction algorithms is a handicap in several scenarios. In the case of streaming algorithms, space is a fundamental constraint. In the case of offline optimal or approximate algorithms, a better space complexity often makes these algorithms much more attractive by allowing them to run in main memory without using disc, or alternately allows us to scale to significantly larger problems without running out of space. In this paper, we propose a simple and general technique that reduces the space complexity of synopsis construction algorithms. As a consequence we show that the notion of “working space” proposed in these contexts is redundant. This technique can be easily applied to many existing algorithms for synopsis construction problems. We demonstrate the performance benefits of our proposal through experiments on real-life and synthetic data. We believe that our technique also generalizes to a broader range of dynamic programs beyond synopsis construction.
Sudipto Guha's research was supported in part by an Alfred P. Sloan Research Fellowship and by NSF Awards CCF-0430376 and CCF-0644119. A preliminary version of this paper appeared as “Space efficiency in synopsis construction algorithms”, VLDB Conference 2005, Trondheim, [19].
19.
In this paper we use imprecise probabilities, based on a concept of generalized coherence (g-coherence), for the management of uncertain knowledge and vague information. We face the problem of reducing the computational difficulties in g-coherence checking and in the propagation of lower conditional probability bounds. We examine a procedure, based on linear systems with a reduced number of unknowns, for the checking of g-coherence, and we propose an iterative algorithm to determine the reduced linear systems. Based on the same ideas, we give an algorithm for the propagation of lower probability bounds. We also give some theoretical results that allow, by suitably modifying our algorithms, g-coherence checking and propagation to be carried out with a reduced set of variables and/or a reduced set of constraints. Finally, we apply our algorithms to some examples.
This paper is a revised and substantially extended version of a previous paper by the same authors, which appeared in the Proc. of the 5th Workshop on Uncertainty Processing (WUPES'2000), Jindřichův Hradec, Czech Republic, June 21–24, 2000, pp. 1–13.
20.
Alarm management has been around for decades in telecom solutions. We have seen various efforts to define standardised alarm interfaces, and the research community has focused on various alarm correlation strategies. Still, after years of effort in industry and research alike, network administrators are flooded with alarms; alarms suffer from poor information quality; and the costs of alarm integration have not decreased. In this paper, we explore the concept of ‘alarm’. We define ‘alarm’ and alarm-type concepts by investigating the different definitions currently in use in standards and research efforts. Based on statistical alarm data from a mobile operator, we argue that operational and capital expenditures would decrease if alarm sources conformed to our alarm model.