Similar Documents
20 similar documents found.
1.
In this paper, we use crossing number and wire area arguments to find lower bounds on the layout area and maximum edge length of a variety of new and computationally useful networks. In particular, we describe
  1. an N-node planar graph which has layout area Θ(N log N) and maximum edge length Θ(N^{1/2}/log^{1/2} N),
  2. an N-node graph with an O(x^{1/2})-separator which has layout area Θ(N log^2 N) and maximum edge length Θ(N^{1/2} log N/log log N), and
  3. an N-node graph with an O(x^{1-1/r})-separator which has maximum edge length Θ(N^{1-1/r}) for any r ≥ 3.

2.
We provide optimal parallel solutions to several link-distance problems set in trapezoided rectilinear polygons. All our main parallel algorithms are deterministic and designed to run on the exclusive-read exclusive-write parallel random access machine (EREW PRAM). Let P be a trapezoided rectilinear simple polygon with n vertices. In O(log n) time using O(n/log n) processors we can optimally compute:
  1. Minimum rectilinear link paths, or shortest paths in the L_1 metric, from any point in P to all vertices of P.
  2. Minimum rectilinear link paths from any segment inside P to all vertices of P.
  3. The rectilinear window (histogram) partition of P.
  4. Both covering radii and vertex intervals for any diagonal of P.
  5. A data structure to support rectilinear link-distance queries between any two points in P (queries can be answered optimally in O(log n) time by a uniprocessor).
Our solution to 5 is based on a new linear-time sequential algorithm for this problem, which is also provided here. This improves on the previously best-known sequential algorithm for this problem, which used O(n log n) time and space [5]. We develop techniques for solving link-distance problems in parallel which are expected to find applications in the design of other parallel computational geometry algorithms. We employ these parallel techniques, for example, to compute optimally (on a CREW PRAM) the link diameter, the link center, and the central diagonal of a rectilinear polygon.

3.
Efficiently extendible mappings for balanced data distribution
In data storage applications, a large collection of consecutively numbered data “buckets” is often mapped to a relatively small collection of consecutively numbered storage “bins.” For example, in parallel database applications, buckets correspond to hash buckets of data and bins correspond to database nodes. In disk array applications, buckets correspond to logical tracks and bins correspond to physical disks in an array. Measures of the “goodness” of a mapping method include:
  1. The time (number of operations) needed to compute the mapping.
  2. The storage needed to store a representation of the mapping.
  3. The balance of the mapping, i.e., the extent to which all bins receive the same number of buckets.
  4. The cost of relocation, that is, the number of buckets that must be relocated to a new bin if a new mapping is needed due to an expansion of the number of bins or the number of buckets.
One contribution of this paper is to give a new mapping method, the Interval-Round-Robin (IRR) method. The IRR method has optimal balance and relocation cost, and its time complexity and storage requirements compare favorably with known methods. Specifically, if m is the number of times that the number of bins and/or buckets has increased, then the time complexity is O(log m) and the storage is O(m^2). Another contribution of the paper is to identify the concept of a history-independent mapping, meaning informally that the mapping does not “remember” the past history of expansions to the number of buckets and bins, but only the current number of buckets and bins. Thus, such mappings require very little information to be stored. Assuming that balance and relocation are optimal, we prove that history-independent mappings are possible if the number of buckets is fixed (so only the number of bins can increase), but not possible if the number of bins and buckets can both increase.
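As a toy illustration only (not the authors' IRR method), the Python sketch below shows one history-independent mapping, round-robin by bucket index modulo the current number of bins, and counts how many buckets move when a bin is added; the function names assign and relocations are ours.

    def assign(n_buckets, n_bins):
        """History-independent toy mapping: bucket i goes to bin i mod n_bins."""
        return [i % n_bins for i in range(n_buckets)]

    def relocations(n_buckets, old_bins, new_bins):
        """Number of buckets whose bin changes when the number of bins grows."""
        before = assign(n_buckets, old_bins)
        after = assign(n_buckets, new_bins)
        return sum(b != a for b, a in zip(before, after))

    if __name__ == "__main__":
        n = 1000
        counts = [assign(n, 7).count(b) for b in range(7)]
        print("buckets per bin:", counts)                    # essentially perfect balance
        print("relocations going from 7 to 8 bins:", relocations(n, 7, 8))
        # The modulo mapping is balanced but relocates many buckets on expansion;
        # the IRR method described above keeps the relocation cost optimal as well.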

4.
We develop constant-time algorithms to compute the Hough transform on a processor array with a reconfigurable bus system (abbreviated to PARBS). The PARBS is a computation model which consists of a processor array and a reconfigurable bus system. It is a very powerful computation model in that many problems can be solved efficiently on it. In this paper, we introduce the concept of an iterative-PARBS, which is similar to the FOR-loop construct in sequential programming languages. The iterative-PARBS is a building block through which the processing data can be routed several times. We can think of it as a “hardware subroutine”. Based on this scheme, we are able to explore constant-time Hough transform algorithms on PARBS (a sequential reference sketch of the transform follows the list below). The following new results are derived in this study:
  1. The sum of n bits can be computed in O(1) time on a PARBS with O(n^{1+ε}) processors for any fixed ε > 0.
  2. The weights of each simple path of an n×n image can be computed in O(1) time on a 3-D PARBS with O(n^{2+ε}) processors for any fixed ε > 0.
  3. The p-angle Hough transform of an n×n image can be computed in O(1) time on a PARBS with O(p·n^{2+ε}) processors for any fixed ε > 0, with p copies of the image pretiled.
  4. The p-angle Hough transform of an n×n image can be computed in O(1) time on a PARBS with O(p·n^3) processors.
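For reference, a minimal sequential sketch of the p-angle Hough transform of an n×n binary image follows; it only fixes what the constant-time PARBS algorithms compute, not how they achieve O(1) time. The NumPy-based accumulator layout and names are our assumptions.

    import numpy as np

    def hough_lines(image, p):
        """Accumulate (angle, rho) votes for the foreground pixels of a square image."""
        n = image.shape[0]
        thetas = np.linspace(0.0, np.pi, p, endpoint=False)
        max_rho = int(np.ceil(np.hypot(n, n)))
        acc = np.zeros((p, 2 * max_rho + 1), dtype=int)
        ys, xs = np.nonzero(image)
        for t, theta in enumerate(thetas):
            rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
            np.add.at(acc[t], rhos + max_rho, 1)   # shift so indices are nonnegative
        return acc

    if __name__ == "__main__":
        img = np.zeros((64, 64), dtype=int)
        np.fill_diagonal(img, 1)                    # a 45-degree line of pixels
        votes = hough_lines(img, p=180)
        t, r = np.unravel_index(votes.argmax(), votes.shape)
        print("strongest line: angle index", t, "rho index", r)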

5.
We use algorithmic tools for graphs of small treewidth to address questions in complexity theory. For our main construction, we prove that multiplicatively disjoint arithmetic circuits of size n^{O(1)} and treewidth k can be simulated by bounded fan-in arithmetic formulas of depth O(k^2 log n). From this we derive an analogous statement for syntactically multilinear arithmetic circuits, which strengthens the central theorem of M. Mahajan and B.V.R. Rao (Proc. 33rd International Symposium on Mathematical Foundations of Computer Science, vol. 5162, pp. 455–466, 2008). We show that our main construction has the following three applications:
  • Bounded-width arithmetic circuits of size n^{O(1)} can be balanced to depth O(log n), provided chains of iterated multiplication in the circuit are of length O(1).
  • Boolean bounded fan-in circuits of size n^{O(1)} and treewidth k can be simulated by bounded fan-in formulas of depth O(k^2 log n). This strengthens, in the non-uniform setting, the known inclusion SC^0 ⊆ NC^1.
  • We demonstrate treewidth-restricted cases of Directed-Reachability and the Circuit Value Problem that can be solved in LogDCFL.
We also give a construction showing, for both arithmetic and Boolean circuits, that any circuit of size n^{O(1)} and treewidth O(log^i n) can be simulated by a circuit of width O(log^{i+1} n) and size n^c, where c = O(1) if i = 0, and c = O(log log n) otherwise.

6.
M. Pellegrini, Algorithmica, 1997, 17(4):380–398
We describe a method for decomposing planar sets of segments and points. Using this method we obtain new efficient deterministic algorithms for counting pairs of intersecting segments, and for answering off-line triangle range queries. In particular we obtain the following results:
  1. Given n segments in the plane, the number of pairs of intersecting segments is counted in time O(n^{1+ε} + K^{1/3} n^{2/3+ε}), where K is the number of intersection points among the segments, and ε > 0 is an arbitrarily small constant.
  2. Given n segments in the plane which are colored with two colors, the number of pairs of bichromatic intersecting segments is counted in time O(n^{1+ε} + K_m^{1/3} n^{2/3+ε}), where K_m is the number of monochromatic intersection points, and ε > 0 is an arbitrarily small constant.
  3. Given n weighted points and n triangles in the plane, the sum of weights of points in each triangle is computed in time O(n^{1+ε} + V^{1/3} n^{2/3+ε}), where V is the number of vertices in the arrangement of the triangles, and ε > 0 is an arbitrarily small constant.
The above bounds depend sublinearly on the number of intersections among the input segments, K (resp. K_m, V), which is desirable since K (resp. K_m, V) can range from zero to O(n^2). All of the above algorithms use optimal Θ(n) storage. The constants of proportionality in the big-Oh notation increase as ε decreases. These results are based on properties of the sparse nets introduced by Chazelle [Cha3].
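As a baseline for the quantity being counted, a brute-force O(n^2) pair-counting sketch using the standard orientation test is given below; it has none of the sub-quadratic behavior of the algorithms above, and the helper names are ours.

    from itertools import combinations

    def orient(a, b, c):
        """Twice the signed area of triangle (a, b, c)."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def on_segment(a, b, c):
        """Assuming c is collinear with segment a-b, is c within its bounding box?"""
        return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))

    def segments_intersect(s, t):
        a, b = s
        c, d = t
        o1, o2 = orient(a, b, c), orient(a, b, d)
        o3, o4 = orient(c, d, a), orient(c, d, b)
        if o1 * o2 < 0 and o3 * o4 < 0:        # proper crossing
            return True
        # Degenerate cases: an endpoint lies on the other segment.
        return ((o1 == 0 and on_segment(a, b, c)) or
                (o2 == 0 and on_segment(a, b, d)) or
                (o3 == 0 and on_segment(c, d, a)) or
                (o4 == 0 and on_segment(c, d, b)))

    def count_intersecting_pairs(segs):
        return sum(segments_intersect(s, t) for s, t in combinations(segs, 2))

    if __name__ == "__main__":
        segs = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((5, 5), (6, 6))]
        print(count_intersecting_pairs(segs))   # only the first two segments cross: 1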

7.
8.
Distance transforms are an important computational tool for the processing of binary images. For an n × n image, distance transforms can be computed in time O(n) on a mesh-connected computer and in polylogarithmic time on hypercube-related structures. We investigate the possibilities of computing distance transforms in polylogarithmic time on the pyramid computer and the mesh of trees. For the pyramid, we obtain a polynomial lower bound using a result by Miller and Stout, so we turn our attention to the mesh of trees. We give a very simple O(log n) algorithm for the distance transform with respect to the L_1 metric, an O(log^2 n) algorithm for the transform with respect to the L_∞ metric, and find that the Euclidean metric is much more difficult. Based on evidence from number theory, we conjecture the impossibility of computing the Euclidean distance transform in polylogarithmic time on a mesh of trees. Instead, we approximate the distance transform up to a given error. This works for any L_k metric and takes time O(log^3 n).
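For reference, the classical two-pass sequential L_1 distance transform (forward and backward raster scans) is sketched below; it shows the transform itself, not the O(log n) mesh-of-trees algorithm, and the function names are illustrative.

    def l1_distance_transform(image):
        """Two-pass L1 distance to the nearest foreground (1) pixel of a binary image."""
        n, m = len(image), len(image[0])
        INF = n + m   # larger than any achievable L1 distance inside the grid
        d = [[0 if image[i][j] else INF for j in range(m)] for i in range(n)]
        for i in range(n):              # forward pass: look up and left
            for j in range(m):
                if i > 0:
                    d[i][j] = min(d[i][j], d[i - 1][j] + 1)
                if j > 0:
                    d[i][j] = min(d[i][j], d[i][j - 1] + 1)
        for i in reversed(range(n)):    # backward pass: look down and right
            for j in reversed(range(m)):
                if i < n - 1:
                    d[i][j] = min(d[i][j], d[i + 1][j] + 1)
                if j < m - 1:
                    d[i][j] = min(d[i][j], d[i][j + 1] + 1)
        return d

    if __name__ == "__main__":
        img = [[0, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0]]
        for row in l1_distance_transform(img):
            print(row)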

9.
A. Bertoni, G. Mauri, M. Torelli, Calcolo, 1980, 17(2):163–174
This paper is intended to show that an algebraic approach can give useful suggestions for designing efficient algorithms that solve combinatorial problems. The problems we discuss in the paper are:
  1. Counting strings of given length generated by a regular grammar. For this problem, we give an exact algorithm whose complexity is O(log n) (with respect to the number of executed operations), and an approximate algorithm which, however, still has the same order of complexity (a matrix-power sketch of the exact counting idea follows this list);
  2. counting trees recognized by a tree automaton. For this problem, we give an exact algorithm of complexity O(n) and an approximate one of complexity O(log n). For this approximate algorithm the relative error is shown to be O(1/n).
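A minimal sketch of the exact counting idea in the regular case: counting the length-n strings accepted by a DFA by repeated squaring of its transfer matrix, which takes O(log n) matrix multiplications. The DFA encoding is our assumption; the paper states the result for regular grammars, an equivalent formalism.

    def mat_mul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def mat_pow(A, n):
        result = [[int(i == j) for j in range(len(A))] for i in range(len(A))]
        while n:
            if n & 1:
                result = mat_mul(result, A)
            A = mat_mul(A, A)
            n >>= 1
        return result

    def count_accepted(delta, start, accepting, n):
        """delta[q] lists the successor state of q for each input symbol."""
        k = len(delta)
        M = [[0] * k for _ in range(k)]
        for q, successors in enumerate(delta):
            for q2 in successors:
                M[q][q2] += 1              # number of symbols taking q to q2
        Mn = mat_pow(M, n)                 # O(log n) matrix multiplications
        return sum(Mn[start][q] for q in accepting)

    if __name__ == "__main__":
        # DFA over {a, b} accepting strings with an even number of b's.
        delta = [[0, 1],   # state 0: a -> 0, b -> 1
                 [1, 0]]   # state 1: a -> 1, b -> 0
        print(count_accepted(delta, start=0, accepting={0}, n=10))   # 512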

10.
Every Boolean function may be represented as a real polynomial. In this paper, we characterize the degree of this polynomial in terms of certain combinatorial properties of the Boolean function. Our first result is a tight lower bound of Ω(log n) on the degree needed to represent any Boolean function that depends on n variables. Our second result states that for every Boolean function f, the following measures are all polynomially related (a brute-force sketch of the degree in question follows the list below):
  • The decision tree complexity of f.
  • The degree of the polynomial representing f.
  • The smallest degree of a polynomial approximating f in the L_max norm.
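As a brute-force illustration of the central quantity (not of the paper's bounds), the sketch below computes the degree of the unique multilinear real polynomial representing a Boolean function on a few variables via Möbius inversion over subsets; all names are ours and the method is exponential in n.

    from itertools import combinations

    def poly_degree(f, n):
        """Degree of the multilinear real polynomial agreeing with f on {0,1}^n."""
        nonzero_sizes = []
        for size in range(n + 1):
            for S in combinations(range(n), size):
                # Mobius inversion: c_S = sum over T subset of S of (-1)^{|S|-|T|} f(1_T).
                c = 0
                for tsize in range(size + 1):
                    for T in combinations(S, tsize):
                        x = [1 if i in T else 0 for i in range(n)]
                        c += (-1) ** (size - tsize) * f(x)
                if c != 0:
                    nonzero_sizes.append(size)
        return max(nonzero_sizes) if nonzero_sizes else 0

    if __name__ == "__main__":
        n = 3
        parity = lambda x: sum(x) % 2            # has full degree n
        majority = lambda x: int(sum(x) >= 2)    # also degree 3 for n = 3
        print(poly_degree(parity, n), poly_degree(majority, n))   # 3 3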

    11.
In this paper we investigate the expected complexity E(C) of distributive (“bucket”) sorting algorithms on a sample X_1, ..., X_n drawn from a density f on R^1. Assuming constant-time bucket membership determination and assuming the use of an average-time g(n) algorithm for subsequent sorting within each bucket (where g is convex, g(n)/n ↑ ∞, g(n)/n^2 is nonincreasing, and g is independent of f), the following is shown:
  1. If f has compact support, then ∫ g(f(x)) dx < ∞ if and only if E(C) = O(n).
  2. If f does not have compact support, then E(C)/n → ∞ as n → ∞.
No additional restrictions are put on f.
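A concrete instance of the algorithm family analyzed above is sketched below: bucket (“distributive”) sort on a sample from [0, 1), with insertion sort (a quadratic g) inside each bucket. The uniform test data and names are our choices.

    import random

    def insertion_sort(a):
        for i in range(1, len(a)):
            x, j = a[i], i - 1
            while j >= 0 and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x
        return a

    def bucket_sort(xs):
        n = len(xs)
        buckets = [[] for _ in range(n)]
        for x in xs:
            buckets[min(int(x * n), n - 1)].append(x)   # constant-time bucket membership
        out = []
        for b in buckets:
            out.extend(insertion_sort(b))               # quadratic work within each bucket
        return out

    if __name__ == "__main__":
        data = [random.random() for _ in range(1000)]
        assert bucket_sort(data) == sorted(data)
        print("sorted", len(data), "samples")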

    12.
To model the association fields that underlie perceptual organization (Gestalt) in psychophysics, we consider the problem P_curve of minimizing $\int_{0}^{\ell} \sqrt{\xi^{2} +\kappa^{2}(s)}\,{\rm d}s$ for a planar curve having fixed initial and final positions and directions. Here κ(s) is the curvature of the curve, whose total length ℓ is free. This problem comes from a model of the geometry of vision due to Petitot (J. Physiol. Paris 97:265–309, 2003; Math. Inf. Sci. Humaines 145:5–101, 1999) and Citti & Sarti (J. Math. Imaging Vis. 24(3):307–326, 2006). In previous work we proved that the range $\mathcal{R} \subset\mathrm{SE}(2)$ of the exponential map of the underlying geometric problem formulated on SE(2) consists of precisely those end conditions (x_fin, y_fin, θ_fin) that can be connected by a globally minimizing geodesic starting at the origin (x_in, y_in, θ_in) = (0,0,0). From the applied imaging point of view it is relevant to analyze the sub-Riemannian geodesics and $\mathcal{R}$ in detail. In this article we
    • show that $\mathcal{R}$ is contained in half space x≥0 and (0,y fin)≠(0,0) is reached with angle π,
    • show that the boundary $\partial\mathcal{R}$ consists of endpoints of minimizers either starting or ending in a cusp,
    • analyze and plot the cones of reachable angles θ fin per spatial endpoint (x fin,y fin),
    • relate the endings of association fields to $\partial\mathcal {R}$ and compute the length towards a cusp,
    • analyze the exponential map both with the common arc-length parametrization t in the sub-Riemannian manifold $(\mathrm{SE}(2),\mathrm{Ker}(-\sin\theta{\rm d}x +\cos\theta {\rm d}y), \mathcal{G}_{\xi}:=\xi^{2}(\cos\theta{\rm d}x+ \sin\theta {\rm d}y) \otimes(\cos\theta{\rm d}x+ \sin\theta{\rm d}y) + {\rm d}\theta \otimes{\rm d}\theta)$ and with spatial arc-length parametrization s in the plane $\mathbb{R}^{2}$ . Surprisingly, s-parametrization simplifies the exponential map, the curvature formulas, the cusp-surface, and the boundary value problem,
    • present a novel efficient algorithm solving the boundary value problem,
    • show that sub-Riemannian geodesics solve Petitot’s circle bundle model (cf. Petitot in J. Physiol. Paris 97:265–309, [2003]),
    • show a clear similarity with association field lines and sub-Riemannian geodesics.

    13.
    In this paper we study parallel algorithms for the Mesh-of-Processors architecture to solve visibility and related separability problems for sets of simple polygons in the plane. In particular, we present the following algorithms:
  • An O(√N) time algorithm for computing, on a Mesh-of-Processors of size N, the visibility polygon from a point located in an N-vertex polygon, possibly with holes.
  • O(√N) time algorithms for computing, on a Mesh-of-Processors of size N, the set of all points on the boundary of an N-vertex polygon P which are visible in a given direction d, as well as the visibility hull of P for a given direction d.
  • An O(√N) time algorithm for detecting, on a Mesh-of-Processors of size 2N, whether two N-vertex polygons are separable in a given direction, and an O(√(MN)) time algorithm for detecting, on a Mesh-of-Processors of size MN, whether M N-vertex polygons are sequentially separable in a given direction.
All proposed algorithms are asymptotically optimal (for the Mesh-of-Processors) with respect to time and number of processors.

    14.
    We revisit various string indexing problems with range reporting features, namely, position-restricted substring searching, indexing substrings with gaps, and indexing substrings with intervals. We obtain the following main results.
    • We give efficient reductions for each of the above problems to a new problem, which we call substring range reporting. Hence, we unify the previous work by showing that we may restrict our attention to a single problem rather than studying each of the above problems individually.
  • We show how to solve substring range reporting with optimal query time and little space. Combined with our reductions this leads to significantly improved time-space trade-offs for the above problems. In particular, for each problem we obtain the first solutions with optimal query time and O(n log^{O(1)} n) space, where n is the length of the indexed string.
    • We show that our techniques for substring range reporting generalize to substring range counting and substring range emptiness variants. We also obtain non-trivial time-space trade-offs for these problems.
Our bounds for substring range reporting are based on a novel combination of suffix trees and range reporting data structures. The reductions are simple and general and may apply to other combinations of string indexing with range reporting.
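As a plain statement of one of the query semantics involved, the sketch below answers position-restricted substring queries naively by scanning the allowed starting positions; it has none of the index-based time-space trade-offs discussed above, and the function name is ours.

    def position_restricted_search(text, pattern, a, b):
        """Report occurrences of pattern in text starting at a position in [a, b]."""
        hits = []
        last_start = min(b, len(text) - len(pattern))
        for i in range(a, last_start + 1):
            if text[i:i + len(pattern)] == pattern:
                hits.append(i)
        return hits

    if __name__ == "__main__":
        T = "abracadabra"
        print(position_restricted_search(T, "abra", 0, 10))   # [0, 7]
        print(position_restricted_search(T, "abra", 1, 10))   # [7]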

    15.
We present a uniform approach to problems involving lines in 3-space. This approach is based on mapping lines in R^3 into points and hyperplanes in five-dimensional projective space (Plücker space). We obtain new results on the following problems (a short sketch of the Plücker mapping appears after the list):
  1. Preprocess n triangles so as to answer efficiently the query: “Given a ray, which is the first triangle hit?” (ray-shooting problem). We discuss the ray-shooting problem for both disjoint and nondisjoint triangles.
  2. Construct the intersection of two nonconvex polyhedra in an output-sensitive way, with a subquadratic overhead term.
  3. Construct the arrangement of n intersecting triangles in 3-space in an output-sensitive way, with a subquadratic overhead term.
  4. Efficiently detect the first face hit by any ray in a set of axis-oriented polyhedra.
  5. Preprocess n lines (segments) so as to answer efficiently the query: “Given two lines, is it possible to move one into the other without crossing any of the initial lines (segments)?” (isotopy problem). If the movement is possible, produce an explicit representation of it.
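The Plücker mapping underlying this approach is easy to state; the sketch below converts a line through two points of R^3 to Plücker coordinates and uses the reciprocal product to test whether two lines are coplanar and, if not, on which side one passes the other. The sign convention chosen here (d = q − p, m = p × q) is one of several in use, and the names are ours.

    def cross(u, v):
        return (u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0])

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def pluecker(p, q):
        """Pluecker coordinates (direction, moment) of the line through p and q."""
        d = tuple(qi - pi for pi, qi in zip(p, q))
        m = cross(p, q)
        return d, m

    def side(l1, l2):
        """Reciprocal product: 0 iff the two lines are coplanar (meet or are parallel)."""
        (d1, m1), (d2, m2) = l1, l2
        return dot(d1, m2) + dot(d2, m1)

    if __name__ == "__main__":
        a = pluecker((0, 0, 0), (1, 0, 0))    # the x-axis
        b = pluecker((0, 0, 0), (0, 1, 0))    # the y-axis, meets the x-axis
        c = pluecker((0, 0, 1), (0, 1, 1))    # parallel to the y-axis, lifted in z
        print(side(a, b), side(a, c))         # 0 (coplanar), then nonzero (skew)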

    16.
We consider nonlinear boundary value problems with arbitrarily many solutions u ∈ C^2[a, b]. In this paper an algorithm is established for a priori bounds $\bar u, \bar d \in C[a,b]$ with the following properties:
  1. For every solution u of the nonlinear problem we obtain $-\bar u(x) \leqslant u(x) \leqslant \bar u(x)$ and $-\bar d(x) \leqslant u'(x) \leqslant \bar d(x)$ for any x ∈ [a, b].
  2. The bounds $\bar u$ and $\bar d$ are defined by the use of the functions exp, sin, and cos.
    3. We use neither the knowledge of solutions nor the number of solutions.

    17.
In this paper, we study two interprocedural program-analysis problems—interprocedural slicing and interprocedural dataflow analysis—and present the following results:
  • Interprocedural slicing is log-space complete for P.
  • The problem of obtaining “meet-over-all-valid-paths” solutions to interprocedural versions of distributive dataflow-analysis problems is P-hard.
  • Obtaining “meet-over-all-valid-paths” solutions to interprocedural versions of distributive dataflow-analysis problems that involve finite sets of dataflow facts (such as the classical “gen/kill” problems) is log-space complete for P.
These results provide evidence that there do not exist fast (NC-class) parallel algorithms for interprocedural slicing and precise interprocedural dataflow analysis (unless P = NC). That is, it is unlikely that there are algorithms for interprocedural slicing and precise interprocedural dataflow analysis for which the number of processors is bounded by a polynomial in the size of the input, and whose running time is bounded by a polynomial in the logarithm of the size of the input. This suggests that there are limitations on the ability to use parallelism to overcome compiler bottlenecks due to expensive interprocedural-analysis computations.

    18.
Stable semantics for disjunctive programs
We introduce the stable model semantics for disjunctive logic programs and deductive databases, which generalizes the stable model semantics defined earlier for normal (i.e., non-disjunctive) programs. Depending on whether only total (2-valued) or all partial (3-valued) models are used, we obtain the disjunctive stable semantics or the partial disjunctive stable semantics, respectively. The proposed semantics are shown to have the following properties (a brute-force sketch of stable models in the normal case follows the list below):
  • For normal programs, the disjunctive (respectively, partial disjunctive) stable semantics coincides with the stable (respectively, partial stable) semantics.
  • For normal programs, the partial disjunctive stable semantics also coincides with the well-founded semantics.
  • For locally stratified disjunctive programs, both (total and partial) disjunctive stable semantics coincide with the perfect model semantics.
  • The partial disjunctive stable semantics can be generalized to the class of all disjunctive logic programs.
  • Both (total and partial) disjunctive stable semantics can be naturally extended to a broader class of disjunctive programs that permit the use of classical negation.
  • After translation of the program P into a suitable autoepistemic theory $\hat P$, the disjunctive (respectively, partial disjunctive) stable semantics of P coincides with the autoepistemic (respectively, 3-valued autoepistemic) semantics of $\hat P$.
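For the normal (non-disjunctive) special case mentioned in the first item, a brute-force sketch of the total stable models is given below: for each candidate set of atoms we form the Gelfond-Lifschitz reduct and check that its least model equals the candidate. This is illustrative only and does not cover the disjunctive or partial semantics introduced in the paper; the rule encoding is our assumption.

    from itertools import chain, combinations

    def least_model(positive_rules):
        """Least model of a negation-free program by naive fixpoint iteration."""
        model = set()
        changed = True
        while changed:
            changed = False
            for head, body in positive_rules:
                if body <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    def stable_models(rules):
        """rules: list of (head, positive_body, negative_body) over string atoms."""
        atoms = set(chain.from_iterable({h} | pb | nb for h, pb, nb in rules))
        models = []
        for r in range(len(atoms) + 1):
            for candidate in map(set, combinations(sorted(atoms), r)):
                # Gelfond-Lifschitz reduct with respect to the candidate set.
                reduct = [(h, pb) for h, pb, nb in rules if not (nb & candidate)]
                if least_model(reduct) == candidate:
                    models.append(candidate)
        return models

    if __name__ == "__main__":
        # p :- not q.      q :- not p.
        program = [("p", set(), {"q"}), ("q", set(), {"p"})]
        print(stable_models(program))   # [{'p'}, {'q'}]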

    19.
    Trial and error     
A pac-learning algorithm is d-space bounded if it stores at most d examples from the sample at any time. We characterize the d-space learnable concept classes. For this purpose we introduce the compression parameter of a concept class b and design our Trial and Error Learning Algorithm. We show: b is d-space learnable if and only if the compression parameter of b is at most d. This learning algorithm does not produce a hypothesis consistent with the whole sample, unlike previous approaches, e.g., by Floyd, who presents consistent space-bounded learning algorithms but has to restrict herself to very special concept classes. On the other hand, our algorithm needs large samples; the compression parameter appears as an exponent in the sample size. We present several examples of polynomial-time space-bounded learnable concept classes (a toy closure-based learner for one of them is sketched after the list):
  • all intersection-closed concept classes with finite VC-dimension,
  • convex n-gons in R^2,
  • halfspaces in R^n,
  • unions of triangles in R^2.
We further relate the compression parameter to the VC-dimension, and discuss variants of this parameter.
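As a toy illustration of space-bounded learning for an intersection-closed class (not the paper's Trial and Error Learning Algorithm), the sketch below learns axis-parallel rectangles in the plane by storing only the closure of the positive examples seen so far, i.e., constant storage rather than the whole sample; all names are ours.

    class RectangleLearner:
        """Keep the smallest axis-parallel rectangle containing all positive examples."""

        def __init__(self):
            self.box = None   # (xmin, xmax, ymin, ymax), or None before any positive

        def update(self, point, label):
            if not label:
                return                    # the closure learner ignores negative examples
            x, y = point
            if self.box is None:
                self.box = (x, x, y, y)
            else:
                xmin, xmax, ymin, ymax = self.box
                self.box = (min(xmin, x), max(xmax, x), min(ymin, y), max(ymax, y))

        def predict(self, point):
            if self.box is None:
                return False
            x, y = point
            xmin, xmax, ymin, ymax = self.box
            return xmin <= x <= xmax and ymin <= y <= ymax

    if __name__ == "__main__":
        learner = RectangleLearner()
        for p in [(1, 1), (3, 2), (2, 4)]:
            learner.update(p, True)
        print(learner.predict((2, 2)), learner.predict((5, 5)))   # True False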

    20.
An atomic snapshot memory object in shared memory systems enables a set of processes, called scanners, to obtain a consistent picture of the shared memory while other processes, called updaters, keep updating memory locations concurrently. In this paper we present two conversion methods for snapshot implementations. Using the first conversion method we obtain a new snapshot implementation in which the scan operation has linear time complexity and the time complexity of the update operation becomes the sum of the time complexities of the original implementation. Applying the second conversion method yields similar results, where in this case the time complexity of the update protocol becomes linear. Although our conversion methods use unbounded space, their space complexity can be bounded using known techniques. One of the most intriguing open problems in distributed wait-free computing is the existence of a linear-time implementation of this object. Using our conversion methods and known constructions we obtain the following results:
  • Consider a system of n processes, each an updater and a scanner. We present an implementation in which the time complexity of either the update or the scan operation is linear, while the time complexity of the second operation is O(n log n).
  • We present an implementation with linear time complexity when the number of either updaters or scanners is O(n/log n), where n is the total number of processes.
  • We present an implementation with amortized linear time complexity when one of the protocols (either update or scan) is executed significantly more often than the other protocol.
