Similar Documents (20 results)
1.
We study two problems related to planar motion planning for robots with imperfect control, where, if the robot starts a linear movement in a certain commanded direction, we only know that its actual movement will be confined to a cone of angle α centered around the specified direction.

First, we consider a single goal region, namely the “region at infinity”, and a set of polygonal obstacles, modeled as a set S of n line segments. We are interested in the region from which we can reach infinity with a directional uncertainty of α. We prove that the maximum complexity of this region is O(n/α⁵). Second, we consider a collection of k polygonal goal regions of total complexity m, but without any obstacles. Here we prove an O(k³m) bound on the complexity of the region from which we can reach a goal region with a directional uncertainty of α. For both situations we also prove lower bounds on the maximum complexity, and we give efficient algorithms for computing the regions.


2.
In this paper, we study the problem of finding routing algorithms on multirate rearrangeable Clos networks which use as few middle-stage switches as possible. We propose a new routing algorithm called the “grouping algorithm”. This is a simple algorithm which uses fewer middle-stage switches than all known strategies, given that the number of input-stage and output-stage switches is relatively small compared to the size of the input and output switches. In particular, the grouping algorithm implies that m = 2n + (n−1)/(2k) is a sufficient number of middle-stage switches for the symmetric three-stage Clos network C(n,m,r) to be multirate rearrangeable, where k is any positive integer and r ≤ n/(2k−1).
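As a small illustration, the sufficient condition quoted above can be turned into a one-line calculation. The sketch below is not from the paper; it simply evaluates m = 2n + (n−1)/(2k), rounding up because m counts switches, under the assumed reading that the denominator is 2·k, and the function name is ours.

```python
import math

def sufficient_middle_switches(n: int, k: int) -> int:
    """Evaluate the bound m = 2n + (n-1)/(2k) quoted in the abstract,
    rounded up since m counts middle-stage switches (the rounding is an
    assumption made here, not a statement from the paper)."""
    return 2 * n + math.ceil((n - 1) / (2 * k))

# e.g. switch size n = 16 and k = 2 gives 2*16 + ceil(15/4) = 36
print(sufficient_middle_switches(16, 2))  # 36
```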

3.
A parallel two-list algorithm for the knapsack problem
An n-element knapsack problem has 2^n possible solutions to search over, so an exhaustive search requires 2^n trials. Because of this exponential time, the problem is considered very hard. In the past decade, much effort has been devoted to finding techniques that could lead to practical algorithms with reasonable running time. In 1994, Chang et al. proposed a brilliant parallel algorithm, which needs O(2^(n/8)) processors to solve the knapsack problem in O(2^(n/2)) time; that is, the cost of Chang et al.'s parallel algorithm is O(2^(5n/8)). In this paper, we propose a parallel algorithm that improves on Chang et al.'s by reducing the time complexity to O(2^(3n/8)) with the same O(2^(n/8)) processors. Thus, the proposed parallel algorithm has a cost of O(2^(n/2)), an improvement over the previous literature. We believe the proposed parallel algorithm is pragmatically feasible now that multiprocessor systems are becoming more and more popular.
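The “two-list” idea underlying this line of work can be sketched sequentially: enumerate the subset sums of each half of the items, sort one list, and match the two lists with binary search. The Python sketch below only illustrates that idea (for the subset-sum form of the problem); it is not the parallel algorithm proposed in the paper, and all names are ours.

```python
from bisect import bisect_right

def subset_sums(items):
    """All subset sums of a list of item weights (2^len(items) values)."""
    sums = [0]
    for w in items:
        sums += [s + w for s in sums]
    return sums

def two_list_knapsack(weights, capacity):
    """Best achievable total weight <= capacity via the classic two-list idea:
    enumerate subset sums of each half, sort one half, and match the halves
    with binary search -- roughly O(2^(n/2)) time and space."""
    half = len(weights) // 2
    left = subset_sums(weights[:half])
    right = sorted(subset_sums(weights[half:]))
    best = 0
    for a in left:
        if a > capacity:
            continue
        # largest right-half sum b with a + b <= capacity
        i = bisect_right(right, capacity - a) - 1
        if i >= 0:
            best = max(best, a + right[i])
    return best

print(two_list_knapsack([8, 6, 5, 3, 11, 7], 20))  # -> 20 (e.g. 8 + 5 + 7)
```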

4.
The problem of planning a path for a point robot from a source point s to a destination point d so as to avoid a set of polygonal obstacles in the plane is considered. Using well-known methods, a shortest path from s to d can be computed with a time complexity of O(n²), where n is the total number of obstacle vertices. The focus here is on

(a) planning paths faster at the expense of settling for suboptimal path lengths, and

(b) performance analysis of simple and/or well-known suboptimal methods.

A method that enables a hierarchical implementation of any path planning algorithm with no increase in the worst-case time complexity is presented; this implementation enables fast planning of simple paths. Then methods based on Voronoi diagrams, trapezoidal decomposition and triangulation are presented, which compute (suboptimal) paths in O(n log n) time with preprocessing costs of O(n log n), O(n²) and O(n log n), respectively. Using existing navigational algorithms for unknown terrains, algorithms that run in O(n log n) time (after preprocessing) and yield suboptimal paths are presented. For all these algorithms, upper bounds on the path lengths are estimated in terms of the shortest path length, the perimeters of the obstacles, etc.


5.
In this paper, we propose algorithms to solve the topological ordering problem, the breadth-first search problem and the connected component problem under the broadcast communication model. The basic idea of our algorithms is to divide a graph into several layers; only after all vertices in one layer have been processed do we begin to process the vertices in the next layer. Thus, the number of broadcast conflicts is reduced. We also propose a randomized conflict resolution scheme to resolve conflicts. We show that the average time complexity of our algorithms is Θ(n), where n is both the number of available processors and the number of vertices in the graph.
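The layer-by-layer idea is easy to picture on a sequential machine: finish one BFS layer completely before touching the next. The sketch below is ordinary Python, not the broadcast communication model, and is only meant to show the layering; the function names are ours.

```python
def bfs_layers(adj, source):
    """Group vertices into BFS layers: every vertex at distance d is
    collected before any vertex at distance d+1 is examined -- the
    layer-by-layer processing idea described in the abstract."""
    visited = {source}
    layers = [[source]]
    while layers[-1]:
        nxt = []
        for u in layers[-1]:
            for v in adj.get(u, []):
                if v not in visited:
                    visited.add(v)
                    nxt.append(v)
        layers.append(nxt)
    return layers[:-1]  # drop the trailing empty layer

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
print(bfs_layers(adj, 0))  # [[0], [1, 2], [3], [4]]
```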

6.
尤洁  李劲    张赛  李婷 《智能系统学报》2019,14(4):761-768
Existing link prediction algorithms have high computational complexity and are therefore ill-suited to link prediction on large-scale graphs. To address this problem, this paper optimizes existing link prediction methods using graph sketching, an approximation technique, and proposes a graph-sketch-based link prediction method. The method reduces the computational complexity of link prediction from O(n³) to O(n²k²log²n). To further improve link prediction efficiency, a Spark-based parallel implementation of the method is given. Tests on real graph data sets show that the proposed method effectively improves algorithmic efficiency while preserving link prediction accuracy.
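For readers unfamiliar with graph sketching, the flavor of the approach can be conveyed with a bottom-k sketch of each vertex's neighbor set, from which neighborhood overlap (a common link-prediction score) is estimated without touching the full adjacency lists. This is a generic illustration under our own assumptions, not the construction used in the paper; all names are ours.

```python
import hashlib

def bottom_k_sketch(neighbors, k=16):
    """Bottom-k sketch of a neighbor set: keep the k smallest hash values."""
    hashes = sorted(int(hashlib.md5(str(v).encode()).hexdigest(), 16)
                    for v in neighbors)
    return set(hashes[:k])

def jaccard_estimate(sketch_a, sketch_b, k=16):
    """Estimate the Jaccard similarity of the two original neighbor sets
    from their bottom-k sketches (used here as a link-prediction score):
    among the k smallest hashes of the union, count those present in both."""
    union_k = set(sorted(sketch_a | sketch_b)[:k])
    if not union_k:
        return 0.0
    return len(union_k & sketch_a & sketch_b) / len(union_k)

a = bottom_k_sketch(range(0, 200), k=16)
b = bottom_k_sketch(range(100, 300), k=16)
print(round(jaccard_estimate(a, b, k=16), 2))  # roughly 1/3, the true Jaccard
```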

7.
The calculation of a perspective image of an object involves the position and orientation of the viewer. In many graphics applications the viewpoint and object are fixed, and an orientation is sought in which the object is “centered” in the field of view. Previous work has proposed that the viewing direction be the axis of the narrowest circular cone emanating from the viewpoint and containing the object, and has shown how this direction can be calculated from the viewpoint and a set of n object points which includes the object's extreme points. This paper presents algorithms which accomplish this task efficiently, in O(n) average-case and O(n log n) worst-case time. The orientation problem is converted into a problem in spherical geometry, and the proposed algorithms are based on existing algorithms for the analogous plane geometry problems.

8.
Yi Pan  Keqin Li 《Information Sciences》1999,120(1-4):209-221
The computation of Euclidean distance maps (EDM), also called the Euclidean distance transform, is a basic operation in computer vision, pattern recognition, and robotics. Fast computation of the EDM is needed since most of the applications using the EDM require real-time computation. It is shown in L. Chen and H.Y.H. Chuang [Information Processing Letters, 51, pp. 25–29 (1994)] that a lower bound Ω(n²) is required for any sequential EDM algorithm, due to the fact that in any EDM algorithm each of the n² pixels has to be scanned at least once. Recently, many parallel EDM algorithms have been proposed to speed up its computation. Chen and Chuang proposed an algorithm for computing the EDM on an n×n mesh in O(n) time [L. Chen and H.Y.H. Chuang, Parallel Computing, 21, pp. 841–852 (1995)]. Clearly, the VLSI complexities of both the sequential algorithm and the mesh algorithm described in L. Chen and H.Y.H. Chuang [Parallel Computing, 21, pp. 841–852 (1995)] are AT² = O(n⁴), where A is the VLSI layout area of the design and T is the computation time using area A when implemented in VLSI. In this paper, we propose a new and faster parallel algorithm for the EDM problem on the reconfigurable VLSI mesh model. For the same problem, our algorithm runs in O(1) time on a two-dimensional n²×n² reconfigurable mesh. We show that the VLSI complexity of our algorithm is the same as those of the sequential algorithm and the mesh algorithm above, while it uses much less time. To the best of our knowledge, this is the first constant-time EDM algorithm on any parallel computational model.
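As a point of reference for what the EDM computes, here is a brute-force sequential version in NumPy (quadratic in the number of feature pixels per image pixel, so far slower than the Ω(n²)-optimal sequential algorithm, let alone the constant-time mesh algorithm); it is purely illustrative and the names are ours.

```python
import numpy as np

def euclidean_distance_map(image):
    """Exact Euclidean distance map of a binary image: for every pixel, the
    distance to the nearest feature (value 1) pixel.  Brute force over all
    feature pixels; assumes the image contains at least one feature pixel."""
    feats = np.argwhere(image == 1)            # coordinates of feature pixels
    ys, xs = np.indices(image.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1)
    d = np.sqrt(((pts[:, None, :] - feats[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(image.shape)

img = np.zeros((5, 5), dtype=int)
img[2, 2] = 1
print(euclidean_distance_map(img)[0, 0])  # sqrt(8) ≈ 2.83
```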

9.
For a least-squares problem of degree-m polynomial regression with n measured values (n ≥ m + 1), the traditional methods need O(n²m) arithmetic operations. We prove that the arithmetic complexity of this problem does not exceed O(n log₂² m).
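For context, a standard dense least-squares polynomial fit (the kind of computation whose arithmetic cost the paper bounds) can be written in a few lines of NumPy; this is only the textbook baseline, not the fast algorithm of the abstract, and the function name is ours.

```python
import numpy as np

def polyfit_least_squares(x, y, m):
    """Least-squares fit of a degree-m polynomial to n points (x, y)
    via the Vandermonde matrix and a standard least-squares solve --
    the classical dense approach."""
    V = np.vander(x, m + 1, increasing=True)   # columns 1, x, x^2, ..., x^m
    coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
    return coeffs                               # c0, c1, ..., cm

x = np.linspace(0, 1, 50)
y = 3 + 2 * x - x**2
print(np.round(polyfit_least_squares(x, y, 2), 3))  # approx [ 3.  2. -1.]
```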

10.
This paper outlines an algorithm for optimum linear ordering (OLO) of a weighted parallel graph with O(n log k) worst-case time complexity and O(n + k log(n/k) log k) expected-case time complexity, where n is the total number of nodes and k is the number of chains in the parallel graph. Next, the two-layer OLO problem is considered, where the goal is to place the nodes linearly in two routing layers so as to minimize the total wire length. The two-layer problem is shown to subsume the maxcut problem, and a befitting heuristic algorithm is proposed. Experimental results on randomly generated samples show that the heuristic algorithm runs very fast and outputs optimum solutions in more than 90% of the instances.

11.
The complexity of performing matrix computations, such as solving a linear system, inverting a nonsingular matrix or computing its rank, has received a lot of attention from both the theory and the scientific computing communities. In this paper we address some “nonclassical” matrix problems that find extensive applications, notably in control theory. More precisely, we study the matrix equations AX + XAᵀ = C and AX − XB = C, the “inverse” of the eigenvalue problem (called pole assignment), and the problem of testing whether the matrix [B AB … Aⁿ⁻¹B] has full row rank. For these problems we show two kinds of PRAM algorithms: on one side very fast, i.e. polylog time, algorithms and on the other side almost linear time and processor-efficient algorithms. In the latter case, the algorithms rely on basic matrix computations that can be performed efficiently also on realistic machine models.
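To make the first pair of equations concrete, the Sylvester equation AX − XB = C can be solved for small dense matrices by the vec/Kronecker identity; the sketch below is only a didactic illustration of the equation being studied, not one of the PRAM algorithms of the paper, and the names are ours.

```python
import numpy as np

def solve_sylvester_kron(A, B, C):
    """Solve AX - XB = C via the identity
    (I ⊗ A - B^T ⊗ I) vec(X) = vec(C), with column-major vec.
    Dense and slow; requires A and B to share no eigenvalue."""
    n, m = A.shape[0], B.shape[0]
    K = np.kron(np.eye(m), A) - np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.reshape(-1, order="F"))   # column-major vec(C)
    return x.reshape((n, m), order="F")

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[-1.0, 0.0], [1.0, -2.0]])
X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
C = A @ X_true - X_true @ B
print(np.allclose(solve_sylvester_kron(A, B, C), X_true))  # True
```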

12.
We substantially improve the known algorithms for approximating all the complex zeros of an nth degree polynomial p(x). Our new algorithms save both Boolean and arithmetic sequential time, versus the previous best algorithms of Schönhage [1], Pan [2], and Neff and Reif [3]. In parallel (NC) implementation, we dramatically decrease the number of processors, versus the parallel algorithm of Neff [4], which was the only NC algorithm known for this problem so far. Specifically, under the simple normalization assumption that the variable x has been scaled so as to confine the zeros of p(x) to the unit disc {x : |x| ≤ 1}, our algorithms (which promise to be practically effective) approximate all the zeros of p(x) within the absolute error bound 2^(−b), by using order of n arithmetic operations and order of (b + n)n² Boolean (bitwise) operations (in both cases up to polylogarithmic factors). The algorithms allow their optimal (work-preserving) NC parallelization, so that they can be implemented using polylogarithmic time and order of n arithmetic processors or (b + n)n² Boolean processors. All the cited bounds on the computational complexity are within polylogarithmic factors of the optimum (in terms of n and b) under both arithmetic and Boolean models of computation (in the Boolean case, under the additional realistic assumption that n = O(b)).
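As a baseline for what “approximating all complex zeros” means in practice, NumPy's companion-matrix eigenvalue approach (numpy.roots) does the job for modest degrees; the near-optimal complexity bounds above refer to far more sophisticated algorithms, so this snippet is only for orientation.

```python
import numpy as np

def all_zeros(coeffs):
    """Approximate all complex zeros of p(x) = c0 + c1*x + ... + cn*x^n
    via companion-matrix eigenvalues (what numpy.roots computes) --
    a practical baseline, not the near-optimal algorithm of the abstract."""
    # numpy.roots expects the highest-degree coefficient first
    return np.roots(coeffs[::-1])

# p(x) = x^3 - 1; its zeros are the three cube roots of unity (all |x| <= 1)
print(np.round(all_zeros([-1, 0, 0, 1]), 3))
```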

13.
The computational complexity of general change of basis algorithms from one bivariate polynomial basis of degree n to another bivariate polynomial basis of degree n using matrix multiplication is O(n⁴). Applying blossoming and duality, we derive change of basis algorithms with computational complexity O(n³) between two important classes of polynomial bases used for representing surfaces in CAGD: B-bases and L-bases. Change of basis algorithms for B-bases follow from their blossoming property; change of basis algorithms for L-bases follow from the duality between L-bases and B-bases. The Bézier and multinomial bases are special cases of both B-bases and L-bases, so these algorithms can be used to convert between the Bézier and multinomial forms. We also show that the bivariate Horner evaluation algorithm for the multinomial basis is dual to the bivariate de Boor evaluation algorithm for B-patches.
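The univariate analogue of these conversions can be stated explicitly: the Bernstein (Bézier) coefficients b_i of a degree-n polynomial determine its monomial coefficients via a_k = Σ_{i≤k} (−1)^(k−i) C(n,k) C(k,i) b_i. The sketch below implements only this univariate case as an illustration; the paper's O(n³) bivariate algorithms are substantially more involved, and the function name is ours.

```python
from math import comb

def bernstein_to_monomial(b):
    """Convert a degree-n polynomial from the Bernstein (Bezier) basis to
    the monomial (power) basis, the univariate analogue of the bivariate
    Bezier <-> multinomial conversions discussed in the abstract:
    a_k = sum_{i<=k} (-1)^(k-i) * C(n,k) * C(k,i) * b_i."""
    n = len(b) - 1
    return [sum((-1) ** (k - i) * comb(n, k) * comb(k, i) * b[i]
                for i in range(k + 1))
            for k in range(n + 1)]

# The Bezier coefficients of p(t) = t^2 on [0,1] are [0, 0, 1]
print(bernstein_to_monomial([0.0, 0.0, 1.0]))  # [0.0, 0.0, 1.0]
```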

14.
王帅  杨晓东 《计算机应用》2018,38(11):3287-3292
To resolve the conflict between estimation accuracy and complexity in existing tag-quantity estimation algorithms, an RFID tag-quantity estimation algorithm based on sequential linear Bayesian estimation is proposed after analyzing and comparing existing algorithms. First, based on linear Bayesian theory, a linear model of the tag-quantity estimation problem is established that fully exploits the observed numbers of idle, successful and collision slots and their correlations. Then, a closed-form expression for the estimated number of tags is derived, and a sequential method for computing the statistics appearing in that expression is given. Finally, the computational complexity of the sequential Bayesian algorithm is analyzed and compared with existing methods. Simulation results show that the proposed algorithm improves estimation accuracy and identification efficiency through the sequential Bayesian approach; when the number of observed slots is half the frame length, the estimation error is only 4%. The algorithm updates the tag-quantity estimate through a linear analytical expression and avoids exhaustive search; compared with the high-accuracy maximum a posteriori and Mahalanobis-distance algorithms, the computational complexity drops from O(n²) and O(n) to O(1). Theoretical analysis and simulation verify that the sequential linear Bayesian RFID tag-quantity estimation algorithm combines high accuracy with low complexity and can well satisfy the need to estimate the number of tags in application scenarios with limited hardware resources.
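To see what is being estimated, a crude baseline is an exhaustive maximum-likelihood search over the tag count given the observed idle/success/collision slot counts of a frame-slotted ALOHA round; the paper's contribution is precisely to avoid this kind of search with a closed-form sequential Bayesian update. The sketch below is that baseline, under the standard assumption that each tag picks one of L slots uniformly at random; all names are ours.

```python
from math import log

def slot_probs(n, L):
    """Per-slot probabilities of idle / success / collision when n tags
    each choose one of L slots uniformly at random."""
    p_idle = (1 - 1 / L) ** n
    p_succ = n / L * (1 - 1 / L) ** (n - 1)
    p_coll = max(1.0 - p_idle - p_succ, 1e-12)
    return p_idle, p_succ, p_coll

def ml_tag_estimate(idle, succ, coll, L, n_max=1000):
    """Maximum-likelihood tag count given observed slot counts -- an
    exhaustive-search baseline, i.e. the kind of search the sequential
    Bayesian estimator in the abstract is designed to avoid."""
    def loglik(n):
        pi, ps, pc = slot_probs(n, L)
        return idle * log(pi) + succ * log(ps) + coll * log(pc)
    return max(range(max(succ, 1), n_max + 1), key=loglik)

# 128-slot frame, observed 40 idle, 48 success, 40 collision slots
print(ml_tag_estimate(40, 48, 40, L=128))
```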

15.
Constrained multibody system dynamics: an automated approach
The governing equations for constrained multibody systems are formulated in a manner suitable for their automated, numerical development and solution. Specifically, the “closed loop” problem of multibody chain systems is addressed.

The governing equations are developed by modifying dynamical equations obtained from Lagrange's form of d'Alembert's principle. This modification, which is based upon a solution of the constraint equations obtained through a “zero eigenvalues theorem,” is, in effect, a contraction of the dynamical equations.

It is observed that, for a system with n generalized coordinates and m constraint equations, the coefficients in the constraint equations may be viewed as “constraint vectors” in n-dimensional space. Then, in this setting the system itself is free to move in the n − m directions which are “orthogonal” to the constraint vectors.


16.
The spectrum, Sp(φ), of a sentence φ is the set of cardinalities of finite structures which satisfy φ. We prove that any set of integers which is in Func₁, i.e. in the class of spectra of first-order sentences of a type containing only unary function symbols, is also in BIN₁, i.e. in the class of spectra of first-order sentences of a type involving only a single binary relation.

We give similar results for generalized spectra and some corollaries: in particular, from the fact that the large complexity class ⋃_c NTIME_RAM(cn) is included in Func₁ for unary languages (n denotes the input integer), we deduce that the set of primes and many “natural” sets belong to BIN₁.

We also give some consequences for the image of spectra under polynomials.


17.
We describe algorithms for constructing optimal binary search trees, in which the access cost of a key depends on the k preceding keys which were reached in the path to it. This problem has applications to searching on secondary memory and robotics. Two kinds of optimal trees are considered, namely optimal worst-case trees and weighted average-case trees. The time and space complexities of both algorithms are O(n^(k+2)) and O(n^(k+1)), respectively. The algorithms are based on a convenient decomposition and characterizations of sequences of keys which are paths of special kinds in binary search trees. Finally, using generating functions, we present an exact analysis of the number of steps performed by the algorithms.
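The k = 0 (memoryless) special case of the weighted average-case problem is the classical optimal binary search tree recurrence, sketched below for orientation; the paper's O(n^(k+2))-time algorithms extend this kind of dynamic program to access costs that depend on the k preceding keys. The code is only the textbook DP, not the authors' algorithm, and the names are ours.

```python
def optimal_bst_cost(weights):
    """Minimum expected access cost of a binary search tree over keys with
    the given access probabilities (root counted at depth 1).  Classical
    O(n^3) dynamic program: C[i][j] is the optimal cost for keys i..j-1,
    C[i][j] = W(i,j) + min over roots r of C[i][r] + C[r+1][j]."""
    n = len(weights)
    prefix = [0.0] * (n + 1)
    for i, w in enumerate(weights):
        prefix[i + 1] = prefix[i] + w
    C = [[0.0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            C[i][j] = (prefix[j] - prefix[i]) + min(
                C[i][r] + C[r + 1][j] for r in range(i, j))
    return C[0][n]

print(optimal_bst_cost([0.25, 0.2, 0.05, 0.2, 0.3]))
```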

18.
Current gene-sequence comparison protocols generally require a trusted third party, which can lead to large-scale leakage of private data. To address this problem, a gene comparison scheme based on linear scanning is proposed. First, the gene sequences of the two parties are encoded using garbled circuits (GC); then the entire genome database is scanned linearly and the client's gene sequence is compared, via the garbled circuit, with every sequence in the database. This scheme achieves gene comparison while protecting the privacy of both parties. However, it must scan the whole genome database, so its time complexity is O(n), which is inefficient when the genome database is large. To improve the efficiency of gene comparison, a gene comparison scheme based on oblivious random access memory (ORAM) is further proposed: the gene data are first stored in the ORAM, and then only the data items on the target path are retrieved and compared using garbled circuits. The number of comparisons in this scheme is sublinear in the size of the database, with time complexity O(log n). Experimental results show that the ORAM-based scheme reduces the number of comparisons from O(n) to O(log n) while preserving privacy, significantly lowering the time complexity of the comparison operation; it can be used for disease diagnosis and is especially suitable for scenarios with large genome databases.

19.
We show that the elementary theory of Boolean algebras is log-complete for the Berman complexity class ⋃_{c<ω} STA(∗, 2^(cn), n), the class of sets accepted by alternating Turing machines running in time 2^(cn) for some constant c and making at most n alternations on inputs of length n; thus the theory is computationally equivalent to the theory of real addition with order. We extend the completeness results to various subclasses of Boolean algebras, including the finite, free, atomic, atomless, and complete Boolean algebras. Finally we show that the theory of any finite collection of finite Boolean algebras is complete for PSPACE, while the theory of any other collection is log-hard for ⋃_{c<ω} STA(∗, 2^(cn), n).

20.
We consider the problem of inferring the evolutionary tree of a set of n species. We propose a quartet reconstruction method which specifically produces trees whose edges have strong combinatorial evidence. Let Q be a set of resolved quartets defined on the studied species; the method computes the unique maximum subset Q* of Q which is equivalent to a tree and outputs the corresponding tree as an estimate of the species' phylogeny. We use a characterization of the subset Q* due to Bandelt and Dress (Adv. Appl. Math. 7 (1986) 309–343) to provide an O(n⁴) incremental algorithm for this variant of the NP-hard quartet consistency problem. Moreover, when choosing the resolution of the quartets by the four-point method (FPM) and considering the Cavender–Farris model of evolution, we show that the convergence rate of the Q* method is at worst polynomial when the maximum evolutionary distance between two species is bounded. We complete these theoretical results with an experimental study on real and simulated data sets. The results show that (i) as expected, the strong combinatorial constraints it imposes on each edge lead the Q* method to propose very few incorrect edges; (ii) more surprisingly, the method infers trees with a relatively high degree of resolution.
