Similar Documents
1.
We present parallel algorithms for some fundamental problems in computational geometry which have a running time of O(log n) using n processors, with very high probability (approaching 1 as n → ∞). These include planar point location, triangulation, and trapezoidal decomposition. We also present optimal algorithms for three-dimensional maxima and two-set dominance counting by an application of integer sorting. Most of these algorithms run on a CREW PRAM model and have optimal processor-time products, which improve on the previously best-known algorithms of Atallah and Goodrich [5] for these problems. The crux of these algorithms is a useful data structure which emulates the plane-sweeping paradigm used in sequential algorithms. We extend some of the techniques used by Reischuk [26] and Reif and Valiant [25] for the flashsort algorithm to perform divide and conquer in the plane very efficiently, leading to the improved performance of our approach. This is a substantially revised version of the paper that appeared as "Optimal Randomized Parallel Algorithms for Computational Geometry" in the Proceedings of the 16th International Conference on Parallel Processing, St. Charles, Illinois, August 1987. This research was supported by DARPA/ARO Contract DAAL03-88-K-0195, Air Force Contract AFOSR-87-0386, DARPA/ISTO Contracts N00014-88-K-0458 and N00014-91-J-1985, and by NASA Subcontract 550-63 of Prime contract NAS5-30428.

2.
We present a technique that can be used to obtain efficient parallel geometric algorithms in the EREW PRAM computational model. This technique enables us to solve optimally a number of geometric problems in O(log n) time using O(n/log n) EREW PRAM processors, where n is the input size of a problem. These problems include: computing the convex hull of a set of points in the plane that are given sorted, computing the convex hull of a simple polygon, computing the common intersection of half-planes whose slopes are given sorted, finding the kernel of a simple polygon, triangulating a set of points in the plane that are given sorted, triangulating monotone polygons and star-shaped polygons, and computing the all dominating neighbors of a sequence of values. PRAM algorithms for these problems were previously known to be optimal (i.e., in O(log n) time and using O(n/log n) processors) only on the CREW PRAM, which is a stronger model than the EREW PRAM.

3.
Interest in many-objective optimization has grown due to the limitations of Pareto-dominance-based Multi-Objective Evolutionary Algorithms when dealing with problems that have a high number of objectives. Recently, some many-objective techniques have been proposed to avoid the deterioration of these algorithms' search ability. At the same time, interest in the use of Particle Swarm Optimization (PSO) algorithms for multi-objective problems has also grown. PSO has been found to be very efficient in solving multi-objective problems (MOPs), and several Multi-Objective Particle Swarm Optimization (MOPSO) algorithms have been proposed. This work presents a study of the behavior of MOPSO algorithms on many-objective problems. The many-objective technique named control of dominance area of solutions (CDAS) is used with two Multi-Objective Particle Swarm Optimization algorithms. An empirical analysis is performed to identify the influence of the CDAS technique on the convergence and diversity of MOPSO algorithms using three different many-objective problems. The experimental results are compared using quality indicators and statistical tests.
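For reference, the CDAS technique modifies each objective value before the usual Pareto-dominance test: with r the length of the objective vector and omega_i its angle to the i-th objective axis, the transformed value is f_i'(x) = r·sin(omega_i + S_i·pi)/sin(S_i·pi), where S_i = 0.5 recovers ordinary dominance and S_i < 0.5 expands each solution's dominance area. The sketch below is our own minimal illustration of that transform (function names and the minimisation convention are assumptions, not taken from the paper):

    import numpy as np

    def cdas_transform(f, s):
        # f: objective vector of one solution (minimisation assumed)
        # s: per-objective degree parameters in (0, 0.5]; 0.5 = ordinary dominance
        f = np.asarray(f, dtype=float)
        r = np.linalg.norm(f)                          # length of the objective vector
        if r == 0.0:
            return f.copy()
        omega = np.arccos(np.clip(f / r, -1.0, 1.0))   # angle to each objective axis
        return r * np.sin(omega + s * np.pi) / np.sin(s * np.pi)

    def cdas_dominates(fa, fb, s):
        # Pareto dominance checked on the transformed objective vectors.
        ga, gb = cdas_transform(fa, s), cdas_transform(fb, s)
        return bool(np.all(ga <= gb) and np.any(ga < gb))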

4.
A species conserving genetic algorithm for multimodal function optimization
This paper introduces a new technique called species conservation for evolving parallel subpopulations. The technique is based on the concept of dividing the population into several species according to their similarity. Each of these species is built around a dominating individual called the species seed. Species seeds found in the current generation are saved (conserved) by moving them into the next generation. Our technique has proved to be very effective in finding multiple solutions of multimodal optimization problems. We demonstrate this by applying it to a set of test problems, including some problems known to be deceptive to genetic algorithms.
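As a rough sketch of the species-seed idea (our own illustrative code, assuming a real-coded population, a maximisation problem, and a user-chosen species distance sigma_s), seeds can be found greedily from fittest to least fit and then carried over into the next generation:

    import numpy as np

    def find_species_seeds(population, fitness, sigma_s):
        # population: (n, d) array of individuals; fitness: length-n array.
        # Scanning from fittest to least fit, an individual becomes a seed
        # unless it lies within sigma_s / 2 of an already chosen (fitter) seed.
        order = np.argsort(fitness)[::-1]            # fittest first (maximisation)
        seeds = []
        for i in order:
            if all(np.linalg.norm(population[i] - population[j]) > sigma_s / 2.0
                   for j in seeds):
                seeds.append(i)
        return seeds   # indices of the dominating individuals to conserve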

5.
Recent metamodel-based global optimization algorithms are very promising for box-constrained expensive optimization problems. However, few of them can tackle constrained optimization problems. This article presents an improved constrained optimization algorithm, called eDIRECT-C, for expensive constrained optimization problems. In the eDIRECT-C algorithm, we present a novel DIRECT-type constraint-handling technique that handles feasible and infeasible cells separately. This technique has no user-defined parameter and is beneficial for exploring undetected feasible regions and the boundary of the feasible region. We also employ an adaptive metamodeling strategy to build appropriate metamodel types for the objective and constraints, respectively. This strategy yields more accurate predictions and therefore significantly speeds up convergence. To assess the performance of eDIRECT-C, we compare it with several state-of-the-art metamodel-based constrained optimization algorithms and the original DIRECT algorithm on 13 benchmark problems and 4 engineering examples. The comparative results imply that the proposed algorithm is very promising for constrained problems in terms of convergence speed, quality of final solutions, and success rate.

6.
An impulse signal is a signal that changes instantaneously within a very short time. It is widely used in signal processing, electronic engineering, communication, and system identification. This paper considers parameter estimation problems for dynamical systems based on impulse response measurement data. Since the cost function is highly nonlinear, nonlinear optimization methods are adopted to derive the parameter estimation algorithms and to enhance the estimation accuracy. Using an iterative scheme, a Newton iterative algorithm and a gradient iterative algorithm are proposed for estimating the parameters of dynamical systems. A damping factor is also introduced to improve the algorithm stability. Finally, using simulation examples, this paper analyzes and compares the merits and weaknesses of the proposed algorithms.
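The abstract does not spell out the update rules; as a hedged illustration, a damped Newton-type iteration and a plain gradient iteration for a nonlinear least-squares cost J(theta) = ||r(theta)||^2 built from impulse-response data might look as follows (function names, the Gauss-Newton Hessian approximation, and the fixed damping factor mu are our assumptions, not the paper's formulation):

    import numpy as np

    def newton_estimate(theta0, residual, jacobian, mu=1e-2, iters=50):
        # residual(theta): vector of prediction errors; jacobian(theta): its Jacobian.
        # mu is the damping factor that keeps the normal equations well conditioned.
        theta = np.asarray(theta0, dtype=float)
        for _ in range(iters):
            r = residual(theta)
            J = jacobian(theta)
            g = J.T @ r                                # gradient direction of the cost
            H = J.T @ J + mu * np.eye(theta.size)      # damped Hessian approximation
            theta = theta - np.linalg.solve(H, g)
        return theta

    def gradient_estimate(theta0, residual, jacobian, step=1e-3, iters=500):
        # Gradient iteration on the same cost; cheaper per step but slower to converge.
        theta = np.asarray(theta0, dtype=float)
        for _ in range(iters):
            theta = theta - step * (jacobian(theta).T @ residual(theta))
        return theta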

7.
Problem solvers are computational systems which make use of different search algorithms for solving problems. Sometimes, while employing such search algorithms, problem solvers may prove to be inefficient and take too great an effort to show that the problem has no solution. For such cases, in this paper we explain a technique which provides a quick proof that finding a solution is actually impossible. This technique reduces the number, and simplifies the topology, of the states which shape a problem space. Hence, we present and prove efficient new techniques intended to find such reductions, which can be very useful for many problems.

8.
A sequence of exact algorithms to solve the Vertex Cover and Maximum Independent Set problems has been proposed in the literature. All these algorithms appeal to a very conservative analysis that considers the size of the search tree, under a worst-case scenario, to derive an upper bound on the running time of the algorithm. In this paper we propose a different approach to analyze the size of the search tree. We use amortized analysis to show how simple algorithms, if analyzed properly, may perform much better than the upper bounds on their running time derived by considering only a worst-case scenario. This approach allows us to present a simple algorithm of running time O(1.194^k k^2 + n) for the parameterized Vertex Cover problem on degree-3 graphs, and a simple algorithm of running time O(1.1255^n) for the Maximum Independent Set problem on degree-3 graphs. Both algorithms improve the previous best algorithms for the problems.
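For context, the kind of search tree whose size such analyses bound is the classical bounded search tree for parameterized Vertex Cover, which branches on the two endpoints of an uncovered edge. The sketch below is the textbook O(2^k) scheme, not the refined degree-3 algorithm of the paper:

    def vertex_cover_branch(edges, k):
        # edges: list of (u, v) pairs; returns a vertex cover of size <= k, or None.
        if not edges:
            return set()                  # every edge is covered
        if k == 0:
            return None                   # budget exhausted
        u, v = edges[0]                   # branch on either endpoint of an uncovered edge
        for w in (u, v):
            rest = [e for e in edges if w not in e]
            cover = vertex_cover_branch(rest, k - 1)
            if cover is not None:
                return cover | {w}
        return None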

9.
Implementors of graphics application programming interfaces (APIs) and algorithms are often required to support a plethora of options for several stages at the back end of the geometry and rasterization pipeline. Implementing these options in high-level programming languages such as C leads to code with many branches and large object modules due to indiscriminate duplication of very similar code. This reduces the speed of execution of the program. This paper examines the problems of branches and code size in detail and presents techniques for transforming typical code sequences in graphics programs to reduce the number of branches and the size of the program. A set of branch-free basis functions is defined first. The application of these functions to common geometric queries, geometry pipeline computations, rasterization, and pixel processing algorithms is then discussed.
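The paper defines its own set of branch-free basis functions; as a loose, language-agnostic illustration of the idea (names and conventions are ours, not the paper's), a boolean condition treated as a 0/1 integer can replace if/else selection in inner loops:

    def bf_select(c, a, b):
        # Branch-free select: returns a when condition c is true, b otherwise.
        c = int(bool(c))
        return c * a + (1 - c) * b

    def bf_min(a, b):
        return bf_select(a < b, a, b)

    def bf_clamp(x, lo, hi):
        # Clamp without if/else chains, e.g. for pixel values.
        return bf_min(bf_select(x > lo, x, lo), hi)

    def outcode(x, y, xmin, ymin, xmax, ymax):
        # Region code computed without branches, usable for trivial
        # accept/reject tests when clipping against a rectangle.
        return (int(x < xmin) << 0) | (int(x > xmax) << 1) | \
               (int(y < ymin) << 2) | (int(y > ymax) << 3)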

10.
Clustering is a popular data analysis and data mining technique: the unsupervised classification of patterns into groups. Many algorithms for large data sets have been proposed in the literature using different techniques. However, conventional algorithms have shortcomings such as slow convergence, sensitivity to initial values, and the need for a preset number of classes in large-scale data sets, and they still require much investigation to improve performance and efficiency. Over the last decade, clustering with ant-based and swarm-based algorithms has been emerging as an alternative to more traditional clustering techniques. Many complex optimization problems still exist, and it is often very difficult to obtain the desired result with one of these algorithms alone. Thus, robust and flexible optimization techniques are needed to generate good clustering results. Some algorithms that imitate certain natural principles, known as evolutionary algorithms, have been used in a wide variety of real-world applications. Recently, much research has proposed hybrid evolutionary algorithms to solve the clustering problem. This paper provides a survey of hybrid evolutionary algorithms for cluster analysis.

11.
Nowadays, large-scale optimisation problems are a very interesting field of research, because they appear in many real-world applications (bio-computing, data mining, etc.). Thus, scalability becomes an essential requirement for modern optimisation algorithms. In a previous work, we presented memetic algorithms based on local search chains. A local search chain concerns the idea that, at one stage, the local search operator may continue the operation of a previous invocation, starting from the final configuration reached by that invocation. Using this technique, a memetic algorithm, MA-CMA-Chains, was presented, using the CMA-ES algorithm as its local search component. This proposal obtained very good results for continuous optimisation problems, in particular of medium size (up to dimension 50). Unfortunately, CMA-ES scalability is restricted by several costly operations, so MA-CMA-Chains could not be successfully applied to large-scale problems. In this article we study the scalability of memetic algorithms based on local search chains, creating memetic algorithms with different local search methods and comparing them, considering both the error values and the processing cost. We also propose a variation of the Solis-Wets method, which we call the Subgrouping Solis-Wets algorithm. This local search method explores, at each step of the algorithm, only a random subset of the variables; this subset changes after a certain number of evaluations. Finally, we propose a new memetic algorithm based on local search chains for high dimensionality, MA-SSW-Chains, which uses the Subgrouping Solis-Wets algorithm as its local search method. This algorithm is compared with MA-CMA-Chains and different reference algorithms, and it is shown that the proposal is fairly scalable and statistically very competitive for high-dimensional problems.
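A minimal sketch of a Solis-Wets style local search restricted to a random subgroup of variables follows; the step-size control, bias update, and parameter names are simplified assumptions on our part, not the exact published procedure:

    import numpy as np

    def subgrouping_solis_wets(x, f, rho=0.5, frac=0.2, regroup_every=100,
                               max_evals=2000, rng=None):
        # Minimise f over x, perturbing only a random subset of the variables;
        # the subset is re-sampled every `regroup_every` evaluations.
        rng = rng or np.random.default_rng()
        x = np.asarray(x, dtype=float)
        fx, evals = f(x), 1
        bias = np.zeros_like(x)
        k = max(1, int(frac * x.size))
        mask = rng.choice(x.size, k, replace=False)
        succ = fail = 0
        while evals < max_evals:
            if evals % regroup_every == 0:             # pick a new subgroup of variables
                mask = rng.choice(x.size, k, replace=False)
            step = np.zeros_like(x)
            step[mask] = rng.normal(bias[mask], rho)
            improved = False
            for sign in (1.0, -1.0):                   # try the step and its opposite
                cand = x + sign * step
                fc = f(cand); evals += 1
                if fc < fx:
                    x, fx = cand, fc
                    bias = 0.2 * bias + 0.4 * sign * step
                    improved = True
                    break
            if improved:
                succ, fail = succ + 1, 0
            else:
                bias, succ, fail = 0.5 * bias, 0, fail + 1
            if succ >= 5:  rho, succ = 2.0 * rho, 0    # expand after repeated successes
            if fail >= 3:  rho, fail = 0.5 * rho, 0    # contract after repeated failures
        return x, fx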

12.
In this paper we describe a technique for finding efficient parallel algorithms for problems on directed graphs that involve checking the existence of certain kinds of paths in the graph. This technique provides efficient algorithms for finding dominators in flow graphs, performing interval and loop analysis on reducible flow graphs, and finding the feedback vertices of a digraph. Each of these algorithms takes O(log^2 n) time using the same number of processors needed for fast matrix multiplication. All of these bounds are for an EREW PRAM.
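The link between path existence and matrix multiplication can be seen in the classical serial construction below: ceil(log2 n) Boolean squarings of the adjacency matrix yield the transitive closure, and on a PRAM each squaring is one fast matrix multiplication. This sketch only illustrates that structure, not the dominator or feedback-vertex algorithms themselves:

    import numpy as np

    def reachability(adj):
        # adj: (n, n) 0/1 adjacency matrix of a digraph.
        # Returns reach[i, j] == True iff there is a path (of length >= 0) from i to j.
        n = adj.shape[0]
        reach = ((adj != 0) | np.eye(n, dtype=bool)).astype(np.int64)  # paths of length <= 1
        for _ in range(max(1, int(np.ceil(np.log2(n))))):
            reach = ((reach @ reach) > 0).astype(np.int64)             # Boolean squaring
        return reach.astype(bool)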

13.
Measure & Conquer (M&C) is a prominent technique for analyzing exact algorithms for computationally hard problems, in particular, graph problems. It tries to balance worse and better situations within the algorithm analysis. This has led, e.g., to algorithms for Minimum Vertex Cover with a running time of O(c^n) for some constant c ≈ 1.2, where n is the number of vertices in the graph. Several obstacles prevent the application of this technique in parameterized algorithmics, making it rarely applied in this area. However, these difficulties can be handled in some situations. We will exemplify this with two problems related to Vertex Cover, namely Connected Vertex Cover and Edge Dominating Set. For both problems, several parameterized algorithms have been published, all based on the idea of first enumerating minimal vertex covers. Using M&C in this context will allow us to improve on the hitherto published running times. In contrast to some of the earlier suggested algorithms, ours will use polynomial space.

14.
Techniques for solving linear equations on a single instruction multiple data (SIMD) computer such as the ICL DAP have so far been confined to simple methods such as the Successive Overrelaxation and Alternating Direction Implicit algorithms. While these techniques are adequate for simple finite difference problems, more difficult problems require more complex algorithms. Preconditioned conjugate gradient methods have solved difficult problems successfully on serial machines. This paper describes a preconditioning technique suitable for parallel machines and presents numerical results obtained from a series of problems of varying degrees of difficulty.
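For orientation, the generic preconditioned conjugate gradient loop is shown below with a simple diagonal (Jacobi) preconditioner standing in for the paper's own parallel preconditioning technique, chosen here only because applying a diagonal preconditioner is itself trivially data-parallel; this is an illustrative sketch, not the method of the paper:

    import numpy as np

    def pcg(A, b, tol=1e-8, max_iter=500):
        # Solve A x = b for symmetric positive definite A with a Jacobi preconditioner.
        x = np.zeros_like(b, dtype=float)
        M_inv = 1.0 / np.diag(A)              # diagonal (Jacobi) preconditioner
        r = b - A @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x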

15.
In this paper we give parallel algorithms for a number of problems defined on point sets and polygons. All our algorithms have optimal T(n) * P(n) products, where T(n) is the time complexity and P(n) is the number of processors used, and are for the EREW PRAM or CREW PRAM models. Our algorithms provide parallel analogues to well-known phenomena from sequential computational geometry, such as the fact that problems for polygons can oftentimes be solved more efficiently than point-set problems, and that nearest-neighbor problems can be solved without explicitly constructing a Voronoi diagram. The research of R. Cole was supported in part by NSF Grants CCR-8702271, CCR-8902221, and CCR-8906949, by ONR Grant N00014-85-K-0046, and by a John Simon Guggenheim Memorial Foundation fellowship. M. T. Goodrich's research was supported by the National Science Foundation under Grant CCR-8810568 and by the National Science Foundation and DARPA under Grant CCR-8908092.

16.
Spectral grouping using the Nyström method
Spectral graph theoretic methods have recently shown great promise for the problem of image segmentation. However, due to the computational demands of these approaches, applications to large problems such as spatiotemporal data and high-resolution imagery have been slow to appear. The contribution of this paper is a method that substantially reduces the computational requirements of grouping algorithms based on spectral partitioning, making it feasible to apply them to very large grouping problems. Our approach is based on a technique for the numerical solution of eigenfunction problems known as the Nyström method. This method allows one to extrapolate the complete grouping solution using only a small number of samples. In doing so, we leverage the fact that there are far fewer coherent groups in a scene than pixels.
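In its basic form, the Nyström extension approximates the leading eigenvectors of the full affinity matrix from a small sample: eigendecompose the sample-sample block and extrapolate to the remaining points through the sample-rest block. The sketch below shows only this textbook step, without the normalization and orthogonalization used for Normalized Cuts in the paper (variable names are ours):

    import numpy as np

    def nystrom_eigenvectors(W_aa, W_ab):
        # W_aa: (m, m) affinities among the m sampled points.
        # W_ab: (m, n - m) affinities between the samples and the remaining points.
        vals, U = np.linalg.eigh(W_aa)                 # eigendecomposition of the sample block
        order = np.argsort(vals)[::-1]                 # largest eigenvalues first
        vals, U = vals[order], U[:, order]
        U_rest = (W_ab.T @ U) / vals                   # Nystrom extrapolation to unsampled points
        return np.vstack([U, U_rest]), vals            # approximate eigenvectors for all points
        # In practice, near-zero eigenvalues should be truncated before dividing.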

17.
We examine the worst-case performance of a class of heuristic scheduling algorithms commonly referred to as priority-driven or list-scheduling algorithms. It is well known that these algorithms have anomalous, unpredictable performance when used to schedule nonpreemptive tasks with precedence constraints. We present a general method for deriving the worst-case performance of these algorithms. This method is easy to use, yet powerful enough to yield tight performance bounds for many classes of scheduling problems. We demonstrate the method for several problems to show it has wide applicability. We also present several task systems for which list-scheduling algorithms exhibit unavoidable worst-case performance and discuss the general characteristics of these task systems. These task systems are sometimes overlooked in simulation studies; consequently, the results of these studies may be overly optimistic. This work was partially supported by the US Navy, ONR Contract No. NVY N0001489 J1181.
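The class of algorithms being analyzed can be summarized by the sketch below: a nonpreemptive list scheduler that, whenever a processor is free, starts the highest-priority task whose predecessors have all completed. This is our own illustrative code for m identical processors, not the paper's notation or its bounding method:

    import heapq

    def list_schedule(proc_time, preds, priority, m):
        # proc_time: task -> processing time; preds: task -> set of predecessors;
        # priority: task -> priority (smaller value = dispatched first); m processors.
        indeg = {t: len(preds.get(t, ())) for t in proc_time}
        succ = {t: [] for t in proc_time}
        for t, ps in preds.items():
            for p in ps:
                succ[p].append(t)
        ready = [(priority[t], t) for t in proc_time if indeg[t] == 0]
        heapq.heapify(ready)
        running = []                       # heap of (finish_time, task)
        start, now, free = {}, 0.0, m
        while ready or running:
            while ready and free > 0:      # greedily dispatch ready tasks
                _, t = heapq.heappop(ready)
                start[t] = now
                heapq.heappush(running, (now + proc_time[t], t))
                free -= 1
            now, done = heapq.heappop(running)   # advance to the next completion
            free += 1
            for s in succ[done]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    heapq.heappush(ready, (priority[s], s))
        return start                        # task -> start time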

18.
Finding the cells intersected by a ray in a uniformly subdivided space is a technique used in accelerated ray tracing and also in other computer graphics applications. A related problem is that of finding the sequence of grid points in a 3D voxel space which best approximates a given line. This paper examines each of these problems, introduces a unifying approach, and provides a generalisation of previous algorithms. Written while on sabbatical leave from the Department of Computer Science, QMW University of London, Mile End Road, London E1 4NS, UK.
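As a point of reference, the incremental grid traversal being generalised is typically written as a 3D DDA: keep, per axis, the ray parameter at which the next cell boundary is crossed, and repeatedly step along the axis with the smallest such value. The sketch below is a generic version of that idea (our own illustration, not the paper's unified algorithm):

    import math

    def traverse_grid(origin, direction, cell_size, n_steps):
        # Yields the integer (ix, iy, iz) indices of successive cells pierced by the ray.
        cell = [int(math.floor(o / cell_size)) for o in origin]
        step, t_max, t_delta = [], [], []
        for o, d, c in zip(origin, direction, cell):
            if d > 0:
                step.append(1)
                t_max.append(((c + 1) * cell_size - o) / d)   # distance to the next boundary
                t_delta.append(cell_size / d)                 # distance between boundaries
            elif d < 0:
                step.append(-1)
                t_max.append((c * cell_size - o) / d)
                t_delta.append(-cell_size / d)
            else:
                step.append(0)
                t_max.append(math.inf)
                t_delta.append(math.inf)
        for _ in range(n_steps):
            yield tuple(cell)
            axis = t_max.index(min(t_max))    # axis whose boundary the ray crosses first
            cell[axis] += step[axis]
            t_max[axis] += t_delta[axis]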

19.
Five scheduling problems are considered, concerning unit-length independent tasks and uniform machines. New improved optimal algorithms are presented that can solve these problems in at most O(n log n) time, where n is the number of tasks. The existing algorithms solve most of these problems in O(n^3) time. Proofs of optimality of the present algorithms are included, and simple representative examples are provided that illustrate the type of results obtained.
