Similar Literature
20 similar documents retrieved (search time: 140 ms)
1.
94035 A class of pipeline-based parallel dynamic programming algorithms // 计算机杂志, 1993(3), 7-16. Dynamic programming is one of the most effective methods for solving combinatorial optimization problems. Based on a pipeline structure, this paper presents and analyzes three similar parallel dynamic programming algorithms (for the simple shortest-path problem, the longest common substring problem, and for solving ...)
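For reference, the longest common substring problem named above has a simple O(nm) dynamic programming recurrence. The sequential C sketch below illustrates it with hypothetical input strings; it does not reproduce the pipeline parallelization studied in the paper.

#include <stdio.h>
#include <string.h>

/* Sequential reference for the longest common substring problem.
 * dp[i][j] is the length of the longest common suffix of a[0..i) and b[0..j);
 * the answer is the maximum entry. The example strings below are hypothetical. */
#define MAXLEN 128

int longest_common_substring(const char *a, const char *b) {
    int n = (int)strlen(a), m = (int)strlen(b), best = 0;
    static int dp[MAXLEN + 1][MAXLEN + 1];   /* row/column 0 stay zero */

    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= m; j++) {
            dp[i][j] = (a[i - 1] == b[j - 1]) ? dp[i - 1][j - 1] + 1 : 0;
            if (dp[i][j] > best) best = dp[i][j];
        }
    }
    return best;
}

int main(void) {
    printf("longest common substring length: %d\n",
           longest_common_substring("dynamicprogramming", "parallelprogramming"));
    return 0;
}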

2.
Dynamic programming is an algorithmic approach to multi-stage decision problems. Solving the shortest-path problem with dynamic programming requires that the problem have clearly identifiable stages. Guided by dynamic programming theory, this paper studies the basic principles and steps of solving shortest paths with a dynamic programming algorithm, and implements a C program based on it to assist in computing the shortest path.
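To make the staged recursion concrete, the following minimal C sketch computes a shortest path in a small layered graph by backward dynamic programming. The stage count, node counts, and edge costs are hypothetical examples, not taken from the cited paper.

#include <stdio.h>
#include <limits.h>

/* Staged shortest path by backward dynamic programming.
 * cost[k][i][j] is the edge cost from node i in stage k to node j in stage k+1
 * (INF means no edge). All values below are hypothetical. */
#define STAGES 4
#define NODES  3
#define INF    (INT_MAX / 2)

int main(void) {
    int cost[STAGES - 1][NODES][NODES] = {
        {{2, 4, INF},   {3, 1, 5},     {INF, 6, 2}},
        {{7, 2, 3},     {1, INF, 4},   {5, 2, INF}},
        {{3, INF, INF}, {2, INF, INF}, {4, INF, INF}}  /* last stage has a single node */
    };

    /* f[k][i]: minimum cost from node i in stage k to the terminal node. */
    int f[STAGES][NODES];
    for (int i = 0; i < NODES; i++)
        f[STAGES - 1][i] = (i == 0) ? 0 : INF;   /* only node 0 exists in the terminal stage */

    /* Backward recursion: f[k][i] = min_j (cost[k][i][j] + f[k+1][j]). */
    for (int k = STAGES - 2; k >= 0; k--) {
        for (int i = 0; i < NODES; i++) {
            f[k][i] = INF;
            for (int j = 0; j < NODES; j++) {
                if (cost[k][i][j] < INF && f[k + 1][j] < INF &&
                    cost[k][i][j] + f[k + 1][j] < f[k][i])
                    f[k][i] = cost[k][i][j] + f[k + 1][j];
            }
        }
    }

    printf("shortest path cost from stage-0 node 0: %d\n", f[0][0]);
    return 0;
}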

3.
赵蔚  吴沧浦 《自动化学报》1994,20(6):694-701
A new algorithm is proposed for solving multi-criteria dynamic programming problems; it is obtained by extending the interactive satisfactory trade-off rate method [1] for multi-objective static programming. By adding auxiliary state variables to transform the mathematical model, the single-criterion dynamic programming problems are converted into static programming problems, and iteration is then carried out. This both reduces the amount of computation and makes the trade-off relations among the criteria easy to obtain. The method places low demands on the decision maker during the human-machine interaction and can quickly produce a satisfactory solution for a common class of multi-criteria dynamic programming problems.

4.
A simulation framework based on neuro-dynamic programming is proposed for scheduling re-entrant production systems. A state set is constructed according to the characteristics of re-entrant production systems, and the scheduling problem is formulated as the corresponding Markov decision process. With a suitably chosen performance index, neuro-dynamic programming generates the dispatching decision at each step, and the policy is optimized during simulation. A simulation example verifies the effectiveness of the method, and a comparison of the results of three scheduling policies demonstrates the superiority of neuro-dynamic programming. The framework can also be extended to other types of production scheduling problems.

5.
Dynamic programming is in fact a method for studying a class of optimization problems, and it is widely applied in economics, engineering, enterprise management, industrial and agricultural production, military affairs, and other fields. In recent years, problems solved with dynamic programming (or partly with dynamic programming thinking) have been not only common in the ACM/ICPC but also varied in form. In similar informatics competitions, solving problems with dynamic programming has become a trend, which is closely related to its advantages. It is more apt to call dynamic programming a way of thinking than an algorithm, because it has no fixed framework: even for the same problem, solution algorithms of several different forms can be constructed. Many algorithms on implicit graphs, such as Dijkstra's algorithm for single-source shortest paths ...

6.
Planning as model finding is a general approach to planning: a plan is extracted from a model of the given problem. This paper surveys the basic methods of planning as model finding. It first introduces SATPLAN and CSP to discuss the general framework of planning as model finding, and then describes the recently developed model-based planning systems BLACKBOX and GP-CSP. After comparing the model-based approach with the deductive and CBP approaches, future research directions for planning as model finding are given.

7.
Building on a basic fire-planning model, a hierarchical model for large-scale fire-planning problems is established. Applying hierarchical optimization and dynamic optimization algorithms for large-scale systems, a new hierarchical dynamic programming algorithm for solving the model is proposed. The method has a clear hierarchical structure, reduces computational complexity, and is well suited to parallel computation; it can quickly find the optimal fire-allocation scheme and optimal solution of the fire-planning problem. A simulation example demonstrates the practicality of the method.

8.
连广宇  赵清杰  孙增圻 《机器人》2002,24(6):550-553
The key to path-constrained optimal trajectory planning is to introduce a scalar path parameter to reduce the dimensionality of the optimization problem. When the path passes through a singular point, the joint displacements are hard to express as analytic functions of the task-space path parameter, which creates difficulties for conventional path parameterization methods. This paper introduces a new parameterization that takes the arc length of the solution curve of the path-tracking equation as the parameter, solving the path-tracking problem in the neighborhood of singular points and converting trajectory planning along singular paths into a conventional planning problem, which is then solved with dynamic programming. Simulations show that the proposed parameterization, combined with dynamic programming, is an effective approach to optimal trajectory planning along singular paths.

9.
For mobile-robot path planning in two-dimensional dynamic scenes, a novel path-planning method, continuous dynamic movement primitives (CDMPs), is proposed. The method generalizes the traditional single dynamic movement primitive to continuous dynamic movement primitives: by learning from demonstrated motion trajectories, the weight sequences of the movement primitives are obtained, and unknown moving targets are tracked by updating the phase variable. The method removes the mobile robot's dependence on an environment model and solves the path-planning problem of tracking moving targets and avoiding dynamic obstacles in dynamic scenes. A series of simulation experiments verifies the feasibility of the algorithm; the results show that, for mobile-robot path planning in dynamic scenes, the CDMPs algorithm outperforms the traditional DMPs method in continuity and planning efficiency.

10.
A new LPN path-planning algorithm based on a dynamic-information model is proposed. Incorporating the dynamic information of obstacles into the planning method yields better performance in dynamic environments. The shortcomings of the original dynamic-information model are analyzed and remedied, a new dynamic-information model is proposed, and path planning is performed in combination with the LPN gradient algorithm. Simulation experiments and tests on a RoboCup middle-size-league robot demonstrate the effectiveness of the method.

11.
Dynamic programming is an important paradigm that has been widely used to solve problems in various areas such as control theory, operation research, biology and computer science. We generalize the finite automaton formal model for dynamic programming deriving pipeline parallel algorithms. The optimality of these algorithms is established for the new class of non-decreasing finite automata. As an intermediate step for the construction of a skeleton for the automatic parallelization of dynamic programming, we have developed a tool for the implementation of pipeline algorithms. The tool maps the processes in the pipeline in the target architecture following a mix of block and cyclic policies adapted to the grain of the machine. Based on the former tool, the automatic parallelization of dynamic programming is straightforward. The use of the model and its associated tools is illustrated with the Single Resource Allocation Problem. The performance and portability of these tools are compared with specific 'hand made' code written by experienced programmers. The experimental results on distributed memory and shared distributed memory architectures prove the scalability of the proposed paradigm and its associated tools. Copyright © 2000 John Wiley & Sons, Ltd.

12.
In this paper, a design methodology for synthesizing efficient parallel algorithms and VLSI architectures is presented. A design process starts with a problem definition specified in the parallel programming language Crystal and is followed by a series of program transformations in Crystal, each aiming at optimizing the target design for a specific purpose. To illustrate the design methodology, a set of design methods for deriving systolic algorithms and architectures is given and the use of these methods in the design of a dynamic programming solver is described. The design methodology, together with this particular set of design methods, can be viewed as a general theory of systolic designs (or multidimensional pipelines). The fact that Crystal is a general purpose language for parallel programming allows new design methods and synthesis techniques, properties and theorems about problems in specific application domains, and new insights into any given problem to be integrated readily within the existing design framework.

13.
New standards in signal, multimedia, and network processing for embedded electronics are characterized by computationally intensive algorithms and by the need for high flexibility due to swift changes in specifications. To meet the demanding challenges of increasing computational requirements and stringent constraints on area and power consumption in embedded engineering, there is a gradual trend towards coarse-grained parallel embedded processors. Furthermore, such processors are equipped with dynamic reconfiguration features to support time- and space-multiplexed execution of algorithms. However, the formidable problem of efficiently mapping applications (mostly loop algorithms) onto such architectures has hindered their mass acceptance. In this paper we present (a) a highly parameterizable, tightly coupled, and reconfigurable parallel processor architecture together with the corresponding power breakdown and reconfiguration time analysis of a case-study application, (b) a retargetable methodology for mapping loop algorithms, (c) a co-design framework for modeling, simulation, and programming of such architectures, and (d) loosely coupled communication with the host processor.

14.
15.
We study the parallel computation of dynamic programming. We consider four important dynamic programming problems which have wide application, and that have been studied extensively in sequential computation: (1) the 1D problem, (2) the gap problem, (3) the parenthesis problem, and (4) the RNA problem. The parenthesis problem has fast parallel algorithms; almost no work has been done for parallelizing the other three. We present a unifying framework for the parallel computation of dynamic programming recurrences with more than O(1) dependency. We use two well-known methods, the closure method and the matrix product method, as general paradigms for developing parallel algorithms. Combined with various techniques, they lead to a number of new results. Our main results are optimal sublinear-time algorithms for the 1D, parenthesis, and RNA problems.
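The parenthesis problem mentioned above is exemplified by matrix-chain multiplication; its classic O(n^3) sequential recurrence is sketched below in C purely as a point of reference. The dimension array is hypothetical, and the paper's parallel algorithms are not reproduced here.

#include <stdio.h>
#include <limits.h>

/* Sequential baseline for the "parenthesis problem": matrix-chain multiplication.
 * dims[] is hypothetical; matrix i has dimensions dims[i-1] x dims[i].
 * Recurrence: m[i][j] = min over k of m[i][k] + m[k+1][j] + dims[i-1]*dims[k]*dims[j]. */
#define N 5   /* number of matrices */

int main(void) {
    int dims[N + 1] = {30, 35, 15, 5, 10, 20};
    long m[N + 1][N + 1] = {0};

    for (int len = 2; len <= N; len++) {              /* chain length */
        for (int i = 1; i + len - 1 <= N; i++) {
            int j = i + len - 1;
            m[i][j] = LONG_MAX;
            for (int k = i; k < j; k++) {
                long c = m[i][k] + m[k + 1][j]
                       + (long)dims[i - 1] * dims[k] * dims[j];
                if (c < m[i][j]) m[i][j] = c;
            }
        }
    }
    printf("minimum scalar multiplications: %ld\n", m[1][N]);
    return 0;
}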

16.
Parallel programming is elusive. The relative performance of different parallel implementations varies with machine architecture, system and problem size. How to compare different implementations over a wide range of machine architectures and problem sizes has not been well addressed due to its difficulty. Scalability has been proposed in recent years to reveal scaling properties of parallel algorithms and machines. In this paper, the relation between scalability and execution time is carefully studied. The concepts of crossing point analysis and range comparison are introduced. Crossing point analysis finds slow/fast performance crossing points of parallel algorithms and machines. Range comparison compares performance over a wide range of ensemble and problem size via scalability and crossing point analysis. Three algorithms from scientific computing are implemented on an Intel Paragon and an IBM SP2 parallel computer. Experimental and theoretical results show how the combination of scalability, crossing point analysis, and range comparison provides a practical solution for scalable performance evaluation and prediction. While our tests were conducted on homogeneous parallel computers, the proposed methodology applies to heterogeneous and network computing as well.

17.
The standard DP (dynamic programming) algorithms are limited by the substantial computational demands they put on contemporary serial computers. In this work, the solution of serial monadic dynamic programming problems is used to highlight the theory and application of parallel dynamic programming on a general-purpose architecture (a cluster or network of workstations). A simple and well-known technique, message passing, is considered. Several parallel serial monadic DP algorithms are proposed, based on parallelization in the state variables and in the decision variables. Algorithms with no interpolation are also proposed. It is demonstrated how constraints introduce load imbalance, which affects scalability, and how this problem is inherent to DP.
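As a rough illustration of parallelization in the state variables with message passing (one of the strategies described above), the following C/MPI sketch splits each stage's state space across ranks and reassembles the value function with MPI_Allgather. The transition-cost function, problem sizes, and data layout are hypothetical assumptions and do not come from the cited paper.

#include <mpi.h>
#include <stdio.h>
#include <float.h>

/* Backward DP parallelized over the state variable: each rank evaluates the
 * recursion for its block of states, then the stage's full value function is
 * re-assembled on every rank with MPI_Allgather. */
#define STATES 64
#define STAGES 10

static double cost(int stage, int from, int to) {
    return (from - to) * (from - to) * 0.1 + stage;   /* hypothetical transition cost */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int block = STATES / size;            /* assumes STATES is divisible by size */
    double v[STATES] = {0.0};             /* value function at stage k+1 (terminal values are 0) */
    double local[STATES];                 /* this rank's slice at stage k */

    for (int k = STAGES - 1; k >= 0; k--) {
        for (int s = 0; s < block; s++) {
            int state = rank * block + s;
            double best = DBL_MAX;
            for (int next = 0; next < STATES; next++) {
                double c = cost(k, state, next) + v[next];
                if (c < best) best = c;
            }
            local[s] = best;
        }
        /* exchange slices so every rank holds the complete value function */
        MPI_Allgather(local, block, MPI_DOUBLE, v, block, MPI_DOUBLE, MPI_COMM_WORLD);
    }

    if (rank == 0) printf("v[0] at stage 0: %f\n", v[0]);
    MPI_Finalize();
    return 0;
}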

18.
The field of parallel metaheuristics is continuously evolving as a result of new technologies and needs that researchers have been encountering. In the last decade, new algorithmic models, new hardware for parallel execution/communication, and new challenges in solving complex problems have been advancing rapidly. Here we summarize the state of the art and discuss how to deal with some of these growing topics. These topics include the use of classic parallel models on recent platforms (such as grid/cloud architectures and GPU/APU). However, porting existing algorithms to new hardware is not enough as a scientific goal; researchers are therefore looking for new parallel optimization and learning models targeted at these new architectures. Parallel metaheuristics have also been applied in recent years to new problem domains such as dynamic optimization and multiobjective problem resolution. In this article, we review these recent research areas in connection with parallel metaheuristics, identify future trends, and point out possible open research lines for research groups and PhD students.

19.
Parallel architectures and algorithms for image component labeling
A survey and a characterization of the various parallel algorithms and architectures developed for the problem of labeling digitized images over the last two decades are presented. It is shown that four basic parallel techniques underlie the various parallel algorithms for this problem. However, because most of these techniques have been developed at a theoretical level, it is still not clear which techniques are most efficient in practical terms. Parallel architectures and parallel models of computation that implement these techniques are also studied.

20.
Dynamic programming techniques are well-established and employed by various practical algorithms, including the edit-distance algorithm or the dynamic time warping algorithm. These algorithms usually operate in an iteration-based manner where new values are computed from values of the previous iteration. The data dependencies enforce synchronization which limits possibilities for internal parallel processing. In this paper, we investigate parallel approaches to processing matrix-based dynamic programming algorithms on modern multicore CPUs, Intel Xeon Phi accelerators, and general purpose GPUs. We address both the problem of computing a single distance on large inputs and the problem of computing a number of distances of smaller inputs simultaneously (e.g., when a similarity query is being resolved). Our proposed solutions yielded significant improvements in performance and achieved speedup of two orders of magnitude when compared to the serial baseline.
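The data dependencies mentioned above are commonly relaxed by sweeping the DP matrix along anti-diagonals (a wavefront), since cells on the same anti-diagonal are mutually independent. The C/OpenMP sketch below shows this generic idea for edit distance; it is an illustration of the technique under that assumption, not the paper's implementation, and the test strings are hypothetical.

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Wavefront edit distance: cell (i, j) depends on (i-1, j), (i, j-1) and
 * (i-1, j-1), so all cells on one anti-diagonal can be computed in parallel.
 * Compile with -fopenmp to enable the pragma; without it the code runs serially. */
static int min3(int a, int b, int c) {
    int m = a < b ? a : b;
    return m < c ? m : c;
}

int edit_distance(const char *a, const char *b) {
    int n = (int)strlen(a), m = (int)strlen(b);
    int *d = malloc((size_t)(n + 1) * (m + 1) * sizeof *d);
    for (int i = 0; i <= n; i++) d[i * (m + 1)] = i;   /* first column */
    for (int j = 0; j <= m; j++) d[j] = j;             /* first row */

    for (int diag = 2; diag <= n + m; diag++) {        /* anti-diagonal index i + j */
        int lo = diag - m > 1 ? diag - m : 1;
        int hi = diag - 1 < n ? diag - 1 : n;
        #pragma omp parallel for                       /* cells on one diagonal are independent */
        for (int i = lo; i <= hi; i++) {
            int j = diag - i;
            int sub = (a[i - 1] == b[j - 1]) ? 0 : 1;
            d[i * (m + 1) + j] = min3(d[(i - 1) * (m + 1) + j] + 1,
                                      d[i * (m + 1) + j - 1] + 1,
                                      d[(i - 1) * (m + 1) + j - 1] + sub);
        }
    }
    int result = d[n * (m + 1) + m];
    free(d);
    return result;
}

int main(void) {
    printf("edit distance: %d\n", edit_distance("kitten", "sitting"));
    return 0;
}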
