Similar Documents
1.
The subset-sum problem is a well-known NP-complete combinatorial problem that is solvable in pseudo-polynomial time, that is, time proportional to the number of input objects multiplied by the sum of their sizes. This product defines the size of the dynamic programming table used to solve the problem. We show how this problem can be parallelized on three contemporary architectures, that is, a 128-processor Cray Extreme Multithreading (XMT) massively multithreaded machine, a 16-processor IBM x3755 shared memory machine, and a 240-core NVIDIA FX 5800 graphics processing unit (GPU). We show that it is straightforward to parallelize this algorithm on the Cray XMT primarily because of the word-level locking that is available on this architecture. For the other two machines, we present an alternating word algorithm that can implement an efficient solution. Our results show that the GPU performs well for problems whose tables fit within the device memory. Because GPUs typically have memories on the order of 10 GB, such architectures are best for small problem sizes that have tables of size approximately 10^10. The IBM x3755 performs very well on medium-sized problems that fit within its 64-GB memory but has poor scalability as the number of processors increases and is unable to sustain performance as the problem size increases. This machine tends to saturate for problem sizes of 10^11 bits. The Cray XMT shows very good scaling for large problems and demonstrates sustained performance as the problem size increases. However, this machine has poor scaling for small problem sizes; it performs best for problem sizes of 10^12 bits or more. The results in this paper illustrate that the subset-sum problem can be parallelized well on all three architectures, albeit for different ranges of problem sizes. The performance of these three machines under varying problem sizes shows the strengths and weaknesses of the three architectures. Copyright © 2012 John Wiley & Sons, Ltd.
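As a point of reference for the serial baseline being parallelized, the following is a minimal sketch of the pseudo-polynomial subset-sum dynamic program, with the table row packed into machine words so that one shift-and-OR updates many cells at once; the code is illustrative, not the authors' implementation.

```python
def subset_sum(sizes, target):
    """Pseudo-polynomial subset-sum DP (illustrative sketch).

    The DP row is kept as one big integer used as a bitset: bit s is 1
    iff some subset of the items seen so far sums to s. The word-wide
    shift-and-OR update is the serial analogue of the word-level table
    updates that the three parallel implementations build on.
    """
    reachable = 1                       # only the empty sum 0 at first
    mask = (1 << (target + 1)) - 1      # keep sums in [0, target]
    for size in sizes:
        reachable |= (reachable << size) & mask
    return bool((reachable >> target) & 1)

# Example: a subset of {3, 34, 4, 12, 5, 2} sums to 9 (4 + 5).
print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True
```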

2.
This work presents a new algorithm for solving the explicit/multi-parametric model predictive control (or mp-MPC) problem for linear, time-invariant discrete-time systems, based on dynamic programming and multi-parametric programming techniques. The algorithm features two key steps: (i) a dynamic programming step, in which the mp-MPC problem is decomposed into a set of smaller subproblems in which only the current control, state variables, and constraints are considered, and (ii) a multi-parametric programming step, in which each subproblem is solved as a convex multi-parametric programming problem, to derive the control variables as an explicit function of the states. The key feature of the proposed method is that it overcomes potential limitations of previous methods for solving multi-parametric programming problems with dynamic programming, such as the need for global optimization for each subproblem of the dynamic programming step.
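For orientation, the dynamic programming step can be written as the usual backward recursion over the horizon, with each stage then solved as a convex multi-parametric program in the current state; the formulation below is the generic textbook form of that decomposition, not necessarily the paper's exact notation.

```latex
% Backward recursion over horizon N; at stage k only the current
% control u_k, state x_k, and stage constraints appear:
V_N(x_N) = \ell_N(x_N), \qquad
V_k(x_k) = \min_{u_k :\, (x_k, u_k)\ \text{feasible}}
  \bigl\{ \ell(x_k, u_k) + V_{k+1}(A x_k + B u_k) \bigr\},
\quad k = N-1, \ldots, 0.
```

Solving each stage problem multi-parametrically, with x_k as the parameter, yields the control u_k^*(x_k) as an explicit function of the state (piecewise affine in the linear-quadratic case).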

3.
Identifying structure in a given combinatorial problem is often a key step for designing efficient search heuristics or for understanding the inherent complexity of the problem. Several Operations Research approaches apply decomposition or relaxation strategies upon such a structure identified within a given problem. The next step is to design algorithms that adaptively integrate that kind of information during search. Inspired by previous work on impact-based search strategies for constraint programming, we claim in this paper that using an explanation-based constraint solver makes it possible to collect invaluable information on the structure of a problem instance, both static and dynamically revealed. Moreover, we discuss how dedicated OR solving strategies (such as Benders decomposition) could be adapted to constraint programming when specific relationships between variables are exhibited.

4.
Although ordering-based pruning algorithms possess relatively high efficiency, there remains room for further improvement. To this end, this paper describes the combination of a dynamic programming technique with the ensemble-pruning problem. We incorporate dynamic programming into the classical ordering-based ensemble-pruning algorithm with complementariness measure (ComEP) and, with the help of two auxiliary tables, propose a reasonably efficient dynamic form, which we refer to as ComDPEP. To examine the performance of the proposed algorithm, we conduct a series of simulations on four benchmark classification datasets. The experimental results demonstrate the significantly higher efficiency of ComDPEP over the classic ComEP algorithm. The proposed ComDPEP algorithm also outperforms two other state-of-the-art ordering-based ensemble-pruning algorithms, which use uncertainty weighted accuracy and reduce-error pruning, respectively, as their measures. Notably, the effectiveness of ComDPEP is identical to that of the classical ComEP algorithm; only the efficiency differs.
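To make the setting concrete, here is a hedged sketch of ordering-based pruning with a complementariness-style measure. The measure below is one common reading of "complementariness", not the exact ComEP definition, and it recomputes counts every round — precisely the kind of redundancy the paper's two auxiliary tables are designed to remove.

```python
import numpy as np

def order_by_complementariness(correct):
    """Ordering-based ensemble pruning (illustrative sketch only).

    `correct` is an (n_classifiers, n_samples) 0/1 matrix: entry (c, s)
    is 1 iff classifier c labels sample s correctly. At each step we add
    the classifier that is right on most of the samples the current
    sub-ensemble's majority vote gets wrong.
    """
    n, m = correct.shape
    chosen, remaining = [], list(range(n))
    votes = np.zeros(m)                     # correct votes so far
    while remaining:
        hard = votes <= len(chosen) / 2     # samples the ensemble misses
        best = max(remaining, key=lambda c: correct[c, hard].sum())
        chosen.append(best)
        remaining.remove(best)
        votes += correct[best]
    return chosen  # prune the ensemble by truncating this ordering
```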

5.
Due to the huge size of the pattern sets to be searched, multiple-pattern searching remains a challenge for several emerging applications such as network intrusion detection. In this paper, we present an attempt to design efficient multiple-pattern searching algorithms on multi-core architectures. We observe an important feature indicating that the multiple-pattern matching time mainly depends on the number and minimal length of the patterns. The multi-core algorithm proposed in this paper leverages this feature to decompose the pattern set so that the parallel execution time is minimized. We formulate the problem as an optimal decomposition and scheduling of a pattern set, then propose a heuristic algorithm, which takes advantage of dynamic programming and greedy algorithmic techniques, to solve the optimization problem. Experimental results suggest that our decomposition approach can increase the searching speed by more than 200% on a 4-core AMD Barcelona system.
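The observation that matching time depends chiefly on the number of patterns and their minimal length suggests a cost model over pattern groups; the greedy sketch below uses a toy stand-in for such a model (the cost function and balancing rule are assumptions, not the paper's calibrated heuristic).

```python
def decompose_patterns(patterns, cores):
    """Greedy pattern-set decomposition across cores (illustrative).

    cost(group) is a toy model reflecting the paper's observation:
    time grows with the number of patterns in a group and shrinks as
    the group's minimal pattern length grows.
    """
    def cost(group):
        return len(group) / min(len(p) for p in group) if group else 0.0

    groups = [[] for _ in range(cores)]
    # Longest patterns first, each assigned to the currently cheapest core.
    for p in sorted(patterns, key=len, reverse=True):
        min(groups, key=cost).append(p)
    return groups  # parallel time ~ max(cost(g) for g in groups)

print(decompose_patterns(["attack", "GET /", "exploit", "rm -rf"], 2))
```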

6.
This article thoroughly analyzes the basic principles, essential elements, applicability conditions, and solution steps of the dynamic programming algorithm. Using the "Darwin chip" problem, it explains in detail how dynamic programming is applied in practice, and it concludes by summarizing the key points, difficulties, and techniques in using the method.
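The solution steps the article describes — define the state, write the recurrence, fill the table bottom-up — can be illustrated with the classic 0/1 knapsack problem; this example is ours, not the article's Darwin-chip problem.

```python
def knapsack(weights, values, capacity):
    """Classic 0/1 knapsack, illustrating the dynamic programming steps:
    state dp[c] = best value achievable with capacity c; recurrence
    dp[c] = max(dp[c], dp[c - w] + v) for each item (w, v)."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # reverse: use each item once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack([2, 3, 4], [3, 4, 5], 5))  # 7: take the items of weight 2 and 3
```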

7.
International Journal of Computer Mathematics, 2012, 89(12): 1447-1454
A model is developed for allocating tables in a distributed database system. The model considers memory cost, transmission cost, table size and request rates, as well as the updating rates of tables, the maximum allowable expected access time to tables at each computer, and the memory capacity of each computer. The objective is to minimize overall operating cost. In this regard, the model is formulated as a non-linear integer zero-one programming problem, which can be converted into a linear zero-one programming model.
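As background on the conversion mentioned in the last sentence, the textbook device for linearizing a zero-one model replaces each product of binary variables with an auxiliary variable and linear constraints; the instance below is that standard device, not the paper's specific reformulation.

```latex
% For x_1, x_2 \in \{0,1\}, replace the nonlinear term y = x_1 x_2 by a
% new variable y together with the linear constraints
y \le x_1, \qquad y \le x_2, \qquad y \ge x_1 + x_2 - 1, \qquad y \ge 0,
% which force y = 1 exactly when both x_1 = 1 and x_2 = 1.
```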

8.
In a distributed database, maintaining large table replicas under frequent asynchronous insertions is a challenging problem that requires carefully managing a tradeoff between consistency and availability. With that motivation in mind, we propose efficient algorithms to repair and measure replica consistency. Specifically, we adapt, extend, and optimize distributed set reconciliation algorithms to efficiently compute the symmetric difference between replicated tables in a distributed relational database. Our novel algorithms enable fast synchronization of replicas being updated with small sets of new records, measure the obsolescence of replicas having many insertions, and decide when to update a replica, as each table replica is continuously updated in an asynchronous manner. We first present an algorithm to repair and measure distributed consistency of a large table continuously updated with new records at several sites when the number of insertions is small. We then present a complementary algorithm that enables fast synchronization of a summarization table based on foreign keys when the number of insertions is large but concentrated on a few foreign key values. From a distributed systems perspective, the first algorithm reconciles the large table holding the data, whereas the second reconciles its summarization table. Both distributed database algorithms have linear communication complexity and cubic time complexity in the size of the symmetric difference between the respective table replicas they work on; that is, they are effective when the network speed is smaller than the CPU speed at each site. A performance evaluation with synthetic and real databases shows that our algorithms are faster than a previous state-of-the-art algorithm and more efficient than transferring complete tables, assuming large replicated tables and sporadic asynchronous insertions.
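To fix what is being computed, here is a toy baseline that finds the symmetric difference between two replicas from per-row digests. Unlike the paper's set-reconciliation algorithms, it ships all digests rather than an amount of data proportional to the difference; all names are illustrative.

```python
import hashlib

def row_digests(rows):
    """Map each row's primary key to a digest of its remaining fields."""
    return {pk: hashlib.sha256(repr(fields).encode()).hexdigest()
            for pk, *fields in rows}

def symmetric_difference(replica_a, replica_b):
    """Keys of rows that are missing from, or differ between, replicas.

    A naive baseline only: set reconciliation achieves the same result
    while communicating data proportional to the (small) difference.
    """
    da, db = row_digests(replica_a), row_digests(replica_b)
    return {pk for pk in da.keys() | db.keys() if da.get(pk) != db.get(pk)}

a = [(1, "x"), (2, "y")]
b = [(2, "y"), (3, "z")]
print(symmetric_difference(a, b))  # {1, 3}
```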

9.
In this paper, we propose the approach of using multiple hash tables for lock requests with different data access patterns to minimize the number of false contentions in a data sharing environment. We first derive some theoretical results on using multiple hash tables. Then, in light of these derivations, a two-step procedure to design multiple hash tables is developed. In the first step, data items are partitioned into a given number of groups. Each group of data items is associated with its own hash table, such that lock requests for data items in the same group are hashed into the same hash table. In the second step, given an aggregate hash table size, the hash table size for each individual data group is optimally determined so as to minimize the number of false contentions. Some design examples and remarks on the proposed method are given. It is observed in real database systems that different data sets usually have distinct data access patterns, resulting in an environment where this approach can offer significant performance improvement.
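A minimal sketch of the mechanism follows: lock requests are routed to a per-group hash table so that items from different groups can never collide on a bucket. The group map and per-table sizes are placeholders for the two optimization steps the paper describes.

```python
class GroupedLockTables:
    """Multiple lock hash tables, one per data group (illustrative).

    Data items are pre-partitioned by access pattern (step one); each
    group gets its own table, so two items from different groups can
    never hash to the same bucket and cause a false contention. Step
    two would size each table under an aggregate memory budget.
    """
    def __init__(self, group_of, table_sizes):
        self.group_of = group_of                      # item -> group id
        self.tables = [[[] for _ in range(n)] for n in table_sizes]

    def bucket(self, item):
        """Return the lock-request queue (bucket) for a data item."""
        table = self.tables[self.group_of[item]]
        return table[hash(item) % len(table)]

locks = GroupedLockTables({"acct:1": 0, "hot_page": 1}, [8, 64])
locks.bucket("hot_page").append("txn42-exclusive")
```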

10.
Dense stereo correspondence is a challenging research problem in the computer vision field. To address the poor accuracy of stereo matching, we propose a novel stereo matching algorithm based on a guided image filter and modified dynamic programming. First, we suggest a combined matching cost that incorporates the absolute difference and an improved color census transform (ICCT). Second, we use the guided image filter to filter the cost volume, which aggregates the costs quickly and efficiently. Then, in the disparity computation step, we design a modified dynamic programming algorithm that weakens the scanline-streaking effect. Finally, the disparity maps are obtained after post-processing. The experimental results, evaluated on the Middlebury Stereo Datasets, show that our approach achieves good results in both low-texture and depth-discontinuity areas, with an average error rate of 5.14% and strong robustness.
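For context, a plain per-scanline disparity DP over an already-aggregated cost volume looks as follows. The smoothness penalty and winner-take-all readout are simplifications; the paper's modified DP that weakens the streaking effect is not reproduced here.

```python
import numpy as np

def scanline_dp(cost, P=0.1):
    """Textbook per-scanline disparity DP (illustrative sketch).

    cost: (width, ndisp) matching costs for one image row, already
    aggregated (e.g., by a guided filter). P penalizes disparity
    changes between neighboring pixels.
    """
    w, nd = cost.shape
    acc = cost.copy()
    d = np.arange(nd)
    penalty = P * np.abs(d[:, None] - d[None, :])    # |d - d'| * P
    for x in range(1, w):
        # best predecessor disparity for each current disparity
        acc[x] += (acc[x - 1][None, :] + penalty).min(axis=1)
    return acc.argmin(axis=1)  # winner-take-all; backtracking omitted

row_cost = np.random.rand(320, 64).astype(np.float32)
print(scanline_dp(row_cost).shape)  # (320,) disparities for one scanline
```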

11.
We study a single-machine sequencing problem with both release dates and deadlines to minimize the total weighted completion time. We propose a branch-and-bound algorithm for this problem. The algorithm exploits an effective lower bound and a dynamic programming dominance technique. As a byproduct of the lower bound, we have developed a new algorithm for the generalized isotonic regression problem; the algorithm can also be used as an O(n log n)-time timetabling routine in earliness-tardiness scheduling. Extensive computational experiments indicate that the proposed branch-and-bound algorithm competes favorably with a dynamic programming procedure. Note to Practitioners: Real-life production systems usually involve multiple machines and resources. The configurations of such systems may be complex and subject to change over time. Therefore, model-based solution approaches, which aim to solve scheduling problems for specific configurations, will inevitably run into difficulties. By contrast, decomposition methods are much more expressive and extensible. The single-machine problem and its solution procedure studied in this paper will prove useful to a decomposition method that decomposes multiple-machine, multiple-resource scheduling problems into a number of single-machine problems. The total weighted completion time objective is relevant to production environments where inventory levels and manufacturing cycle times are key concerns. Future research can be pursued along two directions. First, it seems necessary to further generalize the problem to also consider negative job weights. Second, the solution procedure developed here is ready to be incorporated into a machine-oriented decomposition method such as the shifting bottleneck procedure.
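As background for the isotonic-regression byproduct mentioned above, the classic pool-adjacent-violators algorithm (PAVA) for ordinary L2 isotonic regression is sketched below; the paper's generalized O(n log n) routine is more elaborate than this textbook scheme.

```python
def isotonic_regression(y, w=None):
    """Pool-Adjacent-Violators for weighted L2 isotonic regression.

    Maintains blocks of (weighted mean, weight, count); whenever a new
    point violates monotonicity, adjacent blocks are pooled into their
    weighted mean. Shown only as standard background, not the paper's
    generalized routine.
    """
    w = w or [1.0] * len(y)
    blocks = []
    for yi, wi in zip(y, w):
        mean, weight, count = yi, wi, 1
        while blocks and blocks[-1][0] >= mean:
            m2, w2, c2 = blocks.pop()
            mean = (mean * weight + m2 * w2) / (weight + w2)
            weight += w2
            count += c2
        blocks.append((mean, weight, count))
    return [m for m, _, c in blocks for _ in range(c)]

print(isotonic_regression([3, 1, 2, 4]))  # [2.0, 2.0, 2.0, 4.0]
```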

12.
Abstract: To address the difficulty of computing the minimum Roman dominating function and the Roman domination number of an interval graph, this paper proposes a dynamic programming algorithm. Starting from an ordering of the vertices of the interval graph, and exploiting certain properties of interval graphs, the algorithm performs a stepwise search that progressively enlarges the set of vertices under consideration, finally producing an optimal Roman dominating set and the Roman domination number. Rigorous mathematical reasoning and proofs are given to guarantee the correctness and soundness of the algorithm. A worked example of the solution process on a typical interval graph is also provided, improving the readability and operability of the algorithm. The results show that the algorithm is not only fast but also simple and practical. Keywords: interval graph; Roman dominating function; Roman domination number; weight; dynamic programming algorithm
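For reference, the objects being computed are the standard graph-theoretic notions below (textbook definitions, not specific to this paper).

```latex
% A Roman dominating function on G = (V, E) assigns f : V \to \{0,1,2\}
% such that every vertex v with f(v) = 0 is adjacent to some vertex u
% with f(u) = 2. The Roman domination number is the minimum weight
\gamma_R(G) = \min_{f} \; \sum_{v \in V} f(v).
```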

13.
A data warehouse can store very large amounts of data that should be processed in parallel in order to achieve reasonable query execution times. The MapReduce programming model is a very convenient way to process large amounts of data in parallel on commodity hardware clusters. A very popular query used in data warehouses is the star-join. In this paper, we present a fast and efficient star-join query execution algorithm built on top of a MapReduce framework called Hadoop. By using dynamic filters against dimension tables, the algorithm needs a single scan of the fact table, which means a significant reduction of input/output operations and computational complexity. Also, the algorithm requires only two MapReduce iterations in total: one to build the filters against the dimension tables and one to scan the fact table. Our experiments show that the proposed algorithm performs much better than the existing solutions in terms of execution time and input/output. Copyright © 2014 John Wiley & Sons, Ltd.
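The two-iteration structure can be sketched as follows, with plain Python sets standing in for the dynamic filters and the Hadoop job plumbing omitted; names and predicates are illustrative.

```python
def build_filters(dim_tables, predicates):
    """Iteration 1: scan each dimension table once and keep the set of
    surrogate keys satisfying the query's predicates. Plain sets stand
    in here for whatever compact filter structure is actually built."""
    return {name: {row["key"] for row in rows if predicates[name](row)}
            for name, rows in dim_tables.items()}

def scan_fact(fact_rows, filters):
    """Iteration 2: a single scan of the fact table; a row survives the
    star-join only if every foreign key passes its dimension filter."""
    for row in fact_rows:
        if all(row[fk] in keys for fk, keys in filters.items()):
            yield row

dims = {"date": [{"key": 1, "year": 2014}, {"key": 2, "year": 2015}]}
preds = {"date": lambda r: r["year"] == 2014}
facts = [{"date": 1, "sales": 9}, {"date": 2, "sales": 5}]
print(list(scan_fact(facts, build_filters(dims, preds))))
# [{'date': 1, 'sales': 9}]
```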

14.
In this study, we present an optimization-based solution to the simultaneous localization and mapping (SLAM) problem. In the proposed algorithm, the SLAM problem is treated as two optimization problems, which are solved using forward dynamic programming. In the first problem, the map is assumed to be perfectly known and the robot path is estimated. In the second problem, the estimated robot path and its corresponding measurements are used to identify the map. Because the optimization problem at each step of the dynamic programming is highly nonlinear, and differential evolution (DE) tends to find the globally optimal solution without being trapped at local optima, DE is developed to solve the dynamic programming at each time step. Simulations and experiments are presented to illustrate the proposed algorithm and exhibit its performance.
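The optimizer applied inside each dynamic-programming step is standard differential evolution; a textbook DE/rand/1/bin minimizer is sketched below with common default parameters, not the authors' settings.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Textbook DE/rand/1/bin minimizer (illustrative sketch).

    For each target vector, a mutant a + F*(b - c) is built from three
    distinct population members, crossed over coordinate-wise with
    probability CR, clamped to bounds, and kept if no worse.
    """
    dim = len(bounds)
    pop = [[random.uniform(*bounds[d]) for d in range(dim)]
           for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)   # ensure one mutated coordinate
            trial = [a[d] + F * (b[d] - c[d])
                     if (random.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

# Example: minimize the 2-D sphere function.
print(differential_evolution(lambda x: sum(v * v for v in x),
                             [(-5, 5), (-5, 5)]))
```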

15.
The original decision-equation-based subinterval elimination algorithm suffers from two problems: its decisions can disagree with the decision table, and its running time grows quadratically as the number of subintervals increases. To address these problems, this paper proposes a new subinterval elimination algorithm based on dynamic programming. Exploiting the strength of dynamic programming for multi-stage decision problems, the new algorithm divides subinterval elimination into two parts, feasibility checking and new-interval generation, both of which can be handled with the subproblem-decomposition idea of dynamic programming. The paper proves that an optimal solution to the original problem can be constructed from the solutions of these subproblems, and it analyzes the time and space complexity of the algorithm. To evaluate the new algorithm, the old and new algorithms are compared both theoretically and experimentally. The experimental results show that the method greatly reduces the time complexity, effectively overcomes the problems caused by growth in the number of subintervals, and improves the flexibility and running speed of the algorithm.

16.
Branch & Reduce and dynamic programming on graphs of bounded treewidth are among the most common and powerful techniques used in the design of moderately exponential-time exact algorithms for NP-hard problems. In this paper we discuss the efficiency of simple algorithms based on combinations of these techniques. The idea behind these algorithms is very natural: if a parameter like the treewidth of a graph is small, algorithms based on dynamic programming perform well. On the other hand, if the treewidth is large, then there must be vertices of high degree in the graph, which is good for branching algorithms. We give several examples of possible combinations of branching and dynamic programming which provide the fastest known algorithms for a number of NP-hard problems. All our algorithms require non-trivial balancing of these two techniques. In the first approach the algorithm either performs fast branching, or, if there is an obstacle to fast branching, this obstacle is used to construct a path decomposition of small width for the original graph. Using this approach we give the fastest known algorithms for Minimum Maximal Matching and for counting all 3-colorings of a graph. In the second approach the branching occurs until the algorithm reaches a subproblem with a small number of edges (and here the right choice of the size of subproblems is crucial), and then dynamic programming is applied to these subproblems of small width. We exemplify this approach by giving the fastest known algorithm to count all minimum weighted dominating sets of a graph. We also discuss how similar techniques can be used to design faster parameterized algorithms. A preliminary version of this paper appeared as "Branching and Treewidth Based Exact Algorithms" in the Proceedings of the 17th International Symposium on Algorithms and Computation (ISAAC 2006) [15]. Additional support by the Research Council of Norway.

17.
A fast intersection-region query algorithm for DDM based on skip lists
HLA is a new distributed simulation standard proposed to improve the interoperability and reusability of simulation systems. The purpose of the Data Distribution Management (DDM) service in HLA is to filter the data received by a federate through the publication and subscription regions declared by federates, thereby reducing network bandwidth consumption. At the core of DDM is the problem of dynamically finding intersections among multidimensional rectangular regions, and the performance of this matching algorithm is crucial to DDM efficiency. Existing dynamic region-matching algorithms cannot simultaneously satisfy the requirements of efficiency, precision, and ease of implementation. This paper therefore designs an interval skip list, a skip-list-based data structure for fast intersection-region queries, analyzes its construction principles and the procedures of its basic operations, and on this basis proposes an intersection-region query algorithm based on the interval skip list. The algorithm achieves high matching precision and low time complexity, and it is simple to implement.
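The elementary test underlying DDM matching is whether two multidimensional rectangular regions intersect, which reduces to per-dimension interval overlap; the interval skip list indexes these intervals so that all regions overlapping a query can be found without pairwise testing. A minimal illustration of the test itself (not the paper's data structure):

```python
def intervals_overlap(a, b):
    """Closed intervals a = (lo, hi) and b = (lo, hi) overlap iff each
    starts no later than the other ends."""
    return a[0] <= b[1] and b[0] <= a[1]

def regions_intersect(r1, r2):
    """Rectangular regions (one interval per dimension) intersect iff
    their intervals overlap in every dimension."""
    return all(intervals_overlap(a, b) for a, b in zip(r1, r2))

pub = [(0, 5), (2, 8)]     # publication region in a 2-D routing space
sub = [(4, 9), (0, 3)]     # subscription region
print(regions_intersect(pub, sub))  # True
```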

18.
Tables are ubiquitous in scientific literature, financial statements, newspapers, magazines, and many other kinds of documents, where they store and present data compactly and carry a wealth of useful information. Table recognition is the foundation for reusing the information in tables; it has important application value and has long been a research hotspot in the pattern recognition field. With the development of deep learning, new studies and methods for table recognition have emerged in large numbers. However, because tables appear in a wide variety of scenarios and styles and in images of uneven quality, many problems in table recognition remain to be solved. To better summarize prior work and support subsequent research, this paper surveys the history and latest progress of the field, both in China and abroad, around three table recognition subtasks: table region detection, structure recognition, and content recognition, covering both traditional methods and deep learning methods. It reviews the relevant datasets and evaluation standards and, based on the mainstream datasets and standards, compares the performance of representative methods for table region detection, structure recognition, and table information extraction. It then contrasts the progress and level of domestic research with that abroad. Finally, in light of the main difficulties and challenges currently facing table recognition, it offers an outlook on future research trends and technical development goals.

19.
We present a new method for predicting RNA secondary structure based on a genetic algorithm. The algorithm is designed to run on a massively parallel SIMD computer. Statistical analysis shows that the program performs well when compared to a dynamic programming algorithm used to solve the same problem. The program has also pointed out a long-standing simplification in the implementation of the original dynamic programming algorithm that sometimes causes it not to find the optimal secondary structure.

20.
Based on an analysis of the course-timetabling problem in primary and secondary schools and a study of related algorithms, this paper proposes a new timetabling algorithm that combines dynamic programming with simulated annealing. Dynamic programming first produces a solution satisfying the constraints, which serves as the initial solution for simulated annealing; simulated annealing then optimizes this initial solution to obtain an improved timetable.
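The second stage can be sketched as a generic simulated annealing skeleton seeded with the dynamic programming solution; the cost and neighborhood functions (e.g., swapping two class periods) are problem-specific and left abstract here.

```python
import math, random

def simulated_annealing(initial, cost, neighbor,
                        t0=1.0, cooling=0.995, iters=5000):
    """Generic simulated annealing skeleton (illustrative sketch).

    `initial` would be the constraint-satisfying timetable produced by
    dynamic programming; worse neighbors are accepted with probability
    exp(-delta / t), which shrinks as the temperature t cools.
    """
    current, best = initial, initial
    t = t0
    for _ in range(iters):
        cand = neighbor(current)
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best

# Toy usage: minimize |x - 37| starting from a "DP solution" x = 0.
print(simulated_annealing(0, lambda x: abs(x - 37),
                          lambda x: x + random.choice([-1, 1])))
```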
