Similar Articles
20 similar articles found (search time: 109 ms)
1.
Branch-and-bound algorithms are organized and intelligently structured searches of solutions in a combinatorially large problem space. In this paper, we propose an approximate stochastic model of branch-and-bound algorithms with a best-first search. We have estimated the average memory space required and have predicted the average number of subproblems expanded before the process terminates. Both measures are exponentials with sublinear exponents. In addition, we have also compared the number of subproblems expanded in a best-first search to that expanded in a depth-first search. Depth-first search has been found to have computational complexity comparable to best-first search when the lower-bound function is very accurate or very inaccurate; otherwise, best-first search is usually better. The results obtained are useful in studying the efficient evaluation of branch-and-bound algorithms in a virtual memory environment. They also confirm that approximations are very effective in reducing the total number of iterations.
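As a hedged illustration of the kind of best-first branch-and-bound process modeled above, the following Python sketch expands subproblems from a priority queue ordered by an optimistic bound. The 0-1 knapsack instance, the fractional-relaxation bound, and all names are illustrative assumptions, not taken from the paper.

```python
import heapq

def knapsack_best_first(values, weights, capacity):
    """Best-first branch-and-bound for 0/1 knapsack (toy illustration).

    Subproblems are (level, value, weight); the priority queue is ordered by
    an optimistic fractional-relaxation bound, so the most promising
    subproblem is always expanded next.
    """
    n = len(values)
    # Sort items by value density so the fractional bound is valid.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(level, value, weight):
        # Optimistic bound: greedily fill remaining capacity, allowing a fraction.
        b, cap = value, capacity - weight
        for i in range(level, n):
            if w[i] <= cap:
                cap -= w[i]
                b += v[i]
            else:
                return b + v[i] * cap / w[i]
        return b

    best = 0
    heap = [(-bound(0, 0, 0), 0, 0, 0)]   # (-bound, level, value, weight)
    expanded = 0
    while heap:
        neg_b, level, value, weight = heapq.heappop(heap)
        if -neg_b <= best:                # bound cannot beat the incumbent: prune
            continue
        if level == n:                    # complete assignment: update incumbent
            best = max(best, value)
            continue
        expanded += 1                     # this subproblem is actually branched on
        # Branch: include item `level` (if it fits), then exclude it.
        if weight + w[level] <= capacity:
            child = (level + 1, value + v[level], weight + w[level])
            heapq.heappush(heap, (-bound(*child), *child))
        child = (level + 1, value, weight)
        heapq.heappush(heap, (-bound(*child), *child))
    return best, expanded

if __name__ == "__main__":
    print(knapsack_best_first([60, 100, 120], [10, 20, 30], 50))  # optimum 220
```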

2.
Weighted heuristic search (best-first or depth-first) refers to search with a heuristic function multiplied by a constant w [31]. The paper shows, for the first time, that for optimization queries in graphical models the weighted heuristic best-first and weighted heuristic depth-first branch and bound search schemes are competitive energy-minimization anytime optimization algorithms. Weighted heuristic best-first schemes were investigated for path-finding tasks. However, their potential for graphical models was ignored, possibly because of their memory costs and because the alternative depth-first branch and bound seemed very appropriate for bounded depth. The weighted heuristic depth-first search has not been studied for graphical models. We report on a significant empirical evaluation, demonstrating the potential of both weighted heuristic best-first search and weighted heuristic depth-first branch and bound algorithms as approximation anytime schemes (that have sub-optimality bounds) and compare against one of the best depth-first branch and bound solvers to date.
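The w-weighted best-first scheme referred to above can be sketched generically as weighted A*, where the evaluation function is f(n) = g(n) + w·h(n). The Python sketch below runs on a toy grid and is only an illustration under that assumption, not the graphical-model schemes evaluated in the paper.

```python
import heapq

def weighted_astar(start, goal, neighbors, h, w=1.5):
    """Weighted heuristic best-first search: f(n) = g(n) + w * h(n), w >= 1.

    With a consistent (hence admissible) h and no node re-expansion, the
    returned cost is at most w times optimal, which is the kind of
    sub-optimality bound the abstract mentions.
    """
    g = {start: 0.0}
    parent = {start: None}
    open_heap = [(w * h(start), start)]
    closed = set()
    while open_heap:
        f, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return g[goal], path[::-1]
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in neighbors(node):
            new_g = g[node] + cost
            if new_g < g.get(nbr, float("inf")):
                g[nbr] = new_g
                parent[nbr] = node
                heapq.heappush(open_heap, (new_g + w * h(nbr), nbr))
    return float("inf"), []

if __name__ == "__main__":
    # Toy 4-connected grid with unit step costs and a Manhattan-distance heuristic.
    def nbrs(p):
        x, y = p
        return [((x + dx, y + dy), 1.0)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < 10 and 0 <= y + dy < 10]
    manhattan = lambda p: abs(p[0] - 9) + abs(p[1] - 9)
    cost, path = weighted_astar((0, 0), (9, 9), nbrs, manhattan, w=2.0)
    print(cost, len(path))
```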

3.
We propose and evaluate a parallel “decomposite best-first” search branch-and-bound algorithm (dbs) for MIN-based multiprocessor systems. We start with a new probabilistic model to estimate the number of evaluated nodes for a serial best-first search branch-and-bound algorithm. This analysis is used in predicting the parallel algorithm speedup. The proposed algorithm initially decomposes a problem into N subproblems, where N is the number of processors available in a multiprocessor. Afterwards, each processor executes the serial best-first search to find a local feasible solution. Local solutions are broadcast through the network to compute the final solution. A conflict-free mapping scheme, known as the step-by-step spread, is used for subproblem distribution on the MIN. A speedup expression for the parallel algorithm is then derived using the serial best-first search node evaluation model. Our analysis considers both computation and communication overheads in order to provide realistic speedup estimates. Communication modeling is also extended to the parallel global best-first search technique. All the analytical results are validated via simulation. For large systems, when communication overhead is taken into consideration, the parallel decomposite best-first search algorithm provides better speedup than other reported schemes.

4.
Similarity searching often reduces to finding the k nearest neighbors to a query object. Finding the k nearest neighbors is achieved by applying either a depth-first or a best-first algorithm to the search hierarchy containing the data. These algorithms are generally applicable to any index based on hierarchical clustering: the data is partitioned into clusters which are aggregated to form other clusters, with the total aggregation represented as a tree. These algorithms have traditionally used a lower bound corresponding to the minimum distance at which a nearest neighbor can be found (termed MinDist) to prune the search process, avoiding the processing of clusters as well as individual objects when they can be shown to be farther from the query object q than all of the current k nearest neighbors of q. An alternative pruning technique is described that uses an upper bound corresponding to the maximum possible distance at which a nearest neighbor is guaranteed to be found (termed MaxNearestDist). The MaxNearestDist upper bound is adapted to enable its use for finding the k nearest neighbors instead of just the nearest neighbor (i.e., k=1) as in its previous uses. Both the depth-first and best-first k-nearest neighbor algorithms are modified to use MaxNearestDist, which is shown to enhance both algorithms by overcoming their shortcomings. In particular, for the depth-first algorithm, the number of clusters in the search hierarchy that must be examined is not increased, potentially lowering its execution time, while for the best-first algorithm, the number of clusters that must be retained in the priority queue used to control the order in which clusters are processed is likewise not increased, potentially lowering its storage requirements.
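For concreteness, here is a hedged Python sketch of the classical best-first k-nearest-neighbor search over a hierarchical cluster index, using only the MinDist lower bound for pruning. The simple two-level ball index, the Node class, and the helper names are assumptions for illustration; the MaxNearestDist refinement described in the abstract is not reproduced.

```python
import heapq, math, random

class Node:
    """A cluster in a simple hierarchical index: a bounding ball plus either
    child clusters or the data points themselves (leaf)."""
    def __init__(self, center, radius, children=None, points=None):
        self.center, self.radius = center, radius
        self.children, self.points = children or [], points or []

def dist(a, b):
    return math.dist(a, b)

def min_dist(q, node):
    # Lower bound on the distance from q to anything inside the cluster.
    return max(0.0, dist(q, node.center) - node.radius)

def knn_best_first(root, q, k):
    """Best-first k-NN: a priority queue of clusters ordered by MinDist.
    A cluster is pruned when its MinDist exceeds the current k-th distance."""
    heap = [(min_dist(q, root), id(root), root)]
    result = []                       # max-heap of (-distance, point)
    while heap:
        lb, _, node = heapq.heappop(heap)
        if len(result) == k and lb >= -result[0][0]:
            break                     # nothing left can improve the k-th neighbor
        for p in node.points:
            d = dist(q, p)
            if len(result) < k:
                heapq.heappush(result, (-d, p))
            elif d < -result[0][0]:
                heapq.heapreplace(result, (-d, p))
        for child in node.children:
            heapq.heappush(heap, (min_dist(q, child), id(child), child))
    return sorted((-d, p) for d, p in result)

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(200)]
    # Crude two-level hierarchy: four quadrant clusters under one root.
    quads = {}
    for p in pts:
        quads.setdefault((p[0] < 0.5, p[1] < 0.5), []).append(p)
    children = []
    for group in quads.values():
        cx = sum(p[0] for p in group) / len(group)
        cy = sum(p[1] for p in group) / len(group)
        r = max(dist((cx, cy), p) for p in group)
        children.append(Node((cx, cy), r, points=group))
    root = Node((0.5, 0.5), max(dist((0.5, 0.5), p) for p in pts), children=children)
    print(knn_best_first(root, (0.3, 0.7), 3))
```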

5.
The memory requirements of best-first graph search algorithms such as A* often prevent them from solving large problems. The best-known approach for coping with this issue is iterative deepening, which performs a series of bounded depth-first searches. Unfortunately, iterative deepening only performs well when successive cost bounds visit a geometrically increasing number of nodes. While it happens to work acceptably for the classic sliding tile puzzle, IDA* fails for many other domains. In this paper, we present an algorithm that adaptively chooses appropriate cost bounds on-line during search. During each iteration, it learns a model of the search tree that helps it to predict the bound to use next. Our search tree model has three main benefits over previous approaches: (1) it will work in domains with real-valued heuristic estimates, (2) it can be trained on-line, and (3) it is able to make more accurate predictions with only a small number of training examples. We demonstrate the power of our improved model by using it to control an iterative-deepening A* search on-line. While our technique has more overhead than previous methods for controlling iterative-deepening A*, it can give more robust performance by using its experience to accurately double the amount of search effort between iterations.

6.
The best-first search algorithm A* allows search graphs that are trees, directed acyclic graphs or directed graphs with cycles. In real-life applications of A* the search graph is generally implemented as a tree. It is shown here that for certain well-known one-machine job sequencing problems that arise in job shops, graph search is much faster than best-first tree search when problem instances are of small and medium size. Moreover, graph search uses less memory and so is able to solve larger problems. Depth-first search needs little memory, and is therefore capable in principle of solving problems of arbitrary size, but is slower than graph search by orders of magnitude for the examples that were studied.
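The tree-search versus graph-search distinction above comes down to whether expanded states are remembered so that duplicate paths are detected. The Python sketch below is a hedged toy illustration on a small grid (not the job-sequencing problems studied in the paper): the same A* routine is run with and without a closed list, and the expansion counts show why duplicate detection pays off when many paths converge on the same states.

```python
import heapq, itertools

def astar(start, goal, neighbors, h, graph_search=True):
    """A* where `graph_search=True` keeps a closed set (graph search) and
    `graph_search=False` re-expands duplicates (tree search).  On search
    spaces with many converging paths, tree search expands far more nodes."""
    counter = itertools.count()
    open_heap = [(h(start), next(counter), 0.0, start)]
    closed = {}
    expanded = 0
    while open_heap:
        f, _, g, node = heapq.heappop(open_heap)
        if graph_search:
            if node in closed and closed[node] <= g:
                continue              # already expanded with an equal or better cost
            closed[node] = g
        expanded += 1
        if node == goal:
            return g, expanded
        for nbr, cost in neighbors(node):
            ng = g + cost
            heapq.heappush(open_heap, (ng + h(nbr), next(counter), ng, nbr))
    return float("inf"), expanded

if __name__ == "__main__":
    # 6x6 grid, moves right or down only: many equal-cost paths reach each cell.
    def nbrs(p):
        x, y = p
        return [((x + dx, y + dy), 1.0) for dx, dy in ((1, 0), (0, 1))
                if x + dx < 6 and y + dy < 6]
    h = lambda p: (5 - p[0]) + (5 - p[1])
    print("graph search:", astar((0, 0), (5, 5), nbrs, h, graph_search=True))
    print("tree search: ", astar((0, 0), (5, 5), nbrs, h, graph_search=False))
```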

7.
Enhanced iterative-deepening search
Iterative-deepening searches mimic a breadth-first node expansion with a series of depth-first searches that operate with successively extended search horizons. They have been proposed as a simple way to reduce the space complexity of best-first searches like A* from exponential to linear in the search depth. But there is more to iterative deepening than just a reduction of storage space. As the authors show, the search efficiency can be greatly improved by exploiting previously gained node information. The information management techniques considered here owe much to their counterparts from the domain of two-player games, namely the use of fast-execution memory functions to guide the search. The authors' methods not only save node expansions, but are also faster and easier to implement than previous proposals.
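As a baseline for the enhancements described above, here is a hedged Python sketch of plain iterative-deepening A* (IDA*): repeated depth-first searches under a growing f = g + h bound. The toy graph and the zero heuristic are illustrative assumptions; the memory-function enhancements from the paper are not reproduced.

```python
def ida_star(start, goal, neighbors, h):
    """Plain iterative-deepening A*: repeated depth-first searches under an
    f = g + h cost bound, which grows each iteration to the smallest f value
    that exceeded the previous bound.  Memory use is linear in search depth."""
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None            # report the pruned f value upward
        if node == goal:
            return g, list(path)      # solution found: return its cost and path
        next_bound = float("inf")
        for nbr, cost in neighbors(node):
            if nbr in path:           # avoid cycles along the current path
                continue
            path.append(nbr)
            t, sol = dfs(nbr, g + cost, bound, path)
            path.pop()
            if sol is not None:
                return t, sol
            next_bound = min(next_bound, t)
        return next_bound, None

    bound = h(start)
    while True:
        t, sol = dfs(start, 0.0, bound, [start])
        if sol is not None:
            return t, sol             # (solution cost, solution path)
        if t == float("inf"):
            return None, None         # goal unreachable
        bound = t                     # next bound: smallest f that was pruned

if __name__ == "__main__":
    # Toy weighted digraph with a zero (trivially admissible) heuristic.
    edges = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
             "C": [("D", 1)], "D": []}
    print(ida_star("A", "D", lambda n: edges[n], lambda n: 0))
```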

8.
Many real-world problems involve several, usually conflicting, objectives. Multiobjective analysis deals with these problems by locating trade-offs between different optimal solutions. For graph search problems, several algorithms based on best-first and depth-first approaches have been proposed to return the set of all Pareto-optimal solutions. This article presents a detailed comparison between two representatives of multiobjective depth-first algorithms, PIDMOA* and MO-DF-BnB. Both of them extend previous single-objective search algorithms with linear-space requirements to the multiobjective case. Experimental analyses of their time performance over tree-shaped search spaces are presented. The results clarify how well each algorithm copes with parameters such as the number and depth of goal nodes.

9.
武继刚  计永昶  陈国良 《软件学报》2000,11(12):1572-1580
Branch-and-bound is one of the techniques for solving combinatorial optimization problems and is widely used in operations research and combinatorial mathematics. For the general best-first parallel branch-and-bound algorithm on shared memory, a lower bound of Ω(m/p + h log p) on the running time is given, where p is the number of available processors, h is the number of expanded nodes, and m is the number of live nodes in the state space. By organizing the shared memory as p cubic heaps (立体堆), a new general parallel branch-and-bound algorithm on the PRAM-EREW is proposed, and it is proved theoretically that for h < p·2^p this algorithm is the fastest and asymptotically optimal parallel branch-and-bound algorithm. Finally, simulation results are reported for the 0-1 knapsack problem.

10.
The paper presents a new proof-number (PN) search algorithm, called PDS–PN. It is a two-level search (like PN²), which performs at the first level a depth-first proof-number and disproof-number search (PDS), and at the second level a best-first PN search. Hence, PDS–PN selectively exploits the power of both PN² and PDS. Experiments in the domain of Lines of Action are performed. They show that within an acceptable time frame PDS–PN is more effective for really hard endgame positions than αβ and any other PN-search algorithm.

11.
Recursive conditioning, an exact inference algorithm for Bayesian networks, is an any-space algorithm: the number of intermediate results it caches is arbitrary, so the amount of memory it occupies can be varied. When memory is limited but computation time should still be saved, deciding which intermediate results to cache becomes the key problem. To address it, this paper applies depth-first branch and bound to enumerate all possible caching configurations, computes the corresponding inference time for each via a formula for inference time under a given amount of memory, and thereby constructs a time-space curve for Bayesian network inference. From this curve an optimal discrete caching strategy is identified, giving the best trade-off point between time and space and obtaining the largest memory savings at the smallest cost in time.

12.
We model a deterministic parallel program by a directed acyclic graph of tasks, where a task can execute as soon as all tasks preceding it have been executed. Each task can allocate or release an arbitrary amount of memory (i.e., heap memory allocation can be modeled). We call a parallel schedule “space efficient” if the amount of memory required is at most equal to the number of processors times the amount of memory required for some depth-first execution of the program by a single processor. We describe a simple, locally depth-first scheduling algorithm and show that it is always space efficient. Since the scheduling algorithm is greedy, it will be within a factor of two of being optimal with respect to time. For the special case of a program having a series-parallel structure, we show how to efficiently compute the worst-case memory requirements over all possible depth-first executions of a program. Finally, we show how scheduling can be decentralized, making the approach scalable to a large number of processors when there is sufficient parallelism.
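The serial baseline that the "space efficient" definition above refers to can be made concrete with a small Python sketch that simulates one depth-first execution of a task DAG and tracks its peak memory. The tree-shaped example DAG, the per-task memory deltas, and the function name are illustrative assumptions, not the paper's scheduling algorithm.

```python
def depth_first_peak_memory(dag, mem_delta, root):
    """Peak memory of one serial depth-first execution of a task DAG.

    `dag[t]` lists the child tasks spawned by task t, and `mem_delta[t]` is
    the net memory the task allocates (positive) or releases (negative).
    For simplicity the spawn structure here is tree-shaped, so a plain DFS
    visits every task after its only predecessor and is a valid execution order.
    """
    peak = current = 0
    stack = [root]
    visited = set()
    while stack:
        task = stack.pop()
        if task in visited:
            continue
        visited.add(task)
        current += mem_delta[task]
        peak = max(peak, current)
        # Push children in reverse so the leftmost child runs first,
        # matching a left-to-right depth-first serial execution.
        for child in reversed(dag[task]):
            stack.append(child)
    return peak

if __name__ == "__main__":
    dag = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}
    mem = {"root": 4, "a": 8, "a1": -2, "a2": -6, "b": -4}
    print(depth_first_peak_memory(dag, mem, "root"))   # prints 12
```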

13.
Seven algorithms used to search for solutions in dynamic planning and execution problems are compared. The specific problem is endgame moves for the board game RISK. This paper concentrates on a comparison of search methods for the best plan using a fixed evaluation function, fixed time to plan, and randomly generated situations that correspond to endgames in RISK with eight remaining players. The search strategies compared are depth-first, breadth-first, best-first, random walk, gradient ascent, simulated annealing, and evolutionary computation. The approaches are compared for each example based on the number of opponents eliminated, plan completion probability, and the value of the ending position (if the moves do not complete the game). Simulation results indicate that the evolutionary approach is superior to the other methods in 85% of the cases considered. Among the other algorithms, simulated annealing is the most suitable for this problem.

14.
Consider the situation where an intelligent agent accepts as input a complete plan, i.e., a sequence of states (or operators) that should be followed in order to achieve a goal. For some reason, the given plan cannot be implemented by the agent, who then goes about trying to find an alternative plan that is as close as possible to the original. To achieve this, a search algorithm that will find similar alternative plans is required, as well as some sort of comparison function that will determine which alternative plan is closest to the original. In this paper, we define a number of distance metrics between plans, and characterize these functions and their respective attributes and advantages. We then develop a general algorithm based on best-first search that helps an agent efficiently find the most suitable alternative plan. We also propose a number of heuristics for the cost function of this best-first search algorithm. To explore the generality of our idea, we provide three different problem domains where our approach is applicable: physical roadmap path finding, the blocks world, and task scheduling. Experimental results on these various domains support the efficiency of our algorithm for finding close alternative plans. A preliminary version of this paper appeared in the International Joint Conference on Autonomous Agents and Multiagent Systems (8).

15.
T. Matsui 《Algorithmica》1997,18(4):530-543
In this paper we propose an algorithm for generating all the spanning trees in undirected graphs. The algorithm requires O(n + m + τn) time, where the given graph has n vertices, m edges, and τ spanning trees. For outputting all the spanning trees explicitly, this time complexity is optimal. Our algorithm follows a special rooted tree structure on the skeleton graph of the spanning tree polytope. The rule by which the rooted tree structure is traversed is irrelevant to the time complexity; in this sense, our algorithm is flexible. If we employ the depth-first search rule, we can reduce the memory requirement to O(n + m). A breadth-first implementation requires as much as O(m + τn) space, but when a parallel computer is available, this might have an advantage. When a given graph is weighted, the best-first search rule provides a ranking algorithm for the minimum spanning tree problem. The ranking algorithm requires O(n + m + τn) time and O(m + τn) space once a minimum spanning tree is given. Received January 21, 1995; revised February 19, 1996.
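To make the branching structure concrete, here is a hedged, naive Python enumeration of all spanning trees that includes or excludes each edge in turn. The graph representation and helper names are assumptions for illustration; this brute-force sketch is not the O(n + m + τn) algorithm of the paper.

```python
def all_spanning_trees(n, edges):
    """Naive enumeration of all spanning trees of an undirected graph on
    vertices 0..n-1.  Branch on each edge: include it (unless it would close
    a cycle) or exclude it (unless that disconnects the graph).  Each tree
    is produced exactly once.  Illustrative only, not the optimal algorithm
    analyzed in the paper."""

    def connected(active):
        adj = {v: [] for v in range(n)}
        for u, v in active:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {0}, [0]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n

    def find(parent, x):
        while parent[x] != x:
            x = parent[x]
        return x

    trees = []

    def branch(i, chosen, parent):
        if len(chosen) == n - 1:          # n-1 acyclic edges form a spanning tree
            trees.append(list(chosen))
            return
        if i == len(edges):
            return
        u, v = edges[i]
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                      # include edges[i]: no cycle is created
            merged = [rv if find(parent, x) == ru else parent[x] for x in range(n)]
            branch(i + 1, chosen + [edges[i]], merged)
        if connected(chosen + edges[i + 1:]):   # exclude edges[i] only if still feasible
            branch(i + 1, chosen, parent)

    branch(0, [], list(range(n)))
    return trees

if __name__ == "__main__":
    # A 4-cycle with one chord has 8 spanning trees.
    print(len(all_spanning_trees(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])))
```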

16.
A general graph matching algorithm based on depth-first search
For the matching problem in general graphs, Edmonds' algorithm, based on Berge's theorem, uses breadth-first search to find augmenting paths, and the graph may contain "blossoms". When this happens, the blossom must be contracted before the search continues, and once an augmenting path is found the contracted graph must be restored, which makes the algorithm complicated. The algorithm of Gabow et al. numbers the vertices and edges of the graph in advance and uses additional arrays and virtual vertices to avoid handling blossoms; its time complexity is O(n^3), but its space complexity increases. The depth-first-search-based algorithm proposed in this paper never encounters blossoms while searching for augmenting paths, so it is comparatively simple, and its time complexity is O(n·degree(n)), where degree(n) is the average vertex degree. Moreover, when edges are dynamically added to or removed from the graph, the algorithm can quickly adjust the maximum matching; its space complexity remains of the same order, and the approach can also be extended to breadth-first search.
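As a simpler, hedged companion to the general-graph algorithm described above, the following Python sketch shows depth-first augmenting-path search in the bipartite case (Kuhn's algorithm), where the blossoms that complicate general graphs cannot arise. The example data and function names are illustrative assumptions, not the paper's algorithm.

```python
def max_bipartite_matching(n_left, n_right, adj):
    """Maximum bipartite matching via depth-first augmenting paths (Kuhn's
    algorithm).  `adj[u]` lists right-side vertices adjacent to left vertex u.
    The general-graph case discussed in the abstract additionally has to deal
    with odd cycles ("blossoms"); bipartite graphs have none, so a plain DFS
    for an augmenting path suffices."""
    match_right = [-1] * n_right          # match_right[v] = left vertex matched to v

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matching = 0
    for u in range(n_left):
        if try_augment(u, set()):
            matching += 1
    return matching, match_right

if __name__ == "__main__":
    # 3 workers x 3 tasks; a perfect matching of size 3 exists.
    print(max_bipartite_matching(3, 3, {0: [0, 1], 1: [0], 2: [1, 2]}))
```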

17.
《Artificial Intelligence》2006,170(8-9):714-738
Branch-and-bound and branch-and-cut use search trees to identify optimal solutions to combinatorial optimization problems. In this paper, we introduce an iterative search strategy which we refer to as cut-and-solve and prove optimality and termination for this method. This search is different from traditional tree search as there is no branching. At each node in the search path, a relaxed problem and a sparse problem are solved and a constraint is added to the relaxed problem. The sparse problems provide incumbent solutions. When the constraining of the relaxed problem becomes tight enough, its solution value becomes no better than the incumbent solution value. At this point, the incumbent solution is declared to be optimal. This strategy is easily adapted to be an anytime algorithm as an incumbent solution is found at the root node and continuously updated during the search. Cut-and-solve enjoys two favorable properties. Since there is no branching, there are no “wrong” subtrees in which the search may get lost. Furthermore, its memory requirement is negligible. For these reasons, it has potential for problems that are difficult to solve using depth-first or best-first search tree methods. In this paper, we demonstrate the cut-and-solve strategy by implementing a generic version of it for the Asymmetric Traveling Salesman Problem (ATSP). Our unoptimized implementation outperformed state-of-the-art solvers for five out of seven real-world problem classes of the ATSP. For four of these classes, cut-and-solve was able to solve larger (sometimes substantially larger) problems. Our code is available at our websites.

18.
Keld Helsgaun 《Software》1995,25(8):905-934
Backtrack programming is such a powerful technique for problem solving that a number of languages, especially in the area of artificial intelligence, have built-in facilities for backtrack programming. This paper describes CBack, a simple, but general tool for backtrack programming in the programming language C. The use of the tool is illustrated through examples of a tutorial nature. In addition to the usual depth-first search strategy, CBack provides for the more general heuristic best-first search strategy. The implementation of CBack is described in detail. The source code, shown in its full length, is entirely written in ANSI C and highly portable across diverse computer architectures and C compilers.
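The depth-first versus heuristic best-first distinction that CBack offers can be illustrated generically. The Python sketch below is not CBack's C interface; the 4-queens example and all names are assumptions for illustration. It drives the same choice-point search with either a stack or a priority queue.

```python
import heapq

def search(initial, choices, is_goal, score=None, best_first=False):
    """Generic choice-point search.  `choices(state)` yields successor states;
    with best_first=False the frontier is a stack (depth-first backtracking),
    with best_first=True it is a priority queue ordered by `score(state)`."""
    if best_first:
        frontier = [(score(initial), 0, initial)]
        push = lambda s, c: heapq.heappush(frontier, (score(s), c, s))
        pop = lambda: heapq.heappop(frontier)[2]
    else:
        frontier = [initial]
        push = lambda s, c: frontier.append(s)
        pop = frontier.pop
    counter = 0
    while frontier:
        state = pop()
        if is_goal(state):
            return state
        for succ in choices(state):
            counter += 1              # unique tie-breaker for the priority queue
            push(succ, counter)
    return None

if __name__ == "__main__":
    # Tiny example: place 4 non-attacking queens; states are tuples of columns.
    def choices(cols):
        row = len(cols)
        return [cols + (c,) for c in range(4)
                if all(c != pc and abs(c - pc) != row - r
                       for r, pc in enumerate(cols))]
    print(search((), choices, lambda s: len(s) == 4))                 # depth-first
    print(search((), choices, lambda s: len(s) == 4,
                 score=lambda s: -len(s), best_first=True))           # best-first
```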

19.
An improved SPIHT image coding method
The SPIHT algorithm for wavelet image compression is improved. The improved algorithm no longer uses linked lists; instead it uses two simple bitmaps, so its memory usage is the same as that of the LZC algorithm and it is easy to implement in hardware. The improved algorithm overcomes the drawback of LZC's depth-first search and keeps the same breadth-first search strategy as SPIHT. At the same compression ratio, its reconstruction peak signal-to-noise ratio (PSNR) is about 0.7 dB higher than that of LZC and comparable to that of SPIHT.

20.
Frontier search is a best-first graph search technique that allows significant memory savings over previous best-first algorithms. The fundamental idea is to remove from memory already explored nodes, keeping only open nodes in the search frontier. However, once the goal node is reached, additional techniques are needed to recover the solution path. This paper describes and analyzes a path recovery procedure for frontier search applied to multiobjective shortest path problems. Differences with the scalar case are outlined, and performance is evaluated over a random problem set.
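A hedged sketch of the frontier idea for the simplest setting, breadth-first search on an unweighted undirected graph, is shown below: only the current and previous layers are kept in memory, and only the distance is returned, since recovering the path after closed nodes have been discarded is exactly what the paper's recovery procedure addresses. The grid example and function names are illustrative assumptions.

```python
def frontier_bfs_distance(start, goal, neighbors):
    """Frontier breadth-first search on an undirected, unweighted graph: only
    the current and previous layers are kept, so already-explored interior
    nodes are dropped from memory.  Keeping the previous layer is enough to
    avoid regenerating parents in the undirected case.  Only the distance is
    returned; path recovery without a closed list is the harder problem."""
    if start == goal:
        return 0
    previous, current = set(), {start}
    depth = 0
    while current:
        depth += 1
        nxt = set()
        for node in current:
            for nbr in neighbors(node):
                if nbr in previous or nbr in current:
                    continue          # would regenerate an already-expanded node
                if nbr == goal:
                    return depth
                nxt.add(nbr)
        previous, current = current, nxt
    return None                       # goal unreachable

if __name__ == "__main__":
    # 20x20 grid, 4-connected.
    def nbrs(p):
        x, y = p
        return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < 20 and 0 <= y + dy < 20]
    print(frontier_bfs_distance((0, 0), (19, 19), nbrs))   # prints 38
```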
