Similar Documents
20 similar documents found (search time: 46 ms)
1.
Branch-and-bound algorithms in a system with a two-level memory hierarchy were evaluated. An efficient implementation depends on the disparities in the numbers of subproblems expanded by depth-first and best-first searches, as well as on the relative speeds of the main and secondary memories. A best-first search should be used when it expands far fewer subproblems than a depth-first search and the secondary memory is relatively slow. In contrast, a depth-first search should be used when the number of expanded subproblems is close to that of a best-first search. The choice is not as clear for cases in between; these cases are studied. Two strategies are proposed and analyzed: a specialized virtual-memory system that matches the architectural design to the characteristics of the existing algorithm, and a modified branch-and-bound algorithm that can be tuned to the characteristics of the problem and the architecture. The latter strategy illustrates that designing a better algorithm is sometimes more effective than tuning the architecture alone.
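To make the depth-first/best-first trade-off concrete, here is a minimal, self-contained sketch (toy 0/1-knapsack data, not taken from the paper): both variants share the same bounding function and differ only in the frontier discipline, a LIFO stack for depth-first versus a max-heap on the upper bound for best-first, so their expanded-node counts can be compared directly.

```python
# Hypothetical sketch: depth-first vs. best-first branch-and-bound on a tiny 0/1 knapsack.
import heapq

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight), assumed toy data
capacity = 50

def upper_bound(idx, value, weight):
    """Fractional-knapsack bound on the best value reachable from this node."""
    for v, w in items[idx:]:
        if weight + w <= capacity:
            value, weight = value + v, weight + w
        else:
            return value + v * (capacity - weight) / w
    return value

def knapsack_bnb(best_first):
    best, expanded = 0, 0
    root = (0, 0, 0)                        # (next item index, value so far, weight so far)
    frontier = [(-upper_bound(*root), root)] if best_first else [root]
    while frontier:
        node = heapq.heappop(frontier)[1] if best_first else frontier.pop()
        idx, value, weight = node
        expanded += 1
        if upper_bound(idx, value, weight) <= best:
            continue                        # prune: cannot beat the incumbent
        best = max(best, value)
        if idx == len(items):
            continue
        v, w = items[idx]
        children = [(idx + 1, value, weight)]                      # skip item idx
        if weight + w <= capacity:
            children.append((idx + 1, value + v, weight + w))      # take item idx
        for child in children:
            if best_first:
                heapq.heappush(frontier, (-upper_bound(*child), child))
            else:
                frontier.append(child)
    return best, expanded

print("depth-first:", knapsack_bnb(best_first=False))
print("best-first: ", knapsack_bnb(best_first=True))
```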

2.
We propose and evaluate a parallel “decomposite best-first” search branch-and-bound algorithm (dbs) for MIN-based multiprocessor systems. We start with a new probabilistic model to estimate the number of evaluated nodes for a serial best-first search branch-and-bound algorithm. This analysis is used to predict the parallel algorithm's speedup. The proposed algorithm initially decomposes a problem into N subproblems, where N is the number of processors available in the multiprocessor. Afterwards, each processor executes the serial best-first search to find a local feasible solution. Local solutions are broadcast through the network to compute the final solution. A conflict-free mapping scheme, known as the step-by-step spread, is used for subproblem distribution on the MIN. A speedup expression for the parallel algorithm is then derived using the serial best-first search node-evaluation model. Our analysis considers both computation and communication overheads to provide realistic speedup estimates. The communication modeling is also extended to the parallel global best-first search technique. All analytical results are validated via simulation. For large systems, when communication overhead is taken into consideration, the parallel decomposite best-first search algorithm is observed to provide better speedup than other reported schemes.

3.
Artificial Intelligence, 2006, 170(4-5): 385-408
Recent work shows that the memory requirements of A* and related graph-search algorithms can be reduced substantially by storing only nodes that are on or near the search frontier, using special techniques to prevent node regeneration, and recovering the solution path by a divide-and-conquer technique. When this approach is used to solve graph-search problems with unit edge costs, we show that a breadth-first search strategy can be more memory-efficient than a best-first strategy. We also show that a breadth-first strategy allows a technique for preventing node regeneration that is easier to implement and can be applied more widely. The breadth-first heuristic search algorithms introduced in this paper include a memory-efficient implementation of breadth-first branch-and-bound search and a breadth-first iterative-deepening A* algorithm that is based on it. Computational results show that they outperform other systematic search algorithms in solving a range of challenging graph-search problems.
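A minimal sketch of the breadth-first branch-and-bound idea on a unit-cost problem (a toy grid, not one of the paper's benchmarks): nodes are expanded layer by layer, and any node whose g + h exceeds a known upper bound is pruned. The divide-and-conquer solution-path recovery described in the abstract is omitted here.

```python
# Breadth-first branch-and-bound on a unit-cost grid (illustrative only).
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def bfbnb(start, goal, blocked, width, height, upper_bound):
    g = 0
    layer = {start}
    seen = {start}                      # a frontier-only variant would prune this set back
    while layer:
        if goal in layer:
            return g
        nxt = set()
        for (x, y) in layer:
            for child in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = child
                if not (0 <= nx < width and 0 <= ny < height) or child in blocked:
                    continue
                if child in seen:
                    continue            # duplicate detection
                if g + 1 + manhattan(child, goal) > upper_bound:
                    continue            # branch-and-bound pruning on f = g + h
                seen.add(child)
                nxt.add(child)
        layer, g = nxt, g + 1
    return None

blocked = {(1, 1), (1, 2), (1, 3)}      # toy obstacle wall
print(bfbnb((0, 0), (3, 3), blocked, 5, 5, upper_bound=8))   # expected cost: 6
```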

4.
武继刚, 计永昶, 陈国良. 《软件学报》 (Journal of Software), 2000, 11(12): 1572-1580
Branch-and-bound is one of the standard techniques for solving combinatorial optimization problems and is widely applied in operations research and combinatorics. For general best-first parallel branch-and-bound algorithms on shared memory, a running-time lower bound of Ω(m/p + h log p) is given, where p is the number of available processors, h is the number of expanded nodes, and m is the number of live nodes in the state space. By organizing the shared memory as p cube heaps, a new general parallel branch-and-bound algorithm on the PRAM-EREW model is proposed, and it is proven theoretically that for h < p·2^p this algorithm is the fastest and asymptotically optimal parallel branch-and-bound algorithm. Finally, simulation results are given for the 0-1 knapsack problem.

5.
Weighted heuristic search (best-first or depth-first) refers to search with a heuristic function multiplied by a constant w [31]. The paper shows, for the first time, that for optimization queries in graphical models the weighted heuristic best-first and weighted heuristic depth-first branch-and-bound search schemes are competitive anytime energy-minimization optimization algorithms. Weighted heuristic best-first schemes have been investigated for path-finding tasks. However, their potential for graphical models was ignored, possibly because of their memory costs and because the alternative, depth-first branch and bound, seemed very appropriate for bounded depth. Weighted heuristic depth-first search has not been studied for graphical models. We report on a significant empirical evaluation, demonstrating the potential of both weighted heuristic best-first search and weighted heuristic depth-first branch-and-bound algorithms as anytime approximation schemes (with sub-optimality bounds), and compare against one of the best depth-first branch-and-bound solvers to date.
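The following short illustration (an assumed toy grid setup, not from the paper) shows weighted heuristic best-first search with f(n) = g(n) + w * h(n). With w = 1 this is plain A*; larger w typically expands fewer nodes but only guarantees a solution within a factor w of optimal.

```python
# Weighted best-first (weighted A*) search on a unit-cost grid, varying the weight w.
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def weighted_best_first(start, goal, blocked, size, w):
    open_heap = [(w * manhattan(start, goal), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g, expanded
        if g > best_g.get(node, float("inf")):
            continue                        # stale heap entry
        expanded += 1
        x, y = node
        for child in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            cx, cy = child
            if not (0 <= cx < size and 0 <= cy < size) or child in blocked:
                continue
            ng = g + 1
            if ng < best_g.get(child, float("inf")):
                best_g[child] = ng
                heapq.heappush(open_heap, (ng + w * manhattan(child, goal), ng, child))
    return None, expanded

blocked = {(2, y) for y in range(1, 9)}     # toy wall with gaps at the top and bottom rows
for w in (1.0, 2.0, 5.0):
    cost, expanded = weighted_best_first((0, 5), (9, 5), blocked, 10, w)
    print(f"w={w}: path cost {cost}, nodes expanded {expanded}")
```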

6.
Frontier search is a best-first graph-search technique that allows significant memory savings over previous best-first algorithms. The fundamental idea is to remove already explored nodes from memory, keeping only the open nodes in the search frontier. However, once the goal node is reached, additional techniques are needed to recover the solution path. This paper describes and analyzes a path-recovery procedure for frontier search applied to multiobjective shortest path problems. Differences from the scalar case are outlined, and performance is evaluated over a random problem set.

7.
Efficient processing of distance-based queries (DBQs) is of great importance in spatial databases due to the wide range of applications that may pose such queries. The most representative and well-known DBQs are the K Nearest Neighbors Query (KNNQ), ρ Distance Range Query (ρDRQ), K Closest Pairs Query (KCPQ) and ρ Distance Join Query (ρDJQ). In this paper, we propose new pruning mechanisms and apply them in the design of new Recursive Best-First Search (RBFS) algorithms for DBQs between spatial objects indexed in R-trees. RBFS is a general search algorithm that runs in linear space and expands nodes in best-first order, but it can suffer from node re-expansion overhead (i.e., to expand nodes in best-first order, some nodes may be considered more than once). The R-tree and its variants are commonly cited spatial access methods that can be used for answering such spatial queries. Moreover, an exhaustive experimental study using R-trees is included, which leads to several conclusions about the efficiency of the proposed RBFS algorithm in comparison with other search algorithms (Best-First Search (BFS) and Depth-First Branch-and-Bound (DFBnB)), in terms of disk accesses, response time and main-memory requirements, taking into account several important parameters such as the maximum branching factor (Cmax), the cardinality of the final query result (K), the distance threshold (ρ) and the size of a global LRU buffer (B). In general, RBFS is competitive for KNNQ and KCPQ when the maximum branching factor (Cmax) is large enough (even better than DFBnB and very close to BFS), and it is a good alternative when main memory is limited because of a heavy processing load on the system, since its space consumption is linear in the height of the R-trees. Nevertheless, RBFS is the worst alternative for ρDRQ and ρDJQ. DFBnB is also a linear-space algorithm; it shows the same behavior as BFS for ρDRQ and ρDJQ, and it is the best when an LRU buffer is included. Finally, we verified experimentally that BFS is the best for all DBQs, but it can consume large amounts of main memory to perform spatial queries.

8.
Similarity searching often reduces to finding the k nearest neighbors to a query object. Finding the k nearest neighbors is achieved by applying either a depth-first or a best-first algorithm to the search hierarchy containing the data. These algorithms are generally applicable to any index based on hierarchical clustering. The idea is that the data is partitioned into clusters which are aggregated to form other clusters, with the total aggregation represented as a tree. These algorithms have traditionally used a lower bound, corresponding to the minimum distance at which a nearest neighbor can be found (termed MinDist), to prune the search process by avoiding the processing of some of the clusters, as well as individual objects, when they can be shown to be farther from the query object q than all of the current k nearest neighbors of q. An alternative pruning technique that uses an upper bound, corresponding to the maximum possible distance at which a nearest neighbor is guaranteed to be found (termed MaxNearestDist), is described. The MaxNearestDist upper bound is adapted to enable its use for finding the k nearest neighbors instead of just the nearest neighbor (i.e., k=1) as in its previous uses. Both the depth-first and best-first k-nearest-neighbor algorithms are modified to use MaxNearestDist, which is shown to enhance both algorithms by overcoming their shortcomings. In particular, for the depth-first algorithm, the number of clusters in the search hierarchy that must be examined is not increased, thereby potentially lowering its execution time, while for the best-first algorithm, the number of clusters that must be retained in the priority queue used to control the order in which clusters are processed is not increased, thereby potentially lowering its storage requirements.
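The sketch below (a toy index built by recursive median splits, not the paper's code) shows best-first k-nearest-neighbor search over a hierarchical clustering of 2-D points. Only the classic MinDist lower-bound pruning is illustrated; the MaxNearestDist upper bound that is the paper's contribution is omitted for brevity.

```python
# Best-first k-NN over a simple bounding-box hierarchy, pruned with the MinDist lower bound.
import heapq, math, random

def build(points, leaf_size=4, depth=0):
    """Recursive median split; every node keeps its bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    box = (min(xs), min(ys), max(xs), max(ys))
    if len(points) <= leaf_size:
        return {"box": box, "points": points}
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"box": box,
            "children": [build(points[:mid], leaf_size, depth + 1),
                         build(points[mid:], leaf_size, depth + 1)]}

def min_dist(q, box):
    """Lower bound on the distance from q to anything inside the bounding box."""
    dx = max(box[0] - q[0], 0, q[0] - box[2])
    dy = max(box[1] - q[1], 0, q[1] - box[3])
    return math.hypot(dx, dy)

def knn(root, q, k):
    result = []                                  # max-heap of (-distance, point)
    frontier = [(min_dist(q, root["box"]), 0, root)]
    counter = 1                                  # tie-breaker so dict nodes are never compared
    while frontier:
        d, _, node = heapq.heappop(frontier)
        if len(result) == k and d >= -result[0][0]:
            break                                # MinDist pruning: nothing closer remains
        if "points" in node:                     # leaf: compare actual objects
            for p in node["points"]:
                dist = math.hypot(p[0] - q[0], p[1] - q[1])
                if len(result) < k:
                    heapq.heappush(result, (-dist, p))
                elif dist < -result[0][0]:
                    heapq.heapreplace(result, (-dist, p))
        else:                                    # internal node: enqueue children by MinDist
            for child in node["children"]:
                heapq.heappush(frontier, (min_dist(q, child["box"]), counter, child))
                counter += 1
    return sorted((-d, p) for d, p in result)

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
print(knn(build(pts), q=(0.5, 0.5), k=3))
```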

9.
This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches for this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to a full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the others by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on representing and exploring the search space through new lattice properties proven here. Several experiments with well-known public data indicate the superiority of the proposed method over sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained better or equal results in similar or even smaller computational time.

10.
Implementation techniques for geometric branch-and-bound matching methods
Algorithms for geometric matching and feature extraction that work by recursively subdividing transformation space and bounding the quality of match have been proposed in a number of different contexts and have become increasingly popular over the last few years. This paper describes matchlist-based branch-and-bound techniques and presents a number of new applications of branch-and-bound methods, among them a method for globally optimal partial line-segment matching under bounded or Gaussian error, point matching under a Gaussian error model with subpixel accuracy and precise orientation models, and a simple and robust technique for finding multiple distinct object instances. It also contains extensive reference information for the implementation of such matching methods under a wide variety of error bounds and transformations. In addition, the paper contains a number of benchmarks and evaluations that provide new information about the runtime behavior of branch-and-bound matching algorithms in general, and that help choose among different implementation strategies, such as the use of point-location data structures and space/time tradeoffs involving depth-first search.
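A minimal 1-D illustration (not the paper's method, and far simpler than its transformation classes) of branch-and-bound matching over transformation space: candidate translations are kept as intervals, each interval is scored with an upper bound on how many model points could match some data point under error eps, and the interval with the best bound is repeatedly split until it is narrow enough.

```python
# Branch-and-bound over 1-D translation space for point matching under bounded error.
import heapq

model = [0.0, 1.0, 2.5, 4.0]
data  = [3.1, 4.1, 5.6, 7.1, 9.0]          # roughly model shifted by ~3.1, plus clutter
eps   = 0.05

def bound(lo, hi):
    """Upper bound on the number of matched model points for any translation t in [lo, hi]."""
    hits = 0
    for m in model:
        if any(lo + m - eps <= d <= hi + m + eps for d in data):
            hits += 1
    return hits

def matches(t):
    """Actual number of model points matched by translation t."""
    return sum(any(abs(m + t - d) <= eps for d in data) for m in model)

def bnb_match(t_lo, t_hi, tol=1e-3):
    best_t, best_score = None, -1
    heap = [(-bound(t_lo, t_hi), t_lo, t_hi)]
    while heap:
        neg_ub, lo, hi = heapq.heappop(heap)
        if -neg_ub <= best_score:
            continue                        # this region cannot beat the incumbent
        mid = (lo + hi) / 2
        score = matches(mid)
        if score > best_score:
            best_t, best_score = mid, score
        if hi - lo > tol:                   # subdivide the transformation interval
            heapq.heappush(heap, (-bound(lo, mid), lo, mid))
            heapq.heappush(heap, (-bound(mid, hi), mid, hi))
    return best_t, best_score

print(bnb_match(0.0, 10.0))                 # expected: translation near 3.1 matching 4 points
```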

11.
Metric-space similarity search has proven suitable in a number of application domains, such as multimedia retrieval and computational biology, to name a few. These applications usually work on very large databases that are often indexed to speed up on-line searches. To achieve efficient throughput, it is essential to exploit the intrinsic parallelism in the respective search query processing algorithms. Many strategies have been proposed in the literature to parallelize these algorithms on either shared- or distributed-memory multiprocessor systems. Lately, GPUs have been used to implement brute-force parallel search strategies instead of using index data structures, since indexing poses difficulties when it comes to efficient exploitation of GPU resources. In this paper we propose single- and multi-GPU metric-space techniques that efficiently exploit GPU-tailored index data structures for parallel similarity search in large databases. The experimental results show that our proposal outperforms previous index-based sequential and OpenMP search strategies.

12.
Enhanced iterative-deepening search
Iterative-deepening searches mimic a breadth-first node expansion with a series of depth-first searches that operate with successively extended search horizons. They have been proposed as a simple way to reduce the space complexity of best-first searches like A* from exponential to linear in the search depth. But there is more to iterative deepening than just a reduction of storage space. As the authors show, the search efficiency can be greatly improved by exploiting previously gained node information. The information-management techniques considered here owe much to their counterparts from the domain of two-player games, namely the use of fast-execution memory functions to guide the search. The authors' methods not only save node expansions, but are also faster and easier to implement than previous proposals.
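A small sketch (assumed toy graph, not the authors' code) of iterative-deepening search enhanced with one kind of memory function: a table remembers, for each node, the largest remaining depth with which it was already searched unsuccessfully, so later visits with no larger budget are cut off instead of being re-expanded.

```python
# Iterative-deepening depth-first search with a visited-depth memo to avoid redundant re-expansion.
def iddfs(graph, start, goal, max_depth=20):
    def dls(node, remaining, table):
        if node == goal:
            return 0
        if remaining == 0 or table.get(node, -1) >= remaining:
            return None                       # already searched at least this deeply from here
        table[node] = remaining
        for nxt in graph.get(node, ()):
            sub = dls(nxt, remaining - 1, table)
            if sub is not None:
                return sub + 1
        return None

    for depth in range(max_depth + 1):        # successively extended search horizons
        cost = dls(start, depth, {})
        if cost is not None:
            return cost
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"], "F": ["G"]}
print(iddfs(graph, "A", "G"))                 # expected shortest path length: 4
```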

13.
Support Vector Machines (SVMs) have achieved very good performance on different learning problems. However, the success of SVMs depends on an adequate choice of the values of a number of parameters (e.g., the kernel and regularization parameters). In the current work, we propose combining meta-learning and search algorithms to deal with the problem of SVM parameter selection. In this combination, given a new problem to be solved, meta-learning is employed to recommend SVM parameter values based on parameter configurations that have been successfully adopted in previous similar problems. The parameter values returned by meta-learning are then used as initial search points by a search technique, which further explores the parameter space. In this proposal, we envision that the initial solutions provided by meta-learning lie in good regions of the search space (i.e., they are closer to optimal solutions); hence, the search algorithm needs to evaluate fewer candidate solutions to find an adequate one. In this work, we investigate the combination of meta-learning with two search algorithms: Particle Swarm Optimization and Tabu Search. The implemented hybrid algorithms were used to select the values of two SVM parameters in the regression domain. These combinations were compared with the use of the search algorithms without meta-learning. The experimental results on a set of 40 regression problems showed that, on average, the proposed hybrid methods obtained lower error rates than their components applied in isolation.
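The following schematic sketch (scikit-learn assumed available; the recommended starting point is hypothetical and stands in for what meta-learning would return; a simple hill-climbing local search stands in for the PSO and Tabu Search used in the paper) shows the general idea: seed the search over (C, gamma) for an SVR model with a configuration from a similar past problem instead of starting blindly.

```python
# Meta-learning-seeded local search over SVR parameters (illustrative stand-in, not the paper's method).
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=0)

def score(log_c, log_gamma):
    """Cross-validated quality of an SVR for one (C, gamma) configuration (higher is better)."""
    model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

def hill_climb(start, steps=15, step_size=0.5):
    """Simple local search in log10 parameter space, seeded by the recommended configuration."""
    current, current_score = start, score(*start)
    for _ in range(steps):
        neighbours = [(current[0] + dx, current[1] + dy)
                      for dx in (-step_size, 0.0, step_size)
                      for dy in (-step_size, 0.0, step_size) if (dx, dy) != (0.0, 0.0)]
        best_score, best = max((score(*cand), cand) for cand in neighbours)
        if best_score <= current_score:
            break                                   # local optimum reached
        current, current_score = best, best_score
    return current, current_score

meta_recommendation = (1.0, -2.0)   # hypothetical log10(C), log10(gamma) suggested by meta-learning
params, quality = hill_climb(meta_recommendation)
print("selected log10(C), log10(gamma):", params, "cv score:", quality)
```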

14.
The best-first search algorithm A* allows search graphs that are trees, directed acyclic graphs or directed graphs with cycles. In real-life applications of A*, the search graph is generally implemented as a tree. It is shown here that for certain well-known one-machine job-sequencing problems that arise in job shops, graph search is much faster than best-first tree search when problem instances are of small or medium size. Moreover, graph search uses less memory and so is able to solve larger problems. Depth-first search needs little memory, and is therefore capable in principle of solving problems of arbitrary size, but for the examples that were studied it is slower than graph search by orders of magnitude.
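A self-contained toy comparison (a lattice-like graph with many duplicate paths, not the paper's job-sequencing model) of best-first tree search versus graph search: both run A* with the same admissible heuristic, but only the graph-search variant detects duplicate states, so it expands far fewer nodes.

```python
# A* tree search (no duplicate detection) vs. A* graph search (closed set with best g).
import heapq

def neighbours(state, n=6):
    # States are (x, y) points reachable by right/up moves; many move orders reach the same point.
    x, y = state
    return [s for s in ((x + 1, y), (x, y + 1)) if s[0] <= n and s[1] <= n]

def h(state, n=6):
    return (n - state[0]) + (n - state[1])        # admissible: unit-cost moves remaining

def astar(use_closed, n=6):
    start, goal = (0, 0), (n, n)
    open_heap = [(h(start), 0, start)]
    best_g = {}
    expanded = 0
    while open_heap:
        f, g, state = heapq.heappop(open_heap)
        if use_closed:
            if state in best_g and best_g[state] <= g:
                continue                          # duplicate reached with no better cost
            best_g[state] = g
        expanded += 1
        if state == goal:
            return g, expanded
        for child in neighbours(state, n):
            heapq.heappush(open_heap, (g + 1 + h(child, n), g + 1, child))
    return None, expanded

print("tree search :", astar(use_closed=False))   # same cost, many more expansions
print("graph search:", astar(use_closed=True))
```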

15.
In this paper we present work on trail improvement and partial-order reduction in the context of directed explicit-state model checking. Directed explicit-state model checking employs directed heuristic search algorithms, such as A* or best-first search, to improve the error-detection capabilities of explicit-state model checking. We first present the use of directed explicit-state model checking to improve the length of already established error trails. Second, we show that partial-order reduction, which aims at reducing the size of the state space by exploiting the commutativity of concurrent transitions in asynchronous systems, can coexist well with directed explicit-state model checking. Finally, we illustrate how to mitigate the excessive length of error trails produced by partial-order reduction in explicit-state model checking. In this context we also propose a combination of heuristic search and partial-order reduction to improve the length of already provided counterexamples.

16.
A new search strategy, called depth-m search, is proposed for branch-and-bound algorithms, where m is a parameter to be set by the user. In particular, depth-1 search is equivalent to the conventional depth-first search, and depth-∞ search is equivalent to general heuristic search (including best-bound search as a special case). It is confirmed by computational experiment that the performance of depth-m search changes continuously from that of depth-first search to that of heuristic search as m is changed from 1 to ∞. The exact upper bound on the size of the required memory space is derived and shown to be O(nm), where n is the problem size. Some methods for controlling m during the computation are also proposed and compared, in order to carry out the entire computation within a given memory-space bound.

17.
Scheduling tasks onto the processors of a parallel system is a crucial part of program parallelisation. Due to the NP-hard nature of the task scheduling problem, scheduling algorithms are based on heuristics that try to produce good rather than optimal schedules. Nevertheless, in certain situations it is desirable to have optimal schedules, for example for time-critical systems or to evaluate scheduling heuristics. This paper investigates the task scheduling problem using the A* search algorithm, which is a best-first state-space search. The adaptation of the A* search algorithm to the task scheduling problem is referred to as the A* scheduling algorithm. The A* scheduling algorithm can produce optimal schedules in reasonable time for small to medium-sized task graphs with several tens of nodes. In comparison to a previous approach, the A* scheduling algorithm presented here has a significantly reduced search space due to a much improved consistent and admissible cost function f(s) and additional pruning techniques. Experimental results show that the cost function and the various pruning techniques are very effective for the evaluated workload. Last but not least, the results show that the proposed A* scheduling algorithm significantly outperforms the previous approach.
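A highly simplified sketch of A* over partial schedules (independent tasks on identical processors, no precedence constraints or communication costs, unlike the task graphs in the paper): a state assigns a prefix of the tasks to the processors, and the admissible bound f(s) combines the current worst processor load with the perfectly balanced remaining work.

```python
# A* scheduling of independent tasks on P processors to minimize makespan (toy instance).
import heapq

tasks = [4, 7, 2, 9, 3, 6]            # toy task execution times
P = 2                                 # number of identical processors
total = sum(tasks)

def lower_bound(loads):
    """Admissible bound on the final makespan: current worst load vs. perfectly balanced total work."""
    return max(max(loads), total / P)

def astar_schedule():
    start = (0, (0,) * P)             # (index of next task to place, load per processor)
    heap = [(lower_bound(start[1]), start)]
    seen = set()
    while heap:
        bound, (idx, loads) = heapq.heappop(heap)
        if idx == len(tasks):
            return max(loads), loads  # first complete schedule popped is optimal
        key = (idx, tuple(sorted(loads)))
        if key in seen:
            continue                  # prune processor-permutation duplicates
        seen.add(key)
        for p in range(P):
            new = list(loads)
            new[p] += tasks[idx]
            child = (idx + 1, tuple(new))
            heapq.heappush(heap, (lower_bound(child[1]), child))
    return None

makespan, loads = astar_schedule()
print("optimal makespan:", makespan, "processor loads:", loads)   # expected makespan: 16
```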

18.
The class of NP-hard problems contains many problems of considerable practical importance. The complexity of these problems is so overwhelming that exhaustive search of the solution space is not possible even using massively parallel search techniques, particularly for problems of super-exponential complexity such as flow-shop and job-shop scheduling. This two-part paper discusses a novel, inherently parallel heuristic solution technique. Part I presents this technique, known as parallel dynamic interaction (PDI), as a general solution framework that is applicable to a number of computationally intractable problems, and gives details of its general methodology. From a parallel-processing standpoint, PDI is interesting because it is inherently based upon the dynamic interplay between simultaneously executing subproblems. As such, PDI has no direct serial analog, and is not directly amenable to conventional parallel-processing speedup analysis. This may indicate that parallel processing can offer opportunities beyond simply speeding up the solution of sequentially specified algorithms. From a practical problem-solving perspective, PDI shows promise as a method capable of generating high-quality solutions to exponentially and super-exponentially hard problems. For large problems, PDI is often able to find a near-optimal solution many orders of magnitude faster than the time taken by a conventional parallel branch-and-bound search to find a solution of comparable quality.

19.
In this paper, we consider a class of single-machine job-scheduling problems where the objective is to minimize the weighted sum of earliness–tardiness penalties of jobs. The weights are job-independent but depend on whether a job is early or tardy. The restricted version of the problem, where the common due date is smaller than a critical value, is known to be NP-complete. While the dynamic programming formulation runs out of memory for large problem instances, the depth-first branch-and-bound formulation runs slowly for large problems since it uses a tree search space. In this paper, we suggest an algorithm to optimally solve large instances of the restricted version of the problem. The algorithm uses a graph search space. Unlike dynamic programming, the algorithm can output optimal solutions even when available memory is limited. It has been found to run faster than the dynamic programming and depth-first branch-and-bound formulations and can solve much larger instances of the problem in reasonable time. New upper and lower bounds have been proposed and used. Experimental findings are reported in detail.
Scope and purpose: A class of single-machine problems arising from scheduling jobs in a JIT environment is considered in this paper. The objective is to minimize the total weighted earliness–tardiness penalty of the jobs. We present a new algorithm and conduct extensive empirical runs to show that it performs much better than existing approaches in solving large instances of the problem.

20.
In this paper we discuss the problem of packing a set of small rectangles (pieces) into an enclosing final rectangle. We present first a best-first branch-and-bound exact algorithm and second a heuristic approach, in order to solve this problem exactly and approximately. The performance of the proposed approaches is evaluated on several randomly generated problem instances. Computational results show that the proposed exact algorithm is able to solve small and medium problem instances within reasonable execution time. The derived heuristic performs very well in the sense that it produces high-quality solutions within small computational time.
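As a rough illustration of the approximate side of such packing problems (a plain next-fit shelf heuristic, a stand-in that is not the paper's heuristic, and the exact best-first branch-and-bound is not reproduced here): pieces are sorted by height and placed left to right on shelves inside a strip of fixed width, and the resulting strip height determines the enclosing rectangle.

```python
# Next-fit shelf heuristic for packing rectangles into a fixed-width strip (illustrative only).
def shelf_pack(pieces, strip_width):
    """pieces: list of (width, height); returns (strip_width, used_height, placements)."""
    placements = []
    shelf_y = 0.0          # bottom of the current shelf
    shelf_h = 0.0          # height of the current shelf
    cursor_x = 0.0
    for w, h in sorted(pieces, key=lambda p: p[1], reverse=True):
        if cursor_x + w > strip_width:         # piece does not fit: open a new shelf
            shelf_y += shelf_h
            cursor_x, shelf_h = 0.0, 0.0
        placements.append((cursor_x, shelf_y, w, h))
        cursor_x += w
        shelf_h = max(shelf_h, h)
    return strip_width, shelf_y + shelf_h, placements

pieces = [(4, 3), (2, 5), (3, 2), (5, 1), (2, 2)]
print(shelf_pack(pieces, strip_width=7))       # enclosing rectangle: 7 wide, 8 high for this toy data
```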
