Similar Literature
20 similar records found (search time: 984 ms)
1.
In this paper we explore the impact of caching during search in the context of the recent framework of AND/OR search in graphical models. Specifically, we extend the depth-first AND/OR Branch-and-Bound tree search algorithm to explore an AND/OR search graph by equipping it with an adaptive caching scheme similar to good and no-good recording. Furthermore, we present best-first search algorithms for traversing the same underlying AND/OR search graph and compare both algorithms empirically. We focus on two common optimization problems in graphical models: finding the Most Probable Explanation (MPE) in belief networks and solving Weighted CSPs (WCSP). In an extensive empirical evaluation we demonstrate conclusively the superiority of the memory-intensive AND/OR search algorithms on a variety of benchmarks.
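To make the caching idea concrete, here is a minimal sketch (a toy chain-structured minimization with an invented cost function; not the paper's algorithm): memoizing each subproblem's value, keyed by the context it depends on, collapses the exponential search tree into a search graph with one entry per (variable, context) pair, in the spirit of good/no-good recording.

```python
from functools import lru_cache

N, DOMAIN = 8, (0, 1, 2)                 # invented chain-structured problem

def cost(i, a, b):                       # arbitrary invented cost table
    return (a * b + (a - b) ** 2 + i) % 5

@lru_cache(maxsize=None)                 # the cache: one entry per (i, prev)
def best_completion(i, prev):
    """Minimal cost of completing x_i .. x_{N-1} given x_{i-1} = prev."""
    if i == N:
        return 0
    return min(cost(i, prev, v) + best_completion(i + 1, v) for v in DOMAIN)

print(min(best_completion(1, v) for v in DOMAIN))   # optimal total cost
```

With the cache disabled, the same recursion would revisit identical subproblems exponentially often.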

2.
Theoretical comparisons of search strategies in branch-and-bound algorithms
Four known search strategies used in branch-and-bound algorithms (heuristic search, depth-first search, best-bound search, and breadth-first search) are theoretically compared from the viewpoint of the performance of the resulting algorithms. Heuristic search includes the other three as special cases. Since heuristic search is determined by a heuristic function h, we first investigate how the performance of the resulting algorithms depends on h. In particular, we show that heuristic search is stable in the sense that a slight change in h causes only a slight change in its performance. The best and worst heuristic functions are characterized, and we also discuss how the heuristic function h should be modified to obtain a branch-and-bound algorithm with improved performance. Finally, properties and limitations of depth-first search, best-bound search, and breadth-first search viewed as special cases of heuristic search are considered. In particular, it is shown that the stability observed for heuristic search no longer holds for depth-first search.
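The framework's central observation, that the strategy is fixed entirely by the node-ordering function h, can be illustrated with one generic search loop. The toy graph and bounds below are invented for illustration; this is a sketch of the idea, not the paper's formal model.

```python
import heapq, itertools

GRAPH = {"s": [("a", 2), ("b", 5)], "a": [("c", 2), ("b", 1)],
         "b": [("t", 3)], "c": [("t", 6)]}    # invented toy instance

def generic_search(h):
    """One branch-and-bound loop; only the ordering function h changes."""
    tie, best = itertools.count(), float("inf")
    heap = [(h(0, 0), next(tie), "s", 0, 0)]  # (key, tie, node, cost g, depth d)
    while heap:
        _, _, node, g, d = heapq.heappop(heap)
        if g >= best:
            continue                          # pruned by the incumbent
        if node == "t":
            best = g
            continue
        for succ, w in GRAPH.get(node, []):
            heapq.heappush(heap, (h(g + w, d + 1), next(tie), succ, g + w, d + 1))
    return best

print(generic_search(lambda g, d: g))    # h = bound  -> best-bound search
print(generic_search(lambda g, d: -d))   # h = -depth -> depth-first order
```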

3.
A new search strategy, called depth-m search, is proposed for branch-and-bound algorithms, wherem is a parameter to be set by the user. In particular, depth-1 search is equivalent to the conventional depth-first search, and depth- search is equivalent to the general heuristic search (including best-bound search as a special case). It is confirmed by computational experiment that the performance of depth-m search continuously changes from that, of depth-first search to that of heuristic search, whenm is changed from 1 to . The exact upper bound on the size of the required memory space is derived and shown to be bounded byO(nm), wheren is the problem size. Some methods for controllingm during computation are also proposed and compared, to carry out the entire computation within a given memory space bound.  相似文献   
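The exact depth-m rule is defined in the paper; the following is only a rough sketch of the interpolation idea under an assumed simplification: a best-first pool capped at m nodes, with overflow demoted to a LIFO stack and explored depth-first. The demo problem and heuristic are invented.

```python
def depth_m_search(root, children, h, is_goal, m):
    pool, stack = [root], []          # pool: up to m best nodes; stack: overflow
    while pool or stack:
        if pool:
            pool.sort(key=h)          # pool is small (<= m), sorting is cheap
            node = pool.pop(0)        # best node in the bounded pool
        else:
            node = stack.pop()        # pool exhausted: depth-first fallback
        if is_goal(node):
            return node
        pool.extend(children(node))
        while len(pool) > m:          # overflow: demote the worst node
            pool.sort(key=h)
            stack.append(pool.pop())
    return None

# Toy demo (invented): search binary strings of length 4 for "1111",
# guided by the number of ones found so far.
found = depth_m_search("", lambda s: [s + "0", s + "1"] if len(s) < 4 else [],
                       lambda s: -s.count("1"), lambda s: s == "1111", m=3)
print(found)   # -> 1111
```

With m = 1 the pool degenerates toward a plain stack discipline; with m large enough it behaves as unrestricted best-first search, mirroring the continuum the abstract describes.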

4.
This is the first of two papers presenting and evaluating the power of a new framework for combinatorial optimization in graphical models, based on AND/OR search spaces. We introduce a new generation of depth-first Branch-and-Bound algorithms that explore the AND/OR search tree using static and dynamic variable orderings. The virtue of the AND/OR representation of the search space is that its size may be far smaller than that of a traditional OR representation, which can translate into significant time savings for search algorithms. The focus of this paper is on linear-space search, which explores the AND/OR search tree; in the second paper we explore memory-intensive AND/OR search algorithms. In conjunction with the AND/OR search space we investigate the power of the mini-bucket heuristics in both static and dynamic setups. We focus on two of the most common optimization problems in graphical models: finding the Most Probable Explanation in Bayesian networks and solving Weighted CSPs. In extensive empirical evaluations we demonstrate that the new AND/OR Branch-and-Bound approach improves considerably over the traditional OR search strategy, and we show how various variable ordering schemes impact the performance of the AND/OR search scheme.

5.
The success of model checking is largely based on its ability to efficiently locate errors in software designs. If an error is found, a model checker produces a trail that shows how the error state can be reached, which greatly facilitates debugging. However, while current model checkers find error states efficiently, the counterexamples are often unnecessarily lengthy, which hampers error explanation. This is due to the use of naive search algorithms in the state space exploration. In this paper we present approaches to the use of heuristic search algorithms in explicit-state model checking. We present the class of A* directed search algorithms and propose heuristics together with bitstate compression techniques for the search of safety property violations. We achieve great reductions in the length of the error trails, and in some instances render problems analyzable by exploring a much smaller number of states than standard depth-first search. We then suggest an improvement of the nested depth-first search algorithm and show how it can be used together with A* to improve the search for liveness property violations. Our approach to directed explicit-state model checking has been implemented in a tool set called HSF-SPIN. We provide experimental results from the protocol validation domain using HSF-SPIN.
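A minimal A* sketch over an explicit state graph follows; the transition system and heuristic are invented toys, and HSF-SPIN's actual heuristics and bitstate compression are not reproduced. The point is only that expanding states in order of g + h with an admissible h makes the first error state popped yield a shortest trail.

```python
import heapq

def a_star(start, succ, h, is_error):
    """Expand states in order of g + h; with an admissible h, the first
    error state popped yields a shortest counterexample trail."""
    heap, seen = [(h(start), 0, start, [start])], set()
    while heap:
        _, g, s, trail = heapq.heappop(heap)
        if is_error(s):
            return trail
        if s in seen:
            continue
        seen.add(s)
        for t in succ(s):
            if t not in seen:
                heapq.heappush(heap, (g + 1 + h(t), g + 1, t, trail + [t]))
    return None

# Toy transition system: a counter modulo 17; the "error" is reaching 13.
succ = lambda s: [(s + 1) % 17, (2 * s) % 17]
print(a_star(0, succ, h=lambda s: 0, is_error=lambda s: s == 13))
```

With h = 0 (trivially admissible) this degenerates to breadth-first search; a well-informed h visits far fewer states, which is the effect the paper measures.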

6.
The discovery of patterns that accurately discriminate one class label from another remains a challenging data mining task. Subgroup discovery (SD) is one of the frameworks that enables such interesting patterns to be elicited from labeled data. A question remains fairly open: how to select an accurate heuristic search technique when exhaustive enumeration of the pattern space is infeasible? Existing approaches make use of beam search, sampling, and genetic algorithms for discovering a pattern set that is non-redundant and of high quality w.r.t. a pattern quality measure. We argue that such approaches produce pattern sets that lack diversity: only a few patterns that are both of high quality and sufficiently different are discovered. Our main contribution is to formally define pattern mining as a game and to solve it with Monte Carlo tree search (MCTS). It can be seen as an exhaustive search guided by random simulations which can be stopped early (limited budget) by virtue of its best-first search property. We show through a comprehensive set of experiments how MCTS enables the anytime discovery of a diverse pattern set of high quality. It outperforms other approaches when dealing with a large pattern search space and for different quality measures. Thanks to its genericity, our MCTS approach can be used for SD but also for many other pattern mining tasks.
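The best-first behavior of MCTS comes from its UCB-style selection step. Below is a generic UCT selection sketch (the paper's pattern-space specifics, simulation policy, and quality measures are not reproduced; names are invented for illustration).

```python
import math

class Node:
    def __init__(self, pattern):
        self.pattern, self.children = pattern, []
        self.visits, self.total_quality = 0, 0.0

def ucb_select(parent, c=math.sqrt(2)):
    """UCT selection: balance average pattern quality (exploitation)
    against rarely visited refinements (exploration). Assumes every
    child has been visited once; a full MCTS loop expands unvisited
    children first, then simulates and backpropagates quality."""
    return max(parent.children,
               key=lambda ch: ch.total_quality / ch.visits
                              + c * math.sqrt(math.log(parent.visits) / ch.visits))

root = Node("true"); root.visits = 10
for quality, visits in ((3.0, 5), (2.0, 2)):
    child = Node("refinement"); child.total_quality, child.visits = quality, visits
    root.children.append(child)
print(ucb_select(root).total_quality)   # -> 2.0 (the less-visited child wins)
```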

7.
Fuzzy time series models have been widely applied to forecasting future stock prices and weighted indexes in the financial field. Three problems have been identified in relation to the associated time series methods: (1) conventional time series models consider only single variables, (2) fuzzy time series models determine the interval length of the linguistic values subjectively, and (3) variable selection depends subjectively on personal experience and opinion. In light of these limitations, this study develops a hybrid seven-step procedure comprising three integrated fuzzy time series models, based on fitting functions, to forecast weighted indexes of the stock market. First, the proposed models employ Pearson correlation coefficients to objectively select important technical indicators. Second, this study utilizes an objective algorithm to automatically determine the lower and upper bounds of the universe of discourse. Third, the proposed models use the spread-partition algorithm to automatically determine linguistic intervals. Finally, they combine the transformed variables to build three fuzzy time series models under the criterion of minimal root mean square error (RMSE). Furthermore, this study provides the justification for using a linear process to select the inputs for the given non-linear data. To further evaluate the performance of the proposed models, the transaction records of the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and the HSI (Hang Seng Index) from 1998/01/03 to 2006/12/31 are used to illustrate the methodology on two experimental data sets. Chen's model (Fuzzy Sets Syst. 81:311–319, 1996), Yu's model (Physica A 349:609–624, 2005), support vector regression (SVR), and partial least squares regression (PLSR) are compared with the proposed models on the same data sets. The analytical results show that the proposed models outperform the listed models under the RMSE evaluation criterion (a proxy for forecasting accuracy) for forecasting a weighted stock index in both the Taiwan and Hong Kong stock markets.
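Two of the objective ingredients the abstract names are standard and easy to pin down: the Pearson correlation used for indicator selection and the RMSE used as the model-selection criterion. The series below are invented for illustration; the fuzzification and spread-partition steps are not reproduced.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

index = [100, 102, 101, 105, 107]       # invented weighted-index series
indicator = [0.5, 0.8, 0.6, 1.1, 1.3]   # invented technical-indicator series
print(pearson(index, indicator))        # high correlation -> keep the indicator
print(rmse([101, 103, 102, 104, 108], index))
```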

8.
Algorithms for Nearest Neighbor Search on Moving Object Trajectories
Nearest Neighbor (NN) search has been at the core of spatial and spatiotemporal database research during the last decade. The literature on NN query processing algorithms so far deals with either stationary or moving query points over static datasets, or with future (predicted) locations over a set of continuously moving points. With the increasing number of Mobile Location Services (MLS), effective k-NN query processing over historical trajectory data has become a key vehicle for data analysis, improving existing services and enabling new ones. In this paper, we investigate mechanisms to perform NN search on R-tree-like structures storing historical information about moving object trajectories. The proposed (depth-first and best-first) algorithms vary with respect to the type of the query object (stationary or moving point) as well as the type of the query result (historical continuous or not), resulting in four types of NN queries. We also propose novel metrics to support our search ordering and pruning strategies. Using the implementation of the proposed algorithms on two members of the R-tree family for trajectory data (namely, the TB-tree and the 3D-R-tree), we demonstrate their scalability and efficiency through an extensive experimental study using large synthetic and real datasets.
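The paper's trajectory-specific metrics are its own contribution; shown here is only the classical MinDist ordering key between a query point and a bounding rectangle, which R-tree NN algorithms of this kind build on (generic version, coordinates invented).

```python
def min_dist(q, mbr):
    """Minimum Euclidean distance from point q to the axis-aligned box
    mbr = (lower_corner, upper_corner); zero when q lies inside."""
    lo, hi = mbr
    return sum(max(l - c, 0, c - h) ** 2 for c, l, h in zip(q, lo, hi)) ** 0.5

print(min_dist((2, 5), ((0, 0), (3, 3))))   # -> 2.0 (query point above the box)
```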

9.
In this article, we formulate and study quantum analogues of randomized search heuristics, which make use of Grover search (Proceedings of the 28th Annual ACM Symposium on Theory of Computing, pp. 212–219, ACM, New York, 1996) to accelerate the search for improved offspring. We then specialize the above formulation to two specific search heuristics: Random Local Search and the (1+1) Evolutionary Algorithm. We call the resulting quantum versions of these search heuristics Quantum Local Search and the (1+1) Quantum Evolutionary Algorithm. We conduct a rigorous runtime analysis of these quantum search heuristics in the computation model of quantum algorithms, which, besides classical computation steps, also permits those unique to quantum computing devices. To this end, we study the six elementary pseudo-Boolean optimization problems OneMax, LeadingOnes, Discrepancy, Needle, Jump, and TinyTrap. It turns out that the advantage of the respective quantum search heuristic over its classical counterpart varies with the problem structure and ranges from no speedup at all for the problem Discrepancy to exponential speedup for the problem TinyTrap. We show that these runtime behaviors are closely linked to the probabilities of performing successful mutations in the classical algorithms.
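For reference, here is the classical (1+1) EA on OneMax, the baseline against which the quantum variants are measured (standard textbook formulation; the Grover-accelerated offspring search itself is not reproduced here).

```python
import random

def one_plus_one_ea(n, max_iters=100_000):
    """Classical (1+1) EA on OneMax: flip each bit with probability 1/n,
    accept the offspring if it is at least as good."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        if sum(x) == n:
            return x                                    # optimum found
        y = [b ^ (random.random() < 1 / n) for b in x]  # standard bit mutation
        if sum(y) >= sum(x):
            x = y
    return x

print(sum(one_plus_one_ea(32)))   # -> 32 (expected time Theta(n log n))
```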

10.
Many real-world problems involve several, usually conflicting, objectives. Multiobjective analysis deals with these problems by locating trade-offs among different optimal solutions. Regarding graph search problems, several algorithms based on best-first and depth-first approaches have been proposed to return the set of all Pareto-optimal solutions. This article presents a detailed comparison between two representatives of multiobjective depth-first algorithms, PIDMOA* and MO-DF-BnB. Both of them extend previous single-objective search algorithms with linear-space requirements to the multiobjective case. Experimental analyses of their time performance over tree-shaped search spaces are presented. The results clarify how the suitability of each algorithm depends on parameters such as the number and depth of goal nodes.
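The Pareto-dominance test that multiobjective search algorithms such as PIDMOA* and MO-DF-BnB rely on when filtering candidate solutions can be stated compactly (generic version, not either paper's exact code; cost vectors invented).

```python
def dominates(u, v):
    """u dominates v: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(costs):
    """Keep exactly the non-dominated cost vectors."""
    return [c for c in costs if not any(dominates(o, c) for o in costs)]

print(pareto_front([(3, 5), (4, 4), (5, 3), (4, 6), (6, 6)]))
# -> [(3, 5), (4, 4), (5, 3)]
```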

11.
Similarity searching often reduces to finding the k nearest neighbors to a query object. Finding the k nearest neighbors is achieved by applying either a depth-first or a best-first algorithm to the search hierarchy containing the data. These algorithms are generally applicable to any index based on hierarchical clustering. The idea is that the data is partitioned into clusters which are aggregated to form other clusters, with the total aggregation being represented as a tree. These algorithms have traditionally used a lower bound corresponding to the minimum distance at which a nearest neighbor can be found (termed MinDist) to prune the search process by avoiding the processing of some of the clusters, as well as individual objects, when they can be shown to be farther from the query object q than all of the current k nearest neighbors of q. An alternative pruning technique that uses an upper bound corresponding to the maximum possible distance at which a nearest neighbor is guaranteed to be found (termed MaxNearestDist) is described. The MaxNearestDist upper bound is adapted to enable its use for finding the k nearest neighbors instead of just the nearest neighbor (i.e., k=1) as in its previous uses. Both the depth-first and best-first k-nearest neighbor algorithms are modified to use MaxNearestDist, which is shown to enhance both algorithms by overcoming their shortcomings. In particular, for the depth-first algorithm, the number of clusters in the search hierarchy that must be examined is not increased, thereby potentially lowering its execution time, while for the best-first algorithm, the number of clusters in the search hierarchy that must be retained in the priority queue used to control the ordering of processing of the clusters is also not increased, thereby potentially lowering its storage requirements.
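A generic best-first k-NN loop using the MinDist lower bound looks as follows (a sketch, with the MaxNearestDist refinement omitted; the flat demo hierarchy and points are invented).

```python
import heapq, itertools

def knn_best_first(root, q, k, min_dist, dist, children, points):
    """Generic best-first k-NN over a cluster hierarchy: clusters are
    expanded in MinDist order and pruned once MinDist reaches the
    distance of the current k-th nearest neighbor."""
    tie = itertools.count()
    frontier = [(min_dist(root, q), next(tie), root)]
    result = []                                  # max-heap of (-distance, point)
    while frontier:
        d, _, cluster = heapq.heappop(frontier)
        if len(result) == k and d >= -result[0][0]:
            break                                # MinDist prune: nothing closer left
        for p in points(cluster):
            heapq.heappush(result, (-dist(p, q), p))
            if len(result) > k:
                heapq.heappop(result)
        for c in children(cluster):
            heapq.heappush(frontier, (min_dist(c, q), next(tie), c))
    return sorted((-d, p) for d, p in result)

pts = [(1, 2), (4, 0), (3, 3), (0, 1), (5, 5)]
euclid = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
print(knn_best_first("root", (2, 2), 2,
                     min_dist=lambda c, q: 0.0,   # trivial bound for the flat demo
                     dist=euclid, children=lambda c: [],
                     points=lambda c: pts))
```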

12.
We show that the NP-hard optimization problems minimum and maximum weight exact satisfiability (XSAT) for a CNF formula C over n propositional variables equipped with arbitrary real-valued weights can be solved in O(||C|| · 2^{0.2441n}) time. To the best of our knowledge, the algorithms presented here are the first handling weighted XSAT optimization versions in non-trivial worst-case time. We also investigate the corresponding weighted counting problems, namely we show that the number of all minimum, resp. maximum, weight exact satisfiability solutions of an arbitrarily weighted formula can be determined in O(n^2 · ||C|| + 2^{0.40567n}) time. In recent years only the unweighted counterparts of these problems have been studied (Dahllöf and Jonsson, An algorithm for counting maximum weighted independent sets and its applications, in Proceedings of the 13th ACM-SIAM Symposium on Discrete Algorithms, pp. 292–298, 2002; Dahllöf et al., Theor. Comput. Sci. 320:373–394, 2004; Porschen, On some weighted satisfiability and graph problems, in Proceedings of the 31st Conference on Current Trends in Theory and Practice of Informatics (SOFSEM 2005), Lecture Notes in Computer Science, vol. 3381, pp. 278–287, Springer, 2005).
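To pin down what the problem asks, here is a brute-force reference for minimum-weight XSAT on a tiny invented formula (the paper's O(||C|| · 2^{0.2441n}) algorithm is far more sophisticated; this only fixes the semantics).

```python
from itertools import product

clauses = [(1, -2, 3), (-1, 2)]       # XSAT: exactly one literal true per clause
weights = {1: 2.0, 2: 5.0, 3: 1.0}    # arbitrary real-valued variable weights

def exactly_one_true(clause, assign):
    return sum(assign[abs(lit)] == (lit > 0) for lit in clause) == 1

assignments = (dict(zip(weights, map(bool, bits)))
               for bits in product((0, 1), repeat=len(weights)))
feasible = [a for a in assignments
            if all(exactly_one_true(c, a) for c in clauses)]
best = min(feasible, key=lambda a: sum(w for v, w in weights.items() if a[v]))
print(best, sum(w for v, w in weights.items() if best[v]))
```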

13.
This paper presents the analysis of a parallel formulation of depth-first search. At the heart of this parallel formulation is a dynamic work-distribution scheme that divides the work between different processors. The effectiveness of the parallel formulation is strongly influenced by the work-distribution scheme and the target architecture. We introduce the concept of the isoefficiency function to characterize the effectiveness of different architectures and work-distribution schemes. Many researchers considered the ring architecture to be quite suitable for parallel depth-first search. Our analytical and experimental results show that hypercube and shared-memory architectures are significantly better. The analysis of previously known work-distribution schemes motivated the design of substantially improved schemes for ring and shared-memory architectures. In particular, we present a work-distribution algorithm that guarantees close to optimal performance on a shared-memory architecture with a message-combining network (e.g., the RP3). Much of the analysis presented in this paper is applicable to other parallel algorithms in which work is dynamically shared between different processors (e.g., parallel divide-and-conquer algorithms). The concept of isoefficiency is useful in characterizing the scalability of a variety of parallel algorithms.
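The dynamic work-distribution idea can be sketched with a sequential simulation of stack splitting (a deliberately simplified scheme, not the paper's analyzed algorithms; the tree is invented).

```python
from collections import deque

def parallel_dfs_sim(root, children, n_workers=2):
    """Round-robin simulation: each worker runs depth-first on its own
    stack; an idle worker steals half of the largest stack, taking
    entries from the bottom, where unexplored subtrees tend to be big."""
    stacks = [deque([root])] + [deque() for _ in range(n_workers - 1)]
    visited = []
    while any(stacks):
        for st in stacks:
            if not st:
                donor = max(stacks, key=len)
                for _ in range(len(donor) // 2):
                    st.append(donor.popleft())   # steal from the bottom
                continue
            node = st.pop()                      # local work stays depth-first
            visited.append(node)
            st.extend(children(node))
    return visited

tree = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
print(len(parallel_dfs_sim(0, lambda n: tree[n])))   # -> 7 nodes visited
```

How much work each steal transfers, and at what communication cost, is exactly what the isoefficiency analysis quantifies per architecture.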

14.
15.
Weighted recombination is a means for improving the local search performance of evolution strategies. It aims to make effective use of the information available, without significantly increasing computational costs per time step. In this paper, the potential speed-up resulting from using rank-based weighted multirecombination is investigated. Optimal weights are computed for the infinite-dimensional sphere model, and comparisons with the performance of strategies that do not make use of weighted recombination are presented. It is seen that unlike strategies that rely on unweighted recombination and truncation selection, weighted multirecombination evolution strategies are able to improve on the serial efficiency of the (1+1)-ES on the sphere. The implications of the use of weighted recombination for noisy optimization are studied, and parallels to the use of rescaled mutations are drawn. The significance of the findings is investigated in finite-dimensional search spaces.
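One generation of rank-weighted multirecombination on the sphere function can be sketched as follows (the linear rank weights here are a simple invented scheme, not the paper's optimal weights for the infinite-dimensional sphere model).

```python
import random

def weighted_recombination_step(parent, sigma, lam, mu):
    """Sample lam offspring around the parent, rank them on the sphere
    function, and recombine the best mu with rank-based weights."""
    n = len(parent)
    offspring = [[x + sigma * random.gauss(0, 1) for x in parent]
                 for _ in range(lam)]
    offspring.sort(key=lambda y: sum(v * v for v in y))   # sphere fitness
    w = [mu - r for r in range(mu)]                       # rank-based weights
    total = sum(w)
    return [sum(w[r] * offspring[r][i] for r in range(mu)) / total
            for i in range(n)]

x = [1.0] * 10
for _ in range(200):
    x = weighted_recombination_step(x, sigma=0.1, lam=10, mu=5)
print(sum(v * v for v in x))   # far below the initial squared norm of 10.0
```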

16.
We propose a novel, exact anytime search strategy that combines iterative deepening A* (IDA*) with depth-first search, and we consider the job shop scheduling problem with makespan minimization as a test bed. The combination of these search strategies is done so that limited depth-first searches are issued from some of the states distributed along the frontier reached by IDA* in each iteration. In this way, a proper equilibrium between intensification and diversification of the search effort is achieved, while the algorithm keeps the capability of obtaining tight lower bounds. To evaluate the proposed strategy and to compare it with other methods, we have conducted an experimental study involving a number of conventional benchmarks with instances of various sizes. The results of these experiments show that the proposed algorithm takes less time than other methods in reaching optimal solutions for small and medium-size instances, and that it is quite competitive in reaching good solutions and good lower bounds for the instances that cannot be optimally solved.
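For context, here is plain textbook IDA* (the paper's hybrid additionally launches limited depth-first probes from frontier states, which is not reproduced; the demo problem and heuristic are invented).

```python
def ida_star(start, h, succ, is_goal):
    """Plain IDA*: repeated depth-first probes with an increasing bound
    on f = g + h; returns the optimal solution cost."""
    bound = h(start)
    while True:
        t = _probe(start, 0, bound, h, succ, is_goal)
        if t is True:
            return bound                 # optimal solution found at this bound
        if t == float("inf"):
            return None                  # no solution at all
        bound = t                        # smallest f-value that exceeded the bound

def _probe(node, g, bound, h, succ, is_goal):
    f = g + h(node)
    if f > bound:
        return f
    if is_goal(node):
        return True
    t_min = float("inf")
    for child, cost in succ(node):
        t = _probe(child, g + cost, bound, h, succ, is_goal)
        if t is True:
            return True
        t_min = min(t_min, t)
    return t_min

# Toy demo (invented): reach 10 from 0 with steps +1 (cost 1) or +3 (cost 2);
# h is an admissible bound on the remaining cost.
print(ida_star(0, h=lambda s: 0 if s >= 10 else -(-(10 - s) // 3),
               succ=lambda s: [(s + 1, 1), (s + 3, 2)] if s < 10 else [],
               is_goal=lambda s: s == 10))   # -> 7
```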

17.
The key issue in top-k retrieval, finding a set of k documents (from a large document collection) that can best answer a user's query, is to strike the optimal balance between relevance and diversity. In this paper, we study the top-k retrieval problem in the framework of facility location analysis and prove the submodularity of the objective function, which provides a theoretical approximation guarantee of factor 1 - 1/e for the (best-first) greedy search algorithm. Furthermore, we propose a two-stage hybrid search strategy which first obtains a high-quality initial set of top-k documents via greedy search, and then refines that result set iteratively via local search. Experiments on two large TREC benchmark datasets show that our two-stage hybrid search strategy outperforms existing approaches in both effectiveness and efficiency.
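The greedy step the 1 - 1/e guarantee applies to is standard submodular maximization; a sketch with an invented facility-location-style objective follows (the paper's actual relevance and similarity measures are not reproduced).

```python
def greedy_topk(docs, k, f):
    """Greedy maximization of a submodular set function f: repeatedly
    add the document with the largest marginal gain."""
    selected = []
    for _ in range(k):
        best = max((d for d in docs if d not in selected),
                   key=lambda d: f(selected + [d]) - f(selected))
        selected.append(best)
    return selected

# Facility-location style objective on invented relevance scores: each
# query aspect is covered by the best document selected for it.
coverage = {"d1": {"a": 0.9, "b": 0.1},
            "d2": {"a": 0.8, "b": 0.7},
            "d3": {"a": 0.2, "b": 0.8}}
f = lambda S: sum(max((coverage[d][t] for d in S), default=0.0) for t in ("a", "b"))
print(greedy_topk(list(coverage), 2, f))   # -> ['d2', 'd1'] (d1 and d3 tie here)
```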

18.
Many domains require scalable algorithms that help to determine diagnoses efficiently, often within predefined time limits. Anytime diagnosis is able to determine solutions in such a way and is thus especially useful in real-time scenarios such as production scheduling, robot control, and communication network management, where diagnosis and corresponding reconfiguration capabilities play a major role. Anytime diagnosis often entails a trade-off between diagnosis quality and the efficiency of diagnostic reasoning. In this paper we introduce and analyze FlexDiag, an anytime direct diagnosis approach. We evaluate the algorithm with regard to performance and diagnosis quality using a configuration benchmark from the domain of feature models and an industrial configuration knowledge base from the automotive domain. Results show that FlexDiag significantly increases the performance of direct diagnosis search, with corresponding quality trade-offs in terms of minimality and accuracy.

19.
The search for good lineal, or depth-first, spanning trees is an important aspect of the implementation of a wide assortment of graph algorithms. We consider the complexity of finding optimal lineal spanning trees under various notions of optimality. In particular, we show that several natural problems, such as constructing a shortest or a tallest lineal tree, are NP-hard. We also address the issue of polynomial-time near-optimization strategies for these difficult problems, showing that efficient absolute approximation algorithms cannot exist unless P = NP.
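Constructing some lineal spanning tree is easy; here is the standard recursive depth-first construction on an invented graph (it is choosing an optimal one, e.g., of minimum or maximum height, that the paper proves NP-hard).

```python
def lineal_tree(graph, root):
    """Recursive depth-first search; the parent map is a lineal spanning
    tree: every non-tree edge joins an ancestor/descendant pair."""
    parent = {root: None}
    def visit(u):
        for v in graph[u]:
            if v not in parent:
                parent[v] = u            # tree edge
                visit(v)
    visit(root)
    return parent

g = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}   # invented graph
print(lineal_tree(g, 0))   # -> {0: None, 1: 0, 2: 1, 3: 1}
```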

20.