Similar Documents
20 similar documents found.
1.
Stochastic local search (SLS) algorithms have been increasingly applied to approximate solutions of the weighted maximum satisfiability problem (MAXSAT), a model for major problems in AI and combinatorial optimization. While MAXSAT instances generally exhibit strong intrinsic dependencies between their variables, most SLS algorithms start the search from a random initial solution in which the value of each variable is generated independently from the same uniform distribution. In this paper, we propose a new SLS algorithm for MAXSAT based on an unconventional distribution known in quantum physics as the Bose-Einstein distribution. It provides a stochastic initialization scheme for an efficient and very simple heuristic inspired by the co-evolution of natural species, called Extremal Optimization (EO). This heuristic was introduced for finding high-quality solutions to hard optimization problems such as colouring and partitioning. We examine the effectiveness of the resulting algorithm through computational experiments on a large set of test instances and compare it with some of the most powerful existing algorithms. The results show that this approach is well suited to this class of problems.
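As a hedged illustration of the EO heuristic described above, the sketch below implements a tau-EO-style flip rule for weighted MAXSAT: rank variables from worst to best fitness and flip the rank-k variable with probability proportional to k^(-tau). The per-variable biased initialization is a stand-in for the paper's Bose-Einstein scheme, whose exact form is not reproduced here; tau and the fitness definition are also illustrative assumptions.

```python
import random

def eo_maxsat(clauses, weights, n_vars, tau=1.4, steps=10000):
    """tau-EO-style local search for weighted MAXSAT (illustrative sketch).

    clauses: list of clauses, each a list of non-zero ints
             (positive literal v -> variable v True, negative -> False)."""
    # Initialization: the paper draws initial values from a Bose-Einstein
    # distribution; as a stand-in we bias each variable toward True with a
    # random per-variable probability (hypothetical choice, not the paper's).
    bias = [random.random() for _ in range(n_vars + 1)]
    assign = [random.random() < bias[v] for v in range(n_vars + 1)]

    def clause_sat(c):
        return any(assign[abs(l)] == (l > 0) for l in c)

    best = None
    for _ in range(steps):
        # Fitness of a variable: weighted fraction of its clauses satisfied.
        fit = {}
        for c, w in zip(clauses, weights):
            ok = clause_sat(c)
            for lit in c:
                v = abs(lit)
                s, t = fit.get(v, (0.0, 0.0))
                fit[v] = (s + (w if ok else 0.0), t + w)
        ranked = sorted(fit, key=lambda v: fit[v][0] / fit[v][1])  # worst first
        # tau-EO: flip rank-k variable with probability proportional to k^-tau.
        probs = [(k + 1) ** -tau for k in range(len(ranked))]
        r, acc, pick = random.random() * sum(probs), 0.0, ranked[-1]
        for v, p in zip(ranked, probs):
            acc += p
            if r <= acc:
                pick = v
                break
        assign[pick] = not assign[pick]
        score = sum(w for c, w in zip(clauses, weights) if clause_sat(c))
        if best is None or score > best[0]:
            best = (score, assign[:])
    return best
```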

2.
Portfolio methods support the combination of different algorithms and heuristics, including stochastic local search (SLS) heuristics, and have been identified as a promising approach to solve computationally hard problems. While successful in experiments, theoretical foundations and analytical results for portfolio-based SLS heuristics are less developed. This article aims to improve the understanding of the role of portfolios of heuristics in SLS. We focus on the problem of computing most probable explanations (MPEs) in Bayesian networks (BNs). Algorithmically, we discuss a portfolio-based SLS algorithm for MPE computation, Stochastic Greedy Search (SGS). SGS supports the integration of different initialization operators (or initialization heuristics) and different search operators (greedy and noisy heuristics), thereby enabling new analytical and experimental results. Analytically, we introduce a novel Markov chain model tailored to portfolio-based SLS algorithms including SGS, enabling us to derive, analytically, expected hitting time results that explain empirical run times. For a specific BN, we show the benefit of using a homogeneous initialization portfolio. To further illustrate the portfolio approach, we consider novel additive search heuristics for handling determinism in the form of zero entries in conditional probability tables in BNs. Our additive approach adds rather than multiplies probabilities when computing the utility of an explanation. We motivate the additive measure by studying the dramatic impact of zero entries in conditional probability tables on the number of zero-probability explanations, which in turn complicates the search process. We consider the relationship between MAXSAT and MPE, and show that additive utility (or gain) is a generalization, to the probabilistic setting, of the MAXSAT utility (or gain) used in the celebrated GSAT and WalkSAT algorithms and their descendants. Utilizing our Markov chain framework, we show that expected hitting time is a rational function (i.e., a ratio of two polynomials) of the probability of applying an additive search operator. Experimentally, we report on synthetically generated BNs as well as BNs from applications, and compare SGS's performance to that of Hugin, which performs BN inference by compilation to and propagation in clique trees. On synthetic networks, SGS speeds up computation by approximately two orders of magnitude compared to Hugin. In application networks, our approach is highly competitive in Bayesian networks with a high degree of determinism. In addition to showing that stochastic local search can be competitive with clique tree clustering, our empirical results provide an improved understanding of the circumstances under which portfolio-based SLS outperforms clique tree clustering and vice versa.
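To see why the additive measure discussed above is robust to determinism, consider a toy explanation whose CPT lookups include a single zero entry: multiplying collapses the utility to zero, while adding preserves partial credit, just as MAXSAT gain counts satisfied clauses. A minimal illustration (not the paper's SGS code):

```python
from math import prod

# Probabilities looked up in the CPTs for one candidate explanation.
# A single zero entry (determinism) appears in the third position.
cpt_entries = [0.9, 0.8, 0.0, 0.7]

multiplicative = prod(cpt_entries)  # 0.0 -- the whole explanation collapses
additive = sum(cpt_entries)         # 2.4 -- partial credit is preserved

print(multiplicative, additive)
```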

3.
In this paper, we show how Guided Local Search (GLS) can be applied to the SAT problem and how the resulting algorithm can be naturally extended to solve the weighted MAX-SAT problem. GLS is a general, penalty-based meta-heuristic that sits on top of local search algorithms to help guide them out of local minima. GLS has been shown to be successful in solving a number of practical real-life problems, such as the traveling salesman problem, BT's workforce scheduling problem, the radio link frequency assignment problem, and the vehicle routing problem. We present empirical results of applying GLS to instances of the SAT problem from the DIMACS archive and to a small set of weighted MAX-SAT problem instances, and compare them with the results of other local search algorithms for the SAT problem.
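A minimal sketch of the GLS scheme described above, treating clauses as the penalizable features and using the standard GLS utility rule util(c) = 1/(1 + penalty(c)) at local minima. The move selection and the value of lambda here are illustrative assumptions, not the paper's settings.

```python
import random

def gls_sat(clauses, n_vars, lam=1.0, max_steps=100000):
    """Guided Local Search for SAT (sketch): penalties on unsatisfied
    clauses reshape the augmented cost to escape local minima."""
    assign = [random.random() < 0.5 for _ in range(n_vars + 1)]
    penalty = [0] * len(clauses)

    def unsat_ids():
        return [i for i, c in enumerate(clauses)
                if not any(assign[abs(l)] == (l > 0) for l in c)]

    def aug_cost():
        ids = unsat_ids()
        return len(ids) + lam * sum(penalty[i] for i in ids), ids

    for _ in range(max_steps):
        cost, ids = aug_cost()
        if not ids:
            return assign  # satisfying assignment found
        # Greedy move: flip the variable that most reduces augmented cost.
        best_v, best_c = None, cost
        for v in range(1, n_vars + 1):
            assign[v] = not assign[v]
            c2, _ = aug_cost()
            assign[v] = not assign[v]
            if c2 < best_c:
                best_v, best_c = v, c2
        if best_v is None:
            # Local minimum: penalize unsatisfied clauses of maximal utility,
            # util(c) = 1 / (1 + penalty(c)), the standard GLS rule.
            util = [(1.0 / (1 + penalty[i]), i) for i in ids]
            top = max(u for u, _ in util)
            for u, i in util:
                if u == top:
                    penalty[i] += 1
        else:
            assign[best_v] = not assign[best_v]
    return None
```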

4.
Many real-world problems, such as circuit design and planning, can be encoded as the maximum satisfiability problem (MAX-SAT). To solve MAX-SAT, many effective local search heuristic algorithms have been reported in the literature. This paper studies how useful information gathered from the search history can be used to enhance local search heuristic algorithms. For this purpose, we present an adaptive memory-based local search heuristic (denoted AMLS) for solving MAX-SAT. The AMLS algorithm uses several memory structures to define new rules for selecting the next variable to flip at each step, together with additional adaptive mechanisms and diversification strategies. The effectiveness and efficiency of AMLS are evaluated on a large range of random and structured MAX-SAT and SAT instances, many of which are derived from real-world applications. The computational results show that AMLS competes favorably, in terms of several criteria, with four state-of-the-art SAT and MAX-SAT solvers: AdaptNovelty+, AdaptG2WSATp, IRoTS and RoTS.

5.
This paper presents an application of constraint programming to production scheduling with earliness and tardiness penalties, reflecting the scheduling part of the Just-In-Time inventory strategy. Two scheduling problems are studied: an industrial case-study problem of lacquer production scheduling, and the job-shop scheduling problem with earliness/tardiness costs. The paper presents two algorithms that help the constraint programming solver find solutions to these complex problems. The first, called cost-directed initialization, performs a greedy initialization of the search tree. The second, called the time-reversing transformation and designed for lacquer production scheduling, reformulates the problem to be more easily searchable when the default search or the cost-directed initialization is used. Experiments on case-study instances and randomly generated problem instances show that our algorithms outperform generic approaches and, on average, give better results than other nontrivial algorithms.
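The earliness/tardiness objective being minimized can be stated compactly: each job pays e * max(0, d - C) for finishing early and t * max(0, C - d) for finishing late, where C is its completion time and d its due date. A minimal sketch (the tuple layout for jobs is a hypothetical choice):

```python
def et_cost(jobs):
    """Total earliness/tardiness penalty for a schedule.

    jobs: list of (completion_time, due_date, e_weight, t_weight) tuples
    (hypothetical field layout)."""
    return sum(e * max(0, d - c) + t * max(0, c - d)
               for c, d, e, t in jobs)

# Example: one early job, one tardy job, one exactly on time.
print(et_cost([(5, 8, 2, 3), (10, 7, 2, 3), (6, 6, 2, 3)]))  # 6 + 9 + 0 = 15
```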

6.
Stochastic local search (SLS) is a popular paradigm in incomplete solving of the Boolean satisfiability problem (SAT). Most SLS solvers for SAT switch between two modes: a greedy (intensification) mode and a diversification mode. However, the performance of these two-mode SLS algorithms lags far behind on the random 3-satisfiability (3-SAT) problem, a significant special case of SAT. In this paper, we propose a new hybrid scoring function called MC, based on a linear combination of a greedy property make and a diversification property ConfTimes, and then use MC to develop a new two-mode SLS solver called CCMC. To evaluate the performance of CCMC, we conduct extensive experiments comparing CCMC with five state-of-the-art two-mode SLS solvers (Sparrow2011, Sattime2011, EagleUP, gNovelty+PCL and CCASat) on a broad range of random 3-SAT instances, including all large 3-SAT instances from SAT Competition 2009 and SAT Competition 2011 as well as 200 generated satisfiable huge random 3-SAT instances. The experiments show that CCMC clearly outperforms its competitors, indicating its effectiveness. We also analyze the effectiveness of the underlying ideas in CCMC and further improve its performance on random 5-SAT instances.
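A minimal sketch of the hybrid scoring idea: combine a greedy property (make, the number of clauses a flip would newly satisfy) with a diversification property (conf_times) linearly, and flip the best-scoring candidate. The weight alpha and the exact meaning attached to conf_times are assumptions for illustration, not CCMC's actual parameterization.

```python
def mc_score(var, make, conf_times, alpha=0.5):
    """Hybrid score: greedy term (make[var]) plus diversification term
    (conf_times[var], e.g. how long since var last changed).
    The linear weight alpha is an assumed parameterization."""
    return alpha * make[var] + (1 - alpha) * conf_times[var]

def pick_variable(candidates, make, conf_times):
    # Flip the candidate with the highest hybrid score.
    return max(candidates, key=lambda v: mc_score(v, make, conf_times))
```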

7.
Stochastic local search (SLS) algorithms have recently been proven to be among the best approaches to solving computationally hard problems. SLS algorithms typically have a number of parameters, optimized empirically, that characterize and determine their performance. In this article, we focus on the noise parameter. The theoretical foundation of SLS, including an understanding of how the optimal noise varies with problem difficulty, lags behind the strong empirical results obtained using these algorithms, and a purely empirical approach to understanding and optimizing SLS noise across problem instances can be very computationally intensive. To complement existing experimental results, we formulate and analyze several Markov chain models of SLS. In particular, we compute expected hitting times and show that they are rational functions for individual problem instances as well as for their mixtures. Expected hitting time curves are analytical counterparts to the noise response curves reported in the experimental literature. Hitting time analysis using polynomials and convex functions is also discussed. In addition, we present examples and experimental results illustrating the impact of varying noise probability on SLS run time. In experiments computing most probable explanations in Bayesian networks, we use synthetic problem instances as well as problem instances from applications. We believe that our results provide an improved theoretical understanding of the role of noise in stochastic local search, providing a foundation for further progress in this area.
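As a worked illustration of the Markov chain analysis, the sketch below builds a toy three-state SLS chain whose transitions depend on a noise probability p and computes the expected hitting time of the goal state via the standard fundamental-matrix identity h = (I - Q)^(-1) 1. The chain's structure is an assumption chosen for brevity, not a model from the article.

```python
import numpy as np

def expected_hitting_time(p):
    """Expected steps to reach the goal state of a toy 3-state SLS chain,
    as a function of the noise probability p (assumed transition structure).

    States: 0 = far, 1 = near, 2 = goal (absorbing)."""
    P = np.array([
        [p,   1 - p, 0.0],    # noise keeps us far, greed moves closer
        [p,   0.0,   1 - p],  # noise slips back, greed hits the goal
        [0.0, 0.0,   1.0],
    ])
    Q = P[:2, :2]  # transitions among transient states only
    h = np.linalg.solve(np.eye(2) - Q, np.ones(2))
    return h[0]    # hitting time from the start state

# The curve h(p) is a ratio of polynomials in p, as the article proves.
for p in (0.1, 0.3, 0.5, 0.7):
    print(p, expected_hitting_time(p))
```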

8.
Stochastic local search (SLS) algorithms are typically composed of a number of different components, each of which should contribute significantly to the final algorithm's performance. If the goal is to design and engineer effective SLS algorithms, the algorithm developer requires some insight into the importance and the behavior of possible algorithmic components. In this paper, we analyze algorithmic components of SLS algorithms for the multiobjective travelling salesman problem. The analysis is done using a careful experimental design for a generic class of SLS algorithms for multiobjective combinatorial optimization. Based on the insights gained, we engineer SLS algorithms for this problem. Experimental results show that these SLS algorithms, despite their conceptual simplicity, outperform a well-known memetic algorithm for a range of benchmark instances with two and three objectives.

9.
The traditional approach to computational problem solving is to use one of the available algorithms to obtain solutions for all given instances of a problem. However, typically not all instances are the same, nor does a single algorithm perform best on all of them. Our work investigates a more sophisticated approach, called Recursive Algorithm Selection, whereby several algorithms for a problem (including some recursive ones) are available to an agent that makes an informed decision about which algorithm to select for each sub-instance arising at each recursive call made while solving an instance. Reinforcement learning methods are used to learn decision policies that optimize any given performance criterion (time, memory, or a combination thereof) from actual execution and profiling experience. This paper focuses on the well-known problem of state-space heuristic search and combines the A* and RBFS algorithms into a hybrid search algorithm, whose decision policy is learned using the Least-Squares Policy Iteration (LSPI) algorithm. Our benchmark domain involves shortest-path problems on a real-world dataset encoding the entire street network of the District of Columbia (DC), USA. The derived hybrid algorithm outperforms the individual algorithms in the majority of cases under a variety of performance criteria balancing time and memory. The proposed methodology is generic, can be applied to a variety of other problems, and requires no prior knowledge about the individual algorithms used or the properties of the underlying problem instances being solved.

10.
In this paper, we propose a memetic algorithm for the optimal winner determination problem in combinatorial auctions. First, we investigate a new selection strategy based on both fitness and diversity to choose individuals to participate in the reproduction phase of the memetic algorithm. The resulting algorithm is enhanced by using a stochastic local search (SLS) component combined with a specific crossover operator. This operator is used to identify promising search regions while the stochastic local search performs an intensified search of solutions around these regions. Experiments on various realistic instances of the considered problem are performed to show and compare the effectiveness of our approach.
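A hedged sketch of fitness-plus-diversity parent selection, with diversity measured as mean Hamming distance to the rest of the population; the linear rank combination and weight w are illustrative assumptions, not the paper's exact strategy.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def select_parents(pop, fitness, k=2, w=0.5):
    """Select k parents by a combined fitness/diversity score (sketch).
    Diversity = mean Hamming distance to the rest of the population."""
    div = [sum(hamming(ind, other) for other in pop) / max(1, len(pop) - 1)
           for ind in pop]
    # Rank both criteria (higher rank = better) and combine linearly.
    f_rank = {i: r for r, i in enumerate(
        sorted(range(len(pop)), key=lambda i: fitness[i]))}
    d_rank = {i: r for r, i in enumerate(
        sorted(range(len(pop)), key=lambda i: div[i]))}
    score = lambda i: w * f_rank[i] + (1 - w) * d_rank[i]
    chosen = sorted(range(len(pop)), key=score, reverse=True)[:k]
    return [pop[i] for i in chosen]

# Example: four binary individuals with their fitness values.
pop = [[1, 1, 0], [1, 1, 1], [0, 0, 0], [1, 0, 1]]
print(select_parents(pop, fitness=[2, 3, 0, 2]))
```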

11.
AND/OR search spaces accommodate advanced algorithmic schemes for graphical models which can exploit the structure of the model. We extend and evaluate the depth-first and best-first AND/OR search algorithms to solving 0-1 Integer Linear Programs (0-1 ILP) within this framework. We also include a class of dynamic variable ordering heuristics while exploring an AND/OR search tree for 0-1 ILPs. We demonstrate the effectiveness of these search algorithms on a variety of benchmarks, including real-world combinatorial auctions, random uncapacitated warehouse location problems and MAX-SAT instances.

12.
The important task of correcting label noise is addressed infrequently in the literature, largely because developing a robust label correction algorithm is difficult. To address this gap, we propose two algorithms to correct label noise. The first, Self-Training Correction (STC), uses self-training to re-label noisy instances. The second, Cluster-based Correction (CC), groups instances together to infer their ground-truth labels. We also adapt an algorithm from previous work, a consensus-based method called Polishing that consults an ensemble of classifiers to change the values of attributes and labels; we simplify it so that it only alters the labels of instances, and call it Polishing Labels (PL). We experimentally compare our novel methods with PL by examining their improvements on label quality, model quality, and AUC for binary and multi-class data sets under different noise levels. Our results demonstrate that CC consistently and significantly improves label quality, model quality, and AUC. We further investigate how these three noise correction algorithms improve data quality, in terms of label accuracy, in the context of image labeling in crowdsourcing. First, we examine three consensus methods for inferring a ground-truth label from the multiple noisy labels obtained from crowdsourcing: Majority Voting (MV), Dawid-Skene (DS), and KOS. We then apply the three noise correction methods to correct labels inferred by these consensus methods. Our results show that the noise correction methods improve labeling quality significantly, and that CC performs best overall. This research illustrates the viability of noise correction as another line of defense against labeling error, especially in a crowdsourcing setting, and shows the feasibility of automating the otherwise manual, expensive, and time-consuming process of analyzing, correcting, and cleaning a data set.
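A minimal sketch of the cluster-based correction idea, assuming k-means as the clustering step (the paper does not necessarily use k-means): members of each cluster are re-labeled with the cluster's majority label.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_correct(X, noisy_labels, n_clusters):
    """Cluster-based label correction in the spirit of CC (sketch):
    group instances, then relabel every member of a cluster with the
    cluster's majority label."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    corrected = np.array(noisy_labels).copy()
    for c in range(n_clusters):
        members = np.where(clusters == c)[0]
        if len(members) == 0:
            continue
        values, counts = np.unique(corrected[members], return_counts=True)
        corrected[members] = values[np.argmax(counts)]  # majority label
    return corrected

# Example: two well-separated groups, one flipped label in the first group.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
print(cluster_correct(X, ['a', 'a', 'b', 'b', 'b', 'b'], n_clusters=2))
```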

13.
Instance-Based Learning Algorithms
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
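The storage-reduction idea can be sketched in a few lines: keep an instance only when the instances stored so far misclassify it (an IB2-style rule; details such as the significance test for noisy instances are omitted here).

```python
def ib2_fit(stream, distance):
    """IB2-style storage reduction (sketch): store an instance only if the
    current memory misclassifies it. `stream` yields (x, label) pairs."""
    memory = []
    for x, y in stream:
        if memory:
            # 1-nearest-neighbour prediction from stored instances.
            _, pred = min(memory, key=lambda m: distance(m[0], x))
            if pred == y:
                continue  # correctly classified -> do not store
        memory.append((x, y))
    return memory

# Example with 1-D points and absolute distance: only two instances kept.
data = [(0.1, 'a'), (0.2, 'a'), (0.9, 'b'), (0.85, 'b'), (0.15, 'a')]
print(ib2_fit(data, lambda u, v: abs(u - v)))  # [(0.1, 'a'), (0.9, 'b')]
```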

14.
We examine the parallel execution of a class of stochastic algorithms called Markov chain Monte-Carlo (MCMC) algorithms. We focus on MCMC algorithms in the context of image processing, using Markov random field models. Our parallelisation approach is based on several concurrently running instances of the same stochastic algorithm, each dealing with the whole data set. First, we show that the speed-up of the parallel algorithm is limited by the statistical properties of the MCMC algorithm, and we examine coupled MCMC as a remedy for this problem. Second, we exploit the parallel execution to monitor the convergence of the stochastic algorithms in a statistically reliable manner. This new convergence measure for MCMC algorithms performs well and improves on known convergence measures. We also link our findings with recent work in the statistical theory of MCMC.
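As a hedged illustration of monitoring convergence across concurrently running chains, the sketch below computes the standard Gelman-Rubin potential scale reduction factor; note the paper develops its own coupled-MCMC measure, which this does not reproduce.

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin diagnostic over m parallel chains of a scalar statistic.

    chains: array of shape (m, n) -- m chains, n samples each."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()  # within-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)            # close to 1 once chains have mixed

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1000))  # four chains sampling the same target
print(gelman_rubin(mixed))          # approximately 1.0
```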

15.
Michela Milano, Alessio Guerri. Software, 2009, 39(13): 1127-1155.
In combinatorial auctions bidders can post bids on groups of items. The problem of selecting the winning bids, called the Winner Determination Problem, is NP-hard. In this paper, we consider an interesting variant of this problem, called the Bid Evaluation Problem (BEP), where items are services subject to precedence constraints and temporal windows. The BEP has many practical applications, such as transportation route auctions and take-off and landing time-slot allocation problems. We have developed a number of optimization algorithms based on Constraint Programming, on Integer Programming, and on the combination of the two techniques. We first show that all proposed algorithms outperform the only commercial system for solving BEP instances, the Multi AGent Negotiation Testbed, a more general tool for agent negotiation. Then, we evaluate the developed algorithms and use decision tree machine learning to find a relation between instance structure and solving algorithm, providing an automatic algorithm selection procedure. We show that we can achieve an accuracy of 90% in predicting the best algorithm for a given instance, with a significant time saving w.r.t. a single solving technique or a costless, but less accurate, prediction technique.
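The algorithm selection step can be sketched with an off-the-shelf decision tree: train on per-instance structural features labeled with the fastest solver, then predict for new instances. The feature and label values below are hypothetical placeholders, not the paper's data.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: one row of structural features per BEP
# instance (e.g. number of bids, constraint density); the label is
# whichever solver -- CP, IP, or hybrid -- was fastest on that instance.
X = [[120, 0.10], [500, 0.45], [80, 0.05], [450, 0.50], [300, 0.30]]
y = ['CP', 'IP', 'CP', 'IP', 'hybrid']

selector = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(selector.predict([[400, 0.40]]))  # pick a solver for a new instance
```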

16.
A New Solution Path Algorithm in Support Vector Regression
Regularization path algorithms were proposed as a novel approach to the model selection problem: they efficiently explore the path of (possibly all) solutions with respect to some regularization hyperparameter. This approach was later extended to a support vector regression (SVR) model called ε-SVR. However, the method requires that the error parameter ε be set a priori, which is only possible if the desired accuracy of the approximation can be specified in advance. In this paper, we analyze the solution space of ε-SVR and propose a new solution path algorithm, called the ε-path algorithm, which traces the solution path with respect to the hyperparameter ε rather than λ. Although both solution path algorithms possess the desirable piecewise-linearity property, our ε-path algorithm overcomes some limitations of the original λ-path algorithm and has more advantages, making it more appealing for practical use.
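The role of ε is easiest to see from the ε-insensitive loss that defines ε-SVR: residuals inside the ε-tube cost nothing, so the choice of ε directly shapes the fitted solution. A small numeric illustration (not the path algorithm itself):

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps):
    """epsilon-insensitive loss used by epsilon-SVR: residuals inside
    the epsilon-tube incur zero cost."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

y = np.array([1.0, 2.0, 3.0])
f = np.array([1.05, 2.5, 2.0])
print(eps_insensitive_loss(y, f, eps=0.1))  # [0.  0.4 0.9]
```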

17.
Solving stochastic integer programs (SIPs) is generally difficult. This paper presents a comparative study of stage- and scenario-wise Fenchel decomposition (FD) for two-stage SIPs with special structure. The standard FD approach is based on stage-wise (Benders') decomposition; this work derives a scenario-wise FD method based on decomposing the SIP by scenario and performs a computational study of the two approaches. In particular, two algorithms are studied: the stage-wise FD (ST-FD) and scenario-wise FD (SC-FD) algorithms. Both use FD cuts generated from the scenario subproblems under each decomposition setting to iteratively recover (partially) the convex hull of integer points in the neighborhood of the optimal solution. The L-shaped method is used to solve the LP relaxation of the SIP in the ST-FD algorithm, while the progressive hedging algorithm (PHA) is used in the SC-FD algorithm. Computational results on knapsack test instances demonstrate the viability of both approaches for solving large instances in a reasonable amount of time, outperforming a direct solver in most cases. Overall, the ST-FD algorithm provides the best performance in our experiments.

18.
Over the last two decades, many sophisticated evolutionary algorithms have been introduced for solving constrained optimization problems (COPs). Due to the variability of characteristics across different COPs, no single algorithm performs consistently over a range of problems. In this paper, for better coverage of the problem characteristics, we introduce an algorithm framework that uses multiple search operators in each generation, with the appropriate mix of operators for any given problem determined adaptively. The framework is tested by implementing two different algorithms, whose performance is judged by solving 60 test instances taken from two constrained optimization benchmark sets in the specialized literature. The first algorithm, a multi-operator genetic algorithm (GA), shows a significant improvement over different versions of GA (each with a single one of these operators). The second algorithm, using differential evolution (DE), also confirms the benefit of the multi-operator approach by providing better and more consistent solutions. Overall, both the GA- and DE-based algorithms show competitive, if not better, performance compared to state-of-the-art algorithms.
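A hedged sketch of an adaptive operator-selection loop: each operator's selection probability tracks its recent average fitness gain (probability matching). The credit window and update rule are illustrative assumptions, not the paper's mechanism, and constraint handling is omitted.

```python
import random

def adaptive_multi_operator(evaluate, operators, seed_sol, iters=200, window=20):
    """Probability-matching operator selection (illustrative sketch).
    Each operator's selection weight tracks its recent average gain."""
    credit = {i: [1.0] for i in range(len(operators))}  # optimistic start
    best, best_f = seed_sol, evaluate(seed_sol)
    for _ in range(iters):
        weights = [max(1e-6, sum(c) / len(c)) for c in credit.values()]
        i = random.choices(range(len(operators)), weights=weights)[0]
        cand = operators[i](best)
        f = evaluate(cand)
        credit[i] = (credit[i] + [max(0.0, f - best_f)])[-window:]
        if f >= best_f:
            best, best_f = cand, f
    return best, best_f

# Toy use: maximize the number of ones with two mutation operators.
def flip_one(s):
    j = random.randrange(len(s))
    return [b ^ (k == j) for k, b in enumerate(s)]

def flip_many(s):
    return [b ^ (random.random() < 2 / len(s)) for b in s]

print(adaptive_multi_operator(sum, [flip_one, flip_many], [0] * 30))
```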

19.
In multi-robot path-planning problems, robots must move from their start locations to their goal locations while avoiding collisions with each other. The research problem we address is to find a complete solution for the multi-robot path-planning problem. Our first contribution is to recognize the solvable instances of the problem with our solvability test; a theoretical analysis establishes the validity of this test. Our second contribution is to solve the problem completely, in polynomial time, with the Push and Spin (PASp) algorithm. Within the complete solution we identified decisions that can improve the algorithm's performance, so our third contribution is to improve performance by selecting the best path from the set of complete paths; we refer to this version as the improved PASp algorithm. In terms of completeness, mathematical proofs demonstrate that PASp is complete for a wider class of problem instances than those solved by the Push and Swap (PAS), Push and Rotate (PAR), Bibox, or tractable multi-robot path-planning (MAPP) algorithms; moreover, PASp solves any graph recognized as solvable, without further assumptions. The theoretical analysis of PASp also shows competitive polynomial performance in terms of total path lengths and execution time. In our performance evaluation, experimental results show that PASp performs competitively, in reasonable execution time, in terms of number of moves compared to the PAS, PAR, Bibox and MAPP algorithms on a set of benchmark problems from the video-game industry. The results also show the scalability and robustness of PASp on problems that only PASp can solve; such problems require a high level of coordination with an efficient number of moves and short execution time. On grids and bi-connected graphs with many cycles, PASp required more moves and more time than the PAS, PAR and Bibox algorithms; however, PASp is the only algorithm capable of solving such instances with only one unoccupied vertex. Furthermore, adding heuristic search and a smoothing operation to the improved PASp yielded a significant further improvement, reducing the number of moves for all problem instances; PASp produced the best plans at a slightly higher runtime. Finally, the PASp algorithm solves a wider class of problems and behaves more completely in very complex, crowded environments than other state-of-the-art algorithms. Additionally, the Spin operation introduces a novel swapping technique to exchange two items and restore others in a graph, which is useful for industrial applications.

20.
Serial search algorithms often exhibit exponential run times and may require an exponential amount of storage as well. Thus, the design of parallel search algorithms with limited memory is of obvious interest. This paper presents an efficient SIMD parallel algorithm called IDPS (iterative-deepening parallel search). At a broad level, IDPS is a parallel version of IDA*. While we refer to the algorithm generically as IDPS, we studied the performance of four variants through experiments on the well-known test-bed problem for search algorithms, the Fifteen Puzzle. Data were gathered under two different static load-balancing schemes. Under the first scheme, an unnormalized average efficiency of approximately 3/4 was obtained for 4K, 8K, and 16K processors. Under the second scheme, unnormalized average efficiencies of 0.92 and 0.76, and normalized average efficiencies of 0.70 and 0.63, were obtained for 8K and 16K processors, respectively. We show (as shown previously only for MIMD machines) that for admissible search, high average speedup can be obtained for problems of significant size. We believe that this research will enhance AI problem solving using parallel heuristic search algorithms.
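For reference, the serial IDA* core that IDPS parallelizes: repeated depth-first searches bounded by an increasing f = g + h threshold, keeping memory linear in the solution depth (a standard textbook sketch, not the SIMD implementation).

```python
def ida_star(start, goal, neighbors, h):
    """Serial IDA*: depth-first searches bounded by a growing f-threshold.

    neighbors(n) yields (successor, step_cost) pairs; h is an admissible
    heuristic. Returns the solution path or None."""
    bound = h(start)
    path = [start]

    def dfs(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f           # report the smallest exceeded f-value
        if node == goal:
            return True
        nxt = float('inf')
        for s, cost in neighbors(node):
            if s in path:      # avoid cycles on the current path
                continue
            path.append(s)
            t = dfs(g + cost, bound)
            if t is True:
                return True
            nxt = min(nxt, t)
            path.pop()
        return nxt

    while True:
        t = dfs(0, bound)
        if t is True:
            return path
        if t == float('inf'):
            return None        # no solution exists
        bound = t              # next iteration uses the new threshold

# Tiny example: shortest path on a line graph 0-1-2-3 with unit costs.
nbrs = lambda v: [(v - 1, 1)] * (v > 0) + [(v + 1, 1)] * (v < 3)
print(ida_star(0, 3, nbrs, lambda v: 3 - v))  # [0, 1, 2, 3]
```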
