Similar Literature
20 similar documents found.
1.
Noisy optimization is the optimization of objective functions corrupted by noise. A portfolio of solvers is a set of solvers equipped with an algorithm selection tool for distributing the computational power among them. Portfolios are widely and successfully used in combinatorial optimization. In this work, we study portfolios of noisy optimization solvers. We obtain mathematically proven performance guarantees (in the sense that the portfolio performs nearly as well as the best of its solvers) with an ad hoc portfolio algorithm dedicated to noisy optimization. A somewhat surprising result is that it is better to compare solvers with some lag, i.e., to base the current recommendation of the best solver on performance earlier in the run. An additional finding is a principled method for distributing the computational power among the solvers in the portfolio.
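A minimal sketch of the lag-based portfolio idea, with toy components throughout: a noisy sphere objective, a trivial random-search solver, and a uniform budget split. None of these names or choices come from the paper; they only illustrate judging each solver by the recommendation it made some steps ago, re-evaluated with fresh noise.

```python
import random

def noisy_sphere(x):
    """Toy noisy objective: sphere function plus Gaussian noise."""
    return sum(v * v for v in x) + random.gauss(0, 0.1)

class RandomSearchSolver:
    """Trivial stand-in solver that keeps the best point seen so far."""
    def __init__(self, dim=2, step=1.0):
        self.best = [random.uniform(-2, 2) for _ in range(dim)]
        self.step = step
        self.recs = []  # recommendation history, one entry per evaluation
    def run_one_step(self):
        cand = [v + random.gauss(0, self.step) for v in self.best]
        if noisy_sphere(cand) < noisy_sphere(self.best):
            self.best = cand
        self.recs.append(list(self.best))

def portfolio_recommend(solvers, budget=3000, lag=100, resample=10):
    """Distribute the budget uniformly, then judge each solver by the
    recommendation it made `lag` steps ago, averaged over fresh noisy
    re-evaluations to reduce comparison noise."""
    for t in range(budget):
        solvers[t % len(solvers)].run_one_step()
    def lagged_score(s):
        past = s.recs[max(0, len(s.recs) - 1 - lag)]
        return sum(noisy_sphere(past) for _ in range(resample)) / resample
    return min(solvers, key=lagged_score).recs[-1]

print(portfolio_recommend([RandomSearchSolver(step=s) for s in (0.1, 0.5, 1.0)]))
```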

2.
We analyze and compare two solvers for Boolean optimization problems: WMaxSatz, a solver for Partial MaxSAT, and MinSatz, a solver for Partial MinSAT. MaxSAT and MinSAT are similar problems, but previous results indicate that the two solvers perform quite differently in some cases. To gain insight into these performance differences, we analyze the solvers' behaviour on 2SAT-MaxOnes problem instances, given that 2SAT-MaxOnes is probably the simplest, yet still NP-hard, optimization problem we can solve with them. The analysis is based first on the study of the bounds computed by both algorithms on particular 2SAT-MaxOnes instances characterized by the presence of certain structures. We find that the fraction of positive literals in the clauses is an important factor in the quality of the bounds computed by the algorithms. We then study the importance of this factor for the typical-case complexity of Random-p 2SAT-MaxOnes, a variant of the problem where instances are randomly generated with a probability p of positive literals in the clauses. For the case p=0, the performance results indicate a clear advantage of MinSatz over WMaxSatz, but for positive values of p, WMaxSatz starts to show better performance, even though the typical complexity of Random-p 2SAT-MaxOnes decreases as p increases. We also study the typical value of the bound computed by the two algorithms on these instance sets, showing that the behaviour is consistent with our analysis of the bounds computed on the particular instances we studied first.
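To make the instance family concrete, here is a small generator and evaluator for Random-p 2SAT-MaxOnes under one plausible reading of the model, where each literal is made positive with probability p independently; this is an illustration, not the authors' generator.

```python
import random

def random_p_2sat_maxones(n_vars, n_clauses, p, seed=None):
    """Random 2-CNF over n_vars variables; each literal is positive with
    probability p. Literals are +v / -v for v in 1..n_vars."""
    rng = random.Random(seed)
    lit = lambda v: v if rng.random() < p else -v
    return [tuple(lit(v) for v in rng.sample(range(1, n_vars + 1), 2))
            for _ in range(n_clauses)]

def maxones_value(clauses, assignment):
    """Number of True variables if `assignment` (dict var -> bool)
    satisfies every clause, else None (infeasible)."""
    sat = lambda l: assignment[abs(l)] == (l > 0)
    if all(sat(a) or sat(b) for a, b in clauses):
        return sum(assignment.values())
    return None

cnf = random_p_2sat_maxones(6, 8, p=0.3, seed=7)
print(cnf, maxones_value(cnf, {v: True for v in range(1, 7)}))
```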

3.
Nowadays, many real-world problems are encoded into SAT instances and efficiently solved by modern SAT solvers. These solvers, usually known as Conflict-Driven Clause Learning (CDCL) SAT solvers, include a variety of sophisticated techniques, such as clause learning, lazy data structures, conflict-based adaptive branching heuristics, and random restarts, among others. However, the reasons for their efficiency in solving real-world, or industrial, SAT instances are still unknown. The common wisdom in the SAT community is that these techniques exploit some hidden structure of real-world problems.

In this thesis, we characterize some important features of the underlying structure of industrial SAT instances, namely the community structure and the self-similar structure. We observe that most industrial SAT formulas, viewed as graphs, have these two properties. This means that (i) in a graph with a clear community structure, i.e., high modularity, we can find a partition of its nodes into communities such that most edges connect nodes of the same community; and (ii) in a graph with a self-similar pattern, i.e., a fractal one, the shape is preserved under re-scalings, i.e., under grouping sets of nodes into single nodes. We also analyze how these structures are affected by CDCL techniques during the search.

Building on these structural studies, we propose three applications. First, we address the problem of generating pseudo-industrial random SAT instances using the notion of modularity. Our model generates instances similar to (classical) random SAT formulas when the modularity is low, but when this value is high, our model is also adequate for modeling realistic pseudo-industrial problems. Second, we propose a method based on the community structure of the instance to detect relevant learnt clauses. Our technique augments the original instance with this set of relevant clauses, which results in an overall improvement in the efficiency of several state-of-the-art CDCL SAT solvers. Finally, we analyze the classification of industrial SAT instances into families using the previously analyzed structural features, and we compare it to other classifiers commonly used in portfolio SAT approaches.

In summary, this dissertation extends the understanding of the structure of SAT instances, with the aim of better explaining the success of CDCL techniques and possibly improving them, and proposes a number of applications based on this analysis of the underlying structure of SAT formulas.
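One concrete way to inspect the community structure described above: build the variable incidence graph of a CNF formula (one node per variable, one edge per co-occurrence in a clause) and measure modularity with networkx. The graph model and the tiny formula are illustrative; the thesis itself evaluates large industrial benchmarks.

```python
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def variable_incidence_graph(clauses):
    """One node per variable; an edge between every pair of variables
    that appear together in some clause."""
    g = nx.Graph()
    for clause in clauses:
        variables = sorted({abs(l) for l in clause})
        g.add_nodes_from(variables)
        g.add_edges_from(itertools.combinations(variables, 2))
    return g

# Two densely linked variable groups with one bridge -> clear communities.
cnf = [(1, 2), (-2, 3), (1, -3), (4, 5), (-4, 6), (5, 6), (3, -4)]
vig = variable_incidence_graph(cnf)
parts = greedy_modularity_communities(vig)
print("communities:", [sorted(c) for c in parts])
print("modularity Q =", round(modularity(vig, parts), 3))
```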

4.
We study two natural extensions of Constraint Satisfaction Problems (CSPs). Balance-Max-CSP requires that in any feasible assignment each element in the domain is used an equal number of times. An instance of Hard-Max-CSP consists of soft constraints and hard constraints, and the goal is to maximize the weight of satisfied soft constraints while satisfying all the hard constraints. These two extensions contain many fundamental problems not captured by CSPs, and challenge traditional theories about CSPs in a more general framework. Max-2-SAT and Max-Horn-SAT are the only two nontrivial classes of Boolean CSPs that admit a robust satisfiability algorithm, i.e., an algorithm that finds an assignment satisfying at least a (1 − g(ε)) fraction of constraints given a (1 − ε)-satisfiable instance, where g(ε) → 0 as ε → 0, and g(0) = 0. We prove the inapproximability of these problems with balance or hard constraints, showing that each variant changes the nature of the problems significantly (in different ways). For instance, deciding whether an instance of 2-SAT admits a balanced assignment is NP-hard, and for Max-2-SAT with hard constraints, it is hard to find a constant-factor approximation even on (1 − ε)-satisfiable instances (in particular, the version with hard constraints does not admit a robust satisfiability algorithm). We also study hardness results for a certain CSP over a larger domain capturing ordering constraints: we show that hard constraints rule out constant-factor approximation algorithms. All our hardness results are almost optimal: they either completely rule out algorithms with certain properties, or can be matched by simple extensions of existing algorithms.
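To fix intuition for the Boolean case, the sketch below decides by brute force whether a 2-CNF has a balanced satisfying assignment (exactly half the variables True); the exponential enumeration is consistent with the NP-hardness statement above and is only usable at toy sizes.

```python
from itertools import combinations

def has_balanced_model(n_vars, clauses):
    """Brute-force check: does some assignment with exactly n_vars/2
    variables True satisfy every 2-clause? Exponential in n_vars."""
    assert n_vars % 2 == 0
    sat = lambda l, a: a[abs(l)] == (l > 0)
    for true_set in combinations(range(1, n_vars + 1), n_vars // 2):
        a = {v: v in true_set for v in range(1, n_vars + 1)}
        if all(sat(x, a) or sat(y, a) for x, y in clauses):
            return True
    return False

print(has_balanced_model(4, [(1, 2), (-1, -2), (3, -4)]))  # True: {1, 3} works
```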

5.
MAX–MIN Ant System
Ant System, the first Ant Colony Optimization algorithm, was shown to be a viable method for attacking hard combinatorial optimization problems. Yet its performance, when compared to more fine-tuned algorithms, was rather poor for large instances of traditional benchmark problems like the Traveling Salesman Problem. To show that Ant Colony Optimization algorithms can be good alternatives to existing algorithms for hard combinatorial optimization problems, recent research in this area has mainly focused on developing algorithmic variants that achieve better performance than Ant System.

In this paper, we present MAX–MIN Ant System (MMAS), an Ant Colony Optimization algorithm derived from Ant System. MMAS differs from Ant System in several important aspects, whose usefulness we demonstrate by means of an experimental study. Additionally, we relate one of the characteristics specific to MMAS, its use of a greedier search than Ant System, to results from the search space analysis of the combinatorial optimization problems attacked in this paper. Our computational results on the Traveling Salesman Problem and the Quadratic Assignment Problem show that MMAS is currently among the best performing algorithms for these problems.
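The component most specific to MMAS is its bounded pheromone update: only the best ant deposits, and all trails are clamped into [tau_min, tau_max] to avoid premature convergence. A minimal sketch of that update for a symmetric TSP follows; parameter values are illustrative, not the tuned settings from the paper.

```python
def mmas_pheromone_update(pheromone, best_tour, best_length,
                          rho=0.02, tau_min=0.01, tau_max=1.0):
    """One MAX-MIN Ant System update on a dict of undirected edges
    (i, j) with i < j: evaporate, reinforce the best tour only, clamp."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)          # evaporation everywhere
    deposit = 1.0 / best_length                 # best-ant reinforcement
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        edge = (min(i, j), max(i, j))
        pheromone[edge] = pheromone.get(edge, tau_min) + deposit
    for edge in pheromone:                      # the MAX-MIN trail limits
        pheromone[edge] = min(tau_max, max(tau_min, pheromone[edge]))

trails = {(i, j): 0.5 for i in range(5) for j in range(i + 1, 5)}
mmas_pheromone_update(trails, best_tour=[0, 2, 4, 1, 3], best_length=23.7)
print(trails[(0, 2)], trails[(0, 1)])           # reinforced vs. only evaporated
```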

6.
Symmetric block ciphers, such as the Advanced Encryption Standard (AES), are deterministic algorithms that transform plaintexts into ciphertexts using a secret key. These ciphers are designed so that it is computationally very difficult to recover the secret key if the attacker is given only pairs of plaintexts and ciphertexts. Constraint solvers have recently been suggested as a way of recovering the secret keys of symmetric block ciphers. To carry out such an attack, the attacker provides the solver with a set of equations describing the mathematical relationship between a known plaintext and a known ciphertext, and then attempts to solve for the unknown secret key. This approach is known to be intractable against AES unless side-channel data – information leaked from the cryptographic device due to its internal physical structure – is introduced into the equation set. A significant challenge in writing equations representing side-channel data is measurement noise. In this work we show how casting the problem as a pseudo-Boolean optimization instance provides an efficient and effective way of tolerating this noise. We describe a theoretical analysis connecting the measurement signal-to-noise ratio and the tolerable set size of a non-optimizing solver with the success probability. We then conduct an extensive performance evaluation, comparing two optimizing variants for dealing with measurement noise to a non-optimizing method. Our best optimizing method provides a successful attack on the AES cipher that requires surprisingly little side-channel data and runs in reasonable computation time. We also make available a set of AES cryptanalysis instances and provide some practical feedback on our experience of using open-source constraint solvers.
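A toy illustration of why optimization tolerates measurement noise where hard constraints fail: recover a single key byte from noisy Hamming-weight leakage by minimizing total measurement error. The S-box here is a stand-in bijection, not the real AES S-box, and the single-byte model is far simpler than the paper's full AES equation set; only the noise-as-objective idea carries over.

```python
import random

SBOX = [(x * 167 + 89) % 256 for x in range(256)]   # toy bijection, NOT AES's S-box
hamming_weight = lambda b: bin(b).count("1")

def recover_key_byte(plaintexts, noisy_hws):
    """Return the key-byte guess minimizing the total absolute mismatch
    between predicted and measured Hamming weights. A noisy measurement
    only raises the objective slightly, rather than making the whole
    constraint system unsatisfiable."""
    def err(guess):
        return sum(abs(hamming_weight(SBOX[pt ^ guess]) - m)
                   for pt, m in zip(plaintexts, noisy_hws))
    return min(range(256), key=err)

key = 0x3A
pts = [random.randrange(256) for _ in range(50)]
measured = [hamming_weight(SBOX[p ^ key]) + random.choice((-1, 0, 0, 1))
            for p in pts]                            # leakage with occasional noise
print(hex(recover_key_byte(pts, measured)))          # almost always 0x3a
```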

7.
The Prize-collecting Steiner Tree Problem (PCSTP) is a well-known problem in graph theory and combinatorial optimization. It has been successfully applied to real problems such as the design of fiber-optic and gas distribution networks. In this work, we concentrate on its application in biology to perform a functional analysis of genes. It is common in genomics to analyze large networks to infer hidden knowledge. Because the PCSTP is NP-hard, it is computationally costly, if feasible at all, to obtain exact solutions for such huge instances. There is therefore a need for fast and efficient matheuristic algorithms to explore and understand the concealed information in huge biological graphs. In this study, we propose a matheuristic method based on a clustering algorithm. The main goal of the method is to scale up the applicability of the currently available exact methods to large graph instances without losing too much solution quality. The proposed matheuristic is composed of a preprocessing procedure, a heuristic clustering algorithm, and an exact solver for the PCSTP applied to subgraphs. We examine the performance of the proposed method on real-world benchmark instances from biology, and compare its results with those of the exact solver alone, without the heuristic clustering. We obtain solutions in shorter execution times and with negligible optimality gaps. This enables analyzing very large biological networks with the currently available exact solvers.

8.
Rapid advances in image acquisition and storage technology underline the need for real-time algorithms that are capable of solving large-scale image processing and computer-vision problems. The minimum s-t cut problem, a classical combinatorial optimization problem, is a prominent building block in many vision and imaging algorithms such as video segmentation, co-segmentation, stereo vision, multi-view reconstruction, and surface fitting, to name a few. That is why finding a real-time algorithm that optimally solves this problem is of great importance. In this paper, we introduce to computer vision Hochbaum's pseudoflow (HPF) algorithm, which optimally solves the minimum s-t cut problem. We compare the performance of HPF, in terms of execution time and memory utilization, with three leading published algorithms: (1) Goldberg and Tarjan's push-relabel (PRF); (2) Boykov and Kolmogorov's augmenting paths (BK); and (3) Goldberg's partial augment-relabel. While the common practice in computer vision is to use either the BK or the PRF algorithm for solving the problem, our results demonstrate that, in general, the HPF algorithm is more efficient and uses less memory than these three algorithms. This strongly suggests that HPF is a great option for many real-time computer-vision problems that require solving the minimum s-t cut problem.
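networkx ships several of the compared max-flow/min-cut routines (Boykov-Kolmogorov and push-relabel among them; HPF itself is distributed separately by its authors), so the experiment's setup can be sketched at toy scale. The graph and capacities below are illustrative, in the style of a graph-cut segmentation instance.

```python
import networkx as nx
from networkx.algorithms.flow import boykov_kolmogorov, preflow_push

# Tiny "segmentation" instance: pixel nodes p1..p4 with terminal links.
g = nx.DiGraph()
edges = [("s", "p1", 8), ("s", "p2", 6), ("s", "p3", 3),
         ("p1", "p2", 2), ("p2", "p1", 2), ("p2", "p3", 2),
         ("p3", "p4", 4), ("p1", "t", 5), ("p2", "t", 7),
         ("p3", "t", 1), ("p4", "t", 9)]
for u, v, cap in edges:
    g.add_edge(u, v, capacity=cap)

# Same optimal cut value from both flow algorithms, different runtimes.
for flow_func in (boykov_kolmogorov, preflow_push):
    cut_value, (source_side, sink_side) = nx.minimum_cut(
        g, "s", "t", flow_func=flow_func)
    print(flow_func.__name__, cut_value, sorted(source_side))
```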

9.
Surrogate-modelling-based optimization has attracted much attention due to its ability to solve expensive-to-evaluate optimization problems, and a large number of successful applications from various fields have been reported in the literature. However, little effort has been devoted to solving scheduling problems through surrogate modelling, since evaluating a given complete schedule of these complex problems is computationally cheap in most cases. In this paper, we develop a hybrid approach for solving the bottleneck stage scheduling problem (BSP) using the surrogate modelling technique. In our approach, we first transform the original problem into an expensive-to-evaluate optimization problem by cutting the original schedule into two partial schedules using decomposition; a surrogate model is then introduced to evaluate a given partial schedule quickly but crudely. Based on the surrogate model, we propose a differential evolution (DE) algorithm for solving BSPs in which a novel mechanism is developed to efficiently exploit the surrogate model to enhance the performance of DE. An improved adaptive proximity-based method is also introduced to balance exploration and exploitation during the evolutionary process of DE. Considering that data for training the surrogate model are generated at different iterations of DE, we adopt an incremental extreme learning machine as the surrogate model to reduce the computational cost while preserving good generalization performance. Extensive computational experiments demonstrate that significant improvements are obtained by the proposed surrogate-modelling-based approach.
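The generic pattern behind such approaches is sketched below with simple stand-ins: a k-nearest-neighbor regressor replaces the incremental extreme learning machine, and a continuous test function replaces the decomposed BSP, since the point is only the crude-surrogate pre-screening inside DE.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def surrogate_de(f, bounds, pop=20, iters=40, F=0.7, CR=0.9, screen=3, seed=0):
    """DE in which each parent spawns `screen` trial vectors, a cheap
    surrogate ranks them, and only the top-ranked trial is evaluated with
    the true expensive function f (one real evaluation per parent)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    X = rng.uniform(lo, hi, size=(pop, lo.size))
    y = np.array([f(x) for x in X])
    arch_X, arch_y = list(X), list(y)           # archive of all true evaluations
    model = KNeighborsRegressor(n_neighbors=3)
    for _ in range(iters):
        model.fit(np.array(arch_X), np.array(arch_y))
        for i in range(pop):
            trials = []
            for _ in range(screen):
                a, b, c = X[rng.choice(pop, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)
                mask = rng.random(lo.size) < CR
                trials.append(np.where(mask, mutant, X[i]))
            best = min(trials, key=lambda t: model.predict([t])[0])
            y_true = f(best)                    # the only expensive call
            arch_X.append(best); arch_y.append(y_true)
            if y_true < y[i]:
                X[i], y[i] = best, y_true
    return X[np.argmin(y)], float(y.min())

print(surrogate_de(lambda x: float(np.sum(x ** 2)), [(-5, 5)] * 4))
```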

10.
The multiprocessor open shop (MPOS) scheduling problem is NP-complete and belongs to a category of hard combinatorial optimization problems that has not received much attention in the literature. In this work, a special MPOS, a proportionate one, is introduced for the first time. Two original mixed integer programming formulations for the proportionate MPOS are developed and their complexity is discussed. Due to the complexity of the MPOS, this paper develops a compu-search methodology (a genetic algorithm, GA) to schedule the shop with the objective of minimizing the makespan. In this novel GA, a clever chromosome representation is developed that succinctly encodes a schedule of jobs across multiple stages. The innovative design of this chromosome enables any permutation of its genes to yield a feasible solution. This simple representation of an otherwise very complex schedule enables the genetic operators of crossover and mutation to easily manipulate a schedule as the algorithm iteratively searches for better schedules. A testbed of difficult problem instances is created to evaluate the performance of the GA. The solution for each instance is compared with a derived lower bound. Computational results reveal that the algorithm performs extremely well, demonstrating its potential to efficiently schedule MPOS problems. More importantly, successful experiments on large-scale problem instances suggest the readiness of the GA for industrial use.
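The feasibility-by-construction property of such a chromosome can be illustrated with a standard order crossover (OX), which always maps two permutations to another permutation; the operator below is a generic sketch, not necessarily the exact crossover used in the paper.

```python
import random

def order_crossover(p1, p2, rng=random):
    """Order crossover (OX): keep a random slice of parent 1 in place,
    fill the remaining positions with parent 2's genes in their order.
    The output is always a permutation, so under a permutation-decoding
    GA every offspring is a feasible schedule."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    filler = iter(g for g in p2 if g not in child)
    return [c if c is not None else next(filler) for c in child]

jobs = list(range(8))
pa, pb = random.sample(jobs, 8), random.sample(jobs, 8)
kid = order_crossover(pa, pb)
assert sorted(kid) == jobs            # feasibility is guaranteed by the encoding
print(pa, pb, kid)
```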

11.
In cluster analysis, determining the number of clusters is an important issue because information about the most appropriate number of clusters does not exist in real-world problems. Automatic clustering is a clustering approach that automatically finds the most suitable number of clusters and divides the instances into the corresponding clusters. This study proposes a novel automatic clustering algorithm using a hybrid of an improved artificial bee colony optimization algorithm and the K-means algorithm (iABC). The proposed iABC algorithm improves the onlooker bees' exploration scheme by directing their movements to a better location. Instead of using a random neighborhood location, the improved onlooker bee considers the data centroid to find a better initial centroid for the K-means algorithm. To increase the efficiency of the improvement, the updating process is applied only to the worst cluster centroid. The proposed iABC algorithm is verified on several benchmark datasets. The computational results indicate that the proposed iABC algorithm outperforms the original ABC algorithm on the automatic clustering problem. Furthermore, the proposed iABC algorithm is applied to the customer segmentation problem, where it produces better and more stable results than the original ABC algorithm.

12.
We consider the scheduling problem in which two agents (agents A and B), each having its own job set (containing the A-jobs and B-jobs, respectively), compete to process their own jobs in a two-machine flowshop. Each agent wants to maximize a certain criterion depending on the completion times of its jobs only. Specifically, agent A desires to maximize either the weighted number of just-in-time (JIT) A-jobs that are completed exactly on their due dates or the maximum weight of the JIT A-jobs, while agent B wishes to maximize the weighted number of JIT B-jobs. Four optimization problems can evidently be formulated by treating the two agents' criteria as objectives and constraints of the corresponding optimization problems. We focus on the problem of finding the Pareto-optimal schedules and present a bicriterion analysis of the problem; solving this problem also solves the other three bicriterion scheduling problems as a by-product. We show that the problems under consideration are either polynomially or pseudo-polynomially solvable. In addition, for each pseudo-polynomial-time solution algorithm, we show how to convert it into a two-dimensional fully polynomial-time approximation scheme for determining an approximate Pareto-optimal schedule. Finally, we conduct extensive numerical studies to evaluate the performance of the proposed algorithms.

13.
This paper addresses multi-objective optimization in scenarios where objective function evaluation is expensive, that is, expensive multi-objective optimization. We first propose a cluster-based neighborhood regression model, which incorporates the linear regression technique to predict the descent direction and generate new potential offspring. Combining this model with the classical decomposition-based multi-objective optimization framework, we propose an efficient and effective algorithm for tackling computationally expensive multi-objective optimization problems. As opposed to the conventional approach of replacing the original time-consuming objective functions with approximations obtained from a surrogate model, the proposed algorithm uses the regression model as an operator that produces higher-quality offspring, so that the algorithm requires fewer iterations to reach a given solution quality. The proposed algorithm is compared with several state-of-the-art surrogate-assisted algorithms on a variety of well-known benchmark problems. Empirical results demonstrate that the proposed algorithm outperforms or is competitive with the peer algorithms, and is able to maintain a good trade-off between solution quality and running time within a fairly small number of function evaluations. In particular, the proposed algorithm shows clear superiority in terms of the computational time used for its algorithmic components, and can obtain acceptable solutions for expensive problems with high efficiency.
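The descent-direction idea can be made concrete with ordinary least squares: fit a local linear model on a neighborhood of evaluated solutions and step against the fitted gradient. This is a single-objective illustration of the mechanism; the paper's operator works per cluster inside a decomposition-based multi-objective framework.

```python
import numpy as np

def regression_offspring(X, y, parent, k=6, step=0.5, rng=None):
    """Fit f(x) ~ w.x + b on the k nearest evaluated neighbors of
    `parent` and move against w (the ascent direction of the linear fit),
    plus a small Gaussian perturbation for diversity."""
    rng = rng or np.random.default_rng()
    nbrs = np.argsort(np.linalg.norm(X - parent, axis=1))[:k]
    A = np.hstack([X[nbrs], np.ones((k, 1))])        # design matrix [x | 1]
    w = np.linalg.lstsq(A, y[nbrs], rcond=None)[0][:-1]
    direction = -w / (np.linalg.norm(w) + 1e-12)     # unit descent direction
    return parent + step * direction + rng.normal(0, 0.05, parent.size)

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(30, 3))
y = ((X - 0.5) ** 2).sum(axis=1)                     # cheap stand-in objective
child = regression_offspring(X, y, X[y.argmin()], rng=rng)
print(child.round(3), float(((child - 0.5) ** 2).sum()))
```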

14.
We compare the concepts and computation of optimized diagnoses in the context of Boolean-constraint-based knowledge systems for automotive configuration, namely the preferred minimal diagnosis and the minimum weighted diagnosis. In order to restore the consistency of an over-constrained system w.r.t. a strict total order of the user requirements, the preferred minimal diagnosis tries to keep the most preferred user requirements; it can be computed, for example, by the FASTDIAG algorithm. In contrast, partial weighted MinUNSAT solvers aim to find a set of unsatisfied clauses with the minimum sum of weights, such that the diagnosis is of minimum weight. It turns out that both concepts have similarities, i.e., both deliver an optimal minimal correction subset. We show use cases from automotive configuration where optimized diagnoses are desired. We point out theoretical commonalities and prove the reducibility of both concepts to each other, i.e., both problems are \(\mathrm{FP}^{\mathrm{NP}}\)-complete, which was an open question. In addition to exact algorithms, we present greedy algorithms. We evaluate the performance of the exact and greedy algorithms on problem instances based on real automotive configuration data from three different German car manufacturers, and we compare the time and quality trade-offs.

15.
We consider optimization problems of the form (S, cost), where S is a clause set over Boolean variables \(x_1, \ldots, x_n\), with an arbitrary cost function \(\mathit{cost}\colon \mathbb{B}^n \rightarrow \mathbb{R}\), and the aim is to find a model A of S such that cost(A) is minimized. Here we study the generation of proofs of optimality in the context of branch-and-bound procedures for such problems. For this purpose we introduce \(\mathtt{DPLL_{BB}}\), an abstract DPLL-based branch-and-bound algorithm that can model optimization concepts such as cost-based propagation and cost-based backjumping. Most, if not all, SAT-related optimization problems are within the scope of \(\mathtt{DPLL_{BB}}\). Since many of the existing approaches for solving these problems can be seen as instances, \(\mathtt{DPLL_{BB}}\) allows one to formally reason about them in a simple way and to exploit the enhancements of \(\mathtt{DPLL_{BB}}\) given here, in particular its uniform method for generating independently verifiable optimality proofs.
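A toy instance of the \(\mathtt{DPLL_{BB}}\) idea, showing only the branch-and-bound core (cost-based pruning against the best model found so far); cost-based propagation, backjumping, and proof generation are omitted. The lower bound assumes a cost function that is monotone in the set of True variables, e.g. MinOnes.

```python
def dpll_bb(clauses, n_vars, cost):
    """Depth-first DPLL with cost-based pruning. `cost` maps a complete
    {var: bool} dict to a number; completing a partial assignment with
    False must give a lower bound (true for monotone costs like MinOnes)."""
    best = {"model": None, "cost": float("inf")}
    val = lambda l, a: None if abs(l) not in a else a[abs(l)] == (l > 0)
    def search(a, var):
        if any(all(val(l, a) is False for l in c) for c in clauses):
            return                                   # some clause is falsified
        lb = cost({v: a.get(v, False) for v in range(1, n_vars + 1)})
        if lb >= best["cost"]:
            return                                   # cost-based pruning
        if var > n_vars:
            best["model"], best["cost"] = dict(a), lb
            return
        for b in (False, True):                      # cheaper branch first
            a[var] = b
            search(a, var + 1)
            del a[var]
    search({}, 1)
    return best["model"], best["cost"]

cnf = [(1, 2), (-1, 3), (2, -3), (-2, 4)]
print(dpll_bb(cnf, 4, cost=lambda a: sum(a.values())))  # MinOnes optimum: cost 2
```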

16.
薛晗, 赵强, 马峰, 邵哲平. 《测控技术》 (Measurement & Control Technology), 2016, 35(5): 115-118
We study and analyze the theory and methods of the probabilistic traveling salesman problem (PTSP), a stochastic combinatorial optimization problem, and propose a discrete firefly algorithm (DFA) to solve it, building on the firefly algorithm (FA), a representative modern evolutionary algorithm. A new learning mechanism is introduced that, compared with the original firefly algorithm, makes it easier to reach the global optimum and gives better convergence. Experiments on classical instances from TSPLIB verify the feasibility of the approach. We examine the influence of the number of fireflies and the number of evolutionary iterations on solution quality, and compare DFA with other well-known evolutionary algorithms such as GA, PSO, and ACO. The experimental results confirm that DFA is effective and efficient both for fixed visiting probabilities and for visiting probabilities drawn at random from an interval, and thus provides a useful reference for solving this family of stochastic combinatorial optimization problems.

17.
In this paper, we treat optimization problems as a kind of reinforcement learning problem, regarding an optimization procedure that searches for an optimal solution as a reinforcement learning procedure that seeks the best policy for maximizing the expected reward. This viewpoint motivated us to propose a Q-learning-based swarm optimization (QSO) algorithm. The proposed QSO algorithm is a population-based optimization algorithm that integrates the essential properties of Q-learning and particle swarm optimization. The optimization procedure of the QSO algorithm proceeds as each individual imitates the behavior of the globally best one in the swarm. The best individual is chosen based on its accumulated performance rather than its momentary performance at each evaluation. Two data sets, a set of benchmark functions and a real-world problem (the economic dispatch (ED) problem for power systems), were used to test the performance of the proposed QSO algorithm. The simulation results on the benchmark functions show that the proposed QSO algorithm is comparable to, or even outperforms, several existing optimization algorithms. As for the ED problem, the proposed QSO algorithm has found solutions better than all previously reported solutions.

18.
In recent years, backtrack search algorithms for propositional satisfiability (SAT) have been the subject of dramatic improvements. These improvements have allowed SAT solvers to successfully solve instances with thousands or tens of thousands of variables. However, many new challenging problem instances are still too hard for current SAT solvers. As a result, further improvements to SAT technology are expected to have key consequences for solving hard real-world instances. This paper introduces a new idea: choosing the backtrack variable using a heuristic approach with the goal of diversifying the regions of the space that are explored during the search. The proposed heuristics are inspired by the heuristics proposed in recent years for the decision branching step of SAT solvers, namely VSIDS and its improvements. Conditions are established that guarantee completeness for the new algorithm, as well as for any other incomplete backtracking algorithm. Experimental results on hundreds of instances derived from real-world problems show that the new technique is able to speed up SAT solvers while aborting fewer instances. These results clearly motivate the integration of heuristic backtracking into SAT solvers.

19.
This paper introduces a surrogate-model-based algorithm for computationally expensive mixed-integer black-box global optimization problems with both binary and non-binary integer variables that may have computationally expensive constraints. The goal is to find accurate solutions with relatively few function evaluations. A radial basis function surrogate model (response surface) is used to select candidate points for the integer and continuous decision variables at which the computationally expensive objective and constraint functions are to be evaluated. In every iteration, multiple new points are selected using different methods, and the function evaluations are done in parallel. The algorithm converges to the global optimum almost surely. The performance of this new algorithm, SO-MI, is compared to a branch-and-bound algorithm for nonlinear problems, a genetic algorithm, and the NOMAD (Nonsmooth Optimization by Mesh Adaptive Direct Search) algorithm for mixed-integer problems on 16 test problems from the literature (constrained, unconstrained, unimodal and multimodal problems), as well as on two application problems arising from structural optimization and three application problems from optimal reliability design. The numerical experiments show that SO-MI reaches significantly better results than the other algorithms when the number of function evaluations is very restricted (200–300 evaluations).
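The core surrogate loop can be sketched with scipy's RBFInterpolator: fit a response surface to all evaluated points, sample random candidates, and truly evaluate the candidate with the best blend of predicted value (exploitation) and distance to evaluated points (exploration). The sketch is continuous-only and picks one candidate per iteration; SO-MI additionally handles integer variables and evaluates several differently chosen points in parallel, and the weights below are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def rbf_minimize(f, lo, hi, n_init=10, iters=40, n_cand=500, seed=0):
    """Surrogate global-optimization sketch with a weighted score of
    normalized predicted value and normalized distance to sampled points."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = rng.uniform(lo, hi, size=(n_init, lo.size))
    y = np.array([f(x) for x in X])
    for _ in range(iters):
        surf = RBFInterpolator(X, y, kernel="thin_plate_spline")
        cand = rng.uniform(lo, hi, size=(n_cand, lo.size))
        pred = surf(cand)
        dist = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2).min(axis=1)
        norm = lambda v: (v - v.min()) / (np.ptp(v) + 1e-12)
        x_new = cand[np.argmin(0.7 * norm(pred) - 0.3 * norm(dist))]
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new))                 # one expensive evaluation
    return X[np.argmin(y)], float(y.min())

x, v = rbf_minimize(lambda z: float(np.sum((z - 1.0) ** 2)), [-5] * 3, [5] * 3)
print(x.round(3), round(v, 4))
```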

20.
Ant colony optimization is a well-established metaheuristic from the swarm intelligence field for solving difficult optimization problems. In this work we present an application of ant colony optimization to the minimum connected dominating set problem, which is an NP-hard combinatorial optimization problem. Given an input graph, valid solutions are connected dominating subgraphs of that graph. Due to the connectivity constraints involved, out-of-the-box integer linear programming solvers do not perform well on this problem. The developed ant colony optimization algorithm uses reduced variable neighborhood search as a subroutine and can be applied to both the weighted and the non-weighted problem variants. An extensive experimental evaluation compares our algorithm with the respective state-of-the-art techniques from the literature and shows that the proposed algorithm outperforms the current state of the art for both problem variants. For comparison purposes, we also develop a constraint programming approach based on graph variables. Even though its performance deteriorates with growing instance size, it performs surprisingly well, solving 315 out of 481 considered problem instances to optimality.
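To fix the definitions, the sketch below checks solution validity with networkx and builds a (weak) baseline by greedily shrinking the full vertex set while it remains a connected dominating set; the ACO and CP approaches in the paper are of course far stronger.

```python
import networkx as nx

def is_cds(g, s):
    """Valid MCDS solution: s dominates every node of g and induces a
    connected subgraph."""
    return (len(s) > 0 and nx.is_dominating_set(g, s)
            and nx.is_connected(g.subgraph(s)))

def greedy_shrink_cds(g):
    """Naive baseline: drop vertices (low degree first) while the set
    remains a connected dominating set."""
    s = set(g.nodes)
    for v in sorted(g.nodes, key=g.degree):
        if len(s) > 1 and is_cds(g, s - {v}):
            s.discard(v)
    return s

g = nx.petersen_graph()
cds = greedy_shrink_cds(g)
print(sorted(cds), is_cds(g, cds))
```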
