Similar Articles
20 similar articles found
1.
Several sequential approximation algorithms for combinatorial optimization problems are based on the following paradigm: solve a linear or semidefinite programming relaxation, then use randomized rounding to convert fractional solutions of the relaxation into integer solutions for the original combinatorial problem. We demonstrate that such a paradigm can also yield parallel approximation algorithms by showing how to convert certain linear programming relaxations into essentially equivalent positive linear programming [LN] relaxations that can be near-optimally solved in NC. Building on this technique, and finding some new linear programming relaxations, we develop improved parallel approximation algorithms for Max Sat, Max Directed Cut, and Max k CSP. The Max Sat algorithm essentially matches the best approximation obtainable with sequential algorithms and has a fast sequential version. The Max k CSP algorithm improves even over previous sequential algorithms. We also show a connection between probabilistic proof checking and a restricted version of Max k CSP. This implies that our approximation algorithm for Max k CSP can be used to prove inclusion in P for certain PCP classes. Received November 1996; revised March 1997.
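The LP-rounding paradigm described here can be illustrated with a small sequential sketch. The code below solves the standard MAX-SAT linear programming relaxation with scipy and applies independent randomized rounding; it is only a sketch of the paradigm (a sequential stand-in for the NC positive-LP solver discussed in the abstract), and the clause encoding is an assumed convention, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def maxsat_lp_round(n_vars, clauses, weights=None, seed=0):
    """clauses: list of signed-literal lists, e.g. [1, -3] means (x1 or not x3).
    Sequential sketch of the LP relaxation + randomized rounding paradigm."""
    m = len(clauses)
    w = np.ones(m) if weights is None else np.asarray(weights, float)
    # variables: y_1..y_n (fractional truth values), then z_1..z_m (clause values)
    n_tot = n_vars + m
    c = np.zeros(n_tot)
    c[n_vars:] = -w                              # maximize sum w_j z_j -> minimize -sum
    A, b = [], []
    for j, clause in enumerate(clauses):
        # z_j <= sum_{positive literals} y_i + sum_{negative literals} (1 - y_i)
        row = np.zeros(n_tot)
        row[n_vars + j] = 1.0
        negs = 0
        for lit in clause:
            i = abs(lit) - 1
            if lit > 0:
                row[i] -= 1.0
            else:
                row[i] += 1.0
                negs += 1
        A.append(row); b.append(negs)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n_tot, method="highs")
    y = res.x[:n_vars]
    rng = np.random.default_rng(seed)
    assignment = rng.random(n_vars) < y          # randomized rounding: P[x_i = True] = y_i
    sat = sum(w[j] for j, cl in enumerate(clauses)
              if any((lit > 0) == assignment[abs(lit) - 1] for lit in cl))
    return assignment, sat, -res.fun             # rounded assignment, weight satisfied, LP bound

# tiny example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(maxsat_lp_round(3, [[1, 2], [-1, 3], [-2, -3]]))
```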

2.
The constrained shortest path problem (CSP) is one of the basic network optimization problems and plays an important part in real applications. In this paper, an adaptive amoeba algorithm is combined with the Lagrangian relaxation algorithm to solve the CSP. The proposed method is divided into two steps: (1) the adaptive amoeba algorithm is modified to solve the shortest path problem (SPP) in a directed network; (2) the modified adaptive amoeba algorithm is combined with the Lagrangian relaxation method to solve the CSP. In addition, the evolution process of the adaptive amoeba model is described in detail. Two examples are used to illustrate the efficiency of the proposed method. The results show that the proposed method can deal with the CSP effectively.
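The Lagrangian-relaxation half of such a method can be sketched independently of the amoeba model: relax the resource constraint into the objective, solve an ordinary shortest-path problem at each iteration, and update the multiplier by a subgradient step. In the sketch below, networkx's Dijkstra-based shortest path stands in for the paper's modified amoeba SPP solver, and the edge attributes 'cost' and 'delay' and the delay budget are illustrative assumptions.

```python
import networkx as nx

def lagrangian_csp(G, src, dst, budget, iters=50, step=1.0):
    """Constrained shortest path via Lagrangian relaxation of the delay constraint.
    Each edge needs 'cost' and 'delay' attributes; 'budget' bounds the total delay."""
    lam, best = 0.0, None
    for k in range(1, iters + 1):
        # shortest path under the Lagrangian edge weight cost + lam * delay
        # (Dijkstra here stands in for the paper's adaptive amoeba SPP step)
        weight = lambda u, v, d: d["cost"] + lam * d["delay"]
        path = nx.shortest_path(G, src, dst, weight=weight)
        edges = list(zip(path, path[1:]))
        cost = sum(G[u][v]["cost"] for u, v in edges)
        delay = sum(G[u][v]["delay"] for u, v in edges)
        if delay <= budget and (best is None or cost < best[0]):
            best = (cost, path)                              # feasible -> candidate solution
        lam = max(0.0, lam + (step / k) * (delay - budget))  # subgradient update
    return best

# illustrative toy network
G = nx.DiGraph()
G.add_edge("s", "a", cost=1, delay=5); G.add_edge("a", "t", cost=1, delay=5)
G.add_edge("s", "b", cost=3, delay=1); G.add_edge("b", "t", cost=3, delay=1)
print(lagrangian_csp(G, "s", "t", budget=4))   # expect the low-delay path via b
```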

3.
Distributed constraint satisfaction with partially known constraints
Distributed constraint satisfaction problems (DisCSPs) are composed of agents connected by constraints. The standard model for DisCSP search algorithms uses messages containing assignments of agents. It assumes that constraints are checked by one of the two agents involved in a binary constraint, hence the constraint is fully known to both agents. This paper presents a new DisCSP model in which constraints are kept private and are only partially known to agents. In addition, value assignments can also be kept private to agents and not be circulated in messages. Two versions of a new asynchronous backtracking algorithm that work with partially known constraints (PKC) are presented: one is a two-phase asynchronous backtracking algorithm and the other uses only a single phase. Another new algorithm preserves the privacy of assignments by performing distributed forward-checking (DisFC). We propose to use entropy as a quantitative measure of privacy. An extensive experimental evaluation demonstrates a trade-off between privacy preservation and search efficiency among the different algorithms. Partially supported by the Spanish project TIN2006-15387-C03-01. Partially supported by the Lynn and William Frankel Center for Computer Sciences and the Paul Ivanier Center for Robotics and Production Management.
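An entropy-based privacy measure of the kind mentioned above can be illustrated with a minimal sketch. The uniform-distribution assumption and the notion of "values an outside observer still considers possible" are simplifications introduced here, not the paper's exact definition.

```python
import math

def entropy_bits(candidates):
    """Shannon entropy of a uniform distribution over the still-possible values
    (simplifying assumption: all remaining candidates are equally likely)."""
    return math.log2(len(candidates)) if candidates else 0.0

def privacy_loss(domain, still_possible_after_search):
    """Bits of information about a private assignment revealed during search."""
    return entropy_bits(domain) - entropy_bits(still_possible_after_search)

# an agent with an 8-value domain; after the run, others can rule out all but 2 values
print(privacy_loss(list(range(8)), [3, 5]))   # 3.0 - 1.0 = 2.0 bits leaked
```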

4.
Argumentation is a promising approach for defeasible reasoning. It consists of justifying each plausible conclusion by arguments. Since the available information may be inconsistent, a conclusion and its negation may both be justified; the arguments are thus said to be conflicting. The main issue is how to evaluate the arguments. Several semantics were proposed for that purpose; the most important ones are stable, preferred, complete, grounded and admissible. A semantics is a set of criteria that should be satisfied by a set of arguments, called an extension, in order to be acceptable. Different decision problems related to these semantics were defined (such as whether an argumentation framework has a stable extension), and it was shown that most of these problems are intractable. Consequently, developing algorithms for these problems is not trivial, and the implementation of argumentation systems is not obvious. Recently, some solutions to this problem were found. The idea is to use a reduction method in which a given problem is translated into another one, such as SAT or ASP. This paper follows this line of research. It studies how to encode the problem of computing the extensions of an argumentation framework (under each of the previous semantics) as a constraint satisfaction problem (CSP). Such an encoding is of great importance since it makes it possible to use the very efficient solvers developed by the CSP community for computing the extensions. Our encodings take advantage of existing reductions to SAT in the case of Dung’s abstract framework. Among the various ways of translating a SAT problem into a CSP, we propose the most appropriate one in the argumentation context. We also provide encodings for two other families of argumentation frameworks: the constrained version of Dung’s abstract framework and preference-based argumentation frameworks.
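To make the constraint view concrete, here is a minimal sketch of the model for the stable semantics in Dung's abstract framework: one Boolean variable per argument, a conflict-freeness constraint per attack, and a "every outside argument is attacked" constraint per argument. The constraints are checked by brute-force enumeration here rather than by a CSP solver, so this illustrates only the encoding, not the paper's reductions.

```python
from itertools import product

def stable_extensions(arguments, attacks):
    """All stable extensions of an abstract argumentation framework.
    A set S is stable iff it is conflict-free and attacks every argument not in S.
    Brute force over all Boolean assignments (a CSP solver would search/propagate instead)."""
    attacks = set(attacks)
    extensions = []
    for bits in product([False, True], repeat=len(arguments)):
        in_S = dict(zip(arguments, bits))
        conflict_free = all(not (in_S[a] and in_S[b]) for a, b in attacks)
        covers_rest = all(in_S[b] or any(in_S[a] and (a, b) in attacks for a in arguments)
                          for b in arguments)
        if conflict_free and covers_rest:
            extensions.append({a for a in arguments if in_S[a]})
    return extensions

# a -> b -> c -> a (odd cycle: no stable extension); adding d attacking c yields one
print(stable_extensions("abc", [("a", "b"), ("b", "c"), ("c", "a")]))        # []
print(stable_extensions("abcd", [("a", "b"), ("b", "c"), ("c", "a"), ("d", "c")]))
```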

5.
Evolutionary algorithms (EAs) have been applied successfully to many optimization problems in recent years. Genetic algorithms (GAs) and evolutionary programming (EP) are two different types of EAs. GAs use crossover as the primary search operator and mutation as a background operator, while EP uses mutation as the primary search operator and does not employ any crossover. This paper proposes a novel EP algorithm for cutting stock problems with and without contiguity. Two new mutation operators are proposed. Experimental studies have been carried out to examine the effectiveness of the EP algorithm. They show that EP can provide a simple yet more effective alternative to GAs in solving cutting stock problems with and without contiguity. The solutions found by EP are, in most cases, significantly better than or comparable to those found by GAs. Scope and purpose: The one-dimensional cutting stock problem (CSP) is one of the classical combinatorial optimization problems. While most previous work only considered minimizing trim loss, this paper considers CSPs with two objectives. One is the minimization of trim loss (i.e., wastage). The other is the minimization of the number of stocks with wastage, or the number of partially finished items (the pattern sequencing or contiguity problem). Although some traditional OR techniques (e.g., programming-based approaches) can find the global optimum for small CSPs, they are impractical for finding the exact global optimum for large problems due to combinatorial explosion. Heuristic techniques (such as various hill-climbing algorithms) need to be used for large CSPs. One of the heuristic algorithms that has recently been applied to CSPs with success is the genetic algorithm (GA). This paper proposes a much simpler evolutionary algorithm than the GA, based on evolutionary programming (EP). The EP algorithm has been shown to perform significantly better than the GA for most of the benchmark problems we used and to be comparable to the GA for the others.
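A minimal mutation-only (EP-style) loop for the one-dimensional cutting stock problem might look as follows. The encoding (a permutation of the required pieces, decoded by first-fit), the plain swap mutation, and the fitness weighting are illustrative stand-ins; the paper's two specialised mutation operators are not reproduced here.

```python
import random

def decode_first_fit(pieces, stock_len):
    """First-fit the pieces (in the given order) into stocks of length stock_len."""
    stocks = []
    for p in pieces:
        for s in stocks:
            if sum(s) + p <= stock_len:
                s.append(p); break
        else:
            stocks.append([p])
    return stocks

def fitness(pieces, stock_len):
    stocks = decode_first_fit(pieces, stock_len)
    waste = sum(stock_len - sum(s) for s in stocks)
    partially_filled = sum(1 for s in stocks if sum(s) < stock_len)  # contiguity proxy
    return waste + partially_filled        # illustrative weighting; lower is better

def ep_cutting_stock(pieces, stock_len, pop=30, gens=200, seed=0):
    random.seed(seed)
    population = [random.sample(pieces, len(pieces)) for _ in range(pop)]
    for _ in range(gens):
        offspring = []
        for parent in population:          # plain swap mutation, not the paper's operators
            child = parent[:]
            i, j = random.randrange(len(child)), random.randrange(len(child))
            child[i], child[j] = child[j], child[i]
            offspring.append(child)
        merged = population + offspring    # (mu + lambda) truncation selection
        merged.sort(key=lambda ind: fitness(ind, stock_len))
        population = merged[:pop]
    best = population[0]
    return decode_first_fit(best, stock_len), fitness(best, stock_len)

pieces = [7, 7, 5, 5, 5, 4, 4, 3, 3, 2, 2, 2, 1]   # one entry per demanded piece (toy data)
print(ep_cutting_stock(pieces, stock_len=10))
```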

6.
Distributed Constraint Satisfaction (DCSP) has long been considered an important area of research for artificial intelligence and multi-agent systems. Also, Ant Colony Optimization (ACO) is an important evolutionary method for solving various optimization problems. This paper demonstrates the power of ants in solving DCSPs and describes a new approach to such a solution, showing how it differs from previous ACO-based DCSP solvers. The presented algorithm is designed to meet the special requirements that are important in the distributed form of the Constraint Satisfaction Problem (CSP). The paper describes the important criteria for distributed CSP and then demonstrates how the presented algorithm stands out from similar DCSP solvers with respect to these criteria. Finally, the proposed approach is evaluated on random binary problems. The practical results show that this method, in most cases, outperforms the Asynchronous Backtracking Algorithm (ABT) and the Distributed Breakout Algorithm (DBA), two important algorithms in this field of research.

7.
The Hybrid Search for Minimal Perturbation Problems algorithm in Dynamic CSP (HS_MPP) (Zivan, Constraints, 16(3), 228–249, 2011) ensures, for a given dynamic problem and the solution to the previous CSP, that the optimal solution to the newly generated CSP is found. The method exploits the fact that its reported solution must satisfy two requirements: first, that it is a complete assignment that solves the derived CSP, and second, that it is as close as possible to the solution of the former CSP. Unfortunately, the pseudo-code of the algorithm in Zivan (Constraints, 16(3), 228–249, 2011) is confusing and may lead to an implementation in which HS_MPP does not produce the expected outcome for a given instance of a dynamic CSP. In this erratum, we demonstrate the possible undesired outcomes and give corrections to HS_MPP’s pseudo-code.

8.
In a recent paper by Liu et al. [Exact algorithm and heuristic for the closest string problem, Computers & Operations Research 2011;38:1513-20], a polynomial-time heuristic procedure is proposed for the closest string problem (CSP). This heuristic, called LDDA_LSS, is a combination of a previously published approximation algorithm and local search strategies. This paper points out that an instant algorithm, which derives a feasible solution directly from the continuous relaxation solution of a standard ILP formulation of the CSP, already strongly outperforms LDDA_LSS both in terms of solution quality and computing time. Two core-based procedures are then proposed that further improve the results of the instant algorithm. Based on these results, we conclude that such LP-based approaches, given their efficiency and simplicity, should be used as a benchmark for future heuristics for the CSP.
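The "instant algorithm" idea (round the continuous relaxation of a standard ILP directly) can be sketched as follows, using scipy's LP solver. The variable layout and the per-position argmax rounding rule are the obvious choices for such a sketch, not necessarily the cited paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def closest_string_lp_round(strings, alphabet="ACGT"):
    """LP relaxation of a standard closest-string ILP, rounded position-wise by argmax.
    Illustrative layout: x[j,c] in [0,1] selects character c at position j, d is the radius."""
    k, L, A = len(strings), len(strings[0]), len(alphabet)
    idx = lambda j, c: j * A + alphabet.index(c)
    n = L * A + 1
    c_obj = np.zeros(n); c_obj[-1] = 1.0             # minimize the radius d
    # each position chooses exactly one character (fractionally)
    A_eq = np.zeros((L, n)); b_eq = np.ones(L)
    for j in range(L):
        for ch in alphabet:
            A_eq[j, idx(j, ch)] = 1.0
    # Hamming distance to string s_i:  L - sum_j x[j, s_i[j]]  <=  d
    A_ub = np.zeros((k, n)); b_ub = np.full(k, -float(L))
    for i, s in enumerate(strings):
        for j, ch in enumerate(s):
            A_ub[i, idx(j, ch)] = -1.0
        A_ub[i, -1] = -1.0
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * (L * A) + [(0, L)], method="highs")
    x = res.x[:-1].reshape(L, A)
    t = "".join(alphabet[np.argmax(x[j])] for j in range(L))   # instant rounding
    dist = max(sum(a != b for a, b in zip(t, s)) for s in strings)
    return t, dist, res.x[-1]          # rounded string, its radius, LP lower bound

print(closest_string_lp_round(["ACGTA", "ACGTT", "TCGTA"]))
```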

9.
The richness of the constraint satisfaction problem (or CSP) in representing combinatorial search problems has resulted in a torrent of techniques for efficiently solving them. These techniques have focused on discovering better backtrack points, learning from dead-ends and avoiding repetitious interference, problem reduction methods, and the use of network heuristics. Much of this research has derived innovative methods for solving the CSP; however, the evaluations of the techniques have remained diverse and, in many cases, statistically inaccurate. Another issue with regard to the performance measurement of constraint satisfaction techniques is the inability to model the computational cost of constraint processing. It is not uncommon to find evaluations based on CSPs that differ only in the percentage of constraints and the tightness of each constraint. This may be justifiable if it can be established that these are the only contributing factors to the performance variable. The three aspects mentioned above constitute this paper's main focus points. They come under the general headings of Modelling CSP Difficulty, Modelling Constraint Cost and Elucidating Major Performance Factors, respectively. This paper seeks to provide a set of proposals with respect to these three well-known areas so as to collectively enhance the robustness of evaluations conducted in the field of constraint satisfaction.

10.
Finite-domain constraint programming has been used with great success to tackle a wide variety of combinatorial problems in industry and academia. To apply finite-domain constraint programming to a problem, it is modelled by a set of constraints on a set of decision variables. A common modelling pattern is the use of matrices of decision variables. The rows and/or columns of these matrices are often symmetric, leading to redundancy in a systematic search for solutions. An effective method of breaking this symmetry is to constrain the assignments of the affected rows and columns to be ordered lexicographically. This paper develops an incremental propagation algorithm, GACLexLeq, that establishes generalised arc consistency on this constraint in O(n) operations, where n is the length of the vectors. Furthermore, this paper shows that decomposing GACLexLeq into primitive constraints available in current finite-domain constraint toolkits reduces the strength or increases the cost of constraint propagation. Also presented are extensions and modifications to the algorithm to handle strict lexicographic ordering, detection of entailment, and vectors of unequal length. Experimental results on a number of domains demonstrate the value of GACLexLeq.

11.
Systems of polynomial equations with coefficients over a field K can be used to concisely model combinatorial problems. In this way, a combinatorial problem is feasible (e.g., a graph is 3-colorable, hamiltonian, etc.) if and only if a related system of polynomial equations has a solution over the algebraic closure of the field K. In this paper, we investigate an algorithm aimed at proving combinatorial infeasibility based on the observed low degree of Hilbert’s Nullstellensatz certificates for polynomial systems arising in combinatorics, and based on fast large-scale linear-algebra computations over K. We also describe several mathematical ideas for optimizing our algorithm, such as using alternative forms of the Nullstellensatz for computation, adding carefully constructed polynomials to our system, branching and exploiting symmetry. We report on experiments based on the problem of proving the non-3-colorability of graphs. We successfully solved graph instances with almost two thousand nodes and tens of thousands of edges.
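The polynomial encoding behind this approach is compact enough to show. Under the standard formulation over the cube roots of unity, a graph is 3-colorable if and only if the system below has a common solution; the sketch only builds the system with sympy and verifies a candidate coloring, it does not compute a Nullstellensatz certificate.

```python
import sympy as sp

def coloring_system(vertices, edges):
    """Polynomial system whose common solutions are exactly the proper 3-colorings:
    x_v^3 - 1 = 0 for every vertex, x_u^2 + x_u*x_v + x_v^2 = 0 for every edge."""
    x = {v: sp.Symbol(f"x_{v}") for v in vertices}
    polys = [x[v]**3 - 1 for v in vertices]
    polys += [x[u]**2 + x[u]*x[v] + x[v]**2 for u, v in edges]
    return x, polys

def check_coloring(vertices, edges, coloring):
    """Substitute one cube root of unity per color and test that every polynomial vanishes.
    (Verification only; no certificate of infeasibility is computed.)"""
    roots = [sp.Integer(1),
             sp.Rational(-1, 2) + sp.sqrt(3) * sp.I / 2,
             sp.Rational(-1, 2) - sp.sqrt(3) * sp.I / 2]
    x, polys = coloring_system(vertices, edges)
    subs = {x[v]: roots[coloring[v]] for v in vertices}
    return all(sp.expand(p.subs(subs)) == 0 for p in polys)

# a 4-cycle is 3-colorable (illustrative example)
V, E = "abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(check_coloring(V, E, {"a": 0, "b": 1, "c": 0, "d": 1}))   # True
```

The edge polynomial vanishes exactly when the two endpoints receive different cube roots of unity, which is what makes the system equivalent to 3-colorability.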

12.
Markov chains are a well-known tool for modelling temporal properties of many phenomena, from text structure to fluctuations in economics. Because they are easy to generate, Markovian sequences, i.e. temporal sequences having the Markov property, are also used for content generation applications such as text or music generation that imitate a given style. However, Markov sequences are traditionally generated using greedy, left-to-right algorithms. While this approach is computationally cheap, it is fundamentally unsuited for interactive control. This paper addresses the issue of generating steerable Markovian sequences. We target interactive applications such as games, in which users want to control, through simple input devices, the way the system generates a Markovian sequence, such as a text, a musical sequence or a drawing. To this aim, we propose to revisit Markov sequence generation as a branch-and-bound constraint satisfaction problem (CSP). We propose a CSP formulation of the basic Markovian hypothesis as elementary Markov constraints (EMCs). We propose algorithms that achieve domain consistency for the propagators of EMCs in an event-based implementation of CSP. We show how EMCs can be combined to estimate the global Markovian probability of a whole sequence, and how they accommodate different kinds of Markov generation such as fixed-order, variable-order, or smoothed models. Such a formulation, although more costly than traditional greedy generation algorithms, yields the immense advantage of being naturally steerable, since control specifications can be represented by arbitrary additional constraints, without any modification of the generation algorithm. We illustrate our approach on simple yet combinatorial chord sequence and melody generation problems and give some performance results.
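The elementary Markov constraint can be read as a binary constraint between consecutive variables, combined freely with control constraints. The minimal backtracking sketch below illustrates that reading only; it is not the paper's event-based propagation, and the toy transition table and the "must end on C" control constraint are illustrative assumptions.

```python
import math

def markov_csp(transitions, start, length, unary=None):
    """Backtracking search for the most probable sequence of the given length that starts
    at `start`, follows nonzero Markov transitions (the elementary Markov constraint
    between consecutive variables), and satisfies optional unary control constraints."""
    unary = unary or {}
    best = {"logp": -math.inf, "seq": None}

    def extend(seq, logp):
        if len(seq) == length:
            if logp > best["logp"]:
                best.update(logp=logp, seq=list(seq))
            return
        i = len(seq)
        for nxt, p in transitions.get(seq[-1], {}).items():
            if p > 0 and unary.get(i, lambda v: True)(nxt):
                extend(seq + [nxt], logp + math.log(p))

    if unary.get(0, lambda v: True)(start):
        extend([start], 0.0)
    return best["seq"], math.exp(best["logp"]) if best["seq"] else 0.0

# toy chord-like alphabet; control constraint: the sequence must end on "C"
T = {"C": {"F": 0.5, "G": 0.5}, "F": {"G": 0.6, "C": 0.4}, "G": {"C": 0.8, "F": 0.2}}
print(markov_csp(T, "C", 5, unary={4: lambda v: v == "C"}))
```

Steering amounts to adding or changing the unary constraints; the generation routine itself is untouched, which is the point the abstract makes.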

13.
The value iteration algorithm is a well-known technique for generating solutions to discounted Markov decision process (MDP) models. Although simple to implement, the approach is nevertheless limited in situations where many Markov decision processes must be solved, such as in real-time state-based control problems or in simulation/optimization problems, because of the potentially large number of iterations required for the value function to converge to an ε-optimal solution. Experimental results suggest, however, that the sequence of solution policies associated with each iteration of the algorithm converges much more rapidly than does the value function. This behavior has significant implications for designing solution approaches for MDPs, yet it has not been explicitly characterized in the literature nor has it generated significant discussion. This paper seeks to generate such discussion by providing comparative empirical convergence results and exploring several predictors that allow estimation of policy convergence speed based on existing MDP parameters.
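The observation is easy to reproduce on a small random MDP: track the greedy policy extracted at every sweep alongside the value-function residual. A minimal sketch follows; the random dense MDP, the sizes, and the discount factor are illustrative, and the "policy repeats" test is a cheap proxy for policy convergence rather than a formal criterion.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, eps=1e-6, max_iter=10_000):
    """P[a, s, s'] transition probabilities, R[s, a] rewards.
    Returns the value function and policy, plus the sweep at which the greedy policy first
    repeated (proxy for policy convergence) and the sweep at which the residual fell below eps."""
    n_a, n_s, _ = P.shape
    V = np.zeros(n_s)
    prev_policy, policy_iter = None, None
    for k in range(1, max_iter + 1):
        Q = R + gamma * np.einsum("ast,t->sa", P, V)       # Q[s, a]
        V_new, policy = Q.max(axis=1), Q.argmax(axis=1)
        if prev_policy is not None and policy_iter is None and np.array_equal(policy, prev_policy):
            policy_iter = k                                 # greedy policy repeats
        if np.max(np.abs(V_new - V)) < eps:
            return V_new, policy, policy_iter, k
        V, prev_policy = V_new, policy
    return V, policy, policy_iter, None

rng = np.random.default_rng(0)                              # illustrative random dense MDP
n_s, n_a = 30, 4
P = rng.random((n_a, n_s, n_s)); P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_s, n_a))
V, pi, k_policy, k_value = value_iteration(P, R)
print(f"greedy policy fixed after ~{k_policy} sweeps; value converged after {k_value}")
```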

14.
We describe an algorithm (VQE) for a variant of the real quantifier elimination problem (QE). The variant problem requires the input to satisfy a certain extra condition and allows the output to be almost equivalent to the input. The rationale for studying such a variant QE problem is that many quantified formulas arising in applications do satisfy the extra condition; furthermore, in most applications it is sufficient that the output formula is almost equivalent to the input formula. The main idea underlying the algorithm is to replace the repeated projection step of CAD with a single projection, without carrying out a parametric existential decision over the reals. We find that the algorithm can tackle important and challenging problems, such as numerical stability analysis of the widely used MacCormack’s scheme. The problem has been practically out of reach for standard QE algorithms in spite of many attempts to tackle it. However, the current implementation of VQE can solve it in about 12 hours. This paper extends the results reported at the conference ISSAC 2009.

15.
The optimization of algorithm (hyper-)parameters is crucial for achieving peak performance across a wide range of domains, ranging from deep neural networks to solvers for hard combinatorial problems. However, the proper evaluation of new algorithm configuration (AC) procedures (or configurators) is hindered by two key hurdles. First, AC scenarios are hard to set up, including the target algorithm to be optimized and the problem instances to be solved. Second, and even more significantly, they are computationally expensive: a single configurator run involves many costly runs of the target algorithm. Here, we propose a benchmarking approach that uses surrogate scenarios, which are computationally cheap while remaining close to the original AC scenarios. These surrogate scenarios approximate the response surface corresponding to true target algorithm performance using a regression model. In our experiments, we construct and evaluate surrogate scenarios for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems. We generalize previous work by building surrogates for AC scenarios with multiple problem instances, stochastic target algorithms and censored running time observations. We show that our surrogate scenarios capture overall important characteristics of the original AC scenarios from which they were derived, while being much easier to use and orders of magnitude cheaper to evaluate.
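The surrogate idea, in its simplest form: fit a regression model on logged (configuration, instance) → runtime data, then let configurators query the model instead of running the real solver. The sketch below uses a random forest on synthetic data; the feature layout and the synthetic runtime function are purely illustrative, not the paper's benchmark construction.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# logged data from the "real" (expensive) AC scenario: 2 hyperparameters per configuration,
# 3 features per problem instance, observed runtime (synthetic function, purely illustrative)
n_runs = 2000
configs = rng.uniform(0, 1, size=(n_runs, 2))
instances = rng.uniform(0, 1, size=(n_runs, 3))
true_runtime = (5 * (configs[:, 0] - 0.3) ** 2 + 2 * configs[:, 1] * instances[:, 0]
                + instances[:, 1] + rng.normal(0, 0.05, n_runs))

X = np.hstack([configs, instances])
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, true_runtime)

def surrogate_cost(config, instance_features):
    """Cheap stand-in for running the target algorithm: one model prediction."""
    x = np.concatenate([config, instance_features]).reshape(1, -1)
    return float(surrogate.predict(x)[0])

# a configurator can now evaluate candidate configurations over a benchmark set cheaply
benchmark = rng.uniform(0, 1, size=(50, 3))
candidate = np.array([0.31, 0.1])
print(np.mean([surrogate_cost(candidate, inst) for inst in benchmark]))
```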

16.
Building on a result of Larose and Tesson for constraint satisfaction problems (CSPs), we uncover a dichotomy for the quantified constraint satisfaction problem QCSP(B), where B is a finite structure that is a core. Specifically, such problems are either in ALogtime or are L-hard. This involves demonstrating that if CSP(B) is first-order expressible, and B is a core, then QCSP(B) is in ALogtime. We show that the class of B such that CSP(B) is first-order expressible (indeed trivial) is a microcosm for all QCSPs. Specifically, for any B there exists a C (generally not a core) such that CSP(C) is trivial, yet QCSP(B) and QCSP(C) are equivalent under logspace reductions.

17.
The multistage cutting stock problem (CSP) generalizes the one-dimensional CSP to the case where a lengthwise cutting process is distributed over two or more successive stages. At every stage of the cutting process, incoming rolls are slit into smaller rolls by width. The problem is to minimize the total trim loss occurring at all stages of the technological process while meeting customer demands for finished rolls. We propose a row and column generation technique for solving the multistage one-dimensional CSP. The technique is a generalization of the column generation method suggested by Gilmore and Gomory for solving the classic CSP. The procedure generates only those intermediate rolls (rows) and cutting patterns (columns) that are needed. The auxiliary problem embedded into the framework of the revised simplex algorithm is a non-linear knapsack problem that can be solved efficiently. Computational results show that the overall method is a valuable addition to the tool set for modeling and solving the multistage CSP. Scope and purpose: We investigate a broad class of large-scale linear programming models and suggest a new and efficient way to solve them. The proposed method belongs to a category of decomposition techniques generalizing the famous column generation method. An iteration of the revised simplex algorithm may “enrich” the LP matrix either by generating a new column, as a pure column generation method does, or by generating a combination of a new row and a pair of new columns; this row and column generation technique is what we propose and investigate. Applications modeled by a multistage CSP occur in industries that use a multistage cutting process (paper, leather, film, steel, etc.) or a nested packing/loading process (transportation). The unknown variables in the multistage cutting stock problem are the intermediate sizes (rows) and cutting patterns (columns); according to the algorithm, both are generated dynamically. The proposed algorithm brings tremendous benefits in terms of the quality of solutions and computational performance.
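For the classic single-stage case, the auxiliary (pricing) problem mentioned above is a knapsack over the piece widths weighted by the master LP's dual prices. The sketch below shows only that single-stage Gilmore-Gomory pricing step, not the paper's multistage non-linear variant, and it assumes the dual values are supplied by the restricted master LP.

```python
def price_pattern(widths, duals, stock_width):
    """Gilmore-Gomory pricing: find the cutting pattern a (copies of each width) that
    maximizes sum(duals[i] * a[i]) subject to sum(widths[i] * a[i]) <= stock_width.
    The pattern is worth adding to the master LP iff its reduced cost 1 - value < 0.
    The dual prices are assumed to come from the restricted master LP."""
    # unbounded knapsack DP over integer capacities
    best = [0.0] * (stock_width + 1)
    choice = [None] * (stock_width + 1)
    for w in range(1, stock_width + 1):
        for i, (wi, di) in enumerate(zip(widths, duals)):
            if wi <= w and best[w - wi] + di > best[w]:
                best[w], choice[w] = best[w - wi] + di, i
    # recover the best pattern
    pattern, w = [0] * len(widths), stock_width
    while w > 0 and choice[w] is not None:
        pattern[choice[w]] += 1
        w -= widths[choice[w]]
    return pattern, 1.0 - best[stock_width]      # pattern and its reduced cost

# finished-roll widths, illustrative dual prices from the master, raw roll width
pattern, reduced_cost = price_pattern([45, 36, 31, 14], [0.5, 0.4, 0.35, 0.15], 100)
print(pattern, reduced_cost)   # negative reduced cost -> add this pattern as a new column
```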

18.
Many real problems can be naturally modelled as constraint satisfaction problems (CSPs). However, some of these problems are of a distributed nature, which requires them to be modelled as distributed constraint satisfaction problems (DCSPs). In this work, we present a distributed model for solving CSPs. Our technique carries out a partition of the constraint network using graph partitioning software; after partitioning, each sub-CSP is arranged into a DFS-tree CSP structure that is used as a communication hierarchy by our distributed algorithm. We show that our distributed algorithm outperforms well-known centralized algorithms on partitionable CSPs.

19.
Rapid advances in image acquisition and storage technology underline the need for real-time algorithms that are capable of solving large-scale image processing and computer vision problems. The minimum s-t cut problem, which is a classical combinatorial optimization problem, is a prominent building block in many vision and imaging algorithms such as video segmentation, co-segmentation, stereo vision, multi-view reconstruction, and surface fitting, to name a few. That is why finding a real-time algorithm that optimally solves this problem is of great importance. In this paper, we introduce to computer vision Hochbaum’s pseudoflow (HPF) algorithm, which optimally solves the minimum s-t cut problem. We compare the performance of HPF, in terms of execution times and memory utilization, with three leading published algorithms: (1) Goldberg and Tarjan’s push-relabel (PRF) algorithm; (2) Boykov and Kolmogorov’s augmenting-paths (BK) algorithm; and (3) Goldberg’s partial augment-relabel algorithm. While the common practice in computer vision is to use either the BK or the PRF algorithm for solving the problem, our results demonstrate that, in general, the HPF algorithm is more efficient and utilizes less memory than these three algorithms. This strongly suggests that HPF is a great option for many real-time computer vision problems that require solving the minimum s-t cut problem.
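For readers who want to experiment, a tiny graph-cut segmentation can be set up with networkx, which ships the BK and push-relabel baselines discussed above (Hochbaum's pseudoflow itself is not part of networkx). The pixel data and capacity scheme below are illustrative assumptions, not the paper's benchmark instances.

```python
import networkx as nx
from networkx.algorithms.flow import boykov_kolmogorov, preflow_push

# illustrative 1-D "image": terminal capacities encode how strongly each pixel prefers
# foreground (source s) or background (sink t); neighbour capacities encourage smoothness
pixels = [0.9, 0.8, 0.7, 0.2, 0.1]          # foreground likelihoods (toy data)
G = nx.DiGraph()
for i, p in enumerate(pixels):
    G.add_edge("s", i, capacity=p)          # data term towards foreground
    G.add_edge(i, "t", capacity=1.0 - p)    # data term towards background
for i in range(len(pixels) - 1):
    G.add_edge(i, i + 1, capacity=0.5)      # smoothness term (both directions)
    G.add_edge(i + 1, i, capacity=0.5)

for flow_func in (boykov_kolmogorov, preflow_push):
    cut_value, (fg, bg) = nx.minimum_cut(G, "s", "t", flow_func=flow_func)
    labels = [i in fg for i in range(len(pixels))]
    print(flow_func.__name__, round(cut_value, 3), labels)
```

Both solvers return the same optimal cut value; the segmentation is read off from the side of the cut each pixel node lands on.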

20.
A. Bertoni, G. Mauri, M. Torelli, Calcolo, 1980, 17(2):163–174
This paper is intended to show that an algebraic approach can give useful suggestions for designing efficient algorithms that solve combinatorial problems. The problems we discuss in the paper are:
  1. Counting strings of a given length generated by a regular grammar. For this problem, we give an exact algorithm whose complexity is O(log n) (with respect to the number of executed operations), and an approximate algorithm which, however, still has the same order of complexity (a sketch of this counting technique follows below);
  2. Counting trees recognized by a tree automaton. For this problem, we give an exact algorithm of complexity O(n) and an approximate one of complexity O(log n). For this approximate algorithm the relative error is shown to be O(1/n).
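A minimal sketch of the transfer-matrix counting behind item 1, assuming the regular language is given as a small DFA (rather than a grammar) and using exact Python integers with exponentiation by squaring, which gives the O(log n) operation count:

```python
def count_strings(dfa, start, accepting, n):
    """Number of length-n strings accepted by a DFA, in O(|Q|^3 log n) arithmetic operations.
    dfa[q] is a list of successor states, one per alphabet symbol (small hypothetical DFA)."""
    states = len(dfa)
    # transfer matrix: M[q][r] = number of symbols taking q to r
    M = [[0] * states for _ in range(states)]
    for q, succ in enumerate(dfa):
        for r in succ:
            M[q][r] += 1

    def mat_mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(states)) for j in range(states)]
                for i in range(states)]

    # exponentiation by squaring: O(log n) matrix multiplications
    P = [[int(i == j) for j in range(states)] for i in range(states)]
    while n:
        if n & 1:
            P = mat_mul(P, M)
        M = mat_mul(M, M)
        n >>= 1
    return sum(P[start][q] for q in accepting)

# binary strings with no two consecutive 1s: state 0 = "last symbol was 0", 1 = "last was 1"
dfa = [[0, 1],      # from state 0: on '0' -> 0, on '1' -> 1
       [0, 2],      # from state 1: on '0' -> 0, on '1' -> dead state 2
       [2, 2]]      # dead state
print([count_strings(dfa, 0, {0, 1}, n) for n in range(1, 8)])   # 2, 3, 5, 8, 13, 21, 34
```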
