Similar Documents
20 similar documents retrieved (search time: 31 ms).
1.
Constraint programming (CP) has been used with great success to tackle a wide variety of constraint satisfaction problems which are computationally intractable in general. Global constraints are one of the important factors behind the success of CP. In this paper, we study a new global constraint, the multiset ordering constraint, which is shown to be useful in symmetry breaking and searching for leximin optimal solutions in CP. We propose efficient and effective filtering algorithms for propagating this global constraint. We show that the algorithms maintain generalised arc-consistency and we discuss possible extensions. We also consider alternative propagation methods based on existing constraints in CP toolkits. Our experimental results on a number of benchmark problems demonstrate that propagating the multiset ordering constraint via a dedicated algorithm can be very beneficial.
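The abstract does not reproduce the filtering algorithm itself; as a minimal illustration of the ordering being propagated, the sketch below (our code, not the paper's propagator) checks whether one multiset of ground values is less than or equal to another under the multiset ordering, which for ground values reduces to a lexicographic comparison of the sorted-descending sequences.

```python
def multiset_leq(xs, ys):
    """True iff the multiset of xs is <= the multiset of ys under the multiset
    ordering: the largest values are compared first, so for ground values this
    is a lexicographic comparison of the sorted-descending sequences."""
    return sorted(xs, reverse=True) <= sorted(ys, reverse=True)

# {1,1,2} <= {1,2,2} because (2,1,1) <= (2,2,1) lexicographically.
print(multiset_leq([1, 1, 2], [1, 2, 2]))   # True
print(multiset_leq([3, 0, 0], [2, 2, 2]))   # False: (3,0,0) > (2,2,2)
```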

2.
Constraint Programming (CP) is a paradigm derived from artificial intelligence, operational research, and algorithmics that can be used to solve combinatorial problems. CP solves problems by interleaving search (assigning a value to an unassigned variable) and propagation. Constraint propagation aims at removing/filtering inconsistent values from the domains of the variables in order to reduce the search space of the problem. In this thesis, we develop filtering algorithms for two complex combinatorial optimization problems: a Capacitated Lot Sizing Problem (CLSP) and the Constrained Arborescence Problem (CAP). Each of these problems has many variants and practical applications. The CLSP is the problem of finding an optimal production plan for single or multiple items while satisfying demands of clients and respecting resource restrictions. The CLSP finds important applications in production planning. In this thesis, we introduce a CLSP in CP. In many lot sizing and scheduling problems, in particular when the planning horizon is discrete and finite, there are stocking costs to be minimized. These costs depend on the time spent between the production of an order and its delivery. We focus on developing specialized filtering algorithms to handle the stocking cost part of a class of the CLSP. We propose the global optimization constraint StockingCost when the per-period stocking cost is the same for all orders, and its generalized version, the IDStockingCost constraint (ID stands for Item Dependent). In this thesis, we also deal with a well-known problem in graph theory: the Minimum Weight Arborescence (MWA) problem. Consider a weighted directed graph in which we distinguish one vertex r as the root. An MWA rooted at r is a directed spanning tree rooted at r with minimum total weight. We focus on the CAP, which requires one to find an arborescence that satisfies some side constraints (for example resource, degree, or diameter constraints) and that has minimum weight. The CAP has many real-life applications in telecommunication networks, computer networks, transportation problems, scheduling problems, etc. After a sensitivity analysis of the MWA, we introduce the CAP in CP. We propose a dedicated global optimization constraint to handle any known variant of the CAP in CP: the MinArborescence constraint. All the proposed filtering algorithms are analyzed theoretically and evaluated experimentally. Experimental evaluations of these propagators against state-of-the-art propagators show their respective efficiencies.
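As an illustration of the cost that StockingCost reasons about (not the thesis's propagator), the sketch below computes a lower bound on the total stocking cost under two simplifying assumptions of ours: at most one order can be produced per period, and a uniform per-period cost h. Each order is scheduled as late as possible, never after its due date.

```python
def stocking_cost_lower_bound(due_dates, h=1):
    """Greedy lower bound on the stocking cost, assuming unit per-period
    capacity and a uniform per-period stocking cost h: every order is
    produced as late as possible, no later than its due date."""
    cost = 0
    next_free = float("inf")              # latest period still available
    for due in sorted(due_dates, reverse=True):
        period = min(due, next_free - 1)  # latest feasible production period
        if period < 1:
            raise ValueError("no feasible schedule with unit capacity")
        cost += h * (due - period)        # periods the order waits in stock
        next_free = period
    return cost

# Three orders all due in period 3 must be produced in periods 3, 2 and 1,
# so they wait 0, 1 and 2 periods respectively -> bound 3.
print(stocking_cost_lower_bound([3, 3, 3]))   # 3
```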

3.
This article introduces a new filtering algorithm for handling systems of quadratic equations and inequalities. Such constraints are widely used to model distance relations in numerous application areas ranging from robotics to chemistry. Classical filtering algorithms are based upon local consistencies and are thus often unable to achieve a significant pruning of the domains of the variables occurring in quadratic constraint systems. The drawback of these approaches comes from the fact that the constraints are handled independently. We introduce here a global filtering algorithm that works on a tight linear relaxation of the quadratic constraints. The Simplex algorithm is then used to narrow the domains. Since most implementations of the Simplex work with floating-point numbers, and are therefore unsafe, we provide a procedure to generate safe linearizations. We also exploit a procedure provided by Neumaier and Shcherbina to get a safe objective value when calling the Simplex algorithm. With these two procedures, we prevent the Simplex algorithm from removing any solution while filtering linear constraint systems. Experimental results on classical benchmarks show that this new algorithm yields a much more effective pruning of the domains than local consistency filtering algorithms. This article is an extended version of [23].
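The article's contribution is the safe linearization plus Simplex-based filtering; the sketch below shows only the plain, unsafe textbook version of the underlying idea on a toy constraint of ours, x² + x = 2 over a box: y = x² is relaxed by its secant and a few tangents, and an LP solver then tightens the bounds on x. The safety machinery (outward rounding, certified Neumaier–Shcherbina bounds) is deliberately omitted; the function name and SciPy usage are our choices.

```python
# pip install scipy
from scipy.optimize import linprog

def filter_quadratic(l, u):
    """Tighten the bounds of x subject to x**2 + x = 2 on [l, u], using the
    linear relaxation of y = x**2: secant y <= (l+u)x - l*u and tangents
    y >= 2m*x - m**2 at m in {l, (l+u)/2, u}. Decision variables: [x, y]."""
    A_ub, b_ub = [], []
    A_ub.append([-(l + u), 1.0]); b_ub.append(-l * u)        # secant (upper envelope)
    for m in (l, (l + u) / 2.0, u):                          # tangents (lower envelope)
        A_ub.append([2.0 * m, -1.0]); b_ub.append(m * m)
    A_eq, b_eq = [[1.0, 1.0]], [2.0]                         # x + y = 2, i.e. x**2 + x = 2
    bounds = [(l, u), (None, None)]
    lo = linprog([1, 0], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    hi = linprog([-1, 0], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return lo.x[0], hi.x[0]

# The initial box [-4, 4] shrinks to roughly [-2.57, 2.0].
print(filter_quadratic(-4.0, 4.0))
```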

4.
In this paper, we propose a way of exploiting Operations Research techniques within global constraints for cost-based domain filtering. In Constraint Programming, constraint propagation is aimed at removing from variable domains combinations of values which are proven infeasible; pruning thus derives from feasibility reasoning. When coping with optimization problems, pruning can also be performed on the basis of costs, i.e., optimality reasoning. Cost-based filtering removes combinations of values which are proven sub-optimal. For this purpose, we encapsulate in global constraints optimization components representing suitable relaxations of the constraint itself. These components embed efficient Operations Research algorithms computing the optimal solution of the relaxed problem and a gradient function representing the estimated cost of each variable-value assignment. We exploit these pieces of information for pruning and for guiding the search. We have applied these techniques to a couple of ILOG Solver global constraints (a constraint of difference and a path constraint) and tested the approach on a variety of combinatorial optimization problems such as timetabling, Travelling Salesman Problems, and scheduling problems with sequence-dependent setup times. Comparisons with pure Constraint Programming approaches and with the related literature clearly show the benefits of the proposed approach.
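A hedged sketch of the cost-based filtering idea for an alldifferent-with-costs substructure: the relaxation is the assignment problem, solved here with SciPy's Hungarian implementation, and a value v is pruned from variable i when even the best assignment forcing x_i = v exceeds the incumbent. The paper relies on incremental algorithms and reduced-cost gradients; re-solving from scratch as below only illustrates the pruning rule, and the data is invented.

```python
# pip install numpy scipy
import numpy as np
from scipy.optimize import linear_sum_assignment

def cost_based_filtering(cost, upper_bound):
    """cost[i][j] = cost of assigning value j to variable i (alldifferent).
    Returns, per variable, the values whose forced assignment cannot beat the
    incumbent `upper_bound` and can therefore be pruned."""
    cost = np.asarray(cost, dtype=float)
    n, m = cost.shape
    BIG = cost.sum() + 1.0                      # large finite penalty
    pruned = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(m):
            forced = cost.copy()
            forced[i, :] = BIG                  # forbid every other value for x_i ...
            forced[i, j] = cost[i, j]           # ... except j
            rows, cols = linear_sum_assignment(forced)
            if forced[rows, cols].sum() > upper_bound:
                pruned[i].add(j)                # x_i = j is provably sub-optimal
    return pruned

costs = [[4, 1, 3],
         [2, 0, 5],
         [3, 2, 2]]
print(cost_based_filtering(costs, upper_bound=6))   # {0: set(), 1: {2}, 2: {1}}
```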

5.
李哲  于哲舟  李占山 《软件学报》2023,34(9):4153-4166
Constraint programming (CP) is one of the classic paradigms for representing and solving combinatorial problems. Extensional constraints, also known as table constraints, are the most common type of constraint in constraint programming, and the vast majority of constraint programming problems can be expressed with table constraints. During solving, consistency algorithms are used to reduce the search space. At present, the most efficient table-constraint consistency algorithms are the simple table reduction (STR) family, such as Compact-Table (CT) and STRbit, which maintain generalized arc consistency (GAC) during search. In addition, full pairwise consistency (fPWC) is a consistency with stronger pruning power than GAC, and the most efficient algorithm for maintaining fPWC is PW-CT. Over the years, a variety of table-constraint consistency algorithms have been proposed to improve pruning power and efficiency. Factor-decomposition encoding (FDE) re-encodes the problem; it enlarges the problem model to some extent, so that maintaining the relatively weaker GAC on the new problem is equivalent to maintaining, on the original problem, ...
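A minimal sketch of the simple-table-reduction idea the abstract refers to (not CT or STRbit themselves): tuples that are incompatible with the current domains are removed from the table, and every value that no longer appears in any live tuple is pruned from its domain. Names and data are ours.

```python
def str_filter(table, domains):
    """Simple table reduction on a positive table constraint.
    `table` is a list of tuples, `domains` a list of sets (one per variable).
    Returns the reduced table and domains; an empty domain means failure."""
    live = [t for t in table
            if all(v in domains[i] for i, v in enumerate(t))]
    new_domains = [set() for _ in domains]
    for t in live:                         # collect the values still supported
        for i, v in enumerate(t):
            new_domains[i].add(v)
    return live, new_domains

table = [(1, 2, 3), (1, 3, 3), (2, 1, 3), (2, 3, 1)]
domains = [{1, 2}, {2, 3}, {3}]
print(str_filter(table, domains))
# live tuples (1,2,3) and (1,3,3); value 2 is pruned from the first variable
```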

6.
A table constraint is explicitly represented as its set of solutions or non-solutions. This ad hoc (or extensional) representation may require space exponential in the arity of the constraint, making it expensive to enforce GAC. In this paper, we address the space and time inefficiencies simultaneously by presenting the mddc constraint. mddc is a global constraint that represents its (non-)solutions with a multi-valued decision diagram (MDD). The MDD-based representation has the advantage that it can be exponentially smaller than a table. The associated GAC algorithm (called mddc) has time complexity linear in the size of the MDD, and achieves full incrementality in constant time. In addition, we show how to convert a positive or negative table constraint into an mddc constraint in time linear in the size of the table. Our experiments on structured problems, car sequencing and still-life, show that mddc is also a fast GAC algorithm for some global constraints such as sequence and regular. We also show that mddc is faster than the state-of-the-art generic GAC algorithms of Gent et al. (2007), Lecoutre and Szymanek (2006), and Lhomme and Régin (2005) for table constraints.
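The sketch below illustrates the mddc idea with a plain trie rather than a reduced MDD: a depth-first traversal keeps only the paths consistent with the current domains, and every value seen on a surviving path is GAC-supported. The real constraint works on a reduced, shared MDD and is fully incremental; names and data here are ours.

```python
def build_trie(table):
    """Nested dictionaries: one level per variable, empty dict at the leaves."""
    root = {}
    for t in table:
        node = root
        for v in t:
            node = node.setdefault(v, {})
    return root

def mdd_supports(node, domains, depth, supported):
    """DFS over the trie; returns True iff some path below `node` is consistent
    with `domains`, recording the supported values on the way down."""
    if depth == len(domains):
        return True
    alive = False
    for v, child in node.items():
        if v in domains[depth] and mdd_supports(child, domains, depth + 1, supported):
            supported[depth].add(v)
            alive = True
    return alive

table = [(1, 2, 3), (1, 3, 3), (2, 1, 3)]
domains = [{1, 2}, {2, 3}, {3}]
supported = [set() for _ in domains]
mdd_supports(build_trie(table), domains, 0, supported)
print(supported)   # [{1}, {2, 3}, {3}]: value 2 is pruned from the first variable
```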

7.
Table constraints are important in constraint programming as they are present in many real problems from areas such as configuration and databases. As a result, numerous specialized algorithms that achieve generalized arc consistency (GAC) on table constraints have been proposed. Since these algorithms achieve GAC, they operate on one constraint at a time. In this paper we propose new filtering algorithms for positive table constraints that achieve stronger local consistency properties than GAC by exploiting intersections between constraints. The first algorithm, called maxRPWC+, is a domain filtering algorithm that is based on the local consistency maxRPWC and extends the GAC algorithm of Lecoutre and Szymanek (2006). The second algorithm extends the state-of-the-art STR-based algorithms to stronger relation filtering consistencies, i.e., consistencies that can remove tuples from constraints’ relations. Experimental results from benchmark problems demonstrate that the proposed algorithms are quite competitive with standard GAC algorithms like STR2 in some classes of problems with intersecting table constraints, being orders of magnitude faster in some cases.
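A sketch of the relation-filtering idea behind these stronger consistencies: a tuple of one table constraint is dropped when no tuple of an intersecting constraint agrees with it on the shared variables. This is plain pairwise consistency applied in one direction, not the maxRPWC+ algorithm itself; names and data are ours.

```python
def pairwise_filter(table1, scope1, table2, scope2):
    """Remove from table1 every tuple that has no tuple of table2 agreeing
    with it on the variables shared by scope1 and scope2."""
    shared = [v for v in scope1 if v in scope2]
    pos1 = [scope1.index(v) for v in shared]
    pos2 = [scope2.index(v) for v in shared]
    # projections of table2 onto the shared variables
    proj2 = {tuple(t[p] for p in pos2) for t in table2}
    return [t for t in table1 if tuple(t[p] for p in pos1) in proj2]

# Two ternary constraints sharing the variables y and z.
c1 = [(0, 1, 2), (0, 2, 2), (1, 1, 1)]        # scope (x, y, z)
c2 = [(1, 2, 5), (2, 2, 7)]                   # scope (y, z, w)
print(pairwise_filter(c1, ["x", "y", "z"], c2, ["y", "z", "w"]))
# -> [(0, 1, 2), (0, 2, 2)]: (1, 1, 1) has no support on (y, z) in c2
```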

8.
This paper studies the resolution of (augmented) weighted matching problems within a constraint programming (CP) framework. The first contribution of the paper is a set of techniques that substantially improves the performance of branch-and-bound algorithms based on constraint propagation; the second contribution is the introduction of weighted matching as a global constraint (WeightedMatching) that can be propagated using specialized incremental algorithms from Operations Research. We first compare programming techniques that use constraint propagation with specialized algorithms from Operations Research, such as the Busacker and Gowen flow algorithm or the Hungarian method. Although CP is shown not to be competitive with specialized polynomial algorithms for pure matching problems, the situation is different as soon as the problems are modified with additional constraints. Using the previously mentioned set of techniques, a simpler branch-and-bound algorithm based on constraint propagation can outperform a complex specialized algorithm. These techniques have been applied with success to the Traveling Salesman Problem [5], which can be seen as an augmented matching problem. We also show that an incremental version of the Hungarian method can be used to propagate a WeightedMatching constraint. This is an extension to the weighted case of the work of Régin [19], and we show that it brings significant improvements on a timetabling example.

9.
Several combinatorial problems, such as car sequencing and rostering, feature sequence constraints, which restrict the number of occurrences of certain values in every subsequence of a given length. We present three new filtering algorithms for the sequence constraint, including the first that establishes domain consistency in polynomial time. The filtering algorithms have complementary strengths: one borrows ideas from dynamic programming; another reformulates the constraint as a regular constraint; the last is customized. The last two algorithms establish domain consistency, and the customized one does so in polynomial time. We provide experimental results that demonstrate the practical usefulness of each. We also show that the customized algorithm applies naturally to a generalized version of the sequence constraint that allows subsequences of varied lengths. The significant computational advantage of using a single generalized sequence constraint over a semantically equivalent collection of among or sequence constraints is demonstrated empirically.
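As a minimal illustration of the sequence semantics on ground values (not one of the paper's filtering algorithms), the sketch below uses prefix sums to count, in every window of length q, the occurrences of the restricted values and checks that the count stays within [l, u]. Names and data are ours.

```python
def sequence_ok(values, counted, q, l, u):
    """True iff every window of length q of `values` contains between l and u
    occurrences of the values in `counted` (the among/sequence rule)."""
    hits = [1 if v in counted else 0 for v in values]
    prefix = [0]
    for h in hits:
        prefix.append(prefix[-1] + h)
    return all(l <= prefix[i + q] - prefix[i] <= u
               for i in range(len(values) - q + 1))

# Car-sequencing flavour: at most 1 car with the option in every window of 3.
print(sequence_ok([1, 0, 0, 1, 0, 0, 1], counted={1}, q=3, l=0, u=1))  # True
print(sequence_ok([1, 1, 0, 0], counted={1}, q=3, l=0, u=1))           # False
```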

10.
Constraints have played a central role in CP because they capture key substructures of a problem and efficiently exploit them to boost inference. This paper intends to do the same for search, proposing constraint-centered heuristics that guide the exploration of the search space toward areas likely to contain a high number of solutions. We first propose new search heuristics based on solution-counting information at the level of individual constraints. We then describe efficient algorithms to evaluate the number of solutions of two important families of constraints: occurrence counting constraints, such as alldifferent, and sequencing constraints, such as regular. In both cases we take advantage of existing filtering algorithms to speed up the evaluation. Experimental results on benchmark problems show the effectiveness of our approach. An earlier version of this paper appeared as [24].
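As an illustration of the counting layer such heuristics need, the sketch below counts the assignments accepted by a regular constraint under the current domains with a forward dynamic program over the automaton; a solution-counting heuristic would then turn these counts into value-selection scores. The automaton and names are ours, not the paper's implementation.

```python
def count_regular(domains, delta, start, finals):
    """Count assignments consistent with the current domains that the DFA
    (transition map delta[(state, value)] -> state) accepts."""
    counts = {start: 1}                       # number of prefixes per state
    for dom in domains:
        nxt = {}
        for state, c in counts.items():
            for v in dom:
                t = delta.get((state, v))
                if t is not None:
                    nxt[t] = nxt.get(t, 0) + c
        counts = nxt
    return sum(c for s, c in counts.items() if s in finals)

# "No two consecutive 1s" over three 0/1 variables: 5 accepted assignments.
delta = {("a", 0): "a", ("a", 1): "b", ("b", 0): "a"}
print(count_regular([{0, 1}] * 3, delta, start="a", finals={"a", "b"}))   # 5
```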

11.
Constraint Programming (CP) has been successfully applied to several combinatorial optimization problems. One of its advantages is the availability of complex global constraints that perform efficient propagation and interact with each other through shared variables. However, CP techniques have shown their limitations in dealing with optimization problems, since the link between the objective function and the problem decision variables is often quite loose and does not produce effective propagation. We propose to integrate optimization components into global constraints, aimed at optimally solving a relaxation corresponding to the constraint itself. The optimal solution of the relaxation provides pieces of information which can be exploited to perform pruning on the basis of cost-based reasoning. In fact, we exploit reduction rules based on lower-bound and reduced-cost calculations to remove those branches which cannot improve the best solution found so far. The interest of integrating efficient, well-known Operations Research (OR) algorithms into CP is mainly due to the smooth interaction between CP domain reduction and the information provided by the relaxation acting on variable domains, which can be seen as a communication channel among different techniques. We have applied this technique to symmetric and asymmetric Traveling Salesman Problem (TSP) instances, both because the TSP is an interesting problem arising in many real-life applications and because pure CP techniques lead to disappointing results for this problem. We have tested the proposed optimization constraints using ILOG Solver. Computational results on benchmarks available from the literature, and a comparison with related approaches, are described in the paper. On pure TSPs the proposed method improves the performance of CP solvers, but it is still far from the state-of-the-art OR techniques for solving the problem. However, due to the flexibility of the CP framework, we could easily use the same technique on the TSP with Time Windows, a time-constrained variant of the TSP. For this type of problem, we achieve results that are comparable with state-of-the-art OR results.

12.
This paper presents our investigations of a hybrid constraint programming based column generation (CP–CG) approach to nurse rostering problems. We present a complete model that formulates all the complex real-world constraints in several benchmark nurse rostering problems. The hybrid CP–CG approach features not only the effective relaxation and optimality reasoning of linear programming but also the expressive power of constraint programming in modeling the complex logical constraints of nurse rostering problems. In solving the CP pricing subproblem, we propose two strategies to generate promising columns that contribute to the efficiency of the CG procedure: a Depth-Bounded Discrepancy Search is employed to obtain diverse columns, and a cost threshold is adaptively tightened, based on information collected during the search, to generate columns of good quality. Computational experiments on a set of benchmark nurse rostering problems demonstrate a faster convergence with the two strategies and justify the effectiveness and efficiency of the hybrid CP–CG approach.
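A sketch of the pricing test at the heart of column generation: given the dual prices of the restricted master's covering constraints, a candidate roster (column) is worth adding only if its reduced cost is negative; the paper's adaptive cost threshold would replace the zero below. All numbers and names are illustrative, not taken from the paper.

```python
def reduced_cost(column_cost, coverage, duals):
    """Reduced cost of a candidate column: its own cost minus the dual value
    collected from every covering constraint it contributes to."""
    return column_cost - sum(duals[c] * a for c, a in coverage.items())

def worth_adding(column_cost, coverage, duals, threshold=0.0):
    """The CP pricing subproblem would only keep columns below the threshold."""
    return reduced_cost(column_cost, coverage, duals) < threshold

# Illustrative duals of three shift-coverage rows of the restricted master.
duals = {"mon_day": 3.0, "mon_night": 5.0, "tue_day": 2.0}
roster = {"mon_day": 1, "tue_day": 1}          # shifts this roster covers
print(reduced_cost(4.0, roster, duals))        # 4 - (3 + 2) = -1.0
print(worth_adding(4.0, roster, duals))        # True: add this column
```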

13.
Joseph Scott. Constraints, 2017, 22(1): 99–100
In constraint programming (CP), a combinatorial problem is modeled declaratively as a conjunction of constraints, each of which captures some of the combinatorial substructure of the problem. Constraints are more than a modeling convenience: every constraint is partially implemented by an inference algorithm, called a propagator, that rules out some but not necessarily all infeasible candidate values of one or more unknowns in the scope of the constraint. Interleaving propagation with systematic search leads to a powerful and complete solution method, combining a high degree of re-usability with natural, high-level modeling. A propagator can be characterized as a sound approximation of a constraint on an abstraction of sets of candidate values; propagators that share an abstraction are similar in the strength of the inference they perform when identifying infeasible candidate values. In this thesis, we consider abstractions of sets of candidate values that may be described by an elegant mathematical formalism, the Galois connection. We develop a theoretical framework from the correspondence between Galois connections and propagators, unifying two disparate views of the abstraction-propagation connection, namely the oft-overlooked distinction between representational and computational over-approximations. Our framework yields compact definitions of propagator strength, even in complicated cases (i.e., involving several types, or unknowns with internal structure); it also yields a method for the principled derivation of propagators from constraint definitions. We apply this framework to the extension of an existing CP solver to constraints over strings, that is, words of finite length. We define, via a Galois connection, an over-approximation for bounded-length strings, and demonstrate two different methods for implementing this over-approximation in a CP solver. First we use the Galois connection to derive a bounded-length string representation as an aggregation of existing scalar types; propagators for this representation are obtained by manual derivation, automated synthesis, or a combination of the two. Then we implement a string variable type, motivating design choices with knowledge gained from the construction of the over-approximation. The resulting CP solver extension not only substantially eases modeling for combinatorial string problems, but also leads to substantial efficiency improvements over prior CP methods.
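A toy version of the kind of over-approximation the thesis discusses: the abstraction maps a set of bounded-length strings to a length interval plus one character set per position, and because two abstract values live in the same lattice, combining propagator outputs is just a componentwise intersection. This is our illustration of the Galois-connection style, not the solver extension itself.

```python
def alpha(strings, max_len):
    """Abstract a set of strings as (length interval, per-position char sets)."""
    lengths = [len(s) for s in strings]
    chars = [set(s[i] for s in strings if i < len(s)) for i in range(max_len)]
    return (min(lengths), max(lengths)), chars

def meet(a, b):
    """Greatest lower bound of two abstract values: componentwise intersection."""
    (alo, ahi), achars = a
    (blo, bhi), bchars = b
    return (max(alo, blo), min(ahi, bhi)), [x & y for x, y in zip(achars, bchars)]

A = alpha({"cat", "car", "cab"}, max_len=4)
B = alpha({"ca", "cart"}, max_len=4)
print(meet(A, B))   # ((3, 3), [{'c'}, {'a'}, {'r'}, set()])
```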

14.
Semi-supervised graph clustering: a kernel approach
Semi-supervised clustering algorithms aim to improve clustering results using limited supervision. The supervision is generally given as pairwise constraints; such constraints are natural for graphs, yet most semi-supervised clustering algorithms are designed for data represented as vectors. In this paper, we unify vector-based and graph-based approaches. We first show that a recently-proposed objective function for semi-supervised clustering based on Hidden Markov Random Fields, with squared Euclidean distance and a certain class of constraint penalty functions, can be expressed as a special case of the weighted kernel k-means objective (Dhillon et al., in Proceedings of the 10th International Conference on Knowledge Discovery and Data Mining, 2004a). A recent theoretical connection between weighted kernel k-means and several graph clustering objectives enables us to perform semi-supervised clustering of data given either as vectors or as a graph. For graph data, this result leads to algorithms for optimizing several new semi-supervised graph clustering objectives. For vector data, the kernel approach also enables us to find clusters with non-linear boundaries in the input data space. Furthermore, we show that recent work on spectral learning (Kamvar et al., in Proceedings of the 17th International Joint Conference on Artificial Intelligence, 2003) may be viewed as a special case of our formulation. We empirically show that our algorithm is able to outperform current state-of-the-art semi-supervised algorithms on both vector-based and graph-based data sets.
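A compact sketch of the construction the paper builds on: pairwise constraints are folded into the kernel matrix (a reward for must-links, a penalty for cannot-links), and kernel k-means then measures point-to-cluster distances with the kernel trick alone. This is our simplified rendering with a linear kernel and uniform weights, not the authors' algorithm; the paper additionally shifts the kernel to keep it positive semidefinite.

```python
# pip install numpy
import numpy as np

def constrained_kernel(X, must, cannot, w=2.0):
    """Linear kernel plus pairwise-constraint terms: must-link pairs rewarded
    (+w), cannot-link pairs penalised (-w)."""
    K = X @ X.T
    for i, j in must:
        K[i, j] += w; K[j, i] += w
    for i, j in cannot:
        K[i, j] -= w; K[j, i] -= w
    return K

def kernel_distance(K, i, cluster):
    """Squared distance from point i to the centroid of `cluster` in feature
    space, computed from kernel entries only (the kernel trick)."""
    idx = np.asarray(cluster)
    return K[i, i] - 2.0 * K[i, idx].mean() + K[np.ix_(idx, idx)].mean()

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
plain = X @ X.T
constrained = constrained_kernel(X, must=[(1, 2)], cannot=[(0, 3)])

# The must-link (1,2) pulls point 1 toward cluster {2,3}; the cannot-link
# (0,3) pushes point 0 away from it.
print(kernel_distance(plain, 1, [2, 3]), kernel_distance(constrained, 1, [2, 3]))
print(kernel_distance(plain, 0, [2, 3]), kernel_distance(constrained, 0, [2, 3]))
```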

15.
This paper addresses the modeling of optimization problems that contain substructures involving constraints on sequences of decision variables. Such constraints can be very complex to express in Mixed Integer Programming (MIP). We suggest an approach, inspired by global constraints used in Constraint Programming (CP), that exploits formal languages for modeling such substructures in MIP. More precisely, we first show how automata, as the CP regular constraint does, can express the allowed patterns for the values taken by the constrained sequence of variables. Secondly, we present how context-free grammars can contribute to formulating constraints on sequences of variables in a MIP model. Experimental results on both approaches show that they not only facilitate modeling but also give models that are easier for MIP solvers to solve than compact assignment MIP formulations.
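A sketch of the unfolding step commonly used when turning an automaton into MIP constraints: the DFA is expanded into a layered transition graph with one layer per sequence position, each surviving arc becoming a 0-1 flow variable subject to flow conservation (the conservation constraints themselves are not emitted here, and no solver is called). The automaton is illustrative, not taken from the paper.

```python
def unfold_dfa(n, delta, start, finals, alphabet):
    """Unfold a DFA into the layered graph behind MIP 'regular' formulations:
    one arc (layer, state, value, next_state) per transition that is both
    reachable from the start and able to reach a final state."""
    arcs, reachable = [], {start}
    for layer in range(n):                        # forward pass: reachability
        nxt = set()
        for s in reachable:
            for v in alphabet:
                t = delta.get((s, v))
                if t is not None:
                    arcs.append((layer, s, v, t))
                    nxt.add(t)
        reachable = nxt
    alive = set(finals)                           # backward pass: co-reachability
    for layer in range(n - 1, -1, -1):
        layer_arcs = [a for a in arcs if a[0] == layer and a[3] in alive]
        alive = {a[1] for a in layer_arcs}
        arcs = [a for a in arcs if a[0] != layer] + layer_arcs
    return arcs                                   # one binary flow variable per arc

delta = {("a", 0): "a", ("a", 1): "b", ("b", 0): "a"}   # no two consecutive 1s
for arc in sorted(unfold_dfa(3, delta, "a", finals={"a"}, alphabet={0, 1})):
    print(arc)   # 7 arcs; the last-layer arc into the non-accepting state b is dropped
```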

16.
The ant colony optimization (ACO) metaheuristic represents a new class of algorithms particularly suited to solving real-world combinatorial optimization problems. ACO algorithms, first published in 1991 by M. Dorigo [Optimization, learning and natural algorithms (in Italian). Ph.D. Thesis, Dipartimento di Elettronica, Politecnico di Milano, Milan, 1992] and his coworkers, have been applied, particularly starting from 1999 (Bonabeau et al., Swarm intelligence: from natural to artificial systems, Oxford University Press, New York, 1999; Dorigo et al., Artificial Life 5(2):137–172, 1999; Dorigo and Di Caro, Ant colony optimization: a new metaheuristic, IEEE Press, Piscataway, NJ, 1999; Dorigo et al., Ant colony optimization and swarm intelligence, Springer, Berlin Heidelberg New York, 2004; Dorigo and Stützle, Ant colony optimization, MIT Press, Cambridge, MA, 2004), to several kinds of optimization problems such as the traveling salesman problem, the quadratic assignment problem, vehicle routing, sequential ordering, scheduling, graph coloring, the management of communications networks, and so on. The ACO metaheuristic takes its inspiration from studies of the foraging behavior of real ant colonies. The main characteristic of such colonies is that individuals have no global knowledge of problem solving but communicate indirectly among themselves, depositing on the ground a chemical substance called pheromone, which probabilistically influences the choice of subsequent ants, which tend to follow paths where the pheromone concentration is higher. Such behavior, called stigmergy, is the basic mechanism that controls ant activity and permits the ants to find the shortest path connecting their nest to a food source. In this paper, it is shown how to convert natural ant behavior into algorithms able to escape from local minima and find globally minimal solutions to constrained combinatorial problems. Some examples on plane trusses are also presented.
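A deliberately small ACO sketch for the symmetric TSP, showing the two mechanisms the survey describes: pheromone-biased probabilistic tour construction and pheromone evaporation/reinforcement (stigmergy). Parameter values and the instance are illustrative, not tuned, and this is not the authors' truss application.

```python
# pip install numpy
import numpy as np

def aco_tsp(dist, ants=20, iters=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal ant colony optimization for the symmetric TSP."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                                  # pheromone trails
    eta = 1.0 / (np.asarray(dist, float) + np.eye(n))      # heuristic visibility
    best_tour, best_len = None, np.inf
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = np.array([tau[i, j] ** alpha * eta[i, j] ** beta for j in cand])
                tour.append(cand[rng.choice(len(cand), p=w / w.sum())])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        tau *= (1.0 - rho)                                 # evaporation
        for length, tour in tours:                         # reinforcement
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += 1.0 / length
                tau[b, a] += 1.0 / length
    return best_tour, best_len

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(aco_tsp(dist, iters=50))   # the optimal tour on this instance has length 18
```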

17.
Distributed constraint satisfaction problems (DisCSPs) are composed of agents, each holding its own variables, which are connected by constraints to the variables of other agents. Due to the distributed nature of the problem, message delay can have unexpected effects on the behavior of distributed search algorithms on DisCSPs, as has recently been shown in experimental studies of asynchronous backtracking algorithms (Bejar et al., Artif. Intell., 161:117–148, 2005; Silaghi and Faltings, Artif. Intell., 161:25–54, 2005). To evaluate the impact of message delay on the run of DisCSP search algorithms, a model for distributed performance measures is presented. The model counts the number of non-concurrent constraint checks needed to arrive at a solution as a non-concurrent measure of distributed computation. A simpler version measures the cost of distributed computation by the number of non-concurrent computation steps. An algorithm for computing these distributed measures of computational effort is described. The realization of the model for measuring the performance of distributed search algorithms is a simulator that includes the cost of message delays. Two families of distributed search algorithms on DisCSPs are investigated: algorithms that run a single search process, and algorithms that run multiple search processes. The two families of algorithms are described and associated with existing algorithms. The performance of three representative algorithms from these two families is measured on randomly generated instances of DisCSPs with delayed messages. The delay of messages is found to have a strong negative effect on single-search-process algorithms, whether synchronous or asynchronous. Multiple-search-process algorithms, on the other hand, are only lightly affected by message delay.
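A sketch of the non-concurrent cost model the abstract describes: each agent keeps a counter of constraint checks, every message carries the sender's counter, and a receiving agent first raises its own counter to the carried value (Lamport-clock style), so the largest counter at termination is the non-concurrent cost. The paper's simulator and delay models are not reproduced; the class and numbers are our illustration.

```python
class Agent:
    """Tracks non-concurrent constraint checks (NCCCs) for a DisCSP agent."""

    def __init__(self, name):
        self.name = name
        self.nccc = 0                         # non-concurrent check counter

    def do_checks(self, k):
        self.nccc += k                        # k local constraint checks

    def send(self, other, delay_cost=0):
        # the message carries the sender's counter plus an optional delay cost
        other.receive(self.nccc + delay_cost)

    def receive(self, carried):
        self.nccc = max(self.nccc, carried)   # Lamport-style update

a1, a2 = Agent("A1"), Agent("A2")
a1.do_checks(10)          # A1 performs 10 checks
a2.do_checks(3)           # concurrently, A2 performs 3 checks
a1.send(a2)               # A2 now continues after A1's work: counter jumps to 10
a2.do_checks(5)
print(a2.nccc)            # 15 non-concurrent constraint checks on this path
```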

18.
Consistency techniques for continuous constraints
We consider constraint satisfaction problems with variables in continuous, numerical domains. Contrary to most existing techniques, which focus on computing one single optimal solution, we address the problem of computing a compact representation of the space of all solutions admitted by the constraints. In particular, we show how globally consistent (also called decomposable) labelings of a constraint satisfaction problem can be computed. Our approach is based on approximating regions of feasible solutions by 2^k-trees, a representation commonly used in computer vision and image processing. We give simple and stable algorithms for computing labelings with arbitrary degrees of consistency. The algorithms can process constraints and solution spaces of arbitrary complexity, but with a fixed maximal resolution. Previous work has shown that when constraints are convex and binary, path-consistency is sufficient to ensure global consistency. We show that for continuous domains, this result can be generalized to ternary and in fact arbitrary n-ary constraints using the concept of (3,2)-relational consistency. This leads to polynomial-time algorithms for computing globally consistent labelings for a large class of constraint satisfaction problems with continuous variables.

19.
Scheduling is one of the most successful application areas of constraint programming, mainly thanks to special global constraints designed to model resource restrictions. Among these global constraints, edge-finding and not-first/not-last are the most popular filtering algorithms for unary resources. In this paper we introduce new O(n log n) versions of these two filtering algorithms, together with a third O(n log n) filtering algorithm called detectable precedences. These algorithms use special data structures, the Θ-tree and the Θ-Λ-tree. These data structures are designed especially for "what-if" reasoning about a set of activities, so we also propose to use them for handling so-called optional activities, i.e., activities which may or may not appear on the resource. In particular, we propose new O(n log n) variants of filtering algorithms which are able to handle optional activities: overload checking, detectable precedences, and not-first/not-last.
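The core of the Θ-tree is a node recurrence over the total processing time and the earliest completion time of the activities in each subtree: sum = sumL + sumR and ect = max(ectR, ectL + sumR). The flat recursive sketch below computes the same quantities for a set of activities ordered by earliest start time; it is O(n) per evaluation rather than the paper's balanced, incrementally updated tree, and the names are ours.

```python
def theta_combine(left, right):
    """Combine (sum_p, ect) of two subtrees, as in a Theta-tree node."""
    sum_l, ect_l = left
    sum_r, ect_r = right
    return sum_l + sum_r, max(ect_r, ect_l + sum_r)

def theta_ect(activities):
    """Earliest completion time of a set of activities (est, duration) on a
    unary resource; `activities` must be sorted by non-decreasing est.
    Leaves carry (p, est + p); the empty set is (0, -inf)."""
    if not activities:
        return 0, float("-inf")
    if len(activities) == 1:
        est, p = activities[0]
        return p, est + p
    mid = len(activities) // 2
    return theta_combine(theta_ect(activities[:mid]), theta_ect(activities[mid:]))

acts = [(0, 5), (1, 4), (8, 2)]     # (est, duration), sorted by est
print(theta_ect(acts)[1])           # 11: the first two finish at 9, the last at 11
```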

20.
Michel Rueher. Software, 1993, 23(2): 177–200
This paper explores Prolog III's capabilities through the study of two short but non-trivial applications, chosen to show the advantages of constraint logic programming languages over conventional Prologs, especially for tackling combinatorial problems. The first application (manipulation of causal graphs) highlights the expressive power of constraints on Booleans and shows that constraint logic programming techniques provide great flexibility for exploring the domain of an application, and therefore significantly improve Prolog's prototyping capabilities. The second application (map colouring) demonstrates that a combined use of Prolog III's numerical constraints and constraints on trees enables one to improve solutions based on standard backtracking through the use of simple heuristics. Such heuristics allow a drastic reduction of the search space and yield good performance even for quite complex problems. The adaptability of the approach to other combinatorial problems is shown on several examples.

