Similar Documents
20 similar documents retrieved.
1.
李宏博  梁艳春  李占山 《软件学报》2016,27(11):2701-2711
Generalized arc consistency (GAC) algorithms are a core method for solving constraint satisfaction problems. In principle, table constraints can represent any constraint relation, and over the past decade many GAC algorithms for table constraints have been proposed. Among them, tabular reduction algorithms are highly efficient, but existing tabular reduction algorithms apply only to positive table constraints and cannot be used directly on negative table constraints. This paper first proposes a tabular reduction algorithm, STR-N, that works directly on negative table constraints, and then presents two improved versions, STR-N2 and STR-NIC. Experimental results show that the STR-N algorithms have a clear efficiency advantage on negative table constraints.
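A minimal way to see what GAC means on a negative table is to count conflicts: a value remains supported as long as not every candidate tuple through it is forbidden. The sketch below is a naive counting filter in Python, not the STR-N algorithm itself, and the function name and data layout are illustrative assumptions.

```python
from math import prod

def gac_negative_table(domains, forbidden):
    """Naive GAC filter for one negative table constraint (a counting sketch,
    not STR-N).  `domains` is a list of value sets, one per variable in the
    constraint scope; `forbidden` is the list of forbidden tuples.
    Returns the filtered domains, or None on domain wipe-out."""
    domains = [set(d) for d in domains]
    changed = True
    while changed:                       # iterate to a fixpoint within the constraint
        changed = False
        # Forbidden tuples whose values are all still in the current domains.
        valid_forbidden = [t for t in forbidden
                           if all(t[i] in domains[i] for i in range(len(domains)))]
        for i, dom in enumerate(domains):
            # Number of candidate tuples that assign variable i any fixed value.
            others = prod(len(d) for j, d in enumerate(domains) if j != i)
            for a in list(dom):
                conflicts = sum(1 for t in valid_forbidden if t[i] == a)
                if conflicts >= others:  # every tuple through (i, a) is forbidden
                    dom.discard(a)
                    changed = True
            if not dom:
                return None
    return domains
```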

2.
杨明奇  李占山  张家晨 《软件学报》2019,30(11):3355-3363
A table constraint is an extensional knowledge representation: each constraint enumerates all allowed (or forbidden) tuples over its variable scope. Generalized arc consistency (GAC) is the most widely used consistency for solving constraint satisfaction problems, and Simple Tabular Reduction (STR) is a family of efficient algorithms for maintaining GAC. During backtracking search, STR dynamically removes invalid tuples, which lowers the cost of looking for supports and gives constant-time backtracking; it is widely used on high-arity table constraints, and many STR-based refinements have been proposed, with compressed representations of the tuple set currently receiving the most attention. Based likewise on the idea of dynamically maintaining the valid portion of the tuple set, this paper proposes an algorithm for STR that detects and deletes invalid tuples and updates supports for variables; it operates on the original table constraints and retains constant-time backtracking. Experimental results show that the algorithm maintains GAC on table constraints more efficiently than existing STR algorithms not based on compressed representations, and on some instances it also outperforms the latest STR algorithms based on compressed tuple-set representations.
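The core tabular-reduction step the abstract builds on can be sketched in a few lines: drop the tuples invalidated by earlier domain reductions, then keep only the values that still appear in some remaining tuple. This is an STR1-style illustration in Python, not the algorithm proposed in the paper.

```python
def str_propagate(table, domains):
    """One simple-tabular-reduction pass over a positive table constraint.
    `table` is the list of currently valid allowed tuples and `domains` a list
    of value sets over the constraint scope.  Returns (reduced table, reduced
    domains), or None on wipe-out.  One pass suffices to enforce GAC for a
    single positive table constraint."""
    # 1. Drop tuples that mention a value no longer in its variable's domain.
    valid = [t for t in table
             if all(t[i] in domains[i] for i in range(len(domains)))]
    if not valid:
        return None
    # 2. A value survives iff it occurs in at least one remaining tuple.
    supported = [set() for _ in domains]
    for t in valid:
        for i, a in enumerate(t):
            supported[i].add(a)
    new_domains = [dom & supported[i] for i, dom in enumerate(domains)]
    if any(not d for d in new_domains):
        return None
    return valid, new_domains

# Example: the third tuple is invalidated, so value 2 of the first variable goes.
table = [(1, 2, 3), (1, 3, 3), (2, 2, 1)]
domains = [{1, 2}, {2, 3}, {3}]
print(str_propagate(table, domains))   # ([(1, 2, 3), (1, 3, 3)], [{1}, {2, 3}, {3}])
```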

3.
王震  李哲  李占山 《软件学报》2021,32(11):3530-3540
Table constraints are widely studied in constraint programming (CP). Currently, the most efficient algorithms for solving table constraint problems are CT (compact-table) and STRbit (simple tabular reduction bit), which maintain generalized arc consistency (GAC) during search. Full pairwise consistency (fPWC) is a consistency stronger than GAC; the most efficient algorithm enforcing fPWC is PW-CT, but it cannot be implemented directly in a general-purpose solver. Factor-decomposition encoding (FDE) is an encoding that achieves fPWC and is usually combined with simple tabular reduction (STR) algorithms. The most efficient current STR algorithms use bitset data structures, and solving FDE instances with them can cause memory overflow. This paper proposes STRFDE, a bitset-based method for solving FDE instances. It combines the advantages of CT and STRbit, keeping memory usage as small as possible while preserving solving efficiency. Experimental results show that the algorithm is competitive on many instances with non-trivial constraint intersections.
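The bitset idea shared by CT and STRbit can be illustrated with plain Python integers used as arbitrary-width bit vectors: one static mask per (variable, value) marking the tuples that contain it, plus one dynamic mask of still-valid tuples. This is only a sketch of the representation, not the STRFDE algorithm or its memory-saving layout.

```python
def build_masks(table, domains):
    """Static supports: for each (variable, value), a bitmask of the tuples
    containing it (Python ints stand in for the bitsets of CT/STRbit)."""
    masks = [{a: 0 for a in dom} for dom in domains]
    for idx, t in enumerate(table):
        for i, a in enumerate(t):
            masks[i][a] |= 1 << idx
    return masks

def bitset_filter(domains, masks, current):
    """Filter with respect to `current`, the bitmask of still-valid tuples.
    Returns (new current mask, new domains) or None on wipe-out."""
    for i, dom in enumerate(domains):
        alive = 0
        for a in dom:                      # tuples whose i-th value is still present
            alive |= masks[i][a]
        current &= alive
    if current == 0:
        return None
    new_domains = [{a for a in dom if masks[i][a] & current}
                   for i, dom in enumerate(domains)]
    return current, new_domains

# Initially every tuple is valid: current = (1 << len(table)) - 1.
```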

4.
Table constraints are important in constraint programming as they are present in many real problems from areas such as configuration and databases. As a result, numerous specialized algorithms that achieve generalized arc consistency (GAC) on table constraints have been proposed. Since these algorithms achieve GAC, they operate on one constraint at a time. In this paper we propose new filtering algorithms for positive table constraints that achieve stronger local consistency properties than GAC by exploiting intersections between constraints. The first algorithm, called maxRPWC+, is a domain filtering algorithm that is based on the local consistency maxRPWC and extends the GAC algorithm of Lecoutre and Szymanek (2006). The second algorithm extends the state-of-the-art STR-based algorithms to stronger relation filtering consistencies, i.e., consistencies that can remove tuples from constraints’ relations. Experimental results from benchmark problems demonstrate that the proposed algorithms are quite competitive with standard GAC algorithms like STR2 in some classes of problems with intersecting table constraints, being orders of magnitude faster in some cases.

5.
杨明奇  李占山  李哲 《软件学报》2017,28(12):3156-3166
Generalized arc consistency is the most widely used consistency for solving constraint satisfaction problems, and MDDc, STR2, and STR3 are among the most commonly used algorithms for maintaining it on table constraints. MDDc is based on a compressed representation of the constraint: it encodes a table constraint as a multi-valued decision diagram, which compresses well when the tuples overlap substantially. STR3, like STR2, dynamically maintains the set of valid tuples, and when the tuple set shrinks slowly, STR3 maintains generalized arc consistency more efficiently than STR2. A detailed analysis shows that the most time-consuming operations are finding the valid outgoing edges of a node in MDDc and detecting and deleting invalid tuples in STR3. This paper proposes adaptive versions of these operations, AdaptiveMDDc and AdaptiveSTR, which, for the same operation, adaptively choose the cheapest implementation according to the situation at different stages of backtracking search. Thanks to the low cost of this decision and the efficiency gap between the methods at different search stages, AdaptiveMDDc and AdaptiveSTR are significantly faster than the original algorithms; on some problems AdaptiveSTR is more than three times faster than STR3.

6.
李哲  于哲舟  李占山 《软件学报》2023,34(9):4153-4166
约束规划(constraint programming, CP)是表示和求解组合问题的经典范式之一.扩展约束(extensional constraint)或称表约束(table constraint)是约束规划中最为常见的约束类型.绝大多数约束规划问题都可以用表约束表达.在问题求解时,相容性算法用于缩减搜索空间.目前,最为高效的表约束相容性算法是简单表约缩减(simple table reduction, STR)算法簇,如Compact-Table (CT)和STRbit算法.它们在搜索过程中维持广义弧相容(generalized arc consistency, GAC).此外,完全成对相容性(full pairwise consistency, fPWC)是一种比GAC剪枝能力更强的相容性.最为高效的维持fPWC算法是PW-CT算法.多年来,人们提出了多种表约束相容性算法来提高剪枝能力和执行效率.因子分解编码(factor-decomposition encoding, FDE)通过对平凡问题重新编码.它一定程度地扩大了问题模型,使在新的问题上维持相对较弱的GAC等价于在原问题...  相似文献   

7.
Filtering algorithms for table constraints can be classified in two categories: constraint-based and value-based. In the constraint-based approaches, the propagation queue only contains information on the constraints that must be reconsidered. For the value-based approaches, the propagation queue also contains information on the removed values. This paper proposes five efficient value-based algorithms for table constraints. Two of them (AC5TCOpt-Tr and AC5TCOpt-Sparse) are proved to have an optimal time complexity of O(r·t+r·d) per table constraint. Substantial experimental results are presented. An empirical analysis is conducted on the effect of the arity of the tables. The experiments show that our propagators are the best when the arity of the table is 3 or 4. Indeed, on instances containing only binary constraints, our algorithms are outperformed by the classical AC algorithm AC3rm, which is dedicated to binary constraints. However, all our algorithms outperform the existing state-of-the-art constraint-based STR2+ and MDDc algorithms and the optimal value-based STR3 algorithm on those instances. On instances with small-arity tables (up to arity 4), all our algorithms are generally faster than STR2+, MDDc, and STR3. AC5TCOpt-Sparse is globally the best propagator on those benchmarks. On benchmarks containing large-arity tables (arity 5 or more), each of the three existing state-of-the-art algorithms is the winning strategy on one different benchmark.

8.
A table constraint is explicitly represented as its set of solutions or non-solutions. This ad hoc (or extensional) representation may require space exponential in the arity of the constraint, making enforcing GAC expensive. In this paper, we address the space and time inefficiencies simultaneously by presenting the mddc constraint. mddc is a global constraint that represents its (non-)solutions with a multi-valued decision diagram (MDD). The MDD-based representation has the advantage that it can be exponentially smaller than a table. The associated GAC algorithm (called mddc) has time complexity linear in the size of the MDD, and achieves full incrementality in constant time. In addition, we show how to convert a positive or negative table constraint into an mddc constraint in time linear in the size of the table. Our experiments on structured problems, car sequencing and still-life, show that mddc is also a fast GAC algorithm for some global constraints such as sequence and regular. We also show that mddc is faster than the state-of-the-art generic GAC algorithms in Gent et al. (2007), Lecoutre and Szymanek (2006), and Lhomme and Régin (2005) for table constraints.
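The mddc idea, GAC in time linear in the size of the decision diagram, can be sketched with a depth-first traversal that memoizes whether each node still reaches the accepting terminal. The node layout (dicts mapping values to children) and the names below are illustrative assumptions; the real mddc algorithm additionally maintains incremental state across search.

```python
TRUE_NODE = object()                     # accepting terminal of the MDD

def mdd_gac(root, domains):
    """GAC on a table constraint stored as an MDD.  An internal node at level
    i is a dict {value: child}; `domains` is a list of value sets, one per
    level.  Returns the filtered domains or None on wipe-out."""
    supported = [set() for _ in domains]
    alive = {}                           # id(node) -> can it still reach TRUE_NODE?

    def visit(node, level):
        if node is TRUE_NODE:
            return True
        key = id(node)
        if key in alive:
            return alive[key]
        ok = False
        for value, child in node.items():
            # Only follow edges whose label is still in the current domain.
            if value in domains[level] and visit(child, level + 1):
                supported[level].add(value)
                ok = True
        alive[key] = ok
        return ok

    if not visit(root, 0):
        return None
    new_domains = [dom & supported[i] for i, dom in enumerate(domains)]
    return None if any(not d for d in new_domains) else new_domains
```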

9.
Table constraints, also known as extensional constraints, are the most common form of constraint in constraint programming. Table compression methods can greatly reduce space consumption by representing the tuple set compactly, while also speeding up GAC algorithms. Cartesian-product representation and short supports are the two most common table compression methods, and the compression ratio each achieves on a given problem is the main factor determining its effectiveness. This paper proposes an adaptive table compression method based on STR algorithms, which adaptively selects the compression method with the higher compression ratio while solving the problem. Applying it to STR2 yields the STR2 Adaptive algorithm, which captures the advantages of both compression methods. Experimental results show that STR2 Adaptive adaptively chooses the best compression method on the vast majority of instances, effectively reducing the space consumption and CPU time of STR2. The adaptive method is then extended to STRbit, which uses an efficient bit-vector representation, yielding the STRbit Adaptive algorithm; experiments show that STRbit Adaptive likewise generally outperforms STRbit.

10.
Much research in the area of constraint processing has recently been focused on extracting small unsatisfiable “cores” from unsatisfiable constraint systems with the goal of finding minimal unsatisfiable subsets (MUSes). While most techniques have provided ways to find an approximation of an MUS (not necessarily minimal), we have developed a sound and complete algorithm for producing all MUSes of an unsatisfiable constraint system. In this paper, we describe a relationship between satisfiable and unsatisfiable subsets of constraints that we subsequently use as the foundation for MUS extraction algorithms, implemented for Boolean satisfiability constraints. The algorithms provide a framework with which many related subproblems can be solved, including relaxations of completeness to handle intractable instances, and we develop several variations of the basic algorithms to illustrate this. Experimental results demonstrate the performance of our algorithms, showing how the base algorithms run quickly on many instances, while the variations are valuable for producing results on instances whose complete results are intractably large. Furthermore, our algorithms are shown to perform better than the existing algorithms for solving either of the two distinct phases of our approach.

11.
陈佳楠  李哲  李占山 《软件学报》2021,32(9):2769-2782
Parallel propagation is a research direction in parallel constraint programming that studies how to run filtering algorithms on constraints in parallel. Starting from the sequential propagation scheme that maintains generalized arc consistency (GAC) on table constraint networks, this paper proposes a parallel propagation scheme that maintains temporary generalized arc consistency (TGAC) on table constraint networks. The scheme targets multi-core CPUs and consists of a parallel propagation algorithm and a parallel filtering algorithm. A soundness proof of the parallel propagation scheme is given, and a worst-case time complexity analysis suggests that it is faster than sequential propagation on instances whose average filtering time is long. The final experimental results confirm this conclusion: the parallel propagation scheme achieves speedups ranging from 1.4 to 3.4 on most instance sets.
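A bare-bones version of the idea is to filter every table constraint against the same snapshot of the domains in parallel and then intersect the removals; it can be written with the standard library. The helper names are assumptions, and the TGAC scheme in the paper adds the bookkeeping needed for soundness and for reaching a fixpoint.

```python
from concurrent.futures import ProcessPoolExecutor

def parallel_filter_once(constraints, domains, filter_one, workers=4):
    """One parallel filtering round.  `filter_one(constraint, domains)` is a
    hypothetical per-constraint GAC filter returning a dict {var: value set}
    or None on failure; arguments must be picklable for a process pool.
    The round is repeated inside a full propagation loop until nothing changes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(filter_one, constraints,
                                [domains] * len(constraints)))
    if any(r is None for r in results):
        return None                                   # some constraint failed
    merged = {v: set(dom) for v, dom in domains.items()}
    for r in results:
        for v, dom in r.items():
            merged[v] &= dom                          # keep values every filter kept
    return merged
```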

12.
In this paper we propose a random CSP model, called Model GB, which is a natural generalization of standard Model B. This paper considers Model GB in the case where each constraint is easy to satisfy. In this case Model GB exhibits non-trivial behaviour (not trivially satisfiable or unsatisfiable) as the number of variables approaches infinity. A detailed analysis to obtain an asymptotic estimate (good to 1+o(1)) of the average number of nodes in a search tree used by the backtracking algorithm on Model GB is also presented. It is shown that the average number of nodes required for finding all solutions or proving that no solution exists grows exponentially with the number of variables. So this model might be an interesting distribution for studying the nature of hard instances and evaluating the performance of CSP algorithms. In addition, we further investigate the behaviour of the average number of nodes as r (the ratio of constraints to variables) varies. The results indicate that as r increases, random CSP instances get easier and easier to solve, and the base for the average number of nodes that is exponential in n tends to 1 as r approaches infinity. Therefore, although the average number of nodes used by the backtracking algorithm on random CSP is exponential, many CSP instances will be very easy to solve when r is sufficiently large.
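For orientation, the standard Model B that Model GB generalizes can be generated in a few lines (binary case): fix the number of constraints and, per constraint, the number of forbidden value pairs. The exact Model GB parameterization is in the paper; this sketch only shows the baseline scheme.

```python
import random
from itertools import combinations, product

def model_b_instance(n, d, p1, p2, seed=None):
    """Standard Model B: choose exactly round(p1 * n*(n-1)/2) binary constraint
    scopes, and for each scope forbid exactly round(p2 * d*d) value pairs,
    both uniformly at random and without repetition."""
    rng = random.Random(seed)
    scopes = rng.sample(list(combinations(range(n), 2)),
                        round(p1 * n * (n - 1) / 2))
    all_pairs = list(product(range(d), repeat=2))
    return {scope: set(rng.sample(all_pairs, round(p2 * d * d)))
            for scope in scopes}        # scope -> forbidden value pairs

# 20 variables, domain size 10, constraint density 0.3, tightness 0.2
instance = model_b_instance(20, 10, 0.3, 0.2, seed=1)
```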

13.
The paper focuses on evaluating constraint satisfaction search algorithms on application-based random problem instances. The application we use is a well-studied problem in the electric power industry: optimally scheduling preventive maintenance of power generating units within a power plant. We show how these scheduling problems can be cast as constraint satisfaction problems and used to define the structure of randomly generated non-binary CSPs. The random problem instances are then used to evaluate several previously studied algorithms. The paper also demonstrates how constraint satisfaction can be used for optimization tasks. To find an optimal maintenance schedule, a series of CSPs are solved with successively tighter cost-bound constraints. We introduce and experiment with an “iterative learning” algorithm which records additional constraints uncovered during search. The constraints recorded during the solution of one instance with a certain cost-bound are used again on subsequent instances having tighter cost-bounds. Our results show that on a class of randomly generated maintenance scheduling problems, iterative learning reduces the time required to find a good schedule.
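The optimization-by-satisfaction loop described above (solve, record the cost, tighten the cost bound, solve again) has a very small skeleton. The solver hook and the learned-constraint store below are placeholders, not the paper's algorithm; the "iterative learning" contribution is precisely what gets put into that store.

```python
def minimize_by_tightening(solve_csp, cost, initial_bound):
    """Solve a sequence of CSPs with successively tighter cost bounds
    (assumes integer-valued costs).  `solve_csp(bound, learned)` is a
    hypothetical hook returning a solution or None; it may append newly
    recorded constraints to `learned` for reuse on later, tighter instances."""
    learned = []                       # constraints uncovered during earlier searches
    best, bound = None, initial_bound
    while True:
        solution = solve_csp(bound, learned)
        if solution is None:           # nothing within the bound: best is optimal
            return best
        best = solution
        bound = cost(solution) - 1     # tighten and re-solve
```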

14.
The patient bed assignment problem consists of managing, in the best possible way, a set of beds with particular features and assigning them to a set of patients with special requirements. This assignment problem can be seen as an optimization problem whose usual aims include minimizing the number of internal movements within a unit and maximizing bed usage according to the levels of criticality of the patients. The usual approaches for solving this problem follow a traditional model based on the constraint programming paradigm, mainly using hard constraints. However, in real-life problems, constraints that should ideally be satisfied are often violated. In this paper, we present a new model for the patient bed assignment problem based on the minimum sum of unsatisfied constraints. This technique allows soft constraints to be taken into account in the candidate solutions that exhibit the best performance. The aim is to find the assignment that minimizes a weighted sum of the unsatisfied constraints. To this end, we use an autonomous binary version of the bat algorithm, an optimization technique inspired by the bio-sonar behaviour of microbats, to find the best set of potential solutions without requiring any expert user knowledge to achieve an efficient solution process. To validate our proposal, we use our model to solve problem instances based on data from several hospitals, and we perform a detailed comparative statistical analysis against a traditional constraint programming solver and several well-known optimization algorithms, including the classic bat algorithm. Promising results show that our approach is capable of efficiently solving 30 instances with decreased solution times.
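The objective being minimized, a weighted sum of unsatisfied soft constraints, is easy to state; the scoring function below (with a hypothetical toy instance) shows only the evaluation step, with the bat-algorithm search left out.

```python
def weighted_violation(assignment, soft_constraints):
    """`soft_constraints` is a list of (weight, predicate) pairs; the score is
    the weighted count of constraints the assignment leaves unsatisfied."""
    return sum(w for w, ok in soft_constraints if not ok(assignment))

# Hypothetical toy instance: two patients, weights reflect clinical priority.
soft = [
    (3, lambda a: a["patient1"] != a["patient2"]),     # no bed sharing
    (1, lambda a: a["patient1"] == "bed_near_icu"),    # low-weight preference
]
print(weighted_violation({"patient1": "bed2", "patient2": "bed2"}, soft))  # 4
```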

15.
Employee timetabling is the operation of assigning employees to tasks in a set of shifts during a fixed period of time, typically a week. We present a general definition of employee timetabling problems (ETPs) that captures many real-world problem formulations and includes complex constraints. The proposed model of ETPs can be represented in a tabular form that is both intuitive and efficient for constraint representation and processing. The constraint networks of ETPs include non-binary constraints and are difficult to formulate in terms of simple constraint solvers. We investigate the use of local search techniques for solving ETPs. In particular, we propose several versions of hill-climbing that make use of a novel search space that also includes partial assignments. We show that, on large and difficult instances of real-world ETPs, where systematic search fails, local search methods perform well and solve the hardest instances. According to our experimental results on various techniques, a simple version of hill climbing based on random moves is the best method for solving large ETP instances.

16.
Combining constraints using logical connectives such as disjunction is ubiquitous in constraint programming, because it adds considerable expressive power to a constraint language. We explore the solver architecture needed to propagate such combinations of constraints efficiently. In particular we describe two new features named satisfying sets and constraint trees. We also make use of movable triggers (Gent et al., 2006) [1], and with these three complementary features we are able to make considerable efficiency gains. A key reason for the success of Boolean Satisfiability (SAT) solvers is their ability to propagate Or constraints efficiently, making use of movable triggers. We successfully generalise this approach to an Or of an arbitrary set of constraints, maintaining the crucial property that at most two constraints are active at any time, and no computation at all is done on the others. We also give an And propagator within our framework, which may be embedded within the Or. Using this approach, we demonstrate speedups of over 10,000 times in some cases, compared to traditional constraint programming approaches. We also prove that the Or algorithm enforces generalised arc consistency (GAC) when all its child constraints have a GAC propagator, and no variables are shared between children. By extending the Or propagator, we present a propagator for AtLeastK, which expresses that at least k of its child constraints are satisfied in any solution. Some logical expressions (e.g. exclusive-or) cannot be compactly expressed using And, Or and AtLeastK. Therefore we investigate reification of constraints. We present a fast generic algorithm for reification using satisfying sets and movable triggers.
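The invariant that at most two child constraints of an Or are active mirrors the two-watched-literal scheme from SAT. The sketch below reduces child constraints to two hypothetical methods and rescans instead of keeping watches incrementally, so it shows the propagation rule rather than the paper's satisfying-sets machinery.

```python
def or_propagate(children, domains):
    """Watched-Or propagation rule.  Each child exposes is_falsified(domains)
    and propagate(domains) (assumed interfaces).  With two or more live
    children nothing is done; with exactly one, the Or behaves like that
    child; with none, the disjunction fails."""
    watched = []
    for child in children:
        if not child.is_falsified(domains):
            watched.append(child)
            if len(watched) == 2:          # two live disjuncts: no work yet
                return domains
    if not watched:                        # every disjunct is falsified
        return None
    return watched[0].propagate(domains)   # single live disjunct: unit-propagate it
```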

17.
In this paper, we propose a way of exploiting Operations Research techniques within global constraints for cost-based domain filtering. In Constraint Programming, constraint propagation is aimed at removing from variable domains combinations of values which are proven infeasible. Pruning derives from feasibility reasoning. When coping with optimization problems, pruning can also be performed on the basis of costs, i.e., optimality reasoning. Cost-based filtering removes combinations of values which are proven sub-optimal. For this purpose, we encapsulate in global constraints optimization components representing suitable relaxations of the constraint itself. These components embed efficient Operations Research algorithms computing the optimal solution of the relaxed problem and a gradient function representing the estimated cost of each variable-value assignment. We exploit these pieces of information for pruning and for guiding the search. We have applied these techniques to a couple of ILOG Solver global constraints (a constraint of difference and a path constraint) and tested the approach on a variety of combinatorial optimization problems such as Timetabling, Travelling Salesman Problems and Scheduling Problems with sequence-dependent setup times. Comparisons with pure Constraint Programming approaches and related literature clearly show the benefits of the proposed approach.
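The pruning rule can be condensed to one comparison per variable-value pair: discard a value as soon as the relaxation bound plus its estimated cost (the "gradient") cannot beat the incumbent. The sketch is generic, assuming minimization and an admissible bound, and is not tied to the ILOG Solver constraints used in the paper.

```python
def cost_based_filter(domains, lower_bound, reduced_cost, best_cost):
    """Remove value a from var whenever lower_bound + reduced_cost[var][a]
    >= best_cost, i.e. when committing to var = a provably cannot improve
    on the best solution found so far."""
    new_domains = {}
    for var, dom in domains.items():
        kept = {a for a in dom
                if lower_bound + reduced_cost[var][a] < best_cost}
        if not kept:
            return None                # no value of var can beat the incumbent
        new_domains[var] = kept
    return new_domains
```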

18.
Many multiobjective optimization problems in the engineering field must be solved within more or less severe time restrictions. Because the optimization criteria, the parameters, and/or the constraints might change with time, the optimization solutions must be recalculated when a change takes place. The time required by the optimization procedure to arrive at the new solutions should be bounded according to the rate of change observed in these dynamic problems. This way, the faster the optimization algorithm is at obtaining solutions, the wider the set of dynamic problems to which that algorithm can be applied. Here, we analyze the performance of the nondominated sorting genetic algorithm (NSGA-II), the strength Pareto evolutionary algorithm (SPEA2), and single front genetic algorithms (SFGA and SFGA2) on two different multiobjective optimization problems, with two and three objectives, respectively. For the two problems studied, the single front genetic algorithms obtain solutions of adequate quality in very little time. Moreover, for the second and more complex problem, SFGA2 and NSGA-II obtain the best hypervolume in the found set of nondominated solutions, but SFGA2 requires much less time than NSGA-II. These results suggest that single front genetic algorithms, especially SFGA2, could be appropriate for optimization problems with high rates of change, and thus stronger time constraints.

19.
We study the ‘classical’ discrete, solid-void or black-and-white topology optimization problem, in which minimum compliance is sought, subject to constraints on the available material resource. We assume that this problem is solved using methods that relax the discreteness requirements during intermediate steps, and that the associated programming problems are solved using sequential approximate optimization (SAO) algorithms based on duality. More specifically, we assume that the advantages of the well-known Falk dual are exploited. Such algorithms represent the state of the art in (large-scale) topology optimization when multiple constraints are present, an important example being the method of moving asymptotes (MMA). We start by noting that the aforementioned SAO algorithms are invariably formulated using strictly convex subproblems. We then numerically illustrate that strictly concave constraint functions, like those present in volumetric penalization, as recently proposed by Bruns and co-workers, may increase the difficulty of the topology optimization problem when strictly convex approximations are used in the SAO algorithm. In turn, volumetric penalization methods are of notable importance, since they seem to hold much promise for generating predominantly solid-void or discrete designs. We then argue that the nonconvex problems we study may in some instances be solved efficiently using dual SAO methods based on nonconvex (strictly concave) approximations which exhibit monotonicity with respect to the design variables. Indeed, for the topology problem resulting from SIMP-like volumetric penalization, we show explicitly that convex approximations are not necessary: even though the volumetric penalization constraint is strictly concave, the maximum of the resulting dual subproblem still corresponds to the optimum of the original primal approximate subproblem.
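For reference, the Falk dual that these SAO methods maximize has the generic form (stated here as commonly quoted in the SAO literature, not as the paper's specific derivation):

\[
\max_{\lambda \ge 0}\ \gamma(\lambda),
\qquad
\gamma(\lambda) = \min_{x \in X} \Big\{ \tilde f_0(x) + \sum_{j=1}^{m} \lambda_j \tilde f_j(x) \Big\},
\]

where \(\tilde f_0\) and \(\tilde f_j\) are the approximate objective and constraint functions of the current subproblem and \(X\) collects the bound constraints. When the approximations are separable, the inner minimization decomposes over the individual design variables, which is what makes the dual approach attractive for large-scale problems.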

20.
Rapid growth in social networks (SNs) presents a unique scalability challenge for SN operators because of the massive amount of data distributed among large numbers of concurrent online users: a request from any user may trigger hundreds of server activities to generate a customized page, which has already become a huge burden. Based on a theoretical model and an analytical study of realistic network scenarios, this article proposes a hybrid P2P-based architecture called PAIDD. PAIDD achieves effective data distribution primarily through P2P connectivity and the social graph among users, but with the help of central servers. To increase system efficiency, PAIDD performs optimized content prefetching based on social interactions among users; interaction is chosen as the criterion because a user's interaction graph is measured to be much smaller than the social graph. Our experiments confirm that PAIDD ensures a satisfactory user experience without incurring extensive overhead on clients' networks. More importantly, PAIDD achieves an order-of-magnitude load reduction at the central servers.
