Similar documents
20 similar documents found (search time: 15 ms)
1.
The AllDifferent constraint is a crucial component of any constraint toolkit, language or solver, since it is very widely used in a variety of constraint models. The literature contains many different versions of this constraint, which trade strength of inference against computational cost. In this paper, we focus on the highest strength of inference, enforcing a property known as generalised arc consistency (GAC). This work is an analytical survey of optimizations of the main algorithm for GAC for the AllDifferent constraint. We evaluate empirically a number of key techniques from the literature. We also report important implementation details of those techniques, which have often not been described in published papers. We pay particular attention to improving incrementality by exploiting the strongly-connected components discovered during the standard propagation process, since this has not been detailed before. Our empirical work represents by far the most extensive set of experiments on variants of GAC algorithms for AllDifferent. Overall, the best combination of optimizations gives a mean speedup of 168 times over the same implementation without the optimizations.
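What GAC for AllDifferent computes can be illustrated with a deliberately naive filter: check feasibility via a maximum bipartite matching of variables to values, then re-check it with each variable fixed to each candidate value. This is a minimal sketch of the property only, not the optimized Régin-style algorithm with SCC-based incrementality that the paper studies; all names are illustrative.

```python
def alldiff_gac(domains):
    """Enforce GAC on an AllDifferent constraint by brute force: a value
    stays in a variable's domain iff some assignment with all-distinct
    values uses it. Returns the pruned domains, or None if unsatisfiable.
    domains: dict mapping variable name -> set of candidate values."""
    variables = list(domains)

    def feasible(doms):
        match = {}  # value -> variable currently matched to it

        def augment(v, seen):
            # Kuhn's augmenting-path step for bipartite matching.
            for val in doms[v]:
                if val not in seen:
                    seen.add(val)
                    if val not in match or augment(match[val], seen):
                        match[val] = v
                        return True
            return False

        return all(augment(v, set()) for v in variables)

    if not feasible(domains):
        return None  # no matching covers all variables
    pruned = {}
    for v in variables:
        keep = set()
        for val in domains[v]:
            trial = dict(domains)
            trial[v] = {val}  # tentatively assign v := val
            if feasible(trial):
                keep.add(val)
        pruned[v] = keep
    return pruned
```

The optimized algorithms surveyed in the paper reach the same fixpoint with one matching plus an SCC decomposition instead of repeated matchings.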

2.
Previous research shows that cardinality reasoning can improve the pruning of the bin-packing constraint. We first introduce a new algorithm, called BPCFlow, that filters both load and cardinality bounds on the bins, using a flow reasoning similar to the Global Cardinality Constraint. Moreover, we detect impossible assignments of items by combining the load and cardinality of the bins, using a method to detect items that are either “too-big” or “too-small”. This method is adapted to two previously existing filtering techniques along with BPCFlow, creating three new propagators. We then evaluate the four new algorithms on Balanced Academic Curriculum Problem and Tank Allocation Problem instances. BPCFlow is shown to be stronger than previously existing filtering, but more computationally intensive. We show that the new filtering is useful on a small number of hard instances, while being too expensive for general use. Our results show that the introduced “too-big/too-small” filtering can most of the time drastically reduce the size of the search tree and the computation time. This method is profitable in 88% of the tested instances.

3.
We present new filtering algorithms for Disjunctive and Cumulative constraints, each of which improves the complexity of the state-of-the-art algorithms by a factor of log n. We show how to perform Time-Tabling and Detectable Precedences in linear time on the Disjunctive constraint. Furthermore, we present a linear-time Overload Checking for the Disjunctive and Cumulative constraints. Finally, we show how the rule of Not-first/Not-last can be enforced in quadratic time for the Cumulative constraint. These algorithms rely on the union-find data structure, which we leverage to introduce a new data structure that we call the time line. This data structure provides constant-time operations that were previously implemented in logarithmic time by the Θ-tree data structure. Experiments show that these new algorithms are competitive even for a small number of tasks and outperform existing algorithms as the number of tasks increases. We also show that the time line can be used to solve specific scheduling problems.
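The union-find idea behind the time line can be sketched as follows: to place each unit task in the earliest free slot at or after its release time, redirect every occupied slot to its successor, so repeated queries skip over filled regions in near-constant amortized time. This is an illustrative sketch of the underlying trick only; the paper's time line supports the richer operations its filtering rules need.

```python
class TimeLine:
    """Union-find over unit time slots. next_free(t) returns the earliest
    unoccupied slot >= t and marks it occupied; an occupied slot points at
    the next candidate slot, and path halving keeps lookups cheap."""

    def __init__(self):
        self.parent = {}  # occupied slot -> next candidate slot

    def _find(self, t):
        while t in self.parent:
            if self.parent[t] in self.parent:
                # path halving: skip one link on the way up
                self.parent[t] = self.parent[self.parent[t]]
            t = self.parent[t]
        return t

    def next_free(self, t):
        slot = self._find(t)
        self.parent[slot] = slot + 1  # occupy the slot
        return slot
```

For example, scheduling three tasks released at time 3, 3, and 1 places them at slots 3, 4, and 1 respectively.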

4.
A general-purpose scene-analysis system is described which uses constraint-filtering techniques to apply domain knowledge in the interpretation of the regions extracted from a segmented image. An example is given of the configuration of the system for a particular domain, FLIR (Forward Looking InfraRed) images, as well as results of the system's performance on some typical images from this domain. A number of improvements to these techniques are proposed.

5.
This paper presents a new post-processing algorithm based on a robust statistical model to remove the blocking artifacts observed in block discrete cosine transform (BDCT)-based image compression standards. The novelty is the implementation of a new robust weight function for block artifact reduction. The blocking artifacts in an image are treated as an outlier random variable. The robust formulation aims at eliminating the artifact outliers, while preserving the edge structures in the restored image. Extensive simulation results and comparative studies demonstrate that the presented method provides superior results in terms of pixel-wise (PSNR) and perceptual (SSIM) measures.

6.
Precision and consistency are important prerequisites for class models to conform to their intended domain semantics. Precision can be achieved by augmenting models with design constraints and consistency can be achieved by avoiding contradictory constraints. However, there are different views of what constitutes a contradiction for design constraints. Moreover, state-of-the-art analysis approaches for proving constrained models consistent either scale poorly or require the use of interactive theorem proving. In this paper, we present a heuristic approach for efficiently analyzing constraint specifications built from constraint patterns. This analysis is based on precise notions of consistency for constrained class models and exploits the semantic properties of constraint patterns, thereby enabling syntax-based consistency checking in polynomial-time. We introduce a consistency checker implementing these ideas and we report on case studies in applying our approach to analyze industrial-scale models. These studies show that pattern-based constraint development supports the creation of concise specifications and provides immediate feedback on model consistency.

7.
To address the low matching efficiency and difficult maintenance of the linear matching algorithms used by traditional firewalls, this paper proposes and implements Salist, an efficient and flexible Netfilter extension framework for filtering against IP address sets. Salist comprises a table management module based on kernel virtual files, an in-table rule management module that automatically deduplicates, merges, and sorts IP address sets, and an efficient packet matching module based on binary search. Theoretical analysis and practical testing show that Salist reduces the time complexity of packet matching from the O(n) of traditional linear matching to O(log n), that rule merging reduces the kernel memory occupied by rule tables by more than 10%, and that the per-file rule management mechanism simplifies rule-set maintenance. The results show that deploying Salist in core network devices can greatly increase packet forwarding rates while reducing the memory footprint and management burden of rules.
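The core of Salist's rule handling, deduplicating, merging, and sorting IP ranges so that packet matching becomes an O(log n) binary search, can be sketched in user space as follows (an illustrative simplification; the actual module runs inside the kernel against Netfilter, and IPs are shown as plain integers):

```python
import bisect

def merge_ranges(ranges):
    """Deduplicate, sort, and merge inclusive (start, end) ranges of
    integer-encoded IP addresses, mirroring Salist's rule preprocessing."""
    merged = []
    for start, end in sorted(set(ranges)):
        if merged and start <= merged[-1][1] + 1:
            # overlapping or adjacent: extend the previous range
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def match(merged, ip):
    """O(log n) membership test: binary-search the range starts, then
    check the one candidate range. (Starts are recomputed per call here
    for brevity; a real implementation would precompute them.)"""
    starts = [s for s, _ in merged]
    i = bisect.bisect_right(starts, ip) - 1
    return i >= 0 and merged[i][0] <= ip <= merged[i][1]
```

Merging overlapping rules is what yields both the memory saving and the guarantee that binary search lands on at most one candidate range.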

8.
In large-scale distributed simulation, thousands of objects keep moving and interacting in a virtual environment, which produces a mass of messages. High level architecture (HLA) is the prevailing standard for modeling and simulation. It specifies two publish-subscribe mechanisms for message filtering: class-based and value-based. However, these two mechanisms can only judge whether or not a message is relevant to a subscriber. Lacking the ability to evaluate the degree of relevance, all relevant messages are delivered with the same priority even when congestion occurs, which significantly limits the scalability and performance of distributed simulation. Aiming to solve the relevance evaluation problem, speed up message filtering, and filter more unnecessary messages, this paper proposes a new relevance evaluation mechanism, Layer of Interest (LoI). LoI defines a relevance classifier based on the impact of spatial distance on receiving attributes and attribute values. An adaptive publish-subscribe scheme is built on the basis of LoI. This scheme can discard most irrelevant messages directly. Run-time infrastructure (RTI) can also apply congestion control by reducing the frequency of sending or receiving object messages based on each object's LoI. Experimental results verify the efficiency of message filtering and RTI congestion control. Supported by the National Basic Research Program of China (Grant No. 2009CB320805), the National Natural Science Foundation of China (Grant No. 60603084), and the National High-Tech Research & Development Program of China (Grant No. 2006AA01Z331)

9.
Two combinatorial filtering methods for efficiently selecting reaction products with desired properties are presented. The first, "direct reactants" method is applicable only to those molecular properties that are strictly additive or approximately additive, with relatively small interference between neighboring fragments. This method uses only the molecular properties of the reactants. The second, "basis products" method can be used to filter not only strictly additive properties but also approximately additive molecular properties where a certain degree of mutual influence occurs between neighboring fragments. This method requires the molecular properties of the "basis products," which are the products formed by combining all the reactants for a given reaction component with the simplest set of complementary reactant partners. There is a one-to-one correspondence between the reactants and the "basis products"; the latter are a product representation of the former. The efficiency of both methods is further enhanced by a tree-sorting and hierarchical selection algorithm, which is performed on the reaction components in a limited space determined systematically from the filtering criteria. The methods are illustrated with product logPs, van der Waals volumes, solvent accessible surface areas, and other product properties. Good results are obtained when filtering for a number of important molecular properties in a virtual library of 1.5 billion products.
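The "direct reactants" idea for a strictly additive property can be sketched as follows: a product's property is approximated as the sum of its reactants' contributions, so combinations can be accepted or rejected from reactant values alone. This brute-force enumeration omits the paper's tree-sorting and hierarchical selection; the function name, parameters, and constant correction term are illustrative, not taken from the paper.

```python
from itertools import product

def direct_reactant_filter(components, prop, lo, hi, correction=0.0):
    """Yield reactant combinations whose summed per-reactant property
    values (plus a fixed reaction correction) fall inside [lo, hi].
    components: one list of reactant ids per reaction component;
    prop: reactant id -> additive property value (e.g. a logP increment)."""
    for combo in product(*components):
        total = sum(prop[r] for r in combo) + correction
        if lo <= total <= hi:
            yield combo
```

The point of the paper's hierarchical selection is precisely to avoid materializing this full Cartesian product for billion-member libraries.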

10.
RDF Site Summaries constitute an application of RDF on the Web that has grown considerably in popularity. However, the way RSS systems operate today limits their scalability. Current RSS feed aggregators follow a pull-based architecture model, which is not going to scale with the increasing number of RSS feeds becoming available on the Web. In this paper, we introduce G-ToPSS, a scalable publish/subscribe system for selective information dissemination. G-ToPSS only sends newly updated information to the interested user and follows a push-based architecture model. G-ToPSS is particularly well suited for applications that deal with large-volume content distribution from diverse sources. G-ToPSS allows the use of an ontology as a way to provide additional information about the data disseminated. We have implemented and experimentally evaluated G-ToPSS and we provide results demonstrating its scalability compared to alternative approaches. In addition, we describe an application of G-ToPSS and RSS to a Web-based content management system that provides an expressive, efficient, and convenient update notification dissemination system.

11.
Let F_1,…,F_s ∈ ℝ[X_1,…,X_n] be polynomials of degree at most d, and suppose that F_1,…,F_s are represented by a division-free arithmetic circuit of non-scalar complexity L. Let A be the arrangement of ℝ^n defined by F_1,…,F_s. For any point x ∈ ℝ^n, we consider the task of determining the signs of the values F_1(x),…,F_s(x) (sign condition query) and the task of determining the connected component of A to which x belongs (point location query). By an extremely simple reduction to the well-known case where the polynomials F_1,…,F_s are affine linear (i.e., polynomials of degree one), we show first that there exists a database of (possibly enormous) size s^{O(L+n)} which allows the evaluation of the sign condition query using only (Ln)^{O(1)} log s arithmetic operations. The key point of this paper is the proof that this upper bound is almost optimal. In addition, we show that the point location query can be evaluated using d^{O(n)} log s arithmetic operations. Based on a different argument, analogous complexity upper bounds are exhibited with respect to the bit model in case F_1,…,F_s belong to ℤ[X_1,…,X_n] and satisfy a certain natural genericity condition. Mutatis mutandis, our upper-bound results may be applied to the sparse and dense representations of F_1,…,F_s.

12.
Skyline queries have been increasingly used in multi-criteria decision making and data mining applications. They retrieve a set of interesting points from a potentially large set of data points. A point is said to be interesting if it is not dominated by any other point. The skyline cube (skycube) consists of the skylines of all possible non-empty subsets of a given set of dimensions. In this paper, we propose two algorithms for computing the skycube using bitmaps that are derivable from indexes. The Point-based skycube algorithm is an improvement over the existing Bitmap algorithm, extended to compute the skycube. The Point-based algorithm processes one point at a time to check for skylines in all subspaces. The Domain-based skycube algorithm views points as value combinations and probes the entire search space for potential skyline points. It significantly reduces bitmap access for low-cardinality dimensions. Our experimental study shows that the two algorithms either strictly dominate or are at least comparable to the current skycube algorithm in most cases, suggesting that such an approach could be a useful addition to the set of skyline query processing techniques.
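A skycube over a small point set can be computed with a brute-force dominance test, which is the baseline the bitmap-based algorithms accelerate. The sketch below uses a minimization convention (smaller is better in every dimension) and is illustrative only; it shares nothing with the bitmap machinery beyond the definitions.

```python
from itertools import combinations

def dominates(p, q):
    """p dominates q: p <= q in every dimension and < in at least one
    (minimization convention)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points, dims):
    """Skyline of `points` projected onto the dimension indices `dims`."""
    proj = lambda p: tuple(p[d] for d in dims)
    return [p for p in points
            if not any(dominates(proj(q), proj(p)) for q in points)]

def skycube(points, ndims):
    """Skylines for every non-empty subset of dimensions (brute force)."""
    return {dims: skyline(points, dims)
            for k in range(1, ndims + 1)
            for dims in combinations(range(ndims), k)}
```

For four 2-D points there are three subspaces; a point such as (3, 3) that is dominated by (2, 2) drops out of the full-space skyline.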

13.
14.
15.
Wireless sensor networks (WSNs) have been widely used in many fields. Node localization is a fundamental problem in WSNs, and it is the basis and prerequisite for many applications. Due to the mobility of the sensor nodes, it is more challenging to locate nodes in mobile WSNs than in static ones. Existing localization schemes for mobile WSNs are mostly based on the Sequential Monte Carlo (SMC) localization method. SMC-based schemes may suffer from low sampling efficiency resulting from a large sampling area, which makes it difficult for them to achieve high localization accuracy and efficiency. Some schemes try to reduce the sampling area by further exploiting position relationships with neighboring common nodes, while we have found that the movements of neighboring beacon nodes have not been fully exploited. Addressing this issue, in this paper some new constraint rules are developed and some existing constraint rules are optimized by taking into account the moving distance and direction of neighboring beacons. A series of distance constraint conditions are further created, by which the size of the sampling area can be further reduced and the samples can be filtered more accurately. The performance of our algorithm is evaluated by extensive simulation experiments. The simulation results show that the localization error and computation cost of our proposed algorithm are lower than those of existing ones, even when the speed of the sensor nodes is relatively high.
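The basic sample-filtering step shared by SMC-based localization schemes can be sketched as follows: candidate positions inconsistent with the distance constraints imposed by heard beacons are discarded. This shows only the simplest one-hop rule (a node must lie within radio range of every beacon it hears); the constraints proposed in the paper additionally exploit beacon movement, which this sketch omits.

```python
import math

def filter_samples(samples, beacons, r):
    """Keep candidate node positions consistent with the one-hop rule:
    a sample survives only if it lies within radio range r of every
    beacon the node currently hears.
    samples: list of (x, y) candidates; beacons: list of (x, y) positions."""
    return [s for s in samples
            if all(math.dist(s, b) <= r for b in beacons)]
```

Tighter constraint rules shrink the region from which `samples` are drawn in the first place, which is where the accuracy and efficiency gains come from.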

16.
In this paper, we propose a dense stereo algorithm based on the census transform and improved dynamic programming (DP). Traditional scanline-based DP algorithms are the most efficient among global algorithms, but are well known to suffer from the streak effect. To solve this problem, we improve the traditional three-state DP algorithm by taking advantage of an extended version of the sequential vertical consistency constraint. Using this method, we greatly increase the accuracy of the disparity map. Optimizations have been made so that the computational cost is increased by only about 20%, and the additional memory needed for the improvement is negligible. Experimental results show that our algorithm outperforms many state-of-the-art algorithms of similar efficiency on Middlebury College's stereo Web site. Moreover, the algorithm is robust enough for image pairs with utterly different contrasts, thanks to the use of the census transform as the basic match metric.
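The census transform used as the match metric can be sketched as follows: each pixel is encoded as a bit string recording which neighbors in a window are darker than it, and the matching cost between two pixels is the Hamming distance between their codes. Because the codes depend only on intensity ordering, they are invariant to contrast changes. A minimal NumPy sketch (np.roll wraps at the borders, so border codes are invalid in this simplification):

```python
import numpy as np

def census_transform(img, win=3):
    """Census transform: encode each pixel as a bit string saying which
    neighbours in a win x win window are darker than the centre pixel."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.int64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.int64)
    return out

def hamming_cost(c1, c2):
    """Matching cost between census codes: number of differing bits."""
    x = np.bitwise_xor(c1, c2)
    return np.array([bin(v).count('1') for v in x.ravel()]).reshape(x.shape)
```

In a full stereo pipeline, these per-pixel Hamming costs would feed the three-state DP optimization along each scanline.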

17.
Multistage stochastic programming with endogenous uncertainty is a new topic in which the timing of uncertainty realization is decision-dependent. In this case, the number of nonanticipativity constraints (NACs) increases very quickly with the number of scenarios, making the problem computationally intractable. Fortunately, a large number of NACs are typically redundant and their elimination leads to a considerable reduction in the problem size. Identifying redundant NACs has been addressed in the literature only in the special case where the scenario set is equal to the Cartesian product of all possible outcomes for the endogenous parameters; however, this condition rarely holds in practice. In this paper, we consider the general case where the scenario set is arbitrary, and we propose two approaches able to identify all redundant NACs. The first approach is a mixed integer programming formulation and the second is an exact polynomial-time algorithm. Proving that the proposed algorithm achieves the maximum possible reduction in the number of NACs is another novelty of this paper. Computational results demonstrate the efficiency of the proposed approaches.

18.
The most challenging problem in mesh denoising is to distinguish features from noise. Based on robust guided normal estimation and an alternate vertex updating strategy, we investigate a new feature-preserving mesh denoising method. To accurately capture local structures around features, we propose a corner-aware neighborhood (CAN) scheme. By combining the overall normal distribution of all faces in a CAN with the individual normal influence of the face of interest, we give a new consistency measuring method, which greatly improves the reliability of the estimated guided normals. As the noise level lowers, we take the previously filtered normals as guidance, which coincides with the emerging rolling-guidance idea. In the vertex updating process, we classify vertices according to the filtered normals at each iteration and reposition vertices of distinct types alternately with individual regularization constraints. Experiments on a variety of synthetic and real data indicate that our method adapts to various kinds of noise, both Gaussian and impulsive, whether along the normal direction or in a random direction, with few triangles flipped.

19.
Distributed compliant mechanisms are components that use elastic strain to obtain a desired kinematic behavior. Compliant mechanisms obtained via topology optimization using the standard approach of minimizing/maximizing the output displacement with a spring at the output port, representing the stiffness of the external medium, usually contain one-node connected hinges. Those hinges are undesirable, since an ideal compliant mechanism should be a continuous part. This work compares two strategies for stress-constrained problems, local and global stress constraints, and analyses their influence on eliminating the one-node connected hinges. The influence of spatial filtering on eliminating the hinges is also studied. An Augmented Lagrangian formulation is used to couple the objective function and constraints, and the resulting optimization problem is solved by an algorithm based on the classical optimality criteria approach. Two compliant mechanism problems are studied by varying the stress limit and filtering radius. It is observed that a proper combination of filtering radius and stress limit can eliminate one-node connected hinges.

20.
An efficient robust constrained model predictive control algorithm with a time varying terminal constraint set is developed for systems with model uncertainty and input constraints. The approach is novel in that it off-line constructs a continuum of terminal constraint sets and on-line achieves robust stability by using a relatively short control horizon (even N=0) with a time varying terminal constraint set. This algorithm not only dramatically reduces the on-line computation but also significantly enlarges the size of the allowable set of initial conditions. Moreover, this control scheme retains the unconstrained optimal performance in the neighborhood of the equilibrium. The controller design is illustrated through a benchmark problem.
