Similar Documents
20 similar documents found
1.
The AllDifferent constraint is a crucial component of any constraint toolkit, language or solver, since it is very widely used in a variety of constraint models. The literature contains many different versions of this constraint, which trade strength of inference against computational cost. In this paper, we focus on the highest strength of inference, enforcing a property known as generalised arc consistency (GAC). This work is an analytical survey of optimizations of the main algorithm for GAC for the AllDifferent constraint. We evaluate empirically a number of key techniques from the literature. We also report important implementation details of those techniques, which have often not been described in published papers. We pay particular attention to improving incrementality by exploiting the strongly-connected components discovered during the standard propagation process, since this has not been detailed before. Our empirical work represents by far the most extensive set of experiments on variants of GAC algorithms for AllDifferent. Overall, the best combination of optimizations gives a mean speedup of 168 times over the same implementation without the optimizations.
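The semantics of GAC for AllDifferent can be illustrated with a deliberately naive sketch: a value survives only if some full matching between variables and values assigns it. The surveyed algorithm (Régin's) gets the same prunings far more cheaply from one matching plus a strongly-connected-component decomposition; the version below recomputes a matching per value purely for clarity.

```python
# Naive GAC for AllDifferent via repeated maximum bipartite matching.
# Illustrative only: the surveyed algorithm derives the same prunings
# from a single matching plus an SCC decomposition.

def max_matching(domains):
    """Kuhn's augmenting-path matching; returns (size, value->var map)."""
    match = {}
    def try_assign(v, seen):
        for val in domains[v]:
            if val in seen:
                continue
            seen.add(val)
            if val not in match or try_assign(match[val], seen):
                match[val] = v
                return True
        return False
    size = sum(try_assign(v, set()) for v in range(len(domains)))
    return size, match

def gac_alldifferent(domains):
    """Prune every value that appears in no solution of AllDifferent."""
    n = len(domains)
    if max_matching(domains)[0] < n:
        return None  # constraint unsatisfiable
    pruned = []
    for v in range(n):
        keep = set()
        for val in domains[v]:
            # force v = val and check that a full matching still exists
            forced = [d if i != v else {val} for i, d in enumerate(domains)]
            if max_matching(forced)[0] == n:
                keep.add(val)
        pruned.append(keep)
    return pruned
```

For example, with domains {1,2}, {1,2}, {1,2,3}, the first two variables consume values 1 and 2, so GAC reduces the third domain to {3}.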

2.
Previous research shows that cardinality reasoning can improve the pruning of the bin-packing constraint. We first introduce a new algorithm, called BPCFlow, that filters both load and cardinality bounds on the bins, using a flow reasoning similar to the Global Cardinality Constraint. Moreover, we detect impossible assignments of items by combining the load and cardinality of the bins, using a method to detect items that are either "too-big" or "too-small". This method is adapted to two previously existing filtering techniques along with BPCFlow, creating three new propagators. We then evaluate the four new algorithms on Balanced Academic Curriculum Problem and Tank Allocation Problem instances. BPCFlow is shown to be stronger than previously existing filtering, and more computationally intensive. We show that the new filtering is useful on a small number of hard instances, while being too expensive for general use. Our results show that the introduced "too-big/too-small" filtering can, in most cases, drastically reduce the size of the search tree and the computation time. This method is profitable in 88% of the tested instances.
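A simplified reading of the "too-big/too-small" rule can be sketched as follows. The function names and the exact form of the rule are illustrative, not the paper's propagator: an item is removed from a bin's candidate list when, even combined with its best possible complement under the cardinality bounds [Cmin, Cmax], the bin's load window [Lmin, Lmax] cannot be met.

```python
def prune_too_big_too_small(sizes, Lmin, Lmax, Cmin, Cmax):
    """Hedged sketch: drop candidate items that make the bin's load
    window unreachable given its cardinality bounds."""
    keep = []
    for i, s in enumerate(sizes):
        others = sorted(sizes[:i] + sizes[i + 1:])
        # smallest load containing s that honours the minimum cardinality
        min_load = s + sum(others[:max(Cmin - 1, 0)])
        # largest load containing s that honours the maximum cardinality
        max_load = s + sum(sorted(others, reverse=True)[:max(Cmax - 1, 0)])
        too_big = min_load > Lmax    # s forces the load over the top
        too_small = max_load < Lmin  # s can never help reach Lmin
        if not (too_big or too_small):
            keep.append(s)
    return keep
```

With candidates of size 1, 2 and 9, a load window [0, 8] and cardinality bounds [2, 3], item 9 is "too big": even paired with the smallest partner it exceeds the load cap.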

3.
We present new filtering algorithms for Disjunctive and Cumulative constraints, each of which improves the complexity of the state-of-the-art algorithms by a factor of log n. We show how to perform Time-Tabling and Detectable Precedences in linear time on the Disjunctive constraint. Furthermore, we present a linear-time Overload Checking for the Disjunctive and Cumulative constraints. Finally, we show how the rule of Not-first/Not-last can be enforced in quadratic time for the Cumulative constraint. These algorithms rely on the union-find data structure, which we exploit to introduce a new data structure that we call the time line. This data structure provides constant-time operations that were previously implemented in logarithmic time by the Θ-tree data structure. Experiments show that these new algorithms are competitive even for a small number of tasks and outperform existing algorithms as the number of tasks increases. We also show that the time line can be used to solve specific scheduling problems.
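The union-find idea behind a "time line" style structure can be sketched in a few lines: `find(t)` returns the earliest free time slot at or after `t` in near-constant amortised time, replacing a logarithmic tree lookup. The class and method names below are illustrative, not the paper's API.

```python
class TimeLine:
    """Union-find over time slots: find(t) is the earliest free slot >= t."""

    def __init__(self, horizon):
        # one extra sentinel slot past the horizon marks "full"
        self.parent = list(range(horizon + 2))

    def find(self, t):
        # path-halving find
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]
            t = self.parent[t]
        return t

    def occupy(self, t):
        slot = self.find(t)
        self.parent[slot] = slot + 1  # future queries skip this slot
        return slot
```

Repeatedly calling `occupy(release_time)` schedules unit tasks greedily, which mirrors how a linear-time overload check can sweep tasks in order without a Θ-tree.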

4.
A general-purpose scene-analysis system is described which uses constraint-filtering techniques to apply domain knowledge in the interpretation of the regions extracted from a segmented image. An example is given of the configuration of the system for a particular domain, FLIR (Forward Looking InfraRed) images, as well as results of the system's performance on some typical images from this domain. A number of improvements to these techniques are proposed.

5.
This paper presents a new post-processing algorithm based on a robust statistical model to remove the blocking artifacts observed in block discrete cosine transform (BDCT)-based image compression standards. The novelty is the implementation of a new robust weight function for block-artifact reduction. The blocking artifacts in an image are treated as an outlier random variable. The robust formulation aims at eliminating the artifact outliers while preserving the edge structures in the restored image. Extensive simulation results and comparative studies demonstrate that the presented method provides superior results in terms of pixel-wise (PSNR) and perceptual (SSIM) measures.
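The principle of a robust weight function for deblocking can be sketched with Tukey's biweight, one plausible choice (the paper's exact weight function is its own contribution): small cross-boundary differences, typical of blocking artifacts, are smoothed away, while large differences, which signal real edges, receive zero weight and are left intact.

```python
import numpy as np

def tukey_weight(residual, c=4.685):
    """Tukey biweight: full weight near zero, zero weight for outliers."""
    r = np.asarray(residual, dtype=float)
    return np.where(np.abs(r) <= c, (1 - (r / c) ** 2) ** 2, 0.0)

def smooth_boundary(left_px, right_px, c=8.0):
    """Blend two pixels across a block boundary, robustly weighted:
    artifact-sized steps are averaged, edge-sized steps preserved."""
    d = right_px - left_px
    w = tukey_weight(d, c)
    return left_px + 0.5 * w * d, right_px - 0.5 * w * d
```

A step of 2 grey levels is mostly averaged out, while a step of 100 (an edge) passes through unchanged.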

6.
汤启, 何腊梅. 《计算机应用》 (Journal of Computer Applications), 2018, 38(5): 1481-1487
For the state-estimation problem of nonlinear systems with nonlinear equality constraints, a new constrained state-estimation method based on the unscented Kalman filter and the pseudo-observation technique, called SPUKF, is presented. In this method, the original dynamic system is virtually separated into two parallel subsystems, and the state estimate at each step is obtained by alternating between two filter chains built on these subsystems. Compared with the sequential estimator of the pseudo-observation method, SPUKF requires no predetermined processing order for measurements and constraints and obtains better estimates, so it resolves the ordering problem of the sequential approach. Simulation results on a pendulum example show that SPUKF not only outperforms the sequential-form unscented Kalman filter, with an error-improvement ratio of about 22%, but also runs in a time comparable to the sequential estimator. Moreover, its accuracy is on a par with the batch-form unscented Kalman filter.
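The pseudo-observation idea that SPUKF builds on can be shown in a linear toy example: an equality constraint D x = d is treated as an extra, essentially noise-free measurement and applied as a second Kalman update after the ordinary one. This sketch shows only that constraint-as-measurement step, not the paper's unscented, two-chain construction; all numbers are illustrative.

```python
import numpy as np

def kalman_update(x, P, H, z, R):
    """Standard linear Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# state x = (position, velocity); constraint: position + velocity = 1
x = np.array([0.8, 0.5])
P = np.eye(2)
D = np.array([[1.0, 1.0]])   # constraint "measurement" matrix
d = np.array([1.0])
eps = 1e-9                   # tiny pseudo-observation noise keeps S invertible
xc, Pc = kalman_update(x, P, D, d, np.array([[eps]]))
```

After the pseudo-observation update, the estimate satisfies the constraint to within the tiny noise floor (here it moves from (0.8, 0.5) to approximately (0.65, 0.35)).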

7.
Precision and consistency are important prerequisites for class models to conform to their intended domain semantics. Precision can be achieved by augmenting models with design constraints and consistency can be achieved by avoiding contradictory constraints. However, there are different views of what constitutes a contradiction for design constraints. Moreover, state-of-the-art analysis approaches for proving constrained models consistent either scale poorly or require the use of interactive theorem proving. In this paper, we present a heuristic approach for efficiently analyzing constraint specifications built from constraint patterns. This analysis is based on precise notions of consistency for constrained class models and exploits the semantic properties of constraint patterns, thereby enabling syntax-based consistency checking in polynomial-time. We introduce a consistency checker implementing these ideas and we report on case studies in applying our approach to analyze industrial-scale models. These studies show that pattern-based constraint development supports the creation of concise specifications and provides immediate feedback on model consistency.

8.
To address the low matching efficiency and high maintenance cost of traditional firewalls' linear rule matching, this paper proposes and implements Salist, an efficient and flexible Netfilter extension framework for filtering on IP address sets. Salist comprises a table-management module based on kernel virtual files, an in-table rule-management module that automatically de-duplicates, merges, and sorts IP address sets, and an efficient packet-matching module based on binary search. Theoretical analysis and practical testing show that Salist reduces the time complexity of packet matching from the O(n) of traditional linear matching to O(log n), that rule merging cuts the kernel memory occupied by rule tables by more than 10%, and that the per-file rule-management mechanism simplifies maintenance of rule sets. The results indicate that deploying Salist in core network devices can greatly increase packet-forwarding rates while reducing the memory footprint and management burden of the rules.
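The two ideas in the abstract, merging rules into sorted non-overlapping ranges and matching by binary search, can be illustrated in user space (Salist itself is a kernel module; function names here are illustrative).

```python
import bisect
import ipaddress

def build_table(cidrs):
    """Merge CIDR rules into sorted, non-overlapping (lo, hi) int ranges."""
    nets = sorted(ipaddress.ip_network(c) for c in cidrs)
    merged = []
    for n in nets:
        lo, hi = int(n.network_address), int(n.broadcast_address)
        if merged and lo <= merged[-1][1] + 1:  # overlapping or adjacent
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

def match(table, addr):
    """O(log n) membership test via binary search on range starts."""
    a = int(ipaddress.ip_address(addr))
    i = bisect.bisect_right(table, (a, float("inf"))) - 1
    return i >= 0 and table[i][0] <= a <= table[i][1]
```

Two adjacent /24 rules collapse into one range, so both the table size and the number of comparisons per packet shrink, which is the point of the O(log n) claim.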

9.
False data injection is a serious threat to wireless sensor networks. Since most existing false-data filtering schemes consider neither node-identity attacks nor the capture of intermediate nodes by an attacker, a false-data filtering scheme resistant to node-identity attacks is proposed. The scheme not only verifies and filters data during forwarding, but also verifies the identities of the nodes that cooperatively generate a sensing report. Security analysis and performance evaluation show that the scheme resists various attacks and has a clear advantage in storage overhead over comparable schemes, and that its false-data filtering capability and energy savings increase markedly with the number of hops a packet is forwarded.
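The en-route filtering pattern such schemes share can be illustrated as follows: a report must carry MACs from several distinct endorsing nodes, and each forwarder that shares a key with an endorser re-checks that MAC, dropping forged reports within a few hops. Key distribution and the paper's added identity checks are abstracted away; all names are illustrative.

```python
import hashlib
import hmac

def endorse(report, node_id, key):
    """A sensing node endorses a report with an HMAC over report + id."""
    mac = hmac.new(key, report + node_id.encode(), hashlib.sha256).digest()
    return (node_id, mac)

def forward_check(report, endorsements, known_keys, t=2):
    """A forwarder's filter: require t endorsers, verify the MACs it can."""
    if len(endorsements) < t:
        return False  # not enough distinct endorsers
    for node_id, mac in endorsements:
        if node_id in known_keys:
            expect = hmac.new(known_keys[node_id],
                              report + node_id.encode(),
                              hashlib.sha256).digest()
            if not hmac.compare_digest(mac, expect):
                return False  # forged endorsement: drop the report
    return True
```

A report endorsed with the correct keys passes, while one carrying a MAC made with the wrong key is dropped by the first forwarder that can check it.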

10.
Multi-passive-sensor systems are a common means of target tracking, and processing their bearing measurements is a prerequisite for stable control and nonlinear filtering. This study proposes a mathematical methodology that incorporates deterministic unscented transition rules into a stochastic sequential importance sampling framework and uses soft spatiotemporal constraints, comprising multi-view epipolar geometry constraints and numerical regularization, to solve the correspondence problem. A prototype measurement-driven target-tracking framework was developed that runs in real time and, as the simulation results show, achieves filtering improvements of 41%-46% in root-mean-square error and 43%-48% in root time-averaged mean-square error over the state-of-the-art multiple-model Rao-Blackwellised particle filtering method.

11.
Two combinatorial filtering methods for efficiently selecting reaction products with desired properties are presented. The first, "direct reactants" method is applicable only to those molecular properties that are strictly additive or approximately additive, with relatively small interference between neighboring fragments. This method uses only the molecular properties of reactants. The second, "basis products" method can be used to filter not only the strictly additive properties but also the approximately additive molecular properties where a certain degree of mutual influence occurs between neighboring fragments. This method requires the molecular properties of the "basis products," which are the products formed by combining all the reactants for a given reaction component with the simplest set of complementary reactant partners. There is a one-to-one correspondence between the reactants and the "basis products": the latter is a product representation of the former. The efficiency of both methods is further enhanced by a tree-sorting and hierarchical selection algorithm, which is performed on the reaction components in a limited space determined systematically from the filtering criteria. The methods are illustrated with product logPs, van der Waals volumes, solvent-accessible surface areas, and other product properties. Good results are obtained when filtering for a number of important molecular properties in a virtual library of 1.5 billion products.
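The leverage of the "direct reactants" idea can be sketched for a two-component reaction: when a property is (approximately) additive over fragments, a product passes a window filter iff the sum of its reactants' contributions does, so sorting one component lets whole branches of the combinatorial library be rejected without enumerating products. The reactant names and contribution values below are made up for illustration.

```python
def filter_pairs(comp_a, comp_b, lo, hi):
    """Keep reactant pairs whose additive property lands in [lo, hi].
    comp_a, comp_b: {reactant_name: property_contribution}."""
    b_sorted = sorted(comp_b.items(), key=lambda kv: kv[1])
    keep = []
    for name_a, pa in comp_a.items():
        for name_b, pb in b_sorted:
            if pa + pb > hi:
                break              # all later partners are larger: prune
            if pa + pb >= lo:
                keep.append((name_a, name_b))
    return keep
```

With a billion-scale library the early `break` is what matters: each reactant in the unsorted component only ever touches a contiguous window of sorted partners.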

12.
In large-scale distributed simulation, thousands of objects keep moving and interacting in a virtual environment, which produces a mass of messages. High level architecture (HLA) is the prevailing standard for modeling and simulation. It specifies two publish-subscribe mechanisms for message filtering: class-based and value-based. However, the two mechanisms can only judge whether a message is relevant to a subscriber or not. Lacking the ability to evaluate the degree of relevance, all relevant messages are delivered with the same priority even when congestion occurs, which significantly limits the scalability and performance of distributed simulation. Aiming to solve the relevance-evaluation problem, speed up message filtering, and filter more unnecessary messages, a new relevance-evaluation mechanism, Layer of Interest (LoI), is proposed in this paper. LoI defines a relevance classifier based on the impact of spatial distance on receiving attributes and attribute values. An adaptive publish-subscribe scheme was built on the basis of LoI. This scheme can discard most irrelevant messages directly. Run-time infrastructure (RTI) can also apply congestion control by reducing the frequency of sending or receiving object messages based on each object's LoI. The experiment results verify the efficiency of message filtering and RTI congestion control. Supported by the National Basic Research Program of China (Grant No. 2009CB320805), the National Natural Science Foundation of China (Grant No. 60603084), and the National High-Tech Research & Development Program of China (Grant No. 2006AA01Z331)
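A Layer-of-Interest style classifier can be sketched as a distance-banded relevance level that drives the update rate under congestion. The thresholds, rates, and function names below are illustrative, not the paper's definitions.

```python
def layer_of_interest(distance, thresholds=(100, 300, 600)):
    """Map sender-subscriber distance to a relevance layer (0 = highest)."""
    for layer, limit in enumerate(thresholds):
        if distance <= limit:
            return layer
    return len(thresholds)  # beyond every band: irrelevant

def send_interval(layer, base=0.05):
    """Congestion policy sketch: halve the update rate per layer,
    and drop messages for the irrelevant layer entirely."""
    if layer >= 3:
        return None  # filtered out, never sent
    return base * (2 ** layer)
```

A nearby object keeps its full 20 Hz update rate, a mid-distance one drops to 10 Hz, and far-away objects are filtered before they consume bandwidth.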

13.
Let F1,…,Fs ∈ R[X1,…,Xn] be polynomials of degree at most d, and suppose that F1,…,Fs are represented by a division-free arithmetic circuit of non-scalar complexity (size) L. Let A be the arrangement of R^n defined by F1,…,Fs. For any point x ∈ R^n, we consider the task of determining the signs of the values F1(x),…,Fs(x) (sign condition query) and the task of determining the connected component of A to which x belongs (point location query). By an extremely simple reduction to the well-known case where the polynomials F1,…,Fs are affine linear (i.e., polynomials of degree one), we show first that there exists a database of (possibly enormous) size s^O(L+n) which allows the evaluation of the sign condition query using only (Ln)^O(1) log(s) arithmetic operations. The key point of this paper is the proof that this upper bound is almost optimal. In passing, we show that the point location query can be evaluated using d^O(n) log(s) arithmetic operations. Based on a different argument, analogous complexity upper bounds are exhibited with respect to the bit model in case F1,…,Fs belong to Z[X1,…,Xn] and satisfy a certain natural genericity condition. Mutatis mutandis, our upper-bound results may be applied to the sparse and dense representations of F1,…,Fs.

14.
RDF Site Summaries constitute an application of RDF on the Web that has considerably grown in popularity. However, the way RSS systems operate today limits their scalability. Current RSS feed aggregators follow a pull-based architecture model, which is not going to scale with the increasing number of RSS feeds becoming available on the Web. In this paper, we introduce G-ToPSS, a scalable publish/subscribe system for selective information dissemination. G-ToPSS only sends newly updated information to the interested user and follows a push-based architecture model. G-ToPSS is particularly well suited for applications that deal with large-volume content distribution from diverse sources. G-ToPSS allows use of an ontology as a way to provide additional information about the data disseminated. We have implemented and experimentally evaluated G-ToPSS and we provide results demonstrating its scalability compared to alternative approaches. In addition, we describe an application of G-ToPSS and RSS to a Web-based content management system that provides an expressive, efficient, and convenient update-notification dissemination system.

15.
Recent advances in AI planning have centred upon the reduction of planning to a constraint satisfaction problem (CSP), enabling the application of the efficient search algorithms available in this area. This paper continues this approach, presenting a novel technique which exploits (restriction/relaxation-based) dynamic CSP (rrDCSP) in order to further improve planner performance. Using the standard Graphplan framework, it is shown how significant efficiency gains may be obtained by viewing plan extraction as the solution of a hierarchy of such rrDCSPs. Furthermore, by using flexible constraints as a formal foundation, it is shown how the traditional Boolean notion of planning can be extended to incorporate prioritised and preference-based information. Plan extraction in this context is shown to generalise the Boolean rrDCSP approach, being systematically supported by the recently developed solution techniques for dynamic flexible CSPs (DFCSPs). The proposed techniques are evaluated via benchmark Boolean problems and a novel flexible benchmark problem. Results obtained are very encouraging.

16.
Skyline queries have been increasingly used in multi-criteria decision making and data mining applications. They retrieve a set of interesting points from a potentially large set of data points. A point is said to be interesting if it is not dominated by any other point. The skyline cube (skycube) consists of the skylines of all possible non-empty subsets of a given set of dimensions. In this paper, we propose two algorithms for computing the skycube using bitmaps that are derivable from indexes. The Point-based skycube algorithm is an improvement over the existing Bitmap algorithm, extended to compute the skycube. The Point-based algorithm processes one point at a time to check for skylines in all subspaces. The Domain-based skycube algorithm views points as value combinations and probes the entire search space for potential skyline points. It significantly reduces bitmap access for low-cardinality dimensions. Our experimental study shows that the two algorithms strictly dominate, or are at least comparable to, the current skycube algorithm in most of the cases, suggesting that such an approach could be a useful addition to the set of skyline query processing techniques.
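The dominance test at the core of skyline and skycube computation can be written directly (the paper's bitmap encodings accelerate exactly this test; here it is done naively, with "smaller is better" in every dimension).

```python
from itertools import combinations

def dominates(p, q, dims):
    """p dominates q on dims: no worse everywhere, strictly better once."""
    return (all(p[d] <= q[d] for d in dims)
            and any(p[d] < q[d] for d in dims))

def skyline(points, dims):
    """Points not dominated by any other point in the given subspace."""
    return [p for p in points
            if not any(dominates(q, p, dims) for q in points)]

def skycube(points, n_dims):
    """Skylines of all non-empty dimension subsets."""
    cube = {}
    for k in range(1, n_dims + 1):
        for dims in combinations(range(n_dims), k):
            cube[dims] = skyline(points, dims)
    return cube
```

For four 2-D points, the skycube holds three skylines: one per single dimension (just the minima) and one for the full space, where (3,3) falls out because (2,2) dominates it.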

17.
18.
Non-local means filtering has excellent denoising performance, but its computational complexity is very high and the filtered image retains considerable structural residue. Preselection-based non-local means methods are studied, and the shortcomings of existing methods in extracting image-patch features are identified. Using gradient-domain singular value decomposition to extract the structural features of image patches, a fast non-local filtering method that effectively preserves detail is proposed. The main contributions are: (1) a robust preselection method based on local structural features; (2) an analysis of the relation between similarity-set size and filtering performance, with automatic selection of similar patches; (3) the construction of structure-similarity weight coefficients. The symmetry of the Euclidean distance is further exploited to increase running speed. Experimental results show that the method preserves image detail while removing noise, achieving a good balance between filtering performance and running speed.
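Preselection-based non-local means can be sketched in 1-D: candidate patches are first screened by a cheap structure feature (here just mean and variance; the paper's contribution is a gradient-domain SVD feature), and the expensive Euclidean weight is computed only for the survivors. All parameter values are illustrative.

```python
import numpy as np

def nlm_denoise(signal, half=1, h=0.5, tol=0.5):
    """1-D non-local means with feature-based patch preselection."""
    n = len(signal)
    patches = [signal[max(i - half, 0):i + half + 1] for i in range(n)]
    feats = [(np.mean(p), np.var(p)) for p in patches]
    out = np.empty(n)
    for i in range(n):
        weights, values = [], []
        for j in range(n):
            if len(patches[j]) != len(patches[i]):
                continue  # border patches of different length
            # preselection: skip structurally dissimilar patches cheaply
            if (abs(feats[i][0] - feats[j][0]) > tol
                    or abs(feats[i][1] - feats[j][1]) > tol):
                continue
            d2 = np.mean((patches[i] - patches[j]) ** 2)
            weights.append(np.exp(-d2 / h ** 2))
            values.append(signal[j])
        out[i] = np.average(values, weights=weights)  # i itself always passes
    return out
```

Because each output value is a convex combination of input values from structurally similar neighbourhoods, the result stays inside the signal's range while patches across an edge are rejected before any distance is computed.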

19.
20.
To address the noise sensitivity and long running time of the nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm, an improved NAS-RIF blind restoration algorithm is proposed. First, to improve the noise robustness and restoration quality of the original NAS-RIF algorithm, a new cost function is introduced. Second, to raise computational efficiency, a Haar wavelet transform is applied and NAS-RIF restoration is performed only on the low-frequency sub-band image, while the high-frequency sub-band information is predicted from the restored low-frequency sub-band via inter-band prediction. Finally, to keep the high-frequency information accurate, an inter-band prediction based on the minimum mean square error (MMSE) is proposed. Simulations on synthetically degraded and real images give signal-to-noise-ratio gains of 5.2216 dB and 8.1039 dB respectively. The results show that the algorithm suppresses noise well while preserving edge detail, and its computational efficiency is also greatly improved.
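The speed-up relies on splitting the image with a one-level Haar transform and running the expensive restoration only on the low-pass band. A 1-D sketch of the Haar analysis/synthesis pair is shown below; the restoration and MMSE prediction steps themselves are the paper's contribution and are not reproduced here.

```python
import numpy as np

def haar_forward(x):
    """One-level orthonormal Haar transform of an even-length signal."""
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-frequency (average) band
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-frequency (detail) band
    return lo, hi

def haar_inverse(lo, hi):
    """Perfect reconstruction from the two sub-bands."""
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x
```

Because the transform is invertible, restoring `lo` and predicting `hi` (rather than deconvolving it) still allows the full-resolution image to be rebuilt, at roughly a quarter of the per-band restoration cost in 2-D.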
