Similar Documents
20 similar documents found (search time: 31 ms)
1.
张永刚, 程竹元. 《计算机科学》 (Computer Science), 2018, 45(Z6): 41-45, 62
Constraint propagation techniques are crucial to the performance of constraint satisfaction problem (CSP) solving: they can thoroughly remove some locally inconsistent values in a preprocessing step, or efficiently prune the search tree during search. Max restricted path consistency (maxRPC) is a recently proposed strong consistency propagation algorithm that can delete more inconsistent values and has achieved good results on hard problems. This paper surveys and compares the arc consistency algorithm AC, the maxRPC algorithm, and their related algorithms AC3, AC3rm, maxRPC1, maxRPC2, maxRPCrm, and maxRPC3, together with their variants. Experimental results on the Mistral solver validate the performance of the various algorithms.
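As background for the comparison above, here is a minimal AC-3 sketch in Python; the dict-based CSP encoding and function names are illustrative assumptions, not the implementations benchmarked in the paper:

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency (AC-3 style).

    domains: dict var -> set of values
    constraints: dict (x, y) -> predicate(a, b), one entry per directed arc
    Returns False on a domain wipe-out, True otherwise.
    """
    queue = deque(constraints.keys())           # start with all directed arcs
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):  # some value of x lost support
            if not domains[x]:
                return False                    # wipe-out: inconsistent
            # re-examine arcs pointing at x (except the one coming from y)
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True

def revise(domains, constraints, x, y):
    check = constraints[(x, y)]
    removed = {a for a in domains[x]
               if not any(check(a, b) for b in domains[y])}
    domains[x] -= removed
    return bool(removed)
```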

2.
Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms.(1) Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms.(2) In this paper, we give parallel algorithms for solving node and arc consistency. We show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. We give several parallel algorithms to do arc consistency and show that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented. This work was partially supported by NSF Grants MCS-8221750, DCR-8506393, and DMC-8502115.

3.
李宏博, 梁艳春, 李占山. 《软件学报》 (Journal of Software), 2015, 26(12): 3140-3150
This paper studies the max restricted path consistency algorithm (maxRPC) for solving constraint satisfaction problems. During execution, maxRPC performs many futile searches for path-consistency witnesses (PC-witnesses); effectively identifying and avoiding these futile searches can improve its solving efficiency. We first derive the probability that, for any two compatible values on a constraint, a PC-witness exists on an arbitrary path, and then use this probability to design a probabilistic max restricted path consistency algorithm (PmaxRPC), which we successfully apply to backtracking search for constraint satisfaction. Experimental results show that PmaxRPC avoids part of the futile PC-witness searches and is more efficient than maxRPC when solving constraint satisfaction problems; on some test instances, PmaxRPC is more efficient than both maxRPC and the most popular arc consistency algorithms.
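To make the PC-witness notion concrete, here is a small sketch (an illustrative formulation under assumed data structures, not the paper's PmaxRPC code) of checking whether a compatible pair (a, b) on constraint c(x, y) has a PC-witness through every third variable:

```python
def has_pc_witness(x, a, y, b, z, domains, compatible):
    """A pair (a, b) on c(x, y) is path-consistent through z if some
    value c in D(z) is compatible with both a and b.
    `compatible(u, au, v, av)` is an assumed constraint-check oracle."""
    return any(compatible(x, a, z, c) and compatible(y, b, z, c)
               for c in domains[z])

def pair_is_pc(x, a, y, b, domains, compatible, variables):
    # (a, b) survives maxRPC filtering only if a PC-witness exists
    # for every third variable z.
    return all(has_pc_witness(x, a, y, b, z, domains, compatible)
               for z in variables if z not in (x, y))
```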

4.
Memory Consistency Verification for Chip Multiprocessors
Memory consistency verification is an important part of the functional verification of chip multiprocessors. Since verifying whether the execution result of a parallel program complies with a memory consistency model is NP-hard in theory, existing approaches can only adopt incomplete methods with time complexity greater than O(n^3). We observe that in a multiprocessor system supporting write atomicity, a definite time order exists between two operations whose execution intervals do not overlap. By introducing this notion of time order, we design and implement LCHECK, a memory consistency verification tool with linear time complexity. LCHECK uses the time order to localize verification, so that in the directed graph representing a program's execution result, both the derivation of order edges and the correctness checks are confined to a bounded range. Compared with existing methods, LCHECK has lower time complexity and places no limits on program length or the number of accessed addresses, and is therefore more efficient. As an important verification tool for the Chinese chip multiprocessor Loongson-3 (龙芯3号), LCHECK has uncovered several design errors in the memory subsystem.
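The time-order observation translates into a simple rule for adding edges to the execution graph: if one operation's interval ends before another's begins, their order is fixed. A minimal illustration under an assumed (start, end) timestamp encoding, not the LCHECK implementation:

```python
def time_order_edges(ops):
    """ops: list of (op_id, start, end) with globally comparable timestamps.
    Yield an order edge (a, b) whenever a's interval ends strictly before
    b's begins, i.e. the operations do not overlap and a is definitely first."""
    for a, sa, ea in ops:
        for b, sb, eb in ops:
            if a != b and ea < sb:
                yield (a, b)
```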

5.
Constraint satisfaction problems are ubiquitous in artificial intelligence and many algorithms have been developed for their solution. This paper provides a unified survey of some of these, in terms of three classes: (i) tree search, (ii) arc consistency (AC), and (iii) hybrid tree search/arc consistency algorithms. It is shown that several important algorithms, when slightly rearranged, are of the latter hybrid form, but with arc consistency components that do not necessarily achieve full arc consistency at the tree nodes. Accordingly, we define several new partial AC procedures, AC1/5, AC1/4, AC1/3, and AC1/2, analogous to the well-known full AC algorithms which Mackworth has called AC1, AC2, and AC3. The fractional suffixes on our AC algorithms are roughly proportional to the degree of partial arc consistency they achieve. Unlike traditional versions, our AC algorithms (full and partial) are presented in a parameterized form to allow them to be embedded efficiently at the nodes of a tree search process. Algorithm complexities are compared empirically, using the n-queens problem and a new version called confused n-queens. Gaschnig's Backmarking (a tree search algorithm) and Haralick's Forward Checking (a hybrid algorithm) are found to be the most efficient. For the hybrid algorithms, we find that it pays to do little arc consistency processing at the nodes, incurring more nodes, but sufficiently reducing the work per node so as to obtain less work over the whole tree. The unified view taken here suggests several new algorithms. Preliminary results show one of these to be the best algorithm so far.
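Since Forward Checking comes out best in the survey above, a minimal sketch may help; the recursive formulation and names below are generic assumptions, not the paper's parameterized algorithms:

```python
import copy

def forward_checking(assignment, domains, variables, compatible):
    """Hybrid tree search: after assigning a variable, prune the domains
    of unassigned variables against that assignment only (partial AC).
    `compatible(x, a, y, b)` is an assumed binary-constraint oracle."""
    if len(assignment) == len(variables):
        return assignment                       # all variables assigned
    x = next(v for v in variables if v not in assignment)
    for a in domains[x]:
        pruned = copy.deepcopy(domains)
        pruned[x] = {a}
        ok = True
        for y in variables:                     # filter future domains
            if y in assignment or y == x:
                continue
            pruned[y] = {b for b in pruned[y] if compatible(x, a, y, b)}
            if not pruned[y]:                   # wipe-out: a cannot extend
                ok = False
                break
        if ok:
            result = forward_checking({**assignment, x: a},
                                      pruned, variables, compatible)
            if result is not None:
                return result
    return None                                 # backtrack
```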

6.
Two novel algorithms for the fast computation of the Zernike and pseudo-Zernike moments are presented in this paper. The proposed algorithms are very useful, particularly when the computed moments are used as discriminative features in pattern classification applications, where the computation of single moments of several orders is required. The derivation of the algorithms is based on the elimination of factorial computations, by recursively computing the fractional terms of the orthogonal polynomials being used. The newly introduced algorithms are compared to the direct methods, which are the only methods that permit the computation of single moments of any order. The computational complexity of the proposed method is O(p^2) in multiplications, with p being the moment order, while the corresponding complexity of the direct method is O(p^3). Appropriate experiments justify the superiority of the proposed recursive algorithms over the direct ones, establishing them as an alternative to the original algorithms for the fast computation of the Zernike and pseudo-Zernike moments.
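The factorial-elimination idea can be illustrated as follows: the coefficients of the Zernike radial polynomial obey a term-to-term ratio recurrence, so each coefficient costs O(1) instead of several factorial evaluations. This is a sketch of that general idea in my own formulation, not the paper's specific algorithm:

```python
from math import comb

def zernike_radial_coeffs(p, q):
    """Coefficients c_s of R_pq(rho) = sum_s c_s * rho^(p - 2s),
    computed without evaluating any factorial.

    Direct formula: c_s = (-1)^s (p-s)! / (s! ((p+q)/2 - s)! ((p-q)/2 - s)!).
    Successive terms satisfy the exact integer ratio
        c_{s+1} = -c_s * (a - s) * (b - s) / ((p - s) * (s + 1)),
    with a = (p+|q|)/2 and b = (p-|q|)/2."""
    q = abs(q)
    assert p >= q >= 0 and (p - q) % 2 == 0
    a, b = (p + q) // 2, (p - q) // 2
    c = comb(p, a)                  # c_0 = p! / (a! b!) since p = a + b
    coeffs = [c]
    for s in range(b):              # produces c_1 .. c_b
        c = -c * (a - s) * (b - s) // ((p - s) * (s + 1))
        coeffs.append(c)
    return coeffs

# Example: R_40(rho) = 6 rho^4 - 6 rho^2 + 1
assert zernike_radial_coeffs(4, 0) == [6, -6, 1]
```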

7.
This paper presents a novel domain-consistency algorithm which does not maintain supports dynamically during propagation, but rather maintains forbidden values. It introduces the optimal NAC4 (negative AC4) algorithm based on this idea, as an instance of the generic algorithm AC5. The paper then shows how forbidden values and supports can be used jointly to achieve domain consistency on logical combinations of constraints and to compute validity as well as entailment of constraints. The combination of NAC4 and AC4, denoted by PNAC4, achieves domain consistency in time O(ed) for classes of constraints in which the number of supports is O(d^2) but the number of forbidden values is O(d), or conversely. The paper also presents a simple variant of AC3, denoted PNAC3. Both PNAC4 and PNAC3 are especially efficient on classes of constraints offering an O(1) getSupports or getForbidden function. Experimental results show that, on these particular classes of constraints, the joint exploitation of supports and forbidden values outperforms the standard AC algorithms, and that the use of a specialized getSupports or getForbidden function enhances the efficiency of the algorithms, especially for PNAC3, which comes very close to the efficiency of totally dedicated consistency algorithms.
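The forbidden-values idea admits a compact counting formulation: a value keeps a support on a constraint as long as its forbidden partners do not cover the whole partner domain. The sketch below illustrates that invariant with a set-based encoding of my own (the actual NAC4 uses counters and finer bookkeeping):

```python
def supported(x, a, y, domains, forbidden):
    """(x, a) has a support in D(y) iff some value of D(y) is not forbidden.

    forbidden: dict (x, a, y) -> set of values b with (a, b) disallowed
    (assumed precomputed once, in the spirit of an AC4-style init)."""
    banned = forbidden.get((x, a, y), set())
    return len(banned & domains[y]) < len(domains[y])

def prune(x, y, domains, forbidden):
    # Remove values of x that have lost their last support in D(y).
    dead = {a for a in domains[x]
            if not supported(x, a, y, domains, forbidden)}
    domains[x] -= dead
    return dead
```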

8.
《国际计算机数学杂志》 (International Journal of Computer Mathematics), 2012, 89(5): 1082-1096
In this paper, we present a new second-order Mehrotra-type predictor–corrector algorithm for semi-definite programming (SDP). The proposed algorithm is based on a new wide neighbourhood. The analysis hinges on a key inequality, from which convergence is shown for a specific class of search directions. In particular, the complexity bound is O(√n log ε^(-1)) for the Nesterov–Todd search direction and O(n log ε^(-1)) for the Helmberg–Kojima–Monteiro search direction. The derived complexity bounds coincide with the best theoretical complexity bounds known so far for SDP. We also provide some preliminary numerical results.

9.
《国际计算机数学杂志》 (International Journal of Computer Mathematics), 2012, 89(12): 1465-1476
A finite binary constraint satisfaction problem (CSP) consists of a set of n problem variables, a domain of d potential values for each variable, and a set of m binary constraints, each involving only two variables at a time. A solution to such a CSP is an assignment of a value to each variable that violates none of the constraints. CSPs belong to the class of NP-complete problems. Backtracking and its variants have generally been used for solving CSPs. The class of partial constraint satisfaction problems (PCSPs) is the subclass of CSPs that are either too difficult to solve or are unsolvable; near-optimal solutions to these problems are always desired.

In this article, we consider only finite binary CSPs and PCSPs and develop a method of time complexity O(n^2 d^2) that obtains a near-optimal solution for them. The method's performance, in terms of the average number of consistency checks and the average number of constraint violations, is measured on various randomly generated binary CSPs and compared with the branch and bound (BB) method used to obtain the same solution. BB is a widely used optimization technique that may be viewed as a variation of backtracking, making it a natural choice as the backtracking analog for finding optimal partial solutions to PCSPs. The proposed method moves much faster to the solution: it requires far fewer consistency checks than BB, while both methods yield the same average number of constraint violations. An upper bound on the distance of the solution from the optimal solution is obtained analytically as ⌈n(n−2)(d−2)/(d−1)⌉.
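As context for the BB baseline, here is a minimal branch-and-bound sketch that searches for a complete assignment minimizing constraint violations; the formulation and the `violates` oracle are my own illustrative assumptions, not the article's method:

```python
def bb_min_violations(variables, domains, violates):
    """Branch and bound for a PCSP: explore assignments depth-first,
    pruning any branch whose violation count already reaches the best
    complete assignment found so far.

    violates(x, a, y, b) -> True if the pair breaks a binary constraint
    (an assumed oracle over the problem's constraints)."""
    best_cost, best_sol = len(variables) ** 2, None   # loose initial bound

    def recurse(i, assignment, cost):
        nonlocal best_cost, best_sol
        if cost >= best_cost:
            return                                    # bound: prune branch
        if i == len(variables):
            best_cost, best_sol = cost, dict(assignment)
            return
        x = variables[i]
        for a in domains[x]:
            extra = sum(violates(x, a, y, b) for y, b in assignment.items())
            recurse(i + 1, {**assignment, x: a}, cost + extra)

    recurse(0, {}, 0)
    return best_sol, best_cost
```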

10.
This paper analyzes the complexity of heuristic search algorithms, i.e., algorithms which find the shortest path in a graph by using an estimate to guide the search. In particular, algorithm A*, due to Hart, Nilsson and Raphael, is shown to require O(2^N) steps, in the worst case, for searching a graph with N nodes, if the so-called "consistency assumption" does not hold for the estimate. Furthermore, a new search algorithm is presented which runs in O(N^2) steps in the worst case and which never requires more steps than A*.
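For reference, a standard A* skeleton is sketched below (a generic textbook formulation, not the paper's improved algorithm or its worst-case construction):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Shortest path guided by an estimate h(node).
    neighbors(n) yields (m, edge_cost) pairs; h is the heuristic estimate.
    If h is consistent, each node is expanded at most once."""
    g = {start: 0}
    parent = {start: None}
    frontier = [(h(start), start)]              # priority = g + h
    while frontier:
        f, n = heapq.heappop(frontier)
        if n == goal:                           # reconstruct the path
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1], g[goal]
        if f > g[n] + h(n):                     # stale queue entry: skip
            continue
        for m, cost in neighbors(n):
            if g[n] + cost < g.get(m, float("inf")):
                g[m] = g[n] + cost
                parent[m] = n
                heapq.heappush(frontier, (g[m] + h(m), m))
    return None, float("inf")                   # goal unreachable
```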

11.
Given a file of N records, each of which has k keys, worst-case analyses of region and partial region queries in multidimensional binary search trees and balanced quad trees are presented. It is shown that the search algorithms proposed in [1, 3] run in time O(k·N^(1-1/k)) for region queries in both tree structures. For partial region queries with s keys specified, the search algorithms run in at most O(s·N^(1-1/k)) time in both structures. This author's research was supported in part by the National Science Foundation under grant MCS-76-17321 and in part by the Joint Service Electronics Program under contract DAAB-07-72-C-0259.
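A region query on a multidimensional binary search (k-d) tree descends only into subtrees whose half-space can intersect the query box, which is where the O(k·N^(1-1/k)) behavior comes from. The sketch below is a generic illustration with assumed node fields, not the cited algorithms verbatim:

```python
class Node:
    def __init__(self, point, left=None, right=None):
        self.point, self.left, self.right = point, left, right

def region_query(node, lo, hi, depth=0, out=None):
    """Report all points p with lo[i] <= p[i] <= hi[i] for every key i.
    The splitting dimension cycles with depth, as in a standard k-d tree."""
    if out is None:
        out = []
    if node is None:
        return out
    k = len(lo)
    d = depth % k                               # discriminator at this level
    if all(lo[i] <= node.point[i] <= hi[i] for i in range(k)):
        out.append(node.point)
    if lo[d] <= node.point[d]:                  # box may reach left half-space
        region_query(node.left, lo, hi, depth + 1, out)
    if node.point[d] <= hi[d]:                  # box may reach right half-space
        region_query(node.right, lo, hi, depth + 1, out)
    return out
```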

12.
Operations on basic data structures such as queues, priority queues, stacks, and counters can dominate the execution time of a parallel program due to both their frequency and their coordination and contention overheads. There are considerable performance payoffs in developing highly optimized, asynchronous, distributed, cache-conscious, parallel implementations of such data structures. Such implementations may employ a variety of tricks to reduce latencies and avoid serial bottlenecks, as long as the semantics of the data structure are preserved. The complexity of the implementation and the difficulty in reasoning about asynchronous systems increases concerns regarding possible bugs in the implementation. In this paper we consider postmortem, black-box procedures for testing whether a parallel data structure behaved correctly. We present the first systematic study of algorithms and hardness results for such testing procedures, focusing on queues, priority queues, stacks, and counters, under various important scenarios. Our results demonstrate the importance of selecting test data such that distinct values are inserted into the data structure (as appropriate). In such cases we present an O(n) time algorithm for testing linearizable queues, an O(n log n) time algorithm for testing linearizable priority queues, and an O(np^2) time algorithm for testing sequentially consistent queues, where n is the number of data structure operations and p is the number of processors. In contrast, we show that testing such data structures for executions with arbitrary input values is NP-complete. Our results also help clarify the thresholds between scenarios that admit polynomial time solutions and those that are NP-complete. Our algorithms are the first nontrivial algorithms for these problems.

13.
Consistency enforcement is used to prune the search tree of a constraint satisfaction problem (CSP). Arc consistency (AC) is a well-studied consistency level with many implementations. Bounds consistency (BC), a looser consistency level, is known to have the same time complexity as AC. To solve a CSP, we either implement an algorithm of our own or employ an existing solver; in any case, at some point we have to decide between enforcing AC or BC. As this choice is currently more or less predefined and made without considering the individualities of each CSP, this study attempts to make the decision deterministic and efficient, without the need for trial and error. We find that BC fits better when solving a CSP whose maximum domain size is greater than its number of constrained variables. We study the behavior of maintaining arc or bounds consistency during search, and we show how the overall search method's complexity is affected by the employed consistency level.
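The reported decision rule fits in a few lines; the function below is only a restatement of that rule under assumed inputs:

```python
def choose_consistency(domains, constrained_vars):
    """Pick the consistency level to maintain during search, following
    the rule reported above: prefer bounds consistency (BC) when the
    maximum domain size exceeds the number of constrained variables,
    and arc consistency (AC) otherwise."""
    max_domain = max(len(d) for d in domains.values())
    return "BC" if max_domain > len(constrained_vars) else "AC"
```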

14.
Efficient Pose Clustering Using a Randomized Algorithm
Pose clustering is a method to perform object recognition by determining hypothetical object poses and finding clusters of the poses in the space of legal object positions. An object that appears in an image will yield a large cluster of such poses close to the correct position of the object. If there are m model features and n image features, then there are O(m^3 n^3) hypothetical poses that can be determined from minimal information for the case of recognition of three-dimensional objects from feature points in two-dimensional images. Rather than clustering all of these poses, we show that pose clustering can have equivalent performance for this case when examining only O(mn) poses, due to correlation between the poses, if we are given two correct matches between model features and image features. Since we do not usually know two correct matches in advance, this property is used with randomization to decompose the pose clustering problem into O(n^2) problems, each of which clusters O(mn) poses, for a total complexity of O(mn^3). Further speedup can be achieved through the use of grouping techniques. This method also requires little memory and makes the use of accurate clustering algorithms less costly. We use recursive histogramming techniques to perform clustering in time and space that is guaranteed to be linear in the number of poses. Finally, we present results demonstrating the recognition of objects in the presence of noise, clutter, and occlusion.

15.
As trees are used in a wide variety of application areas, the comparison of trees arises in many guises. Here we consider two generalizations of classical tree pattern matching, which consists of determining if one tree is isomorphic to a subgraph of another. For the embedding problems of subgraph isomorphism and topological embedding, we present algorithms for determining a largest tree embeddable in two trees T and T' (a largest subtree) and a smallest tree in which each of T and T' can be embedded (a smallest supertree). Both subtrees and supertrees can be used in a variety of different applications. For example, when each of the two trees contains partial information about a data set, such as the evolution of a set of species, the subtree or supertree corresponds to a structuring of the data in a manner consistent with both original trees. The size of a subtree or supertree of two trees can also be used to measure the similarity between two arrangements of data, whether images, documents, or RNA secondary structures. In this paper we present a general paradigm for sequential and parallel subtree and supertree algorithms for subgraph isomorphism and topological embedding. Our sequential algorithms run in time O(n^2.5 log n) and our parallel algorithms in time O(log^3 n) on a randomized CREW PRAM using a polynomial number of processors. In addition, we produce better algorithms for these problems when the underlying trees are ordered, that is, when the children of each node have a left-to-right ordering associated with them. In particular, we obtain O(n^2)-time sequential algorithms and O(log^3 n)-time deterministic parallel algorithms on CREW PRAMs for both embeddings.

16.
A generalization of binary search trees and binary split trees is developed that takes advantage of two-way key comparisons: the two-way comparison tree. The two-way comparison tree has little use in dynamic situations but is an improvement over the optimal binary search tree and the optimal binary split tree for static data sets. An O(n) time and space algorithm is presented for constructing an optimal two-way comparison tree when access probabilities are equal, and an exact formula for the optimal cost is developed. The optimal two-way comparison tree for unequal access frequencies, both successful and unsuccessful, is computable in O(n^5) time and O(n^3) space using algorithms similar to those for the optimal binary split tree. The optimal two-way comparison tree can improve search cost by up to 50% over the optimal binary search tree.

17.
Graph-Modeled Data Clustering: Exact Algorithms for Clique Generation
We present efficient fixed-parameter algorithms for the NP-complete edge modification problems Cluster Editing and Cluster Deletion. Here, the goal is to make the fewest changes to the edge set of an input graph such that the new graph is a vertex-disjoint union of cliques. Allowing up to k edge additions and deletions (Cluster Editing), we solve this problem in O(2.27^k + |V|^3) time; allowing only up to k edge deletions (Cluster Deletion), we solve this problem in O(1.77^k + |V|^3) time. The key ingredients of our algorithms are two easy-to-implement bounded search tree algorithms and an efficient polynomial-time reduction to a problem kernel of size O(k^3). This improves and complements previous work. Finally, we discuss further improvements on search tree sizes using computer-generated case distinctions.
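The bounded search tree idea is simple to sketch: find a conflict triple (an induced path u–v–w with u, w non-adjacent) and branch on the three ways to repair it, giving a plain O(3^k) tree; the paper's refined case distinctions shrink this to 2.27^k. The code below is a bare-bones illustration of the basic branching rule under an adjacency-set graph encoding (my assumption), without kernelization or refined cases:

```python
def cluster_editing(adj, k):
    """Can the graph be turned into disjoint cliques with <= k edge edits?
    adj: dict v -> set of neighbors (edited in place, so pass a copy)."""
    triple = find_conflict(adj)
    if triple is None:
        return True                             # already a union of cliques
    if k == 0:
        return False
    u, v, w = triple                            # edges uv, vw exist; uw missing
    for x, y in ((u, v), (v, w), (u, w)):       # delete uv, delete vw, add uw
        toggle(adj, x, y)                       # apply the edit
        if cluster_editing(adj, k - 1):
            toggle(adj, x, y)                   # restore graph, keep answer
            return True
        toggle(adj, x, y)                       # undo and try the next branch
    return False

def toggle(adj, x, y):
    if y in adj[x]:
        adj[x].discard(y); adj[y].discard(x)
    else:
        adj[x].add(y); adj[y].add(x)

def find_conflict(adj):
    # An induced P3: v adjacent to both u and w, but u and w non-adjacent.
    for v in adj:
        for u in adj[v]:
            for w in adj[v]:
                if u != w and w not in adj[u]:
                    return u, v, w
    return None
```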

18.
Computing the minimal representation of a given set of constraints (a CSP) over the Point Algebra (PA) is a fundamental temporal reasoning problem. The main property of a minimal CSP over PA is that the strongest entailed relation between any pair of variables in the CSP can be derived in constant time. We study some new methods for solving this problem which exploit and extend two prominent graph-based representations of a CSP over PA: the timegraph and the series-parallel (SP) metagraph. Essentially, these are graphs partitioned into sets of chains and series-parallel subgraphs, respectively, on which the search is supported by a metagraph data structure. The proposed approach is based on computing the metagraph closure for these representations, which can be accomplished by some methods studied in the paper. In comparison with the known techniques based on enforcing path consistency, under certain conditions about the structure of the input CSP and the size of the generated metagraph, the proposed metagraph closure approach has better worst-case time and space complexity. Moreover, for every sparse CSP over the convex PA, the time complexity is reduced to O(n^2) from O(n^3), where n is the number of variables involved in the CSP. An extensive experimental analysis presented in the paper compares the proposed techniques and other known algorithms. These experimental results identify the best performing methods and show that, in practice, for CSPs exhibiting chain or SP-graph structure and for randomly generated (both sparse and dense) CSPs, the metagraph closure approach is significantly faster than the approach based on enforcing path consistency.

19.
Thinning algorithms based on quadtree and octree representations
Thinning is a critical pre-processing step to obtain skeletons for pattern analysis. Quadtrees and octrees are hierarchical data representations in image processing and computer graphics. In this paper, we present new 2-D area-based and 3-D surface-based thinning algorithms for directly converting quadtree and octree representations to skeletons. The computational complexity of our thinning algorithm for a 2-D or a 3-D image with side length N is O(N^2) or O(N^3), respectively, which is more efficient than the existing algorithms of O(N^3) or O(N^4). Furthermore, our thinning algorithms can lessen boundary noise spurs and are suited for parallel implementation.

20.
Schöning. 《Algorithmica》, 2008, 32(4): 615-623
A simple probabilistic algorithm for solving the NP-complete problem k-SAT is reconsidered. This algorithm follows a well-known local-search paradigm: randomly guess an initial assignment and then, guided by those clauses that are not satisfied, try to find a satisfying assignment by successively choosing a random literal from such a clause and changing the corresponding truth value. Papadimitriou [11] introduced this random approach and applied it to the case of 2-SAT, obtaining an expected O(n^2) time bound. The novelty here is to restart the algorithm after 3n unsuccessful steps of local search. The analysis shows that for any satisfiable k-CNF formula with n variables, the expected number of repetitions until a satisfying assignment is found this way is (2·(k-1)/k)^n. Thus, for 3-SAT the algorithm presented here has a complexity which is within a polynomial factor of (4/3)^n. This is the fastest and also the simplest among the algorithms known to date for 3-SAT achieving an o(2^n) time bound. Moreover, the analysis is quite simple compared with other such algorithms considered before.
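Schöning's procedure is short enough to state directly; here is a sketch in Python (the CNF encoding and the restart count are illustrative assumptions):

```python
import random

def schoening_ksat(clauses, n, restarts=10000):
    """Randomized local search for k-SAT in the style described above.
    clauses: list of tuples of nonzero ints; literal v > 0 means x_v,
    v < 0 means NOT x_(-v). Variables are numbered 1..n.
    Returns a satisfying assignment (dict) or None after all restarts."""
    for _ in range(restarts):
        assign = {v: random.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(3 * n):                  # local-search budget: 3n flips
            unsat = next((c for c in clauses
                          if not any((lit > 0) == assign[abs(lit)]
                                     for lit in c)), None)
            if unsat is None:
                return assign                   # all clauses satisfied
            lit = random.choice(unsat)          # random literal of a bad clause
            assign[abs(lit)] = not assign[abs(lit)]
    return None                                 # give up (may be unsatisfiable)
```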
