Similar Documents
 20 similar documents found
1.
Partial-order reduction is one of the main techniques used to tackle the combinatorial state explosion problem occurring in explicit-state model checking of concurrent systems. The reduction is performed by exploiting the independence of concurrently executed events, which allows portions of the state space to be pruned. An important condition for the soundness of partial-order-based reduction algorithms is a condition that prevents indefinite ignoring of actions when pruning the state space. This condition is commonly known as the cycle proviso. In this paper, we present a new version of this proviso, which is applicable to a general search algorithm skeleton that we refer to as the general state exploring algorithm (GSEA). GSEA maintains a set of open states from which states are iteratively selected for expansion and moved to a closed set of states. Depending on the data structure used to represent the open set, GSEA can be instantiated as a depth-first, a breadth-first, or a directed search algorithm such as Best-First Search or A*. The proviso is characterized by reference to the open and closed set of states of the search algorithm. As a result, it can be computed in an efficient manner during the search based on local information. We implemented partial-order reduction for GSEA based on our proposed proviso in the tool HSF-SPIN, an extension of the explicit-state model checker SPIN for directed model checking. We evaluate the state space reduction achieved by partial-order reduction using the proposed proviso by comparing it on a set of benchmark problems to the use of other provisos. We also compare the use of breadth-first search (BFS) and A*, two algorithms ensuring that counterexamples of minimal length will be found, together with the proviso that we propose.
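The GSEA skeleton described above (an open set of states awaiting expansion and a closed set of already expanded states) can be sketched roughly as follows; the `successors` and `ample_set` hooks, and the way the proviso is checked against the open and closed sets, are illustrative assumptions rather than the paper's exact formulation.

```python
from collections import deque

def gsea(initial_state, successors, ample_set=None):
    """General State Exploring Algorithm sketch with open/closed sets.

    successors(s) returns the full successor list of state s;
    ample_set(s, succs) optionally returns a reduced subset of succs,
    mimicking partial-order reduction.  Both hooks are hypothetical.
    """
    open_set = deque([initial_state])      # FIFO -> breadth-first instantiation
    closed = set()
    while open_set:
        state = open_set.popleft()         # the selection rule fixes the search order
        closed.add(state)
        succs = list(successors(state))
        if ample_set is not None:
            reduced = list(ample_set(state, succs))
            # Proviso-style check (illustrative): accept the reduced set only if
            # some successor is neither expanded nor queued, otherwise expand
            # fully so that no enabled action is ignored forever.
            if any(s not in closed and s not in open_set for s in reduced):
                succs = reduced
        for s in succs:
            if s not in closed and s not in open_set:
                open_set.append(s)
    return closed
```

Replacing the FIFO deque with a stack or a priority queue keyed by a heuristic would instantiate the same skeleton as depth-first or directed (A*-style) search, which is how the abstract describes GSEA.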

2.
In a grid, one often needs to construct a breadth-first spanning tree rooted at a given source node to support broadcast and aggregation communication. Existing breadth-first search algorithms are synchronous, graph-theoretic algorithms and cannot be used in an asynchronous grid system. In the course of developing the national high-performance computing environment, a theoretical grid model was built on the basis of asynchronous automata, and on top of this model an asynchronous grid breadth-first search algorithm, GridBFS, was implemented. It is further proved that GridBFS eventually produces a breadth-first spanning tree and is able to detect its own termination.
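GridBFS itself is asynchronous and message-driven; for orientation, a minimal synchronous sketch of constructing the breadth-first spanning tree it ultimately produces (parent pointers rooted at the source) might look like this, with the adjacency-dictionary representation assumed for illustration.

```python
from collections import deque

def bfs_spanning_tree(adjacency, source):
    """Return a dict mapping each reachable node to its BFS-tree parent."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, ()):
            if neighbor not in parent:      # first discovery fixes the parent
                parent[neighbor] = node
                queue.append(neighbor)
    return parent

# Example: a small grid-like topology rooted at "A"
tree = bfs_spanning_tree({"A": ["B", "C"], "B": ["D"], "C": ["D"]}, "A")
```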

3.
Artificial Intelligence, 2006, 170(4-5): 385-408
Recent work shows that the memory requirements of A* and related graph-search algorithms can be reduced substantially by only storing nodes that are on or near the search frontier, using special techniques to prevent node regeneration, and recovering the solution path by a divide-and-conquer technique. When this approach is used to solve graph-search problems with unit edge costs, we show that a breadth-first search strategy can be more memory-efficient than a best-first strategy. We also show that a breadth-first strategy allows a technique for preventing node regeneration that is easier to implement and can be applied more widely. The breadth-first heuristic search algorithms introduced in this paper include a memory-efficient implementation of breadth-first branch-and-bound search and a breadth-first iterative-deepening A* algorithm that is based on it. Computational results show that they outperform other systematic search algorithms in solving a range of challenging graph-search problems.
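A hedged sketch of the breadth-first branch-and-bound idea (expanding the frontier layer by layer and pruning nodes whose lower bound g + h exceeds a known upper bound) is given below; the paper's frontier-only memory management and divide-and-conquer solution recovery are not reproduced here.

```python
def breadth_first_branch_and_bound(start, goal, successors, h, upper_bound):
    """Layered BFS with unit edge costs, pruning nodes with g + h > upper_bound."""
    frontier = {start}
    depth = 0                          # g-value shared by every node in the frontier
    seen = {start}
    while frontier:
        if goal in frontier:
            return depth
        next_layer = set()
        for node in frontier:
            for succ in successors(node):
                g = depth + 1
                if succ in seen or g + h(succ) > upper_bound:
                    continue           # prune: cannot beat the known solution
                seen.add(succ)
                next_layer.add(succ)
        frontier = next_layer
        depth += 1
    return None
```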

4.
Enhanced iterative-deepening search
Iterative-deepening searches mimic a breadth-first node expansion with a series of depth-first searches that operate with successively extended search horizons. They have been proposed as a simple way to reduce the space complexity of best-first searches like A* from exponential to linear in the search depth. But there is more to iterative-deepening than just a reduction of storage space. As the authors show, the search efficiency can be greatly improved by exploiting previously gained node information. The information management techniques considered here owe much to their counterparts from the domain of two-player games, namely the use of fast-execution memory functions to guide the search. The authors' methods not only save node expansions, but are also faster and easier to implement than previous proposals.
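As a rough illustration of combining iterative deepening with memory functions, the sketch below runs depth-first searches under a growing cost bound and caches backed-up heuristic estimates between iterations; the memo layout, unit edge costs, and function names are assumptions, not the authors' implementation.

```python
def ida_star_with_memory(start, goal, successors, h):
    """Iterative-deepening A* sketch that caches backed-up estimates across iterations."""
    memory = {}                                     # state -> refined heuristic estimate

    def heuristic(state):
        return memory.get(state, h(state))

    def bounded_dfs(state, g, bound, path):
        f = g + heuristic(state)
        if f > bound:
            return None, f                          # fail; report the exceeding f-value
        if state == goal:
            return g, f                             # success; report the solution cost
        minimum = float("inf")
        for succ in successors(state):
            if succ in path:                        # avoid cycles on the current path
                continue
            cost, next_f = bounded_dfs(succ, g + 1, bound, path | {succ})
            if cost is not None:
                return cost, next_f
            minimum = min(minimum, next_f)
        if minimum != float("inf"):
            memory[state] = minimum - g             # back up the improved estimate
        return None, minimum

    bound = heuristic(start)
    while bound != float("inf"):
        cost, bound = bounded_dfs(start, 0, bound, {start})
        if cost is not None:
            return cost
    return None
```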

5.
Although breadth-first search procedures cannot explore truly large search spaces, actual implementations of such procedures can result in surprisingly powerful problem-solvers that outperform more sophisticated heuristic search procedures. We describe two breadth-first search procedures. The first one, S&R, proves theorems from Whitehead and Russell's Principia and is compared to two versions of the Logic Theorist. Previous estimates of the size of the search space are significantly reduced. When theorems are proved in an optimal order, this order differs markedly from that found in Principia, while more general theorems than those of Principia are often found. The second system, S&M, adapts breadth-first search to locally infinite search spaces in systems of rewriting rules. S&M is compared extensively to the heuristic theorem-prover of Quinlan and Hunt, and to some other theorem provers.

6.
王治和, 谢斌. 《计算机科学》, 2008, 35(1): 126-127
Combining region encoding with a node-model mapping method, an extended storage scheme for relational databases is proposed. The structural join algorithm for parent/child relationships is improved by traversing the XML tree in breadth-first order. The improved algorithm reduces memory overhead, narrows the range of the lists that must be scanned, noticeably speeds up match lookup, and thus achieves query optimization.

7.
This paper describes using a data table as the storage structure and a tree control to display the nodes and hierarchical relationships of a tree. Tree storage and breadth-first search are implemented by creating and accessing the table.
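A minimal sketch of this idea, assuming a simple (id, parent, label) table in SQLite and a queue-driven breadth-first traversal that repeatedly queries for the children of the current node; the schema is an illustrative assumption, not the paper's.

```python
import sqlite3
from collections import deque

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tree (id INTEGER PRIMARY KEY, parent INTEGER, label TEXT)")
conn.executemany(
    "INSERT INTO tree VALUES (?, ?, ?)",
    [(1, None, "root"), (2, 1, "left"), (3, 1, "right"), (4, 2, "leaf")],
)

def bfs_labels(conn, root_id):
    """Visit nodes level by level by repeatedly querying the children of the frontier."""
    order = []
    queue = deque([root_id])
    while queue:
        node_id = queue.popleft()
        label, = conn.execute("SELECT label FROM tree WHERE id = ?", (node_id,)).fetchone()
        order.append(label)
        for (child_id,) in conn.execute("SELECT id FROM tree WHERE parent = ?", (node_id,)):
            queue.append(child_id)
    return order

print(bfs_labels(conn, 1))   # ['root', 'left', 'right', 'leaf']
```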

8.
A new algorithm for computing critical paths
王明福. 《计算机工程》, 2008, 34(9): 106-108
By defining the concept of a node-coded graph, a new critical-path algorithm that requires no topological sorting is proposed. The algorithm extends the adjacency-list storage of the graph so that graph storage and the solution process share the same storage space. Starting from the source node, all nodes in the graph are encoded by a breadth-first recursion using a weighted-maximum (max-plus) rule. Once the coded graph has been generated, a backward search yields all critical paths from the source to the sink together with their lengths. The algorithm is simpler and more intuitive than existing ones, needs less storage, and reduces the time complexity to O(n+e), better than the O(n^2) of existing algorithms.
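The node-encoding scheme is described only at a high level; the sketch below illustrates the max-plus labeling idea in a breadth-first, label-correcting style (a node's code is the longest known distance from the source, re-propagated whenever a predecessor improves it), followed by a backward trace over tight edges. The graph representation and names are assumptions, not the paper's algorithm.

```python
from collections import deque

def critical_path(edges, source, sink):
    """edges: list of (u, v, weight) in a DAG.  Returns (length, one critical path)."""
    succ, pred = {}, {}
    for u, v, w in edges:
        succ.setdefault(u, []).append((v, w))
        pred.setdefault(v, []).append((u, w))

    code = {source: 0}                      # longest known distance from the source
    queue = deque([source])
    while queue:                            # label-correcting, breadth-first order
        u = queue.popleft()
        for v, w in succ.get(u, ()):
            if code[u] + w > code.get(v, float("-inf")):
                code[v] = code[u] + w
                queue.append(v)             # re-relax successors of improved nodes

    # Backward search: follow predecessors whose edge is "tight" (code[v] == code[u] + w).
    path, node = [sink], sink
    while node != source:
        node = next(u for u, w in pred[node]
                    if code.get(u, float("-inf")) + w == code[node])
        path.append(node)
    return code[sink], list(reversed(path))

length, path = critical_path(
    [("s", "a", 3), ("s", "b", 2), ("a", "t", 4), ("b", "t", 6)], "s", "t")
# length == 8, path == ['s', 'b', 't']
```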

9.
A matching algorithm for general graphs based on depth-first search
For the matching problem on general graphs, Edmonds' algorithm, based on Berge's theorem, uses breadth-first search to find augmenting paths, and "blossoms" may appear in the graph. When this happens, the blossom must be contracted before the search continues, and once an augmenting path is found the contracted graph must be restored, which makes the algorithm complicated. The algorithm of Gabow et al. numbers the graph's vertices and edges in advance and uses additional arrays and virtual vertices to avoid handling blossoms; its complexity is O(n^3), but at the cost of higher space complexity. The depth-first-search-based algorithm proposed in this paper does not encounter blossoms while searching for augmenting paths and is comparatively simple; its time complexity is O(n*degree(n)), where degree(n) is the average vertex degree. In addition, when edges are added to or removed from the graph dynamically, the algorithm can quickly adjust the maximum matching; its space complexity stays of the same order, and the approach can also be extended to breadth-first search.

10.
When a Bayesian network (BN) is used for classification, the local model that directly contributes to predicting the target variable is called a general Bayesian network classifier (GBNC). The traditional route to deriving a GBNC is to first learn the complete BN, but existing BN structure learning algorithms limit the scale of applications. To avoid learning the global BN, a structure learning algorithm that performs only local search, IPC-GBNC, is proposed; it carries out a breadth-first search centered on the target variable node and limits the search depth to at most two levels. IPC-GBNC can be proved correct in theory, and experiments on simulated and real data further confirm its advantages in learning quality and efficiency: (1) it can output structures of the same or even higher quality than the PC algorithm, which performs a global search; (2) it consumes far less computation than global search; (3) it also achieves dimensionality reduction (similar to decision-tree learning algorithms). Compared with most classical classifiers, the GBNC achieves comparable classification performance while offering an intuitive, compact representation and powerful inference (including support for incomplete observations).

11.
Planning graphs have been shown to be a rich source of heuristic information for many kinds of planners. In many cases, planners must compute a planning graph for each element of a set of states, and the naive technique enumerates the graphs individually. This is equivalent to solving a multiple-source shortest path problem by iterating a single-source algorithm over each source. We introduce a data structure, the state agnostic planning graph, that directly solves the multiple-source problem for the relaxation introduced by planning graphs. The technique can also be characterized as exploiting the overlap present in sets of planning graphs. For the purpose of exposition, we first present the technique in deterministic (classical) planning to capture a set of planning graphs used in forward chaining search. A more prominent application of this technique is in conformant and conditional planning (i.e., search in belief state space), where each search node utilizes a set of planning graphs; an optimization to exploit state overlap between belief states collapses the set of sets of planning graphs to a single set. We describe another extension in conformant probabilistic planning that reuses planning graph samples of probabilistic action outcomes across search nodes to otherwise curb the inherent prediction cost associated with handling probabilistic actions. Finally, we show how to extract a state agnostic relaxed plan that implicitly solves the relaxed planning problem in each of the planning graphs represented by the state agnostic planning graph and reduces each heuristic evaluation to counting the relevant actions in the state agnostic relaxed plan. Our experimental evaluation (using many existing International Planning Competition problems from classical and non-deterministic conformant tracks) quantifies each of these performance boosts, and demonstrates that heuristic belief state space progression planning using our technique is competitive with the state of the art.

12.
Making full use of the structural characteristics of XML database documents and the principles of Dewey encoding, a query algorithm for encrypted XML data under the data service (DAS) model, ILISA, is designed. It transforms data retrieval over a tree structure into retrieval over a sequential list and applies interpolation search in place of depth-first and breadth-first traversal, which yields good time complexity. An XML index table data structure is also designed that greatly reduces the search space. Finally, a complexity analysis of ILISA is given, showing that the algorithm performs well.
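The central substitution the abstract describes, replacing tree traversal with interpolation search over an ordered list of numeric keys, can be sketched as follows; the keys themselves (derived from Dewey-style numbering) are an illustrative assumption.

```python
def interpolation_search(keys, target):
    """Search a sorted list of numeric keys, probing where the target 'should' be."""
    low, high = 0, len(keys) - 1
    while low <= high and keys[low] <= target <= keys[high]:
        if keys[high] == keys[low]:
            probe = low                     # avoid division by zero on equal keys
        else:
            # Linear interpolation between the end points of the remaining range.
            probe = low + (target - keys[low]) * (high - low) // (keys[high] - keys[low])
        if keys[probe] == target:
            return probe
        if keys[probe] < target:
            low = probe + 1
        else:
            high = probe - 1
    return -1

# Example: ordered keys derived from (hypothetical) Dewey-style node numbering
print(interpolation_search([101, 102, 103, 201, 202, 301], 202))   # -> 4
```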

13.
A great challenge in developing planning systems for practical applications is the difficulty of acquiring the domain information needed to guide such systems. This paper describes a way to learn some of that knowledge. More specifically, the following points are discussed. (1) We introduce a theoretical basis for formally defining algorithms that learn preconditions for Hierarchical Task Network (HTN) methods. (2) We describe the Candidate Elimination Method Learner (CaMeL), a supervised, eager, and incremental learning process for preconditions of HTN methods. We state and prove theorems about CaMeL's soundness, completeness, and convergence properties. (3) We present empirical results about CaMeL's convergence under various conditions. Among other things, CaMeL converges the fastest on the preconditions of the HTN methods that are needed the most often. Thus CaMeL's output can be useful even before it has fully converged.

14.
Three relevant areas of interest in symbolic Machine Learning are incremental supervised learning, multistrategy learning, and predicate invention. In many real-world tasks, new observations may point out the inadequacy of the learned model. In such a case, incremental approaches allow it to be adjusted, instead of learning a new model from scratch. Specifically, when a negative example is wrongly classified by a model, specialization refinement operators are needed. A powerful way to specialize a theory in Inductive Logic Programming is adding negated preconditions to concept definitions. This paper describes an empowered specialization operator that introduces the negation of conjunctions of preconditions using predicate invention. An implementation of the operator is proposed, and experiments purposely devised to stress it show that the proposed approach is correct and viable even under quite complex conditions.

15.
Searching of state transitions is an important subject of problem solving in artificial intelligence, computer science, engineering, and operations research. In artificial intelligence, a breadth-first search is optimal with uniform cost, but it takes considerable time to obtain a solution. Neural networks process state transitions in parallel and have learning ability. The authors have developed a search procedure for state transitions that resembles a breadth-first search, using neural networks. First, the input pattern states are self-organized in the neural network, which consists of a Kohonen layer followed by a state-planning layer. The state-planning layer makes lateral connections between the cells of transitions. Then, the initial and the target states are given as a problem. The network shows an optimal transition pathway of states in the neuron firings. Next, the state-transition procedure is developed for the formation of a concept for action planning. Here, as the action planning, an integration between the symbols and the action pattern is carried out in the extended neural network.

16.
We use case-injected genetic algorithms (CIGARs) to learn to competently play computer strategy games. CIGARs periodically inject individuals that were successful in past games into the population of the GA working on the current game, biasing search toward known successful strategies. Computer strategy games are fundamentally resource allocation games characterized by complex long-term dynamics and by imperfect knowledge of the game state. CIGAR plays by extracting and solving the game's underlying resource allocation problems. We show how case injection can be used to learn to play better from a human's or system's game-playing experience and our approach to acquiring experience from human players showcases an elegant solution to the knowledge acquisition bottleneck in this domain. Results show that with an appropriate representation, case injection effectively biases the GA toward producing plans that contain important strategic elements from previously successful strategies.
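The case-injection mechanism, periodically overwriting the weakest members of the GA population with individuals recorded from earlier successful runs, can be sketched roughly as follows; the toy fitness function, bit-string representation, and injection schedule are illustrative assumptions, not CIGAR itself.

```python
import random

def evolve_with_case_injection(population, fitness, case_base,
                               generations=50, inject_every=10, inject_count=2):
    """Toy GA loop (selection + mutation) with periodic case injection."""
    def mutate(individual):
        i = random.randrange(len(individual))
        return individual[:i] + [1 - individual[i]] + individual[i + 1:]

    for gen in range(generations):
        # Standard generational step: keep the fitter half, fill with mutated copies.
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]

        # Case injection: overwrite the weakest individuals with stored cases.
        if case_base and gen % inject_every == 0:
            population.sort(key=fitness)
            for i in range(min(inject_count, len(case_base))):
                population[i] = list(random.choice(case_base))
    return max(population, key=fitness)

# Toy usage: maximize the number of 1s, with an all-ones case in the case base.
best = evolve_with_case_injection(
    population=[[random.randint(0, 1) for _ in range(10)] for _ in range(20)],
    fitness=sum,
    case_base=[[1] * 10],
)
```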

17.
Test sequencing algorithms with unreliable tests
In this paper, we consider imperfect test sequencing problems under a single fault assumption. This is a partially observed Markov decision problem (POMDP), a sequential multistage decision problem wherein a failure source must be identified using the results of imperfect tests at each stage. The optimal solution for this problem can be obtained by applying a continuous-state dynamic programming (DP) recursion. However, the DP recursion is computationally very expensive owing to the continuous nature of the state vector comprising the probabilities of faults. In order to alleviate this computational explosion, we present an efficient implementation of the DP recursion. We also consider various problems with special structure (parallel systems) and derive closed-form solutions/index rules without having to resort to DP. Finally, we present various top-down graph search algorithms for problems with no special structure, including multistep DP, multistep information heuristics, and certainty equivalence algorithms.

18.
Research on search-based deadlock detection algorithms for BOM data
During the construction of BOM data there exists a data deadlock phenomenon that causes infinite loops in MRP computation. On the basis of an analysis of the complexity of BOM data and a description of BOM data requirements, this paper analyzes in depth the pernicious data-deadlock problem arising in BOM data management and proposes BOM data deadlock detection algorithms based on depth-first and breadth-first search. Complexity analysis and application examples show that the algorithms are efficient and feasible.
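The data deadlock described here amounts to a cycle in the item-to-component relation of the BOM; a minimal depth-first cycle-detection sketch over such a relation might look like this (the dictionary-of-lists BOM representation is an assumption).

```python
def find_bom_deadlock(bom):
    """bom maps an item to the list of its components; returns a cycle if one exists."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {item: WHITE for item in bom}
    stack = []

    def dfs(item):
        color[item] = GRAY
        stack.append(item)
        for component in bom.get(item, ()):
            if color.get(component, WHITE) == GRAY:          # back edge -> deadlock
                return stack[stack.index(component):] + [component]
            if color.get(component, WHITE) == WHITE:
                cycle = dfs(component)
                if cycle:
                    return cycle
        color[item] = BLACK
        stack.pop()
        return None

    for item in list(bom):
        if color[item] == WHITE:
            cycle = dfs(item)
            if cycle:
                return cycle
    return None

# Example: A needs B, B needs C, C needs A -> deadlock ['A', 'B', 'C', 'A']
print(find_bom_deadlock({"A": ["B"], "B": ["C"], "C": ["A"]}))
```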

19.
High dimensional pose state space is the main challenge in articulated human pose tracking, making pose analysis computationally expensive or even infeasible. In this paper, we propose a novel generative approach in the framework of evolutionary computation, by which we try to widen the bottleneck with an effective search strategy embedded in the extracted state subspace. Firstly, we use ISOMAP to learn the low-dimensional latent space of pose state, with the aim of both reducing dimensionality and extracting the prior knowledge of human motion simultaneously. Then, we propose a manifold reconstruction method to establish smooth mappings between the latent space and the original space, which enables us to perform pose analysis in the latent space. In the search strategy, we adopt a new evolutionary approach, the clonal selection algorithm (CSA), for pose optimization. We design a CSA based method to estimate human pose from static images, which can be used for initialization of motion tracking. In order to make CSA suitable for motion tracking, we propose a sequential CSA (S-CSA) algorithm by incorporating temporal continuity information into the traditional CSA. In a Bayesian inference view, the sequential CSA algorithm is in essence a multilayer importance sampling based particle filter. Our methods are demonstrated on different motion types and different image sequences. Experimental results show that our CSA based pose estimation method can achieve viewpoint invariant 3D pose reconstruction and the S-CSA based motion tracking method can achieve accurate and stable tracking of 3D human motion.

20.
A maximal frequent itemset mining algorithm based on index arrays and the set enumeration tree
Because of its inherent computational complexity, mining all frequent itemsets from dense datasets is very difficult; one solution is to mine maximal frequent itemsets instead. The set enumeration tree is a data structure commonly used in maximal frequent itemset mining algorithms, and the mining process can be viewed as a search over this tree. To shrink the search space of the set enumeration tree, a new maximal frequent itemset mining algorithm, Index-MaxMiner, is proposed that adopts a hybrid search strategy combining breadth-first and depth-first search. The algorithm first introduces a new data structure, the index array, and gives a method for computing it based on binary bitmap techniques. By attaching a containment index to each frequent item, Index-MaxMiner obtains candidate maximal frequent itemsets with a single breadth-first pass, greatly reducing the number of nodes on the first level of the set enumeration tree. It then performs depth-first search within the candidate maximal frequent itemsets to obtain all maximal frequent itemsets, realizing a skipping search over the set enumeration tree and substantially reducing the search space. Experimental results show that the algorithm effectively improves the efficiency of maximal frequent itemset mining.
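Index-MaxMiner's index-array pruning is not reproduced here; as a baseline, the sketch below makes the set enumeration tree concrete with an Eclat-style depth-first enumeration using tidset intersection, followed by a maximality filter, so the search space being pruned is visible.

```python
def maximal_frequent_itemsets(transactions, minsup):
    """Naive sketch: depth-first set enumeration with tidset intersection,
    then a maximality filter (no Index-MaxMiner pruning)."""
    # Vertical layout: item -> set of transaction ids containing it.
    tidsets = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tidsets.setdefault(item, set()).add(tid)
    items = sorted(i for i, t in tidsets.items() if len(t) >= minsup)

    frequent = []

    def dfs(prefix, prefix_tids, remaining):
        frequent.append(prefix)
        for k, item in enumerate(remaining):
            tids = prefix_tids & tidsets[item]
            if len(tids) >= minsup:                  # extend this enumeration-tree node
                dfs(prefix | {item}, tids, remaining[k + 1:])

    dfs(frozenset(), set(range(len(transactions))), items)

    frequent = [s for s in frequent if s]            # drop the empty prefix
    return [s for s in frequent
            if not any(s < other for other in frequent)]   # keep only maximal sets

data = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(maximal_frequent_itemsets(data, minsup=3))     # {a,b}, {a,c}, {b,c}
```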

