Similar Documents
20 similar documents found (search time: 140 ms).
1.
This is the first of two papers presenting and evaluating the power of a new framework for combinatorial optimization in graphical models, based on AND/OR search spaces. We introduce a new generation of depth-first Branch-and-Bound algorithms that explore the AND/OR search tree using static and dynamic variable orderings. The virtue of the AND/OR representation of the search space is that its size may be far smaller than that of a traditional OR representation, which can translate into significant time savings for search algorithms. The focus of this paper is on linear-space search, which explores the AND/OR search tree; in the second paper we explore memory-intensive AND/OR search algorithms. In conjunction with the AND/OR search space we investigate the power of the mini-bucket heuristics in both static and dynamic setups. We focus on two of the most common optimization problems in graphical models: finding the Most Probable Explanation in Bayesian networks and solving Weighted CSPs. In extensive empirical evaluations we demonstrate that the new AND/OR Branch-and-Bound approach improves considerably over the traditional OR search strategy, and we show how various variable ordering schemes impact the performance of the AND/OR search scheme.
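To make the scheme concrete, here is a minimal sketch (ours, not the authors' implementation) of depth-first AND/OR Branch-and-Bound on a toy weighted CSP. The pseudo tree, the cost function and the trivial zero heuristic are hypothetical stand-ins; a real solver would plug mini-bucket bounds into `heuristic`.

```python
# Minimal depth-first AND/OR Branch-and-Bound sketch for a toy weighted CSP
# (minimization). Assumption: variables form a pseudo tree in which the
# children of a variable root conditionally independent subproblems.
import math

# Hypothetical problem: 4 binary variables, pseudo tree A -> {B, C}, C -> {D}.
CHILDREN = {"A": ["B", "C"], "B": [], "C": ["D"], "D": []}
PARENT = {"B": "A", "C": "A", "D": "C"}
DOMAIN = [0, 1]

def cost(var, val, assignment):
    """Local cost: unary cost `val`, plus 2 for agreeing with the parent."""
    base = val
    parent = PARENT.get(var)
    if parent is not None and assignment.get(parent) == val:
        base += 2
    return base

def heuristic(var):
    """Admissible lower bound on the best cost below `var`; a real solver
    would use mini-bucket heuristics here, 0 is trivially admissible."""
    return 0

def aobb(var, assignment, upper):
    """Min cost of the AND/OR subtree rooted at `var`, pruned against `upper`."""
    best = math.inf
    for val in DOMAIN:                           # OR node: branch on values
        g = cost(var, val, assignment)
        if g + sum(heuristic(c) for c in CHILDREN[var]) >= min(best, upper):
            continue                             # bound cannot beat incumbent
        assignment[var] = val
        total = g
        for child in CHILDREN[var]:              # AND node: independent subproblems
            total += aobb(child, assignment, min(best, upper) - total)
            if total >= min(best, upper):
                break                            # prune remaining children
        del assignment[var]
        best = min(best, total)
    return best

print(aobb("A", {}, math.inf))  # prints 2, the optimal cost of this toy problem
```

The AND decomposition is where the savings come from: once A is assigned, the subproblems below B and C are solved independently instead of being interleaved as in a traditional OR tree.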

2.
AND/OR search spaces accommodate advanced algorithmic schemes for graphical models that exploit the structure of the model. We extend and evaluate the depth-first and best-first AND/OR search algorithms for solving 0-1 Integer Linear Programs (0-1 ILPs) within this framework. We also introduce a class of dynamic variable ordering heuristics for use while exploring an AND/OR search tree for 0-1 ILPs. We demonstrate the effectiveness of these search algorithms on a variety of benchmarks, including real-world combinatorial auctions, random uncapacitated warehouse location problems and MAX-SAT instances.

3.
Artificial Intelligence, 2007, 171(2-3): 73-106
The paper introduces an AND/OR search space perspective for graphical models, including probabilistic networks (directed or undirected) and constraint networks. In contrast to the traditional (OR) search space view, the AND/OR search tree displays some of the independencies present in the graphical model explicitly and may sometimes reduce the search space exponentially. Indeed, most algorithmic advances in search-based constraint processing and probabilistic inference can be viewed as searching an AND/OR search tree or graph. Familiar parameters such as the depth of a spanning tree, treewidth and pathwidth are shown to play a key role in characterizing the effect of AND/OR search graphs versus traditional OR search graphs. We compare memory-intensive AND/OR graph search with inference methods, and place various existing algorithms within the AND/OR search space.
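The potential exponential gap is easiest to see for fully independent variables. The toy count below (our illustration, not from the paper) contrasts the k^n leaves of the traditional OR tree with the linearly many nodes of the AND/OR tree under a depth-1 pseudo tree.

```python
# For n mutually independent variables with domain size k, an OR search tree
# enumerates all k**n assignments, while the AND/OR tree decomposes the
# problem into n one-variable subproblems under a dummy AND root.

def or_tree_leaves(n, k):
    return k ** n                # one leaf per full assignment

def and_or_tree_nodes(n, k):
    return 1 + n + n * k         # dummy root + n OR nodes + n*k value nodes

for n in (5, 10, 20):
    print(n, or_tree_leaves(n, 2), and_or_tree_nodes(n, 2))
# n=20: 1,048,576 OR leaves versus 61 AND/OR nodes
```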

4.
5.
Weighted heuristic search (best-first or depth-first) refers to search with a heuristic function multiplied by a constant w [31]. The paper shows, for the first time, that for optimization queries in graphical models the weighted heuristic best-first and weighted heuristic depth-first branch-and-bound search schemes are competitive energy-minimization anytime optimization algorithms. Weighted heuristic best-first schemes had been investigated for path-finding tasks, but their potential for graphical models was ignored, possibly because of their memory costs and because the alternative, depth-first branch and bound, seemed very appropriate for bounded-depth problems. Weighted heuristic depth-first search had not been studied for graphical models. We report on a significant empirical evaluation demonstrating the potential of both weighted heuristic best-first search and weighted heuristic depth-first branch-and-bound algorithms as anytime approximation schemes (with sub-optimality bounds), and compare them against one of the best depth-first branch-and-bound solvers to date.
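For concreteness, here is a minimal weighted best-first (weighted A*) sketch on a hypothetical graph; it illustrates the f = g + w·h evaluation the paper builds on, with the standard guarantee that the returned cost is at most w times the optimum when h is admissible.

```python
# Weighted best-first search sketch; the graph, heuristic values and weight
# are toy assumptions, not from the paper.
import heapq

GRAPH = {  # node -> [(neighbor, edge_cost), ...]
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 1)],
    "G": [],
}
H = {"S": 4, "A": 3, "B": 1, "G": 0}  # admissible heuristic (toy values)

def weighted_best_first(start, goal, w):
    """Expand by f = g + w*h; inflating h trades optimality for speed."""
    frontier = [(w * H[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                 # cost is within a factor w of optimal
        for nbr, c in GRAPH[node]:
            g2 = g + c
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + w * H[nbr], g2, nbr, path + [nbr]))
    return float("inf"), []

print(weighted_best_first("S", "G", w=1.0))  # (4, ['S', 'A', 'B', 'G']): optimal
print(weighted_best_first("S", "G", w=2.0))  # (5, ['S', 'B', 'G']): w-suboptimal
```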

6.
The paper focuses on developing effective importance sampling algorithms for mixed probabilistic and deterministic graphical models. The use of importance sampling in such graphical models is problematic because it generates many useless zero-weight samples that are rejected, yielding an inefficient sampling process. To address this rejection problem, we propose the SampleSearch scheme, which augments sampling with systematic constraint-based backtracking search. We characterize the bias introduced by the combination of search with sampling, and derive a weighting scheme that yields an unbiased estimate of the desired statistics (e.g., the probability of evidence). When computing the weights exactly is too complex, we propose an approximation with a weaker guarantee of asymptotic unbiasedness. We present results of an extensive empirical evaluation demonstrating that SampleSearch outperforms other schemes in the presence of a significant amount of determinism.
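The rejection problem and the corrective weighting can be seen on a two-variable toy model. The sketch below is our illustration, not the paper's SampleSearch code: the naive estimator wastes half of its samples on zero weights, while the search-based proposal never rejects and compensates through importance weights.

```python
# Estimate P(X1 != X2) = 0.5 for X1, X2 uniform over {0,1}, where the
# constraint X1 != X2 plays the role of determinism in the model.
import random

def naive_is(n):
    """Plain importance sampling from the uniform proposal: samples that
    violate the constraint get weight zero and are effectively rejected."""
    total = 0.0
    for _ in range(n):
        x1, x2 = random.randint(0, 1), random.randint(0, 1)
        total += 1.0 if x1 != x2 else 0.0       # weight = target / proposal
    return total / n

def search_based_is(n):
    """Sample X2 only from constraint-consistent values (one backtrack-free
    step) and reweight so the estimate stays unbiased."""
    total = 0.0
    for _ in range(n):
        x1 = random.randint(0, 1)               # q(x1) = 1/2
        consistent = [v for v in (0, 1) if v != x1]
        x2 = random.choice(consistent)          # q(x2|x1) = 1/len(consistent)
        target = 0.25                           # P(x1) * P(x2), consistent pair
        proposal = 0.5 * (1.0 / len(consistent))
        total += target / proposal              # importance weight
    return total / n

print(naive_is(10000), search_based_is(10000))  # both converge to 0.5
```

In general the probability of the backtrack-free proposal is expensive to compute exactly, which is precisely where the paper's approximate, asymptotically unbiased weights come in.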

7.
Cohen et al. [5] recently initiated the theoretical study of connection caching in the world-wide web. They extensively studied uniform connection caching, where the establishment cost is uniform for all connections [5], [6]. They showed that ordinary paging algorithms can be used to derive algorithms for uniform connection caching, and analyzed various algorithms such as Belady's rule, LRU and Marking strategies. In particular, in [5] Cohen et al. showed that LRU yields a (2k-1)-competitive algorithm, where k is the size of the largest cache in the network. In [6] they investigated Marking algorithms with different types of communication among nodes and presented deterministic k-competitive algorithms. In this paper we study generalized connection caching, also introduced in [5], where connections can incur varying establishment costs. This model is reasonable because the cost of establishing a connection depends, for instance, on the distance of the nodes to be connected and on the congestion in the network. Algorithms for ordinary weighted caching can be used to derive algorithms for generalized connection caching. We present tight or nearly tight analyses of the performance achieved by the currently known weighted caching algorithms when applied to generalized connection caching. In particular we give online algorithms that achieve an optimal competitive ratio of k. Our deterministic algorithm uses extra communication while maintaining open connections. We develop a generalized algorithm that trades communication for performance and achieves a competitive ratio of (1+ε)k, for any 0 < ε ≤ 1, using at most … bits of communication on each open link. Additionally we consider two extensions of generalized connection caching where (1) connections have time-out values, or (2) the establishment cost of connections is asymmetric. We show that the performance ratio of our algorithms can be preserved in scenario (1). In the case of (2) we derive nearly tight upper and lower bounds on the best possible competitiveness.
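A minimal sketch of the LRU policy in the uniform-cost model discussed above (our illustration; connection names and the cost accounting are hypothetical):

```python
# Each node keeps at most k connections open; a request for a closed
# connection evicts the least recently used open one. In the uniform model
# this policy is (2k-1)-competitive per Cohen et al. [5].
from collections import OrderedDict

class LRUConnectionCache:
    def __init__(self, k):
        self.k = k
        self.open = OrderedDict()              # connection ids, in LRU order

    def request(self, conn):
        """Return the cost incurred: 0 on a hit, 1 to (re)establish."""
        if conn in self.open:
            self.open.move_to_end(conn)        # refresh recency
            return 0
        if len(self.open) >= self.k:
            self.open.popitem(last=False)      # close the LRU connection
        self.open[conn] = None
        return 1                               # uniform establishment cost

cache = LRUConnectionCache(k=2)
print(sum(cache.request(c) for c in ["ab", "ac", "ab", "ad", "ac"]))
# 4: "ab" is hit once; "ac" must be re-established after eviction
```

Generalized connection caching replaces the unit cost in `request` with a per-connection establishment cost, which is what makes weighted caching algorithms the right tool.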

8.
For an ISP (Internet Service Provider) that has deployed P2P caches in more than one AS (autonomous system), cooperative caching, which makes the caches cooperate with each other, can save more of the cost of carrying P2P traffic than independent caching. However, existing cooperative caching algorithms use only object popularity as the measure for deciding which objects should be cached; the cost on intra-ISP links, which has a great impact on the benefits of cooperative caching, is not considered. In this paper, we first model the cooperative caching problem as an NP-complete problem, based on our analysis of the cost of serving requests that considers both object popularity and the cost on intra-ISP links. We then propose a novel cooperative caching algorithm named cLGV (Cooperative, Lowest Global Value). The cLGV algorithm uses a new concept, global value, to estimate the benefit of caching or replacing an object in the cooperative caching system; the global value of each object is evaluated according to not only the object's popularity in each AS but also the cost on intra-ISP links among ASes. Results of simulations driven by both synthetic and real traces indicate that our cLGV algorithm saves at least 23% more of the cost of carrying P2P traffic than existing cooperative caching algorithms.
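The paper's exact formula is not reproduced here; the following sketch is only our guess at the flavor of a global value that combines per-AS popularity with intra-ISP link costs. All rates and costs are hypothetical.

```python
# popularity[a][o]: request rate for object o observed in AS a (hypothetical)
popularity = {"AS1": {"x": 50, "y": 5}, "AS2": {"x": 10, "y": 40}}
# link_cost[a][b]: cost of moving one object from AS b's cache to AS a
link_cost = {"AS1": {"AS1": 0, "AS2": 2}, "AS2": {"AS1": 2, "AS2": 0}}
EXTERNAL_COST = 5  # cost of fetching the object from outside the ISP

def global_value(obj, placed_in):
    """ISP-wide cost saved if `obj` is cached in AS `placed_in`: every AS
    serves it over the cheaper of the intra-ISP link or the external link."""
    saved = 0
    for a, rates in popularity.items():
        serve = min(link_cost[a][placed_in], EXTERNAL_COST)
        saved += rates.get(obj, 0) * (EXTERNAL_COST - serve)
    return saved

for obj in ("x", "y"):
    print(obj, {a: global_value(obj, a) for a in popularity})
# a cLGV-style replacement would evict the object with the lowest global value
```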

9.
In this paper, the problem of caching continuous media data in a (main) memory and disk caching system is addressed. Caching schemes can significantly reduce the load on the network as well as on the servers, and retrieving documents from the cache gives short response times. In interval-level caching algorithms, an interval of data between two adjacent streams is the basic caching entity. In this paper, we design a novel algorithm, referred to as the variable bit rate caching (VBRC) algorithm, which belongs to the family of interval-level caching algorithms. The proposed VBRC algorithm can be used in the system for memory caching or disk caching. VBRC can handle variable retrieval bandwidth as well as constant retrieval bandwidth. In designing the VBRC algorithm, we propose strategies for reducing the number of switching operations, which can cause discontinuity in retrieving data. We also propose a just-in-time scheme for resource allocation in VBRC and show that its caching performance is significantly improved in comparison with the reservation scheme adopted in the resource-based caching (RBC) algorithm. Our simulation study compares the recent and most popular generalized interval caching, RBC, and VBRC on several influencing factors, such as cache space size, cache I/O bandwidth, request arrival rate, and the percentage of requests for large documents, with respect to the byte hit ratio and the number of switching operations. The simulation results confirm our analysis.
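The interval-level idea is compact enough to state in code. The sketch below illustrates the general principle shared by GIC, RBC, and VBRC, not the VBRC algorithm itself; the stream data are hypothetical.

```python
# The data between two adjacent streams of the same document forms an
# interval; caching the smallest intervals serves the most followers per
# byte of cache space.

# (document, start_time_sec, bytes_per_sec) for currently active streams
streams = [("mov1", 0, 100), ("mov1", 30, 100), ("mov2", 10, 200), ("mov2", 15, 200)]

def candidate_intervals(streams):
    """Yield (interval_size_bytes, doc, leader_start, follower_start)."""
    by_doc = {}
    for doc, start, rate in streams:
        by_doc.setdefault(doc, []).append((start, rate))
    for doc, ss in by_doc.items():
        ss.sort()
        for (s1, rate), (s2, _) in zip(ss, ss[1:]):
            yield ((s2 - s1) * rate, doc, s1, s2)   # follower lag * bit rate

def pick_intervals(streams, cache_bytes):
    chosen, used = [], 0
    for size, doc, s1, s2 in sorted(candidate_intervals(streams)):
        if used + size <= cache_bytes:              # greedily cache smallest first
            chosen.append((doc, s1, s2))
            used += size
    return chosen

print(pick_intervals(streams, cache_bytes=2000))
# [('mov2', 10, 15)]: mov2's 1000-byte interval fits, mov1's 3000-byte one does not
```

VBRC's contribution sits on top of this picture: handling variable bit rates and allocating cache resources just in time rather than reserving them up front.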

10.
Knowledge, 2002, 15(1-2): 103-110
Multilevel flow models (MFM) are graphical models of the goals and functions of technical systems. MFM was invented by Morten Lind at the Technical University of Denmark, and several new algorithms and implementations have been contributed by the group headed by Jan Eric Larsson at Lund Institute of Technology. MFM has several properties that make for a relatively easy knowledge engineering task compared with the mathematical models used in classical control theory and the rule bases used in standard expert systems. In addition, MFM allows for diagnostic algorithms with excellent real-time properties. This paper gives an overview of existing MFM algorithms and of the MFM projects that have been completed or are currently in progress.

11.
Test sequencing is a binary identification problem wherein one needs to develop a minimal expected-cost testing procedure to determine which one of a finite number of possible failure sources, if any, is present. The problem can be solved optimally using dynamic programming or AND/OR graph search methods (AO*, CF, and HS). However, for large systems the associated computation is substantial, due to the rapidly increasing number of OR nodes (denoting ambiguity states) and AND nodes (denoting tests) in the search graph. To overcome this computational explosion, one-step and multistep lookahead heuristic algorithms have been developed for the test sequencing problem. In this paper, we propose to apply rollout strategies, which can be combined with the one-step or multistep lookahead heuristic algorithms, in a computationally more efficient manner than the optimal strategies, to obtain solutions superior to those of the lookahead heuristics alone. The rollout strategies are illustrated and tested on a range of real-world systems. We show computational results suggesting that the information-heuristic-based rollout policies are significantly better than other rollout policies based on Huffman coding and entropy.
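A minimal sketch (our illustration) of the one-step information heuristic: pick the test whose outcome most reduces the entropy of the ambiguity set. A rollout policy would then score each candidate first test by simulating the completion of the whole sequence with this heuristic. The diagnostic matrix and priors are hypothetical.

```python
import math

# test -> set of failure sources it detects (hypothetical diagnostic matrix)
TESTS = {"t1": {"f1", "f2"}, "t2": {"f2", "f3"}, "t3": {"f3"}}
PRIOR = {"f1": 0.5, "f2": 0.3, "f3": 0.2}  # failure-source probabilities

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p.values() if pi > 0)

def info_gain(test, p):
    """Expected entropy reduction of the ambiguity set after running `test`."""
    detects = TESTS[test]
    p_fail = sum(pi for f, pi in p.items() if f in detects)
    gain = entropy(p)
    for sources, mass in ((detects, p_fail), (set(p) - detects, 1 - p_fail)):
        if mass > 0:
            gain -= mass * entropy({f: p[f] / mass for f in sources})
    return gain

best = max(TESTS, key=lambda t: info_gain(t, PRIOR))
print(best, {t: round(info_gain(t, PRIOR), 3) for t in TESTS})
# t2 wins: it splits the probability mass most evenly (0.5 / 0.5)
```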

12.
13.
We propose a new model for exact learning of acyclic circuits using experiments in which chosen values may be assigned to an arbitrary subset of wires internal to the circuit, but only the value of the circuit's single output wire may be observed. We give polynomial-time algorithms to learn (1) arbitrary circuits with logarithmic depth and constant fan-in and (2) Boolean circuits of constant depth and unbounded fan-in over AND, OR, and NOT gates. Thus, both AC⁰ and NC¹ circuits are learnable in polynomial time in this model. Negative results show that some restrictions on depth, fan-in and gate types are necessary: exponentially many experiments are required to learn AND/OR circuits of unbounded depth and fan-in; it is NP-hard to learn AND/OR circuits of unbounded depth and fan-in 2; and it is NP-hard to learn circuits of constant depth and unbounded fan-in over AND, OR, and threshold gates, even when the target circuit is known to contain at most one threshold gate and that threshold gate has threshold 2. We also consider the effect of adding an oracle for behavioral equivalence. In this case there are polynomial-time algorithms to learn arbitrary circuits of constant fan-in and unbounded depth and to learn Boolean circuits with arbitrary fan-in and unbounded depth over AND, OR, and NOT gates. A corollary is that these two classes are PAC-learnable if experiments are available. Finally, we consider an extension of the model called the synchronous model. We show that an even more general class of circuits is learnable in this model. In particular, we are able to learn circuits with cycles.

14.
Caching is a key technique for effectively reducing response time and system load, and is an important research area in search engine architecture. Based on an analysis of roughly 15 million user query log entries collected from the Sogou search engine over about one month, we study query result caching from the perspectives of query locality, caching policy, cache capacity, and workload periodicity. The analysis shows that combining a hybrid caching policy with increased cache capacity can effectively improve search engine performance.
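A hedged sketch of the hybrid policy the study favors: a static segment pinned with historically popular queries plus a dynamic LRU segment that absorbs bursty queries. Segment sizes and the example queries are hypothetical.

```python
from collections import OrderedDict

class HybridQueryCache:
    """Static segment (pinned, refreshed offline from logs) + dynamic LRU."""
    def __init__(self, popular_queries, static_size, lru_size):
        self.static = {q: f"results({q})" for q in popular_queries[:static_size]}
        self.lru = OrderedDict()
        self.lru_size = lru_size

    def get(self, query, fetch):
        if query in self.static:            # pinned entry, never evicted
            return self.static[query]
        if query in self.lru:
            self.lru.move_to_end(query)     # refresh recency
            return self.lru[query]
        result = fetch(query)               # miss: evaluate the query
        self.lru[query] = result
        if len(self.lru) > self.lru_size:
            self.lru.popitem(last=False)    # evict least recently used
        return result

cache = HybridQueryCache(["weather", "news"], static_size=2, lru_size=2)
for q in ["weather", "maps", "news", "maps", "mail", "jobs", "maps"]:
    cache.get(q, fetch=lambda q: f"results({q})")
print(list(cache.lru))  # ['jobs', 'maps']: the dynamic segment after the run
```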

15.
Inversion of high-frequency surface wave dispersion curves is challenging for most local-search methods due to its high nonlinearity and multimodality. In this paper, we investigate how to fully exploit the potential of pattern search algorithms and further enhance their performance for surface wave analysis. We first investigate the effects of different inversion strategies, of the initial and final mesh sizes, of the expansion and contraction factors, and of the inclusion of noise in surface wave data on the performance of the approach, using three synthetic earth models. Then, a comparative analysis with genetic algorithms is made to further highlight the advantages of the proposed inverse procedure. Finally, the insights gained from this analysis are verified on a real-world example from Henan, China. Results from both synthetic and field data demonstrate that: (a) the generalized pattern search (GPS) algorithm with the maximal positive basis set of 2N vectors works better than the GPS algorithm with the minimal positive basis set of N+1 vectors; (b) if a suitable initial mesh size is found through some experimentation, then setting the expansion factor Λ=1 (i.e., disallowing expansions) and the contraction factor θ=1/2 can greatly enhance the performance of pattern search algorithms, particularly since the mesh size should go to zero as the algorithm converges; and (c) pattern search algorithms possess stronger immunity to noise and perform well not only in terms of accuracy but also in terms of computational effort, especially when compared with the application of genetic algorithms to Rayleigh wave inversion.
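A minimal GPS sketch (ours, with a toy objective) makes the tuned parameters concrete: the poll uses the maximal positive basis (plus and minus each coordinate direction, 2N points), and with expansion factor 1 and contraction factor 1/2 the mesh only shrinks, matching recommendation (b) above.

```python
def gps(f, x, mesh=1.0, expand=1.0, contract=0.5, final_mesh=1e-6):
    """Generalized pattern search: poll +/- mesh along each coordinate,
    move on improvement, otherwise contract the mesh until `final_mesh`."""
    fx = f(x)
    while mesh > final_mesh:
        improved = False
        for i in range(len(x)):            # maximal positive basis: 2N poll points
            for step in (mesh, -mesh):
                y = list(x)
                y[i] += step
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        mesh = mesh * expand if improved else mesh * contract
    return x, fx

print(gps(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, x=[0.0, 0.0]))
# converges to (1, -2), the minimizer of the toy objective
```

A real dispersion-curve inversion would replace the toy objective with the misfit between observed and modeled phase velocities; the mesh mechanics are unchanged.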

16.
In meta-searchers accessing distributed Web-based information repositories, performance is a major issue. Efficient query processing requires an appropriate caching mechanism. Unfortunately, standard page-based as well as tuple-based caching mechanisms designed for conventional databases are not efficient on the Web, where keyword-based querying is often the only way to retrieve data. In this work, we study the problem of semantic caching of Web queries and develop a caching mechanism for conjunctive Web queries based on signature files. Our algorithms handle both semantic containment and intersection relations between a query and the corresponding cache items. We also develop a cache replacement strategy for situations in which cached items differ in size and in their contribution to partial query answers. We report results of experiments and show how the caching mechanism is realized in the Knowledge Broker system.

17.
This paper advances the design of a unified model for the representation of search in first-order clausal theorem proving by extending the marked search-graph model, already introduced for ordering-based strategies (those that use ordered resolution, paramodulation/superposition, simplification, and subsumption), to tableau-based subgoal-reduction strategies (e.g., model-elimination tableaux). The resulting analytic marked search-graphs subsume AND/OR graphs and allow us to represent dynamic components, such as backtracking and the instantiation of rigid variables, that have long been an obstacle to modelling subgoal-reduction strategies properly. The second part of the paper develops, for analytic marked search-graphs, the bounded-search-spaces approach to the analysis of infinite search spaces. We analyze how tableau inferences and backtracking affect the bounded search spaces during a derivation. We then apply this analysis to measure the effects of regularity and of lemmatization by folding-up on search complexity, by comparing the bounded search spaces of strategies with and without these features. We conclude with a discussion comparing the marked search-graphs for tableaux, linear resolution, and ordering-based strategies, showing how this search model applies across these classes of strategies.

18.
Intelligence analysis is a domain characterized by a torrent of streaming data, of which only a very small portion contains useful knowledge or actionable intelligence. Intelligence analysts have to sift through the compiled data and weave through a complex web of convoluted connections in an attempt to illuminate information requirements (IRs) and maintain situational awareness. Automated methodologies have eased the manual burden of this process to some extent. Data are naturally modeled in graphical form, representing the known people, places and events and the relationships between them. Graph matching algorithms, in which an information requirement is formulated as a template graph or situation of interest to be found in the observed data graph, have been successfully employed in intelligence analysis. Absent from these past contributions is the recognition that partial information requirements, such as indicators and warnings, are not exclusive to a specific IR, and that an understanding of the characteristics of the underlying data can lead to significant performance benefits. The knowledge of overlapping template sections motivates precedence tree guided search and AND/OR templates: through the recognition of overlapping sections, a single AND/OR template can be created to answer many information requirements. This paper presents a novel algorithm for the intelligent traversal of an AND/OR template, providing increased algorithmic efficiency over the execution of multiple sequential graph matching instances, with computational results illustrating the effectiveness of the developed methods. The results indicate a significant improvement in runtime (a speedup of over 5× in some cases) while maintaining good solution quality (within 2% of multiple AND-path graph matching executions) in AND/OR and precedence tree guided graph matching.

19.
In this paper we formulate a hierarchical configurable deformable template (HCDT) to model articulated visual objects, such as horses and baseball players, for tasks such as parsing, segmentation, and pose estimation. HCDTs represent an object by an AND/OR graph where the OR nodes act as switches, which enables the graph topology to vary adaptively. This hierarchical representation is compositional, and the node variables represent positions and properties of subparts of the object. The graph and the node variables are required to obey the summarization principle, which enables an efficient compositional inference algorithm to rapidly estimate the state of the HCDT. We specify the structure of the AND/OR graph of the HCDT by hand and learn the model parameters discriminatively by extending max-margin learning to AND/OR graphs. We illustrate the three main aspects of HCDTs (representation, inference, and learning) on the tasks of segmenting, parsing, and pose (configuration) estimation for horses and humans. We demonstrate that the inference algorithm is fast and that max-margin learning is effective. We show that HCDTs give state-of-the-art results for segmentation and pose estimation when compared to other methods on benchmarked datasets.

20.
In contrast to the problem of finding all defective elements in group testing, we consider the problem of finding one defective in a set D of defective elements of cardinality d. We consider adaptive search algorithms only. A similar problem for the classical and threshold models was solved in [1]. In the present paper we consider the additive testing model and obtain an optimal algorithm for the adaptive search of one defective element in this model.
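A simple halving procedure (our illustration, not the paper's optimal algorithm) shows how the additive oracle, which reports the number of defectives in the queried subset, supports a binary-search-style descent to a single defective:

```python
def find_one_defective(items, count_defectives):
    """`count_defectives(subset)` is the additive-model test; `items` is
    assumed to contain at least one defective element."""
    while len(items) > 1:
        half = items[: len(items) // 2]
        if count_defectives(half) >= 1:
            items = half                          # a defective is in this half
        else:
            items = items[len(items) // 2 :]      # all defectives are in the rest
    return items[0]

defective_set = {3, 11, 17}                       # hypothetical hidden defectives
oracle = lambda subset: len(defective_set & set(subset))
print(find_one_defective(list(range(32)), oracle))  # prints 3 here
```

The paper derives the exact optimal procedure for this model; the halving baseline above only shows the mechanics of the additive oracle.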
