Similar Documents
20 similar documents found (search time: 31 ms)
1.
Search engine users often encounter the difficulty of phrasing the precise query that could lead to satisfactory search results. Query recommendation is considered an effective assistant in enhancing keyword-based queries in search engines and Web search software. In this paper, we present a Query-URL Bipartite based query reCommendation approach, called QUBiC. It utilizes the connectivity of a query-URL bipartite graph to recommend related queries and can significantly improve the accuracy and effectiveness of personalized query recommendation systems compared with the conventional pairwise similarity based approach. The main contribution of the QUBiC approach is its three-phase framework for personalized query recommendation. The first phase is the preparation of queries and their search results returned by a search engine, which generates a historical query-URL bipartite collection. The second phase is the discovery of similar queries by extracting a query affinity graph from the bipartite graph, instead of operating on the original bipartite graph directly using a biclique-based approach or graph clustering. The query affinity graph consists of only queries as its vertices, and its edges are weighted according to a query-URL vector based similarity (dissimilarity) measure. The third phase is the ranking of similar queries. We devise a novel ranking mechanism for ordering the related queries based on the merging distances of hierarchical agglomerative clustering (HAC). By utilizing the query affinity graph and the HAC-based ranking, we are able to capture the propagation of similarity from query to query, inducing an implicit topical relatedness between queries. Furthermore, the flexibility of the HAC strategy makes it possible for users to interactively participate in the query recommendation process, and helps to bridge the gap between the determinacy of actual similarity values and the indeterminacy of users’ information needs, allowing the lists of related queries to change from user to user and query to query, thus adaptively recommending related queries on demand. Our experimental evaluation results show that the QUBiC approach is highly efficient and more effective than conventional query recommendation systems, yielding an improvement of up to about 13.3% in precision.
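As a concrete, heavily simplified illustration of this three-phase pipeline, the following Python sketch builds a query affinity graph from a toy query-URL bipartite collection and ranks related queries by HAC merging distances. It is not the QUBiC implementation: the click data is invented, and plain cosine similarity over URL incidence vectors stands in for the paper's query-URL vector based measure.

```python
# Hedged sketch (not QUBiC itself): affinity graph from a toy query-URL
# bipartite collection, then HAC merge distances used to rank related queries.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, cophenet

# Toy historical query-URL bipartite collection: query -> clicked URLs.
click_log = {
    "jaguar speed":    {"wiki/Jaguar", "cars.com/jaguar"},
    "jaguar car":      {"cars.com/jaguar", "jaguar.com"},
    "jaguar habitat":  {"wiki/Jaguar", "nature.org/jaguar"},
    "python tutorial": {"python.org", "realpython.com"},
}

queries = sorted(click_log)
urls = sorted({u for us in click_log.values() for u in us})
# Query-URL incidence vectors (rows: queries, columns: URLs).
X = np.array([[1.0 if u in click_log[q] else 0.0 for u in urls] for q in queries])

# Pairwise cosine distances play the role of edge weights in the affinity graph.
dist = pdist(X, metric="cosine")

# Hierarchical agglomerative clustering; cophenetic (merge) distances give the ranking.
Z = linkage(dist, method="average")
merge_dist = squareform(cophenet(Z))

def recommend(query, k=3):
    i = queries.index(query)
    order = np.argsort(merge_dist[i])
    return [queries[j] for j in order if j != i][:k]

print(recommend("jaguar speed"))
```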

2.
In a previous paper, we laid out the vision of a novel graph query processing paradigm where, instead of processing a visual query graph after its construction, the system interleaves visual query formulation and processing by exploiting the latency offered by the GUI to filter irrelevant matches and prefetch partial query results [8]. Our recent attempts at implementing this vision [8, 9] show significant improvement in system response time (SRT) for subgraph queries. However, these efforts are designed specifically for graph databases containing a large collection of small or medium-sized graphs. In this paper, we propose a novel algorithm called QUBLE (QUery Blender for Large nEtworks) to realize this visual subgraph querying paradigm on very large networks (e.g., protein interaction networks, social networks). First, it decomposes a large network into a set of graphlets and supergraphlets using a minimum cut-based graph partitioning technique. Next, it mines approximate frequent fragments and small infrequent fragments (SIFs) from them and identifies their occurrences in these graphlets and supergraphlets. Then, the indexing framework of [9] is enhanced so that the mined fragments can be exploited to index graphlets for efficient blending of visual subgraph query formulation and query processing. Extensive experiments on large networks demonstrate the effectiveness of QUBLE.
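The decomposition step can be pictured with a small, hedged sketch: the recursive bisection below uses the Kernighan-Lin heuristic from networkx purely as a stand-in for QUBLE's minimum cut-based partitioner, and the size threshold and toy network are invented for illustration.

```python
# Hedged sketch: split a large network into small "graphlets" of bounded size.
# Kernighan-Lin bisection is only an illustrative stand-in for the minimum
# cut-based partitioner used by QUBLE.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def decompose(graph, max_size=100):
    """Return node sets, each inducing a graphlet with at most max_size nodes."""
    if graph.number_of_nodes() <= max_size:
        return [set(graph.nodes)]
    left, right = kernighan_lin_bisection(graph)
    graphlets = []
    for part in (left, right):
        graphlets.extend(decompose(graph.subgraph(part).copy(), max_size))
    return graphlets

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(1000, 3, seed=42)      # toy "large" network
    parts = decompose(g, max_size=100)
    print(len(parts), "graphlets; largest has", max(len(p) for p in parts), "nodes")
```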

3.
In this paper, we present ParaMiner, a generic and parallel algorithm for closed pattern mining. ParaMiner is built on the principles of pattern enumeration in strongly accessible set systems. Its efficiency is due to a novel dataset reduction technique (which we call EL-reduction), combined with a novel technique for performing dataset reduction during parallel execution on a multi-core architecture. We illustrate ParaMiner's genericity by using this algorithm to solve three different pattern mining problems: the frequent itemset mining problem, the problem of mining frequent connected relational graphs, and the problem of mining gradual itemsets. In this paper, we prove the soundness and the completeness of ParaMiner. Furthermore, our experiments show that despite being a generic algorithm, ParaMiner can compete with specialized state-of-the-art algorithms designed for the pattern mining problems mentioned above. Moreover, for the particular problem of gradual itemset mining, ParaMiner outperforms the state-of-the-art algorithm by two orders of magnitude.
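To make the notion of a closed pattern concrete, here is a brute-force sketch over a toy transaction database. It only illustrates the definition ParaMiner enumerates; it does not implement ParaMiner, EL-reduction, or the parallel execution strategy.

```python
# Hedged sketch: brute-force enumeration of closed frequent itemsets on a toy
# transaction database (illustrates the definition only, not ParaMiner).
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b", "c"}, {"a", "b"}, {"c"}]
min_support = 2
items = sorted(set().union(*transactions))

def support(itemset):
    return sum(1 for t in transactions if itemset <= t)

# All frequent (non-empty) itemsets.
frequent = [frozenset(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)
            if support(frozenset(c)) >= min_support]

# An itemset is closed if no proper superset has the same support.
closed = [s for s in frequent
          if not any(s < t and support(t) == support(s) for t in frequent)]

for s in sorted(closed, key=sorted):
    print(sorted(s), "support =", support(s))
```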

4.
A caller must satisfy the callee's precondition, that is, reach a state in which the callee may be called. Preconditions describe the state that needs to be reached, but not how to reach it. We combine static analysis with model checking to mine Fair Computation Tree Logic (CTL$_F$) formulas that describe the operations a parameter goes through: "In parseProperties(String xml), the parameter xml normally stems from getProperties()." Such operational preconditions can be learned from program code, and the code can be checked for their violations. Applied to AspectJ, our Tikanga prototype found 169 violations of operational preconditions, uncovering 7 unique defects and 27 unique code smells, with 52% true positives among the 25% top-ranked violations.

5.
We study the problem of answering $k$-hop reachability queries in a directed graph, i.e., whether there exists a directed path of length $k$ from a source query vertex to a target query vertex in the input graph. The $k$-hop reachability problem generalizes the classic reachability problem (which corresponds to $k=\infty$). Existing indexes for processing classic reachability queries, as well as those for processing shortest path distance queries, are either not applicable or not efficient for processing $k$-hop reachability queries. We propose an efficient index for processing $k$-hop reachability queries. Our experimental results on a wide range of real datasets show that our method is efficient and scalable in terms of both index construction and query processing.
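For reference, such a query can always be answered online, without any index, by a breadth-first search truncated at depth $k$; the sketch below does exactly that, reading "a path of length $k$" as "at most $k$ hops". An index such as the one proposed in the paper aims to avoid this per-query traversal on large graphs.

```python
# Hedged baseline (not the paper's index): answer a k-hop reachability query
# online with a breadth-first search truncated at depth k.
from collections import deque

def k_hop_reachable(adj, source, target, k):
    """True if target is reachable from source within at most k hops."""
    if source == target:
        return True
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:          # do not expand beyond k hops
            continue
        for nxt in adj.get(node, ()):
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return False

adj = {"a": ["b"], "b": ["c"], "c": ["d"]}           # toy directed graph
print(k_hop_reachable(adj, "a", "d", 2))             # False: needs 3 hops
print(k_hop_reachable(adj, "a", "d", 3))             # True
```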

6.
In this paper, we introduce a new problem termed query reverse engineering (QRE). Given a database \(D\) and a result table \(T\), which is the output of some known or unknown query \(Q\) on \(D\), the goal of QRE is to reverse-engineer a query \(Q'\) such that the output of \(Q'\) on \(D\) (denoted by \(Q'(D)\)) is equal to \(T\) (i.e., \(Q(D)\)). The QRE problem has useful applications in database usability, data analysis, and data security. In this work, we propose a data-driven approach, TALOS (Tree-based classifier with At Least One Semantics), which is based on a novel dynamic data classification formulation, and we extend the approach to efficiently support the three key dimensions of the QRE problem: whether the input query is known or unknown, supporting different query fragments, and supporting multiple database versions.
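The data-driven classification idea can be sketched in a few lines: label every row of \(D\) by whether it appears in \(T\), fit a decision-tree classifier, and treat the learned tree as the reverse-engineered selection predicate. The sketch assumes scikit-learn and an invented toy instance; it does not reproduce TALOS or its at-least-one semantics.

```python
# Hedged sketch of the data-driven QRE idea (not the TALOS algorithm): the
# fitted decision tree acts as the selection predicate Q' with Q'(D) == T.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy database D: (age, salary) rows, and result table T of a hidden query
# (here the hidden query is "salary >= 60000").
D = [(25, 48000), (31, 62000), (40, 75000), (29, 52000), (55, 90000)]
T = {(31, 62000), (40, 75000), (55, 90000)}

X = [list(row) for row in D]
y = [1 if row in T else 0 for row in D]
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# Check that the recovered predicate reproduces T on D.
q_prime_of_D = {row for row, label in zip(D, clf.predict(X)) if label == 1}
assert q_prime_of_D == T

# Human-readable rendering of the recovered selection condition.
print(export_text(clf, feature_names=["age", "salary"]))
```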

7.
The Mizar Mathematical Library is one of the largest libraries of formalized and mechanically verified mathematics. Its language is highly optimized for authoring by humans. As in natural languages, the meaning of an expression is influenced by its (mathematical) context in a way that is natural to humans, but harder to specify for machine manipulation. Thus its custom file format can make access to the library difficult. Indeed, the Mizar system itself is currently the only system that can fully operate on the Mizar library. This paper presents a translation of the Mizar library into the OMDoc format (Open Mathematical Documents), an XML-based representation format for mathematical knowledge. OMDoc is geared towards machine support and interoperability by making formula structure and context dependencies explicit. Thus, the Mizar library becomes accessible for a wide range of OMDoc-based tools for formal mathematics and knowledge management. We exemplify interoperability by indexing the translated library in the MathWebSearch engine, which provides an “applicable theorem search” service (almost) out of the box.

8.
We introduce two hierarchies of unknown ordinal height. The hierarchies are induced by natural fragments of a calculus based on finite types and Gödel’s T, and all the classes in the hierarchies are uniformly defined without referring to explicit bounds. Deterministic complexity classes like LOGSPACE, P, PSPACE, LINSPACE and EXP are captured by the hierarchies. Typical subrecursive classes are also captured, e.g. the small relational Grzegorczyk classes $\mathcal{E}^{0}_{*}$, $\mathcal{E}^{1}_{*}$ and $\mathcal{E}^{2}_{*}$.

9.
Graphs appear in numerous applications including cyber security, the Internet, social networks, protein networks, recommendation systems, citation networks, and many more. Graphs with millions or even billions of nodes and edges are commonplace. How can such large graphs be stored efficiently? What are the core operations and queries on those graphs? How can the graph queries be answered quickly? We propose Gbase, an efficient analysis platform for large graphs. The key novelties lie in (1) our storage and compression scheme for a parallel, distributed setting and (2) the carefully chosen graph operations and their efficient implementations. We designed and implemented an instance of Gbase using MapReduce/Hadoop. Gbase provides a parallel indexing mechanism for graph operations that both saves storage space and accelerates query responses. We run numerous experiments on real and synthetic graphs, spanning billions of nodes and edges, and we show that our proposed Gbase is indeed fast, scalable, and nimble, with significant savings in space and time.
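Gbase's storage layer and MapReduce implementation are not reproduced here, but the flavor of the targeted graph operations can be shown on a single machine: a k-step out-neighborhood query expressed as repeated sparse matrix-vector products over the adjacency matrix (scipy assumed, toy graph invented).

```python
# Hedged, single-machine illustration (not Gbase itself): a k-step neighborhood
# query as repeated sparse matrix-vector products over the adjacency matrix.
import numpy as np
from scipy.sparse import csr_matrix

edges = [(0, 1), (1, 2), (2, 3), (1, 4)]             # toy directed graph
n = 5
rows, cols = zip(*edges)
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

def k_step_neighbors(A, source, k):
    """Vertices reachable from source within k steps (excluding source itself)."""
    x = np.zeros(A.shape[0])
    x[source] = 1.0
    reached = x.copy()
    for _ in range(k):
        x = A.T @ x                                   # one hop along out-edges
        reached = np.maximum(reached, x)
    return {int(i) for i in np.nonzero(reached)[0]} - {source}

print(sorted(k_step_neighbors(A, 0, 2)))              # [1, 2, 4]
```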

10.
We consider the problem of optimal real-time scheduling of periodic and sporadic tasks on identical multiprocessors. A number of recent papers have used the notions of fluid scheduling and deadline partitioning to guarantee optimality and improve performance. This article develops a unifying theory with the DP-Fair scheduling policy and examines how it overcomes problems faced by greedy scheduling algorithms. In addition, we present DP-Wrap, a simple DP-Fair scheduling algorithm which serves as a least common ancestor to other recent algorithms. The DP-Fair scheduling policy is extended to address the problem of scheduling sporadic task sets with arbitrary deadlines.
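A minimal sketch of the wrap-around allocation idea, under simplifying assumptions (a single time slice between consecutive deadlines, utilizations given directly): every task receives its utilization times the slice length, and the allocations are laid end-to-end and wrapped across the m processors. This illustrates the general scheme only, not the article's full DP-Fair policy or its extension to sporadic tasks with arbitrary deadlines.

```python
# Hedged sketch of the DP-Wrap idea within one time slice (simplified; not the
# article's full policy): allocate utilization * slice_len per task and wrap
# the allocations across m processors.
def dp_wrap_slice(utilizations, slice_len, m):
    """Per-processor lists of (task, start, end) within a single time slice."""
    assert sum(utilizations) <= m, "task set must fit on m processors"
    schedule = [[] for _ in range(m)]
    proc, offset = 0, 0.0
    for task, u in enumerate(utilizations):
        remaining = u * slice_len                 # fluid allocation for this slice
        while remaining > 1e-12:
            chunk = min(remaining, slice_len - offset)
            schedule[proc].append((task, offset, offset + chunk))
            offset += chunk
            remaining -= chunk
            if offset >= slice_len - 1e-12:       # processor full: wrap to the next
                proc, offset = proc + 1, 0.0
    return schedule

# Three tasks with total utilization 1.7 on 2 processors, slice of length 4.
for p, segs in enumerate(dp_wrap_slice([0.6, 0.5, 0.6], 4.0, 2)):
    print(f"P{p}:", [(t, round(s, 2), round(e, 2)) for t, s, e in segs])
```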

11.
12.
Reasoning about the termination of equational programs in sophisticated equational languages such as Elan, Maude, OBJ, CafeOBJ, Haskell, and so on, requires support for advanced features such as evaluation strategies, rewriting modulo, use of extra variables in conditions, partiality, and expressive type systems (possibly including polymorphism and higher-order). However, many of those features are, at best, only partially supported by current term rewriting termination tools (for instance mu-term, CiME, AProVE, TTT, Termptation, etc.) while they may be essential to ensure termination. We present a sequence of theory transformations that can be used to bridge the gap between expressive membership equational programs and such termination tools, and prove the correctness of such transformations. We also discuss a prototype tool performing the transformations on Maude equational programs and sending the resulting transformed theories to some of the aforementioned standard termination tools.

13.
A circle graph is the intersection graph of a set of chords in a circle. Keil [Discrete Appl. Math., 42(1):51–63, 1993] proved that Dominating Set, Connected Dominating Set, and Total Dominating Set are NP-complete in circle graphs. To the best of our knowledge, nothing was known about the parameterized complexity of these problems in circle graphs. In this paper we prove the following results, which contribute in this direction:
  • Dominating Set, Independent Dominating Set, Connected Dominating Set, Total Dominating Set, and Acyclic Dominating Set are W[1]-hard in circle graphs, parameterized by the size of the solution.
  • Whereas both Connected Dominating Set and Acyclic Dominating Set are W[1]-hard in circle graphs, it turns out that Connected Acyclic Dominating Set is polynomial-time solvable in circle graphs.
  • If T is a given tree, deciding whether a circle graph G has a dominating set inducing a graph isomorphic to T is NP-complete when T is in the input, and FPT when parameterized by t=|V(T)|. We prove that the FPT algorithm runs in subexponential time, namely $2^{\mathcal{O}(t \cdot\frac{\log\log t}{\log t})} \cdot n^{\mathcal{O}(1)}$ , where n=|V(G)|.

14.
The ability to efficiently obtain exact distance information from both directed and undirected graphs is desired by many real-world applications. In this work, we unify the query indexing efforts on directed and undirected graphs by proposing the TreeMap approach. Our approach has very tight bounds on query time, index size, and construction time for answering queries on both directed and undirected graphs. The query time complexity is close to constant for graphs with a small tree decomposition width, and the index construction can be completed without materializing the distance matrix or other high-cost operations. In the empirical study, we demonstrate that the TreeMap approach in general performs much better than competitive methods in indexing real graphs for answering exact distance queries.

15.
Conventional data warehouses employ the query-at-a-time model, which maps each query to a distinct physical plan. When several queries execute concurrently, this model introduces contention and thrashing, because the physical plans, unaware of each other, compete for access to the underlying I/O and computation resources. As a result, while modern systems can efficiently optimize and evaluate a single complex data analysis query, their performance suffers significantly and can be highly erratic when multiple complex queries run at the same time. We present in this paper Cjoin, a new design that substantially improves throughput in large-scale data analytics systems processing many concurrent join queries. In contrast to the conventional query-at-a-time model, our approach employs a single physical plan that shares I/O, computation, and tuple storage across all in-flight join queries. We use an "always-on" pipeline of non-blocking operators, managed by a controller that continuously examines the current query mix and optimizes the pipeline on the fly. Our design enables data analytics engines to scale gracefully to large data sets, provide predictable execution times, and reduce contention. We implemented Cjoin as an extension to the PostgreSQL DBMS. This prototype outperforms conventional commercial systems by an order of magnitude for tens to hundreds of concurrent queries.
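The contrast with query-at-a-time execution can be seen in a toy, hedged sketch: a single scan of a fact table serves several concurrent queries by tagging each tuple with the queries whose predicates it satisfies. Data and predicates are invented, and the real Cjoin pipeline (shared joins, non-blocking operators, on-the-fly reoptimization) is far richer.

```python
# Hedged sketch of a shared scan (greatly simplified; not Cjoin's pipeline):
# one pass over the fact table serves all in-flight queries at once.
from collections import defaultdict

fact_table = [
    {"region": "EU", "year": 2011, "amount": 120},
    {"region": "US", "year": 2012, "amount": 300},
    {"region": "EU", "year": 2012, "amount": 80},
]

# Concurrent queries, each reduced here to a selection predicate.
queries = {
    "q1": lambda t: t["region"] == "EU",
    "q2": lambda t: t["year"] == 2012,
    "q3": lambda t: t["amount"] > 100,
}

results = defaultdict(list)
for row in fact_table:                          # single shared scan
    for qid, pred in queries.items():
        if pred(row):                           # route the tuple to each matching query
            results[qid].append(row)

for qid in sorted(queries):
    print(qid, results[qid])
```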

16.
A planning and scheduling (P&S) system takes as input a domain model and a goal, and produces a plan of actions to be executed, which will achieve the goal. A P&S system typically also offers plan execution and monitoring engines. Due to the non-deterministic nature of planning problems, it is a challenge to construct correct and reliable P&S systems, including, for example, declarative domain models. Verification and validation (V&V) techniques have been applied to address these issues. Furthermore, V&V systems have been applied to actually perform planning, and conversely, P&S systems have been applied to perform V&V of more traditional software. This article overviews some of the literature on the fruitful interaction between V&V and P&S.

17.
We give a self-reduction for the Circuit Evaluation problem (CircEval) and prove the following consequences.
  1. Amplifying size–depth lower bounds. If CircEval has Boolean circuits of $n^k$ size and $n^{1-\delta}$ depth for some $k$ and $\delta$, then for every $\epsilon > 0$, there is a $\delta' > 0$ such that CircEval has circuits of $n^{1+\epsilon}$ size and $n^{1-\delta'}$ depth. Moreover, the resulting circuits require only $\tilde{O}(n^{\epsilon})$ bits of non-uniformity to construct. As a consequence, strong enough depth lower bounds for Circuit Evaluation imply a full separation of P and NC (even with a weak size lower bound).
  2. Lower bounds for quantified Boolean formulas. Let $c, d > 1$ and $e < 1$ satisfy $c < (1 - e^d)/d$. Either the problem of recognizing valid quantified Boolean formulas (QBF) is not solvable in $\mathrm{TIME}[n^c]$, or the Circuit Evaluation problem cannot be solved with circuits of $n^d$ size and $n^e$ depth. This implies unconditional polynomial-time uniform circuit lower bounds for solving QBF. We also prove that QBF does not have $n^c$-time uniform NC circuits, for all $c < 2$.

18.
The evolution of the web has outpaced itself: A growing wealth of information and increasingly sophisticated interfaces necessitate automated processing, yet existing automation and data extraction technologies have been overwhelmed by this very growth. To address this trend, we identify four key requirements for web data extraction, automation, and (focused) web crawling: (1) interact with sophisticated web application interfaces, (2) precisely capture the relevant data to be extracted, (3) scale with the number of visited pages, and (4) readily embed into existing web technologies. We introduce OXPath as an extension of XPath for interacting with web applications and extracting data thus revealed—matching all the above requirements. OXPath’s page-at-a-time evaluation guarantees memory use independent of the number of visited pages, yet remains polynomial in time. We experimentally validate the theoretical complexity and demonstrate that OXPath’s resource consumption is dominated by page rendering in the underlying browser. With an extensive study of sublanguages and properties of OXPath, we pinpoint the effect of specific features on evaluation performance. Our experiments show that OXPath outperforms existing commercial and academic data extraction tools by a wide margin.

19.
Classic Learning
Frazier, Michael; Pitt, Leonard. Machine Learning, 1996, 25(2-3): 151-193.

20.
A DIN Kernel LISP Draft (DKLisp) has been developed by DIN as a reaction to Action D1 (N79), short-term goal, of ISO WG16. It defines a subset language, as compatible as possible with the ANSI Common-Lisp draft, but also with the EuLisp draft. It combines the most important LISP main-stream features in a single, compact, but nevertheless complete language definition, which could thereby be well suited as a basis for a short-term International Lisp Standard. Besides the functional and knowledge-processing features, the expressive power of the language is well comparable with contemporary procedural languages such as C++ (of course without libraries). Important features of DKLisp are:
  • to be a “Lisp-1,” but allowing an easy “Lisp-2” transformation;
  • to be a simple, powerful and standardized educational Lisp;
  • to omit all features, which are unclean or in heavy discussion;
  • DKLisp programs run nearly unchanged in Common-Lisp;
  • DKLisp contains a simple object and package system;
  • DKLisp contains those data classes and control structures also common to most modern Lisp and non-Lisp languages;
  • DKLisp offers a simple stream I/O;
  • DKLisp contains a clean unified hierarchical class/type system;
  • DKLisp contains the typical “Lisp-features” in an orthogonal way;
  • DKLisp allows and encourages really small but powerful implementations;
  • DKLisp comes in levels, so allowing ANSI Common-Lisp to be an extension of DKLisp level-1.
  • The present document is the second version of the proposal, namely version 1.2, with slight differences with respect to the one sent to ISO. Sources of such changes were the remarks generously sent by many readers of the previous attempt.
