Similar Documents
20 similar documents found.
1.
Integrating ontologies and rules on the Semantic Web enables software agents to interoperate over both; however, this leads to two problems. First, reasoning services in SWRL (a combination of OWL and RuleML) are not decidable. Second, no studies have focused on distributed reasoning services for integrating ontologies and rules across multiple knowledge bases. To address these problems, we consider distributed reasoning services for ontologies and rules with decidable and effective computation. In this paper, we describe order-sorted logic programming over multiple knowledge bases in which rigid properties can be transferred between knowledge bases. Our order-sorted logic contains types (rigid sorts), non-rigid sorts, and unary predicates that distinctly express essential sorts, non-essential sorts, and non-sortal properties. We formalize the order-sorted Horn-clause calculus for such properties in a single knowledge base. This calculus is then extended by embedding rigid-property derivation for multiple knowledge bases, each of which can transfer rigid-property information from other knowledge bases. To make the reasoning effective and decidable, we design a query-answering system that combines order-sorted linear resolution and rigid-property resolution as top-down algorithms.
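A minimal sketch of the central idea that only rigid-property facts may cross knowledge-base boundaries: sorts marked as rigid (types) are essential and hold independently of any one knowledge base, while non-rigid facts stay local. All names (KnowledgeBase, RIGID_SORTS, transfer methods) are illustrative, not the authors' calculus or implementation.

```python
# Sketch: transferring only rigid-sort facts between knowledge bases.
# Every name here is illustrative; the real system is a sorted logic
# programming calculus, not a Python class.

RIGID_SORTS = {"person", "animal"}          # types: essential, KB-independent
NON_RIGID_SORTS = {"student", "employee"}   # non-rigid: only meaningful locally


class KnowledgeBase:
    def __init__(self, name):
        self.name = name
        self.facts = set()          # facts of the form (sort, constant)

    def assert_fact(self, sort, constant):
        self.facts.add((sort, constant))

    def rigid_facts(self):
        # Only facts whose sort is rigid are safe to export.
        return {(s, c) for (s, c) in self.facts if s in RIGID_SORTS}

    def import_from(self, other):
        # Rigid-property information flows in; non-rigid facts stay local.
        self.facts |= other.rigid_facts()

    def holds(self, sort, constant):
        return (sort, constant) in self.facts


kb1, kb2 = KnowledgeBase("KB1"), KnowledgeBase("KB2")
kb1.assert_fact("person", "alice")      # rigid: alice is essentially a person
kb1.assert_fact("student", "alice")     # non-rigid: only holds inside KB1
kb2.import_from(kb1)

print(kb2.holds("person", "alice"))     # True  (rigid fact was transferred)
print(kb2.holds("student", "alice"))    # False (non-rigid fact was not)
```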

2.
This article proposes a method to perform automated inference from event assertions through the use of an ontology formalized under Conceptual Structure Theory. An upper event ontology is first presented as an example of how the ontology is built and how real-life event assertions and their relations are modeled into its type hierarchies. Three different types are distinguished: an event type as a basic classification of events of similar nature, an event relation type as a predicate on event types, and an event meta-relation type as a “predicate of predicates” on event types, with the formal relationships between them and their properties elicited. The proposed formalization could be leveraged by search engines on the Semantic Web and by query-answering systems in specific domains of discourse.

3.
Order-sorted logic programming with predicate hierarchy
Order-sorted logic has been formalized as first-order logic with sorted terms, where the sorts are ordered to build a hierarchy (called a sort-hierarchy). Such sorted logics provide useful expressions and inference methods for structural knowledge that ordinary first-order logic lacks. Nitta et al. pointed out that, for legal reasoning, a sort-hierarchy (or a sorted term) is not sufficient to describe structural knowledge for event assertions, which express facts caused at some particular time and place. Event assertions are represented by predicates with n arguments (i.e., n-ary predicates), and a particular kind of hierarchy (called a predicate hierarchy) is built from relationships among these predicates. To deal with such a predicate hierarchy, which is more intricate than a sort-hierarchy, Nitta et al. implemented a typed (sorted) logic programming language extended with a hierarchy of verbal concepts (corresponding to predicates). However, that inference system lacks a theoretical foundation because its hierarchical expressions exceed the formalization of order-sorted logic. In this paper, we formalize a logic programming language with not only a sort-hierarchy but also a predicate hierarchy. This language can derive general and concrete expressions in the two kinds of hierarchies. For hierarchical reasoning over predicates, we propose a manipulation of arguments in which surplus arguments in derived predicates are eliminated and missing arguments are supplemented. As discussed by Allen, McDermott, and Shoham in research on temporal logic, and as applied by Nitta et al. to legal reasoning, if each predicate is interpreted as an event or action (not as a static property), then missing arguments should be supplemented by existential terms in the argument manipulation. Based on this, we develop a Horn-clause resolution system extended with inference rules for predicate hierarchies. With a semantic model restricted by the interpretation of a predicate hierarchy, the soundness and completeness of the Horn-clause resolution are proven.
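A small sketch of the argument manipulation described above: when an atom of a more specific event predicate is rewritten as an atom of a related predicate in the hierarchy, surplus arguments are dropped and missing arguments are filled with fresh existential terms. The predicate names, argument schemas, and hierarchy below are illustrative, not the paper's formal system.

```python
# Sketch of argument manipulation in a predicate hierarchy: surplus arguments
# are eliminated and missing arguments are supplemented with fresh existential
# variables. Names and argument schemas are made up for illustration.

import itertools

# Hypothetical predicate hierarchy: 'sell' and 'move' are below 'transfer'.
ARG_SCHEMA = {
    "transfer": ["agent", "object", "recipient"],
    "sell":     ["agent", "object", "recipient", "price"],
    "move":     ["agent", "object"],
}
SUPER = {"sell": "transfer", "move": "transfer"}   # child -> parent

_fresh = itertools.count()

def fresh_var():
    # Existential placeholder for an argument that must exist but whose
    # value is not given by the derived predicate.
    return f"_E{next(_fresh)}"

def convert(pred, args, target):
    """Rewrite pred(args) as an atom of the related predicate `target`."""
    src = dict(zip(ARG_SCHEMA[pred], args))
    out = []
    for role in ARG_SCHEMA[target]:
        # Keep shared roles, supplement missing ones existentially;
        # surplus roles of `pred` are simply dropped.
        out.append(src.get(role, fresh_var()))
    return (target, tuple(out))

# sell(john, book, mary, 10$) generalizes to transfer(john, book, mary):
print(convert("sell", ("john", "book", "mary", "10$"), "transfer"))
# move(john, book) has no recipient, so one is introduced existentially:
print(convert("move", ("john", "book"), "transfer"))
```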

4.
Spatial data warehouses (SDWs) allow for spatial analysis together with analytical multidimensional queries over huge volumes of data. The challenge is to retrieve data related to ad hoc spatial query windows according to spatial predicates while avoiding the high cost of joining large tables. Therefore, mechanisms that provide efficient query processing over SDWs are essential. In this paper, we propose two efficient indices for SDWs: the SB-index and the HSB-index. The proposed indices share the following characteristics. They enable multidimensional queries with spatial predicates over SDWs and also support predefined spatial hierarchies. Furthermore, they compute the spatial predicate and transform it into a conventional one, which can be evaluated together with other conventional predicates by accessing a star-join Bitmap index. While the SB-index has a sequential data structure, the HSB-index uses a hierarchical data structure to enable clustering of spatial objects and a specialized buffer-pool to decrease the number of disk accesses. The advantages of the SB-index and the HSB-index over the DBMS resources for SDW indexing (i.e., star-join computation and materialized views) were investigated through performance tests, which issued roll-up operations extended with containment and intersection range queries. The performance results showed improvements ranging from 68% up to 99% over both the star-join computation and the materialized view. Furthermore, the proposed indices proved to be very compact, adding less than 1% to the storage requirements. Therefore, both the SB-index and the HSB-index are excellent choices for SDW indexing. Choosing between them mainly depends on the query selectivity of the spatial predicates: low query selectivity benefits the HSB-index, whereas the SB-index provides better performance for higher query selectivity.
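A simplified sketch of the SB-index idea: a sequential list of (MBR, key) entries is scanned to evaluate the spatial predicate, which is then rewritten as a conventional key-set predicate that a star-join bitmap index can answer. The data, MBR test, and bitmap layout below are illustrative assumptions, not the paper's implementation (in particular, the exact-geometry refinement step is omitted).

```python
# Simplified sketch of the SB-index: evaluate the spatial predicate over a
# sequential list of (MBR, key) entries and rewrite it as a conventional
# predicate (a set of surrogate keys) answered by OR-ing bitmaps.

def intersects(mbr, window):
    (xmin, ymin, xmax, ymax) = mbr
    (wxmin, wymin, wxmax, wymax) = window
    return not (xmax < wxmin or wxmax < xmin or ymax < wymin or wymax < ymin)

# Sequential SB-index-like structure: one entry per spatial dimension member,
# holding the member's minimum bounding rectangle and its surrogate key.
sb_index = [
    ((0, 0, 10, 10),   "city_1"),
    ((20, 20, 30, 30), "city_2"),
    ((5, 5, 15, 15),   "city_3"),
]

# Toy star-join bitmap index: surrogate key -> bitmap over fact-table rows.
bitmaps = {"city_1": 0b10110, "city_2": 0b01000, "city_3": 0b00101}

def spatial_rollup(query_window):
    # 1) Spatial step: scan the SB-index and collect keys whose MBR intersects
    #    the ad hoc query window (candidates only; a refinement step against
    #    the exact geometries would follow in the real index).
    keys = [k for (mbr, k) in sb_index if intersects(mbr, query_window)]
    # 2) Conventional step: the spatial predicate is now "key IN keys",
    #    answered by OR-ing the corresponding bitmaps.
    result = 0
    for k in keys:
        result |= bitmaps[k]
    return keys, bin(result)

print(spatial_rollup((4, 4, 12, 12)))   # city_1 and city_3 intersect the window
```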

5.
In recent years, crowdsourced query optimization has attracted wide attention in the database community. This paper studies the crowdsourced multi-predicate selection query problem: using human workers to find the objects that satisfy a query with multiple predicates. A straightforward approach enumerates the objects in the dataset and, for each object, asks whether it satisfies each predicate; its cost is |R|·n, where |R| is the number of objects and n is the number of predicates. Clearly, this is prohibitively expensive for large datasets or queries with many predicates. Because different predicates have different selectivities, verifying highly selective predicates first allows objects that fail them to be skipped when the remaining predicates are verified, so a good predicate order can significantly reduce the human cost of a crowdsourced selection query. In practice, however, the optimal predicate order is hard to obtain. To address this problem, we propose a sampling-based framework for obtaining high-quality query orders. To control the cost of generating the order, we design an optimal selection method based on random sequences, which obtains the final predicate order by randomly selecting sequences. Since the random-sequence method can still be expensive, we further propose a filtering-based sequence selection method to reduce the overhead. We evaluated the proposed methods with real datasets on a crowdsourcing platform; the experimental results show that they significantly reduce the cost of generating the query order while producing high-quality orders.
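A minimal sketch of why predicate order matters and how selectivities estimated from a small sample can pick one. All names are illustrative: ask_crowd stands in for a paid human task (here a stub), the sampling questions are not charged to the cost counter, and the ordering heuristic is a simplified version of the paper's framework.

```python
# Sketch of selectivity-based predicate ordering for crowdsourced selection.
# ask_crowd() is a stub for a human task; in reality each call costs money.

import random

def ask_crowd(obj, predicate):
    # Stub for a crowdsourcing call.
    return predicate(obj)

def estimate_order(objects, predicates, sample_size=20):
    """Estimate each predicate's selectivity on a random sample and return
    the predicates ordered from most to least selective (lowest pass rate
    first, so failing objects are pruned as early as possible)."""
    sample = random.sample(objects, min(sample_size, len(objects)))
    rates = []
    for p in predicates:
        passed = sum(ask_crowd(o, p) for o in sample)
        rates.append((passed / len(sample), p))
    return [p for _, p in sorted(rates, key=lambda t: t[0])]

def crowdsourced_select(objects, predicates):
    order = estimate_order(objects, predicates)
    answers, cost = [], 0
    for obj in objects:
        ok = True
        for p in order:
            cost += 1                      # one human question
            if not ask_crowd(obj, p):      # fails: remaining predicates skipped
                ok = False
                break
        if ok:
            answers.append(obj)
    return answers, cost

# Toy data: the naive method would cost len(objects) * len(predicates) questions.
objects = list(range(1000))
predicates = [lambda x: x % 2 == 0, lambda x: x % 50 == 0, lambda x: x > 100]
result, cost = crowdsourced_select(objects, predicates)
print(len(result), "objects selected with", cost, "questions")
```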

6.
Explicit graphs in a functional model for spatial databases
Observing that networks are ubiquitous in applications for spatial databases, we define a new data model and query language that especially supports graph structures. This model integrates concepts of functional data modeling with order-sorted algebra. Besides object and data type hierarchies, graphs are available as an explicit modeling tool, and graph operations are part of the query language. Graphs have three classes of components, namely nodes, edges, and explicit paths. These are at the same time object types within the object type hierarchy and can be used like any other type. Explicit paths are useful because real-world objects often correspond to paths in a network. Furthermore, a dynamic generalization concept is introduced to handle heterogeneous collections of objects in a query. In connection with spatial data types, this leads to powerful modeling and querying capabilities for spatial databases, in particular for spatially embedded networks such as highways, rivers, public transport, and so forth. We use a multi-level order-sorted algebra as the formal framework for the specification of our model. Roughly speaking, the first-level algebra defines the types and operations of the query language, whereas the second-level algebra defines kinds (collections of types) and type constructors as functions between kinds, and so provides the types that can be used at the first level.

7.
Fragmentation has been used to distribute the contents of a database across the sites of a distributed database system. At run time, the system must determine which fragments can be used to answer each query, a process that requires solving the predicate implication problem. To speed up processing, it is desirable to do as much preprocessing as possible on the prestored fragments, without knowledge of the run-time query. In this paper, preprocessing of database fragments to speed up later run-time implication checking is investigated. The investigation is based on a new concept, separation among predicates. When two predicates are properly separated, their union cannot be implied by any other conjunctive predicate unless one of them is implied by that conjunctive predicate. A polynomial-time algorithm for checking pair-wise separation among a collection of fragment predicates is introduced and its complexity is theoretically analyzed. The separation-checking algorithm is accompanied by a query-processing algorithm that uses the separation properties of the fragments to speed up real-time query processing. The two algorithms are scalable with respect to the available preprocessing time, in the sense that the preprocessing algorithm can be run for shorter periods to produce partial preprocessing that can still be used by the query-processing algorithm.

8.
Bottom-up query-answering procedures tend to explore a much larger search space than what is strictly needed. Top-down processing methods use the query to perform a more focused search that can result in more efficient query answering. Given a disjunctive deductive database, DB, and a query, Q, we establish a strong connection between model generation and clause derivability in two different representations of DB and Q. This allows us to use a bottom-up procedure for evaluating Q against DB in a top-down fashion. The approach requires no extensive rewriting of the input theory and introduces no new predicates. Rather, it is based on a certain duality principle for interpreting logical connectives. The duality transformation is achieved by reversing the direction of implication arrows in the clauses representing both the theory and the negation of the query. The application of a generic bottom-up procedure to the transformed clause set results in top-down query answering. Under favorable conditions efficiency gains are substantial, as shown by our preliminary testing. We give the logical meaning of the duality transformation and point to the conditions and sources of improved efficiency. We show how the duality approach can be used for refined query answering by specifying the minimal conditions (weakest updates) to DB under which Q becomes derivable. This is shown to be useful for view updates in disjunctive deductive databases as well as for other interesting applications.
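A tiny sketch of the duality transformation itself: a clause a1 & ... & an -> b1 | ... | bm is kept as a (body, head) pair of atom sets, and its dual simply reverses the implication arrow, i.e. swaps the two sides; the same is done for the clauses representing the negated query. The clause representation and example database are illustrative, and the generic bottom-up model generator that would then run on the dual set is not reproduced.

```python
# Sketch of the duality transformation: reverse the implication arrows of the
# clauses representing the theory and the negation of the query. Clauses are
# (body, head) pairs of ground atoms; the example is a toy database.

def dual(clause):
    body, head = clause
    return (frozenset(head), frozenset(body))

def dualize(theory, negated_query):
    return [dual(c) for c in theory + negated_query]

# Toy disjunctive database:  p -> q | r   and the fact  -> p
theory = [
    (frozenset({"p"}), frozenset({"q", "r"})),
    (frozenset(),      frozenset({"p"})),
]
# The query "q | r" is negated into the clauses  q ->  and  r ->  (empty heads).
negated_query = [
    (frozenset({"q"}), frozenset()),
    (frozenset({"r"}), frozenset()),
]

for body, head in dualize(theory, negated_query):
    print(set(body) or "{}", "->", set(head) or "{}")
```

On this toy set, a bottom-up fixpoint over the dual clauses would derive q and r (the dualized negated query), then p, and finally an empty head, mirroring a successful top-down refutation of the query against the original database.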

9.
User defined topological predicates in database systems
Current database systems can store not only standard data like integer, string, and real values, but also spatial data like points, lines, and regions. The importance of topological relationships between spatial objects was recognized a long time ago. Using the well-known 9-intersection model for describing such relationships, a large number of different topological relationships can be distinguished. For the query language of a database system, it is not desirable to have such a large number of topological predicates; in particular, the query language should not be extended with a long list of predicate names. Instead, it is desirable to build new relationships from existing ones, for example to coarsen the granularity. This paper describes how a database system user can define and use her own topological predicates. We show algorithms for computing such predicates efficiently. Finally, we compare these general versions with specialized implementations of topological predicates.
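A minimal sketch of coarsening topological predicates: each base predicate is identified with the set of 9-intersection (DE-9IM-style) matrices it accepts, and a user-defined predicate is simply the union of such sets. The base predicate names and the matrix strings are illustrative placeholders, not the exact matrix lists of any particular predicate, and the computation of the 9-intersection matrix for two spatial objects is not shown.

```python
# Sketch of user-defined (coarsened) topological predicates as unions of
# 9-intersection matrix sets. Matrix strings are illustrative placeholders.

# Hypothetical base predicates, each given by the matrices it accepts.
BASE_PREDICATES = {
    "overlap": {"212101212"},
    "meet":    {"FF2F11212", "FF2F01212"},
    "inside":  {"212FF1FF2"},
}

USER_PREDICATES = {}

def define_predicate(name, *base_names):
    """Coarsen the granularity: the new predicate holds whenever any of the
    listed predicates holds (union of their matrix sets)."""
    matrices = set()
    for b in base_names:
        matrices |= BASE_PREDICATES.get(b, set()) | USER_PREDICATES.get(b, set())
    USER_PREDICATES[name] = matrices

def holds(pred_name, matrix):
    """Evaluate a predicate against the 9-intersection matrix computed for a
    pair of spatial objects (the matrix computation itself is not shown)."""
    table = {**BASE_PREDICATES, **USER_PREDICATES}
    return matrix in table[pred_name]

# A user defines her own coarser predicate "touches_or_overlaps":
define_predicate("touches_or_overlaps", "meet", "overlap")
print(holds("touches_or_overlaps", "FF2F11212"))   # True  (a 'meet' matrix)
print(holds("touches_or_overlaps", "212FF1FF2"))   # False (an 'inside' matrix)
```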

10.
Optimization and evaluation of disjunctive queries
It is striking that the optimization of disjunctive queries, i.e., those that contain at least one OR-connective in the query predicate, has been vastly neglected in the literature as well as in commercial systems. In this paper, we propose a novel technique, called bypass processing, for evaluating such disjunctive queries. The bypass processing technique is based on new selection and join operators that produce two output streams: the TRUE-stream with tuples satisfying the selection (join) predicate and the FALSE-stream with tuples not satisfying the corresponding predicate. Splitting the tuple streams in this way enables us to “bypass” costly predicates whenever the “fate” of the corresponding tuple (stream) can be determined without evaluating this predicate. In the paper, we show how to systematically generate bypass evaluation plans using a bottom-up building-block approach. We show that our evaluation technique allows us to incorporate the standard SQL semantics of null values. For this, we devise two different approaches: one is based on explicitly incorporating three-valued logic into the evaluation plans; the other relies on two-valued logic by “moving” all negations to atomic conditions of the selection predicate. We describe how to extend an iterator-based query engine to support bypass evaluation with little extra overhead. This query engine was used to quantitatively evaluate the bypass evaluation plans against traditional evaluation techniques based on CNF- or DNF-based query predicates.
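A minimal sketch of a bypass selection for the disjunctive predicate (cheap OR costly): the operator splits its input into a TRUE-stream and a FALSE-stream, so the costly predicate is evaluated only for tuples whose fate is not yet decided. The operator names, cost counter, and data are illustrative, not the paper's engine.

```python
# Sketch of bypass processing for the disjunction (cheap OR costly): tuples
# that already satisfy the cheap predicate bypass the costly one; only the
# FALSE-stream is routed through it.

def bypass_select(tuples, predicate):
    """Single pass that splits tuples into (true_stream, false_stream)."""
    true_stream, false_stream = [], []
    for t in tuples:
        (true_stream if predicate(t) else false_stream).append(t)
    return true_stream, false_stream

calls = {"costly": 0}

def cheap(t):
    return t["category"] == "A"

def costly(t):
    calls["costly"] += 1          # e.g. an expensive UDF or sub-query
    return t["score"] > 0.9

def evaluate_disjunction(tuples):
    # Bypass selection on the cheap predicate: its TRUE-stream already
    # satisfies (cheap OR costly) and skips the costly evaluation entirely.
    sat, rest = bypass_select(tuples, cheap)
    # Only the FALSE-stream still needs the costly predicate.
    sat2, _ = bypass_select(rest, costly)
    return sat + sat2

data = [{"category": c, "score": s}
        for c, s in [("A", 0.1), ("B", 0.95), ("A", 0.99), ("B", 0.2)]]
print(len(evaluate_disjunction(data)), "qualifying tuples,",
      calls["costly"], "costly evaluations instead of", len(data))
```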

11.
This paper presents an enhanced ontology formalization, combining previous work in Conceptual Structure Theory and Order-Sorted Logic. Most existing ontology formalisms place greater importance on concept types, but in this paper we focus on relation types, which are in essence predicates on concept types. We formalize the notion of ‘predicate of predicates’ as meta-relation type and introduce the new hierarchy of meta-relation types as part of the ontology definition. The new notion of closure of a relation or meta-relation type is presented as a means to complete that relation or meta-relation type by transferring extra arguments and properties from other related types. The end result is an expanded ontology, called the closure of the original ontology, on which automated inference could be more easily performed. Our proposal could be viewed as a novel and improved ontology formalization within Conceptual Structure Theory and a contribution to knowledge representation and formal reasoning (e.g., to build a query-answering system for legal knowledge).

12.
Refinement-checking, as embodied in tools like FDR, PAT and ProB, is a popular approach for model-checking refinement-closed predicates of CSP processes. We consider the limits of this approach to model-checking these kinds of predicates. By adopting Clarkson and Schneider’s hyperproperties framework, we show that every refinement-closed denotational predicate of finitely-nondeterministic, divergence-free CSP processes can be written as the conjunction of a safety predicate and the refinement-closure of a liveness predicate. We prove that every safety predicate is refinement-closed and that the safety predicates correspond precisely to the CSP refinement checks in finite linear observations models whose left-hand sides (i.e. specification processes) are independent of the systems to which they are applied. We then show that there exist important liveness predicates whose refinement-closures cannot be expressed as refinement checks in any finite linear observations model $\mathcal{M}$, divergence-strict model $\mathcal{M}^\Downarrow$ or non-divergence-strict divergence-recording model $\mathcal{M}^\#$, i.e. in any standard CSP model suitable for reasoning about the kinds of processes that FDR can handle, namely finitely-branching ones. These liveness predicates include liveness properties under intuitive fairness assumptions, branching-time liveness predicates and non-causation predicates for reasoning about authority. We conclude that alternative verification approaches, besides refinement-checking, currently under development for CSP should be further pursued.

13.
The authors’ previous work discussed a scalable abstract knowledge representation and reasoning scheme for Pervasive Computing Systems, where both low-level and abstract knowledge is maintained in the form of temporal first-order logic (TFOL) predicates. Furthermore, we introduced the novel concept of a generalised event, an abstract event, which we define as a change in the truth value of an abstract TFOL predicate. Abstract events represent real-time knowledge about the system and are defined with the help of well-formed TFOL expressions whose leaf nodes are concrete, low-level events, using our AESL language. In this paper, we propose to simulate pervasive systems by providing estimated knowledge about their entities and the situations that involve them. To achieve this goal, we enhance AESL with higher-order function predicates that denote approximate knowledge about the likelihood of a predicate instance having the value True with respect to a time reference. We define a mapping function between a TFOL predicate and a Bayesian network that calculates likelihood estimates for that predicate as well as a confidence level, i.e., a metric of how reliable the likelihood estimation is for that predicate. Higher-order likelihood predicates are implemented by a novel middleware component, the Likelihood Estimation Service (LES). LES implements the above mapping; first, for each abstract predicate, it learns a Bayesian network that corresponds to that predicate from the knowledge stored in the sensor-driven system. Once trained and validated, the Bayesian networks generate a likelihood estimate and a confidence level. This new knowledge is maintained in the middleware as approximate knowledge, thereby providing a simulation of the pervasive system in the absence of real-time data. Last but not least, we describe an experimental evaluation of our system using the Active BAT location system.
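A deliberately simplified sketch of a higher-order likelihood predicate: instead of a learned Bayesian network, a plain frequency estimate over historical observations stands in for the likelihood, with a confidence level derived from the amount of supporting evidence. All names, the time window, and the confidence formula are hypothetical; this is not the authors' LES middleware.

```python
# Simplified stand-in for a Likelihood Estimation Service: likelihood(p, t)
# returns an estimated probability that predicate instance p holds around
# time t, plus a confidence level reflecting how much evidence backs it.
# A learned Bayesian network replaces this frequency count in the actual
# approach; everything here is illustrative.

import math

# Historical observations: predicate instance -> list of (hour_of_day, truth).
HISTORY = {
    ("located_in", "alice", "office"): [(9, True), (10, True), (11, True),
                                        (14, True), (20, False), (22, False)],
}

def likelihood(pred_instance, hour, window=2):
    """Estimate P(pred_instance is True at `hour`) from past observations
    taken within +/- `window` hours, plus a confidence in [0, 1]."""
    obs = [truth for (h, truth) in HISTORY.get(pred_instance, [])
           if abs(h - hour) <= window]
    if not obs:
        return 0.5, 0.0                      # no evidence: uninformative
    p = sum(obs) / len(obs)
    # Confidence grows with the number of supporting observations.
    confidence = 1.0 - 1.0 / math.sqrt(1 + len(obs))
    return p, confidence

print(likelihood(("located_in", "alice", "office"), 10))   # high p, some confidence
print(likelihood(("located_in", "alice", "office"), 3))    # no nearby evidence
```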

14.
On-the-fly processing of XPath queries with predicates
吴年, 张昱. 《计算机工程》, 2006, 32(13): 58-60.
This paper introduces XSIEQ, an XML stream query system that evaluates predicates immediately and emits results on the fly. XSIEQ uses a modified pushdown-automaton technique: it builds an NFA from multiple XPath expressions with prefix sharing, and labels the NFA states with types and attaches indexes to them, so that at run time the system can quickly determine when to evaluate predicates and when to buffer data, achieving on-the-fly processing. Finally, the query performance of XSIEQ is compared with that of YFilter and the results are analyzed.
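A minimal sketch of the prefix-shared NFA idea: several simple XPath paths (child-axis steps only) are merged into a trie-like automaton whose accepting states carry query ids, and a SAX-style stream of start/end element events is run against it with a stack of active state sets. Predicate handling, the '//' axis, and the state typing and indexing that XSIEQ adds are omitted; all names are illustrative.

```python
# Sketch of prefix-shared NFA construction and streaming matching for several
# simple XPath expressions (child-axis steps only).

from collections import defaultdict

def build_nfa(xpaths):
    """Merge /a/b/c-style paths into one automaton with shared prefixes."""
    trans = defaultdict(dict)      # state -> {element tag: next state}
    accept = defaultdict(list)     # state -> query ids accepted there
    next_state = [1]               # state 0 is the start state
    for qid, xp in enumerate(xpaths):
        state = 0
        for tag in xp.strip("/").split("/"):
            if tag not in trans[state]:
                trans[state][tag] = next_state[0]
                next_state[0] += 1
            state = trans[state][tag]
        accept[state].append(qid)
    return trans, accept

def run_stream(events, trans, accept):
    """events: ('start', tag) / ('end', tag) pairs in document order."""
    stack = [{0}]                  # active NFA states per open element
    matches = []
    for kind, tag in events:
        if kind == "start":
            active = {trans[s][tag] for s in stack[-1] if tag in trans[s]}
            for s in active:
                matches.extend(accept.get(s, []))
            stack.append(active)
        else:                      # 'end': discard the states of the closed element
            stack.pop()
    return matches

trans, accept = build_nfa(["/book/title", "/book/author/name", "/book/title"])
stream = [("start", "book"), ("start", "title"), ("end", "title"),
          ("start", "author"), ("start", "name"), ("end", "name"),
          ("end", "author"), ("end", "book")]
print(run_stream(stream, trans, accept))   # query ids matched: [0, 2, 1]
```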

15.
郑冬冬, 崔志明. 《计算机应用》, 2006, 26(9): 2024-2027.
More and more information is hidden behind Web query interfaces, which makes it increasingly important to find the data-source interfaces most relevant to a user query. This paper proposes a Deep Web query-interface selection algorithm that relies solely on the features of the query interfaces. Given a large number of heterogeneous Deep Web data sources, the goal is to select the set of query interfaces most relevant to the user query. By observing the features of real query interfaces, we found correlations among the predicates on a query interface. Based on this observation, we designed a data-source selection algorithm built on a co-occurring-predicate correlation model, which selects the set of query interfaces most relevant to the user query.
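A small sketch of a co-occurrence-based predicate correlation model for source selection: correlations between attributes are estimated from how often they appear together on the crawled interfaces, and an interface's relevance to a user query combines direct matches with correlated attributes. The scoring formula, data, and names are simplified illustrations of the idea, not the paper's exact model.

```python
# Sketch of co-occurring-predicate correlation for Deep Web source selection.

from itertools import combinations
from collections import Counter

# Crawled query interfaces, each described by the predicates (attributes) it offers.
INTERFACES = {
    "flights_A": {"departure", "destination", "date", "airline"},
    "hotels_B":  {"destination", "date", "stars"},
    "cars_C":    {"pickup_city", "date"},
}

def cooccurrence(interfaces):
    """Co-occurrence counts of attribute pairs across interfaces."""
    pair_count, attr_count = Counter(), Counter()
    for attrs in interfaces.values():
        attr_count.update(attrs)
        pair_count.update(frozenset(p) for p in combinations(sorted(attrs), 2))
    return pair_count, attr_count

def correlation(a, b, pair_count, attr_count):
    # Simple Jaccard-style correlation between two attributes.
    both = pair_count[frozenset((a, b))]
    return both / (attr_count[a] + attr_count[b] - both) if both else 0.0

def score(interface_attrs, query_attrs, pair_count, attr_count):
    s = 0.0
    for q in query_attrs:
        if q in interface_attrs:
            s += 1.0                                   # direct match
        else:                                          # best correlated attribute
            s += max((correlation(q, a, pair_count, attr_count)
                      for a in interface_attrs), default=0.0)
    return s

pair_count, attr_count = cooccurrence(INTERFACES)
query = {"destination", "date"}
ranked = sorted(INTERFACES, key=lambda i: -score(INTERFACES[i], query,
                                                 pair_count, attr_count))
print(ranked)   # interfaces most relevant to the query come first
```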

16.
It has been shown that global predicate detection in a distributed computation is an NP-complete problem in general. However, efficient predicate detection algorithms exist for some subclasses of predicates, such as stable predicates, observer-independent predicates, conjunctions of local predicates, and channel predicates. We show here that the problem of deciding whether a given predicate is a member of any of these tractable subclasses is NP-hard in general. We also explore the tractability of linear and regular predicates. In particular, we show that, unless RP = NP, there is no polynomial-time algorithm to detect linear and regular predicates.

17.
Detection of weak unstable predicates in distributed programs
This paper discusses detection of global predicates in a distributed program. Earlier algorithms for detection of global predicates proposed by Chandy and Lamport (1985) work only for stable predicates. A predicate is stable if it does not turn false once it becomes true. Our algorithms detect even unstable predicates, without excessive overhead. In the past, such predicates have been regarded as too difficult to detect. The predicates are specified using a logic described formally in this paper. We discuss detection of weak conjunctive predicates that are formed by the conjunction of predicates local to processes in the system. Our detection methods will detect whether such a predicate is true for any interleaving of events in the system, regardless of whether the predicate is stable. Also, any predicate that can be reduced to a set of weak conjunctive predicates is detectable. This class of predicates captures many global predicates that are of interest to a programmer. The message complexity of our algorithm is bounded by the number of messages used by the program. The main applications of our results are in debugging and testing of distributed programs. Our algorithms have been incorporated in a distributed debugger that runs on a network of Sun workstations under UNIX.
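A small sketch of a checker for weak conjunctive predicates: each process reports, in order, the vector clocks of the local states in which its local predicate holds; the checker keeps one candidate state per process and advances any candidate that happened before another, declaring detection when the candidates are pairwise concurrent. This is a simplified illustration of the queue-based approach, not the paper's implementation.

```python
# Sketch of centralized detection of a weak conjunctive predicate using
# vector clocks. reported[i] lists, in local order, the vector clocks of the
# states of process i where its local predicate is true.

def happened_before(vi, i, vj, j):
    # For local states of different processes with standard vector clocks:
    # the state of process i happened before that of process j iff vi[i] <= vj[i].
    return vi[i] <= vj[i]

def detect_wcp(reported):
    """Return a pairwise-concurrent cut (one state per process) where all
    local predicates hold, or None if no such cut exists."""
    n = len(reported)
    idx = [0] * n                              # current candidate per process
    while all(idx[i] < len(reported[i]) for i in range(n)):
        cut = [reported[i][idx[i]] for i in range(n)]
        advanced = False
        for i in range(n):
            for j in range(n):
                if i != j and happened_before(cut[i], i, cut[j], j):
                    idx[i] += 1                # cut[i] can never be concurrent
                    advanced = True            # with cut[j]: try i's next state
                    break
            if advanced:
                break
        if not advanced:
            return cut                         # pairwise concurrent: detected
    return None                                # some queue ran out: not detected

# Two processes; local predicates are true at these vector-clock timestamps:
reported = [
    [[1, 0], [2, 0]],      # process 0
    [[1, 1]],              # process 1 (has already seen process 0's 1st event)
]
print(detect_wcp(reported))   # [[2, 0], [1, 1]] after advancing process 0
```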

18.
Opaque predicates are a class of lightweight code-obfuscation techniques that counter reverse analysis of a program with one-way execution complexity. Generalized opaque predicates extend the constant-value property of ordinary opaque predicates to a constant-logic property, and they have been used in some malware to improve its resistance to detection and removal. To eliminate the influence of opaque predicates on malware identification, this paper proposes a detection method for generalized opaque predicates based on logical consistency, which builds on the successor-dependence property of generalized opaque predicates combined with constant-logic determination. Static analysis extracts the precondition constraints of a predicate, the logical constraints on its successors, and the predicate's decision expression; candidate predicates are screened by searching for intersecting basic blocks, and generalized opaque predicates are then determined by constraint solving. A prototype system was built and tested; the results show that the method detects opaque predicates in malware accurately and efficiently.
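A tiny sketch of the constraint-solving step for the simpler, constant-value case: given the precondition constraints collected on the paths reaching a branch, an SMT solver checks whether the branch's decision expression can ever be false (or ever be true). z3 is used here only as an example toolchain (pip install z3-solver); the constraint and the expression are illustrative, and the logical-consistency check over successor constraints that the paper adds for generalized opaque predicates is not shown.

```python
# Sketch of constant-value determination for an opaque predicate candidate
# using the z3 SMT solver. The precondition and expression are illustrative.

from z3 import BitVec, Solver, Not, unsat

x = BitVec("x", 32)                       # a 32-bit program variable

# Illustrative precondition collected on the paths reaching the predicate.
precondition = x >= 0

# Branch condition extracted from the code: x*(x+1) is always even,
# so its lowest bit is always 0 -- a classic opaque predicate.
predicate = (x * (x + 1)) & 1 == 0

def is_constant(precondition, predicate):
    s = Solver()
    s.add(precondition, Not(predicate))   # is there any input making it false?
    always_true = s.check() == unsat

    s = Solver()
    s.add(precondition, predicate)        # is there any input making it true?
    always_false = s.check() == unsat
    return always_true, always_false

print(is_constant(precondition, predicate))   # (True, False): opaquely true
```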

19.
A formalism adequate for the specification of behavioral properties of databases is proposed. The formalism is a many-sorted first-order predicate calculus that includes a formalized notion of database state. Both update and query requests are modeled through expressions using predicates supplied in the language of the formal system, and both are treated uniformly as a theorem-proving process. The process consists of using the axioms defining the database either to synthesize a valid sequence of update operations (if such exists) or to answer the query.

20.