Similar Articles
20 similar articles found (search time: 0 ms)
1.
Spatio-temporal predicates   (Cited by 10; self-citations: 0; others: 10)
This paper investigates temporal changes of topological relationships, thereby integrating two important research areas: first, 2D topological relationships, which have been investigated quite intensively, and second, the change of spatial information over time. We investigate spatio-temporal predicates, which describe developments of well-known spatial topological relationships. A framework is developed in which spatio-temporal predicates can be obtained by temporal aggregation of elementary spatial predicates and by sequential composition. We compare our framework with two other possible approaches: one is based on the observation that spatio-temporal objects correspond to 3D spatial objects, for which existing topological predicates can be exploited; the other is to consider possible transitions between spatial configurations. These considerations help to identify a canonical set of spatio-temporal predicates.
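The temporal aggregation and sequential composition that the abstract describes can be sketched in a few lines. The stage names and the sampled-relation encoding below are illustrative assumptions, not the paper's actual formalism:

```python
# Sketch of spatio-temporal predicates as compositions of spatial ones.
# A "development" is a sequence of sampled topological relations
# (strings here -- an illustrative encoding).

def always(pred):
    """Temporal aggregation: pred holds at every sample of a development."""
    return lambda samples: all(pred(s) for s in samples)

def then(*stages):
    """Sequential composition: the development passes through the given
    stage predicates in order, each matched by a non-empty run."""
    def check(samples):
        i = 0
        for stage in stages:
            j = i
            while j < len(samples) and stage(samples[j]):
                j += 1
            if j == i:           # the stage never held: composition fails
                return False
            i = j
        return i == len(samples)
    return check

# Elementary spatial predicates on a sampled relation.
disjoint = lambda r: r == "disjoint"
meet     = lambda r: r == "meet"
overlap  = lambda r: r == "overlap"

# "Cross": disjoint, then touching, then overlapping, then touching, then disjoint.
cross = then(disjoint, meet, overlap, meet, disjoint)

print(cross(["disjoint", "meet", "overlap", "meet", "disjoint"]))  # True
print(cross(["disjoint", "overlap", "disjoint"]))                  # False
```

The second development fails because it jumps from disjoint to overlap without a touching stage, so it is not a crossing.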

2.
Fragmentation has been used to distribute the contents of a database across the sites of a distributed database system. During run time, the system must determine which fragments can be used to answer each query. This process requires solving the predicate implication problem. In order to speed processing, it is desirable to do as much preprocessing as possible on the prestored fragments, without knowledge of the run-time query. This paper investigates performing preprocessing on database fragments to speed later run-time implication checking. The investigation is based on a new concept, separation among predicates. When two predicates are properly separated, their union cannot be implied by any other conjunctive predicate unless one of them is implied by the conjunctive predicate. A polynomial-time algorithm for checking the pairwise separation among a collection of fragment predicates is introduced and its complexity is theoretically analyzed. The separation checking algorithm is accompanied by a query processing algorithm which uses the separation properties of the fragments to speed run-time query processing. The two algorithms are scalable according to available preprocessing time, in the sense that the preprocessing algorithm can be run for shorter periods to produce partial preprocessing that can still be used by the query processing algorithm.
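The separation property can be illustrated with one-dimensional integer intervals, a deliberate simplification (the paper treats general conjunctive predicates): when two fragment intervals are separated by a gap, a conjunctive (interval) query implies their union only if it implies one of them individually.

```python
# Fragment predicates and the conjunctive query predicate are modelled
# as closed integer intervals (lo, hi) -- an illustrative simplification.

def implies(query, frag):
    """Every point satisfying the query satisfies the fragment."""
    (ql, qh), (fl, fh) = query, frag
    return fl <= ql and qh <= fh

def implies_union(query, f1, f2):
    """Every point satisfying the query satisfies f1 OR f2 (brute force)."""
    ql, qh = query
    return all(f1[0] <= x <= f1[1] or f2[0] <= x <= f2[1]
               for x in range(ql, qh + 1))

def separated(f1, f2):
    """A gap lies strictly between the two intervals."""
    return f1[1] + 1 < f2[0] or f2[1] + 1 < f1[0]

# Separated fragments: implying the union forces implying one of them.
assert separated((0, 4), (10, 14))
assert implies_union((2, 3), (0, 4), (10, 14)) and implies((2, 3), (0, 4))

# Non-separated fragments: the union can be implied with neither fragment
# implied on its own, so no per-fragment preprocessing shortcut applies.
assert not separated((0, 4), (5, 9))
assert implies_union((3, 6), (0, 4), (5, 9))
assert not implies((3, 6), (0, 4)) and not implies((3, 6), (5, 9))
```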

3.
Predicates are used in the Revised ALGOL 68 Report to indicate blind alleys and to reduce the number of rules in the Report. They may also be used by the compiler writer to implement some of the non-context-free aspects of the language. This paper shows how the predicates can be transformed in a relatively straightforward way into ALGOL 68 procedures to form part of an analyser for the revised language.

4.
In a large-scale locality-driven network, such as in modular robotics and wireless sensor networks, knowing the state of a local area is sometimes necessary, either because interactions are local and driven by neighborhood proximity or because users are interested in the state of a certain region. We define locality-aware predicates (LAP), which aim at detecting a predicate within a specified area. We model the area of interest as the set of processes within a breadth-first search tree (BFST) of height k rooted at the initiator process. Although a locality-aware predicate specifies a predicate only within a local area, observing the area consistently requires considering the entire system in a consistent manner. This raises the challenge of making the complexities of the corresponding predicate-detection algorithms scale-free, i.e., independent of the size of the system. Since all existing algorithms for obtaining a consistent view of the system require either a global snapshot of the entire system or vector clocks of the size of the system, a new solution is needed. We focus on stable LAP, i.e., those LAP that remain true once they become true. We propose a scale-free algorithm to detect stable LAP within a k-height BFST. Our algorithm can detect both stable conjunctive LAP and stable relational LAP. In the process of designing our algorithm, we also propose the first distributed algorithm for building a BFST within an area of interest in a graph, and the first distributed algorithm for recording a consistent sub-cut within the area of interest. This paper demonstrates that LAP are a natural fit for detecting distributed properties in large-scale distributed systems, and that stable LAP can be detected in practice at low cost.
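The height-k BFST that delimits the area of interest can be sketched sequentially (the paper's construction is distributed; this centralized stand-in over an adjacency-list graph only illustrates which processes end up in the area):

```python
from collections import deque

def bfst_area(adj, root, k):
    """Return parent pointers of a height-k BFS tree rooted at `root`;
    the keys form the area of interest (a sequential stand-in for the
    paper's distributed construction)."""
    parent = {root: None}
    depth = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        if depth[u] == k:        # do not grow the tree past height k
            continue
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                parent[v] = u
                q.append(v)
    return parent

# A small process graph; node 5 is more than k=2 hops from the initiator.
adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2, 5], 5: [4]}
area = bfst_area(adj, 1, 2)
print(sorted(area))  # [1, 2, 3, 4]
```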

5.
We consider the problem of determining the maximum and minimum elements of a set X = {x_1, ..., x_n}, drawn from some finite universe U of real numbers, using only unary predicates of the inputs. It is shown that Θ(n + log |U|) unary predicate evaluations are necessary and sufficient in the worst case. The results are applied to (i) the problem of determining approximate extrema of a set of real numbers in the same model, and (ii) the multiparty broadcast communication complexity of determining the extrema of an arbitrary set of numbers held by distinct processors.
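In this model, a unary predicate inspects a single input, e.g. a threshold test x_i ≥ t. A naive strategy, n such evaluations per binary-search step over the universe, already determines the maximum, though at O(n log |U|) cost rather than the paper's Θ(n + log |U|) bound; the sketch below only illustrates the model, not the paper's algorithm:

```python
def max_via_unary(xs, lo, hi):
    """Find max(xs) over the integer universe [lo, hi] using only unary
    threshold predicates "x >= mid". Costs O(n log |U|) evaluations;
    the paper's algorithm is cheaper."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if any(x >= mid for x in xs):   # up to n unary evaluations per round
            lo = mid                     # some element reaches mid
        else:
            hi = mid - 1                 # no element reaches mid
    return lo

print(max_via_unary([3, 7, 2], 0, 15))  # 7
```

The minimum is symmetric, binary-searching with the unary predicate "x ≤ mid".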

6.
Compositions of partial predicates of the highest abstraction level, which form the basis of infinitary propositional logics, are defined and investigated. The compositions are completely described in terms of special algebras called Kleene algebras, and a complete system of identities is constructed for such compositions. Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 3–19, March–April, 2000.

7.
The least general generalization (LGG) of strings may cause an over-generalization in the generalization process of the clauses of predicates with string arguments. We propose a specific generalization (SG) for strings to reduce over-generalization. SGs of strings are used in the generalization of a set of strings representing the arguments of a set of positive examples of a predicate with string arguments. To create an SG of two strings, first a unique match sequence between these strings is found. A unique match sequence of two strings consists of similarities and differences representing the similar parts and the differing parts of those strings. The differences in the unique match sequence are then replaced to create an SG of the strings. In the generalization process, a coverage algorithm based on SGs of strings, or learning heuristics based on match sequences, is used. Ilyas Cicekli received a Ph.D. in computer science from Syracuse University in 1991. He is currently a professor in the Department of Computer Engineering at Bilkent University. From 2001 to 2003, he was a visiting faculty member at the University of Central Florida. His current research interests include example-based machine translation, machine learning, natural language processing, and inductive logic programming. Nihan Kesim Cicekli is an Associate Professor in the Department of Computer Engineering at the Middle East Technical University (METU). She graduated in computer engineering from the Middle East Technical University in 1986. She received the MS degree in computer engineering from Bilkent University in 1988, and the PhD degree in computer science from Imperial College in 1993. She was a visiting faculty member at the University of Central Florida from 2001 to 2003. Her current research interests include multimedia databases, the semantic web, web services, data mining, and machine learning.
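A rough approximation of the idea keeps the similar parts and replaces each differing part with a fresh variable. Note the assumption here: difflib's matching blocks stand in for the paper's unique match sequence, which is defined differently.

```python
from difflib import SequenceMatcher

def specific_generalization(s, t):
    """Keep the similar parts of s and t; replace each differing part
    with a fresh variable X0, X1, ... (difflib stands in for the
    paper's unique match sequence)."""
    sm = SequenceMatcher(None, s, t)
    out, var = [], 0
    i = j = 0
    for a, b, n in sm.get_matching_blocks():   # ends with a zero-length block
        if i < a or j < b:                     # a difference precedes this block
            out.append(f"X{var}")
            var += 1
        out.append(s[a:a + n])
        i, j = a + n, b + n
    return "".join(out)

print(specific_generalization("walked", "talked"))  # X0alked
print(specific_generalization("ab1cd", "ab22cd"))   # abX0cd
```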

8.
Composition nominative logics of quasiary predicates are studied. The spectrum of composition nominative logics is considered and various classes of first-order logics of quasiary predicates are described. Sequent calculi are constructed for the general case of logics of quantifier-level quasiary predicates, and soundness and completeness theorems are proved.

9.
10.
Wordnets, which are repositories of lexical semantic knowledge containing semantically linked synsets and lexically linked words, are indispensable for work on computational linguistics and natural language processing. While building wordnets for Hindi and Marathi, two major Indo-European languages, we observed that the verb hierarchy in the Princeton WordNet was rather shallow. We set about constructing a verb knowledge base for Hindi, which arranges Hindi verbs in a hierarchy under the is-a (hypernymy) relation. We realized that there are unique Indian-language phenomena that bear upon the choice between lexicalization and syntactic derivation. One such example is the occurrence of conjunct and compound verbs (called complex predicates), which are found in all Indian languages. This paper presents our experience in the construction of lexical knowledge bases for Indian languages, with special attention to Hindi. The question of storing versus deriving complex predicates has been dealt with both linguistically and computationally. We have constructed empirical tests to decide whether a combination of two words, the second of which is a verb, is a complex predicate or not. Such tests provide a principled way of deciding the status of complex predicates in Indian-language wordnets.

11.
An important problem in pervasive environments is detecting predicates on sensed variables in an asynchronous distributed setting to determine context and to respond. We do not assume the availability of synchronized physical clocks because they may not be available or may be too expensive for predicate detection in such environments with a (relatively) low event occurrence rate. We address the problem of detecting each occurrence of a global predicate, at the earliest possible instant, by proposing a suite of three on-line middleware protocols having varying degrees of accuracy. We analyze the degree of accuracy for the proposed protocols. The extent of false negatives and false positives is determined by the run-time message processing latencies.

12.
Production Control Systems (PCS) belong to the class of multilevel hierarchical complex systems. This article presents a four-level decomposition of PCS adapted to decision aid, and a method for their analysis. The aim of the analysis is to determine decision centers in order to build decision aid systems (DAS) adapted to the system controlled by the decision makers. The design of such DAS must use decision-aid procedures and artificial intelligence techniques; in particular, this consists in formalizing the decision-aid problem by using predicates, such procedures making the use of a computer possible.

13.
Highly frequent and highly polysemous verbs, such as give, take, and make, pose a challenge to automatic lexical acquisition methods. These verbs widely participate in multiword predicates (such as light verb constructions, or LVCs), in which they contribute a broad range of figurative meanings that must be recognized. Here we focus on two properties that are key to the computational treatment of LVCs. First, we consider the degree of figurativeness of the semantic contribution of such a verb to the various LVCs it participates in. Second, we explore the patterns of acceptability of LVCs, and their productivity over semantically related combinations. To assess these properties, we develop statistical measures of figurativeness and acceptability that draw on linguistic properties of LVCs. We demonstrate that these corpus-based measures correlate well with human judgments of the relevant property. We also use the acceptability measure to estimate the degree to which a semantic class of nouns can productively form LVCs with a given verb. The linguistically-motivated measures outperform a standard measure for capturing the strength of collocation of these multiword expressions.
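As a simple baseline for the kind of collocation-strength statistic the abstract mentions, pointwise mutual information over corpus counts can be computed as follows. The paper's own measures are linguistically richer, and the counts here are made up for illustration:

```python
from math import log2

def pmi(count_vn, count_v, count_n, total):
    """Pointwise mutual information of a verb-noun pair: how much more
    often they co-occur than independence would predict (in bits)."""
    p_vn = count_vn / total
    p_v, p_n = count_v / total, count_n / total
    return log2(p_vn / (p_v * p_n))

# Made-up counts: the pair co-occurs 50 times among 10,000 observed pairs,
# with each word occurring 100 times overall.
print(round(pmi(50, 100, 100, 10_000), 2))  # 5.64
```

A strongly collocated light verb construction (e.g. "give a speech") would score well above an unrelated verb-noun pair, whose PMI is near zero.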

14.
This paper extends Reiter’s closed world assumption to cases where the assumption is applied in a precedence order between predicates. The extended assumptions are: the partial closed world assumption, the hierarchical closed world assumption, and the stepwise closed world assumption. The paper also defines an extension of Horn formulas and shows several consistency results about the theory obtained from the extended Horn formulas by applying the proposed assumptions. In particular, the paper shows that both the hierarchical closed world assumption and the stepwise closed world assumption characterize the perfect model of stratified programs.

15.
User-defined topological predicates in database systems   (Cited by 1; self-citations: 0; others: 1)
Current database systems can store not only standard data, like integer, string, and real values, but also spatial data, like points, lines, and regions. The importance of topological relationships between spatial objects was recognized a long time ago. Using the well-known 9-intersection model for describing such relationships, a large number of different topological relationships can be distinguished. For the query language of a database system, it is not desirable to have such a large number of topological predicates; in particular, the query language should not be extended by a large number of predicate names. It is desirable instead to build new relationships from existing ones, for example to coarsen the granularity. This paper describes how a database system user can define and use her own topological predicates. We show algorithms for computing such predicates in an efficient way. Last, we compare these general versions with specialized implementations of topological predicates.
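A sketch of the idea, assuming a 9-intersection matrix encoded as a 3x3 Boolean matrix over (interior, boundary, exterior): base predicates test matrix entries, and a user-defined coarser predicate is just a Boolean combination of existing ones. The particular entry patterns below are illustrative, not the paper's exact definitions.

```python
# 9-intersection matrix m[i][j]: True iff part i of object A intersects
# part j of object B, with parts ordered (interior, boundary, exterior).

def disjoint(m):
    """Interiors and boundaries of A and B share nothing."""
    return not (m[0][0] or m[0][1] or m[1][0] or m[1][1])

def meet(m):
    """Interiors do not intersect, but some boundary contact exists."""
    return not m[0][0] and (m[0][1] or m[1][0] or m[1][1])

# A user-defined, coarser predicate built from existing ones:
def connected(m):
    return not disjoint(m)

m_meet = [[False, False, True],
          [False, True,  True],
          [True,  True,  True]]
print(meet(m_meet), connected(m_meet))  # True True
```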

16.
17.
This paper presents an on-line distributed algorithm for detecting Definitely(φ) for the class of conjunctive global predicates. The only previously known algorithm for detecting Definitely(φ) uses a centralized approach. A method for decentralizing that algorithm was also given, but the work load is not fairly distributed and the method uses a hierarchical structure. The centralized approach has a time, space, and total message complexity of O(n²m), where n is the number of processes and m is the maximum number of messages sent by any process. The proposed on-line distributed algorithm uses the concept of intervals rather than events; assuming p is the maximum number of intervals at any process, the worst-case time complexity across all the processes is O(min(pn², mn²)), and the worst-case space overhead across all the processes is min(2mn², 2pn²).

18.
Research and Progress in Streamline Visualization Techniques   (Cited by 6; self-citations: 0; others: 6)
Streamline visualization is a widely used tool for visualizing CFD vector fields. This paper briefly reviews the theoretical and practical significance of streamline visualization and the two classes of streamline-generation techniques. Focusing on the characteristics of flow-field data and of streamline representation methods, it then surveys a number of problems common to streamline visualization research, as well as recent research hotspots and trends in streamline visualization techniques.
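Streamline generation by numerical integration of the vector field, one of the basic techniques such surveys cover, can be sketched with a classical fourth-order Runge-Kutta tracer. The step size, step count, and the analytic test field below are arbitrary choices for illustration:

```python
def trace_streamline(v, seed, h=0.1, steps=100):
    """Trace a streamline through the vector field v(x, y) -> (vx, vy)
    by classical 4th-order Runge-Kutta integration from a seed point."""
    x, y = seed
    pts = [seed]
    for _ in range(steps):
        k1 = v(x, y)
        k2 = v(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = v(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = v(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        pts.append((x, y))
    return pts

# Analytic test field: rigid rotation, whose streamlines are circles.
pts = trace_streamline(lambda x, y: (-y, x), (1.0, 0.0))
```

On this field the traced points should stay very close to the unit circle, which makes a convenient accuracy check for the integrator.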

19.
Many recent axiomatic definitions for structured programming languages include control predicates, at(S), in(S), and after(S), which are an abstraction of location counters. The usual axioms identify control locations so as to imply that no time (i.e., no state transition) is needed to pass from the end of one statement to the next, and in particular from the end of a loop body back to the test at the head of the loop. Here, an axiomatic framework for control predicates is examined. It is shown that if all the axioms are to be maintained with common representation mappings, there are difficult new requirements which need to be satisfied by an implementation for fair concurrent models of computation. Several approaches to resolving the difficulty are considered; in particular, it is suggested to replace some axioms of the form P ⊃ Q by P ⊃ eventually(Q), where P and Q are control predicates, thereby separating control states previously identified. Note: A talk based on this paper was presented at the Colloquium on Temporal Logic and Specification, Altrincham, Cheshire, April 1987. C.R. Categories: D.3.1 [Programming Languages] Formal definitions and theory: semantics; D.3.3 [Programming Languages] Language constructs: control structures; F.3.1 [Logics and Meanings of Programs] Specifying and verifying and reasoning about programs.

20.
Research on a Compact Index Model Based on the Binary Inter-related Successor Tree   (Cited by 1; self-citations: 0; others: 1)
The key issues in full-text retrieval are the index model and the algorithms for index construction and search. Based on the binary inter-related successor tree model, this paper proposes a practical compact successor-tree index model with ordered successor nodes (SIRST), and gives the index construction and search algorithms under this model. A comparison with the widely used inverted file (IF) model shows that SIRST's retrieval efficiency is far higher than that of IF; moreover, as the text collection grows larger, SIRST's advantage in index-construction efficiency becomes increasingly apparent.
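For reference, the inverted-file (IF) baseline that SIRST is compared against can be sketched in a few lines; the SIRST structure itself is not reproduced here.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Inverted file: term -> postings list of document ids
    (in increasing order, since documents are scanned in order)."""
    index = defaultdict(list)
    for doc_id, text in enumerate(docs):
        for term in set(text.split()):   # each doc contributes its id once
            index[term].append(doc_id)
    return index

def search(index, term):
    """Return the postings list for a term, or [] if absent."""
    return index.get(term, [])

docs = ["predicate detection", "inverted file index", "predicate index model"]
idx = build_inverted_index(docs)
print(search(idx, "predicate"))  # [0, 2]
```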


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号