Similar Documents
 20 similar documents found (search time: 31 ms)
1.
This article describes the design decisions taken in implementing a processing model for understanding goal-oriented discourse. The model analyzes a restricted form of discourse known as arguments. The two main contributions are: (i) an integrated processing algorithm, which combines basic processing constraints with an interpretation of clue words (words and phrases that serve to indicate the structure of the discourse); (ii) a working version of the “evidence oracle,” which establishes connections between utterances in the discourse. This oracle determines whether an “evidence” relation is intended between two utterances, and builds a model of the speaker based on the evidence relations found. The article thus emphasizes the general insights gained from the implementation exercise, both for the specification of a discourse analysis model and for the general problem of recognizing a speaker's intentions and plans.
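A minimal sketch of the kind of check such an "evidence oracle" might make, not the article's algorithm: clue words flag an intended evidence relation between an utterance and the preceding claim. The clue-word list and the heuristic are illustrative assumptions.

```python
# Minimal sketch (not the article's algorithm): use clue words/phrases to guess
# whether an "evidence" relation is intended between an utterance and the
# preceding claim. The clue-word list and heuristic are illustrative assumptions.

EVIDENCE_CLUES = ("because", "since", "after all", "for example")   # assumed list

def intended_evidence(prev_utterance: str, utterance: str) -> bool:
    """Return True if `utterance` looks like evidence for `prev_utterance`."""
    text = utterance.lower()
    return any(text.startswith(clue) or f" {clue} " in text for clue in EVIDENCE_CLUES)

def build_speaker_model(utterances: list[str]) -> list[tuple[int, int]]:
    """Collect (evidence, claim) index pairs found in a sequence of utterances."""
    links = []
    for i in range(1, len(utterances)):
        if intended_evidence(utterances[i - 1], utterances[i]):
            links.append((i, i - 1))
    return links

if __name__ == "__main__":
    argument = ["We should leave now.", "Because the last train departs at ten."]
    print(build_speaker_model(argument))   # [(1, 0)]
```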

2.
A logical framework is presented for defining the semantics of programs that satisfy Hoare postulates. Two families of logical systems are given: modal systems and relational systems. In the modal systems, the semantics of Hoare-style programming languages is given in terms of relations and sets; in the relational systems, in terms of relations only. A proof theory for the given logics is presented.
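An illustrative sketch of the relational view, not the paper's formal systems: commands are interpreted as binary relations on a finite state space, and a Hoare triple is checked semantically. The state space, commands, and assertions are toy assumptions.

```python
# Illustrative sketch: commands as relations on a finite state space, with a
# semantic check of a Hoare triple {P} C {Q}. All definitions here are toy assumptions.

States = range(4)                       # toy state space: an integer variable x in 0..3

def rel(f):
    """Lift a (possibly nondeterministic) state-transformer to a relation."""
    return {(s, t) for s in States for t in f(s)}

incr = rel(lambda s: [min(s + 1, 3)])   # command: x := min(x+1, 3)
reset = rel(lambda s: [0])              # command: x := 0

def compose(r1, r2):
    """Relational composition: semantics of sequential composition C1; C2."""
    return {(s, u) for (s, t) in r1 for (t2, u) in r2 if t == t2}

def hoare_valid(pre, command_rel, post):
    """{pre} C {post} holds iff every output of C from a pre-state satisfies post."""
    return all(post(t) for (s, t) in command_rel if pre(s))

prog = compose(reset, incr)             # x := 0 ; x := min(x+1, 3)
print(hoare_valid(lambda s: True, prog, lambda t: t == 1))   # True
```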

3.
This paper studies the multifunctionality of dialogue utterances, i.e. the phenomenon that utterances in dialogue often have more than one communicative function. It is argued that this phenomenon can be explained by analyzing participation in dialogue as involving the performance of several types of activity in parallel, relating to different dimensions of communication. The multifunctionality of dialogue utterances is studied by (1) redefining the notion of ‘utterance’ in a rigorous manner (calling the revised notion ‘functional segment’), and (2) empirically investigating the multifunctionality of functional segments in a corpus of dialogues annotated with a rich, multidimensional annotation schema. It is shown that, when communicative functions are assigned to functional segments, thereby eliminating every form of segmentation-related multifunctionality, an average multifunctionality of between 1.8 and 3.6 is found, depending on what is considered to count as a segment's communicative function. Moreover, a good understanding of the nature of the relations among the various multiple functions that a segment may have, and of the relations between functional segments and other units in dialogue segmentation, opens the way for defining a multidimensional computational update semantics for dialogue interpretation.
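A minimal sketch of the kind of corpus measurement described: given functional segments annotated with communicative functions in several dimensions, compute the average number of functions per segment. The tiny example corpus is an assumption.

```python
# Minimal sketch of the measurement described: average number of communicative
# functions per functional segment in an annotated corpus (the corpus is assumed).

annotated_segments = [                       # segment -> assigned communicative functions
    {"Task": "Answer", "AutoFeedback": "Positive"},
    {"Task": "Inform"},
    {"Task": "Request", "TurnManagement": "TurnKeep", "TimeManagement": "Stalling"},
]

def average_multifunctionality(segments):
    """Mean number of communicative functions assigned per functional segment."""
    return sum(len(funcs) for funcs in segments) / len(segments)

print(round(average_multifunctionality(annotated_segments), 2))   # 2.0
```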

4.
Algorithms on abelian groups represented by an explicit set of generators are presented. An algorithm for computing a set of defining relations and an algorithm for computing a complete basis of an abelian group are given, as is an algorithm for computing a basis for the (abelian) intersection of two abelian groups. All algorithms have worst-case time complexity polynomial in the order of the group.
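An illustrative sketch, not the paper's algorithms: enumerating the abelian group generated by explicit generators by closure under the group operation, which is polynomial in the order of the generated group. The ambient group Z_6 x Z_4 and the generators are assumptions.

```python
# Illustrative sketch: enumerate the abelian group generated by explicit generators
# inside Z_6 x Z_4 (componentwise addition) by closure under the group operation.

MOD = (6, 4)                                        # assumed ambient group Z_6 x Z_4

def add(a, b):
    return tuple((x + y) % m for x, y, m in zip(a, b, MOD))

def generated_subgroup(generators):
    """Closure of the generators (and identity) under the group operation."""
    elements = {(0, 0)}
    frontier = [(0, 0)]
    while frontier:
        g = frontier.pop()
        for gen in generators:
            h = add(g, gen)
            if h not in elements:
                elements.add(h)
                frontier.append(h)
    return elements

G = generated_subgroup([(2, 0), (0, 2)])
print(len(G))   # 6: the subgroup {0,2,4} x {0,2}
```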

5.
Incomplete Information Tables and Rough Classification
Rough set theory, based on the original definition of the indiscernibility relation, is not well suited to analysing incomplete information tables in which some attribute values are unknown. In this paper we distinguish two different semantics for incomplete information: the "missing value" semantics and the "absent value" semantics. The previously known approaches, e.g. those based on tolerance relations, deal with the missing value case. We introduce two generalisations of rough set theory to handle these situations. The first generalisation introduces a non-symmetric similarity relation in order to formalise the idea of absent value semantics. The second proposal is based on the use of valued tolerance relations. A logical analysis and computational experiments show that the valued tolerance approach yields more informative approximations and decision rules than the approach based on the simple tolerance relation.
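A toy sketch of the "missing value" semantics via a simple tolerance relation: two objects are possibly indiscernible when they agree on every attribute where both values are known ('*' marks an unknown value). The table and target concept are illustrative assumptions.

```python
# Toy sketch of the "missing value" semantics: tolerance-based lower and upper
# approximations of a concept. The table and target set are assumptions.

table = {                                   # object -> attribute values; '*' = unknown
    "o1": ("high", "yes"),
    "o2": ("high", "*"),
    "o3": ("high", "no"),
}
target = {"o1", "o2"}                       # the concept to approximate

def tolerant(x, y):
    """Possibly indiscernible: agree wherever both values are known."""
    return all(a == b or a == "*" or b == "*" for a, b in zip(table[x], table[y]))

def tolerance_class(x):
    return {y for y in table if tolerant(x, y)}

lower = {x for x in table if tolerance_class(x) <= target}
upper = {x for x in table if tolerance_class(x) & target}
print(lower)   # {'o1'}
print(upper)   # {'o1', 'o2', 'o3'}
```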

6.
As probabilistic data management becomes a major research focus and keyword search becomes an increasingly popular query paradigm, it is natural to ask how to support keyword queries over probabilistic XML data. For keyword queries on deterministic XML documents, ELCA (Exclusive Lowest Common Ancestor) semantics allows more relevant fragments, rooted at the ELCAs, to appear as results, and is more widely used than other keyword query result semantics (such as SLCA). In this paper, we investigate how to evaluate ELCA results for keyword queries on probabilistic XML documents. After defining probabilistic ELCA semantics in terms of possible world semantics, we propose an approach to compute ELCA probabilities without generating possible worlds. We then develop an efficient stack-based algorithm that can find all probabilistic ELCA results and their ELCA probabilities for a given keyword query on a probabilistic XML document. Finally, we experimentally evaluate the proposed ELCA algorithm and compare it with its SLCA counterpart in terms of result probability, time and space efficiency, and scalability.
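A simplified sketch for intuition only: computing SLCA nodes (a stricter relative of ELCA) for a keyword query on a small deterministic XML tree. This is not the paper's stack-based probabilistic ELCA algorithm; the document is an assumption.

```python
# Simplified sketch: SLCA nodes for a keyword query on a small deterministic XML
# tree (not the paper's probabilistic ELCA algorithm; the document is assumed).

import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<bib><book><author>Smith</author><title>XML</title></book>"
    "<book><author>Jones</author><title>Databases</title></book></bib>"
)

def contains_all(node, keywords):
    text = " ".join(node.itertext())
    return all(k in text for k in keywords)

def slca(root, keywords):
    """Nodes whose subtree contains all keywords but no child's subtree does."""
    result = []
    for node in root.iter():
        if contains_all(node, keywords) and not any(
            contains_all(child, keywords) for child in node
        ):
            result.append(node.tag)
    return result

print(slca(doc, ["Smith", "XML"]))   # ['book'] - the first book element
```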

7.
In the field of complex problem optimization with metaheuristics, semantics has been used for modeling different aspects, such as problem characterization, parameters, decision-maker's preferences, or algorithms. However, there is a lack of approaches in which ontologies are applied directly within the optimization process with the aim of enhancing it by allowing the systematic incorporation of additional domain knowledge. This is due to the high level of abstraction of ontologies, which makes them difficult to map onto the code implementing the problems and/or the specific operators of the metaheuristics. In this paper, we present a strategy to inject domain knowledge (by reusing existing ontologies or creating a new one) into a problem implementation that will be optimized using a metaheuristic. This approach, based on accepted ontologies, enables building and exploiting complex computing systems in optimization problems. We describe a methodology to automatically induce user choices (taken from the ontology) into the problem implementations provided by the jMetal optimization framework. To illustrate our proposal, we focus on the urban domain. Concretely, we start by defining an ontology representing the domain semantics of a city (e.g., buildings, bridges, points of interest, routes) that allows a decision maker to define a-priori preferences in a standard, reusable, and formal (logic-based) way. We validate our proposal with several instances of two use cases, consisting of bi-objective formulations of the Traveling Salesman Problem (TSP) and the Radio Network Design problem (RND), both in the context of an urban scenario. The results of the experiments show how the semantic specification of domain constraints is effectively mapped into feasible solutions of the tackled TSP and RND scenarios. This proposal represents a step towards the automatic modeling and adaptation of optimization problems guided by semantics, in which the annotations of a human expert can now be taken into account during the optimization process.
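A rough sketch of the general idea only, not the authors' jMetal/ontology tooling: domain knowledge about city roads, here a plain dict standing in for ontology-derived preferences, penalizes TSP tours that use a road the decision maker wants avoided.

```python
# Rough sketch: knowledge-based penalties injected into a TSP cost function.
# Distances, the avoided road, and the penalty are illustrative assumptions.

import itertools

distance = {("A", "B"): 2, ("A", "C"): 2, ("A", "D"): 3,
            ("B", "C"): 3, ("B", "D"): 2, ("C", "D"): 2}     # assumed symmetric distances
avoid_legs = {frozenset({"A", "C"})}   # stand-in for an ontology fact, e.g. a bridge to avoid
PENALTY = 10

def dist(x, y):
    return distance.get((x, y), distance.get((y, x)))

def tour_cost(tour):
    """Classic tour length plus a penalty for every avoided road segment used."""
    legs = list(zip(tour, tour[1:] + tour[:1]))
    base = sum(dist(x, y) for x, y in legs)
    penalty = PENALTY * sum(frozenset(leg) in avoid_legs for leg in legs)
    return base + penalty

best = min(itertools.permutations("ABCD"), key=tour_cost)
print("".join(best), tour_cost(best))   # ABCD 10 - the optimum avoids the A-C road
```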

8.
Declarative semantics gives the meaning of a logic program in terms of properties, while procedural semantics gives the meaning in terms of the execution or evaluation of the program. From the database point of view, the procedural semantics of a program is equally important. This paper focuses on the bottom-up evaluation of the well-founded model (WFM) semantics of datalog¬ programs. To compute the WFM, the stability transformation is first revisited, and a new operator Op and its fixpoint are defined. Based on this, a fixpoint semantics, called the oscillating fixpoint model semantics, is defined. It is then shown that for any datalog¬ program the oscillating fixpoint model is identical to its WFM, so the oscillating fixpoint model can be viewed as an alternative (constructive) definition of the WFM. The underlying operation (or transformation) for reaching the oscillating fixpoint offers a route to bottom-up evaluation. For the sake of computational feasibility, strongly range-restricted programs are considered, and an algorithm for computing the oscillating fixpoint is described.
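A minimal sketch of an alternating ("oscillating") fixpoint computation of the well-founded model for a tiny ground program with negation. It illustrates the general construction via the Gelfond-Lifschitz reduct, not the paper's Op operator; the example program is an assumption.

```python
# Sketch: oscillating/alternating fixpoint computation of the well-founded model
# for a tiny ground program with negation (the program is assumed).

# Rules: head <- (positive body atoms, negated body atoms)
program = [
    ("p", ([], ["q"])),      # p <- not q
    ("q", ([], ["p"])),      # q <- not p
    ("r", ([], ["s"])),      # r <- not s
    ("t", (["r"], [])),      # t <- r
]

def least_model_of_reduct(interpretation):
    """Least model of the Gelfond-Lifschitz reduct of `program` w.r.t. `interpretation`."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for head, (pos, neg) in program:
            if head not in facts and set(pos) <= facts and not (set(neg) & interpretation):
                facts.add(head)
                changed = True
    return facts

# Oscillating iteration: even iterates grow towards the true atoms,
# odd iterates shrink towards the atoms that are not false.
seq = [set()]
while len(seq) < 3 or seq[-1] != seq[-3]:
    seq.append(least_model_of_reduct(seq[-1]))

true_atoms = seq[-1] if len(seq) % 2 == 1 else seq[-2]
possible = seq[-2] if len(seq) % 2 == 1 else seq[-1]
print("true:", true_atoms, "undefined:", possible - true_atoms)
# true: {'r', 't'}   undefined: {'p', 'q'}
```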

9.
An entity-relationship-oriented model that includes the notion of class, together with different types of assertions on classes, is presented. The assertions are used to model IS-A and disjointness relations both between entities and between relationships, part-of relations between entities and relationships, mandatory participation of an entity in a relationship, and interdependencies between the projections of relationships. The semantics of the model is defined in terms of first-order logic, and a sound and complete inference algorithm for the model is presented. The algorithm is shown to have polynomial time complexity in the case where interdependencies on the projections of relationships are not taken into account. It is suggested that the model and the associated inference capabilities provide a suitable formal basis for designing an effective environment supporting conceptual modeling.
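A small sketch of the flavour of inference involved, not the paper's algorithm: propagate IS-A assertions by transitive closure and flag classes that end up below two classes declared disjoint. The tiny schema is an assumption.

```python
# Sketch: transitive IS-A closure plus a disjointness consistency check
# (the schema and assertions are assumed).

isa = {("Student", "Person"), ("Employee", "Person"),
       ("WorkingStudent", "Student"), ("WorkingStudent", "Employee")}
disjoint = {frozenset({"Student", "Robot"})}

def ancestors(cls):
    """All classes reachable from cls via IS-A edges, including cls itself."""
    result, frontier = set(), [cls]
    while frontier:
        c = frontier.pop()
        for sub, sup in isa:
            if sub == c and sup not in result:
                result.add(sup)
                frontier.append(sup)
    return result | {cls}

def inconsistent(cls):
    """True if cls is (transitively) below two classes asserted to be disjoint."""
    ups = ancestors(cls)
    return any(pair <= ups for pair in disjoint)

print(ancestors("WorkingStudent"))        # includes Student, Employee, Person
print(inconsistent("WorkingStudent"))     # False - no disjointness is violated
```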

10.
This paper describes an implemented computational model that generates intonation contours for dialogue systems. We concentrate on the relationship between pragmatics and two aspects of intonation: pitch range and pitch accent placement. Pitch range is computed based on the position of an utterance in the discourse structure: utterances that introduce a new topic have an expanded register compared to utterances that continue a topic. Pitch accent placement is based on two pragmatic factors: cognitive status (what the speaker assumes the hearer is attending to) and informativeness (what the speaker assumes to be the interesting or informative component of a phrase). This work suggests that even simple models of discourse topic structure, cognitive status, and informativeness will lead to improved register determination and pitch accent placement in practical conversational systems.
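A toy sketch of the two decisions described, not the authors' implemented model: expand the pitch range for topic-initial utterances, and place pitch accents on words treated as new and informative. The word lists and F0 values are assumptions.

```python
# Toy sketch: pitch range from topic structure, pitch accents from givenness and
# informativeness. Word lists and Hz values are illustrative assumptions.

GIVEN = {"the", "a", "it", "is", "was", "to"}            # assumed 'given'/function words

def pitch_range(starts_new_topic: bool) -> tuple[int, int]:
    """Return an (expanded) F0 range in Hz for topic-initial utterances."""
    return (80, 220) if starts_new_topic else (80, 160)

def place_accents(words, informative):
    """Accent words that are not given and that the speaker treats as informative."""
    return [(w, w.lower() not in GIVEN and w.lower() in informative) for w in words]

print(pitch_range(starts_new_topic=True))
print(place_accents("The train leaves at noon".split(), {"train", "noon"}))
```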

11.
12.
We develop an analysis of discourse anaphora—the relationship between a pronoun and an antecedent earlier in the discourse—using games of partial information. The analysis is extended to include information from a variety of different sources, including lexical semantics, contrastive stress, grammatical relations, and decision theoretic aspects of the context.
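A toy expected-utility calculation in the spirit of a game of partial information, not the authors' formal games: the hearer picks the antecedent for a pronoun that maximizes expected payoff given prior beliefs. The probabilities and payoffs are assumptions.

```python
# Toy sketch: pronoun resolution as expected-utility maximization under
# uncertainty about the speaker's intended referent (numbers are assumed).

priors = {"John": 0.7, "Bill": 0.3}          # hearer's beliefs about the intended referent
payoff = {("John", "John"): 1.0, ("John", "Bill"): -1.0,
          ("Bill", "Bill"): 1.0, ("Bill", "John"): -1.0}

def expected_utility(interpretation):
    """Expected payoff of resolving the pronoun to `interpretation`."""
    return sum(p * payoff[(intended, interpretation)] for intended, p in priors.items())

best = max(priors, key=expected_utility)
print(best, expected_utility(best))          # John 0.4
```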

13.
14.
The discourse analysis task, which focuses on understanding the semantics of long text spans, has received increasing attention in recent years. As a critical component of discourse analysis, discourse relation recognition aims to identify the rhetorical relations between adjacent discourse units (e.g., clauses, sentences, and sentence groups), called arguments, in a document. Previous works focused on capturing the semantic interactions between arguments to recognize their discourse relations, ignoring important textual information in the surrounding contexts. In many cases, however, capturing semantic interactions between the texts of the two arguments alone is not enough to identify their rhetorical relations; additional contextual clues must be mined. In this paper, we propose a method to convert the RST-style discourse trees in the training set into dependency-based trees and train a contextual evidence selector on these transformed structures. In this way, the selector learns to automatically pick critical textual information from the context (i.e., evidence) for the arguments to assist in discriminating their relations. We then encode each argument concatenated with its corresponding evidence to obtain enhanced argument representations. Finally, we combine the original and enhanced argument representations to recognize their relations. In addition, we introduce auxiliary tasks to guide the training of the evidence selector and strengthen its selection ability. Experimental results on the Chinese CDTB dataset show that our method outperforms several state-of-the-art baselines in both micro and macro F1 scores.
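A very rough sketch of the evidence-selection idea, not the paper's trained selector: score context sentences by lexical overlap with the two arguments and keep the best one as evidence to concatenate with them. The sentences and the "[SEP]" format are assumptions.

```python
# Rough sketch: pick one context sentence as "evidence" by lexical overlap and
# concatenate it with the arguments (data and input format are assumed).

import re

def tokens(s: str) -> set:
    return set(re.findall(r"[a-z]+", s.lower()))

def select_evidence(arg1: str, arg2: str, context: list[str]) -> str:
    """Pick the context sentence sharing the most vocabulary with the two arguments."""
    arg_vocab = tokens(arg1) | tokens(arg2)
    return max(context, key=lambda sentence: len(tokens(sentence) & arg_vocab))

arg1 = "The factory cut its output."
arg2 = "Prices rose sharply."
context = ["The weather was mild that week.",
           "Output at the factory had been falling for months."]
evidence = select_evidence(arg1, arg2, context)
print(f"{arg1} [SEP] {arg2} [SEP] {evidence}")   # enhanced input for a downstream encoder
```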

15.
Backward compatibility is the property that an old version of a library can safely be replaced by a new version without breaking existing clients. Formal reasoning about backward compatibility requires an adequate semantic model to compare the behavior of two library implementations. In the object-oriented setting with inheritance and callbacks, such a model must account for the complex interface between library implementations and clients. In this paper, we develop a fully abstract trace-based semantics for class libraries in object-oriented languages, in particular for Java-like sealed packages. Our approach enhances a standard operational semantics such that the change of control between the library and the client context is made explicit in terms of interaction labels. By using traces over these labels, we abstract from the data representation in the heap, support class hiding, and provide fully abstract package denotations. Soundness and completeness of the trace semantics is proven using specialized simulation relations on the enhanced operational semantics. The simulation relations also provide a proof method for reasoning about backward compatibility.
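A toy illustration of trace-based reasoning about compatibility, not the paper's fully abstract semantics: here library behaviour is abstracted to sets of interaction-label traces, and one simple criterion (the old version's traces remain available) is checked. Both the traces and the direction of the inclusion are illustrative assumptions.

```python
# Toy sketch: libraries abstracted to sets of interaction-label traces, with one
# simplified trace-inclusion check (traces and criterion are assumed).

old_traces = {("call put", "ret put", "call get", "ret get 1")}
new_traces = {("call put", "ret put", "call get", "ret get 1"),
              ("call put", "ret put", "call size", "ret size 1")}   # new feature added

def old_traces_preserved(old, new) -> bool:
    """Every interaction trace the old library offered is still offered."""
    return old <= new

print(old_traces_preserved(old_traces, new_traces))   # True
```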

16.
Strategies are a powerful mechanism to control rule application in rule-based systems. For instance, different transition relations can be defined and then combined by means of strategies, giving rise to an effective tool to define the semantics of programming languages. We have endowed the Maude MSOS Tool (MMT), an executable environment for modular structural operational semantics, with the possibility of defining strategies over its transition rules, by combining MMT with the Maude strategy language interpreter prototype. The combination was possible due to Maude's reflective capabilities. One possible use of MMT with strategies is to execute Ordered SOS specifications. We show how a particular form of strategy can be defined to represent an OSOS order and therefore execute, for instance, SOS specifications with negative premises. In this context, we also discuss how two known techniques for the representation of negative premises in OSOS become simplified in our setting.
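A sketch in plain Python, not Maude, of the idea of strategies controlling rule application: rules are partial rewrites, and combinators decide which rule may fire and in what order. The rules and the term language are assumptions.

```python
# Sketch: strategy combinators controlling rule application in a toy rewrite
# system (rules and terms are assumed; this is not Maude's strategy language).

def rule_double(term):            # x -> 2*x, applicable only to odd x
    return 2 * term if term % 2 == 1 else None

def rule_halve(term):             # x -> x // 2, applicable only to even x
    return term // 2 if term % 2 == 0 else None

def seq(*strategies):             # apply strategies one after another; fail if any fails
    def run(term):
        for s in strategies:
            term = s(term)
            if term is None:
                return None
        return term
    return run

def choice(*strategies):          # the first strategy that applies wins
    def run(term):
        for s in strategies:
            result = s(term)
            if result is not None:
                return result
        return None
    return run

strategy = seq(choice(rule_double, rule_halve), rule_halve)
print(strategy(3))                # 3 -> 6 (double) -> 3 (halve)
```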

17.
Data refinement in a state-based language such as Z is defined using a relational model in terms of the behaviour of abstract programs. Downward and upward simulation conditions form a sound and jointly complete methodology to verify relational data refinements, which can be checked on an event-by-event basis rather than per trace. In models of concurrency, refinement is often defined in terms of sets of observations, which can include the events a system is prepared to accept or refuse, or depend on explicit properties of states and transitions. By embedding such concurrent semantics into a relational framework, eventwise verification methods for such refinement relations can be derived. In this paper, we continue our program of deriving simulation conditions for process algebraic refinement by defining further embeddings into our relational model: traces, completed traces, failure traces and extension. We then extend our framework to include various notions of automata based refinement.
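A simplified sketch of an event-by-event simulation check on finite labelled transition systems, in the spirit of verifying downward simulation conditions, but not the paper's full relational framework. The two systems and the retrieve relation are assumptions.

```python
# Sketch: event-by-event simulation check between a concrete and an abstract
# labelled transition system (systems and retrieve relation are assumed).

abstract = {("a0", "coin", "a1"), ("a1", "coffee", "a0")}
concrete = {("c0", "coin", "c1"), ("c1", "coffee", "c0")}
retrieve = {("a0", "c0"), ("a1", "c1")}        # relation linking abstract and concrete states

def simulates(abs_trans, con_trans, relation) -> bool:
    """Every concrete step from a related state is matched by an abstract step."""
    for (a, c) in relation:
        for (c1, event, c2) in con_trans:
            if c1 != c:
                continue
            matched = any(a1 == a and e == event and (a2, c2) in relation
                          for (a1, e, a2) in abs_trans)
            if not matched:
                return False
    return True

print(simulates(abstract, concrete, retrieve))   # True
```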

18.
There are numerous methods of formally defining the semantics of computer languages. Each method has been designed to fulfil a different purpose. For example, some have been designed to make reasoning about languages as easy as possible; others have been designed to be accessible to a large audience and some have been designed to ease implementation of languages. Given two semantics definitions of a language written using two separate semantics definition methods, we must be able to show that the two are in fact equivalent. If we cannot do this then we either have an error in one of the semantics definitions, or more seriously we have a problem with the semantics definition methods themselves. Three methods of defining the semantics of computer languages have been considered, i.e. Denotational Semantics, Structural Operational Semantics and Action Semantics. An equivalence between these three is shown for a specific example language by first defining its semantics using each of the three definition methods. The proof of the equivalence is then constructed by selecting pairs of the semantics definitions and showing that they define the same language. A full version of this paper can be accessed via our web page http://www.cs.man.ac.uk/fmethods/facj.html
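A tiny sketch of the comparison being made: a denotational and a small-step operational semantics for one toy expression language, checked to agree on a sample program. The language and both definitions here are assumptions, not the paper's example language.

```python
# Sketch: a denotational and a structural operational semantics for a toy
# expression language, checked to agree (language and definitions are assumed).

# Expressions: ("num", n) | ("plus", e1, e2)

def denote(expr):
    """Denotational semantics: map an expression directly to the number it means."""
    if expr[0] == "num":
        return expr[1]
    return denote(expr[1]) + denote(expr[2])

def step(expr):
    """Structural operational semantics: one small reduction step."""
    if expr[0] == "plus":
        _, e1, e2 = expr
        if e1[0] != "num":
            return ("plus", step(e1), e2)
        if e2[0] != "num":
            return ("plus", e1, step(e2))
        return ("num", e1[1] + e2[1])
    raise ValueError("value - no step possible")

def evaluate(expr):
    while expr[0] != "num":
        expr = step(expr)
    return expr[1]

program = ("plus", ("num", 1), ("plus", ("num", 2), ("num", 3)))
assert denote(program) == evaluate(program) == 6    # the two definitions agree here
print(denote(program))
```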

19.
An Operational Semantics for Timed CSP
An operational semantics is defined for the language of timed CSP, in terms of two relations: an evolution relation, which describes when a process becomes another simply by allowing time to pass; and a timed transition relation, which describes when a process may become another by performing an action at a particular time. It is shown how the timed behaviours used as the basis for the denotational models of the language may be extracted from the operational semantics. Finally, the failures model for timed CSP is shown to be equivalent to may-testing and, thus, to trace congruence.
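A toy fragment in the spirit of the two relations described, not the paper's full semantics: `evolve` lets time pass, and `transitions` lists the timed actions on offer. The process syntax (nested tuples for STOP, event prefix, and WAIT d) is an assumption.

```python
# Toy sketch: an evolution relation and a timed transition relation for a tiny
# fragment of a timed process language (syntax and behaviour are assumed).

STOP = ("STOP",)

def evolve(process, t):
    """Evolution: the process reached by letting t time units pass."""
    if process[0] == "WAIT":
        _, d, cont = process
        return cont if t >= d else ("WAIT", d - t, cont)
    return process                                   # STOP and prefixes simply wait

def transitions(process, now):
    """Timed transitions: (event, time, resulting process) triples offered now."""
    if process[0] == "PREFIX":
        _, event, cont = process
        return [(event, now, cont)]
    return []

vending = ("WAIT", 2, ("PREFIX", "coin", STOP))
ready = evolve(vending, 2)                           # after 2 time units: coin -> STOP
print(transitions(ready, now=2))                     # [('coin', 2, ('STOP',))]
```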

20.
This paper discusses the semantics and usage of reification as applied to relations and tuples. The reification of a tuple is a proposition object possessing a case role for each domain attribute in the tuple. The reification of a set of fillers of a role is an object sometimes referred to as a ‘roleset’. In the course of defining reification mechanisms for the Loom knowledge representation system, we have unearthed several open issues that come into focus when considering equivalence relations between these kinds of reified objects. Another type of reification produces an individual that represents a view of another individual filling a particular role. We present a number of semantic variations of this reification operation, and argue that the unbridled application of such reification operators has the potential to overwhelm the representation mechanism. We suggest that a regimen that merges various similar but non-equivalent classes of individuals might be preferable to a system that insists on unique representations for each possible abstraction of an individual.
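A small sketch of tuple reification, not Loom itself: a tuple of a relation becomes a proposition object with one case-role slot per attribute, and a "roleset" collects the fillers of one role across propositions. The relation and data are assumptions.

```python
# Sketch: reifying relation tuples as proposition objects with case roles, and
# collecting a roleset (relation name and data are assumed; this is not Loom).

from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    """Reified tuple: the relation name plus a case role for each attribute."""
    relation: str
    roles: tuple            # ((role_name, filler), ...) - kept hashable for set use

def reify(relation: str, **fillers) -> Proposition:
    return Proposition(relation, tuple(sorted(fillers.items())))

def roleset(propositions, role):
    """Reify the set of fillers of one role across a set of propositions."""
    return {dict(p.roles)[role] for p in propositions if role in dict(p.roles)}

facts = {reify("teaches", teacher="Kim", course="Logic"),
         reify("teaches", teacher="Lee", course="Semantics")}
print(roleset(facts, "course"))     # {'Logic', 'Semantics'}
```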
