Similar documents
20 similar documents found (search time: 31 ms)
1.
2.
In this paper I introduce a formalism for natural language understanding based on a computational implementation of Discourse Representation Theory. The formalism covers a wide variety of semantic phenomena (including scope and lexical ambiguities, anaphora and presupposition), is computationally attractive, and has a genuine inference component. It combines a well-established linguistic formalism (DRT) with advanced techniques to deal with ambiguity (underspecification), and is innovative in the use of first-order theorem-proving techniques. The architecture of the formalism for natural language understanding that I advocate consists of three levels of processing: underspecification, resolution, and inference. Each of these levels has a distinct function and therefore employs a different kind of semantic representation. The mappings between these different representations define the interfaces between the levels. I show how underspecified semantic representations can be built in a compositional way (for a fragment of English grammar) using standard techniques borrowed from the λ-calculus, how inferences can be carried out on discourse representations using a translation to first-order logic, and how existing research prototypes (discourse-processing and spoken-dialogue systems) implement the formalism.
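As a toy illustration of the compositional construction described above, Python closures can stand in for λ-terms. This is a sketch of standard Montague-style compositional semantics, not the paper's actual implementation; all lexical entries below are hypothetical.

```python
# Lexical entries as functions; function application mirrors
# syntactic combination, which is the sense in which the
# representation is built "compositionally".
def proper_name(n):          # "Mia"    ->  \P. P(mia)
    return lambda P: P(n)

def intransitive_verb(v):    # "sleeps" ->  \x. v(x)
    return lambda x: f"{v}({x})"

def transitive_verb(v):      # "loves"  ->  \O.\x. O(\y. v(x,y))
    return lambda O: lambda x: O(lambda y: f"{v}({x},{y})")

mia     = proper_name("mia")
vincent = proper_name("vincent")
sleeps  = intransitive_verb("sleep")
loves   = transitive_verb("love")

s1 = mia(sleeps)             # "Mia sleeps"
s2 = vincent(loves(mia))     # "Vincent loves Mia"
print(s1)  # sleep(mia)
print(s2)  # love(vincent,mia)
```

The strings produced here play the role that underspecified or first-order representations play in the formalism itself.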

3.
In this paper we present a dynamic assignment language which extends the dynamic predicate logic of Groenendijk and Stokhof [1991: 39–100] with assignment and with generalized quantifiers. The use of this dynamic assignment language for natural language analysis, along the lines of o.c. and [Barwise, 1987: 1–29], is demonstrated by examples. We show that our representation language permits us to treat a wide variety of donkey sentences: conditionals with a donkey pronoun in their consequent, and quantified sentences with donkey pronouns anywhere in the scope of the quantifier. It is also demonstrated that our account does not suffer from the so-called proportion problem. Discussions about the correctness or incorrectness of proposals for dynamic interpretation of language have been hampered in the past by the difficulty of seeing through the ramifications of the dynamic semantic clauses (phrased in terms of input-output behaviour) in non-trivial cases. To remedy this, we supplement the dynamic semantics of our representation language with an axiom system in the style of Hoare. While the representation languages of Barwise and of Groenendijk and Stokhof were not axiomatized, the rules we propose form a deduction system for the dynamic assignment language which is proved correct and complete with respect to the semantics. Finally, we define the static meaning of a program of the dynamic assignment language as the weakest condition such that the program terminates successfully on all states satisfying that condition, and we show that our calculus gives a straightforward method for finding static meanings of the programs of the representation language.

4.
5.
KRISP is a representation system and set of interpretation protocols that is used in the Sparser natural language understanding system to embody the meaning of texts and their pragmatic contexts. It is based on a denotational notion of semantic interpretation, where the phrases of a text are directly projected onto a largely pre-existing set of individuals and categories in a model, rather than first going through a level of symbolic representation such as a logical form. It defines a small set of semantic object types, grounded in the lambda calculus; it supports the principle of uniqueness; and it supplies first-class objects to represent partially-saturated relationships. KRISP is being used to develop a core set of concepts for such things as names, amounts, time, and modality, which are part of a few larger models for domains including Who's News and joint ventures. It is targeted at the task of information extraction, emphasizing the need to relate entities mentioned in new texts to a large set of pre-defined entities and to those read about in earlier articles or in the same article.

6.
The “explicit-implicit” distinction
Much of traditional AI exemplifies the explicit representation paradigm, and during the late 1980s a heated debate arose between the classical and connectionist camps as to whether beliefs and rules receive an explicit or an implicit representation in human cognition. In a recent paper, Kirsh (1990) questions the coherence of the fundamental distinction underlying this debate. He argues that our basic intuitions concerning explicit and implicit representations are not only confused but inconsistent. Ultimately, Kirsh proposes a new formulation of the distinction, based upon the criterion of constant-time processing. The present paper examines Kirsh's claims. It is argued that Kirsh fails to demonstrate that our usage of "explicit" and "implicit" is seriously confused or inconsistent. Furthermore, it is argued that Kirsh's new formulation of the explicit-implicit distinction is excessively stringent, in that it banishes virtually all sentences of natural language from the realm of explicit representation. By contrast, the present paper proposes definitions for "explicit" and "implicit" which preserve most of our strong intuitions concerning straightforward uses of these terms. It is also argued that the distinction delineated here sustains the meaningfulness of the above-mentioned debate between classicists and connectionists.

7.
Delia Neuman is an assistant professor in the College of Library and Information Services at the University of Maryland, College Park. She has written several articles on the use of naturalistic inquiry to study electronic environments, including two published in Educational Technology Research and Development: Naturalistic Inquiry and Computer-Based Instruction: Rationale, Procedures and Potential (Spring 1989) and Learning Disabled Students' Interactions with Commercial Courseware: A Naturalistic Study (Spring 1991). She is currently involved in a study funded by the American Library Association to use naturalistic methods to investigate high school students' use of CD-ROM and online databases.

8.
In many language processing tasks, most of the sentences generally convey rather simple meanings. Moreover, these tasks have a limited semantic domain that can be properly covered with a simple lexicon and a restricted syntax. Nevertheless, casual users are by no means expected to comply with any kind of formal syntactic restrictions, due to the inherently spontaneous nature of human language. In this work, the use of error-correcting-based learning techniques is proposed to cope with the complex syntactic variability which is generally exhibited by natural language. In our approach, a complex task is modeled in terms of a basic finite-state model, F, and a stochastic error model, E. F should account for the basic (syntactic) structures underlying the task, which convey the meaning. E should account for general vocabulary variations, word disappearance, superfluous words, and so on. Each natural user sentence is thus considered a corrupted version (according to E) of some simple sentence of L(F). Adequate bootstrapping procedures are presented that incrementally improve the structure of F while estimating the probabilities for the operations of E. These techniques have been applied to a practical task of moderately high syntactic variability, and results that show the potential of the proposed approach are presented.
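The F-plus-E setup can be sketched as a shortest-path search. The toy Python below is an assumed simplification (a hypothetical four-word grammar, unit edit costs, and no probabilities, unlike the stochastic E of the paper): it explains an input sentence as the cheapest corruption of some sentence of L(F).

```python
import heapq

# Arcs of F: (state, word) -> next state.  A hypothetical tiny grammar
# accepting only "show me the files"; final state is 4.
ARCS = {(0, "show"): 1, (1, "me"): 2, (2, "the"): 3, (3, "files"): 4}
START, FINAL = 0, {4}

def correction_cost(words):
    """Cheapest cost of mapping `words` onto a sentence of L(F)."""
    frontier = [(0, START, 0)]            # (cost, FSA state, input position)
    best = {}
    while frontier:
        cost, q, i = heapq.heappop(frontier)
        if best.get((q, i), float("inf")) <= cost:
            continue                      # already settled more cheaply
        best[(q, i)] = cost
        if i == len(words) and q in FINAL:
            return cost                   # Dijkstra: first goal pop is optimal
        succ = []
        if i < len(words):
            succ.append((cost + 1, q, i + 1))            # superfluous word
        for (q0, w), q1 in ARCS.items():
            if q0 != q:
                continue
            succ.append((cost + 1, q1, i))               # missing word
            if i < len(words):
                step = 0 if w == words[i] else 1         # match / substitution
                succ.append((cost + step, q1, i + 1))
        for item in succ:
            heapq.heappush(frontier, item)
    return None

print(correction_cost("show me the files".split()))      # 0
print(correction_cost("please show me files".split()))   # 2
```

The second sentence is explained as one superfluous word ("please") plus one missing word ("the"); in the paper's stochastic setting, edit costs would instead come from the estimated probabilities of E's operations.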

9.
Collective entities and collective relations play an important role in natural language. In order to capture the full meaning of sentences like "The Beatles sing Yesterday", a knowledge representation language should be able to express and reason about plural entities — like the Beatles — and their relationships — like sing — under any possible reading (cumulative, distributive, or collective). In this paper a way of including collections and collective relations within a concept language, chosen as the formalism for representing the semantics of sentences, is presented. A twofold extension of the ALC concept language is investigated: (1) special relations introduce collective entities either out of their components or out of other collective entities; (2) plural quantifiers on collective relations specify their possible reading. The formal syntax and semantics of the concept language are given, together with a sound and complete algorithm to compute satisfiability and subsumption of concepts, and to compute recognition of individuals. An advantage of this formalism is the possibility of reasoning and stepwise refining in the presence of scoping ambiguities. Moreover, many phenomena covered by the Generalized Quantifiers Theory are easily captured within this framework. In the final part a way to include a theory of parts (mereology) is suggested, allowing for a lattice-theoretical approach to the treatment of plurals.
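The difference between two of these readings can be made concrete with a toy model (my own illustration, far simpler than the concept-language machinery of the paper; the model facts are hypothetical, and the cumulative reading is omitted for brevity):

```python
beatles = frozenset({"john", "paul", "george", "ringo"})

def collective(group, rel):
    # Collective reading: the relation holds of the collection itself.
    return rel(group)

def distributive(group, rel):
    # Distributive reading: the relation holds of every member separately.
    return all(rel(frozenset({m})) for m in group)

# Model: which (singular or plural) entities sing "Yesterday"?
# Collectively the band does; individually only Paul does.
SINGS = {beatles, frozenset({"paul"})}
rel = lambda x: x in SINGS

print(collective(beatles, rel))     # True
print(distributive(beatles, rel))   # False: John alone doesn't sing it
```

Keeping both readings expressible, and refining between them stepwise, is exactly the kind of scoping ambiguity the formalism is designed to reason under.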

10.
Cartesian differential invariants in scale-space
We present a formalism for studying local image structure in a systematic, coordinate-independent, and robust way, based on scale-space theory, tensor calculus, and the theory of invariants. We concentrate on differential invariants. The formalism is of general applicability to the analysis of grey-tone images of various modalities, defined on a D-dimensional spatial domain. We propose a "diagrammar" of differential invariants and tensors, i.e., a diagrammatic representation of image derivatives in scale-space together with a set of simple rules for representing meaningful local image properties. All local image properties on a given level of inner scale can be represented in terms of such diagrams, and, vice versa, all diagrams represent coordinate-independent combinations of image derivatives, i.e., true image properties.

11.
The data structures which form an integral part of the Madcap VI programming language are described. The initialization (declaration and constructor) expressions and selector expressions of these structures are defined, and their implementation using codewords is discussed. Structures, since they can contain references to other structures (including themselves), have the form of directed trees (graphs). Variables of primitive data type (real, complex, etc.) are naturally considered degenerate graphs, merely single nodes. The possibility of both multiword and fractional-word representation of structures is evident, but the language itself is implementation-independent. Thus a field is simply a substructure. The Madcap VI data structures are compared to data structure concepts in PL/I. This work was supported by the United States Atomic Energy Commission.

12.
This paper describes an approach for tracking rigid and articulated objects using a view-based representation. The approach builds on and extends work on eigenspace representations, robust estimation techniques, and parameterized optical flow estimation. First, we note that the least-squares image reconstruction of standard eigenspace techniques has a number of problems, and we reformulate the reconstruction problem as one of robust estimation. Second, we define a subspace constancy assumption that allows us to exploit techniques for parameterized optical flow estimation to simultaneously solve for the view of an object and the affine transformation between the eigenspace and the image. To account for large affine transformations between the eigenspace and the image, we define a multi-scale eigenspace representation and a coarse-to-fine matching strategy. Finally, we use these techniques to track objects over long image sequences in which the objects simultaneously undergo both affine image motions and changes of view. In particular, we use this EigenTracking technique to track and recognize the gestures of a moving hand.
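The key move of replacing least squares with robust estimation can be illustrated on a much smaller problem than eigenspace reconstruction. The sketch below (my own toy line-fitting example with a Geman-McClure-style weight, not the paper's formulation or data) shows iteratively reweighted least squares shrinking an outlier's influence:

```python
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.0, 2.1, 2.9, 4.0, 20.0]   # last point is a gross outlier

def weighted_fit(xs, ys, ws):
    # Closed-form weighted least squares for a line y = a*x + b.
    W   = sum(ws)
    sx  = sum(w * x for w, x in zip(ws, xs))
    sy  = sum(w * y for w, y in zip(ws, ys))
    sxx = sum(w * x * x for w, x in zip(ws, xs))
    sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    a = (W * sxy - sx * sy) / (W * sxx - sx * sx)
    b = (sy - a * sx) / W
    return a, b

# Plain least squares: every point, outlier included, gets weight 1.
a_ls, b_ls = weighted_fit(xs, ys, [1.0] * len(xs))

# Robust IRLS: reweight points by a Geman-McClure-style influence, so
# the outlier's pull shrinks on every iteration.
sigma = 1.0
a, b = a_ls, b_ls
for _ in range(20):
    ws = [1.0 / (1.0 + (y - (a * x + b)) ** 2 / sigma ** 2) ** 2
          for x, y in zip(xs, ys)]
    a, b = weighted_fit(xs, ys, ws)

print(round(a_ls, 2), round(a, 2))   # LS slope is dragged up; robust slope stays near 1
```

In the paper the same reformulation is applied to the eigenspace reconstruction residuals, so that occlusions and clutter do not corrupt the recovered view.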

13.
Book reviews     
Professor Richard Ennals is author of Artificial Intelligence and Human Institutions (Springer 1991) and co-editor, with Phil Molyneux, of Managing with Information Technology (Springer 1993).  相似文献   

14.
I begin by tracing some of the confusions regarding levels and reduction to a failure to distinguish two different principles according to which theories can be viewed as hierarchically arranged — epistemic authority and ontological constitution. I then argue that the notion of levels relevant to the debate between symbolic and connectionist paradigms of mental activity answers to neither of these models, but is rather correlative to the hierarchy of functional decompositions of cognitive tasks characteristic of homuncular functionalism. Finally, I suggest that the incommensurability of the intentional and extensional vocabularies constitutes a strong prima facie reason to conclude that there is little likelihood of filling in the story of Bechtel's missing level in such a way as to bridge the gap between such homuncular functionalism and his own model of mechanistic explanation.

15.
16.
Common Lisp [25], [26] includes a dynamic datatype system of moderate complexity, as well as predicates for checking the types of language objects. Additionally, an interesting predicate of two type specifiers—SUBTYPEP—is included in the language. This subtypep predicate provides a mechanism with which to query the Common Lisp type system regarding containment relations among the various built-in and user-defined types. While subtypep is rarely needed by an applications programmer, the efficiency of a Common Lisp implementation can depend critically upon the quality of its subtypep predicate: the run-time system typically calls upon subtypep to decide what sort of representations to use when making arrays, and the compiler calls upon subtypep to interpret user declarations, on which efficient data representation and code generation decisions are based. As might be expected given the complexity of the Common Lisp type system, there may be type containment questions which cannot be decided. In these cases subtypep is expected to return "can't determine", in order to avoid giving an incorrect answer. Unfortunately, most Common Lisp implementations have abused this license by answering "can't determine" in all but the most trivial cases. In particular, most Common Lisp implementations of SUBTYPEP fail on the basic axioms of the Common Lisp type system itself [25], [26]. This situation is particularly embarrassing for Lisp, the premier symbol-processing language, in which the implementation of complex symbolic logical operations should be relatively easy. Since subtypep was presumably included in Common Lisp to answer the hard cases of type containment, this lazy evaluation limits the usefulness of an important language feature.
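To make the shape of a SUBTYPEP-style query concrete, here is a sketch transcribed to Python (the real predicate is Common Lisp; the hierarchy slice, type encodings, and function names below are mine). A type is a primitive with declared parents, a bounded range, or a union, and the query returns two values: the answer and whether the answer is certain.

```python
PARENTS = {"integer": "rational", "rational": "real",
           "float": "real", "real": "number"}   # tiny slice of the hierarchy

def primitive_subtype(a, b):
    # Walk up the parent chain from a, looking for b.
    while a is not None:
        if a == b:
            return True
        a = PARENTS.get(a)
    return False

def subtypep(a, b):
    """Return (subtype?, certain?), like CL's two-valued SUBTYPEP."""
    if isinstance(a, tuple) and a[0] == "or":
        results = [subtypep(x, b) for x in a[1:]]
        if all(r == (True, True) for r in results):
            return (True, True)      # every disjunct is certainly contained
        if any(r == (False, True) for r in results):
            return (False, True)     # some disjunct certainly escapes b
        return (False, False)        # honest "can't determine"
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] == "range":
        return (b[1] <= a[1] and a[2] <= b[2], True)   # interval containment
    if isinstance(a, str) and isinstance(b, str):
        return (primitive_subtype(a, b), True)
    return (False, False)            # unhandled combination: don't guess

print(subtypep("integer", "number"))                  # (True, True)
print(subtypep(("range", 0, 10), ("range", -5, 20)))  # (True, True)
print(subtypep(("or", "integer", "float"), "real"))   # (True, True)
```

The last fallback line is the honest use of "can't determine" the paper calls for: it is reserved for genuinely undecided questions rather than used as a blanket answer.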

17.
Traditionally, when the term integration is used to refer to the interoperability of disparate, heterogeneous computer systems, it means the ability to exchange digital data between the systems. For the more sophisticated systems designers, integration may mean shared, distributed databases or a federated database system. Within the development of the STandard for the Exchange of Product model data (STEP — ISO 10303), integration refers to an information architecture composed of conceptual constructs that is independent of implementation considerations. The Integration Information Architecture of STEP is presented and explained. Instead of a flat representation of abstract (i.e., conceptual) data structures, integration within STEP takes place at four different levels:
1.  intra-resource integration;
2.  structural integration of application protocols through integrated resources;
3.  semantic integration of application protocols through application interpreted constructs (AICs);
4.  operational integration through application protocols.

18.
This paper presents a new language for isomorphic representations of legal knowledge in feature structures. The language includes predefined structures based on situation theory for common-sense categories, and predefined structures based on Van Kralingen's (1995) frame-based conceptual modelling language for legal rules. It is shown that the flexibility of the feature-structure formalism can be exploited to allow for structure-preserving representations of non-primitive concepts, and to enable various types of interaction and cross-reference between language elements. A fragment of the Dutch Opium Act is used to illustrate how modelling and reasoning proceed in practice.
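The basic operation on feature structures is unification. The sketch below uses nested Python dicts as an assumed minimal encoding (the paper's language is richer, with situation-theoretic and frame-based predefined structures); the legal-rule example is hypothetical, loosely in the spirit of the Opium Act fragment:

```python
def unify(f, g):
    """Return the unification of two feature structures, or None on clash."""
    if isinstance(f, dict) and isinstance(g, dict):
        out = dict(f)
        for feat, val in g.items():
            if feat in out:
                sub = unify(out[feat], val)
                if sub is None:
                    return None          # embedded feature values clash
                out[feat] = sub
            else:
                out[feat] = val          # feature only in g: just add it
        return out
    return f if f == g else None         # atomic values must match exactly

rule = {"act": "possess", "object": {"kind": "substance"}}
case = {"act": "possess", "object": {"kind": "substance", "name": "opium"}}
print(unify(rule, case))
# {'act': 'possess', 'object': {'kind': 'substance', 'name': 'opium'}}
print(unify(rule, {"act": "sell"}))      # None: incompatible acts
```

Matching a case description against a rule frame, as in the Opium Act illustration, then amounts to checking whether the two structures unify.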

19.
We describe a uniform technique for representing both sensory data and the attentional state of an agent using a subset of modal logic with indexicals. The resulting representation maps naturally into feed-forward parallel networks or can be implemented on stock hardware using bit-mask instructions. The representation has circuit semantics (Nilsson, 1994; Rosenschein and Kaelbling, 1986), but can efficiently represent propositions containing modals, unary predicates, and functions. We describe an example using Kludge, a vision-based mobile robot programmed to perform simple natural language instructions involving fetching and following tasks.
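The bit-mask idea can be sketched as follows: with a fixed universe of indexed objects, a unary predicate denotes a set stored as the bits of a single machine word, so conjunction and negation are single AND/NOT instructions. The objects and predicates below are hypothetical, chosen only to illustrate the encoding:

```python
OBJECTS = ["ball", "cube", "door", "person"]   # bit i  <->  OBJECTS[i]

def mask(*names):
    # Build the bit-set denotation of a predicate from its extension.
    m = 0
    for n in names:
        m |= 1 << OBJECTS.index(n)
    return m

RED    = mask("ball", "door")
NEARBY = mask("ball", "person")
ALL    = (1 << len(OBJECTS)) - 1

red_and_nearby = RED & NEARBY     # predicate conjunction: one AND
not_red        = ALL & ~RED       # negation, clipped to the universe

def decode(m):
    return [o for i, o in enumerate(OBJECTS) if m >> i & 1]

print(decode(red_and_nearby))     # ['ball']
print(decode(not_red))            # ['cube', 'person']
```

This is why the representation runs efficiently on stock hardware: evaluating a compound predicate over the whole universe costs a handful of word-level instructions rather than a loop over objects.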

20.