20 similar documents retrieved.
1.
2.
Elisabeth 《Data & Knowledge Engineering》2002,41(2-3):247-272
Natural language and databases are core components of information systems. They are related to each other because they share the same purpose: conceptualizing aspects of the real world in order to deal with them in some way. Natural language processing (NLP) techniques may substantially enhance most phases of the information system lifecycle, starting with requirements analysis, specification and validation, and going up to conflict resolution, result processing and presentation. Furthermore, natural-language-based query languages and user interfaces facilitate access to information for anyone and allow for new paradigms in the usage of computerized services. This paper investigates the use of NLP techniques in the design phase of information systems. It then reports on database querying and information retrieval enhanced with NLP.
3.
This paper introduces a well-defined co-operation between the domain expert, the knowledge engineer, and knowledge acquisition and transformation tools. First, the domain expert, supported by a hypertext tool, generates an intermediate representation from parts of authentic texts of a domain. As a side effect, this representation serves as human-readable documentation. In subsequent steps, this representation is semi-automatically transformed into a formal representation by knowledge acquisition tools. These tools are fully adapted to the expert's domain in both terminology and model structure, which are developed by the knowledge engineer from a library of generic models and with preparation tools.
4.
We describe a new approach for exploiting relevance feedback in content-based image retrieval (CBIR). In our approach to relevance feedback we try to capture more of the users' relevance judgments by allowing natural-language-like comments on the retrieved images. Using methods from fuzzy logic and computational intelligence, we are able to translate these comments into new targets for searching the image database. Such enhanced information is utilized to develop a system that can provide more effective and efficient retrieval.
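The abstract does not give the fuzzy mapping itself, so the following is only a minimal Python sketch of the general idea: invented linguistic comments are mapped to weights that shift a query feature vector toward or away from a commented image. The comment vocabulary, the feature layout, and the `update_query` helper are hypothetical, not the authors' implementation.

```python
# Minimal sketch (not the authors' method): natural-language comments on
# retrieved images are mapped to fuzzy relevance weights that move the
# query feature vector toward or away from the commented image.

LINGUISTIC_WEIGHTS = {
    "exactly what I want": 1.0,
    "quite similar": 0.6,
    "somewhat similar": 0.3,
    "not relevant": -0.5,
    "completely wrong": -1.0,
}

def update_query(query_vec, image_vec, comment, step=0.25):
    """Shift the query vector according to the comment's fuzzy weight."""
    w = LINGUISTIC_WEIGHTS.get(comment.lower().strip(), 0.0)
    return [q + step * w * (x - q) for q, x in zip(query_vec, image_vec)]

query = [0.2, 0.8, 0.5]
retrieved = [0.6, 0.4, 0.5]
query = update_query(query, retrieved, "quite similar")
print(query)
```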
5.
When human experts express their ideas and thoughts, they basically do so in words. That is, experts with much professional experience are capable of making assessments using their intuition and experience. The measurement and interpretation of characteristics are subject to uncertainty, because most measured characteristics, analytical results, and field data can be interpreted only intuitively by experts. In such cases, experts may express their judgments using linguistic terms. The difficulty of directly measuring certain characteristics makes the estimation of these characteristics imprecise. Such measurements may be dealt with using fuzzy set theory. As Professor L. A. Zadeh has stressed the importance of computing with words, fuzzy sets can take a central role in handling words [12, 13]. From this perspective, the fuzzy logic approach is often regarded as the main, and only, useful tool for dealing with human words. In this paper we present another approach to handling human words instead of fuzzy reasoning: fuzzy regression analysis enables us to treat computation with words. In order to process linguistic variables, we define vocabulary translation and vocabulary matching, which convert linguistic expressions into membership functions on the interval [0, 1] on the basis of a linguistic dictionary, and vice versa. We employ fuzzy regression analysis to model the experts' assessment process, from linguistic variables describing the features and characteristics of an object to the linguistic expression of the total assessment. The presented process consists of four parts: (1) vocabulary translation, (2) estimation, (3) vocabulary matching and (4) the dictionary. We employed fuzzy quantification theory type 2 for estimating the total assessment in terms of linguistic structural attributes obtained from an expert. This research was supported in part by a Grant-in-Aid for Scientific Research (C-2), Grant No. 11680459, of the Ministry of Education, Science, Sports and Culture.
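As a rough illustration of the vocabulary translation and matching steps described above (not the paper's fuzzy regression model), the sketch below maps invented linguistic terms to triangular membership functions on [0, 1] and back. The dictionary entries, the distance measure, and the toy "estimate" are assumptions.

```python
# Illustrative sketch of "vocabulary translation" and "vocabulary matching"
# using triangular membership functions on [0, 1]. The dictionary and the
# weighted combination standing in for the estimation step are invented.

DICTIONARY = {
    "poor":      (0.0, 0.0, 0.3),
    "fair":      (0.2, 0.4, 0.6),
    "good":      (0.5, 0.7, 0.9),
    "excellent": (0.8, 1.0, 1.0),
}

def translate(word):
    """Vocabulary translation: linguistic term -> triangular membership (a, b, c)."""
    return DICTIONARY[word]

def match(fuzzy):
    """Vocabulary matching: membership function -> closest linguistic term."""
    def dist(term):
        return sum((x - y) ** 2 for x, y in zip(DICTIONARY[term], fuzzy))
    return min(DICTIONARY, key=dist)

# A hypothetical estimate of the total assessment from two attribute terms:
estimate = tuple(0.6 * a + 0.4 * b for a, b in zip(translate("good"), translate("fair")))
print(match(estimate))  # -> "good"
```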
6.
An ontology is an explicit, formal specification of a shared conceptualization, describing the semantics of concepts through the relations between them. A pharmaceutical ontology is a shared, knowledge-level description within or across pharmaceutical enterprises; it not only promotes the reuse and sharing of pharmaceutical knowledge but also greatly facilitates knowledge management and information exchange within and between enterprises. This paper discusses the concept of ontology, introduces principles and methods for ontology construction as well as various ontology representation languages, with emphasis on the OWL ontology language, analyzes application areas of pharmaceutical ontologies, gives a simple example fragment of an OWL-based ontology, and finally briefly introduces the ontology editing tool protege2000 and its use.
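The paper's own OWL fragment is not reproduced here; the sketch below merely illustrates, assuming the rdflib library is available, what a small pharmaceutical class hierarchy with one object property might look like in OWL. The class names and namespace URI are hypothetical.

```python
# Minimal sketch (assumed, not the paper's fragment) of a tiny pharmaceutical
# ontology expressed in OWL and built with rdflib.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/pharma#")
g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# Class hierarchy: Antibiotic is a subclass of Drug.
g.add((EX.Drug, RDF.type, OWL.Class))
g.add((EX.Antibiotic, RDF.type, OWL.Class))
g.add((EX.Antibiotic, RDFS.subClassOf, EX.Drug))

# An object property linking drugs to the diseases they treat.
g.add((EX.Disease, RDF.type, OWL.Class))
g.add((EX.treats, RDF.type, OWL.ObjectProperty))
g.add((EX.treats, RDFS.domain, EX.Drug))
g.add((EX.treats, RDFS.range, EX.Disease))

print(g.serialize(format="turtle"))
```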
7.
Robin Collier 《Artificial Intelligence Review》1994,8(1):17-54
A fundamental issue in natural language processing is the prerequisite of an enormous quantity of preprogrammed knowledge concerning both the language and the domain under examination. Manual acquisition of this knowledge is tedious and error-prone; development of an automated acquisition process would prove invaluable. This paper surveys a range of the systems that have been developed in the domains of machine learning and natural language processing. Each system is categorised as either symbolic or connectionist, and its characteristics and limitations are described.
8.
Knowledge bases that are hard to reuse and share because of semantic inconsistency are a key problem affecting the development and application of expert systems. Focusing on the characteristics of diagnostic knowledge in fish disease diagnosis expert systems, and based on a fish disease diagnosis knowledge model, this paper analyzes the patterns of fish disease diagnosis knowledge, defines its basic knowledge units, and constructs an ontology model of fish disease diagnosis knowledge, which is then formally described in the OWL language. This provides a reference for promoting the sharing and reuse of fish disease diagnosis knowledge.
9.
Computer geometric construction based on natural language processing
How to automatically convert elementary geometry propositions stated in natural language into a construction language that computers can understand is a gap in natural language processing and a difficulty in realizing human-computer interaction in educational software. Through a study of restricted natural language within the geometry domain, this paper establishes an effective and feasible language-understanding model, realizes automatic conversion from natural language to formalized rules, and designs corresponding software.
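The paper's restricted-language grammar is not given in the abstract; the toy sketch below only illustrates the general idea of mapping restricted natural-language construction statements to formal drawing commands. The sentence patterns and command names (`Segment`, `Circle`, `Midpoint`) are invented for illustration.

```python
# Toy sketch of translating restricted natural-language construction
# statements into formal drawing commands; the patterns and command names
# are hypothetical and far simpler than the paper's grammar.
import re

PATTERNS = [
    (re.compile(r"draw segment ([A-Z]) ?([A-Z])"),                       "Segment({0}, {1})"),
    (re.compile(r"draw the circle with center ([A-Z]) through ([A-Z])"), "Circle({0}, {1})"),
    (re.compile(r"let ([A-Z]) be the midpoint of ([A-Z]) ?([A-Z])"),     "{0} = Midpoint({1}, {2})"),
]

def translate(sentence):
    """Translate one restricted natural-language statement into a command."""
    s = sentence.strip().rstrip(".")
    s = s[0].lower() + s[1:] if s else s   # tolerate a sentence-initial capital
    for pattern, template in PATTERNS:
        m = pattern.search(s)
        if m:
            return template.format(*m.groups())
    return None

print(translate("Draw segment AB."))              # Segment(A, B)
print(translate("Let M be the midpoint of AB."))  # M = Midpoint(A, B)
```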
10.
One of the most difficult problems within the field of Artificial Intelligence (AI) is that of processing language by computer, or natural-language processing. A major problem in natural-language processing is to build theories and models of how individual utterances cling together into a coherent discourse. The problem is important because, to properly understand natural language, a computer should have some sense of what it means for a discourse to be coherent and rational. Theories, models and implementations of natural-language processing argue for a measure of coherence based on three themes: meaning, structure, and intention. Most approaches stress one theme over all the others. Their future lies in the integration of components of all approaches. A theory of intention analysis solves, in part, the problem of natural-language dialogue processing. A central principle of the theory is that coherence of natural-language dialogue can be modelled by analysing sequences of intention. The theory of intention analysis has been incorporated within a computational model, called Operating System CONsultant (OSCON), implemented in Quintus Prolog, which understands, and answers in English, English questions about computer operating systems. Theories and implementations of discourse processing will not only enable people to communicate better with computers, but also enable computers to better communicate with people.
11.
This paper presents an experiment with a linguistic support tool for consolidating requirements sets. The experiment is designed around the requirements management process at a large market-driven software development company that develops generic solutions to satisfy many different customers. New requirements and requests for information are continuously issued and must be analyzed and responded to. New requirements should first be consolidated with the old ones, to avoid re-analysis of previously elicited requirements and to complement existing requirements with new information. In the presented experiment, a new open-source tool is evaluated in a laboratory setting. The tool uses linguistic engineering techniques to calculate similarities between requirements and presents a ranked list of suggested similar requirements, between which links may be assigned. It is hypothesized that the proposed technique for finding and linking similar requirements makes consolidation more efficient. The results show that subjects given the tool's support are significantly more efficient and more correct in consolidating two requirements sets than subjects who do not get the support. The results suggest that the proposed techniques may give valuable support and save time in an industrial requirements consolidation process.
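The tool's actual linguistic engineering is more elaborate than this, but a minimal sketch of similarity-based link suggestion, assuming a plain TF-IDF/cosine model and invented requirement texts, could look as follows.

```python
# Minimal sketch of ranking previously elicited requirements by textual
# similarity to a newly issued requirement, so an analyst can confirm links.
# The requirement texts and the TF-IDF/cosine model are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

old_reqs = [
    "The system shall export reports as PDF files.",
    "Users shall be able to reset their password via email.",
]
new_req = "Allow password reset through an email link."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(old_reqs + [new_req])
scores = cosine_similarity(matrix[len(old_reqs)], matrix[:len(old_reqs)]).ravel()

# Present a ranked list of candidate links for the analyst to confirm.
for score, req in sorted(zip(scores, old_reqs), reverse=True):
    print(f"{score:.2f}  {req}")
```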
12.
This article describes the natural language processing techniques used in two computer-assisted language instruction programs: VERBCON and PARSER. VERBCON is a template-type program which teaches students how to use English verb forms in written texts. In the exercises, verbs have been put into the infinitive, and students are required to supply the appropriate verb forms. PARSER is intended to help students learn English sentence structure. Using a lexicon and production rules, it generates sentences and asks students to identify their grammatical parts. The article contends that only by incorporating natural language processing techniques can these programs offer a substantial number of exercises and at the same time provide students with informative feedback.
Alan Bailin is director of the Effective Writing Program at the University of Western Ontario, London, Ontario, Canada. Philip Thomson is a programmer in the Faculty of Medicine, University of Western Ontario.
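As a rough illustration of the VERBCON-style exercise format described above (not the original program), the sketch below stores each prompt with its expected verb form and a short explanation used for feedback. The exercise content and the feedback wording are invented.

```python
# Illustrative sketch of a template exercise: each prompt keeps the verb in
# the infinitive, and feedback is generated from the stored expected form.

EXERCISE = [
    # (prompt with infinitive, expected form, brief explanation)
    ("Yesterday she (to walk) to work.", "walked", "past simple for a finished past action"),
    ("She (to walk) to work every day.", "walks",  "present simple for a habitual action"),
]

def check(answer, expected, explanation):
    if answer.strip().lower() == expected:
        return "Correct."
    return f"Not quite: expected '{expected}' ({explanation})."

for prompt, expected, explanation in EXERCISE:
    print(prompt)
    print(check("walked", expected, explanation))
```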
13.
Lucja Iwańska 《Minds and Machines》1993,3(4):475-510
A formal, computational, semantically clean representation of natural language is presented. This representation captures the fact that logical inferences in natural language crucially depend on the semantic relation of entailment between sentential constituents such as determiner, noun, adjective, adverb, preposition, and verb phrases. The representation parallels natural language in that it accounts for human intuition about entailment of sentences, it preserves its structure, it reflects the semantics of different syntactic categories, it simulates conjunction, disjunction, and negation in natural language by computable operations with provable mathematical properties, and it allows one to represent coordination on different syntactic levels. The representation demonstrates that the Boolean semantics of natural language can be successfully modeled in terms of representation and inference by knowledge representation formalisms with Boolean semantics. A novel approach to the problem of automatic inferencing in natural language is addressed. The algorithm for updating a computer knowledge base and reasoning with explicit negative, disjunctive, and conjunctive information, based on computing the subsumption relation between the representations of the appropriate sentential constituents, is discussed with examples.
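A toy sketch of the underlying idea, under the assumption that constituent denotations can be modelled as finite sets over a closed domain: conjunction is intersection, disjunction is union, negation is complement, and entailment is the subsumption (subset) relation. The miniature lexicon is invented and much simpler than the paper's representation.

```python
# Toy Boolean-semantics sketch: constituent denotations are finite sets,
# Boolean connectives are set operations, and entailment is subsumption.
# The domain and lexicon below are assumptions for illustration only.

DOMAIN = {"rex", "felix", "tweety"}
LEXICON = {
    "dog":  {"rex"},
    "cat":  {"felix"},
    "bird": {"tweety"},
    "pet":  {"rex", "felix"},
}

def conj(a, b): return a & b        # conjunction = intersection
def disj(a, b): return a | b        # disjunction = union
def neg(a):     return DOMAIN - a   # negation = complement

def entails(a, b):
    """'a entails b' holds when a's denotation is subsumed by b's."""
    return a <= b

print(entails(LEXICON["dog"], LEXICON["pet"]))                          # True
print(entails(disj(LEXICON["dog"], LEXICON["bird"]), LEXICON["pet"]))   # False
print(entails(conj(LEXICON["pet"], LEXICON["cat"]), neg(LEXICON["dog"])))  # True
```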
14.
This paper describes ongoing research into applying techniques normally used to formally specify and analyze the context-sensitive syntax of programming languages to the specification and analysis of the syntax of a natural language, namely English. The specific formal methods presently being investigated are two-level grammar (2LG) and the Vienna Definition Language (VDL). A preliminary subset of English has been established, consisting of fifteen basic sentence patterns. 2LG and VDL specifications are given for one of these sentence types, and the syntactic analysis of an English sentence using each of the two specifications is illustrated through an example.
15.
Computer animation and visualization can facilitate communication between the hearing impaired and those with normal speaking capabilities. This paper presents a model of a system that is capable of translating text from a natural language into animated sign language. Techniques have been developed to analyse language and transform it into sign language in a systematic way. A hand-motion coding method, as applied to hand-motion representation and control, has also been investigated. Two translation examples are given to demonstrate the practicality of the system.
16.
Anastasia Karanastasi, Fotis G. Kazasis, Stavros Christodoulakis 《Personal and Ubiquitous Computing》2005,9(5):262-272
The TV-Anytime standard describes the structures of categories of digital TV program metadata, as well as user-profile metadata for TV programs. We describe a natural language (NL) model that lets users interact with the TV-Anytime metadata and preview TV programs from their mobile devices. The language fully utilises the TV-Anytime metadata specifications (upper ontologies), as well as domain-specific ontologies. The interaction model does not use clarification dialogues; instead it uses the user profiles, together with TV-Anytime metadata and the ontologies, to rank the possible responses in cases of ambiguity. We describe implementations of the model that run on a PDA and on a mobile phone and manage the metadata on a remote TV-Anytime-compatible TV set. We present user evaluations of the approach. Finally, we propose a generalised implementation framework that can be used to easily provide NL interfaces on mobile devices for different applications and ontologies.
17.
Two research projects are described that explore the use of spoken natural language interfaces to virtual reality (VR) systems. Both projects combine off-the-shelf speech recognition and synthesis technology with in-house command interpreters that interface to the VR applications. Details about the interpreters and other technical aspects of the projects are provided, together with a discussion of some of the design decisions involved in the creation of speech interfaces. Questions and issues raised by the projects are presented as inspiration for future work. These issues include: requirements for object and information representation in VR models to support natural language interfaces; use of the visual context to establish the interaction context; difficulties with referencing events in the virtual world; and problems related to the usability of speech and natural language interfaces in general.
18.
An enterprise resource planning (ERP) system is an enterprise-wide application software package that integrates all necessary business functions into a single system with a common database. In order to implement an ERP project successfully in an organization, it is necessary to select a suitable ERP system. This paper presents a new model, based on linguistic information processing, for dealing with such a problem. In the study, a similarity-degree-based algorithm is proposed to aggregate the objective information about ERP systems from external professional organizations, which may be expressed in different linguistic term sets. Consistency and inconsistency indices are defined by considering the subjective information obtained from internal interviews with ERP vendors, and a linear programming model is then established for selecting the most suitable ERP system. Finally, a numerical example is given to demonstrate the application of the proposed method.
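The paper's similarity-degree algorithm and linear programming model are not spelled out in the abstract; the sketch below only illustrates one plausible first step, mapping linguistic assessments from differently sized term sets onto [0, 1] and averaging them per ERP candidate. The term sets, the assessments, and the equal weighting are assumptions.

```python
# Minimal sketch (not the paper's algorithm): linguistic assessments coming
# from term sets of different granularity are mapped to [0, 1] by index and
# then averaged per ERP candidate. All data below is invented.

TERM_SETS = {
    "five":  ["very poor", "poor", "fair", "good", "very good"],
    "seven": ["none", "very low", "low", "medium", "high", "very high", "perfect"],
}

def to_unit(term, term_set):
    """Map a linguistic term to a point in [0, 1] by its position in its set."""
    terms = TERM_SETS[term_set]
    return terms.index(term) / (len(terms) - 1)

assessments = {
    "ERP-A": [("good", "five"), ("high", "seven"), ("medium", "seven")],
    "ERP-B": [("fair", "five"), ("very high", "seven"), ("low", "seven")],
}

for system, judgements in assessments.items():
    score = sum(to_unit(t, s) for t, s in judgements) / len(judgements)
    print(f"{system}: {score:.3f}")
```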
19.
This article addresses the problem of understanding mathematics described in natural language. Research in this area dates back to the early 1960s. Several systems have been proposed to let machines solve mathematical problems in various domains such as algebra, geometry, physics, and mechanics. This correspondence provides a state-of-the-art technical review of these systems and of the approaches proposed by different research groups. A unified architecture that has been used in most of these approaches is identified, and differences among the systems are highlighted. Significant achievements of each method are pointed out, and major strengths and weaknesses of the approaches are discussed. Finally, present efforts and future trends in this research area are presented.
20.
John Dinsmore 《New Generation Computing》1991,9(1):39-68
A logic-based system of knowledge representation for natural language discourse has three primary advantages:
• It has adequate expressive power,
• it has a well-defined semantics, and
• it uses simple, sound, general rules of inference.
On the other hand, a standard logic-based system has the following disadvantages:
• It supports only an exceedingly complex mapping from surface discourse sentences to internal representations, and
• reasoning about the content of semantically complex discourses is difficult because of the incommodious complexity of the internalized formulas.