Similar Literature
20 similar documents found.
1.
A Chinese Language Understanding System Based on a Natural Language Computational Model   Cited: 4 (self: 0, others: 4)
周经野, Journal of Software (软件学报), 1993, 4(6): 41-46
This paper first presents a computational model of natural language that divides the process of natural language communication into three levels: linguistic form, surface semantics, and deep semantics, so that natural language understanding can be abstracted as a composite function UP. Based on this model, we designed a Chinese language understanding system with good extensibility and portability. The system uses a Chinese semantic-structure grammar to analyze Chinese sentences, organically combining syntactic analysis and semantic analysis. The paper formally defines the deep semantics of words and the basic operations on deep semantics, and gives the algorithms of the analyzer, the understander, and the generator.
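The abstract models understanding as a composite function over three levels (linguistic form, surface semantics, deep semantics). A minimal sketch of that composition idea follows; the toy tokenization, frames, and function bodies are illustrative assumptions, not the paper's actual algorithms.

```python
# Illustrative sketch only: understanding as a composition of level-wise mappings.
# The toy rules below are not taken from the paper; they merely show the
# "linguistic form -> surface semantics -> deep semantics" layering.

def linguistic_form(sentence: str) -> list[str]:
    """Level 1: the linguistic form, here a toy whitespace tokenization."""
    return sentence.split()

def surface_semantics(tokens: list[str]) -> dict:
    """Level 2: a toy predicate-argument frame built from the tokens."""
    if len(tokens) >= 2:
        return {"predicate": tokens[1], "arguments": [tokens[0]] + tokens[2:]}
    return {"predicate": tokens[0] if tokens else "", "arguments": []}

def deep_semantics(frame: dict) -> dict:
    """Level 3: normalize the surface frame into a deeper event representation."""
    return {"event": frame["predicate"].upper(), "participants": frame["arguments"]}

def UP(sentence: str) -> dict:
    """Understanding as the composite function UP = deep_semantics ∘ surface_semantics ∘ linguistic_form."""
    return deep_semantics(surface_semantics(linguistic_form(sentence)))

print(UP("robot grasps cup"))  # {'event': 'GRASPS', 'participants': ['robot', 'cup']}
```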

2.
Automatic Word Segmentation of Phonetic and Character Streams in Natural Language Understanding   Cited: 2 (self: 0, others: 2)
This paper discusses automatic word segmentation of speech streams and character streams in natural language understanding. It constructs a hierarchical model of Chinese language understanding; proposes the idea of restricting feedback information to its simplest form, so that the segmentation layer is independent of semantics, together with three strategies for ordering candidate word strings: by likelihood, by computation time, and a combination of the two; and introduces FWF, a word segmentation method with very high segmentation accuracy. The implementation algorithm and experimental results are given. The FWF method has been used in sentence-level keyboard input, speech input, and handwritten Chinese character input systems.
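FWF itself is not specified in the abstract, so the sketch below only illustrates the first ranking strategy (ordering candidate segmentations by likelihood); the toy dictionary and probabilities are invented for the example.

```python
# Hypothetical sketch of ranking candidate segmentations by likelihood.
# This is not the paper's FWF algorithm; the dictionary and probabilities are invented.
import math

DICT_PROB = {"研究": 0.02, "生命": 0.01, "研究生": 0.015, "命": 0.001, "起源": 0.012, "的": 0.05}

def segmentations(text, memo=None):
    """Enumerate all segmentations of `text` into dictionary words."""
    if memo is None:
        memo = {}
    if text == "":
        return [[]]
    if text in memo:
        return memo[text]
    results = []
    for i in range(1, len(text) + 1):
        word = text[:i]
        if word in DICT_PROB:
            for rest in segmentations(text[i:], memo):
                results.append([word] + rest)
    memo[text] = results
    return results

def rank_by_likelihood(candidates):
    """Strategy 1 from the abstract: order candidate word strings by likelihood."""
    return sorted(candidates, key=lambda seg: -sum(math.log(DICT_PROB[w]) for w in seg))

for seg in rank_by_likelihood(segmentations("研究生命的起源")):
    print(seg)
```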

3.
To understand natural language accurately, semantic understanding alone is not enough: the semantic interpretation must be combined with the concrete context, that is, pragmatics must also be studied. In an intelligent tutoring assistant system, we made some attempts at implementing pragmatics, combining the semantic-understanding results of natural language understanding with a specific teaching domain (here, the travel-itinerary domain) and deriving deep implicit information through pragmatic inference, so that the essence of the natural language input can be understood fairly completely.

4.
Domain-Based Research and Implementation of Pragmatics   Cited: 1 (self: 1, others: 1)
To understand natural language accurately, semantic understanding alone is not enough: the semantic interpretation must be combined with the concrete context, that is, pragmatics must also be studied. In an intelligent tutoring assistant system, we made some attempts at implementing pragmatics, combining the semantic-understanding results of natural language understanding with a specific teaching domain (here, the travel-itinerary domain) and deriving deep implicit information through pragmatic inference, so that the essence of the natural language input can be understood fairly completely.

5.
A Three-Dimensional Semantic Representation Model for Chinese Sentences   Cited: 1 (self: 0, others: 1)
How to represent and compute the semantics of Chinese sentences has long been one of the main goals of natural language understanding. Based on an analysis of existing domestic and international research on semantic representation, this paper proposes a three-dimensional representation model for Chinese sentence semantics, the "sense facet / sememe / sense context" (义面—义原—义境) model. The model allows the information contained in a sentence to be represented more accurately and completely, and provides a new line of thought for research on Chinese semantic knowledge modeling and semantic computation.
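As a rough illustration of what a three-dimensional semantic record might look like as a data structure (the field names and example values below are assumptions; the paper's actual definitions of 义面, 义原, and 义境 are not reproduced here):

```python
# Hypothetical sketch: a sentence-semantics record with the three dimensions
# named in the abstract. Field contents are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class SentenceSemantics:
    sense_facets: dict[str, str] = field(default_factory=dict)   # 义面: word-sense choices
    sememes: dict[str, list[str]] = field(default_factory=dict)  # 义原: primitive meaning components
    sense_context: dict[str, str] = field(default_factory=dict)  # 义境: situational context

s = SentenceSemantics(
    sense_facets={"打": "hit"},
    sememes={"打": ["act", "contact", "force"]},
    sense_context={"time": "past", "register": "colloquial"},
)
print(s)
```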

6.
This paper describes a speech system for an intelligent robot that can understand natural language input in spoken Chinese within the robot's limited environment. The system consists of three parts: speech recognition, natural language understanding, and speech synthesis. Based on Prolog predicate logic, the system performs syntactic and semantic analysis of the input sentence to obtain its internal representation and draw its derivation tree. Depending on the form of the input sentence, it either executes a command or answers a question. The whole system is written in Prolog and runs on an IBM PC/XT.

7.
8.
Overview: for an efficient, robust natural language interface to databases (NLIDB), once a natural language query has been analyzed, the system must give timely feedback that helps users understand the processing results and avoid misreading them. Concretely, NLIDB feedback has two levels. The first is feedback on intermediate results: the system's intermediate processing of the natural language (which can be regarded as the system's understanding of the input) is presented back to the user in an acceptable form, most commonly by converting the intermediate result back into natural language, i.e., paraphrasing. The second is feedback on the query results: the results are explicitly analyzed and explained to help the user understand them; we call this part result semantic analysis. Before NChiql translates a natural language query into a database query language, it is essential to provide an interactive mechanism that ensures the system's understanding of the query matches the user's real intent. NChiql's natural language understanding proceeds in two stages: the natural language is first converted into an unambiguous intermediate form, a semantic dependency tree, which is then converted into an SQL statement.
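NChiql's actual dependency-tree schema and SQL-generation rules are not given in this abstract; the sketch below only illustrates the two-stage idea (intermediate tree, paraphrase feedback, SQL output) with invented structures and table names.

```python
# Hypothetical sketch of a two-stage NL-to-SQL pipeline with paraphrase feedback.
# The tree schema, table/column names, and phrasing rules are invented for
# illustration; they are not NChiql's actual design.

def parse_to_dependency_tree(query: str) -> dict:
    """Stage 1: map the natural language query to an unambiguous intermediate form."""
    # A real system would use segmentation and semantic dependency parsing here.
    return {"target": "name", "entity": "student", "constraint": ("age", ">", 20)}

def paraphrase(tree: dict) -> str:
    """Intermediate-result feedback: turn the tree back into natural language."""
    col, op, val = tree["constraint"]
    return f"Find the {tree['target']} of every {tree['entity']} whose {col} is {op} {val}."

def tree_to_sql(tree: dict) -> str:
    """Stage 2: convert the semantic dependency tree into SQL."""
    col, op, val = tree["constraint"]
    return f"SELECT {tree['target']} FROM {tree['entity']} WHERE {col} {op} {val};"

tree = parse_to_dependency_tree("列出年龄大于20岁的学生姓名")
print(paraphrase(tree))   # the user confirms the system's understanding first
print(tree_to_sql(tree))  # then the query is executed
```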

9.
An Automatic Computer Conversion System from Pinyin to Chinese Characters   Cited: 4 (self: 0, others: 4)
This paper presents a system that uses natural language understanding to automatically convert Chinese pinyin into Chinese characters. Drawing on Chinese lexical, syntactic, semantic, and pragmatic knowledge, the system builds a hierarchically structured knowledge base and processes pinyin text sentence by sentence through word segmentation, morphological analysis, syntactic analysis, and semantic and pragmatic processing, finally producing correct Chinese sentences and text. The system has been implemented on an IBM PC/XT microcomputer.

10.
王缓缓, 李虎, 石永, Computer Science (计算机科学), 2011, 38(2): 187-190, 240
Although the relevant research organizations provide some simplification tools for the Semantic Web, its usability remains low for domain experts without the necessary background knowledge. To address this problem, a reasoning model for controlled-natural-language systems based on the Semantic Web is proposed. The paper first gives the framework of the reasoning model. It then analyzes the language-processing part of the controlled natural language, proposing a WordNet-based ontology lexicon model for the controlled-natural-language system and an ontology-lexicon-based interpreter that converts the controlled natural language into an intermediate representation, discourse representation structures. Finally, the reasoning part converts the discourse representation structures into Semantic Web ontologies and rules, maps them into Jess facts and rules with a template tool, and reasons over the controlled natural language according to predefined Semantic Web axioms and theorems. Experiments show that the model greatly improves the efficiency of knowledge-representation modeling and basically satisfies simple reasoning tasks, so it has practical value.
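As a toy illustration of the general idea of compiling a controlled-language sentence into a rule for a forward-chaining engine (the sentence pattern, predicate names, and CLIPS-style output below are assumptions, not the paper's DRS representation or its Jess templates):

```python
# Hypothetical sketch: mapping a controlled-natural-language pattern to a rule string.
# The sentence pattern, predicate names, and CLIPS-like output syntax are illustrative
# assumptions; they are not the paper's discourse representation structures or Jess templates.
import re

PATTERN = re.compile(r"^Every (\w+) is a (\w+)\.$")

def cnl_to_rule(sentence: str) -> str:
    """Translate 'Every X is a Y.' into a forward-chaining rule string."""
    m = PATTERN.match(sentence)
    if not m:
        raise ValueError(f"Sentence not in the controlled fragment: {sentence!r}")
    sub, sup = m.group(1), m.group(2)
    return (f"(defrule {sub}-is-{sup}\n"
            f"   (instance (name ?x) (class {sub}))\n"
            f"   =>\n"
            f"   (assert (instance (name ?x) (class {sup}))))")

print(cnl_to_rule("Every professor is a person."))
```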

11.
This paper describes the design and implementation of a computational model for Arabic natural language semantics: a semantic parser for capturing the deep semantic representation of Arabic text. The parser is a major part of an interlingua-based machine translation system for translating Arabic text into Sign Language. It follows a frame-based analysis to capture the overall meaning of Arabic text in a formal representation suitable for NLP applications that need deep semantic representation, such as language generation and machine translation. We show the representational power of this approach for the semantic analysis of texts in Arabic, a language that differs substantially from English in several ways. We also show that integrating WordNet and FrameNet into a single unified knowledge resource can improve disambiguation accuracy. Furthermore, we propose a rule-based algorithm to generate an equivalent Arabic FrameNet, using a lexical-resource alignment of FrameNet 1.3 lexical units (LUs) and WordNet 3.0 synsets for English. A pilot study of motion and location verbs was carried out to test our system. Our corpus is made up of more than 2000 Arabic sentences in the domain of motion events, collected from Algerian first-level educational Arabic books and other relevant Arabic corpora.
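The rule-based bootstrapping relies on an alignment between FrameNet lexical units and WordNet synsets; a toy sketch of how such an alignment could be used to propose Arabic lexical units is shown below. The alignment table, the bilingual lexicon, and the frame names are invented examples, not the paper's resources.

```python
# Hypothetical sketch of proposing Arabic FrameNet lexical units from an
# English LU-synset alignment. All data below is invented for illustration.

# English FrameNet LU -> aligned WordNet synset id (assumed alignment resource)
LU_TO_SYNSET = {("run.v", "Self_motion"): "run.v.01",
                ("walk.v", "Self_motion"): "walk.v.01"}

# WordNet synset id -> Arabic lemmas (assumed bilingual lexicon)
SYNSET_TO_ARABIC = {"run.v.01": ["جرى", "ركض"],
                    "walk.v.01": ["مشى"]}

def propose_arabic_lus():
    """For each aligned English LU, propose Arabic LUs evoking the same frame."""
    arabic_framenet = {}
    for (lu, frame), synset in LU_TO_SYNSET.items():
        for arabic_lemma in SYNSET_TO_ARABIC.get(synset, []):
            arabic_framenet.setdefault(frame, set()).add(arabic_lemma)
    return arabic_framenet

print(propose_arabic_lus())  # e.g. {'Self_motion': {'جرى', 'ركض', 'مشى'}}
```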

12.
Semantic understanding of natural language involves issues at multiple levels, including basic propositional meaning centered on predicates, conceptual meaning beyond the propositional level, and logical complement meaning. Current mainstream shallow semantic analysis focuses mainly on propositional meaning and offers little support for conceptual and logical meaning, which makes it hard to support deep machine understanding of, and reasoning over, text. Drawing on linguistic theories such as argument structure theory and event semantics, this paper goes beyond shallow semantic analysis such as semantic role labeling and establishes a Chinese deep semantic description framework that integrates concepts and logic. On top of this framework, using a layer-by-layer annotation strategy, a large-scale Chinese deep semantic annotation corpus was built from real text, and the completeness and coverage of the description framework were verified through language-engineering practice. The theoretical framework and the language resources built on it are expected to promote innovation in Chinese automatic semantic analysis and in related work in artificial intelligence.

13.
We present MARS (Multilingual Automatic tRanslation System), a research prototype speech-to-speech translation system. MARS is aimed at two-way conversational spoken language translation between English and Mandarin Chinese for limited domains, such as air travel reservations. In MARS, machine translation is embedded within a complex speech processing task, and translation performance is strongly affected by the performance of other components, such as the recognizer and the semantic parser. All components in the proposed system are statistically trained on an appropriate corpus. The speech signal is first recognized by an automatic speech recognizer (ASR). Next, the ASR-transcribed text is analyzed by a semantic parser, which uses a statistical decision-tree model that requires no hand-crafted grammars or rules; the parser also provides semantic information that helps re-score the speech recognition hypotheses. The semantic content extracted by the parser is formatted into a language-independent tree structure, which is used for interlingua-based translation. A maximum-entropy-based sentence-level natural language generation (NLG) approach generates sentences in the target language from the semantic tree representations. Finally, the generated target sentence is synthesized into speech by a speech synthesizer. Many new features and innovations have been incorporated into MARS: the translation is based on understanding the meaning of the sentence; the semantic parser uses a statistical model and is trained from a semantically annotated corpus; the output of the semantic parser is used to select a more specific language model to refine speech recognition performance; and the NLG component uses a statistical model and is trained from the same annotated corpus. These features give MARS robustness to speech disfluencies and recognition errors, tighter integration of semantic information into speech recognition, and portability to new languages and domains. These advantages are verified by our experimental results.
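The abstract describes a cascade of statistically trained components; a minimal sketch of how such a cascade composes is shown below. Every stub, output format, and example string is a placeholder assumption rather than MARS's actual behavior.

```python
# Hypothetical sketch of the speech-to-speech cascade described in the abstract.
# Every component below is a placeholder stub; MARS's statistical models, its
# interlingua tree format, and the example outputs are not reproduced here.

def recognize(audio: bytes) -> list[str]:
    """ASR stub: return n-best hypotheses for the input utterance."""
    return ["i want to fly to beijing on friday",
            "i want a fly to beijing on friday"]

def rescore(hypotheses: list[str]) -> str:
    """Re-score ASR hypotheses with semantic information (placeholder: keep the first)."""
    return hypotheses[0]

def semantic_parse(text: str) -> dict:
    """Semantic parser stub: produce a language-independent semantic tree."""
    return {"intent": "BOOK_FLIGHT", "destination": "beijing", "date": "friday"}

def generate_target(tree: dict) -> str:
    """NLG stub: generate a target-language sentence from the semantic tree
    (slot values are left untranslated in this toy example)."""
    return f"请预订{tree['date']}飞往{tree['destination']}的航班"

def synthesize(sentence: str) -> bytes:
    """TTS stub."""
    return sentence.encode("utf-8")

best_hypothesis = rescore(recognize(b"<audio>"))
interlingua = semantic_parse(best_hypothesis)
print(synthesize(generate_target(interlingua)))
```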

14.
This paper proposes a natural language processing system model for the domain of ancient Chinese architecture. It is used in a system that automatically generates ancient-architecture animations, where it computes domain semantic results from simple Chinese descriptions. The model has three parts: preprocessing, general semantic computation, and domain-oriented semantic computation for ancient architecture. Word segmentation and syntactic analysis are performed by calling Stanford's Chinese word segmentation and parsing tools; general semantic computation is implemented in Prolog; and the final step computes the ancient-architecture components together with their assembly order, sizes, and positions, that is, the domain-oriented semantic computation.
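As a rough illustration of the kind of domain-level semantic result the abstract describes (each component with its assembly order, size, and position), one could use a structure like the following; the field names and example values are invented, not the system's actual output format.

```python
# Hypothetical sketch of the domain-level semantic result described in the abstract:
# each architectural component with its assembly order, size, and position.
# Field names and example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Component:
    name: str                             # e.g. a column, beam, or bracket set
    order: int                            # assembly order in the animation
    size: tuple[float, float, float]      # width, depth, height (metres)
    position: tuple[float, float, float]  # x, y, z in the scene

scene = [
    Component("柱", order=1, size=(0.4, 0.4, 3.0), position=(0.0, 0.0, 0.0)),
    Component("梁", order=2, size=(4.0, 0.3, 0.3), position=(0.0, 0.0, 3.0)),
]
for c in sorted(scene, key=lambda c: c.order):
    print(c)
```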

15.
A Natural Language Query Interface for Databases   Cited: 9 (self: 2, others: 7)
The spread of database technology has raised users' expectations for database application interfaces. Earlier kinds of interfaces require users to have a fairly high level of computer knowledge and to undergo some training, which wastes human and material resources and hinders the popularization of computers. This paper explores a natural language interface that is more convenient and concise and can be used without prior training.

16.
Design and Implementation of the Campus Navigation System EasyNav   Cited: 10 (self: 0, others: 10)
This paper describes the design and implementation of EasyNav, a spoken dialogue system for campus navigation. After analyzing the characteristics and requirements of spoken dialogue systems, we propose a rule-based language understanding pipeline suited to dialogue systems. In this pipeline, syntactic analysis uses a GLR parser over a context-free grammar (CFG) to obtain sentence structure features in service of semantic analysis, with the syntactic rules balancing coverage against accuracy. Semantic analysis uses template matching under syntactic constraints, aiming to capture the speaker's intent and to resolve ambiguities introduced by syntactic analysis. The advantage of this design is that the system is easy to build and easy to extend.
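A toy illustration of template matching for speaker intent in this spirit is given below; the templates and slot names are made up, since EasyNav's actual rules are not given in the abstract.

```python
# Hypothetical sketch of template matching for speaker intent, in the spirit of the
# abstract's semantic analysis stage. Templates and slots are invented examples.
import re

TEMPLATES = [
    ("ASK_ROUTE",    re.compile(r"到(?P<place>.+?)怎么走")),
    ("ASK_LOCATION", re.compile(r"(?P<place>.+?)在哪")),
]

def match_intent(utterance: str) -> dict:
    """Return the first template whose pattern matches, with its filled slots."""
    for intent, pattern in TEMPLATES:
        m = pattern.search(utterance)
        if m:
            slots = {k: v for k, v in m.groupdict().items() if v}
            return {"intent": intent, "slots": slots}
    return {"intent": "UNKNOWN", "slots": {}}

print(match_intent("图书馆在哪里"))  # {'intent': 'ASK_LOCATION', 'slots': {'place': '图书馆'}}
```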

17.
PAU is an all-paths, chart-based unification parser that uses the same uniform representation for regular syntax, irregular syntax such as idioms, and semantics. PAU's representation has very little redundancy, simplifying the task of adding new semantics and syntax to PAU's knowledge base. PAU uses relations between the syntax and semantics to avoid the proliferation of rules found in semantic grammars. By encoding semantics at the same level of representation as syntax, PAU is able to use semantic constraints early in the parse to eliminate semantically anomalous syntactic interpretations. Examples are given to show how PAU can handle the many eccentricities of different idioms using the same mechanisms as are used to handle regular syntax and semantics. These include the ability of some idioms, but not others of the same syntactic form, to undergo passivization, particle movement, action nominalization, indirect object movement, modification by adjectives, gerundive nominalization, prepositional phrase preposing, and topicalization. PAU's representation is bidirectional and is also used by a companion generator. PAU is designed to be efficient, runs in real time on typical workstations, and is being used in a number of natural language systems.

18.
Parsers, whether constructed by hand or automatically via a parser generator tool, typically need to compute some useful semantic information in addition to the purely syntactic analysis of their input. Semantic actions may be added to parsing code by hand, or the parser generator may have its own syntax for annotating grammar rules with semantic actions. In this paper, we take a functional programming view of such actions. We use concepts from the semantics of mostly functional programming languages and adapt them to give meaning to the actions of the parser. Specifically, the semantics is inspired by the categorical semantics of lambda calculi and the use of premonoidal categories for the semantics of effects in programming languages. This framework is then applied to our leading example, the transformation of grammars to eliminate left recursion. The syntactic transformation of left-recursion elimination leads to a corresponding semantic transformation of the actions for the grammar. We prove the semantic transformation correct and relate it to continuation passing style, a widely studied transformation in lambda calculi and functional programming. As an idealization of the input language of parser generators, we define a call-by-value calculus with first-order functions and a type-and-effect system where the effects are given by sequences of grammar symbols. The account of left-recursion elimination is then extended to this calculus.  相似文献   
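As a purely syntactic reminder of the transformation the paper studies, the sketch below performs the standard rewrite A → A α | β ⇒ A → β A′, A′ → α A′ | ε on a toy grammar; it does not touch the semantic actions, which are the paper's actual contribution.

```python
# Sketch of standard immediate left-recursion elimination on a BNF-style grammar.
# This only rewrites the syntax; the paper's corresponding transformation of the
# semantic actions is not reproduced here.

def eliminate_left_recursion(grammar: dict[str, list[list[str]]]) -> dict[str, list[list[str]]]:
    """Rewrite A -> A a | b   into   A -> b A', A' -> a A' | []  ([] is epsilon)."""
    result = {}
    for nt, productions in grammar.items():
        recursive = [p[1:] for p in productions if p and p[0] == nt]
        non_recursive = [p for p in productions if not p or p[0] != nt]
        if not recursive:
            result[nt] = productions
            continue
        fresh = nt + "'"
        result[nt] = [p + [fresh] for p in non_recursive]
        result[fresh] = [alpha + [fresh] for alpha in recursive] + [[]]
    return result

# Classic example: E -> E + T | T
grammar = {"E": [["E", "+", "T"], ["T"]]}
print(eliminate_left_recursion(grammar))
# {'E': [['T', "E'"]], "E'": [['+', 'T', "E'"], []]}
```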

19.
Video databases have become popular in various areas due to recent advances in technology. Video archive systems need user-friendly interfaces to retrieve video frames. In this paper, a natural language processing (NLP) interface to a video database system is described. The video database is based on a content-based spatio-temporal video data model. The data model focuses on semantic content, which includes objects, activities, and spatial properties of objects. Spatio-temporal relationships between video objects, as well as trajectories of moving objects, can be queried with this data model. In this video database system, a natural language interface enables flexible querying. Queries, given as English sentences, are parsed using a link parser. The semantic representations of the queries are extracted from their syntactic structures using information extraction techniques, and are then used to call the related parts of the underlying video database system to return the query results. Not only exact matches but also similar objects and activities are returned from the database, with the help of a conceptual ontology module. This module is implemented using a distance-based method of semantic similarity search over the domain-independent semantic ontology WordNet.
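A sketch of distance-based semantic similarity over WordNet in the spirit of the conceptual ontology module is shown below; it assumes NLTK with its WordNet data installed, and the helper function, labels, and threshold are illustrative rather than the paper's implementation.

```python
# Hypothetical sketch of distance-based semantic similarity over WordNet, in the
# spirit of the conceptual ontology module described above (not the paper's code).
# Assumes NLTK and its WordNet data are installed: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def most_similar_labels(query: str, labels: list[str], threshold: float = 0.2):
    """Return database object labels whose best WordNet path similarity to the
    query word exceeds the threshold, so near-synonyms also match."""
    query_synsets = wn.synsets(query)
    hits = []
    for label in labels:
        best = max(((q.path_similarity(l) or 0.0)
                    for q in query_synsets
                    for l in wn.synsets(label)), default=0.0)
        if best >= threshold:
            hits.append((label, best))
    return sorted(hits, key=lambda x: -x[1])

# e.g. a query about "car" should also retrieve objects annotated as "automobile"
print(most_similar_labels("car", ["automobile", "truck", "building"]))
```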

20.
Design and Implementation of NChiql, a Natural Language Query System for Chinese Databases   Cited: 15 (self: 0, others: 15)
Research on natural language querying of Chinese databases has two basic goals: first, to solve the portability and usability problems facing NLIDBs; second, to propose methods specific to processing Chinese natural language queries. To this end, the Chinese database natural language query system NChiql was developed. From the perspective of overall design, this paper introduces NChiql's portable architecture, Chinese natural language query analysis, database-language-oriented analysis and translation of natural language queries, and intelligent interface management. Experiments show that the system has good usability and an efficient, robust language analyzer.
