19 similar documents found; search took 31 ms
1.
The Basic Architecture of a CAI Development Platform — Total citations: 4 (self: 0, others: 4)
陈一秀 《计算机工程与应用》2001,37(9):65-67
Although many concrete CAI systems have been designed, little has been written on how the overall architecture of a CAI system, and of a CAI development platform, should be organized. This paper studies that question, in particular its two most central sub-problems: how to acquire knowledge efficiently, and how a CAI system performs dynamic global planning.
2.
Ontology-Based Natural Language Understanding — Total citations: 9 (self: 0, others: 9)
This paper analyzes the basic models of traditional knowledge-based natural language understanding (KB-NLU) and of Ontology-based natural language understanding systems. An Ontology is a description of a conceptualization; three types of combination between an Ontology and linguistic knowledge are distinguished: world-knowledge, lexical-semantic, and syntactic-semantic.
3.
An Automatic Question-Answering System Based on Natural Language Understanding — Total citations: 2 (self: 0, others: 2)
An automatic question-answering system (QAS) is a high-performance, Internet-based software system. Its core technologies are those of natural language understanding, including the construction of knowledge bases and corpora, text segmentation and tagging, and syntactic and semantic analysis of sentences. This paper focuses on the semantic-network representation of knowledge in the QAS, the LSF randomized syntactic-analysis model, and the structure and components of the system; parameter training was also performed for the LSF model. Through the development of a restricted-domain QAS project for a bank, these techniques were shown to be feasible, efficient, and generalizable.
4.
A Knowledge Base Management System Based on Domain Natural Language Understanding — Total citations: 1 (self: 0, others: 1)
Drawing on the characteristics of natural language understanding in the mathematics domain, this paper implements a knowledge base management system based on domain natural language understanding, and presents the basic content, workflow, and methods of anomaly detection for a concept knowledge base. It analyzes redundancy and inconsistency in the knowledge base in particular, gives corresponding detection methods, and applies them to the concrete organization and implementation of the knowledge base, solving the redundancy and consistency problems to a certain extent. The aim is to provide a natural language interface for a middle-school mathematics intelligent tutoring system and to build a domain knowledge base as input to the understanding and modeling system. The system has been applied successfully in an intelligent tutoring system for the mathematics domain.
8.
张翔;何世柱;张元哲;刘康;赵军 《中文信息学报》2024,38(12):1-17
Semantics is the central object of study in natural language understanding. There are multiple routes to endowing machines with semantics, and hence multiple semantic representation methods have arisen. These routes, however, often lack connections with one another and lie scattered across different research areas and tasks: query graphs and SPARQL in knowledge base question answering, SQL in table question answering, frame semantics and AMR graphs in sentence-level semantic analysis, and so on. Although similar in form, the corresponding lines of research lack coordination. As research has deepened, drawbacks have surfaced: semantic representations are rarely compared with one another, and in concrete tasks they are hard to choose among and differ greatly in performance. To alleviate this, this survey reviews the semantic representations common across these tasks and, taking the relation between the world and language as its organizing principle, re-divides them into two broad classes: "extensional (world-referring) semantic representations" and "language-internal semantic representations". It summarizes the research hotspots of the former and the methods for designing and comparing new semantic representations, and briefly discusses the recent debate over whether the latter truly contain semantics. Finally, it surveys work that combines the two classes of semantic representation and finds such methods to hold strong potential.
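To make the survey's "extensional" representations concrete, here is a minimal illustrative sketch (not from the surveyed paper; the toy data and function names are invented) that answers the same question through a table query, as in table QA with SQL, and through a triple pattern, as in KBQA with SPARQL query graphs:

```python
# Illustrative only: the same question, "What is the capital of France?",
# answered via two extensional semantic representations.

table = [  # table-QA view: rows and columns
    {"country": "France", "capital": "Paris"},
    {"country": "Japan",  "capital": "Tokyo"},
]
triples = [  # KBQA view: (subject, predicate, object) facts
    ("France", "capital", "Paris"),
    ("Japan",  "capital", "Tokyo"),
]

def answer_from_table(rows, country):
    # analogous to: SELECT capital FROM t WHERE country = ?
    return next(r["capital"] for r in rows if r["country"] == country)

def answer_from_triples(facts, country):
    # analogous to the SPARQL pattern: ?x :capital ?o  with ?x bound
    return next(o for s, p, o in facts if s == country and p == "capital")

print(answer_from_table(table, "France"))    # Paris
print(answer_from_triples(triples, "France"))  # Paris
```

Both representations denote the same extensional fact; the survey's point is that such formally similar representations are studied in largely disconnected communities.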
10.
Computer Understanding of Chinese Natural-Language Information Queries — Total citations: 7 (self: 0, others: 7)
Starting from a two-level semantic analysis structure for Chinese (deep level: intent orientation; surface level: semantic orientation), this paper analyzes the theoretical methods and rules for computer understanding of four types of Chinese interrogative sentences. After correct segmentation of the Chinese text, each word is annotated with its ontological speech and ontological behavior according to the intent and semantic orientations; words are then combined into phrases that fit the intended meaning, ontological behaviors are converted into ontological speech, and the result is reduced to the semantics of a domain-specific database. The approach is finally validated with an experimental system.
11.
Information sources such as relational databases, spreadsheets, XML, JSON, and Web APIs contain a tremendous amount of structured data that can be leveraged to build and augment knowledge graphs. However, they rarely provide a semantic model to describe their contents. Semantic models of data sources represent the implicit meaning of the data by specifying the concepts and the relationships within the data. Such models are the key ingredients to automatically publish the data into knowledge graphs. Manually modeling the semantics of data sources requires significant effort and expertise, and although desirable, building these models automatically is a challenging problem. Most of the related work focuses on semantic annotation of the data fields (source attributes). However, constructing a semantic model that explicitly describes the relationships between the attributes in addition to their semantic types is critical. We present a novel approach that exploits the knowledge from a domain ontology and the semantic models of previously modeled sources to automatically learn a rich semantic model for a new source. This model represents the semantics of the new source in terms of the concepts and relationships defined by the domain ontology. Given some sample data from the new source, we leverage the knowledge in the domain ontology and the known semantic models to construct a weighted graph that represents the space of plausible semantic models for the new source. Then, we compute the top k candidate semantic models and suggest to the user a ranked list of the semantic models for the new source. The approach takes into account user corrections to learn more accurate semantic models on future data sources. Our evaluation shows that our method generates expressive semantic models for data sources and services with minimal user input.
These precise models make it possible to automatically integrate the data across sources and provide rich support for source discovery and service composition. They also make it possible to automatically publish semantic data into knowledge graphs.
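The ranking step described above can be sketched in a few lines. This is a hypothetical, drastically simplified stand-in (all attribute and concept names are invented): each attribute-to-concept mapping is weighted by how often it appeared in previously modeled sources, and candidate models are ranked by total weight to produce the top-k list suggested to the user.

```python
from collections import Counter
from itertools import product

# Mappings observed in known semantic models: attribute -> ontology concept.
known_models = [
    {"name": "Person.name",  "city": "City.label"},
    {"name": "Person.name",  "city": "Place.name"},
    {"name": "Museum.label", "city": "City.label"},
]

def candidate_models(attributes, known, k=2):
    # Weight each (attribute, concept) edge by its frequency in known models.
    weights = Counter((a, c) for m in known for a, c in m.items())
    options = {a: {m[a] for m in known if a in m} for a in attributes}
    scored = []
    for combo in product(*(sorted(options[a]) for a in attributes)):
        model = dict(zip(attributes, combo))
        score = sum(weights[(a, c)] for a, c in model.items())
        scored.append((score, model))
    scored.sort(key=lambda t: -t[0])          # highest total weight first
    return scored[:k]

top = candidate_models(["name", "city"], known_models)
print(top[0])  # (4, {'name': 'Person.name', 'city': 'City.label'})
```

The actual system searches a weighted graph over the domain ontology rather than enumerating all combinations, but the ranking intuition is the same.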
12.
While the early phase of the Semantic Web put emphasis on conceptual modeling through ontology classes, and the recent years saw the rise of loosely structured, instance-level knowledge graphs (used even for modeling concepts), in this paper, we focus on a third kind of concept modeling: via code lists, primarily those embedded in ontologies and vocabularies. We attempt to characterize the candidate structures for code lists based on our observations in OWL ontologies. Our main contribution is then an approach implemented as a series of SPARQL queries and a lightweight web application that can be used to browse and detect potential code lists in ontologies and vocabularies, in order to extract and enhance them, and to store them in a stand-alone knowledge base. The application allows inspecting query results coming from the Linked Open Vocabularies catalog dataset. In addition, we describe a complementary bottom-up analysis of potential code lists. We also provide in this paper a demonstration of the dominant nature of embedded codes from the aspect of ontological universals and their alternatives for modeling code lists.
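A much-simplified, hypothetical version of the detection idea can be shown over plain triples (the real approach runs SPARQL queries against OWL ontologies; the data and heuristic below are invented for illustration): a class whose members are only ever enumerated as individuals, never the subject of any other statement, is a candidate code list.

```python
# Illustrative heuristic for spotting candidate code lists in RDF-like data.
triples = [
    ("ex:Red",   "rdf:type", "ex:Color"),
    ("ex:Green", "rdf:type", "ex:Color"),
    ("ex:Blue",  "rdf:type", "ex:Color"),
    ("ex:Alice", "rdf:type", "ex:Person"),
    ("ex:Alice", "ex:likes", "ex:Red"),
]

def candidate_code_lists(facts, min_members=2):
    members = {}
    for s, p, o in facts:
        if p == "rdf:type":
            members.setdefault(o, set()).add(s)
    # Subjects that carry statements beyond their type assertion.
    described = {s for s, p, o in facts if p != "rdf:type"}
    return {cls for cls, ms in members.items()
            if len(ms) >= min_members and not ms & described}

print(candidate_code_lists(triples))  # {'ex:Color'}
```

Here `ex:Color` qualifies (three bare individuals), while `ex:Person` does not, since `ex:Alice` is further described.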
13.
Traditional corpora used to study the semantics of out-of-vocabulary (OOV) words suffer from many limitations, such as slow updates and language dependence. To address this, a knowledge-graph-based approach to the semantics of Chinese OOV words is proposed. A knowledge graph is a semantic network of entities, concepts, and semantic relations; it is rich in entities, and entities and relations can be added very easily, which makes it possible to compensate for the slow updates of traditional corpora. After a thorough study of the structure of knowledge graphs and the methods for acquiring and processing their data, exploratory work on KG-based OOV semantics is carried out. Finally, using Baidu Baike (currently the largest Chinese knowledge graph) as the corpus resource, experiments are run under the same semantic analysis model on both the knowledge graph and a traditional corpus; the results are analyzed and improvements are proposed.
14.
Tabular data often refers to data organized in a table with rows and columns. This data format is widely used on the Web and within enterprise data repositories. Tables potentially contain rich semantic information that still needs to be interpreted. The process of extracting meaningful information out of tabular data with respect to a semantic artefact, such as an ontology or a knowledge graph, is often referred to as Semantic Table Interpretation (STI) or Semantic Table Annotation. In this survey paper, we aim to provide a comprehensive and up-to-date review of the state of the art in the tasks and methods proposed so far for STI. First, we propose a new categorization that reflects the heterogeneity of table types one can encounter, revealing the different challenges that need to be addressed. Next, we define five major sub-tasks that STI deals with, even though the literature has mostly focused on three of them so far. We review and group the many approaches that have been proposed into three macro families, and we discuss their performance and limitations with respect to the various datasets and benchmarks proposed by the community. Finally, we detail the remaining scientific barriers to truly automatic interpretation of any type of table found on the wild Web.
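Two of the STI sub-tasks can be illustrated on a toy table (this sketch is not a method from the survey; the tiny KG index and entity IDs are invented): Cell-Entity Annotation (CEA) by label lookup, and Column-Type Annotation (CTA) by majority vote over the types of the matched entities.

```python
from collections import Counter

kg = {  # label -> (entity id, type); a toy stand-in for a real KG index
    "Paris":  ("Q90",  "City"),
    "Berlin": ("Q64",  "City"),
    "France": ("Q142", "Country"),
}

column = ["Paris", "Berlin", "Oslo"]  # "Oslo" is missing from the toy KG

# CEA: link each cell that has a matching label to its entity.
cea = {cell: kg[cell][0] for cell in column if cell in kg}

# CTA: vote the column type from the types of the linked entities.
types = Counter(kg[cell][1] for cell in column if cell in kg)
cta = types.most_common(1)[0][0]

print(cea)  # {'Paris': 'Q90', 'Berlin': 'Q64'}
print(cta)  # City
```

Real systems add candidate generation, disambiguation, and scoring, which is where the approaches surveyed here differ.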
15.
In this paper, we present LinkingPark, an automatic semantic annotation system for matching tabular data to knowledge graphs. LinkingPark is designed as a modular framework that handles Cell-Entity Annotation (CEA), Column-Type Annotation (CTA), and Columns-Property Annotation (CPA) together. It is built upon our previous SemTab 2020 system, which won the 2nd prize among 28 teams after four rounds of evaluation. Moreover, the system is unsupervised, stand-alone, and flexible enough for multilingual support. Its backend offers an efficient RESTful API for programmatic access, as well as an Excel Add-in for ease of use. Users can interact with LinkingPark in near real time, further demonstrating its efficiency.
16.
With the advancement of scientific and engineering research, a huge amount of academic literature has accumulated. Manually reviewing the existing literature is the main way to explore the knowledge embedded in it, and the process is time-consuming and labor-intensive. As the quantity of literature grows exponentially, it becomes ever harder to cover all aspects of the literature with the traditional manual review approach. To overcome this drawback, bibliometric analysis is used to analyze the current situation and trends of a specific research field. In bibliometric analysis, only a few key fields (e.g., authors, publishers, journals, and citations) are usually used as inputs; information beyond those fields is not extracted, although that neglected information (e.g., the abstract) may carry more detailed knowledge of the article. To tackle this problem, this study proposes an automatic literature knowledge graph and reasoning network modeling framework based on ontology and Natural Language Processing (NLP), to facilitate efficient knowledge exploration from literature abstracts. In this framework, a representation ontology is proposed to characterize literature abstract data as four knowledge elements (background, objectives, solutions, and findings), and NLP technology is used to extract the ontology instances from abstracts automatically. Based on the representation ontology, a four-space integrated knowledge graph is built using NLP technology. Then, a reasoning network is generated according to the reasoning mechanism defined in the proposed ontology model. To validate the framework, a case study is conducted on the literature in the field of construction management. The case study proves that the proposed ontology model can represent the knowledge embedded in literature abstracts, and that the ontology elements can be automatically extracted by NLP models.
The proposed framework can enhance bibliometric analysis by enabling more knowledge to be explored from the literature.
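The four-element extraction step can be sketched with a rule-based stand-in (hypothetical: the real framework uses trained NLP models, not fixed keywords, and these cue phrases are invented): each sentence of an abstract is assigned to one of the four knowledge elements by cue-phrase matching, defaulting to "background".

```python
# Illustrative cue-phrase classifier for the four knowledge elements.
CUES = {
    "objectives": ("this study aims", "we aim", "the objective"),
    "solutions":  ("we propose", "is proposed", "our approach"),
    "findings":   ("results show", "we find", "the case study proves"),
}

def classify(sentence):
    s = sentence.lower()
    for element, cues in CUES.items():
        if any(c in s for c in cues):
            return element
    return "background"  # default element when no cue matches

abstract = [
    "Manual literature review is time-consuming.",
    "This study aims to build a literature knowledge graph.",
    "We propose an ontology-based NLP framework.",
    "The case study proves the framework is effective.",
]
labels = [classify(s) for s in abstract]
print(labels)  # ['background', 'objectives', 'solutions', 'findings']
```

Instances classified this way would then populate the four-space knowledge graph described in the abstract.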
17.
Modeling subject knowledge is a huge undertaking. The main current problems are that knowledge bases cannot be well shared and reused, and that semantic reasoning and retrieval are hard to achieve. This work applies ontology technology to subject knowledge modeling, constructs a partial course ontology, and implements reasoning and querying over that ontology.
18.
《计算机科学》2025,52(1)
Knowledge graphs greatly improve the accessibility of information by converting complex Web information into an easily understood structured form. Knowledge graph completion further improves the completeness of a knowledge graph, markedly enhancing the performance and user experience of general-domain applications such as intelligent question answering and recommender systems. However, most existing completion methods focus on triple instances with few relation types and simple semantics, and fail to exploit the potential of knowledge graphs for handling n-ary relations and complex semantics. To address this, an n-ary-relation knowledge graph completion method driven by large language models (LLMs) is proposed. It combines the deep language understanding of LLMs with the structural properties of knowledge graphs to capture n-ary relations effectively and to understand complex semantic scenarios. In addition, a chain-of-thought prompt engineering strategy is introduced to improve the accuracy of the completion task. Experiments on two public knowledge graph datasets show significant improvements.
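The chain-of-thought prompting idea can be sketched as prompt assembly for an n-ary fact with one missing element (hypothetical: the template, role names, and fact are invented, and no actual LLM call is made):

```python
# Illustrative chain-of-thought prompt builder for n-ary KG completion.
def cot_prompt(fact):
    # fact: role -> value, with exactly one value set to None (to predict)
    target = next(r for r, v in fact.items() if v is None)
    known = "; ".join(f"{r}: {v}" for r, v in fact.items() if v is not None)
    return (
        f"Known elements of the fact: {known}.\n"
        f"Question: what is the missing '{target}'?\n"
        "Let's think step by step, then give the answer on the final line."
    )

fact = {"event": "award ceremony", "winner": "Marie Curie",
        "award": "Nobel Prize in Physics", "year": None}
prompt = cot_prompt(fact)
print(prompt.splitlines()[1])  # Question: what is the missing 'year'?
```

The prompt would be sent to an LLM, whose final line is parsed as the predicted element; the "think step by step" instruction is what the chain-of-thought strategy adds over a direct query.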