Similar Documents
20 similar documents found (search time: 15 ms)
1.
Optimizing Semantic Web Service Discovery with Clustering (cited by 1: 0 self-citations, 1 external)
To address the problem that traditional Web services lack semantics, causing registries to return imprecise results, this paper proposes a solution that uses OWL-S to provide semantic support and clusters Web services according to semantic similarity. The method applies OWL-S to describe Web services semantically and uses the single-link agglomerative hierarchical clustering algorithm to group similar Web services, so that the most suitable service can be located and returned quickly, improving the precision of service discovery.
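A minimal sketch of the single-link agglomerative clustering step described above, written in plain Python; the service names, the precomputed semantic-similarity matrix, and the merge threshold are invented placeholders rather than values from the paper.

```python
# Single-link agglomerative clustering over a pairwise similarity matrix.
def single_link_clustering(names, sim, threshold):
    """Merge clusters while the best single-link similarity stays >= threshold."""
    clusters = [{i} for i in range(len(names))]
    while len(clusters) > 1:
        # Single-link similarity between two clusters = max pairwise similarity.
        score, a, b = max(
            ((sim[i][j], a, b)
             for a in range(len(clusters)) for b in range(a + 1, len(clusters))
             for i in clusters[a] for j in clusters[b]),
            key=lambda t: t[0],
        )
        if score < threshold:
            break
        clusters[a] |= clusters[b]
        del clusters[b]
    return [[names[i] for i in sorted(c)] for c in clusters]

services = ["BookFlight", "ReserveFlight", "CheckWeather"]
sim = [[1.0, 0.9, 0.2],
       [0.9, 1.0, 0.3],
       [0.2, 0.3, 1.0]]
print(single_link_clustering(services, sim, threshold=0.5))
# -> [['BookFlight', 'ReserveFlight'], ['CheckWeather']]
```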

2.
A Semantic Web Service Discovery Framework Based on WSMO Quality of Service (cited by 1: 0 self-citations, 1 external)
OWL-S and WSMO are currently the two most popular frameworks for describing Web services semantically, but both describe services mainly from a functional perspective and lack an explicit characterization of service quality, making it difficult for requesters to obtain the best service. To address this problem, the WSMO specification is extended with a quality-of-service metamodel, WSMO-QoS, together with an ontology vocabulary for QoS. On this basis, a QoS-based semantic Web service discovery framework is proposed, along with an algorithm that matches services at three levels: basic description, IOPE, and QoS. Finally, experimental results verify the effectiveness of the matching algorithm.
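A minimal sketch of a three-level matchmaker in the spirit of the framework described above: candidates are filtered by basic description and IOPE compatibility, then ranked by a weighted QoS score. The attribute names, weights, and candidate data are illustrative assumptions, not the WSMO-QoS metamodel itself.

```python
def qos_score(qos, weights):
    # Higher is better; response time is inverted so lower latency scores higher.
    return (weights["reliability"] * qos["reliability"]
            + weights["availability"] * qos["availability"]
            + weights["latency"] * (1.0 / (1.0 + qos["response_time_ms"])))

def match(request, candidates, weights):
    # Level 1: basic description (service category).
    stage1 = [c for c in candidates if c["category"] == request["category"]]
    # Level 2: IOPE compatibility (service inputs covered by the request,
    # requested outputs covered by the service).
    stage2 = [c for c in stage1
              if c["inputs"] <= request["inputs"]
              and request["outputs"] <= c["outputs"]]
    # Level 3: rank the remaining candidates by QoS.
    return sorted(stage2, key=lambda c: qos_score(c["qos"], weights), reverse=True)

candidates = [
    {"name": "WeatherA", "category": "weather", "inputs": {"city"}, "outputs": {"forecast"},
     "qos": {"reliability": 0.95, "availability": 0.99, "response_time_ms": 120}},
    {"name": "WeatherB", "category": "weather", "inputs": {"city"}, "outputs": {"forecast"},
     "qos": {"reliability": 0.90, "availability": 0.97, "response_time_ms": 40}},
]
request = {"category": "weather", "inputs": {"city"}, "outputs": {"forecast"}}
weights = {"reliability": 0.4, "availability": 0.4, "latency": 0.2}
for c in match(request, candidates, weights):
    print(c["name"], round(qos_score(c["qos"], weights), 3))
```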

3.
Ubiquitous computing paradigms involving social agents require principled selection of services, context-aware analysis, and satisfaction of requests, as well as dynamic interaction and negotiation with other agents. Synergies between semantic technologies and service discovery facilitate rich and formal representations of services and agent interactions as well as specialization and generalization of service needs. In this paper, we provide an extensive review of semantic Web service discovery, highlighting the state-of-the-art approaches, the key semantic formalisms employed, as well as benchmarks and testbeds for performance evaluation. Defining a generic framework for semantic service discovery, we describe the key tasks and criteria involved in agent-based computing. A detailed comparison of the popular discovery systems is performed with a discussion on trade-offs between existing approaches. We conclude by pointing out important research challenges to be addressed for next-generation service discovery by dynamic multi-agent systems in complex environments.

4.
5.
Patents are a type of intellectual property with ownership and monopolistic rights that are publicly accessible published documents, often with illustrations, registered by governments and international organizations. The registration allows people familiar with the domain to understand how to re-create the new and useful invention but restricts manufacturing unless the owner licenses the patent or enters into a legal agreement to sell its ownership. Patents reward the costly research and development efforts of inventors while spreading new knowledge and accelerating innovation. This research uses artificial intelligence natural language processing, deep learning techniques, and machine learning algorithms to extract the essential knowledge of patent documents within a given domain as a means to evaluate their worth and technical advantage. Manual patent abstraction is a time-consuming, labor-intensive, and subjective process that becomes cost- and outcome-ineffective as the size of the patent knowledge domain increases. This research develops an intelligent patent summarization methodology using machine learning approaches that allows patent domains of extremely large sizes to be summarized effectively and objectively, especially in cases where the cost and time requirements make manual summarization infeasible. The system learns to automatically summarize patent documents with natural language texts for any given technical domain. The machine learning solution identifies technical key terminologies (words, phrases, and sentences) in the context of the semantic relationships among training patents and corresponding summaries as the core of the summarization system. To ensure the high performance of the proposed methodology, ROUGE metrics are used to evaluate the precision, recall, accuracy, and consistency of knowledge generated by the summarization system. The smart machinery technologies domain, with the sub-domains of control intelligence, sensor intelligence, and intelligent decision-making, provides the case studies for training the patent summarization system. The cases use 1708 training pairs of patents and summaries, while testing uses 30 randomly selected patents. The case implementation and verification show that the summary reports achieve average precision and recall ratios of 90% and 84%, respectively.
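A minimal sketch of the ROUGE-1 precision/recall computation used to evaluate generated summaries, implemented with simple whitespace tokenization and clipped unigram counts; the example sentences are invented, and production systems typically use a dedicated ROUGE package.

```python
from collections import Counter

def rouge_1(candidate, reference):
    # Count clipped unigram overlaps between candidate and reference summaries.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

generated = "a sensor module monitors spindle vibration for predictive maintenance"
reference = "the invention monitors spindle vibration with a sensor module to predict failures"
print(rouge_1(generated, reference))
```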

6.
Natural language and databases are core components of information systems. They are related to each other because they share the same purpose: conceptualizing aspects of the real world in order to deal with them in some way. Natural language processing (NLP) techniques may substantially enhance most phases of the information system lifecycle, starting with requirements analysis, specification, and validation, and going up to conflict resolution, result processing, and presentation. Furthermore, natural-language-based query languages and user interfaces facilitate access to information for anyone and allow for new paradigms in the usage of computerized services. This paper investigates the use of NLP techniques in the design phase of information systems. It then reports on database querying and information retrieval enhanced with NLP.

7.
This paper presents some applications of evolutionary programming to different tasks of natural language processing (NLP). First, the work defines a general scheme for applying evolutionary techniques to NLP, which provides the main guidelines for designing the elements of the algorithm. This scheme largely relies on the success of probabilistic approaches to NLP. Second, the scheme is illustrated with two fundamental applications in NLP: tagging, i.e., the assignment of lexical categories to words, and parsing, i.e., the determination of the syntactic structure of sentences. In both cases, the elements of the evolutionary algorithm are described in detail, as well as the results of different experiments carried out to show the viability of this evolutionary approach for tasks as complex as those of NLP.
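A minimal sketch of the evolutionary tagging idea: individuals are tag sequences for a short sentence, fitness combines lexical and tag-transition probabilities, and mutation resamples one tag. All probabilities and tag sets here are invented for illustration, not learned from a corpus.

```python
import random

random.seed(0)
sentence = ["time", "flies", "fast"]
lexicon = {"time": {"NOUN": 0.7, "VERB": 0.3},
           "flies": {"VERB": 0.6, "NOUN": 0.4},
           "fast": {"ADV": 0.8, "ADJ": 0.2}}
trans = {("NOUN", "VERB"): 0.5, ("VERB", "ADV"): 0.4}   # unseen pairs get 0.05

def fitness(tags):
    # Probabilistic fitness: product of lexical and transition probabilities.
    score = 1.0
    for word, tag in zip(sentence, tags):
        score *= lexicon[word].get(tag, 0.01)
    for a, b in zip(tags, tags[1:]):
        score *= trans.get((a, b), 0.05)
    return score

def mutate(tags):
    # Resample the tag of one randomly chosen word.
    i = random.randrange(len(tags))
    new = list(tags)
    new[i] = random.choice(list(lexicon[sentence[i]]))
    return new

population = [[random.choice(list(lexicon[w])) for w in sentence] for _ in range(20)]
for _ in range(30):                       # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]
best = max(population, key=fitness)
print(list(zip(sentence, best)))          # e.g. NOUN / VERB / ADV
```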

8.
9.
The description, deployment, discovery, and invocation of services for spatial data processing are key issues in service-oriented spatial data processing, directly affecting how spatial analysis and related data processing computations are implemented and how efficiently they execute. Building on the standard Web service model and following the OGC specifications, an implementation model for spatial data Web processing services is designed. On the basis of this spatial data analysis and Web processing service model, the resource structure, service invocation patterns, spatial analysis functions, and data processing workflow of the Web service are designed and defined. Taking the spatial buffer analysis algorithm as an example, an instance of the processing service model is implemented, and a complete approach to publishing, invoking, and computing spatial data processing services in a distributed network environment is given.
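A minimal sketch of the buffer operation that such a processing service would expose, using Shapely for the geometry; the request dictionary only mimics the shape of an OGC WPS Execute payload and is not the paper's service model.

```python
from shapely import wkt

def execute_buffer(request):
    # Parse the input geometry from WKT, buffer it, and return the result as WKT.
    geometry = wkt.loads(request["geometry_wkt"])
    return geometry.buffer(request["distance"]).wkt

request = {"identifier": "Buffer", "geometry_wkt": "POINT (30 10)", "distance": 5.0}
print(execute_buffer(request)[:60] + " ...")
```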

10.
A Web service discovery system consists of one or more service registries connected in a topology that stores and maintains service information, and the topology connecting the registries directly determines the scalability of the discovery system. Using a layered model, a unified dynamic tuple model is defined at the data layer to resolve the heterogeneity of different data sources and data models; unified publish and query APIs are provided at the abstraction layer to give a uniform access method; and a novel two-tier structure is built at the network layer to guarantee the scalability, flexibility, and robustness of the whole system, yielding a distributed Web service discovery method. A two-phase lookup algorithm adapted to the two-tier topology is presented, and its time, space, and message complexity are analyzed. Experimental results show that the method exhibits clear self-organization and good scalability, suits the autonomous, dynamically changing, and highly distributed nature of Web services, and that the two-phase lookup algorithm provides good service lookup capability.
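A minimal sketch of a two-phase lookup over a two-tier registry topology: phase one routes the query to a domain at the upper tier, phase two searches the member registries of that domain. The topology, categories, and service names are invented placeholders.

```python
# Tier 1: domains; tier 2: registries within each domain and their services.
topology = {
    "travel": {"reg-eu": ["BookFlight", "BookHotel"], "reg-us": ["RentCar"]},
    "finance": {"reg-asia": ["ConvertCurrency"]},
}

def lookup(category, name):
    domain = topology.get(category)            # phase 1: tier-1 routing by category
    if domain is None:
        return None
    for registry, services in domain.items():  # phase 2: tier-2 search within the domain
        if name in services:
            return registry
    return None

print(lookup("travel", "RentCar"))             # -> reg-us
```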

11.
The amount of information contained in databases available on the Web has grown explosively in recent years. This information, known as the Deep Web, is heterogeneous and dynamically generated by querying back-end (relational) databases through Web Query Interfaces (WQIs), a special type of HTML form. Accessing the information of the Deep Web is a great challenge because this information is usually not indexed by general-purpose search engines. Therefore, it is necessary to create efficient mechanisms to access, extract, and integrate the information contained in the Deep Web. Since WQIs are the only means of access to the Deep Web, their automatic identification plays an important role: it allows traditional search engines to increase their coverage and reach interesting information not available on the indexable Web. The accurate identification of Deep Web data sources is a key issue in the information retrieval process. In this paper we propose a new strategy for the automatic discovery of WQIs. This proposal makes an adequate selection of HTML elements extracted from HTML forms, which are used in a set of heuristic rules that help to identify WQIs. The strategy uses machine learning algorithms to classify searchable (WQI) and non-searchable (non-WQI) HTML forms, with a prototype selection algorithm that removes irrelevant or redundant data from the training set. The internal content of Web Query Interfaces was analyzed with the objective of identifying only those HTML elements that appear frequently and provide relevant information for WQI identification. For testing, we use three groups of datasets: two available at the UIUC repository and a new dataset that we created using a generic crawler supported by human experts, which includes advanced and simple query interfaces. The experimental results show that the proposed strategy outperforms previously reported work.
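A minimal sketch of the heuristic side of WQI detection: an HTML form is parsed with the Python standard library and a few simple rules (text inputs present, no password field, a search-like submit button) decide whether it looks like a query interface. The rules and thresholds are illustrative assumptions, not the paper's learned classifier.

```python
from html.parser import HTMLParser

class FormFeatures(HTMLParser):
    """Collect simple features from the <input> elements of an HTML form."""
    def __init__(self):
        super().__init__()
        self.text_inputs = 0
        self.password_inputs = 0
        self.submit_labels = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input":
            kind = a.get("type", "text").lower()
            if kind == "text":
                self.text_inputs += 1
            elif kind == "password":
                self.password_inputs += 1
            elif kind == "submit":
                self.submit_labels.append(a.get("value", "").lower())

def looks_like_wqi(html):
    features = FormFeatures()
    features.feed(html)
    search_button = any("search" in s or "find" in s for s in features.submit_labels)
    return features.text_inputs >= 1 and features.password_inputs == 0 and search_button

form = '<form><input type="text" name="q"><input type="submit" value="Search"></form>'
print(looks_like_wqi(form))   # -> True
```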

12.
The Service-Oriented Architecture (SOA) development paradigm has emerged to improve critical situations such as creating, modifying, and extending solutions in the domain of business process integration. Web services technologies are considered the main units of SOA, for both intra- and inter-enterprise communication. Nevertheless, SOA does not carry information about the events that occur in business processes, which are the main characteristic of supply chain management. Taking this into account, this paper proposes a middleware-oriented integrated architecture that uses semantic features (concretely, Linked Data) as a data provider to offer a brokerage service for the procurement of products in Supply Chain Management (SCM). As its main contribution, our system provides a hybrid architecture that combines features of both SOA and EDA, among others.

13.
Classification trees whose node labels are expressed in natural language are difficult for automated software agents to reason over. To address this problem, the natural-language label of each node is converted into a machine-recognizable logical expression through part-of-speech tagging, word-sense disambiguation, connective disambiguation, and definition and transformation in a constrained natural language. The whole classification tree is thereby turned into a lightweight ontology suitable for automated reasoning in semantic matching for data integration, document classification, and semantic search, promoting automated reasoning over ontology knowledge and laying a foundation for subsequent automatic text retrieval.
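A minimal sketch of turning a natural-language node label into a logical expression: stop words are dropped, the remaining tokens become concept atoms, and the connectives "and"/"or" become conjunction and disjunction. The stop-word list and operators are assumptions; the full pipeline with POS tagging and word-sense disambiguation is omitted.

```python
STOP = {"the", "of", "in", "for"}

def label_to_logic(label):
    # Keep content words, then join them with a conjunction or disjunction
    # depending on the connective found in the label.
    tokens = [t.lower() for t in label.split() if t.lower() not in STOP]
    op = " | " if "or" in tokens else " & "
    atoms = [t.capitalize() for t in tokens if t not in {"and", "or"}]
    return op.join(atoms)

print(label_to_logic("Journals and Magazines"))   # -> Journals & Magazines
print(label_to_logic("Cars or Motorcycles"))      # -> Cars | Motorcycles
```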

14.
Using Semantic Web technologies in complex scenarios requires that such technologies interoperate correctly by interchanging ontologies using the RDF(S) and OWL languages. This interoperability is not straightforward because of the high heterogeneity of Semantic Web technologies and, as the number of such technologies grows, affordable mechanisms for evaluating Semantic Web technology interoperability are needed to understand their current and future interoperability. This paper presents the OWL Interoperability Benchmarking, an international benchmarking activity that involved evaluating the interoperability of different Semantic Web technologies using OWL as the interchange language. It describes the evaluation resources used in this benchmarking activity, the OWL Lite Import Benchmark Suite and the IBSE tool, and presents how to use them for evaluating the OWL interoperability of Semantic Web technologies. Moreover, the paper offers an overview of the OWL interoperability results of the eight tools participating in the benchmarking: one ontology-based annotation tool (GATE), three ontology frameworks (Jena, KAON2, and SWI-Prolog), and four ontology development tools (Protégé Frames, Protégé OWL, SemTalk, and WebODE).
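A minimal sketch of the import/export round trip behind such interoperability benchmarking: an OWL snippet is parsed, re-serialized in another syntax, parsed again, and the two graphs are checked for isomorphism. rdflib is used here as a stand-in for the benchmarked tools; it is not one of them.

```python
from rdflib import Graph
from rdflib.compare import isomorphic

owl_rdfxml = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                         xmlns:owl="http://www.w3.org/2002/07/owl#">
  <owl:Class rdf:about="http://example.org/onto#Person"/>
</rdf:RDF>"""

original = Graph().parse(data=owl_rdfxml, format="xml")    # import (RDF/XML)
exported = original.serialize(format="turtle")             # export (Turtle)
reimported = Graph().parse(data=exported, format="turtle") # re-import
print(isomorphic(original, reimported))                    # -> True if nothing was lost
```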

15.
16.
17.
The need to make the contents of the Semantic Web accessible to end-users becomes increasingly pressing as the amount of information stored in ontology-based knowledge bases steadily increases. Natural language interfaces (NLIs) provide a familiar and convenient means of query access to Semantic Web data for casual end-users. While several studies have shown that NLIs can achieve high retrieval performance as well as domain independence, this paper focuses on usability and investigates whether NLIs and natural language query languages are useful from an end-user's point of view. To that end, we introduce four interfaces, each allowing a different query language, and present a usability study benchmarking these interfaces. The results of the study reveal a clear preference for full natural language query sentences with a limited set of sentence beginnings over keywords or formal query languages. NLIs to ontology-based knowledge bases can, therefore, be considered useful for casual or occasional end-users. As such, the overarching contribution is one step towards the theoretical vision of the Semantic Web becoming reality.

18.
Existing attempts to automate construction document analysis are limited in understanding the varied semantic properties of different documents. Due to these semantic conflicts, the construction specification review process is still conducted manually in practice despite the promising performance of existing approaches. This research aimed to develop an automated system for reviewing construction specifications by analyzing their different semantic properties using natural language processing techniques. The proposed method analyzed the varied semantic properties of 56 different specifications from five different countries in terms of vocabulary, sentence structure, and the organizing styles of provisions. First, the authors developed a semantic thesaurus for construction terms including 208 word-replacement rules based on Word2Vec embedding to understand the different vocabularies. Second, the authors developed a named entity recognition model based on bi-directional long short-term memory with a conditional random field layer, which identified the required keywords from given provisions with an average F1 score of 0.928. Third, the authors developed a provision-pairing model based on Doc2Vec embedding, which identified the most relevant provisions with an average accuracy of 84.4%. The web-based prototype demonstrated that the proposed system can facilitate the construction specification review process by reducing the time spent, supplementing the reviewer's experience, enhancing accuracy, and achieving consistency. The results contribute to risk management in the construction industry, with practitioners being able to review construction specifications thoroughly in spite of tight schedules and few available experts.
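A minimal sketch of the provision-pairing step: a query provision is matched to its most similar reference provision by text similarity. TF-IDF cosine similarity stands in for the paper's Doc2Vec embeddings, and the provisions are invented examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference_provisions = [
    "Concrete shall attain a compressive strength of 30 MPa at 28 days.",
    "Structural steel shall conform to the approved welding procedure.",
    "Excavations deeper than 1.5 m shall be shored or sloped.",
]
query = "Compressive strength of concrete at 28 days shall not be less than 30 MPa."

# Vectorize the reference provisions, then project the query into the same space.
vectorizer = TfidfVectorizer(stop_words="english")
ref_matrix = vectorizer.fit_transform(reference_provisions)
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, ref_matrix)[0]
best = scores.argmax()
print(best, round(float(scores[best]), 3), reference_provisions[best])
```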

19.
20.