Similar Documents
20 similar documents found (search time: 46 ms)
1.
2.
Nowadays organisations are willing to outsource their business processes as services and make them accessible via the Web. In doing so, they can dynamically combine individual services into their service applications. However, unless the data on the Web can be meaningfully shared and interpreted, this objective cannot be realised. In this paper, a new agent-based approach for managing ontology evolution in a Web services environment is presented. The proposed approach has several key characteristics, such as flexibility and extensibility, that differentiate this research from others. The refinement mechanisms that cope with an evolving ontology are carefully examined. The novelty of our work is that the interactions between different ontologies are studied from the agent's perspective. Based on this perspective, an agent negotiation model is applied to reach an agreement regarding ontology discrepancies in an application. The efficiency and effectiveness of reaching an agreement over an ontology dispute are improved by the private negotiation strategy applied in the argumentation approach. An extended negotiation strategy is discussed that provides sufficient information for decision making at each negotiation round. A case study is presented to demonstrate ontology refinement in a Web services environment.

3.
Assessing semantic similarity is a fundamental requirement for many AI applications. Crisp ontology (CO) is one of the knowledge representation tools that can be used for this purpose. Thanks to the development of the semantic web, CO-based similarity assessment has become a popular approach in recent years. However, in the presence of vague information, a CO cannot capture the uncertainty of the relations between concepts. Fuzzy ontology (FO), on the other hand, can effectively process the uncertainty of concepts and their relations. This paper proposes an approach for assessing concept similarity based on FO. The proposed approach combines fuzzy relation composition with an edge-counting approach to assess similarity; accordingly, the measure relies on taxonomical features of an ontology in combination with statistical features of concepts. Furthermore, an evaluation approach for the FO-based similarity measure, named FOSE, is proposed. The proposed similarity measure is evaluated with FOSE on social network data. The evaluation results show that the proposed approach outperforms the corresponding CO-based measure.
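Purely as an illustration of the two ingredients named in the abstract (fuzzy relation composition and edge counting), the following sketch combines a max-min composition of a toy fuzzy is-a relation with a path-length similarity over the same taxonomy. The taxonomy, membership degrees, and the blending weight `alpha` are invented for illustration and are not the paper's data or exact formula.

```python
# Toy fuzzy-ontology similarity: max-min composition of a fuzzy is-a relation
# combined with an edge-counting (path-length) measure over the taxonomy.
# All degrees, concepts, and the weight `alpha` are illustrative assumptions.
import networkx as nx

def compose(r1, r2):
    """Max-min composition of fuzzy relations given as {(a, b): degree} dicts."""
    result = {}
    for (a, x1), d1 in r1.items():
        for (x2, b), d2 in r2.items():
            if x1 == x2:
                result[(a, b)] = max(result.get((a, b), 0.0), min(d1, d2))
    return result

def edge_counting_similarity(taxonomy, c1, c2):
    """Shorter path in the taxonomy means higher similarity."""
    return 1.0 / (1.0 + nx.shortest_path_length(taxonomy, c1, c2))

is_a = {("cat", "mammal"): 0.9, ("dog", "mammal"): 0.9, ("mammal", "animal"): 1.0}
two_step = compose(is_a, is_a)            # e.g. ("cat", "animal") with degree 0.9

taxonomy = nx.Graph()
taxonomy.add_edges_from(is_a)             # iterating the dict yields the edge tuples

alpha = 0.5                               # assumed blending weight
similarity = (alpha * two_step.get(("cat", "animal"), 0.0)
              + (1 - alpha) * edge_counting_similarity(taxonomy, "cat", "dog"))
print(round(similarity, 3))               # 0.617 for this toy example
```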

4.
Cyber–physical systems are becoming increasingly complex. In these advanced systems, the different engineering domains involved in the design process become more and more intertwined. Therefore, a traditional (sequential) design process becomes inefficient in finding good design options. Instead, an integrated approach is needed where parameters in multiple different engineering domains can be chosen, evaluated, and optimized to achieve a good overall solution. However, in such an approach, the combined design space becomes vast. As such, methods are needed to mitigate this problem. In this paper, we show a method for systematically capturing and updating domain knowledge in the context of a co-design process involving different engineering domains, i.e. control and embedded. We rely on ontologies to reason about the relationships between parameters in the different domains. This allows us to derive a stepwise design space exploration workflow where this domain knowledge is used to quickly reduce the design space to a subset of likely good candidates. We illustrate our approach by applying it to the design space exploration process for an advanced electric motor control system and its deployment on embedded hardware.
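As a loose illustration of how encoded domain knowledge can shrink a combined design space before detailed evaluation, the sketch below filters a toy two-domain parameter grid with simple cross-domain rules. The parameter ranges, rule contents, and names are assumptions made for the example; the paper's actual workflow is ontology-driven and more elaborate.

```python
# Toy design-space pruning with encoded cross-domain knowledge.
# Parameter ranges and rules are invented for illustration.
from itertools import product

# Candidate parameters from two domains: control (sample rate) and embedded (MCU clock).
design_space = list(product([100, 250, 500, 1000],   # controller sample rate [Hz]
                            [16, 48, 96, 168]))       # MCU clock [MHz]

# Domain knowledge expressed as simple inter-domain constraints.
rules = [
    lambda rate, clk: clk >= rate / 10,   # leave enough cycles per control period
    lambda rate, clk: rate >= 250,        # assumed minimum bandwidth for stability
]

def prune(space, constraints):
    """Keep only the candidates that satisfy every rule."""
    return [cand for cand in space if all(rule(*cand) for rule in constraints)]

candidates = prune(design_space, rules)
print(f"{len(candidates)} of {len(design_space)} candidates remain:", candidates)
```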

5.
Ontologies provide a powerful means of knowledge representation and are an important topic in information retrieval. Traditional methods for computing concept similarity in an ontology mostly rely on generic reasoning services specific to the description language to perform matching, and they ignore the semantic information of concepts. By designing a semantic retrieval model based on OWL ontologies, this paper describes how the semantics of a concept can be expressed through its properties and hierarchical relations, and how a flexible similarity between concepts can be computed. Experimental results show that the method makes full use of OWL property characteristics and hierarchical relations to compute flexible similarities between related concepts and can dynamically adjust the matching scope as needed; its application to text classification is also presented.
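To make the idea concrete, here is a minimal sketch (not the paper's algorithm) of a flexible similarity that blends the overlap of OWL-style properties with distance in the subsumption hierarchy; the weight `w` plays the role of the adjustable matching scope. The toy concepts, property sets, and `w` are assumptions.

```python
# Toy "flexible" concept similarity: Jaccard overlap of properties blended
# with a path-based similarity in the subsumption hierarchy.
import networkx as nx

taxonomy = nx.Graph([("Vehicle", "Car"), ("Vehicle", "Bicycle"), ("Car", "Taxi")])
properties = {                                # hypothetical OWL properties per concept
    "Car":     {"hasWheels", "hasEngine", "hasSeats"},
    "Bicycle": {"hasWheels", "hasSeats"},
    "Taxi":    {"hasWheels", "hasEngine", "hasSeats", "hasMeter"},
}

def property_similarity(c1, c2):
    """Jaccard overlap of the concepts' property sets."""
    p1, p2 = properties[c1], properties[c2]
    return len(p1 & p2) / len(p1 | p2)

def hierarchy_similarity(c1, c2):
    """Shorter path in the hierarchy means higher similarity."""
    return 1.0 / (1.0 + nx.shortest_path_length(taxonomy, c1, c2))

def flexible_similarity(c1, c2, w=0.6):       # `w` tunes the matching scope
    return w * property_similarity(c1, c2) + (1 - w) * hierarchy_similarity(c1, c2)

print(round(flexible_similarity("Car", "Bicycle"), 3))
```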

6.
In large-scale, complex domains such as space defense and security systems, situation assessment and decision making are evolving from centralized models to high-level, net-centric models. In this context, collaboration among the many actors involved in the situation assessment process is critical to achieve a prompt reaction as needed in the operational scenario. In this paper, we propose a multiagent-based approach to situation assessment, where agents cooperate by sharing local information to reach a common and coherent assessment of situations. Specifically, we characterize situation assessment as a classification process based on OWL ontology reasoning, and we provide a protocol for cooperative multiagent situation assessment, which allows the agents to achieve coherent high-level conclusions. We validate our approach in a real maritime surveillance scenario, where our prototype system effectively supports the user in detecting and classifying potential threats; moreover, our distributed solution performs comparably to a centralized method, while preserving independence of decision makers and dramatically reducing the amount of communication required.
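The following sketch illustrates, in a much simplified form, the cooperative pattern described above: each agent produces a local classification (standing in for OWL reasoning) and the group settles on the most specific class on which all agents agree. The class hierarchy, agents, and decision rules are invented for illustration and are not the paper's protocol.

```python
# Toy cooperative situation assessment: agents classify a track locally,
# exchange results, and keep the most specific commonly agreed class.
CLASS_HIERARCHY = {"Threat": "Vessel", "Smuggler": "Threat", "Fisher": "Vessel"}

def ancestors(cls):
    """Return the class plus all of its superclasses, most specific first."""
    chain = [cls]
    while cls in CLASS_HIERARCHY:
        cls = CLASS_HIERARCHY[cls]
        chain.append(cls)
    return chain

class Agent:
    def __init__(self, name, local_classifier):
        self.name = name
        self.classify = local_classifier     # stands in for OWL-based classification

    def assess(self, track):
        return self.classify(track)

def cooperative_assessment(agents, track):
    """Return the most specific class consistent with every agent's local view."""
    votes = [ancestors(a.assess(track)) for a in agents]
    common = set(votes[0]).intersection(*votes[1:])
    return max(common, key=lambda c: len(ancestors(c)))

agents = [
    Agent("radar", lambda t: "Smuggler" if t["speed"] > 30 else "Vessel"),
    Agent("ais",   lambda t: "Threat" if not t["ais_on"] else "Fisher"),
]
print(cooperative_assessment(agents, {"speed": 35, "ais_on": False}))   # Threat
```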

7.
Several applications in shape modeling and exploration require identification and extraction of a 3D shape part matching a 2D sketch. We present CustomCut, an on-demand part extraction algorithm. Given a sketched query, CustomCut automatically retrieves partially matching shapes from a database, identifies the region optimally matching the query in each shape, and extracts this region to produce a customized part that can be used in various modeling applications. In contrast to earlier work on sketch-based retrieval of predefined parts, our approach can extract arbitrary parts from input shapes and does not rely on a prior segmentation into semantic components. The method is based on a novel data structure for fast retrieval of partial matches: the randomized compound k-NN graph built on multi-view shape projections. We also employ a coarse-to-fine strategy to progressively refine part boundaries down to the level of individual faces. Experimental results indicate that our approach provides an intuitive and easy means to extract customized parts from a shape database, and significantly expands the design space for the user. We demonstrate several applications of our method to shape design and exploration.

8.
The integration of data from various electronic health record (EHR) systems presents a critical challenge for sharing and exchanging patient information across a diverse group of health-oriented organizations. Patient health records in each system are annotated with ontologies based on different health-care standards, creating ontology conflicts at both the schema and data levels. In this study, we introduce the concept of semantic ontology mapping to facilitate interoperability among heterogeneous EHR systems. This approach proposes a means of detecting and resolving the data-level conflicts that generally exist in the ontology mapping process. We have extended the semantic bridge ontology to support ontology mapping at the data level and generated the required mapping rules to reconcile data from different ontological sources into a canonical format. As a result, linked patient data are generated and made available in a semantic query engine to facilitate user queries of patient data across heterogeneous EHR systems.
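A minimal sketch of the data-level reconciliation step: per-source mapping rules translate differently annotated patient records into one canonical form so they can be linked and queried together. Field names, code systems, and the rules themselves are assumptions, not the actual semantic bridge ontology.

```python
# Toy data-level mapping rules that reconcile records from two sources
# into a canonical format. All field names and code systems are assumed.
from datetime import datetime

# One rule per (source field -> canonical field), with a value transform.
MAPPING_RULES = {
    "hospital_a": [
        ("patientName", "name",       lambda v: v.strip()),
        ("dob",         "birth_date", lambda v: datetime.strptime(v, "%d/%m/%Y").date()),
        ("icd10",       "diagnosis",  lambda v: {"system": "ICD-10", "code": v}),
    ],
    "hospital_b": [
        ("full_name",   "name",       lambda v: v.title()),
        ("birthDate",   "birth_date", lambda v: datetime.strptime(v, "%Y-%m-%d").date()),
        ("snomed",      "diagnosis",  lambda v: {"system": "SNOMED-CT", "code": v}),
    ],
}

def to_canonical(record, source):
    """Apply the source's mapping rules and return a canonical record."""
    out = {}
    for src_field, canon_field, transform in MAPPING_RULES[source]:
        if src_field in record:
            out[canon_field] = transform(record[src_field])
    return out

a = to_canonical({"patientName": " Jane Doe ", "dob": "01/02/1980", "icd10": "E11"}, "hospital_a")
b = to_canonical({"full_name": "jane doe", "birthDate": "1980-02-01", "snomed": "44054006"}, "hospital_b")
print(a["birth_date"] == b["birth_date"])   # True: records can now be linked
```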

9.
Research on ontology construction methods for the petroleum exploration and development domain (cited 1 time: 0 self-citations, 1 citation by others)
The petroleum exploration and development domain involves more than twenty specialities, such as exploration and oil production. Because terminology is not unified, sharing information and integrating applications across these specialities is problematic. Ontologies are adopted to address these problems. Tailored to the characteristics of the petroleum exploration and development business, a method for constructing a petroleum-domain ontology, Petro-Onto, is proposed; a top-level ontology framework for the domain is established; and a method for automatically capturing the ontology using business models and data models as a reference system is presented. Petro-Onto has been applied in oilfield information integration.
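As an illustration of capturing a draft ontology from a data model, the sketch below turns a toy relational schema into classes, datatype properties, and object properties. The table definitions and naming conventions are assumptions and do not reflect Petro-Onto's actual structure.

```python
# Toy capture of a draft ontology from a relational data model:
# tables -> classes, plain columns -> datatype properties,
# foreign keys -> object properties between classes.
data_model = {
    "Well":       {"columns": ["well_id", "name", "spud_date"], "foreign_keys": {}},
    "Reservoir":  {"columns": ["reservoir_id", "lithology"],    "foreign_keys": {}},
    "Completion": {"columns": ["completion_id", "depth", "well_id", "reservoir_id"],
                   "foreign_keys": {"well_id": "Well", "reservoir_id": "Reservoir"}},
}

def capture_ontology(model):
    classes, datatype_props, object_props = set(), [], []
    for table, spec in model.items():
        classes.add(table)
        for col in spec["columns"]:
            if col not in spec["foreign_keys"]:
                datatype_props.append((table, col))
        for fk_col, target in spec["foreign_keys"].items():
            object_props.append((table, f"has{target}", target))
    return classes, datatype_props, object_props

classes, dprops, oprops = capture_ontology(data_model)
print(sorted(classes))
print(oprops)   # e.g. ('Completion', 'hasWell', 'Well')
```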

10.
The ongoing exponential growth of online information sources has led to a need for reliable and efficient algorithms for text clustering. In this paper, we propose a novel text model, called the relational text model, that represents each sentence as a binary multirelation over a concept space ${\mathcal{C}}$. Through the smart indexing engine (SIE), a patented technology of the Belgian company i.Know, the concept space adopted by the text model can be constructed dynamically. This means that no a priori knowledge base, such as an ontology, is needed, which makes our approach context independent. The concepts resulting from SIE have the property that concept frequency is a measure of relevance. We exploit this property in the development of the CR-algorithm. Our approach relies on representing a data set ${\mathcal{D}}$ as a multirelation, of which k-cuts can be taken. These cuts can be seen as sets of patterns that are relevant with respect to the topics described by the documents. Analysing the dependencies between patterns allows clusters to be produced such that precision is sufficiently high, and the best k-cut is the one that best approximates the estimated number of clusters, to ensure recall. Experimental results on Dutch news fragments show that our approach outperforms both basic and advanced methods.
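A minimal sketch of the k-cut idea: documents are reduced to multirelations (here, frequency counts of co-occurring terms), and a k-cut keeps the pairs occurring at least k times as candidate patterns. The plain whitespace tokenisation stands in for the SIE concept extraction, so everything below is illustrative only.

```python
# Toy multirelation over a "concept space" (here, plain words) and its k-cut.
from collections import Counter
from itertools import combinations

documents = [
    "oil price rises as oil demand grows",
    "price of oil falls on weak demand",
    "football team wins league title",
]

def multirelation(doc):
    """Count co-occurring concept pairs within a document."""
    concepts = sorted(set(doc.split()))
    return Counter(combinations(concepts, 2))

def k_cut(relation, k):
    """Keep only the pairs whose multiplicity is at least k."""
    return {pair for pair, count in relation.items() if count >= k}

total = Counter()
for doc in documents:
    total += multirelation(doc)

print(k_cut(total, 2))   # concept pairs shared by at least two documents
```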

11.
Among the developments of information technology, the most popular tools for seeking knowledge nowadays are keyword-based search engines such as Google or Yahoo. Users can easily obtain the information they need, but they still have to read and organize the retrieved documents by themselves; as a result, users spend most of their time browsing and skimming the documents they have found. To facilitate this process, this paper proposes a query-based ontology knowledge acquisition system that dynamically constructs a query-based partial ontology to provide proficient answers to users' queries. To construct the relationships and hierarchy of concepts in such an ontology, the formal concept analysis approach is adopted. After the ontology is built, the system can deduce the specific answer from the relationships and hierarchy of the ontology without asking users to read the whole document set. We collected three kinds of sports news pages as source documents, covering the NBA, CPBL and MLB, to evaluate the precision of the system; the experimental results reveal that the proposed approach works effectively.
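Since the hierarchy construction relies on formal concept analysis, the sketch below computes the formal concepts of a tiny object-attribute context by brute force (every attribute subset is closed via its extent). The context itself, news documents versus terms, is an invented example.

```python
# Toy formal concept analysis: enumerate the formal concepts (extent, intent)
# of a small object-attribute context by closing every attribute subset.
from itertools import combinations

objects = {
    "doc1": {"NBA", "basketball"},
    "doc2": {"MLB", "baseball"},
    "doc3": {"NBA", "playoffs", "basketball"},
}
attributes = set().union(*objects.values())

def extent(attrs):
    """Objects having all the given attributes."""
    return {o for o, a in objects.items() if attrs <= a}

def intent(objs):
    """Attributes shared by all the given objects."""
    return set.intersection(*(objects[o] for o in objs)) if objs else set(attributes)

concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        ext = extent(set(attrs))
        concepts.add((frozenset(ext), frozenset(intent(ext))))

# Print concepts from most general (largest extent) to most specific.
for ext, inte in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(ext), "<->", sorted(inte))
```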

12.
Big Data architectures allow heterogeneous data from multiple sources to be flexibly stored and processed in their original format. The structure of those data, commonly supplied by means of REST APIs, is continuously evolving, so data analysts need to adapt their analytical processes after each API release. This gets more challenging when performing an integrated or historical analysis. To cope with such complexity, in this paper we present the Big Data Integration ontology, the core construct used to govern the data integration process under schema evolution by systematically annotating it with information about the schema of the sources. We present a query rewriting algorithm that, using the annotated ontology, converts queries posed over the ontology into queries over the sources. To cope with syntactic evolution in the sources, we present an algorithm that semi-automatically adapts the ontology upon new releases. This guarantees that ontology-mediated queries correctly retrieve data from the most recent schema version, as well as the correctness of historical queries. A functional and performance evaluation on real-world APIs is performed to validate our approach.
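A minimal sketch of ontology-mediated query rewriting under schema evolution: a query expressed over stable ontology attributes is rewritten into the field names of a specific API release. The concept-to-field mappings and release labels are assumptions for illustration, not the paper's ontology or algorithm.

```python
# Toy query rewriting: stable ontology attributes are mapped to the field
# names of each API release, so the same query works on old and new schemas.
ONTOLOGY_MAPPINGS = {
    # ontology attribute -> {API release: source field name}
    "user.id":      {"v1": "id",        "v2": "user_id"},
    "user.name":    {"v1": "name",      "v2": "display_name"},
    "post.created": {"v1": "timestamp", "v2": "created_at"},
}

def rewrite(query_attrs, release):
    """Rewrite a list of ontology attributes into source fields of one release."""
    try:
        return [ONTOLOGY_MAPPINGS[attr][release] for attr in query_attrs]
    except KeyError as missing:
        raise ValueError(f"no mapping for {missing} in release {release!r}")

# The same ontology-level query resolves against both schema versions,
# which is what keeps historical queries correct after an API release.
print(rewrite(["user.id", "post.created"], "v1"))   # ['id', 'timestamp']
print(rewrite(["user.id", "post.created"], "v2"))   # ['user_id', 'created_at']
```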

13.
The need to find related images in big data streams is shared by many professionals, such as architects, engineers, designers, and journalists, as well as ordinary people. Users need to quickly find the relevant images in data streams generated from a variety of domains. The challenges in image retrieval are widely recognized, and the research aiming to address them has made content-based image retrieval a very active area. In this paper, we propose a novel, computationally efficient approach that provides high visual quality results based on local recursive density estimation between a given query image of interest and data clouds/clusters that have a hierarchical, dynamically nested, evolving structure. The proposed approach makes use of a combination of multiple features. Results on a data set of 65,000 images organized in a two-layer hierarchy demonstrate its computational efficiency. Moreover, the proposed Look-a-like approach is self-evolving, updating itself with new images obtained by crawling and from the queries made.
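The sketch below shows one common formulation of recursive density estimation (a Cauchy-type kernel with a recursively updated mean and mean squared norm); the paper's exact formulation, feature extraction, and hierarchical data clouds are not reproduced here, and the toy feature vectors are assumptions.

```python
# Toy recursive density estimation (RDE): mean and mean squared norm are
# updated recursively, and density is a Cauchy-type kernel around the mean.
import numpy as np

class RDE:
    def __init__(self, dim):
        self.k = 0
        self.mean = np.zeros(dim)
        self.mean_sq_norm = 0.0

    def update(self, x):
        """Recursively update the global mean and mean squared norm."""
        self.k += 1
        self.mean += (x - self.mean) / self.k
        self.mean_sq_norm += (x @ x - self.mean_sq_norm) / self.k

    def density(self, x):
        """Higher density = x lies closer to the bulk of the data seen so far."""
        spread = self.mean_sq_norm - self.mean @ self.mean
        return 1.0 / (1.0 + np.sum((x - self.mean) ** 2) + max(spread, 0.0))

rde = RDE(dim=3)
for feature in np.random.default_rng(0).normal(size=(500, 3)):   # stand-in image features
    rde.update(feature)

query = np.array([0.1, 0.0, -0.2])
outlier = np.array([5.0, 5.0, 5.0])
print(rde.density(query) > rde.density(outlier))   # True: the query is more "typical"
```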

14.
15.
We propose a design framework to assist with user-generated content in facial animation, without requiring any animation experience or a ground-truth reference. Where conventional prototyping methods rely on handcrafting by experienced animators, our approach encodes the role of the animator as an Evolutionary Algorithm acting on animation controls, driven by visual feedback from a user. Presented with a simple interface, users sample control combinations and select favourable results to influence later sampling. Over multiple iterations of discarding unfavourable control values, the parameters converge towards the user's ideal. We demonstrate our framework through two non-trivial applications: creating highly nuanced expressions by evolving the control values of a face rig, and non-linear motion by evolving the control point positions of animation curves.
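A minimal sketch of the interactive evolutionary loop: control combinations are sampled, a selection step picks favourites, and later sampling is biased towards them. A scoring function stands in for the human looking at rendered faces, and the rig controls and target values are invented for the example.

```python
# Toy interactive evolutionary loop over facial-animation controls.
import random

CONTROLS = ["brow_raise", "jaw_open", "smile", "eye_squint"]   # hypothetical rig controls
TARGET = {"brow_raise": 0.2, "jaw_open": 0.1, "smile": 0.9, "eye_squint": 0.4}

def user_picks(candidates, n=3):
    """Stand-in for visual feedback: prefer candidates closest to an assumed ideal."""
    score = lambda c: -sum((c[k] - TARGET[k]) ** 2 for k in CONTROLS)
    return sorted(candidates, key=score, reverse=True)[:n]

def evolve(generations=30, population=12, sigma=0.15):
    parents = [{k: random.random() for k in CONTROLS} for _ in range(population)]
    for _ in range(generations):
        favourites = user_picks(parents)
        # Resample around the favourites (mutation), clamped to the valid control range.
        parents = [{k: min(1.0, max(0.0, random.choice(favourites)[k] + random.gauss(0, sigma)))
                    for k in CONTROLS} for _ in range(population)]
    return user_picks(parents, n=1)[0]

best = evolve()
print({k: round(v, 2) for k, v in best.items()})   # converges towards the "ideal" expression
```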

16.
With the explosive growth of various social media applications, individuals and organizations increasingly use their content (e.g., reviews, forum discussions, blogs, micro-blogs, comments, and postings on social network sites) for decision making. This content is typical big data. Opinion mining, or sentiment analysis, focuses on how to extract emotional semantics from these big data to help users reach better decisions. That is not an easy task: the meaning of the same word can vary with its context, and a multilingual environment restricts the full use of the analysis results. An ontology provides knowledge about a specific domain that is understandable by both computers and developers, and building an ontology is a useful first step in providing and formalizing the semantics of information representation. We propose an ontology, DEMLOnto, based on six basic emotions to help users share existing information. DEMLOnto helps identify the opinion features associated with the contextual environment, which may change across applications. We built the ontology following an ontology engineering methodology; it was developed in Protégé using OWL 2.
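The paper builds DEMLOnto in Protégé; purely for illustration, the sketch below shows a programmatic equivalent of the same idea (six basic-emotion classes plus a property linking opinion words to emotions) using the owlready2 library, which is an assumption and not the authors' toolchain. The IRI, class names, and property are hypothetical.

```python
# Illustrative emotion ontology built programmatically with owlready2
# (not the authors' Protégé workflow); all names are hypothetical.
import types
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/demlonto.owl")      # hypothetical IRI

with onto:
    class Emotion(Thing): pass
    class OpinionWord(Thing): pass

    class expressesEmotion(ObjectProperty):                  # links opinion words to emotions
        domain = [OpinionWord]
        range = [Emotion]

    # Six basic emotions as subclasses of Emotion.
    emotions = {name: types.new_class(name, (Emotion,))
                for name in ["Joy", "Sadness", "Anger", "Fear", "Surprise", "Disgust"]}

    happy = OpinionWord("happy")
    happy.expressesEmotion = [emotions["Joy"]()]             # "happy" expresses some Joy

onto.save(file="demlonto.owl", format="rdfxml")
print(sorted(c.name for c in onto.classes()))
```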

17.
Multi-dimensional data originate from many different sources and are relevant for many applications. One specific sub-type of such data is continuous trajectory data in multi-dimensional state spaces of complex systems. We adapt the concept of spatially continuous scatterplots and spatially continuous parallel coordinate plots to such trajectory data, leading to continuous-time scatterplots and continuous-time parallel coordinates. Together with a temporal heat map representation, we design coordinated views for visual analysis and interactive exploration. We demonstrate the usefulness of our visualization approach in three case studies covering examples of complex dynamic systems: cyber-physical systems consisting of heterogeneous sensor and actuator networks (time-dependent sensor network data from an exemplary smart home environment), the dynamics of robot arm movement, and motion characteristics of humanoids.

18.
Over the last decade, there has been an increasing interest in paper-digital systems that allow regular paper documents to be augmented or integrated with digital information and services. Although a wide variety of technical solutions and applications have been proposed, they all rely on some means of specifying links from areas within paper pages to digital services, where these areas correspond to elements of the document's artwork. Various frameworks and tools are available to support the development of paper-digital applications, but they tend to either require some programming skills or focus on specific application domains. We present an advanced publishing solution that is based on an authoring rather than programming approach to the production of interactive paper documents. Our solution is fully general, and we describe how it uses concepts of templates and variable content elements to reduce redundancies and increase the flexibility in developing paper-digital applications.

19.
Nowadays, many applications rely on high-quality images to ensure good performance in their tasks. However, noise works against this objective, and it is an unavoidable issue in most applications. Therefore, it is essential to develop techniques that attenuate the impact of noise while maintaining the integrity of the relevant information in images. In this work we propose to extend the Non-Local Means (NLM) filter to the vector case and apply it to denoising multispectral images, the objective being to benefit from the additional information brought by multispectral imaging systems. The NLM filter exploits the redundancy of information in an image to remove noise: a restored pixel is a weighted average of all pixels in the image. In our contribution, we propose an optimization framework in which we dynamically fine-tune the NLM filter parameters and reduce its computational complexity by considering only the pixels most similar to each other when computing a restored pixel. Filter parameters are optimized using Stein's Unbiased Risk Estimator (SURE) rather than ad hoc means. Experiments have been conducted on multispectral images corrupted with additive white Gaussian noise. PSNR and similarity comparisons with other approaches are provided to illustrate the efficiency of our approach in terms of both denoising performance and computational complexity.
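A minimal sketch of the NLM core for a multichannel image: each restored pixel is a weighted average of pixels in a search window, with weights driven by patch similarity computed over all bands. The window sizes and the filtering parameter h are illustrative; the similar-pixel selection and the SURE-based parameter tuning described above are omitted.

```python
# Toy Non-Local Means denoising for a multiband image (weights from patch
# similarity over all bands); parameters are illustrative, not SURE-optimized.
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    half_p, half_s = patch // 2, search // 2
    pad = half_p + half_s
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape[:2]
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = padded[ci - half_p:ci + half_p + 1, cj - half_p:cj + half_p + 1]
            weights, values = [], []
            for di in range(-half_s, half_s + 1):
                for dj in range(-half_s, half_s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - half_p:ni + half_p + 1, nj - half_p:nj + half_p + 1]
                    d2 = np.mean((ref - cand) ** 2)        # patch distance over all bands
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            weights = np.array(weights)
            out[i, j] = (weights[:, None] * np.array(values)).sum(axis=0) / weights.sum()
    return out

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 16)[:, None, None], (1, 16, 4))   # toy 4-band image
noisy = clean + rng.normal(scale=0.05, size=clean.shape)
denoised = nlm_denoise(noisy)
print(np.mean((noisy - clean) ** 2) > np.mean((denoised - clean) ** 2))   # noise reduced
```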

20.
Ontologies have been widely used as a knowledge representation framework, and numerous methods have been put forth to match ontologies. It is well known that ontology matchers behave differently in different domains, and it is a challenge to predict or characterize their behavior. Herein, a hybrid expertise-agreement aggregation strategy is proposed. Although other approaches rely on the existence of a reference ontology, such an ontology typically does not exist in the real world. In this article, the fuzzy integral (FI) is used to aggregate multiple ontology matchers in lieu of a reference ontology. Specifically, we present a measure of expertise and fuse it with our previous agreement measure, which is motivated by crowdsourcing, to improve recall. In this way, any available domain knowledge, in the form of a partial ordering of a subset of inputs, can be included in the decision-making process. By adding the domain knowledge to the agreement model, we are able to reach the best performance. Preliminary results demonstrate the robustness of our approach across domains. A sensitivity analysis is also provided, which shows the extent to which extremely destructive expertise affects system performance.
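As an illustration of fuzzy-integral aggregation, the sketch below fuses the scores of several hypothetical matchers for one candidate correspondence with a discrete Choquet integral; the fuzzy measure is where expertise and agreement knowledge would be encoded. Matcher names, scores, and measure values are assumptions.

```python
# Toy Choquet fuzzy integral over the scores of several ontology matchers.
def choquet(scores, measure):
    """scores: {matcher: value in [0,1]}; measure: {frozenset of matchers: weight}.
    The measure must be defined (and monotone) on every coalition used below."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    total = 0.0
    for i, m in enumerate(ranked):
        nxt = scores[ranked[i + 1]] if i + 1 < len(ranked) else 0.0
        total += (scores[m] - nxt) * measure[frozenset(ranked[:i + 1])]
    return total

matchers = {"string": 0.9, "structural": 0.6, "lexical": 0.4}   # one candidate correspondence
measure = {                       # monotone fuzzy measure encoding matcher expertise
    frozenset(["string"]): 0.5,
    frozenset(["string", "structural"]): 0.8,
    frozenset(["string", "structural", "lexical"]): 1.0,
}
print(round(choquet(matchers, measure), 3))   # 0.71 for this toy example
```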
