Similar Documents
 20 similar documents found (search time: 234 ms)
1.
The selection and configuration of site equipment is a fundamental part of construction preparation. Suitable site equipment supports the timely, cost-efficient, and high-quality execution of the construction process. The use of planning tools based on formal knowledge management methods can both speed up the process of construction site planning and lead to better results. In this paper, we propose a rule-based knowledge inference system that supports site equipment planners in a semi-automated manner, using input data from building information models and working schedules. The knowledge-based system is built on the business rule management system Drools. The feasibility of the proposed approach has been demonstrated on a sample construction site.
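The paper implements its rules in the Drools business rule engine; as a language-neutral sketch of the underlying idea, the snippet below applies hypothetical if-then rules to facts derived from a building model and schedule. All rule conditions, thresholds, and equipment names are invented for illustration, not taken from the paper.

```python
# Minimal forward-style sketch of rule-based site equipment selection.
# Facts, thresholds, and equipment names are hypothetical examples; the
# paper itself expresses such rules in the Drools rule engine.

def select_equipment(facts):
    """Apply simple if-then rules to BIM/schedule-derived facts."""
    recommendations = []
    # Rule 1: tall buildings call for a tower crane, otherwise a mobile crane
    if facts.get("building_height_m", 0) > 20:
        recommendations.append("tower crane")
    else:
        recommendations.append("mobile crane")
    # Rule 2: large in-situ concrete volumes warrant an on-site batching plant
    if facts.get("concrete_volume_m3", 0) > 500:
        recommendations.append("concrete batching plant")
    # Rule 3: long construction schedules justify site office containers
    if facts.get("duration_months", 0) > 6:
        recommendations.append("site office containers")
    return recommendations

facts = {"building_height_m": 35, "concrete_volume_m3": 800, "duration_months": 12}
print(select_equipment(facts))
```

A real rule engine adds what this sketch lacks: declarative rule authoring, conflict resolution, and incremental re-evaluation as facts change.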

2.
A fundamental question that must be addressed in software agents for knowledge management is coordination in multi-agent systems. The coordination problem is ubiquitous in knowledge management, such as in manufacturing, supply chains, negotiation, and agent-mediated auctions. This paper summarizes several multi-agent systems for knowledge management that have been developed recently by the author and his collaborators to highlight new research directions for multi-agent knowledge management systems. In particular, the paper focuses on three areas of research:
  • Coordination mechanisms in agent-based supply chains. How do we design mechanisms for coordination, information and knowledge sharing in supply chains with self-interested agents? What would be a good coordination mechanism when we have a non-linear structure of the supply chain, such as a pyramid structure? What are the desirable properties for the optimal structure of efficient supply chains in terms of information and knowledge sharing? Will DNA computing be a viable tool for the analysis of agent-based supply chains?
  • Coordination mechanisms in agent-mediated auctions. How do we induce cooperation and coordination among various self-interested agents in agent-mediated auctions? What are the fundamental principles to promote agent cooperation behavior? How do we train agents to learn to cooperate rather than program agents to cooperate? What are the principles of trust building in agent systems?
  • Multi-agent enterprise knowledge management, performance impact and human aspects. Will people use agent-based systems? If so, how do we coordinate agent-based systems with human beings? What would be the impact of agent systems in knowledge management in an information economy?

3.
Supporting geographically-aware web document foraging and sensemaking
This paper reports on the development and application of strategies and tools for geographic information seeking and knowledge building that leverage unstructured text resources found on the web. Geographic knowledge building from unstructured web sources starts with web document foraging, during which the quantity, scope, and diversity of web-based information place a considerable cognitive burden on an analyst’s or researcher’s ability to judge information relevancy. Determining information relevancy is ultimately a process of sensemaking. In this paper, we present our research on visually supporting web document foraging and sensemaking. In particular, we present the Sense-of-Place (SensePlace) analytic environment. The scientific goal of SensePlace is to visually and computationally support analyst sensemaking with text artifacts that have potential place, time, and thematic relevance to an analytical problem, through identification and visual highlighting of named entities (people, places, times, and organizations) in documents, automated inference to determine document relevance using stored knowledge, and a visual interface with coupled geographic map, timeline, and concept graph displays that are used to contextualize the content of potentially relevant documents. We present the results of a case study analysis using SensePlace to uncover population migration, geopolitical, and other drivers of infectious disease dynamics for measles and other epidemics in Niger. Our analysis demonstrates how our approach can support analysis of complex situations along (a) multi-scale geographic dimensions (i.e., vaccine coverage areas), (b) temporal dimensions (i.e., seasonal population movement and migrations), and (c) diverse thematic dimensions (effects of political upheaval, food security, transient movement, etc.).

4.
In the context of role-based access control (RBAC), mining approaches, such as role mining or organizational mining, can be applied to derive permissions and roles from a system's configuration or from log files. In this way, mining techniques document the current state of a system and produce current-state RBAC models. However, such current-state RBAC models most often follow from structures that have evolved over time and are not the result of a systematic rights management procedure. In contrast, role engineering is applied to define a tailored RBAC model for a particular organization or information system. Thus, role engineering techniques produce a target-state RBAC model that is customized for the business processes supported via the respective information system. The migration from a current-state RBAC model to a tailored target-state RBAC model is, however, a complex task. In this paper, we present a systematic approach to migrate current-state RBAC models to target-state RBAC models. In particular, we use model comparison techniques to identify differences between two RBAC models. Based on these differences, we derive migration rules that define which elements and element relations must be changed, added, or removed. A migration guide then includes all migration rules that need to be applied to a particular current-state RBAC model to produce the corresponding target-state RBAC model. We conducted two comparative studies to identify which visualization technique is most suitable to make migration guides available to human users. Based on the results of these comparative studies, we implemented tool support for the derivation and visualization of migration guides. Our software tool is based on the Eclipse Modeling Framework (EMF). Moreover, this paper describes the experimental evaluation of our tool.
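The model-comparison step described above can be sketched as plain set differences between a current-state and a target-state RBAC model. The role and permission names below are invented for illustration, and the paper's actual tooling operates on EMF-based models rather than Python dictionaries.

```python
# Sketch: derive a migration guide by diffing two RBAC models, where each
# model maps a role name to its set of permissions. Role/permission names
# are hypothetical; the paper's tool works on EMF models.

def rbac_migration_guide(current, target):
    """Return an ordered list of migration rules (add/remove role/permission)."""
    rules = []
    # roles present only in the target model must be added
    for role in sorted(target.keys() - current.keys()):
        rules.append(("add_role", role))
    # roles present only in the current model must be removed
    for role in sorted(current.keys() - target.keys()):
        rules.append(("remove_role", role))
    # for shared roles, diff the permission assignments
    for role in sorted(current.keys() & target.keys()):
        for perm in sorted(target[role] - current[role]):
            rules.append(("add_permission", role, perm))
        for perm in sorted(current[role] - target[role]):
            rules.append(("remove_permission", role, perm))
    return rules

current = {"clerk": {"read_order"}, "temp": {"read_order"}}
target = {"clerk": {"read_order", "update_order"}, "auditor": {"read_log"}}
print(rbac_migration_guide(current, target))
```

Applying every rule in the returned guide to the current-state model yields the target-state model, which mirrors the paper's notion of a migration guide as the complete set of rules to apply.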

5.
Establishing interschema semantic knowledge between corresponding elements in a cooperating OWL-based multi-information-server grid environment requires deep knowledge, not only about the structure of the data represented in each server, but also about the commonly occurring differences in the intended semantics of this data. The same information can be represented in various incompatible structures, and, more importantly, the same structure can be used to represent data with many diverse and incompatible semantics. In a grid environment, interschema semantic knowledge can only be detected if both the structural and semantic properties of the schemas of the cooperating servers are made explicit and formally represented in a way that a computer system can process. Unfortunately, such knowledge is often lacking, and the schemas of the underlying grid information servers (ISs), being semantically weak as a consequence of the limited expressiveness of traditional data models, do not help in acquiring it. The solution to this limitation is first to upgrade the semantic level of the IS local schemas through a semantic enrichment process, augmenting the local schemas of grid ISs into semantically enriched schema models, and then to use these models to detect and represent correspondences between classes belonging to different schemas. In this paper, we investigate the possibility of using OWL-based domain ontologies both for building semantically rich schema models and for expressing interschema knowledge and reasoning about it. We believe that the use of OWL/RDF in this setting has two important advantages. On the one hand, it enables a semantic approach to interschema knowledge specification, concentrating on conceptual and semantic correspondences between both the conceptual (intensional) definition and the set of instances (extension) of classes represented in different schemas. On the other hand, it is exactly this semantic nature of our approach that allows us to devise reasoning mechanisms for discovering and reusing interschema knowledge when the need arises to compare and combine it.

6.
Justification logics are a family of modal epistemic logics that enable reasoning about justifications and evidence. In this paper, we introduce evidence-based multi-agent distributed knowledge logics, called distributed knowledge justification logics. The language of our justification logics contains evidence-based knowledge operators for individual agents and for distributed knowledge, which are interpreted respectively as “t is a justification that agent i accepts for F” and “t is a justification that all agents accept for F if they combine their knowledge and justifications”. We study basic properties of our logics and prove the conservativity of distributed knowledge justification logics over multi-agent justification logics. We present Kripke-style models, pseudo-Fitting and Fitting models, as well as Mkrtychev models (single-world Fitting models), and prove soundness and completeness theorems. We also find a class of Fitting models that satisfies the principle of full communication. Finally, we establish the realization theorem, which states that distributed knowledge justification logics can be embedded into modal distributed knowledge logics, and vice versa.

7.
Current conceptual workflow models use either informally defined conceptual models or several formally defined conceptual models that capture different aspects of the workflow, e.g., the data, process, and organizational aspects. To the best of our knowledge, there are no algorithms that can amalgamate these models to yield a single view of reality. A fragmented conceptual view is useful for systems analysis and documentation. However, it fails to realize the potential of conceptual models to provide a convenient interface for automating the design and management of workflows. First, as a step toward this objective, we propose SEAM (State-Entity-Activity-Model), a conceptual workflow model defined in terms of set theory. Second, to the best of our knowledge, no previous attempt has been made to incorporate time into a conceptual workflow model; SEAM incorporates the temporal aspect of workflows. Third, we apply SEAM to the workflows of a real-life organizational unit and show a subset of the workflows modeled for this organization using SEAM. We also demonstrate, via a prototype application, how the SEAM schema can be implemented on a relational database management system. We present the lessons we learned about the advantages obtained for the organization and, for developers who choose to use SEAM, the potential pitfalls in using the SEAM methodology to build workflow systems on relational platforms. The information contained in this work is sufficient to allow application developers to utilize SEAM as a methodology to analyze, design, and construct workflow applications on current relational database management systems. The definition of SEAM as a context-free grammar, the definition of its semantics, and its mapping to relational platforms should also be sufficient to allow the construction of an automated workflow design and construction tool with SEAM as the user interface.

8.
9.
Many information systems record executed process instances in an event log, a very rich source of information for several process management tasks, such as process mining and trace comparison. In this paper, we present a framework able to convert activities in the event log into higher-level concepts, at different levels of abstraction, on the basis of domain knowledge. Our abstraction mechanism manages non-trivial situations, such as interleaved activities or delays between two activities that abstract to the same concept. Abstracted traces can then be provided as input to an intelligent system implementing a variety of process management tasks, significantly enhancing the quality and usefulness of its output. In particular, we demonstrate how trace abstraction can improve the quality of process discovery, showing that it is possible to obtain more readable and understandable process models. Our experimental results also show how the approach improves the capability of trace comparison and clustering (realized by means of a metric that accounts for abstraction-phase penalties) to highlight (in)correct behaviors while abstracting from details.
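A minimal sketch of the abstraction step, assuming a hypothetical activity-to-concept mapping as the domain knowledge: this simplified version only merges consecutive activities that map to the same concept, whereas the paper's mechanism also handles interleaved activities and delays within a concept episode.

```python
# Sketch: abstract low-level event-log activities into higher-level concepts.
# The activity-to-concept mapping below is a hypothetical stand-in for the
# domain knowledge used by the paper's framework.

CONCEPT = {
    "draw_blood": "sampling", "label_tube": "sampling",
    "run_assay": "analysis", "check_result": "analysis",
}

def abstract_trace(trace, mapping=CONCEPT):
    """Map each activity to its concept, merging consecutive identical concepts.

    Note: this simple version does NOT handle interleaving between concept
    episodes, which the paper's full mechanism addresses.
    """
    abstracted = []
    for activity in trace:
        concept = mapping.get(activity, activity)  # unmapped activities pass through
        if not abstracted or abstracted[-1] != concept:
            abstracted.append(concept)
    return abstracted

print(abstract_trace(["draw_blood", "label_tube", "run_assay", "check_result"]))
```

The abstracted trace (here two concept-level steps instead of four raw activities) is what would be fed to downstream process discovery or clustering.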

10.
As web users disseminate more of their personal information on the web, the possibility of these users becoming victims of lateral surveillance and identity theft increases. Therefore, web resources containing this personal information, which we refer to as identity web references, must be found and disambiguated to produce a unary set of web resources that refer to a given person. Such is the scale of the web that forcing web users to monitor their identity web references is not feasible; therefore, automated approaches are required. However, automated approaches require background knowledge about the person whose identity web references are to be disambiguated. In this paper, we present a detailed approach to monitoring the web presence of a given individual by obtaining background knowledge from Web 2.0 platforms to support automated disambiguation processes. We present a methodology for generating this background knowledge by exporting data from multiple Web 2.0 platforms as RDF data models and combining these models for use as seed data. We present two disambiguation techniques: the first uses a semi-supervised machine learning technique known as self-training, and the second uses a graph-based technique known as random walks; we explain how the semantics of the data supports the intrinsic functionalities of these techniques. We compare the performance of our disambiguation techniques against several baseline measures, including human processing of the same data. We achieve an average precision of 0.935 for self-training and an average F-measure of 0.705 for random walks, in both cases outperforming several baseline measures.

11.
A building occupant’s experiences are not passive responses to environmental stimuli, but the results of multifaceted, prolonged interactions between people and space. We present a framework and prototype software tool for logically reasoning about occupant perception and behaviour in the context of dynamic aspects of buildings in operation, based on qualitative deductive rules. In particular, we focus on the co-presence of different user groups and the resulting impact on perceptual and functional affordances of spatial layouts, utilising the concept of spatial artefacts. As a first proof of concept, we have implemented a prototype crowd analysis tool in our new system ASP4BIM, developed specifically to support architectural design reasoning for public-facing buildings with complex signage systems and diverse intended user groups. We evaluate our prototype on the Urban Sciences Building at Newcastle University, a large, state-of-the-art living laboratory and multipurpose academic building. Our findings are that the ASP4BIM-based prototype supports a range of novel query services, logically derived through qualitative deductive rules, for formally analysing the impacts of crowds on pedestrians, complementing other powerful crowd analysis approaches such as agent-based simulation.

12.
宋建炜, 邓逸川, 苏成. Journal of Graphics (《图学学报》), 2021, 42(2): 307-315
The analysis of construction safety accidents is an important part of construction safety management, but the safety knowledge scattered across accident reports is not well reused and therefore provides insufficient guidance for construction safety management. A knowledge graph is a tool for storing and reusing knowledge in structured form; it can support rapid retrieval of accident cases, analysis of accident association paths, and statistical analysis, thereby improving construction safety management. Named entity recognition (NER) is a key step in the automatic construction of knowledge graphs. Current NER research concentrates on domains such as medicine, finance, and the military, while no NER studies have yet addressed the construction safety domain. Based on the application requirements of a knowledge graph for construction safety, five concept classes for the domain were defined and an entity annotation scheme was specified. An NER model for the construction safety domain is proposed, which uses an improved BERT (Bidirectional Encoder Representations from Transformers) pre-trained language model to obtain dynamic character embeddings and a BiLSTM-CRF (bidirectional long short-term memory with conditional random field) model to obtain the optimal entity label sequence. To train the model and verify its recognition performance, 1,000 construction safety accident reports were collected, cleaned, and annotated as the experimental corpus. Experiments show that, compared with traditional models, the proposed model achieves better recognition results on construction safety accident texts.
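The BERT + BiLSTM-CRF model described above outputs a label sequence over the input characters. As an illustration of just the final decoding step, the sketch below converts a predicted BIO tag sequence into entity spans; the tokens, tags, and entity classes are hypothetical English stand-ins, not the paper's five concept classes.

```python
# Sketch: turn a predicted BIO label sequence into (entity_type, text) spans.
# Tokens, tags, and entity types are invented examples for illustration.

def bio_to_entities(tokens, tags):
    """Collect entity spans from a BIO-tagged token sequence."""
    entities, current, ctype = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # start of a new entity
            if current:
                entities.append((ctype, " ".join(current)))
            current, ctype = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(token)          # continuation of the open entity
        else:                              # "O" tag or inconsistent I- tag
            if current:
                entities.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:                            # flush a trailing open entity
        entities.append((ctype, " ".join(current)))
    return entities

tokens = ["worker", "fell", "from", "scaffold", "on", "site", "A"]
tags = ["B-PER", "O", "O", "B-EQUIP", "O", "B-LOC", "I-LOC"]
print(bio_to_entities(tokens, tags))
```

In the full pipeline, the CRF layer is what produces a globally consistent tag sequence (e.g., preventing an I- tag from following an O tag) before this span-collection step runs.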

13.
Reversibility is a key issue in the interface between computation and physics, and of growing importance as miniaturization progresses towards its physical limits. Most foundational work on reversible computing to date has focussed on simulations of low-level machine models. By contrast, we develop a more structural approach. We show how high-level functional programs can be mapped compositionally (i.e. in a syntax-directed fashion) into a simple kind of automata which are immediately seen to be reversible. The size of the automaton is linear in the size of the functional term. In mathematical terms, we are building a concrete model of functional computation. This construction stems directly from ideas arising in Geometry of Interaction and Linear Logic—but can be understood without any knowledge of these topics. In fact, it serves as an excellent introduction to them. At the same time, an interesting logical delineation between reversible and irreversible forms of computation emerges from our analysis.

14.
This paper describes the application of requirements engineering concepts to support the analysis of the impact of new software systems on system-wide goals. Requirements on a new or revised software component of a socio-technical system not only have implications for the goals of the subsystem itself, but also impact the goals of the existing integrated system. In industries such as air traffic management and healthcare, impacts need to be identified and demonstrated in order to assess concerns such as risk, safety, and accuracy. A method called PiLGRIM was developed that integrates means-end relationships within goal modelling with knowledge associated with the application domain. The relationship between domain knowledge and requirements, as described in a satisfaction argument, adds traceability rationale to help determine the impacts of new requirements across a network of heterogeneous actors. We report procedures that human analysts follow to use the concepts of satisfaction arguments in a software tool for i* goal modelling. Results were demonstrated using models and arguments developed in two case studies, each featuring a distinct socio-technical system: a new controlled airspace infringement detection tool for NATS (the UK's air navigation service provider), and a new version of the UK's HIV/AIDS patient reporting system. The results provide evidence for our claims that the conceptual integration of i* and satisfaction arguments is usable and useful to human analysts, and that the PiLGRIM impact analysis procedures and tool support are effective and scale to model and analyse large and complex socio-technical systems.

15.
Modern data centers play an important role in a world full of information and communication technologies (ICTs). Much effort has gone into building more efficient, cleaner data centers for economic, social, and environmental benefits, an objective enabled by emerging technologies such as cloud computing and software-defined networking (SDN). However, a data center is inherently heterogeneous, consisting of servers, networking devices, cooling devices, power supply devices, etc., resulting in daunting management and control challenges. Previous approaches typically focus on a single domain, for example, traditional cloud computing for server resource management (e.g., computing and storage resources) and SDN for network management. In a similar context of networking device heterogeneity, network function virtualization has been proposed to offer a standard abstract interface for managing all networking devices. In this research, we take on the challenge of building a suite of unified middleware to monitor and control the three intrinsic subsystems in a data center: ICT, power, and cooling. Specifically, we present μDC², a unified, scalable, IP-based data collection system for data center management with elevated extensibility, as an initial step toward a unified platform for data center operations. Our system consists of three main parts: data-source adapters for information collection over the various subsystems in a data center, a unified message bus for data transfer, and a high-performance database for persistent data storage. We have conducted performance benchmarks for the key building components, namely the messaging server and the database, confirming that our system scales to a data center with high device density and real-time management requirements. Key features, such as configuration files, dynamic module loading, and data compression, enhance our implementation with high extensibility and performance. The effectiveness of the proposed data collection system is verified by sample applications, such as traffic flow migration for load balancing, VM migration for resource reservation, and server power management for hardware safety. This research lays a foundation for unified data center management in the future.

16.
17.
In this paper, we present an approach to document enrichment, which consists of developing and integrating formal knowledge models with archives of documents, to provide intelligent knowledge retrieval and (possibly) additional knowledge-intensive services, beyond what is currently available using “standard” information retrieval and search facilities. Our approach is ontology-driven, in the sense that the construction of the knowledge model is carried out in a top-down fashion, by populating a given ontology, rather than in a bottom-up fashion, by annotating a particular document. In this paper, we give an overview of the approach and we examine the various types of issues (e.g. modelling, organizational and user interface issues) which need to be tackled to effectively deploy our approach in the workplace. In addition, we also discuss a number of technologies we have developed to support ontology-driven document enrichment and we illustrate our ideas in the domains of electronic news publishing, scholarly discourse and medical guidelines.

18.
陈远, 任荣. Journal of Graphics (《图学学报》), 2016, 37(6): 816
With the acceleration of urbanization in China, urban fires are on the rise, fire prevention and control are becoming more difficult, and the requirements for fire protection design and fire safety management continue to grow. The ongoing development of building information modeling (BIM) technology in recent years offers new ideas and methods for building fire protection design and fire safety management. This study proposes an application framework for BIM-based building fire safety management comprising three sub-models: BIM-based fire protection design, BIM-based fire emergency plan management, and BIM-based intelligent fire protection systems. A case study of a real project verifies the implementation and application value of BIM technology in building fire safety management, providing a reference and technical support for improving building fire safety management.

19.
Four-dimensional models, which are 3D models with an added dimension representing schedule information, have become an important tool for representing construction processes. These models usually rely on colors to represent the different construction states, such that when an ideal color scheme is used, engineers can understand the model and identify potential problems more easily. However, little research has been conducted in this area to date. This paper presents the selection, examination, and user test (SEUT) procedure, a systematic procedure for determining the ideal color scheme for a 4D model. The procedure can be performed iteratively to obtain an ideal color scheme that fits a 4D model's construction purposes. Following the proposed procedure in an example case, we determined an ideal color scheme for six construction states of a 4D model for plant construction. In total, ten color schemes were examined, and testing was conducted with 58 users over two iterations. The results show that the SEUT procedure is an effective method for determining color schemes for presenting 4D models; an ideal color scheme was validated and recommended in this research.

20.
One of the most impressive characteristics of human perception is its domain adaptation capability. Humans can recognize objects and places simply by transferring knowledge from past experience. Inspired by this, current research in robotics is addressing a great challenge: building robots able to sense and interpret the surrounding world by reusing information previously collected, gathered by other robots, or obtained from the web. But how can a robot automatically determine what is useful within a large amount of information and perform knowledge transfer? In this paper, we address the domain adaptation problem in the context of visual place recognition. We consider the scenario where a robot equipped with a monocular camera explores a new environment. In this situation, traditional approaches based on supervised learning perform poorly, as no annotated data are available in the new environment and models learned from data collected elsewhere are inappropriate due to the large variability of visual information. To overcome these problems, we introduce a novel transfer learning approach. With our algorithm, the robot is given only some training data (annotated images collected in different environments by other robots) and is able to decide whether, and how much, this knowledge is useful in the current scenario. At the base of our approach is a transfer risk measure that quantifies the similarity between the given and the new visual data. To improve performance, we also extend our framework to take multiple visual cues into account. Our experiments on three publicly available datasets demonstrate the effectiveness of the proposed approach.
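As an illustrative stand-in for the transfer risk measure mentioned above, the sketch below scores the dissimilarity of two feature distributions with an L1 histogram distance. The binning scheme and scalar features are hypothetical; the actual measure in the paper is more elaborate and operates on visual descriptors.

```python
# Sketch: a toy "transfer risk" as the L1 distance between normalized feature
# histograms of the source (training) and target (new environment) data.
# Features and binning are hypothetical stand-ins for real visual descriptors.

def histogram(values, bins=4, lo=0.0, hi=1.0):
    """Normalized histogram of scalar features in [lo, hi]."""
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    total = float(len(values))
    return [c / total for c in counts]

def transfer_risk(source_feats, target_feats):
    """0.0 = identical distributions (low risk); 2.0 = disjoint (high risk)."""
    hs, ht = histogram(source_feats), histogram(target_feats)
    return sum(abs(a - b) for a, b in zip(hs, ht))

src = [0.1, 0.2, 0.15, 0.3]   # features from images collected elsewhere
tgt = [0.8, 0.9, 0.85, 0.7]   # features from the robot's new environment
print(round(transfer_risk(src, tgt), 2))
```

A robot could threshold such a score to decide whether, and how strongly, to weight the transferred training data in the new environment.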


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). ICP license: 京ICP备09084417号