Similar Documents
20 similar documents found (search time: 15 ms)
1.
Interactions between built infrastructure are complex and nuanced; changes to any one component can have disproportionate effects on the system as a whole. For instance, adoption of heat pumps or electric vehicles by a significant proportion of a population in an urban centre would place new demands on both electricity transmission and distribution networks. It is essential that planners – both national and local – can understand and share information about the resource demands that this type of change places on national and local infrastructure. Access to integrated sources of information – from building component to national levels – is key to supporting policy makers and decision takers. However, over time, information – and, as a consequence, the software that manages it – has evolved into functional silos; this has, in turn, affected the definition of data exchange standards. This limits the ability of experts in functional areas to exchange data and implement broader decision support systems. This paper describes the use of linked data approaches to permit queries across large, diverse information sources and to support reasoning about complex questions at multiple scales. The methodology defines a central context to which various external sources can be attached. These distributed sources are registered in a central catalogue but remain under the control of their source organisations. In this way, a large, extensible, interconnected network of distributed data can be built, describing, for example, a built environment or an electricity transmission network; this network of data resources can be queried centrally to provide customised views of subsets of the data, and so provide a richer view than any one source in isolation. The approach was applied to integrate information about Ireland’s transmission grid and administrative boundaries, along with the domestic housing stock, into a single data source.
The resulting data network is queried by a scenario exploration tool, which allows economists to analyse, at a national level, the effects of the adoption of new technologies on Ireland’s national grid.
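The catalogue-plus-distributed-sources design described above can be sketched in a few lines of Python. The source names, predicates, and dwelling counts below are invented for illustration; a real deployment would register SPARQL endpoints and RDF stores rather than in-memory triple lists.

```python
# Minimal sketch of a central catalogue over distributed triple sources.
# Each "source" registers with the catalogue but remains a separate store,
# mirroring the paper's distributed design. All data here is hypothetical.
CATALOGUE = {
    "grid": [
        ("substation:A", "feeds", "county:Dublin"),
        ("substation:B", "feeds", "county:Cork"),
    ],
    "housing": [
        ("county:Dublin", "dwellings", 590000),
        ("county:Cork", "dwellings", 220000),
    ],
}

def query(pattern, sources=None):
    """Match an (s, p, o) pattern (None = wildcard) across registered sources."""
    hits = []
    for name in (sources or CATALOGUE):
        for s, p, o in CATALOGUE[name]:
            if all(q is None or q == v for q, v in zip(pattern, (s, p, o))):
                hits.append((name, (s, p, o)))
    return hits

# Cross-source view: which counties does substation A feed, and how many
# dwellings do they contain?
fed = [o for _, (_, _, o) in query(("substation:A", "feeds", None))]
stock = {s: o for _, (s, _, o) in query((None, "dwellings", None)) if s in fed}
print(stock)  # {'county:Dublin': 590000}
```

Each source stays under its owner's control (here, its own catalogue entry), while the central query function provides the integrated, cross-source view the paper describes.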

2.
On-Line Analytical Processing (OLAP) is a data analysis technique typically used on local and well-prepared data. However, initiatives like Open Data and Open Government bring new, publicly available data to the web that are to be analyzed in the same way. The use of semantic web technologies in this context is especially encouraged by the Linked Data initiative. There is already a considerable amount of statistical linked open data sets published using the RDF Data Cube Vocabulary (QB), which is designed for these purposes. However, QB lacks some essential schema constructs (e.g., dimension levels) to support OLAP. Thus, the QB4OLAP vocabulary has been proposed to extend QB with the necessary constructs and be fully compliant with OLAP. In this paper, we focus on the enrichment of an existing QB data set with QB4OLAP semantics. We first thoroughly compare the two vocabularies and outline the benefits of QB4OLAP. Then, we propose a series of steps to automate the enrichment of QB data sets with specific QB4OLAP semantics, the most important being the definition of aggregate functions and the detection of new concepts in the construction of dimension hierarchies. The proposed steps form a semi-automatic enrichment method, which is implemented in a tool that enables enrichment in an interactive and iterative fashion. The user can enrich the QB data set with QB4OLAP concepts (e.g., full-fledged dimension hierarchies) by choosing among the candidate concepts automatically discovered by the proposed steps. Finally, we conduct experiments with 25 users and three real-world QB data sets to evaluate our approach. The evaluation demonstrates the feasibility of our approach and shows that, in practice, our tool facilitates, speeds up, and guarantees correct results of the enrichment process.
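One of the enrichment steps the abstract highlights, discovering the hierarchy levels hidden behind a flat QB dimension, can be illustrated with a toy sketch. The geographic members and their parent links below are invented; a real implementation would follow skos:broader-style links in the RDF graph.

```python
# Hypothetical flat dimension members with parent links (None = root).
broader = {
    "city:Ghent":          "prov:EastFlanders",
    "city:Leuven":         "prov:FlemishBrabant",
    "prov:EastFlanders":   "region:Flanders",
    "prov:FlemishBrabant": "region:Flanders",
    "region:Flanders":     None,
}

def level_of(member):
    """Depth of a member = number of broader-links up to the hierarchy root."""
    depth = 0
    while broader[member] is not None:
        member = broader[member]
        depth += 1
    return depth

# Group members into candidate QB4OLAP levels by their depth.
levels = {}
for m in broader:
    levels.setdefault(level_of(m), []).append(m)

print(sorted(levels))     # [0, 1, 2] -> three candidate levels discovered
print(levels[0])          # ['region:Flanders']
```

Grouping by depth yields the candidate levels (region, province, city) that a user could confirm interactively, as the tool in the paper does.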

3.
This paper proposes a template-based approach to semi-automatically create contextualized learning tasks out of several sources from the Web of Data. The contextualization of learning tasks opens the possibility of bridging formal learning, which happens in a classroom, and informal learning, which happens in other physical spaces, such as squares or historical buildings. The tasks created cover different cognitive levels and are contextualized by their location and the topics covered. We applied this approach to the domain of History of Art in the Spanish region of Castile and Leon. We gathered data from DBpedia, Wikidata and the Open Data published by the regional government and applied 32 templates to obtain 16K learning tasks. An evaluation with 8 teachers shows that teachers would accept their students carrying out the generated tasks. Teachers also considered that 85% of the generated tasks are aligned with the content taught in the classroom and found them relevant for learning in other informal spaces. The tasks created are available at https://casuallearn.gsic.uva.es/sparql.
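The template mechanism can be sketched with plain string formatting; the template wording and the facts below are illustrative stand-ins for what the paper draws from DBpedia, Wikidata and the regional Open Data.

```python
# A hypothetical task template with slots for location and topic context.
TEMPLATE = ("Visit {monument} in {place} and describe the main features "
            "of its {style} style.")

# Facts that would, in the real system, come from SPARQL queries.
facts = [
    {"monument": "Burgos Cathedral", "place": "Burgos", "style": "Gothic"},
    {"monument": "San Isidoro", "place": "Leon", "style": "Romanesque"},
]

tasks = [TEMPLATE.format(**f) for f in facts]
print(tasks[0])
# Visit Burgos Cathedral in Burgos and describe the main features of its Gothic style.
```

Applying a few dozen such templates to thousands of retrieved facts is what scales the approach to the 16K tasks reported above.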

4.
ABSTRACT

The article analyzes an open data movement in the unusual context of a highly developed digital economy and widespread e-government services, in a country universally known as a global leader in promoting the information society and electronic democracy, yet paradoxically showing modest results in propagating the presumably related concept of open government data. Paying special attention to the main drivers, stakeholders and challenges of the open data movement in Estonia, the author argues that the highly centralized administrative policy previously used by the authorities to advance various technology-driven public reforms – which partly explains this Nordic state's truly impressive advances in e-government, e-commerce, e-banking and e-voting – does not necessarily lead to equally effective results in the open data domain. On the contrary, the presence of established democratic institutions and a developed civil society, as well as an advanced and dynamic private ICT industry that values competition and professional curiosity, along with a strong sense of patriotism and an adherence to one's neighborhood deeply rooted in Estonian society, has played a much more important role in diffusing the concept than traditional government directives and strategies.

5.
Data streams are becoming omnipresent on the Web. The Stream Reasoning (SR) paradigm, which combines Stream Processing with Semantic Web techniques, has been successful in processing these data streams. Progress in SR research has led to several applications in domains such as the Internet of Things, social media analysis, Smart Cities, and many others. Each of these applications produces and consumes data streams; however, there are no fixed guidelines on how to manage data streams on the Web, as there are for their static counterparts. More specifically, there is no fixed life cycle for Streaming Linked Data (SLD) yet. Tommasini et al. (2020) introduced an initial proposal for an SLD life cycle, but it has not been verified that the proposed life cycle captures existing applications, and no guidelines were given for the individual steps. In this paper, we survey existing SR applications and check whether the life cycle proposed by Tommasini et al. fully captures them. Based on our analysis, we found that some of the steps needed reordering or splitting up. This paper proposes an update of the life cycle and surveys the existing literature for each life cycle step, proposing a number of guidelines and best practices. Compared to the initial proposal by Tommasini et al., we drill down into the details of the processing step, which was previously neglected. The updated life cycle and guidelines serve as a blueprint for future SR applications. A life cycle for SLD that allows data streams on the web to be managed efficiently brings us a step closer to the realization of the SR vision.

6.
Open Self-Medication is a Web application that better informs people when treating undiagnosed medical ailments with unprescribed, over-the-counter drugs, i.e., when self-medicating. The application achieves this goal by providing a set of functionalities that ensure the safety and efficiency of this practice. The system’s most critical operations are processed using a self-medication knowledge base, expressed in OWL, which has been inductively built from medical information obtained from a similar French project. A main characteristic of this application is that almost all the data processed by the system and presented to the end user comes from a subset of the LOD data sets, namely DrugBank, DailyMed, Sider and DBpedia. This paper motivates the design of such an application, presents the design choices, describes some implementation details and discusses lessons learned and future work.

7.
While the potential benefits of open government data (OGD) initiatives are significant, there has often been a lack of participation by public agencies in these efforts. Motivated by this challenge and a corresponding research gap, we develop a theoretically grounded model to explain what drives public agencies to share their data on OGD platforms. Model testing with survey and objective data from 102 public agencies indicates that agencies’ resource dependence on external innovators significantly impacts their data sharing behavior. Furthermore, conformity need and the sensitivity of their function also influence agencies’ data sharing behavior. Contributions toward research and practice are discussed.

8.
Linked Data Wrappers (LDWs) turn Web APIs into RDF end-points, leveraging the Linked Open Data cloud with current data. Unfortunately, LDWs are fragile upon upgrades to the underlying APIs, compromising LDW stability. Hence, for API-based LDWs to become a sustainable foundation for the Web of Data, we should recognize LDW maintenance as a continuous effort that outlives their breakout projects. This is not new in Software Engineering; other projects in the past faced similar issues. The strategy: becoming open source and turning towards dedicated platforms. By making LDWs open, we permit others not only to inspect them (hence increasing trust and consumption), but also to maintain them (to cope with API upgrades) and reuse them (to adapt them for their own purposes). Promoting consumption, adaptation and reuse might all help to increase the user base and, in so doing, might provide the critical mass of volunteers that current LDW projects lack. Drawing upon Helping Theory, we investigate three enablers of volunteering applied to LDW maintenance: impetus to respond, positive evaluation of contributing, and increasing awareness. Insights are fleshed out through SYQL, an LDW platform on top of Yahoo’s YQL. Specifically, SYQL capitalizes on the YQL community (i.e. impetus to respond), provides annotation overlays to ease participation (i.e. positive evaluation of contributing), and introduces a Health Checker (i.e. increasing awareness). Evaluation is conducted with 12 subjects involved in maintaining someone else’s LDWs. Results indicate that both the Health Checker and the annotation overlays provide utility as enablers of awareness and contribution.

9.
Recently, several methods have been proposed for introducing Linked Open Data (LOD) into recommender systems. LOD can be used to enrich the representation of items by leveraging RDF statements and adopting graph-based methods to implement effective recommender systems. However, most of those methods do not exploit embeddings of entities and relations built on knowledge graphs, such as datasets coming from the LOD. In this paper, we propose a novel recommender system based on holographic embeddings of knowledge graphs built from Wikidata, a free and open knowledge base that can be read and edited by both humans and machines. The evaluation performed on three standard datasets (Movielens 1M, Last.fm and LibraryThing) shows promising results, which confirm the effectiveness of the proposed method.
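The holographic embeddings mentioned above (HolE) score a triple by circularly correlating the head and tail vectors and projecting the result onto the relation vector. A minimal sketch with toy two-dimensional vectors in place of trained embeddings:

```python
import math

def circ_corr(a, b):
    """Circular correlation: result[k] = sum_i a[i] * b[(i+k) mod d]."""
    d = len(a)
    return [sum(a[i] * b[(i + k) % d] for i in range(d)) for k in range(d)]

def score(h, r, t):
    """HolE plausibility of triple (h, r, t): sigmoid of r . (h * t)."""
    x = sum(ri * ci for ri, ci in zip(r, circ_corr(h, t)))
    return 1 / (1 + math.exp(-x))

# Toy check: with these vectors the correlation picks out the second slot.
print(circ_corr([1, 0], [0, 1]))                 # [0, 1]
print(round(score([1, 0], [0, 1], [0, 1]), 3))   # 0.731
```

In the real system, the vectors come from training on Wikidata triples; the compositional correlation is what lets a single relation vector score interactions between all embedding dimensions.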

10.
I-Ching Hsu, Yin-Hung Lin. Software, 2020, 50(12): 2293-2312
Open government data (OGD) is a type of trusted information that can be used to verify the correctness of information on social platforms. Finding interesting OGD that serves personalized needs and facilitates the development of social platforms is a challenging research topic. This study explores how to link the Taiwanese government's open data platform with Facebook and how to recommend related OGD. First, a cloud computing framework that integrates machine learning with semantic web technologies is defined. Next, the linked data query platform (LDQP) is developed to validate the framework's feasibility. The LDQP provides a graphical approach to personal queries and links to related Facebook fan pages. When users log in to Facebook via the LDQP, it automatically finds highly relevant OGD based on the recent topics those users are following on Facebook. In this way, the LDQP query result can be dynamically adjusted and graphically displayed according to the user's Facebook activity.

11.
The availability of large amounts of open, distributed, and structured semantic data on the web has no precedent in the history of computer science. In recent years, there have been important advances in semantic search and question answering over RDF data. In particular, natural language interfaces to online semantic data have the advantage that they can exploit the expressive power of Semantic Web data models and query languages, while at the same time hiding their complexity from the user. However, despite the increasing interest in this area, there have been no evaluations so far that systematically assess such systems, in contrast to traditional question answering and search interfaces to document spaces. To address this gap, we set up a series of evaluation challenges for question answering over linked data. The main goal of the challenge was to gain insight into the strengths, capabilities, and current shortcomings of question answering systems as interfaces for querying linked data sources, as well as to benchmark how these interaction paradigms deal with the fact that the amount of RDF data available on the web is very large and heterogeneous with respect to the vocabularies and schemas used. Here, we report the results of the first two such evaluation campaigns. We also discuss how the second evaluation addressed some of the issues and limitations that arose from the first one, as well as the open issues to be addressed in future competitions.

12.
Publishing and sharing open government data in Linked Data format provides many opportunities in terms of data aggregation/integration and the creation of information mashups. Statistical data, which contains various performance indicators and their evolution through time, is an example of data that can serve as the foundation for policy prediction, planning and adjustment, and can be re-used in different applications. However, because Linked Data is a relatively new field, there is currently a lack of tools that enable efficient exploration and analysis of linked geospatial statistical datasets. Therefore, the ESTA-LD (Exploratory Spatio-Temporal Analysis) tool was developed to address some of these Linked statistical Data management issues, such as crossing the statistical and geographical dimensions, producing statistical maps, visualizing different measures, and comparing statistical indicators of different regions through time. This paper discusses the modeling approach adopted so that the published data conform to the established standards for representing statistical, spatial and temporal data in Linked Data format. The main contribution is the delivery of state-of-the-art open-source tools for the retrieval, quality assessment, exploration and analysis of statistical Linked Data made available through a SPARQL endpoint.

13.
As a key strategic resource for national and social development, open government data holds enormous value, but China lacks standards for assessing the security risks of open government data, exposing national data security to risk. Drawing on information security risk assessment theory, this paper takes national security assets, the vulnerability of open data, and security threats as the main risk factors to construct a security risk assessment model for open government data. The analytic hierarchy process (AHP) and fuzzy comprehensive evaluation are used to quantitatively assess the security risks of open government data, and the model's effectiveness is verified through a case study.
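The quantitative step of such a model, combining AHP-derived factor weights with a fuzzy membership matrix, can be sketched as follows. All weights and membership grades below are invented for illustration, not taken from the paper.

```python
# Hypothetical AHP weights for the three risk factors named in the abstract.
weights = [0.5, 0.3, 0.2]   # security assets, data vulnerability, threats

# Membership grade of each factor in the risk levels [low, medium, high],
# as would be elicited from expert judgments.
R = [
    [0.2, 0.5, 0.3],   # security assets
    [0.1, 0.4, 0.5],   # open-data vulnerability
    [0.3, 0.4, 0.3],   # security threats
]

# Fuzzy comprehensive evaluation B = w o R with the weighted-average operator.
B = [sum(w * row[j] for w, row in zip(weights, R)) for j in range(3)]
level = ["low", "medium", "high"][B.index(max(B))]
print([round(b, 2) for b in B], level)  # [0.19, 0.45, 0.36] medium
```

The maximum-membership rule then reads off the overall risk level, here "medium" under these invented inputs.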

14.
This article aims to share some findings about the potential value that can be obtained from the aggregation of public procurement data at a pan-European scale. The period of calculation of the “public procurement advertised in the Official Journal as % of GDP” official indicator could be significantly shortened and the cost of production brought down by harnessing the power of open data. The value of public procurement openly advertised in six countries has been calculated for three different types of prices published in the contract award notices submitted to the OJ. The three rounds of calculations have been compared against the official data released by Eurostat. This article shows how the calculation and discussion of official economic indicators becomes possible for an individual or organization (Euroalert.net) thanks to the availability of open government data (TED), open source software and cloud tools (Google BigQuery) that empower citizens and drive innovation.
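The indicator itself is a simple ratio, which is why open data makes it reproducible by individuals; a toy calculation with made-up figures rather than real TED or Eurostat data:

```python
# Hypothetical summed values of contract award notices for one country (EUR).
awards_eur = [1.2e9, 0.8e9, 2.5e9]
gdp_eur = 180e9  # hypothetical GDP for the same year

# "Public procurement advertised in the Official Journal as % of GDP".
indicator = 100 * sum(awards_eur) / gdp_eur
print(f"{indicator:.2f}% of GDP")  # 2.50% of GDP
```

The engineering effort the article describes lies in assembling and cleaning the award-notice values at scale, not in the arithmetic itself.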

15.
Advances in remote sensing technologies have allowed us to send an ever-increasing number of satellites into orbit around Earth. As a result, Earth Observation data archives have been constantly increasing in size in the last few years and have become a valuable source of data for many scientific and application domains. When Earth Observation data is coupled with other data sources, many pioneering applications can be developed. In this paper we show how Earth Observation data, ontologies, and linked geospatial data can be combined for the development of a wildfire monitoring service that goes beyond applications currently deployed in various Earth Observation data centers. The service has been developed in the context of the European project TELEIOS, which faces the challenges of extracting knowledge from Earth Observation data head-on, capturing this knowledge by semantic annotation encoded using Earth Observation ontologies, and combining these annotations with linked geospatial data to allow the development of interesting applications.

16.
Billions of Linked Data triples exist in thousands of RDF knowledge graphs on the Web, but few of those graphs can be queried live from Web applications. Only a limited number of knowledge graphs are available in a queryable interface, and existing interfaces can be expensive to host at high availability. To mitigate this shortage of live queryable Linked Data, we designed a low-cost Triple Pattern Fragments interface for servers, and a client-side algorithm that evaluates SPARQL queries against this interface. This article describes the Linked Data Fragments framework to analyze Web interfaces to Linked Data and uses this framework as a basis to define Triple Pattern Fragments. We describe client-side querying for single knowledge graphs and federations thereof. Our evaluation verifies that this technique reduces server load and increases caching effectiveness, which leads to lower costs to maintain high server availability. These benefits come at the expense of increased bandwidth and slower, but more stable query execution times. These results substantiate the claim that lightweight interfaces can lower the cost for knowledge publishers compared to more expressive endpoints, while enabling applications to query the publishers’ data with the necessary reliability.
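The division of labour the article describes, a server that only answers single triple patterns and a client that joins them, can be sketched as follows; the tiny dataset and the two-pattern query are illustrative assumptions.

```python
# What the (hypothetical) server holds.
DATA = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("bob", "type", "Person"),
    ("carol", "type", "Person"),
]

def fragment(s=None, p=None, o=None):
    """Server side: answer one triple pattern (a 'fragment'), nothing more."""
    return [t for t in DATA
            if all(q is None or q == v for q, v in zip((s, p, o), t))]

# Client side, evaluating: ?x knows ?y . ?y type Person
# Bind ?y from the first pattern, then request one fragment per binding
# for the second pattern -- the join happens entirely on the client.
results = []
for s, _, o in fragment(p="knows"):
    if fragment(s=o, p="type", o="Person"):
        results.append((s, o))
print(results)  # [('alice', 'bob'), ('bob', 'carol')]
```

Because every server request is a plain, cacheable pattern lookup, server load stays low; the cost is the extra round trips the client makes per binding, matching the bandwidth/latency trade-off reported above.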

17.
Cities are increasingly prone to urban flooding due to heavier rainfall, denser populations, increasing imperviousness, and aging infrastructure. Urban pluvial flooding causes damage to buildings and contents, and disturbs stormwater drainage, transportation, and electricity provision. Designing and implementing efficient adaptation measures requires a proper understanding of the urban response to heavy rainfall. However, implemented stormwater drainage models lack flood impact data for calibration, which results in poor flood predictions. Moreover, such models only consider rainfall and hydraulic parameters, neglecting the role of other natural, built, and social conditions in flooding mechanisms. This paper explores the potential of open spatial datasets to explain the occurrence of citizen-reported flood incidents during a heavy rain event. After a dimensionality reduction, imperviousness and proximity to the watershed outflow point were found to significantly explain up to half of the variability in flooding incidents, proving the usefulness of the proposed approach for urban flood modelling and management.

18.
Memories are an important aspect of a person’s life and experiences. The area of human digital memories focuses on encapsulating this phenomenon, in a digital format, over a lifetime. Through the proliferation of ubiquitous devices, both people and the surrounding environment are generating a phenomenal amount of data. With all of this disjointed information available, successfully searching it and bringing it together to form a human digital memory is a challenge. This is especially true when a lifetime of data is being examined. Linked Data provides an ideal, and novel, solution for overcoming this challenge, where a variety of data sources can be drawn upon to capture detailed information surrounding a given event. Memories created in this way contain vivid structures and varied data sources, which emerge through the semantic clustering of content and other memories. This paper presents DigMem, a platform for creating human digital memories based on device-specific services and the user’s current environment. In this way, information is semantically structured to create temporal “memory boxes” for human experiences. A working prototype has been successfully developed, which demonstrates the approach. In order to evaluate the applicability of the system, a number of experiments have been undertaken. These have been successful in creating human digital memories and illustrating how a user can be monitored in both indoor and outdoor environments. Furthermore, the user’s heartbeat information is analysed to determine his or her heart rate. This has been achieved with the development of a QRS Complex detection algorithm and a heart rate calculation method. These methods process collected electrocardiography (ECG) information to discern the heart rate of the user. This information is essential in illustrating how certain situations can make the user feel.
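The QRS-detection idea, locating R-peaks in the ECG and deriving heart rate from the R-R intervals, can be sketched on a synthetic signal; the simple thresholding below is a deliberately simplified stand-in for the paper's algorithm, and the signal and sampling rate are assumptions.

```python
fs = 250  # sampling rate in samples per second (assumed)

# Synthetic ECG stand-in: a flat signal with an R-peak every 200 samples,
# i.e. one beat every 0.8 s, which corresponds to 75 bpm.
signal = [0.0] * 1000
for i in range(100, 1000, 200):
    signal[i] = 1.0

def r_peaks(sig, threshold=0.5):
    """Local maxima above a threshold, taken as R-peak candidates."""
    return [i for i in range(1, len(sig) - 1)
            if sig[i] > threshold and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]]

peaks = r_peaks(signal)
rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]  # R-R intervals (s)
bpm = 60 / (sum(rr) / len(rr))
print(len(peaks), round(bpm))  # 5 75
```

Real ECG requires band-pass filtering and adaptive thresholds (as in Pan-Tompkins-style detectors), but the interval-to-rate conversion is exactly this.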

19.
Smart Cities use Information and Communication Technologies (ICT) to manage the resources and services offered by a city more efficiently and to make them more approachable to all its stakeholders (citizens, companies and public administration). In contrast to the view of big corporations promoting holistic “smart city in a box” solutions, this work proposes that smarter cities can be achieved by combining already available infrastructure, i.e., Open Government Data and sensor networks deployed in cities, with citizens’ active contributions to city knowledge by means of their smartphones and the apps running on them. In addition, this work introduces the main characteristics of the IES Cities platform, whose goal is to ease the generation of citizen-centric apps that exploit urban data in different domains. The proposed vision is achieved by providing a common access mechanism to the heterogeneous data sources offered by the city, which reduces the complexity of accessing the city’s data while bringing citizens closer to a prosumer (both consumer and producer) role and allowing legacy data to be integrated into the city’s data ecosystem.

20.
The use of Information and Communication Technologies (ICT) tools to support learning activities is nowadays widespread. Several educational registries provide information about ICT tools in order to help educators discover and select them. These registries are typically isolated and require much effort to keep tool information up to date. To address this issue, this paper explores whether educational tool registries can be federated with other datasets currently available on the Web of Data. In order to answer this question, and following the Linked Data approach, this paper proposes to collect data from third-party sources, align it to a vocabulary understandable by educators and, finally, publish it to be consumed by educational applications. This way, an incipient educational dataset can be automatically created and easily maintained, since non-educational information is obtained from updated third-party sources. A case study with practitioners has been carried out to evaluate whether the information about ICT tools provided by this dataset is understandable and useful for educators. Evaluation results show that available information on the Web of Data can be used to obtain suitable tools for real educational settings, thus overcoming the sustainability problems of existing ICT tool registries.
