Similar Articles
13 similar articles retrieved (search time: 0 ms)
1.
Learning from past accidents is pivotal for improving safety in construction. However, hazard records are typically documented and stored as unstructured or semi-structured free text, which makes analysing such data a difficult task. This study presents a novel and robust framework that combines deep learning and text mining technologies to analyse hazard records automatically. The framework comprises a four-step modelling approach: (1) identification of hazard topics using a Latent Dirichlet Allocation (LDA) model; (2) automatic classification of hazards using a Convolutional Neural Network (CNN); (3) production of a Word Co-occurrence Network (WCN) to determine the interrelations between hazards; and (4) quantitative analysis of keywords using Word Cloud (WC) technology to provide a visual overview of hazard records. The proposed framework is validated by analysing hazard records collected from a large-scale transport infrastructure project. It is envisaged that the framework can provide managers with new insights and knowledge to better ensure positive safety outcomes in projects. The contributions of this research are threefold: (1) it demonstrates that the process of analysing hazard records can be automated by combining deep learning and text mining; (2) hazards can be visualized using a systematic and data-driven process; and (3) the automatic generation of hazard topics and their classification over specific time periods enables managers to understand their patterns of manifestation and put strategies in place to prevent them from recurring.
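Step (3) of the framework, the Word Co-occurrence Network, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the hazard records and keyword list are invented for the example, and edge weights count how many records mention both keywords.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(records, keywords):
    """Weight each keyword pair by the number of records mentioning both."""
    edges = Counter()
    for text in records:
        tokens = set(text.lower().split())
        present = sorted(k for k in keywords if k in tokens)
        for pair in combinations(present, 2):
            edges[pair] += 1
    return edges

# Toy hazard records (invented for illustration only).
records = [
    "scaffold collapse near crane during lifting",
    "crane contact with overhead power line",
    "worker fall from scaffold edge",
]
keywords = {"scaffold", "crane", "fall", "lifting"}
net = cooccurrence_network(records, keywords)
```

The resulting edge weights can be fed directly into a graph library for visualising hazard interrelations.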

2.
This paper has a twofold purpose. The first is to characterize the sustainability of the European wood manufacturing industry and thereby establish a ranking of the European countries analyzed in terms of sustainability. To undertake this task, the sustainability of each country is defined using several indicators of diverse nature (economic, environmental and social). These indicators are aggregated into a composite, or synthetic, index with the help of a binary goal programming model, yielding a ranking of the countries studied according to the sustainability of their wood manufacturing industries. The second step of the research consists of explaining the causes behind each country's level of sustainability. This task is carried out by taking the composite sustainability indexes as endogenous variables and a tentative set of economic, environmental and social variables as explanatory variables. The link between endogenous and exogenous variables is established with the help of econometric models.
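The aggregation step can be illustrated with a simplified stand-in for the paper's binary goal programming model: min-max normalise each indicator across countries and take a weighted average. Country names, indicator names, and values below are invented, and all indicators are assumed "higher is better" (real indicators such as emissions would first be inverted).

```python
def composite_index(data, weights=None):
    """Aggregate per-country indicators into a single composite score in [0, 1]."""
    countries = list(data)
    indicators = list(next(iter(data.values())))
    if weights is None:  # equal weights by default
        weights = {k: 1.0 / len(indicators) for k in indicators}
    lo = {k: min(data[c][k] for c in countries) for k in indicators}
    hi = {k: max(data[c][k] for c in countries) for k in indicators}
    index = {}
    for c in countries:
        # Min-max normalise each indicator, then combine with the weights.
        index[c] = sum(
            weights[k] * (data[c][k] - lo[k]) / (hi[k] - lo[k])
            for k in indicators
        )
    return index

data = {  # illustrative economic / environmental / social indicators
    "A": {"gva": 10.0, "recycling": 0.8, "jobs": 5.0},
    "B": {"gva": 6.0, "recycling": 0.2, "jobs": 9.0},
}
idx = composite_index(data)
ranking = sorted(idx, key=idx.get, reverse=True)
```

In the paper the weighting is derived from a goal programming model rather than fixed a priori; the sketch only shows how a composite index induces a ranking.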

3.
Hepatocellular carcinoma (HCC) is the third leading cause of cancer-related mortality worldwide. New insights into the pathogenesis of this lethal disease are urgently needed. Chromosomal copy number alterations (CNAs) can lead to the activation of oncogenes and the inactivation of tumor suppressors in human cancers. Thus, identification of cancer-specific CNAs will not only provide new insight into the molecular basis of tumorigenesis but also facilitate the identification of CNA-based HCC biomarkers.

4.
Performing business process analysis in healthcare organizations is particularly difficult due to the highly dynamic, complex, ad hoc, and multi-disciplinary nature of healthcare processes. Process mining is a promising approach to obtain a better understanding of those processes by analyzing event data recorded in healthcare information systems. However, not all process mining techniques perform well in capturing the complex and ad hoc nature of clinical workflows. In this work, we introduce a methodology for the application of process mining techniques that leads to the identification of regular behavior, process variants, and exceptional medical cases. The approach is demonstrated in a case study conducted at a hospital emergency service. For this purpose, we implemented the methodology in a tool that integrates the main stages of process analysis. The tool is specific to the case study, but the same methodology can be used in other healthcare environments.
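A common first step in process mining is discovering a directly-follows graph from the event log. The sketch below is a generic primitive, not the paper's tool; each trace is the ordered list of activities for one patient, and the activity names are invented.

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often activity a is immediately followed by activity b."""
    dfg = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Toy emergency-service traces (invented activity names).
log = [
    ["triage", "exam", "lab", "discharge"],
    ["triage", "exam", "discharge"],
    ["triage", "lab", "exam", "discharge"],
]
dfg = directly_follows(log)
```

Low-frequency edges in such a graph are one simple way to surface the exceptional cases the methodology targets.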

5.
This is the second part of a large survey paper in which we analyze recent literature on Formal Concept Analysis (FCA) and some closely related disciplines using FCA. We collected 1072 papers published between 2003 and 2011 mentioning terms related to Formal Concept Analysis in the title, abstract, or keywords. We developed a knowledge browsing environment to support our literature analysis process, and we use the visualization capabilities of FCA to explore the literature and to discover and conceptually represent the main research topics in the FCA community. In this second part, we zoom in on and give an extensive overview of the papers published between 2003 and 2011 which applied FCA-based methods for knowledge discovery and ontology engineering in various application domains, including software mining, web analytics, medicine, biology, and chemistry.
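The core FCA operation underlying such analyses is enumerating the formal concepts of a binary context. A brute-force sketch, feasible only for toy contexts, is shown below; the context (papers tagged with topics) is invented for illustration.

```python
from itertools import combinations

def formal_concepts(context):
    """Enumerate formal concepts (extent, intent) of a small binary context.

    context maps each object to its set of attributes. Brute force over
    object subsets; exponential, so suitable only for tiny examples.
    """
    objects = list(context)
    all_attrs = {a for o in objects for a in context[o]}

    def intent(objs):
        sets = [context[o] for o in objs]
        return set.intersection(*sets) if sets else set(all_attrs)

    def extent(attrs):
        return {o for o in objects if attrs <= context[o]}

    concepts = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            i = intent(objs)
            concepts.add((frozenset(extent(i)), frozenset(i)))
    return concepts

ctx = {  # invented toy context: papers x topics
    "p1": {"fca", "ontology"},
    "p2": {"fca", "mining"},
    "p3": {"mining"},
}
cs = formal_concepts(ctx)
```

The resulting concepts, ordered by extent inclusion, form the concept lattice that FCA-based knowledge browsers visualize.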

6.
In recent years, Twitter has become one of the most important microblogging services of the Web 2.0. Among its possible uses, it can be employed for communicating and broadcasting information in real time. The goal of this research is to analyze the task of automatic tweet generation from a text summarization perspective in the context of the journalism genre. To achieve this, different state-of-the-art summarizers are selected and employed to produce multi-lingual tweets in two languages (English and Spanish). A wide experimental framework is proposed, comprising the creation of a new corpus, the generation of the automatic tweets, and their assessment through a quantitative and a qualitative evaluation, where informativeness, indicativeness, and interest are key criteria that should be ensured in the proposed context. From the results obtained, it was observed that although the original tweets were considered model tweets with respect to their informativeness, they were not among the most interesting ones from a human viewpoint. Therefore, relying only on these tweets may not be the ideal way to communicate news through Twitter, especially if news is to be reported in a more personalized and catchy way. In contrast, we showed that recent text summarization techniques may be more appropriate, striking a balance between indicativeness and interest, even if their content differed from the tweets delivered by the news providers.
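A minimal extractive baseline for tweet generation (not one of the summarizers evaluated in the paper) can be sketched as follows: score each sentence by the average corpus frequency of its words and return the best sentence truncated to the tweet character limit.

```python
import re
from collections import Counter

def tweet_summary(article, limit=280):
    """Return the highest-scoring sentence, truncated to the tweet limit."""
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    freq = Counter(re.findall(r"\w+", article.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        # Average frequency so long sentences are not favoured automatically.
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    return max(sentences, key=score)[:limit]
```

Real systems would add redundancy handling and URL/hashtag generation; the sketch only shows the extractive-scoring core.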

7.
The 12-month discussion surrounding a regional university campus quickly evolved from a suggestion of independence, to a plan, to the ultimate closure of the university. This unique series of events at the University of South Florida Polytechnic (USFP) allows for an investigation of how various forms of media were used during a significant event that impacted college students' education and immediate future. A campus-wide survey was combined with social and online media monitoring to assess the topics, authors, and methods used during prominent discussions preceding and during the closure of USFP. Although social media played a crucial role, the most common platform was Twitter, and it was used almost exclusively by members of the media itself. Students instead relied on traditional sources to gather information. Additionally, students expressed their opinions using classic methods, such as petitions, forgoing more modern Twitter or Facebook campaigns. It is incorrect to automatically assume younger demographic authorship or utilization of social media technology. While social media use could expand even further over the next decade, identifying authorship remains critical, as it is unclear how frequently social media are viewed as an official method of public discussion, especially when politics and higher education collide.

8.
In many industrial plants, the development and implementation of advanced monitoring and control techniques require real-time measurement of process quality variables. However, on-line acquisition of such data may be difficult due to the inadequacy of measurement techniques or the low reliability of measuring devices. To overcome the shortcomings of traditional instrumentation, inferential sensors have been designed to infer process quality indicators from real-time measurable process variables. In recent years, due to the demonstrated advantages of Bayesian methods, interest in applying these methods to the design of inferential sensors has grown. However, the potential of Bayesian methods for inferential modeling practices in the process industry has not yet been fully realized. This paper provides a general introduction to the main steps involved in the development and implementation of industrial inferential sensors, and presents an overview of the relevant Bayesian methods for inferential modeling.
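The Bayesian idea can be illustrated with the simplest possible inferential sensor: infer the scalar gain w in y ≈ w·x relating an easy-to-measure variable x to a quality variable y, under a zero-mean Gaussian prior and known noise variance. This is a generic conjugate-update sketch, not a method from the paper, and all variable names and data are invented.

```python
def bayes_soft_sensor(x, y, prior_var=10.0, noise_var=1.0):
    """Closed-form posterior over w for y = w*x + noise, prior w ~ N(0, prior_var)."""
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    post_var = 1.0 / (1.0 / prior_var + sxx / noise_var)
    post_mean = post_var * sxy / noise_var
    return post_mean, post_var

def predict(x_new, post_mean, post_var, noise_var=1.0):
    """Predictive mean and variance for the quality variable at a new input."""
    return post_mean * x_new, post_var * x_new ** 2 + noise_var

# Invented calibration data: y is roughly twice x.
w_mean, w_var = bayes_soft_sensor([1.0, 2.0, 3.0], [2.1, 3.9, 6.0])
mu, var = predict(2.0, w_mean, w_var)
```

The predictive variance is what distinguishes the Bayesian approach: the sensor reports not just an estimate of the quality variable but also how much to trust it.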

9.
In emergency evacuations, not all pedestrians know the destination or the routes to the destination, especially when the route is complex. Many pedestrians follow a leader or leaders during an evacuation. A Trace Model was proposed to simulate such tracing processes, including (1) a Dynamic Douglas–Peucker algorithm to extract global key nodes from dynamically partial routes, (2) a key node complementation rule to address the issue in which the Dynamic Douglas–Peucker algorithm does not work for an extended time when the route is straight and long, and (3) a modification to a follower’s impatience factor, which is associated with the distance from the leader. The tracing process of pupils following their teachers in a primary school during an evacuation was simulated. The virtual process was shown to be reasonable both in the indoor classroom and on the outdoor campus along complex routes. The statistical data obtained in the simulation were also studied. The results show that the Trace Model can extract relatively global key nodes from dynamically partial routes that are very similar to the results obtained by the classical Douglas–Peucker algorithm based on whole routes, and the data redundancy is effectively reduced. The results also show that the Trace Model is adaptive to the motions between followers and leaders, which demonstrates that the Trace Model is applicable for the tracing process in complex routes and is an improvement on the classical Douglas–Peucker algorithm and the social force model.
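The classical Douglas–Peucker algorithm that the dynamic variant builds on can be sketched as follows: keep the point farthest from the chord between the endpoints if its distance exceeds a tolerance, then recurse on both halves. The route coordinates below are invented for illustration.

```python
def douglas_peucker(points, eps):
    """Simplify a polyline, keeping only points deviating more than eps."""
    def dist(p, a, b):
        # Perpendicular distance from p to the line through a and b.
        (ax, ay), (bx, by), (px, py) = a, b, p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

    if len(points) < 3:
        return list(points)
    # Farthest interior point from the chord between the endpoints.
    i, d = max(((i, dist(p, points[0], points[-1]))
                for i, p in enumerate(points[1:-1], 1)), key=lambda t: t[1])
    if d <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], eps)
    return left[:-1] + douglas_peucker(points[i:], eps)

route = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
simplified = douglas_peucker(route, 1.0)
```

The Trace Model's dynamic variant applies the same idea incrementally to the partial route observed so far, rather than to the whole route at once.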

10.
11.
Context: Domains where data have a complex structure requiring new approaches for knowledge discovery from data are on the increase. In such domains, the information related to each object under analysis may be composed of a very broad set of interrelated data instead of being represented by a simple attribute table. This further complicates their analysis.
Objective: It is becoming more and more necessary to model data before analysis in order to assure that they are properly understood, stored and later processed. On this ground, we have proposed a UML extension that is able to represent any set of structurally complex hierarchically ordered data. Conceptually modelled data are human comprehensible and constitute the starting point for automating other data analysis tasks, such as comparing items or generating reference models.
Method: The proposed notation has been applied to structurally complex data from the stabilometry field. Stabilometry is a medical discipline concerned with human balance. We have organized the model data through an implementation based on XML syntax.
Results: We have applied data mining techniques to the resulting structured data for knowledge discovery. The sound results of modelling a domain with such complex and wide-ranging data confirm the utility of the approach.
Conclusion: The conceptual modelling and the analysis of non-conventional data are important challenges. We have proposed a UML profile that has been tested on data from a medical domain, obtaining very satisfactory results. The notation is useful for understanding domain data and automating knowledge discovery tasks.
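The XML-based organisation of hierarchically structured records might look like the sketch below. The element and attribute names are invented for illustration and are not the authors' actual schema.

```python
import xml.etree.ElementTree as ET

def build_record():
    """Serialise one hierarchically structured balance-test record as XML."""
    patient = ET.Element("patient", id="p001")
    test = ET.SubElement(patient, "balanceTest", kind="static")
    trial = ET.SubElement(test, "trial", seconds="20")
    # Time series of centre-of-pressure samples nested under the trial.
    ET.SubElement(trial, "sample", t="0.0", x="0.12", y="-0.03")
    ET.SubElement(trial, "sample", t="0.1", x="0.10", y="-0.01")
    return ET.tostring(patient, encoding="unicode")

xml_doc = build_record()
```

Nesting makes the patient/test/trial/sample hierarchy explicit, which a flat attribute table cannot express.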

12.
Industrial process modeling is currently undergoing a fundamental transformation towards interconnected closed-loop twin models, i.e., the parametrically-controlled real-world physical model and its corresponding digitalized virtual system model. Between these models, a highly time-sensitive mutual validation mechanism and a coherent, consistent control strategy are applied, which demand complex virtual-to-real-world model-based information mapping and synchronization. Thus, this research proposes a semantic conceptual framework for industrial process modeling in the context of digital twins. Based on a hierarchical structure of digital twins, this framework modularizes the modeling process in terms of semantic information modules of the physics of real-world phenomena, and clarifies inter-module associations and near-real-time data transmission coupled with virtual analysis in a CAE environment, so that the time-sensitive phenomenon information objects distributed across virtually-separated sub-level physics models can support a comprehensive representation of the real-world process. An advanced feature concept is adopted to construct the digital models as the basic compositions of any virtual industrial process, and the related feature definitions are extended in this work so that the common characteristics in the concept of digital twins can be represented generically and concisely. To validate the modeling method, the digital model of a prototype High-Velocity Oxygen-Fuel (HVOF) coating process system was constructed as a case study. Data on the nozzle trajectory, the flame, the in-flight particle behavior, and the transient thermal performance of the coating layer and substrate were synchronously incorporated to simulate the transient phenomena of the substrate component temperature and the coating thickness distribution on the substrate surface. The final simulation results confirm that the feature-based digital model can comprehensively reflect the real-world scenario on the virtual side. To illustrate how the developed model provides meaningful feedback for real-world control, a general digital twin setup of the model is given at the end of the case study.

13.
This paper provided a content analysis of studies in the field of cognition in e-learning that were published in five Social Sciences Citation Index (SSCI) journals (i.e. Computers and Education, British Journal of Educational Technology, Innovations in Education and Teaching International, Educational Technology Research & Development, and Journal of Computer Assisted Learning) from 2001 to 2005. Among the 1027 articles published in these journals from 2001 to 2005, 444 articles were identified as being related to the topic of cognition in e-learning. These articles were cross analyzed by published years, journal, research topic, and citation count. Furthermore, 16 highly-cited articles across different topics were chosen for further analysis according to their research settings, participants, research design types, and research methods. It was found from the analysis of the 444 articles that “Instructional Approaches,” “Learning Environment,” and “Metacognition” were the three most popular research topics, but the analysis of the citation counts suggested that the studies related to “Instructional Approaches,” “Information Processing” and “Motivation” might have a greater impact on subsequent research. Although the use of questionnaires might still be the main method of gathering research data in e-learning cognitive studies, a clear trend was observed that more and more studies were utilizing learners’ log files or online messages as data sources for analysis. The results of the analysis provided insights for educators and researchers into research trends and patterns of cognition in e-learning.
