Similar Documents
20 similar documents found.
1.

This paper presents a smart supervisory framework for a single process controller, designed for Industry 4.0 shop floors. Digitizing a full supervisory suite for a single process controller enables self-awareness, self-diagnosis, self-prognosis, and self-healing (by definition, these "self" elements are missing from supervisory frameworks that diagnose numerous controllers in parallel). The proposed framework is aligned with the concept of a Cyber-Physical System (CPS), since its implementation generates a rich cyber-physical entity of the controlled process. This CPS entity can either be considered the process digital twin or provide a solid basis for generating one. The framework also includes the main characteristics of Industry 4.0, such as advanced use of Artificial Intelligence (AI) and big-data analysis. It is based on four modules: (1) Control and Awareness module—performing continuous process control and adjustment, as well as machine learning (ML) and statistical process control (SPC) for identifying abnormalities that require further diagnosis; (2) Process-diagnosis module—performing continual (recurrent) analysis of the process state and trends; (3) Prognosis and Healing module—performing prognosis and automated intervention via parameter changes, re-configurations, and automated maintenance; (4) External Interaction Platform—an interactive module for interfacing with experts, presenting them with the process analysis information and obtaining feedback from them as part of a learning process. Using an implementation showcase, we illustrate the framework's applicability and demonstrate its real-world potential. The proposed framework could serve as a guide for implementing smart process control and maintenance systems in Industry 4.0 shop floors, and could also provide a firm basis for comparison with future frameworks. Future research directions include improving the proposed process control framework and validating it through implementation case studies.

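The SPC-plus-ML monitoring step described in the Control and Awareness module can be pictured with a minimal sketch; the 3-sigma rule, window size, and data below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def spc_abnormality_flags(samples, window=50, k=3.0):
    """Flag samples outside k-sigma control limits.

    Limits are estimated from an initial in-control window, mimicking the
    continuous monitoring step that triggers deeper diagnosis downstream.
    """
    baseline = np.asarray(samples[:window], dtype=float)
    center = baseline.mean()
    sigma = baseline.std(ddof=1)
    lower, upper = center - k * sigma, center + k * sigma
    flags = [(i, x) for i, x in enumerate(samples[window:], start=window)
             if x < lower or x > upper]
    return center, (lower, upper), flags

# Example: a drifting process measurement triggers flags for further diagnosis.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(10.0, 0.5, 60), rng.normal(12.5, 0.5, 10)])
center, limits, flags = spc_abnormality_flags(data)
print(f"center={center:.2f}, limits={limits}, abnormal points={len(flags)}")
```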

2.
Digital twin (DT) and artificial intelligence (AI) technologies are powerful enablers of Industry 4.0 and sustainable, resilient manufacturing. Digital twins of machine tools and machining processes combine advanced digital techniques with production domain knowledge, enhance the agility, traceability, and resilience of production systems, and help machine tool builders shift from one-time product provision to ongoing service delivery. However, the adaptability and accuracy of digital twins at the shop-floor level are restricted by heterogeneous data sources, limited modeling precision, and the uncertainties of dynamic industrial environments. This article proposes a novel modeling framework that addresses these inadequacies by deeply integrating AI techniques with machine tool expertise, using data aggregated along the product development process. A data processing procedure is constructed to contextualize metadata sources from the design, planning, manufacturing, and quality stages and link them into a digital thread. On this consistent data basis, a modeling pipeline is presented that incorporates production and machine tool prior knowledge into the AI development pipeline while accounting for the multi-fidelity nature of data sources in dynamic industrial circumstances. In terms of implementation, we first introduce our existing work on building digital twins of machine tools and manufacturing processes. Within this infrastructure, we developed a hybrid learning-based digital twin for the manufacturing process following the proposed modeling framework and tested it exemplarily in an external industrial project for real-time workpiece quality monitoring. The results indicate that the proposed hybrid learning-based digital twin can learn the uncertainties in the interaction of machine tools and machining processes in real industrial environments, and thus allows the modeling reliability to be estimated and enhanced, depending on data quality and accessibility. Prospectively, it also contributes to the reparametrization of model parameters and to adaptive process control.
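One common reading of a "hybrid learning-based digital twin" is a physics-based prior corrected by a data-driven residual model. The sketch below illustrates that pattern under simplified assumptions (a linear cutting-force prior and a gradient-boosted residual on synthetic data); it is not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def physics_prior(feed_rate, depth_of_cut, k_c=1800.0):
    """Simplified physics prior: force proportional to chip cross-section."""
    return k_c * feed_rate * depth_of_cut

rng = np.random.default_rng(1)
feed = rng.uniform(0.05, 0.3, 500)    # mm/rev
depth = rng.uniform(0.5, 3.0, 500)    # mm
# "Measured" force = physics prior + unmodelled nonlinear effect + noise.
measured = physics_prior(feed, depth) + 40 * np.sin(8 * feed) * depth + rng.normal(0, 5, 500)

X = np.column_stack([feed, depth])
residual_model = GradientBoostingRegressor().fit(X, measured - physics_prior(feed, depth))

def hybrid_twin_predict(feed_rate, depth_of_cut):
    """Hybrid prediction: physics prior plus learned residual correction."""
    x = np.array([[feed_rate, depth_of_cut]])
    return physics_prior(feed_rate, depth_of_cut) + residual_model.predict(x)[0]

print(hybrid_twin_predict(0.2, 2.0))
```

The spread of the residual model's errors can then serve as a rough proxy for the modeling reliability mentioned in the abstract.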

3.
The integration of advanced manufacturing processes with ground-breaking Artificial Intelligence methods continues to provide unprecedented opportunities for modern cyber-physical manufacturing, known as smart manufacturing or Industry 4.0. However, the "smartness" of such approaches depends closely on the degree to which the implemented predictive models can handle uncertainties and shifts in production data over time. When a manufacturing process configuration changes and sufficient new data are not available, conventional Machine Learning (ML) models often perform poorly. In this article, a transfer learning (TL) framework is proposed to tackle this issue in modeling smart manufacturing. The proposed TL framework adapts to probable shifts in the production process design and delivers accurate predictions without the need to re-train the model from scratch. Armed with sequential unfreezing and early stopping, the model avoids catastrophic forgetting in the presence of severely limited data. In an industry-focused case study on autoclave composite processing, the model yielded a drastic (88%) improvement in generalization accuracy compared with conventional learning, while reducing computational and temporal cost by 56%.
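The combination of sequential unfreezing and early stopping can be sketched in a few dozen lines. The architecture, layer sizes, learning rate, and training schedule below are illustrative assumptions; the abstract only states that these two mechanisms were used to avoid catastrophic forgetting on limited data.

```python
import copy
import torch
import torch.nn as nn

# Assume this network was pre-trained on the source process configuration.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def sequential_unfreeze_finetune(model, train_step, val_loss, stages=3, patience=5):
    """Adapt a pre-trained model to a new configuration: freeze everything,
    then unfreeze one layer per stage from the top, early-stopping each stage
    on validation loss to limit catastrophic forgetting."""
    linear_layers = [m for m in model if isinstance(m, nn.Linear)]
    for p in model.parameters():
        p.requires_grad = False
    for stage in range(stages):
        for p in linear_layers[-(stage + 1)].parameters():
            p.requires_grad = True
        opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
        best, best_state, wait = float("inf"), copy.deepcopy(model.state_dict()), 0
        for _ in range(200):                      # cap on steps per stage
            train_step(model, opt)
            loss = val_loss(model)
            if loss < best - 1e-4:
                best, best_state, wait = loss, copy.deepcopy(model.state_dict()), 0
            else:
                wait += 1
                if wait >= patience:              # early stopping
                    break
        model.load_state_dict(best_state)
    return model

# Toy target-domain data standing in for the severely limited new-configuration data.
Xt, yt = torch.randn(32, 16), torch.randn(32, 1)
Xv, yv = torch.randn(16, 16), torch.randn(16, 1)
loss_fn = nn.MSELoss()

def train_step(m, opt):
    opt.zero_grad()
    loss_fn(m(Xt), yt).backward()
    opt.step()

def val_loss(m):
    with torch.no_grad():
        return loss_fn(m(Xv), yv).item()

sequential_unfreeze_finetune(model, train_step, val_loss)
```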

4.
5.
Industry 4.0 Predictive Maintenance (PdM 4.0) architecture for the broadcasting chain is one of the taxonomy challenges in deploying Industry 4.0 frameworks. This paper proposes a novel PdM framework based on the Reference Architecture Model Industry 4.0 (RAMI 4.0) to reduce operation and maintenance costs. The framework includes real-time production monitoring, business processes, and integration based on Design Science Research (DSR) to generate an innovative Business Process Model and Notation (BPMN) meta-model. The proposed model visualizes sub-processes based on the knowledge of experts and stakeholders to reduce the maintenance cost of audiovisual services, including satellite TV, cable TV, and live audio and video broadcast services. Following the recommendations and concepts of Industry 4.0, the proposed framework tolerates predictable failures and addresses related concerns in similar industries. Empirical experiments were conducted at the Islamic Republic of Iran Broadcasting (IRIB) high-power station (located near Tehran, the capital of Iran) to evaluate the functionality and efficiency of the proposed predictive maintenance framework. Practical outcomes demonstrate that the interval between data collections should be increased in audio and video broadcasting predictive maintenance because of the limited internal processing performance of the equipment. The framework also highlights the role of clearing Frequency Modulation (FM) transmitter data to reduce instability and untrustworthy data during data mining. The proposed DSR method endorses using a customized RAMI 4.0 meta-model framework to adapt distributed broadcasting and communication to PdM 4.0, which increases stability and decreases maintenance costs of the broadcasting chain compared with state-of-the-art methodologies. Furthermore, the proposed framework is shown to outperform the best evaluated methods in terms of acceptance.

6.
This paper presents new perspectives on the application of Artificial Intelligence (AI) solutions to Spacecraft (S/C) flight data in order to augment currently used operational S/C health monitoring and diagnostics systems. It reflects the growing interest in using such techniques in the Space engineering domain and its applications. Jointly with the AI approach, the operational usage of S/C simulation models (referred to as "discipline models") is also explored. During S/C development and testing activities, significant effort is invested by discipline experts in building such models; however, using discipline-specific knowledge to support complex S/C operational activities (e.g., anomaly root cause analysis) remains a challenging task. Based on the current needs of Space Agencies and Industry, and by exploiting advances in AI-based solutions and technologies, this paper proposes an operational S/C model-based diagnostics framework which can serve as a basis for future developments. The framework combines AI-based techniques, S/C flight data, and discipline models. Three main needs are addressed: S/C anomaly root cause analysis, S/C behavior prediction, and discipline model refinement. Concrete operational case studies from the Project for On-Board Autonomy (PROBA) satellite family are presented to show the applicability of the proposed framework.
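A common building block of such model-based diagnostics is residual monitoring: compare telemetry against the discipline model's prediction and flag deviations for root-cause analysis. The sketch below is a generic illustration of that idea with synthetic data, not the PROBA implementation.

```python
import numpy as np

def residual_anomalies(telemetry, model_prediction, threshold=3.0):
    """Flag time steps where telemetry deviates from the discipline-model
    prediction by more than `threshold` robust standard deviations (MAD)."""
    residual = np.asarray(telemetry) - np.asarray(model_prediction)
    centered = residual - np.median(residual)
    scale = 1.4826 * np.median(np.abs(centered))   # robust sigma estimate
    return np.flatnonzero(np.abs(centered) > threshold * scale)

# Example: a thermal discipline model vs. telemetry with one injected fault.
t = np.linspace(0, 10, 200)
predicted = 20 + 2 * np.sin(t)
observed = predicted + np.random.default_rng(2).normal(0, 0.1, t.size)
observed[150:160] += 1.5                            # anomalous heating episode
print(residual_anomalies(observed, predicted))
```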

7.

Computer integrated manufacturing (CIM) offers enormous benefits: it increases the rate of production, reduces errors and production waste, and streamlines manufacturing sub-systems. However, new challenges arise when CIM operates in the Internet of Things/Internet of Data (IoT/IoD) scenarios associated with Industry 4.0 and cyber-physical systems. The main challenge is dealing with the massive volume of data flowing between the various CIM components functioning in virtual IoT settings. This paper proposes a decisional DNA-based knowledge representation framework to manage the storage, analysis, and processing of the data, information, and knowledge of a typical CIM. The framework utilizes the concepts of the virtual engineering object and the virtual engineering process to develop knowledge models of CIM components such as automatic storage and retrieval systems, automated guided vehicles, robots, and numerically controlled machines. The proposed model captures manufacturing data, information, and knowledge in real time at every stage of production, that is, at the object, process, and factory levels. The significance of this study is that it supports decision-making by reusing experience, which not only helps in effective real-time data monitoring and processing but also makes the CIM system intelligent and ready to function in the virtual Industry 4.0 environment.
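The virtual engineering object idea amounts to attaching a reusable experience record to each CIM asset. A minimal sketch of such a knowledge structure is shown below; the field names are illustrative assumptions loosely inspired by the decisional-DNA "set of experience" structure (variables, functions, constraints, rules), not the authors' schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SetOfExperience:
    """One formal decision event: what was observed, what was applied, and the outcome."""
    variables: Dict[str, float]
    functions: List[str]
    constraints: List[str]
    rules: List[str]
    outcome: str

@dataclass
class VirtualEngineeringObject:
    """Cyber representation of a CIM asset (robot, AGV, NC machine) that
    accumulates experience at the object level for later decision reuse."""
    name: str
    experiences: List[SetOfExperience] = field(default_factory=list)

    def record(self, exp: SetOfExperience) -> None:
        self.experiences.append(exp)

    def most_similar(self, variables: Dict[str, float]) -> SetOfExperience:
        # Naive similarity: smallest squared distance over shared variables.
        def dist(e):
            shared = set(e.variables) & set(variables)
            return sum((e.variables[k] - variables[k]) ** 2 for k in shared)
        return min(self.experiences, key=dist)

robot = VirtualEngineeringObject("welding_robot_01")
robot.record(SetOfExperience({"current_A": 180, "speed_mm_s": 8},
                             ["heat_input = V*I/speed"], ["temp < 350C"],
                             ["if spatter then lower current"], "ok"))
print(robot.most_similar({"current_A": 185, "speed_mm_s": 9}).outcome)
```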

8.
In a smart city, IoT devices are required to support the monitoring of normal operations such as traffic, infrastructure, and crowds of people. IoT-enabled systems built from many such devices are expected to support sustainable development using the information the smart city collects. Artificial intelligence (AI) and machine learning (ML) are well-known methods for achieving this goal, provided the system framework and problem statement are well prepared. However, to make the best use of AI/ML, the training data should be as global as possible, which prevents the model from working only on local data. Such data can be obtained from different sources, but this raises a privacy issue if at least one party collects all the data in plaintext. The main focus of this article is on support vector machines (SVM). We aim to address the privacy issue and provide confidentiality to protect the data. We build a privacy-preserving scheme for SVM (SecretSVM) based on the framework of federated learning and distributed consensus. In this scheme, data providers self-organize and obtain the training parameters of the SVM without revealing their own models. Finally, experiments with real data analysis show the feasibility of potential applications in smart cities. This article is the extended version of Hsu et al. (Proceedings of the 15th ACM Asia Conference on Computer and Communications Security. ACM; 2020:904-906).
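The federated part of this setup can be illustrated with a deliberately simplified, non-secure sketch: each provider fits a local linear SVM and only model parameters (never raw data) are aggregated. The actual SecretSVM scheme additionally protects those parameters with a distributed consensus protocol, which is omitted here; the data, averaging rule, and hyperparameters below are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)

def make_local_data(n=200):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

def local_svm_params(X, y):
    """Each data provider trains on its own data and shares only (w, b)."""
    clf = LinearSVC(C=1.0, dual=False).fit(X, y)
    return np.concatenate([clf.coef_.ravel(), clf.intercept_])

# Three providers; only parameter vectors leave each site.
params = [local_svm_params(*make_local_data()) for _ in range(3)]
w_avg = np.mean(params, axis=0)            # aggregation step, here a plain average

def global_predict(X):
    return (X @ w_avg[:-1] + w_avg[-1] > 0).astype(int)

X_test, y_test = make_local_data(1000)
print("accuracy of averaged model:", (global_predict(X_test) == y_test).mean())
```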

9.
Recent developments in wire arc additive manufacturing (WAAM) provide a promising alternative for fabricating high-value-added, medium-to-large metal components for industries such as aerospace and maritime. However, challenges stemming from the demand for increasingly complex and high-quality products hinder the widespread adoption of conventional WAAM in manufacturing industries. The development of artificial intelligence (AI) techniques may provide new opportunities to upgrade WAAM to the next generation. This paper therefore provides a comprehensive review of state-of-the-art research on AI techniques in WAAM. Firstly, we propose the concept of intelligent wire arc additive manufacturing (IWAAM) and discuss the challenges of developing it. Secondly, we give an overview of the progress in applying AI techniques to several aspects of the WAAM process chain, including fabrication process pre-design, online deposition control, and offline parameter optimization. Thirdly, the relevant machine learning algorithms and the knowledge behind the corresponding AI techniques are reviewed in detail. Issues in applying AI techniques to the WAAM process are presented and analysed through a review of current research articles. Finally, future research perspectives, in terms of both novel applications and enhancements of AI techniques, are discussed. Through this systematic review, it is expected that WAAM may gradually develop into a smart/intelligent manufacturing technology in the context of Industry 4.0 through the adoption of AI techniques.

10.
Since the release of ChatGPT, generative artificial intelligence has kept breaking through technical bottlenecks, attracting large-scale capital investment, driving transformation across many fields, and drawing close government attention. This paper first analyzes the development trends, current applications, and prospects of large models, and then briefly reviews the related technologies from three perspectives: (1) an overview of large-model construction techniques, including the construction pipeline, the state of research, and optimization techniques; (2) a summary of the three mainstream categories of image-text multimodal techniques for large models; and (3) an introduction to three categories of large-model evaluation benchmarks, classified by evaluation method. Parameter optimization and dataset construction are the core issues for the adoption of large-model products and for technological iteration; multimodal capability is one of the key development directions of large models; and establishing evaluation benchmarks is essential for comparing and constraining them. The paper also discusses the challenges facing existing techniques and possible future directions. Current large-model products already exhibit strong comprehension and creative abilities and show broad application prospects in fields such as education, healthcare, and finance. At the same time, they suffer from problems such as difficult training and deployment, insufficient domain expertise, and safety risks. Improving parameter optimization, high-quality dataset construction, and multimodal techniques, and establishing unified, comprehensive, and convenient evaluation benchmarks, will therefore be key to overcoming the current limitations of large models.

11.
In classical time-domain Box-Jenkins identification, discrete-time plant and noise models are estimated from sampled input/output signals. The frequency content of the input/output samples naturally covers the whole unit circle uniformly, even in the case of prefiltering. Recently, the classical time-domain Box-Jenkins framework has been extended to frequency-domain data captured in open loop. The proposed frequency-domain maximum likelihood (ML) solution can handle (i) discrete-time models using data that cover only part of the unit circle, and (ii) continuous-time models. Part I of this series of two papers (i) generalizes the frequency-domain ML solution to the closed-loop case, and (ii) proves the properties of the ML estimator under non-standard conditions. Contrary to the classical time-domain case, it is shown that the controller must be either known or estimated. The proposed ML estimators are applicable to frequency-domain data as well as time-domain data.
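For readers unfamiliar with the setting, the discrete-time Box-Jenkins structure and a simplified, weighted-least-squares form of the frequency-domain ML cost (valid when the noise model is treated as known) can be written as below. This is textbook notation, not the specific closed-loop estimator derived in the paper.

```latex
\begin{align*}
% Box-Jenkins model: separate parametrized plant and noise dynamics
y(t) &= \underbrace{\frac{B(q,\theta)}{F(q,\theta)}}_{G(q,\theta)} u(t)
      + \underbrace{\frac{C(q,\theta)}{D(q,\theta)}}_{H(q,\theta)} e(t), \\
% Frequency-domain data: DFT of the input/output at the measured frequencies \Omega_k
Y(k) &\approx G(\Omega_k,\theta)\, U(k) + H(\Omega_k,\theta)\, E(k), \\
% Simplified ML cost over the measured grid (which need not cover the whole unit circle)
V(\theta) &= \sum_{k=1}^{F}
  \frac{\bigl| Y(k) - G(\Omega_k,\theta)\, U(k) \bigr|^{2}}
       {\bigl| H(\Omega_k,\theta) \bigr|^{2}\,\sigma_e^{2}} .
\end{align*}
```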

12.
As the manufacturing industry approaches the implementation of the fourth industrial revolution, changes will be required in scheduling, production planning and control, and cost accounting. Industry 4.0 promotes decentralized production; hence, cost models are required to capture the costs of products and jobs within the production network while considering the manufacturing system paradigm in use. A new mathematical cost model is proposed for the cost-benefit analysis of introducing Industry 4.0 elements to a manufacturing facility, specifically integrating and connecting external suppliers as strategic partners and establishing an infrastructure for communicating information between the manufacturing company and its strategic suppliers. The model takes into consideration the bi-directional relationship between hourly rates and the total hours assigned to workcentres/activities in a given production period. A case study from a multinational machine builder is developed and solved using the proposed model. The results suggest that although an additional cost is required to establish the infrastructure for connecting suppliers, the responsiveness and agility achieved in the face of uncertainty outweigh this additional cost.
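The "bi-directional relationship between hourly rates and total hours" can be made concrete with a toy calculation: the rate charged by a workcentre depends on how many hours are booked to it (fixed costs are spread over those hours), while the booked hours in turn depend on the rate. The numbers, the planning rule, and the fixed-point iteration below are purely illustrative and not the paper's model.

```python
def workcentre_rate(fixed_cost, variable_rate, hours):
    """Hourly rate = fixed cost spread over booked hours + variable cost per hour."""
    return fixed_cost / hours + variable_rate

def planned_hours(base_hours, rate, price_sensitivity=0.002):
    """Toy planning rule: fewer hours are booked to the workcentre as its rate rises."""
    return base_hours / (1.0 + price_sensitivity * rate)

# Fixed-point iteration between the two relations for one production period.
hours = 1000.0                      # initial guess of booked hours
for _ in range(50):
    rate = workcentre_rate(fixed_cost=50000.0, variable_rate=35.0, hours=hours)
    hours = planned_hours(base_hours=1000.0, rate=rate)

print(f"converged rate ~ {rate:.2f} per hour, hours ~ {hours:.1f}")
```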

13.
Analyzing satellite images and remote sensing (RS) data using artificial intelligence (AI) tools and data fusion strategies has recently opened new perspectives for environmental monitoring and assessment. This is mainly due to advances in machine learning (ML) and data mining, which facilitate extracting meaningful information at large scale from geo-referenced and heterogeneous sources. This paper presents, to the best of the authors' knowledge, the first review of AI-based methodologies and data fusion strategies used for environmental monitoring. The first part of the article discusses the main challenges of geographical image analysis. Thereafter, a taxonomy is introduced to overview the existing frameworks, which have focused on: (i) detecting different environmental impacts, e.g., land cover/land use (LULC) change, gully erosion susceptibility (GES), waterlogging susceptibility (WLS), and land salinity and infertility (LSI); (ii) analyzing the AI models deployed for extracting pertinent features from RS images, in addition to the data fusion techniques used for combining images and/or features from heterogeneous sources; (iii) describing existing publicly shared and open-access datasets; (iv) highlighting the most frequent evaluation metrics; and (v) describing the most significant applications of ML and data fusion for RS image analysis. This is followed by an overview of existing works and a discussion of some of the challenges, limitations, and shortcomings. To give the reader insight into real-world applications, two case studies illustrate the use of AI for classifying LULC changes and for monitoring the environmental impacts of dam construction, where classification accuracies of 98.57% and 97.05% were reached, respectively. Lastly, recommendations and future directions are drawn.

14.

Developments in advanced technologies have led to the generation of an immense amount of digital information, and this data deluge contains hidden information that is difficult to extract. In the biomedical domain, technological progress has produced voluminous data, and processing these textual data is referred to as 'biomedical content mining'. Emerging artificial intelligence (AI) models play a major role in the automation of Pharma 4.0. Within AI, natural language processing (NLP) plays a dynamic role in extracting knowledge from biomedical documents. Research articles published by scientists and researchers contain an enormous amount of hidden information, and most original, peer-reviewed articles are indexed in PubMed. Extracting meaningful information from a large number of literature documents is very difficult for human readers. This research aims to extract the named entities of literature documents in the life-science domain. A high-level architecture is proposed along with a novel named entity recognition (NER) model, built using rule-based machine learning (ML). The proposed ArRaNER model produced higher accuracy and was also able to identify more entities. The NER model was tested on two different datasets: a PubMed dataset and a Wikipedia talk dataset. The ArRaNER model obtains an accuracy of 83.42% on the PubMed articles and 77.65% on the Wikipedia articles.

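ArRaNER itself is not specified in the abstract, but the rule-based part of such NER pipelines typically boils down to lexicon and surface-pattern matches over text. The regex patterns and entity labels below are illustrative assumptions only, not the ArRaNER rules.

```python
import re

# Illustrative surface-pattern rules; a real biomedical lexicon would be far larger.
RULES = [
    ("GENE",     re.compile(r"\b[A-Z][A-Z0-9]{2,6}\b")),             # e.g. BRCA1, TP53
    ("DISEASE",  re.compile(r"\b\w+(itis|oma|emia|pathy)\b", re.I)),
    ("CHEMICAL", re.compile(r"\b\w+(mab|cillin|azole|statin)\b", re.I)),
]

def rule_based_ner(text):
    """Return (start, end, label, surface form) for every rule match."""
    entities = []
    for label, pattern in RULES:
        for m in pattern.finditer(text):
            entities.append((m.start(), m.end(), label, m.group()))
    return sorted(entities)

sample = "Mutations in TP53 are linked to carcinoma; treatment with trastuzumab was reported."
for ent in rule_based_ner(sample):
    print(ent)
```

A learning component would then typically rank or filter these candidate spans, which is where the ML part of a rule-based ML pipeline comes in.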

15.
This paper addresses the problem of minimizing the staleness of query results for streaming applications with update semantics under overload conditions. Staleness is a measure of how out-of-date the results are compared with the latest data arriving on the input. Real-time streaming applications are subject to overload due to unpredictably increasing data rates, while in many of them, we observe that data streams and queries in fact exhibit “update semantics” (i.e., the latest input data are all that really matters when producing a query result). Under such semantics, overload will cause staleness to build up. The key to avoid this is to exploit the update semantics of applications as early as possible in the processing pipeline. In this paper, we propose UpStream, a storage-centric framework for load management over streaming applications with update semantics. We first describe how we model streams and queries that possess the update semantics, providing definitions for correctness and staleness for the query results. Then, we show how staleness can be minimized based on intelligent update key scheduling techniques applied at the queue level, while preserving the correctness of the results, even for complex queries that involve sliding windows. UpStream is based on the simple idea of applying the updates in place, yet with great returns in terms of lowering staleness and memory consumption, as we also experimentally verify on the Borealis system.
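The core of the update-in-place idea can be sketched with a keyed buffer: a new tuple for a key overwrites any still-unprocessed tuple for that key, so under overload the consumer always works on the freshest value. This is a minimal illustration of the scheduling intuition, not the UpStream/Borealis implementation, and the tick data are made up.

```python
from collections import OrderedDict

class UpdateQueue:
    """Keyed buffer with update semantics: at most one pending tuple per key."""
    def __init__(self):
        self._pending = OrderedDict()

    def push(self, key, value):
        # Overwrite a stale pending value for this key instead of queueing another tuple.
        if key in self._pending:
            del self._pending[key]          # re-insert so the key moves to the back
        self._pending[key] = value

    def pop(self):
        return self._pending.popitem(last=False)   # oldest key first

    def __len__(self):
        return len(self._pending)

q = UpdateQueue()
for key, price in [("AAPL", 101), ("MSFT", 250), ("AAPL", 103), ("AAPL", 104)]:
    q.push(key, price)
print(len(q), q.pop(), q.pop())   # 2 pending tuples; only the freshest AAPL price survives
```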

16.
Today's customers are characterized by individual requirements that push the manufacturing industry toward increased product variety and reduced volumes. Manufacturing systems, and more specifically assembly systems (ASs), should allow quick adaptation of manufacturing assets so as to respond to the evolving market requirements that lead to mass customization. Meanwhile, the manufacturing era is changing with the fourth industrial revolution, Industry 4.0, which will transform the traditional manufacturing environment into an IoT-based one. In this context, this paper introduces the concept of the cyber-physical microservice in the manufacturing and assembly systems domain and presents the Cyber-Physical microservice and IoT-based (CPuS-IoT) framework. The CPuS-IoT framework exploits the benefits of the microservice architectural style and IoT technologies, while also utilizing the huge existing investment in traditional technologies in this domain, to support the life cycle of evolvable ASs in the age of Industry 4.0. It provides a solid basis for capturing domain knowledge, which is used by a model-driven engineering (MDE) approach to semi-automate the development, evolution, and operation of ASs, as well as to establish a common vocabulary for assembly system experts and IoT experts. The CPuS-IoT approach and framework effectively combine MDE with IoT and the microservice architectural paradigm. A case study on the assembly of an everyday product is used to demonstrate the approach, even to non-experts of the domain.

17.

This paper reviews the current state of the art in artificial intelligence (AI) technologies and applications in the context of the creative industries. A brief background of AI, and specifically machine learning (ML) algorithms, is provided, including convolutional neural networks (CNNs), generative adversarial networks (GANs), recurrent neural networks (RNNs) and deep reinforcement learning (DRL). We categorize creative applications into five groups, related to how AI technologies are used: (i) content creation, (ii) information analysis, (iii) content enhancement and post-production workflows, (iv) information extraction and enhancement, and (v) data compression. We critically examine the successes and limitations of this rapidly advancing technology in each of these areas. We further differentiate between the use of AI as a creative tool and its potential as a creator in its own right. We foresee that, in the near future, ML-based AI will be adopted widely as a tool or collaborative assistant for creativity. In contrast, we observe that the successes of ML in domains with fewer constraints, where AI is the ‘creator’, remain modest. The potential of AI (or its developers) to win awards for its original creations in competition with human creatives is also limited, based on contemporary technologies. We therefore conclude that, in the context of creative industries, maximum benefit from AI will be derived where its focus is human-centric—where it is designed to augment, rather than replace, human creativity.


18.
To address the classification of imbalanced, noisy data streams, this paper combines the average-probability-based ensemble classifier AP with sampling techniques and proposes an ensemble classifier model (IMDAP) for handling imbalanced noisy data streams. Experimental results show that the proposed ensemble classifier adapts better to mining and classifying imbalanced data streams affected by concept drift and noise: its overall classification performance exceeds that of the AP ensemble classifier model, it markedly improves the classification accuracy of the minority class, and its time complexity is close to that of AP.
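The average-probability aggregation that IMDAP builds on can be illustrated in a few lines: each base classifier in the ensemble, trained on one data chunk of the stream, outputs class probabilities, and the ensemble prediction averages them. The base learners, chunking, and synthetic imbalanced data below are placeholders, not the IMDAP algorithm itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# Train one base classifier per incoming data chunk (a common data-stream setup).
X, y = make_classification(n_samples=3000, weights=[0.9, 0.1], random_state=0)
chunks = np.array_split(np.arange(len(X)), 5)
ensemble = [GaussianNB().fit(X[idx], y[idx]) for idx in chunks[:-1]]

def average_probability_predict(ensemble, X):
    """Average the per-member class probabilities, then take the arg max."""
    probs = np.mean([clf.predict_proba(X) for clf in ensemble], axis=0)
    return probs.argmax(axis=1)

test_idx = chunks[-1]
pred = average_probability_predict(ensemble, X[test_idx])
print("minority-class recall:", (pred[y[test_idx] == 1] == 1).mean())
```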

19.
Designers rely on performance predictions to direct a design toward appropriate requirements. Machine learning (ML) models exhibit the potential for rapid and accurate predictions. Developing conventional ML models that generalize well to unseen design cases requires effective feature engineering and selection, and identifying generalizable features calls for good domain knowledge on the part of the ML model developer. Developing ML models for all design performance parameters with conventional ML would therefore be a time-consuming and expensive process; automating feature engineering and selection will accelerate the use of ML models in design. Deep learning models extract features from data, which aids model generalization. In this study, we (1) evaluate a deep learning model's capability to predict heating and cooling demand on unseen design cases and (2) obtain an understanding of the extracted features. Results indicate that the deep learning model generalizes similarly to, or better than, a simple neural network with appropriate features. The reason for this satisfactory generalization is the deep learning model's ability to identify similar design options within the data distribution. The results also indicate that deep learning models can filter out irrelevant features, reducing the need for feature selection.
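As a point of reference for the comparison made in the abstract, a "simple neural network with appropriate features" baseline can be as small as the sketch below. The synthetic features (floor area, window-to-wall ratio, envelope U-value) and the demand formula are illustrative stand-ins for real building-design parameters, not the study's dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
# Illustrative design features: floor area (m^2), window-to-wall ratio, envelope U-value.
area = rng.uniform(50, 500, n)
wwr = rng.uniform(0.1, 0.8, n)
u_value = rng.uniform(0.2, 2.0, n)
# Synthetic heating demand with an interaction term the network must learn.
heating = 40 * u_value * area / 100 + 25 * wwr * u_value + rng.normal(0, 2, n)

X = np.column_stack([area, wwr, u_value])
X_train, X_test, y_train, y_test = train_test_split(X, heating, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on unseen design cases:", round(model.score(X_test, y_test), 3))
```

A deep learning model of the kind discussed in the abstract would take raw design descriptions instead of these hand-picked features and learn its own representation.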

20.
In recent years, Industry 4.0 has become a popular term describing the trend towards digitisation and automation of the manufacturing environment. Despite its potential benefits in terms of productivity and quality, the concept has not gained much attention in the construction industry, largely because the far-reaching implications of an increasingly digitised and automated manufacturing environment are still widely unknown. Against this backdrop, the primary objective of this paper is to explore the state of the art as well as the state of practice of Industry 4.0-related technologies in the construction industry by pointing out the political, economic, social, technological, environmental and legal implications of their adoption. In this context, we present the results of our triangulation approach, which consists of a comprehensive systematic literature review and case study research, in the form of a PESTEL framework and a value chain model. Additionally, we provide recommendations for further research within a research agenda.
