Similar Literature
20 similar documents found
1.
Transfer learning (TL) is a machine learning (ML) method in which knowledge is transferred from existing models of related problems to the model for the problem at hand. Relational TL enables ML models to transfer relationship networks from one domain to another. However, it has two critical issues. One is determining the proper way of extracting and expressing relationships among data features in the source domain so that the relationships can be transferred to the target domain. The other is how to carry out the transfer procedure. Knowledge graphs (KGs) are knowledge bases that represent data and their logical relationships as graph-structured information; they are helpful tools for dealing with the first issue. The proposed relational feature transfer learning algorithm (RF-TL) embodies an extended structural equation modelling (SEM) as a method for constructing KGs. Additionally, in fields that concern people's lives, property, safety, and security, such as medicine, economics, and law, the knowledge of domain experts is the gold standard. This paper introduces causal analysis and counterfactual inference into the TL domain to direct the transfer procedure. Unlike traditional feature-based TL algorithms such as transfer component analysis (TCA) and CORrelation ALignment (CORAL), RF-TL not only considers relations between feature items but also utilizes causality knowledge, enabling it to perform well in practical cases. The algorithm was tested on two different healthcare-related datasets, a sleep apnea questionnaire study and COVID-19 case data on ICU admission, and its performance was compared with TCA and CORAL. The experimental results show that RF-TL generates better transferred models that give more accurate predictions with fewer input features.
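The RF-TL algorithm itself is not spelled out in the abstract, so as a reference point, here is a minimal sketch of the CORAL baseline it is compared against: align the source-domain feature covariance to the target domain before fitting any classifier. The data shapes and regularization value are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def coral(Xs, Xt, reg=1.0):
    """Align source features Xs to the covariance structure of target Xt."""
    Cs = np.cov(Xs, rowvar=False) + reg * np.eye(Xs.shape[1])  # regularized source covariance
    Ct = np.cov(Xt, rowvar=False) + reg * np.eye(Xt.shape[1])  # regularized target covariance
    # Whiten the source features, then re-color them with the target covariance.
    Cs_inv_sqrt = np.real(fractional_matrix_power(Cs, -0.5))
    Ct_sqrt = np.real(fractional_matrix_power(Ct, 0.5))
    return Xs @ Cs_inv_sqrt @ Ct_sqrt

# Usage: transform source-domain features, then train a classifier on them.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 8))               # synthetic source-domain features
Xt = rng.normal(scale=2.0, size=(80, 8))     # synthetic target-domain features
Xs_aligned = coral(Xs, Xt)
```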

2.
This paper proposes using Deep Neural Network (DNN) models for recognizing construction workers' postures from motion data captured by wearable Inertial Measurement Unit (IMU) sensors. The recognized awkward postures can be linked to known risks of musculoskeletal disorders among workers. Applying conventional Machine Learning (ML)-based models has shown promising results in recognizing workers' postures. However, ML models are limited in that they rely on heuristic feature engineering when constructing discriminative features for characterizing postures, which makes it challenging to further improve recognition accuracy. In this paper, the authors investigate the feasibility of addressing this problem with a DNN model that, by integrating Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) layers, automates feature engineering and sequential pattern detection. The model's recognition performance was evaluated using datasets collected from four workers on construction sites. The DNN model integrating one convolutional and two LSTM layers achieved the best performance (measured by F1 score). The proposed model outperformed the baseline CNN and LSTM models, suggesting that it leveraged the advantages of the two baseline models for effective feature learning. It improved the benchmark ML models' recognition performance by an average of 11% under personalized modelling. The recognition performance was also improved by 3% when the proposed model was applied to 8 types of postures across three subjects. These results indicate that the proposed DNN model has high potential to address the recognition-performance limitations observed when using ML models.
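A minimal Keras sketch of the architecture the abstract describes (one convolutional layer feeding two LSTM layers). The window length, IMU channel count, and number of posture classes are assumptions for illustration, not values from the paper.

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(window=128, n_channels=6, n_classes=8):
    """CNN front-end for automated feature extraction, LSTMs for sequences."""
    model = models.Sequential([
        layers.Input(shape=(window, n_channels)),   # e.g., 3-axis accel + gyro
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.LSTM(64, return_sequences=True),     # first LSTM layer
        layers.LSTM(32),                            # second LSTM layer
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```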

3.
Instrumentation is beneficial in civil engineering for monitoring structures during their construction and operation. The data collected can be used to observe real-time response and to develop data-driven models for predicting future behaviour. However, a limited number of sensors are usually used in on-site civil engineering construction due to cost restrictions and practicalities. This results in relatively small raw datasets, which often contain errors and anomalies. Interpreting and making judicious use of the available data to develop a reliable predictive model represents a significant challenge. It is therefore essential to pre-process and clean the data to improve their quality. To date, little investigation has been performed into applying such data cleaning methods to geotechnical engineering datasets collected from full-scale sites. The purpose of this study is to apply simple and effective data pre-processing techniques to site data collected from a highway embankment constructed on a sequence of soil layers of different physical make-up and non-linear consolidation characteristics. Various cleaning methods were applied to magnetic extensometer data collected for monitoring settlement within the foundation soils beneath the embankment. Principal component analysis (PCA) was used to explore the raw data and to identify and remove outliers. Numerous filtering and smoothing methods were used to clean noise in the data, and their results were compared using the root mean square error (RMSE) and normalized mean square error (NMSE). The methods adopted for data pre-processing and cleaning proved very effective for capturing the raw settlement behaviour on site. The findings from this study should be useful to site engineers in complex decision-making relating to ground response due to embankment construction, and they hold positive prospects for developing dynamic prediction models for embankment settlement.
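A minimal sketch of the pipeline the abstract outlines: PCA-based outlier screening followed by smoothing, scored by RMSE. The synthetic settlement series, lag embedding, and thresholds are assumptions for illustration; the study's actual filter choices are not reproduced here.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 300)
truth = 50 * (1 - np.exp(-3 * t))                 # idealized consolidation curve
settlement = truth + rng.normal(0, 1.0, t.size)   # noisy extensometer readings
settlement[[40, 150, 220]] += 15                  # injected spurious readings

# Embed the series in lagged coordinates; PCA reconstruction error flags outliers.
lags = 5
X = np.column_stack([settlement[i:t.size - lags + 1 + i] for i in range(lags)])
pca = PCA(n_components=2).fit(X)
err = np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)
mask = err < err.mean() + 3 * err.std()           # keep rows with ordinary error

tc = t[2:-2]                                      # centers of the lag windows
clean = np.interp(tc, tc[mask], X[mask, 2])       # interpolate over removed points
smooth = savgol_filter(clean, window_length=21, polyorder=2)
rmse = np.sqrt(np.mean((smooth - truth[2:-2]) ** 2))
print(f"RMSE of cleaned series vs. underlying trend: {rmse:.3f}")
```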

4.
The deterministic and probabilistic prediction of ship motion is important for safe navigation and stable real-time operational control of ships at sea. However, the volatility and randomness of ship motion, the non-adaptive nature of single predictors, and the poor coverage of quantile regression pose serious challenges to uncertainty prediction and have limited research in this field. In this paper, a multi-predictor integration model based on hybrid data preprocessing, reinforcement learning, and an improved quantile regression neural network (QRNN) is proposed to explore the deterministic and probabilistic prediction of ship pitch motion. To validate the performance of the proposed multi-predictor integrated prediction model, an experimental study is conducted with three sets of actual ship longitudinal motions recorded during sea trials in the South China Sea. The experimental results indicate that the root mean square errors (RMSEs) of the proposed model's deterministic predictions are 0.0254°, 0.0359°, and 0.0188°, respectively. Taking series #2 as an example, the prediction interval coverage probabilities (PICPs) of the proposed model's probabilistic predictions at the 90%, 95%, and 99% confidence levels (CLs) are 0.9400, 0.9800, and 1.0000, respectively. This study shows that the proposed model can provide trusted deterministic predictions and can effectively quantify the uncertainty of ship pitch motion, and it has the potential to provide practical support for ship early-warning systems.
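A minimal sketch of the two standard quantities behind the abstract: the pinball (quantile) loss that quantile regression neural networks such as a QRNN are trained on, and the PICP metric used to judge interval coverage. The tiny arrays stand in for real model outputs.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile loss: under-prediction is weighted by q, over-prediction by 1-q."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

def picp(y_true, lower, upper):
    """Prediction interval coverage probability, as reported in the abstract."""
    return np.mean((y_true >= lower) & (y_true <= upper))

# Usage: a 90% central interval comes from the 0.05 and 0.95 quantile outputs.
y = np.array([0.10, -0.03, 0.07, 0.02])   # observed pitch values (degrees)
lo, hi = y - 0.05, y + 0.06               # stand-ins for q=0.05 / q=0.95 predictions
print(pinball_loss(y, lo, q=0.05), picp(y, lo, hi))
```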

5.
Incorporation of nanomaterials into the device structure is key to enhancing the performance of polymer light-emitting diodes (PLEDs). The major challenges that impede the performance of PLEDs for display technology are (i) the non-availability of stable low-work-function metals to act as the cathode, (ii) the presence of charge-trapping centers in the polymer chains, and (iii) total internal reflection of light at the ITO/glass and glass/air interfaces. The first problem increases the turn-on voltage of the device and reduces electron injection from the cathode. Low injection and a high trapping probability of electrons lead to charge imbalance in the emissive layer and a shift of the recombination zone towards the cathode, which severely constrains the formation and radiative decay of excitons in the emissive layer and reduces the luminosity of the device. In this review, experimental studies on the integration of nanomaterials into PLED structures to enhance device luminance are presented. The diverse impact of their geometric features, ionization potential, electrical conductivity, and refractive index on carrier transport and light extraction in PLEDs is discussed, and a perspective on this evolving research path is provided.

6.
Reliable and accurate ship motion prediction is essential for ship navigation at sea and marine operations. Although previous studies have yielded rich results in the field of ship motion prediction, most of them have ignored the importance of the dynamic characteristics of ship motion when constructing forecasting models. In addition, the limitations of single models and the autocorrelation of the residual series are also unfavorable factors that hinder forecasting performance. To fill these gaps, a multi-objective heterogeneous integration model based on a decomposition-reconstruction mechanism and an adaptive segmentation error correction method is proposed in this paper for multi-step ship motion prediction. Specifically, the proposed model proceeds in three stages: the decomposition-reconstruction mechanism, the multi-objective heterogeneous integration model, and the adaptive segmentation error correction method. The effectiveness of the proposed model is verified using four sets of real ship motion data collected from two sites in the South China Sea. The evaluation results show that the proposed model effectively improves prediction performance and outperforms both traditional models and state-of-the-art models in the field of ship motion prediction. Prospectively, the model proposed in this study can serve as an effective aid to ship warning systems and has the potential for practical application in marine operations.
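The abstract does not name its decomposition algorithm, so the sketch below only illustrates the decomposition-reconstruction idea with a plain numpy moving-average split into slow and fast sub-series, each of which would feed its own predictor. The synthetic pitch series and window length are assumptions.

```python
import numpy as np

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

rng = np.random.default_rng(2)
t = np.arange(1000)
pitch = (2.0 * np.sin(2 * np.pi * t / 120)   # slow wave-induced component
         + 0.3 * np.sin(2 * np.pi * t / 11)  # fast oscillation
         + rng.normal(0, 0.05, t.size))      # measurement noise (degrees)

trend = moving_average(pitch, 61)    # low-frequency sub-series -> one predictor
fluctuation = pitch - trend          # high-frequency sub-series -> another predictor

# Reconstruction: each sub-series is forecast separately and the component
# forecasts are summed; the split itself is lossless.
assert np.allclose(trend + fluctuation, pitch)
```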

7.
Quality control is a critical aspect of the modern electronic circuit industry. In addition to being a prerequisite for proper functioning, circuit quality is closely related to safety, security, and economic issues. Quality control has traditionally been achieved through system testing. Meanwhile, device miniaturization and multilayer printed circuit boards have considerably increased the complexity of electronic circuit testing, so traditional test processes based on manual inspection have become outdated and inefficient. More recently, the concept of Advanced Manufacturing, or Industry 4.0, has enabled the manufacturing of customized products tailored to changing customer demands. This scenario imposes additional requirements on electronic system testing: a high degree of flexibility in production processes, short design and manufacturing cycles, and cost control. Thus, there is a demand for circuit testing systems that are effective and accessible without requiring numerous test points. This work focuses on automated test solutions based on machine learning, which are becoming popular with advances in computational tools. We present a new testing approach that uses autoencoders to detect firmware or hardware anomalies based on the electric current signature. We built a test set-up using an embedded system development board to evaluate the proposed approach and implemented six firmware versions that can run independently on the test board, one of which is considered anomaly-free. To obtain a reference frame for our results, two other classification techniques (a computer vision algorithm and a random forest classification model) were employed to detect anomalies on the same development board. The outcomes of the experiments demonstrate that the proposed test method is highly effective: for several test scenarios, the correct detection rate was above 99%. Test results showed that both the autoencoder and random forest approaches are effective; however, random forests require all data classes for training, whereas training an autoencoder only requires the reference (anomaly-free) class.
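A minimal sketch of the approach the abstract describes: train an autoencoder only on the anomaly-free class of current-signature windows, then flag windows whose reconstruction error exceeds a threshold. The window length, network sizes, synthetic data, and threshold rule are assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

window = 256                                 # assumed samples per current window
auto = models.Sequential([
    layers.Input(shape=(window,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(16, activation="relu"),     # bottleneck
    layers.Dense(64, activation="relu"),
    layers.Dense(window),
])
auto.compile(optimizer="adam", loss="mse")

rng = np.random.default_rng(3)
normal = rng.normal(0.5, 0.02, size=(2000, window))   # anomaly-free signatures
auto.fit(normal, normal, epochs=5, batch_size=64, verbose=0)

# Threshold from the training-set reconstruction error; new windows whose
# error exceeds it are flagged as firmware/hardware anomalies.
err = np.mean((auto.predict(normal, verbose=0) - normal) ** 2, axis=1)
threshold = err.mean() + 3 * err.std()

def is_anomaly(x):
    return np.mean((auto.predict(x, verbose=0) - x) ** 2, axis=1) > threshold
```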

8.
We discuss the model of electronic signatures as described by the European eIDAS Regulation from the perspective of the common understanding of electronic signatures in the cryptographic community. We show that these two perspectives do not present the same picture. The discrepancies between them may become opportunities as well as barriers for the rapid deployment of electronic signatures and seals in business and administration. We focus particularly on the validation of advanced electronic signatures and its interplay with the data protection requirements of the GDPR. We show that by tweaking the existing technical standards one can reduce the number of problems and achieve compliance with both the GDPR and eIDAS. Among other things, we wish to bring attention to the evolving regulatory framework, which will without doubt have a substantial impact on the ecosystem of electronic signatures.

9.
Metro shield construction inevitably causes changes in the stress and strain state of the surrounding soil, resulting in stratum deformation and surface settlement (SS), which can seriously endanger the safety of nearby buildings, roads, and underground pipe networks. Therefore, in the design and construction stage, optimizing the shield construction parameters (SCP) is key to reducing the SS rate and increasing the safe driving speed (DS). However, existing SCP optimization is challenged by the need for a unified multiobjective optimization model that is efficient, convenient, and widely applicable. This paper proposes a hybrid intelligence framework that combines random forest (RF) and the non-dominated sorting genetic algorithm II (NSGA-II), which overcomes the time-consuming and costly establishment and verification of traditional prediction models. First, RF is used to rank the importance of 10 influencing factors, and the nonlinear mapping between the main SCP and the two objectives is constructed as the fitness function of the NSGA-II algorithm. Second, a multiobjective RF-NSGA-II optimization framework is established, on the basis of which the optimal Pareto front is calculated and reasonable optimized control ranges for the SCP are obtained. Finally, a case study from the Wuhan Rail Transit Line 6 project is examined. The results show that with the proposed framework the SS is reduced by 12.5% and the DS is increased by 2.5%. The prediction results are also compared with a back-propagation neural network (BPNN), a support vector machine (SVM), and a gradient boosting decision tree (GBDT). The findings indicate that the RF-NSGA-II framework not only meets the requirements of SS and DS calculation, but can also be used as a support tool for real-time optimization and control of SCP.
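A minimal sketch of the RF + NSGA-II idea, assuming the pymoo library: random-forest surrogates predict the two objectives (settlement SS, driving speed DS) from construction parameters, and NSGA-II searches for the Pareto front over those surrogates. The parameter count, bounds, and synthetic training data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

rng = np.random.default_rng(4)
X = rng.uniform(size=(500, 4))                            # 4 scaled SCP values
ss = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.05, 500)   # synthetic settlement
ds = X[:, 2] + 0.3 * X[:, 1] + rng.normal(0, 0.05, 500)   # synthetic speed
rf_ss = RandomForestRegressor(n_estimators=100).fit(X, ss)
rf_ds = RandomForestRegressor(n_estimators=100).fit(X, ds)

class ShieldProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=4, n_obj=2, xl=0.0, xu=1.0)
    def _evaluate(self, x, out, *args, **kwargs):
        x = x.reshape(1, -1)
        # Minimize predicted settlement; maximize speed by negating it.
        out["F"] = [rf_ss.predict(x)[0], -rf_ds.predict(x)[0]]

res = minimize(ShieldProblem(), NSGA2(pop_size=50), ("n_gen", 40), verbose=False)
print(res.F[:5])   # a slice of the Pareto front (SS, -DS)
```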

10.
In the era of digitalization, many emerging technologies, such as the Internet of Things (IoT), Digital Twin (DT), cloud computing, and Artificial Intelligence (AI), are being rapidly developed and used in product design and development. Among these, DT is a promising technology that has been widely used in different industries, especially manufacturing, to monitor performance, optimize processes, simulate results, and predict potential errors. DT also plays various roles across the whole product lifecycle, from design and manufacturing to delivery, use, and end-of-life. With the growing demand for individualized products and the implementation of Industry 4.0, DT can provide an effective solution for future product design, development, and innovation. This paper aims to map out the current state of DT research focusing on product design and development by summarizing typical industrial cases. Challenges and potential applications of DT in product design and development are also discussed to inspire future studies.

11.
Information extracted from aerial photographs is widely used in the fields of urban planning and design. An effective method for detecting buildings in aerial photographs is to use deep learning to understand the current state of a target region. However, the building mask images used to train the deep learning model must in many cases be generated manually. To overcome this challenge, a method has been proposed for automatically generating mask images by using textured three-dimensional (3D) virtual models together with aerial photographs. Some aerial photographs include clouds, which degrade image quality; these clouds can be removed with a generative adversarial network (GAN), which improves training quality. The objective of this research was therefore to propose a method for automatically generating building mask images by using 3D virtual models with textured aerial photographs. In this study, using a GAN to remove clouds in aerial photographs improved training quality, and a model trained on datasets generated by the proposed method was able to detect buildings in aerial photographs with an intersection over union (IoU) of 0.651.
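A minimal sketch of the IoU metric the abstract reports for building detection; the masks below are illustrative binary arrays, not data from the study.

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

truth = np.zeros((64, 64)); truth[10:40, 10:40] = 1   # ground-truth building mask
pred = np.zeros((64, 64)); pred[15:45, 12:42] = 1     # model prediction
print(f"IoU = {iou(pred, truth):.3f}")
```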

12.
Smart manufacturing has great potential in the development of network collaboration, mass personalised customisation, sustainability and flexibility. Customised production can better meet dynamic user needs, and network collaboration can significantly improve production efficiency. The industrial internet of things (IIoT) and artificial intelligence (AI) have penetrated the manufacturing environment, improving production efficiency and facilitating customised and collaborative production. However, these technologies are isolated and dispersed across applications in machine design and manufacturing processes. It is a challenge to integrate AI and IIoT technologies on a common platform, to develop autonomous connected manufacturing machines (ACMMs) matched to smart manufacturing, and to facilitate smart manufacturing services (SMSs) across the whole product life cycle. This paper first proposes a three-terminal collaborative platform (TTCP) consisting of cloud servers, embedded controllers and mobile terminals to integrate AI and IIoT technologies for ACMM design. Then, based on the ACMMs, a framework for SMSs to generate more IIoT-driven and AI-enabled services is presented. Finally, as an illustrative case, a more autonomous engraving machine and a smart manufacturing scenario are designed using the above method. This case implements basic engraving functions along with an AI-enabled broken-tool detection service for collaborative production, a remote human-machine interface service for customised production and network collaboration, and an energy consumption analysis service for production optimisation. The proposed systematic method can offer the manufacturing industry inspiration for generating SMSs and can facilitate production optimisation as well as customised and collaborative production.
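The paper's platform internals are not given in the abstract; as one plausible illustration of the embedded-controller-to-cloud link in such a three-terminal platform, here is a telemetry publisher over MQTT, a common IIoT transport. The broker address, topic layout, and payload fields are invented assumptions.

```python
import json
import time
import paho.mqtt.client as mqtt

# paho-mqtt 1.x style constructor (2.x additionally takes a CallbackAPIVersion).
client = mqtt.Client()
client.connect("broker.example.com", 1883)   # hypothetical cloud broker
client.loop_start()

for _ in range(3):                           # bounded loop for illustration
    telemetry = {
        "machine_id": "engraver-01",         # hypothetical engraving machine
        "spindle_current_a": 1.42,           # input to broken-tool detection
        "energy_kwh": 0.07,                  # input to energy-analysis service
        "ts": time.time(),
    }
    client.publish("factory/engraver-01/telemetry", json.dumps(telemetry))
    time.sleep(5)
client.loop_stop()
```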

13.
In this study, two types of convolutional neural network (CNN) classifiers are designed to handle the problem of classifying black plastic wastes. Black plastics absorb the laser light emitted by a spectrometer, so their classification remains a challenging problem compared with classifying other colored plastic wastes using existing spectroscopy (e.g., NIR). To address this, effective classification techniques based on laser spectroscopy, namely Fourier transform infrared (FT-IR) spectroscopy with attenuated total reflectance (ATR) and Raman spectroscopy, are introduced. Owing to their strong ability to extract spatial features and their remarkable performance in image classification, 1D and 2D CNNs operating on these data features are designed as classifiers. A chemical peak-point selection technique is applied to reduce data redundancy, and a 2D data feature built from the extracted 1D peak-point data is also introduced. Experimental results demonstrate that the 2D CNN classifier designed with the help of 2D data feature selection, as well as the 1D CNN classifier, shows the best performance compared with other reported methods for classifying black plastic wastes.
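A minimal sketch of peak-point selection on a spectrum, the data-reduction step the abstract describes upstream of the CNN classifiers. The synthetic absorption bands and prominence threshold are assumptions; the study's actual selection rule is not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(5)
wavenumber = np.linspace(400, 4000, 1800)               # cm^-1 axis
spectrum = (np.exp(-((wavenumber - 1450) / 40) ** 2)    # synthetic absorption bands
            + 0.7 * np.exp(-((wavenumber - 2900) / 60) ** 2)
            + rng.normal(0, 0.01, wavenumber.size))

peaks, _ = find_peaks(spectrum, prominence=0.1)         # chemical peak points
features = spectrum[peaks]                              # reduced 1D feature vector
print(wavenumber[peaks].round(1), features.round(3))
```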

14.
With the ever-increasing demand for personalized product functions, product structure is becoming more and more complex. Designing a complex engineering product involves mechanical, electrical, automation and other relevant fields, which requires closer multidisciplinary collaborative design (MCD) and integration. However, the traditional design method lacks multidisciplinary coordination, which leads to interaction barriers between design stages and a disconnection between product design and prototype manufacturing. To bridge this gap, a novel digital twin-enabled MCD approach is proposed. First, the paper explores how to bring MCD into the digital design process of complex engineering products in a cyber-physical system manner. MCD is divided into three parts, multidisciplinary knowledge collaboration, multidisciplinary collaborative modeling and multidisciplinary collaborative simulation, and realization methods are proposed for each part. To describe the complex product in a virtual environment, a systematic MCD framework based on the digital twin is then constructed, integrating multidisciplinary collaboration into three stages: conceptual design, detailed design and virtual verification. The ability to verify and correct, in real time, problems arising from multidisciplinary fusion minimizes the number of iterations and the costs in the design process, and it provides a reference for complex product design. Finally, a design case of an automatic cutting machine demonstrates the feasibility and effectiveness of the proposed approach.

15.
This paper presents a novel denoising approach based on deep learning and signal processing to improve communication efficiency. Construction activities take place when different trades come to the site during overlapping periods to perform their work, which can easily produce hazardous noise levels. Noise affects workers' health, especially hearing and heart rhythm, and impairs communication efficiency between workers. The proposed approach employs a signal processing technique to transform noisy audio into an image and utilizes neural networks to extract the noise features and denoise the image; the denoised image is then converted back to obtain the denoised audio. Experiments on reducing the side effects of several common noises on construction sites were conducted, and performance was compared with conventional wavelet-transform denoising. Standard objective measures, such as the signal-to-noise ratio (SNR), and subjective measures, such as listening tests, were used for evaluation. Our experimental results show that the proposed algorithm achieved significant improvements over the traditional method, as evidenced by the following median values: MSE of 0.002, RMSE of 0.049, SNR of 5.7 dB, PSNR of 25.8 dB, and SSR of 8. The results indicate that the proposed algorithm outperforms conventional denoising methods in terms of both objective and subjective evaluation metrics and has the potential to facilitate communication between site workers when they inevitably face different noise sources.
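A minimal sketch of the audio-to-image-and-back step in the pipeline: STFT to a spectrogram "image", a placeholder in place of the neural denoiser, inverse STFT, and an SNR score. The signals and the magnitude-mask rule are assumptions standing in for the paper's network.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)                          # stand-in speech tone
noisy = clean + np.random.default_rng(6).normal(0, 0.3, fs)  # added site noise

f, tt, Z = stft(noisy, fs=fs, nperseg=512)    # complex spectrogram ("image")
mag = np.abs(Z)
mask = mag > 0.02 * mag.max()                 # placeholder for the NN denoiser
_, denoised = istft(Z * mask, fs=fs, nperseg=512)
denoised = denoised[: clean.size]             # convert image back to audio

def snr_db(ref, x):
    noise = x - ref
    return 10 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

print(f"SNR before: {snr_db(clean, noisy):.1f} dB, "
      f"after: {snr_db(clean, denoised):.1f} dB")
```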

16.
To make use of the great opportunities for emission reduction in early building design, future emissions need to be calculated when only geometric, but no detailed material, information about a building is available. Currently, early design phase life cycle assessments (LCAs) rely heavily on assumptions about specific material choices, leading to single-point emission values that suggest a precision not representative of an early design stage. By adding knowledge about the possible locations and functions of materials within a building to life cycle inventory (LCI) data, the EarlyData knowledge base makes LCA data sets accessible and more transparent. Additionally, “generic building parts” are defined, which describe building parts independently of precise material choices as a combination of layers with specific functions. During evaluation, the enriched LCI data and generic building parts enable assessment of a vast number of possible material combinations at once. Thus, instead of single-value results for a particular material combination, ranges of results are displayed, revealing the building parts with the greatest emission reduction potential. The application of the EarlyData tool is illustrated with a use case comparing a wood building and a concrete building. The database is developed with extensibility in mind, so as to include other criteria, such as (life cycle) costs.
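A minimal sketch of the range-based evaluation idea: a generic building part is a stack of functional layers, each with several admissible materials, and enumerating the combinations yields an emission range rather than a single value. The layer options and emission factors (kg CO2e/m2) are invented for illustration, not EarlyData values.

```python
from itertools import product

generic_wall = {  # functional layers of a generic building part
    "load-bearing": {"timber": 28.0, "reinforced concrete": 95.0},
    "insulation":   {"cellulose": 6.0, "mineral wool": 14.0, "EPS": 21.0},
    "cladding":     {"wood boards": 9.0, "fiber cement": 18.0},
}

# Every admissible material combination across the layers.
totals = [sum(combo) for combo in
          product(*(layer.values() for layer in generic_wall.values()))]
print(f"emissions range: {min(totals):.0f}-{max(totals):.0f} kg CO2e/m2 "
      f"over {len(totals)} combinations")
```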

17.
18.
The China-Pakistan Economic Corridor (CPEC) is considered an excellent breakthrough for improving the economic and security situation in the region. The estimated worth of CPEC is $62 billion, comprising 49 development projects. The China-Pakistan Fiber Optic Project (CPFOP) is one of the core projects among these; it will provide a secure route for voice traffic between the two countries. CPFOP is greatly beneficial in terms of enhanced security and revenue generation. Currently, Pakistan's international connectivity is via submarine cables. CPFOP will provide an alternative route for international telecom traffic and will also assist in meeting the rapidly growing demand for internet traffic in Pakistan. It is estimated that 17 million people will benefit from this project. However, every project has some undesirable impacts. The aim of this research paper is twofold: first, to trace out the pros and cons of CPFOP, and second, to perform a risk assessment of CPFOP using the fuzzy VIKOR technique. This approach helps prioritize a list of failure modes of the fiber optic cable (FOC). Finally, this paper will help the authorities optimize and safeguard national interests in the wake of CPFOP.
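The paper applies a fuzzy variant of VIKOR; the sketch below shows only the crisp VIKOR ranking core underlying it. The failure modes, criteria scores, and weights are invented for illustration.

```python
import numpy as np

F = np.array([  # rows: FOC failure modes; columns: criteria (higher = worse risk)
    [7, 6, 4],  # cable cut
    [5, 8, 6],  # connector degradation
    [3, 4, 9],  # water ingress
])
w = np.array([0.5, 0.3, 0.2])   # criteria weights
v = 0.5                         # balance of group utility vs. individual regret

# For risk prioritization, the maximal (riskiest) score is the reference value.
f_best, f_worst = F.max(axis=0), F.min(axis=0)
norm = (f_best - F) / (f_best - f_worst)
S = (w * norm).sum(axis=1)      # group utility
R = (w * norm).max(axis=1)      # individual regret
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))
print("priority order (lower Q = higher risk priority):", np.argsort(Q))
```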

19.
Architecture, engineering, and construction projects need to be promoted in harmony with the natural environment and with the aim of preserving people's living environment. At the planning and design stage, decision-makers and stakeholders share and assess landscape images during and after construction in order to avoid as much uncertainty as possible when performing environmental impact assessment. Given the lack of a standard visualization method for future landscapes that do not yet exist, mixed reality (MR), which overlays virtual content onto a real scene, has attracted attention in the field of landscape design. One challenge in MR is occlusion, which occurs when virtual objects obscure physical objects that should be rendered in the foreground. In MR-based landscape visualization, the distance between the MR camera and the real objects located in front of the virtual objects may vary and may be large, causing difficulty for existing occlusion handling methods. In the process of landscape design, an evidence-based approach has also become important: landscape index estimation using semantic segmentation by deep learning, which can recognize the surrounding environment, has been actively studied for landscape assessment. In this study, semantic segmentation by deep learning was integrated into an MR system to enable dynamic occlusion handling and landscape index estimation for assessing both existing and designed landscapes. The system can be operated on a mobile device, with video communicated over the internet to real-time semantic segmentation running on a high-performance personal computer. The applicability of the developed system is demonstrated through accuracy verification and case studies.
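A minimal sketch of segmentation-driven occlusion handling: pixels the segmentation network labels as foreground classes (for example, trees or people) keep the camera image, so virtual design content appears correctly behind them. The arrays and class conventions are illustrative assumptions.

```python
import numpy as np

H, W = 480, 640
camera = np.random.default_rng(7).integers(0, 255, (H, W, 3), dtype=np.uint8)
virtual = np.zeros_like(camera); virtual[:, :, 1] = 200    # rendered design content
virtual_valid = np.zeros((H, W), bool)
virtual_valid[100:300, 200:500] = True                     # where the design is drawn

seg = np.zeros((H, W), np.int32)     # per-pixel class IDs from the network
seg[150:480, 50:250] = 3             # e.g., class 3 = a tree in the foreground
FOREGROUND_CLASSES = {3, 4}          # classes that should occlude virtual content

occluder = np.isin(seg, list(FOREGROUND_CLASSES))
# Draw the virtual layer only where it exists and no real occluder is present.
composite = np.where((virtual_valid & ~occluder)[..., None], virtual, camera)
```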

20.
Targeted design methodologies (DfX) were developed to cope with specific engineering design issues such as cost-effectiveness, manufacturability, assemblability, and maintainability, among others. However, DfX methodologies suffer from a lack of real integration with 3D CAD systems. Their principles are currently applied downstream of the 3D modelling by following the well-known rules available from the literature and engineers' know-how (tacit internal knowledge). This paper provides a method to formalize complex DfX engineering knowledge into explicit knowledge that can be reused in Advanced Engineering Informatics to aid designers and engineers in developing mechanical products. This research work defines a general method (an ontology) able to couple DfX design guidelines (engineering knowledge) with the geometrical features of a product's 3D model (engineering parametric data). A common layer for all DfX methods (horizontal) and dedicated layers for each DfX method (vertical) allow a suitable ontology to be created for the systematic collection of the DfX rules for each target. Moreover, the proposed framework is the first step towards developing, as future work, a software tool to assist engineers and designers during product development (3D CAD modelling). A design for assembly (DfA) case study shows how to collect assembly rules in the given framework and demonstrates the applicability of the CAD-integrated DfX system in the mechanical design of a jib crane. Several benefits are recognized: (i) systematic collection of DfA rules for informatics development, (ii) identification of assembly issues in the product development process, and (iii) reduction of effort and time during the design review.
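As one way to picture the horizontal/vertical layering, here is a minimal sketch that encodes a single DfA rule as ontology triples with rdflib. The namespace, class names, properties, and the example rule are invented assumptions, not the paper's actual ontology.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

DFX = Namespace("http://example.org/dfx#")   # hypothetical namespace
g = Graph()
g.bind("dfx", DFX)

# Horizontal layer: concepts shared by all DfX methods.
g.add((DFX.Rule, RDF.type, RDFS.Class))
g.add((DFX.GeometricFeature, RDF.type, RDFS.Class))

# Vertical (DfA) layer: one assembly guideline tied to a CAD feature type.
g.add((DFX.ChamferRule, RDF.type, DFX.Rule))
g.add((DFX.Hole, RDF.type, DFX.GeometricFeature))
g.add((DFX.ChamferRule, DFX.appliesTo, DFX.Hole))
g.add((DFX.ChamferRule, DFX.guideline,
       Literal("Chamfer hole entries to ease peg-in-hole insertion")))

print(g.serialize(format="turtle"))
```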
