Similar Literature
20 similar documents found.
1.
In this study, 7075-Al2O3 (5 wt%) composites with particle sizes of 0.3 µm, 2 µm, and 15 µm were developed by hot pressing. The dry sliding wear performance of the specimens was evaluated under loads of 5 N, 10 N, 20 N, and 30 N and at sliding speeds of 80 mm/s, 110 mm/s, and 140 mm/s by reciprocating wear tests. The wear tests showed that 7075-5Al2O3 (15 µm) exhibited the best wear performance. The volume loss of 7075-5Al2O3 (15 µm) under a load of 30 N and a sliding speed of 140 mm/s was 37.1% lower than that of the unreinforced 7075 alloy. The volume loss (mm3) of the composites reinforced with particle sizes of 0.3 µm, 2 µm, and 15 µm was 11.62, 9.87, and 8.07, respectively, for a load of 30 N and a sliding speed of 140 mm/s. An increase in the applied load and sliding speed increased the wear severity by changing the wear mechanism from abrasion to delamination. The analysis of variance (ANOVA) showed that the load was the most significant parameter affecting the volume loss. Linear regression (LR), support vector regression (SVR), an artificial neural network (ANN), and an extreme learning machine (ELM) were used to predict the volume loss. The coefficient of determination (R2) of the LR, SVR, ANN, and ELM models was 0.814, 0.976, 0.935, and 0.989, respectively. The ELM model achieved the best fit and thus has significant potential for predicting the wear behaviour of Al matrix composites.
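The abstract does not give the model settings, so the sketch below only illustrates, on synthetic stand-in data, how the four regressors could be compared by R2; the ELM is not part of scikit-learn, so a minimal random-hidden-layer version is written out by hand. All hyperparameters, variable names and the synthetic wear data are assumptions, not values from the study.

    # Illustrative comparison of LR, SVR, ANN and a minimal ELM by R^2.
    # Synthetic stand-in data: features = [particle size (um), load (N), speed (mm/s)],
    # target = volume loss (mm^3). All settings are assumptions, not the paper's.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    X = rng.uniform([0.3, 5, 80], [15, 30, 140], size=(200, 3))   # size, load, speed
    y = 0.2 * X[:, 1] + 0.03 * X[:, 2] - 0.15 * X[:, 0] + rng.normal(0, 0.5, 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

    class ELMRegressor:
        """Minimal extreme learning machine: random hidden layer + least-squares output."""
        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)
        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)
            self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            return self
        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    models = {"LR": LinearRegression(), "SVR": SVR(C=10.0),
              "ANN": MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0),
              "ELM": ELMRegressor()}
    for name, model in models.items():
        model.fit(X_tr_s, y_tr)
        print(name, "R2 =", round(r2_score(y_te, model.predict(X_te_s)), 3))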

2.
Fibrillar dry adhesives have shown great potential in many applications thanks to their tunable adhesion, notably for pick-and-place handling of fragile objects. However, controlling and monitoring alignment with the target objects is mandatory to enable reliable handling. In this paper, we present an in-line monitoring system that allows optical analysis of an array of individual fibrils (with a contact radius of 350 µm) in contact with a smooth glass substrate, followed by prediction of their adhesion performance. Images recorded at maximum compressive preload represent characteristic contact signatures that were used to extract visual features. These features, in turn, were used to create a linear model and to train different linear and non-linear regression models for predicting adhesion force as a function of the misalignment angle. Support vector regression and boosted tree models exhibited the highest accuracies and outperformed an analytical model reported in the literature. Overall, this new approach enables prediction of gripping performance from contact observations in near real time, which is likely to improve the reliability of handling operations.
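As a rough illustration of the regression step described above, the sketch below compares a linear model, SVR and boosted trees by cross-validated R2 on synthetic contact-signature features (misalignment angle and normalised contact area); the feature set and data ranges are assumptions, not those of the study.

    # Hedged sketch: predicting adhesion force from assumed contact-signature features.
    # The features (misalignment angle, normalised contact area) and data are illustrative.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    angle = rng.uniform(0, 3, 300)                         # misalignment angle (deg), assumed range
    area = 1.0 - 0.25 * angle + rng.normal(0, 0.05, 300)   # normalised contact area, synthetic
    X = np.column_stack([angle, area])
    force = 50 * area**1.5 + rng.normal(0, 1, 300)         # adhesion force (mN), synthetic

    for name, model in [("linear", LinearRegression()),
                        ("SVR", SVR(C=100.0)),
                        ("boosted trees", GradientBoostingRegressor(random_state=0))]:
        score = cross_val_score(model, X, force, cv=5, scoring="r2").mean()
        print(f"{name}: mean CV R2 = {score:.3f}")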

3.
This paper aims to understand and model the effect of vibration on particle percolation. The percolation of small particles in a vibrated bed of large particles is studied by the discrete element method (DEM). It is found that the percolation velocity (Vp) decreases with increasing vibration amplitude (A) and frequency (f) when the size ratio of small to large particles (d/D) is smaller than the spontaneous percolation threshold of 0.154. Vibration can enable percolation when the size ratio is larger than 0.154, in which case Vp first increases and then decreases with increasing A and f. Vp can be correlated to the vibration velocity amplitude for a given size ratio. The previously proposed radial dispersion model can still be applied, although the dispersion coefficient is affected by the vibration conditions and the size ratio. Furthermore, a machine learning model is trained to predict Vp as a function of A, f and d/D, and is then used to obtain the percolation threshold size ratio as a function of the vibration conditions.
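One way to use such a trained model to recover a vibration-dependent percolation threshold is sketched below: a regressor is fitted to synthetic (A, f, d/D) -> Vp data and the d/D axis is scanned for the ratio at which the predicted Vp drops to near zero. The synthetic Vp surface, the model choice and the tolerance are illustrative assumptions, not the DEM results of the paper.

    # Hedged sketch: regress Vp on (A, f, d/D), then scan d/D for the ratio at which
    # the predicted Vp falls to (near) zero under given vibration conditions.
    # The synthetic Vp surface below is an assumption, not the paper's DEM data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    A = rng.uniform(0.5, 3.0, 2000)        # vibration amplitude (mm), assumed range
    f = rng.uniform(10, 60, 2000)          # vibration frequency (Hz), assumed range
    ratio = rng.uniform(0.05, 0.30, 2000)  # size ratio d/D
    # Synthetic stand-in: percolation stops above a vibration-dependent threshold ratio.
    threshold = 0.154 + 2e-4 * A * f
    Vp = np.maximum(threshold - ratio, 0.0) * 10 + rng.normal(0, 0.05, 2000)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(np.column_stack([A, f, ratio]), Vp)

    def predicted_threshold(a, freq, tol=0.05):
        """Smallest d/D at which the predicted Vp drops below tol."""
        grid = np.linspace(0.05, 0.30, 251)
        pred = model.predict(np.column_stack([np.full_like(grid, a),
                                              np.full_like(grid, freq), grid]))
        below = grid[pred < tol]
        return below[0] if below.size else None

    print("estimated threshold d/D at A=2 mm, f=40 Hz:", predicted_threshold(2.0, 40.0))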

4.
5.
Big data is increasingly available in all areas of manufacturing and operations, which presents an opportunity for better decision making and discovery of the next generation of innovative technologies. Recently, there have been substantial developments in the field of patent analytics, which describes the science of analysing large amounts of patent information to discover trends. We define Intellectual Property Analytics (IPA) as the data science of analysing large amounts of IP information to discover relationships, trends and patterns for decision making. In this paper, we contribute to the ongoing discussion on the use of intellectual property analytics methods, i.e., artificial intelligence, machine learning and deep learning approaches, to analyse intellectual property data. This literature review follows a narrative approach with a defined search strategy, in which we present the state of the art in intellectual property analytics by reviewing 57 recent articles. The bibliographic information of the articles is analysed, followed by a discussion of the articles divided into four main categories: knowledge management, technology management, economic value, and extraction and effective management of information. We hope that research scholars and industrial users will find this review helpful when searching for the latest research efforts pertaining to intellectual property analytics.

6.
In this study, four different machine learning (ML) models were used to simulate the migration behavior of minerals during coal slime flotation based on particle characteristics (shape, size, composition, and type): random forest (RF), logistic regression (LR), AdaBoost (Ada), and k-nearest neighbors (KNN). For ML model development, 70% of the total data was used for the training phase and 30% for the testing phase. The F-score and the area under the curve (AUC) were used as the main indicators for evaluating the different ML models. Compared to the other ML models, the RF model had the best accuracy for simulating particle migration behavior during flotation. Furthermore, the RF model avoided the drawback of having to be retrained when the feed conditions changed. The results revealed that particle size and particle composition play the most significant roles in coal slime flotation.
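A minimal sketch of the described workflow (70/30 split, F-score and AUC for RF, LR, AdaBoost and KNN) is given below; the particle descriptors and labels are synthetic stand-ins generated with scikit-learn, not the flotation data of the study.

    # Hedged sketch of the described workflow: 70/30 split and F1/AUC comparison of
    # RF, LR, AdaBoost and KNN. Particle features and labels are synthetic stand-ins.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import f1_score, roc_auc_score

    # Stand-in for particle descriptors (shape, size, composition, type) and a binary
    # label for whether a particle reports to the concentrate or the tailings.
    X, y = make_classification(n_samples=1000, n_features=8, n_informative=4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {"RF": RandomForestClassifier(random_state=0),
              "LR": LogisticRegression(max_iter=1000),
              "Ada": AdaBoostClassifier(random_state=0),
              "KNN": KNeighborsClassifier()}
    for name, clf in models.items():
        clf.fit(X_tr, y_tr)
        proba = clf.predict_proba(X_te)[:, 1]
        print(f"{name}: F1 = {f1_score(y_te, clf.predict(X_te)):.3f}, "
              f"AUC = {roc_auc_score(y_te, proba):.3f}")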

7.
With the growing number of applications of artificial intelligence, such as autonomous cars or smart industrial equipment, the inaccuracy of the machine learning algorithms involved could lead to catastrophic outcomes. Human-in-the-loop computing combines human and machine intelligence, resulting in a hybrid intelligence with complementary strengths. Whereas machines are unbeatable in logic and computation speed, humans contribute their creative and dynamic minds. Hybrid intelligent systems are necessary to achieve high accuracy and reliability of machine learning algorithms. Based on a design science research project with a Swedish manufacturing company, this paper presents an application of human-in-the-loop computing to make operational processes more efficient. While conceptualizing a Smart Power Distribution system for electric industrial equipment, this research presents a set of principles for designing machine-learning algorithms for hybrid intelligence. From being AI-ready as an organization to clearly focusing on the customer benefits of a hybrid intelligent system, designers need to build and strengthen trust in the human-AI relationship to make future applications successful and reliable. With the growing trend of technological advancement and the incorporation of artificial intelligence in more and more applications, the alliance of humans and machines has become even more crucial.

8.
A thermodynamically-based precipitation model, employing the classical nucleation and growth theories, has been adapted to deal with the simultaneous precipitation of metastable and stable phases. This model gives an estimation of the precipitation kinetics (the time evolution of the radius and density of precipitates for both phases, as well as the evolution of the solute fraction) over a wide range of temperatures. The results were successfully compared with an experimental isothermal precipitation diagram (Time-Temperature-Transformation, TTT) from the literature for the precipitation of ε carbide and Fe3C in low-carbon steels.
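The abstract does not reproduce the governing equations; for orientation only, the relations below are the standard classical nucleation rate and diffusion-controlled growth law typically used in such thermodynamically-based precipitation models, not expressions taken from the cited paper:

    % Classical nucleation rate and diffusion-controlled growth law commonly used in
    % precipitation models of this type (illustrative; not taken from the cited paper).
    \begin{align}
      J &= N_0\, Z\, \beta^{*}\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right)
           \exp\!\left(-\frac{\tau}{t}\right),
      \qquad
      \Delta G^{*} = \frac{16\pi\,\gamma^{3}}{3\,(\Delta g_{v})^{2}}, \\
      \frac{\mathrm{d}R}{\mathrm{d}t} &= \frac{D}{R}\,
           \frac{\bar{C}-C_{R}^{\mathrm{eq}}}{C_{p}-C_{R}^{\mathrm{eq}}},
    \end{align}

where J is the nucleation rate, ΔG* the energy barrier set by the interfacial energy γ and the volumetric driving force Δg_v, Z the Zeldovich factor, β* the solute attachment rate, τ the incubation time, and the second relation describes the diffusion-controlled growth of a precipitate of radius R in a matrix of mean solute content C̄.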

9.
Industrial robots are widely used in various areas owing to their greater degrees of freedom (DOFs) and larger operation space compared with traditional frame movement systems involving sliding and rotational stages. However, the geometrical transfer of joint kinematic errors and the relatively weak rigidity of industrial robots compared with frame movement systems decrease their absolute kinematic accuracy, thereby limiting their further application in ultraprecision manufacturing. This imposes a stringent requirement for improving the absolute kinematic accuracy of industrial robots in terms of the position and orientation of the robot arm end. Current measurement and compensation methods for industrial robots either require expensive measuring systems for capturing position and orientation errors or offer low measurement accuracy. Herein, a kinematic calibration method for an industrial robot using an artifact with a hybrid spherical and ellipsoid surface is proposed. A system with submicrometric precision for measuring the position and orientation of the robot arm end is developed using laser displacement sensors. Subsequently, a novel kinematic error compensation method involving both a residual learning algorithm and a neural network is proposed to compensate for nonlinear errors. A six-layer recurrent neural network (RNN) is designed to compensate for the kinematic nonlinear errors of a six-DOF industrial robot. The results validate the feasibility of the proposed method for measuring the kinematic errors of industrial robots, and the compensation method based on the RNN improves the accuracy via parameter fitting. Experimental studies show that the measuring system and compensation method can reduce motion errors by more than 30%. The present study provides a feasible and economic approach for measuring and improving the motion accuracy of an industrial robot at the submicrometric measurement level. The full text can be downloaded at https://link.springer.com/article/10.1007/s40436-022-00400-6
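A rough sketch of how an RNN-based compensator of this kind could be set up is given below (PyTorch); the six-layer depth follows the abstract, but the hidden size, the trajectory-style input, the synthetic error data and the training settings are all illustrative assumptions.

    # Hedged sketch: an RNN that learns residual pose errors of a 6-DOF robot from
    # commanded joint angles. Architecture, sizes and data are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ErrorCompensator(nn.Module):
        def __init__(self, n_joints=6, hidden=64, n_layers=6, pose_dim=6):
            super().__init__()
            # Stacked RNN over a short trajectory of joint configurations.
            self.rnn = nn.RNN(input_size=n_joints, hidden_size=hidden,
                              num_layers=n_layers, batch_first=True)
            self.head = nn.Linear(hidden, pose_dim)   # predicted (dx, dy, dz, da, db, dc)

        def forward(self, joint_seq):                 # joint_seq: (batch, time, n_joints)
            out, _ = self.rnn(joint_seq)
            return self.head(out[:, -1])              # error at the trajectory end point

    # Toy training loop on synthetic data (stand-in for measured laser-sensor errors).
    model = ErrorCompensator()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    joints = torch.randn(256, 10, 6)                  # 256 trajectories, 10 steps, 6 joints
    errors = 0.01 * joints.mean(dim=1)                # synthetic residual pose errors
    for _ in range(200):
        optimiser.zero_grad()
        loss = loss_fn(model(joints), errors)
        loss.backward()
        optimiser.step()
    print("final training MSE:", loss.item())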

10.
One of the problems encountered when galvanizing reactive steel is controlling the thickness of the zinc layer. The aim of this paper is to apply a Doehlert design that takes into account the effects of bath temperature, immersion time and withdrawal speed in order to reduce zinc consumption. Our results show that there is a real opportunity to optimize the galvanizing process towards a minimum coating thickness by adjusting only these physical parameters, without resorting to any bath additions.

11.
Machine-learning algorithms pervade our daily lives. In epidemiology, supervised machine learning has the potential for classification, diagnosis and risk factor identification. Here, we report the use of support vector machine learning to identify the features associated with hock burn on commercial broiler farms, using routinely collected farm management data. These data lend themselves to analysis using machine-learning techniques. Hock burn, dermatitis of the skin over the hock, is an important indicator of broiler health and welfare. Remarkably, this classifier can predict the occurrence of high hock burn prevalence with an accuracy of 0.78 on unseen data, as measured by the area under the receiver operating characteristic curve. We also compare the results with those obtained by standard multi-variable logistic regression and suggest that this technique provides new insights into the data. This novel application of a machine-learning algorithm, embedded in poultry management systems, could offer significant improvements in broiler health and welfare worldwide.
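For illustration, the sketch below compares an SVM with multi-variable logistic regression by the area under the ROC curve on held-out data; the farm-management features and hock-burn labels are synthetic stand-ins, and the model settings are assumptions.

    # Hedged sketch: SVM vs. logistic regression for predicting high hock-burn
    # prevalence, scored by ROC AUC on held-out data. Features and labels are synthetic.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    X, y = make_classification(n_samples=800, n_features=12, n_informative=5, random_state=3)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)

    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=3))
    logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    for name, clf in [("SVM", svm), ("logistic regression", logreg)]:
        clf.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"{name}: ROC AUC = {auc:.3f}")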

12.
Metabolomics experiments involve the simultaneous detection of a large number of metabolites, leading to large multivariate datasets, and computer-based applications are required to extract the relevant biological information. A high-throughput metabolic fingerprinting approach based on ultra performance liquid chromatography (UPLC) and high-resolution time-of-flight (TOF) mass spectrometry (MS) was developed for the detection of wound biomarkers in the model plant Arabidopsis thaliana. High-dimensional data were generated and analysed with chemometric methods. In addition, machine learning classification algorithms constitute promising tools for deciphering complex metabolic phenotypes, but their application remains scarcely reported in this research field. The present work proposes a comparative evaluation of a set of diverse machine learning schemes in the context of metabolomic data with respect to their ability to provide a deeper insight into the metabolite network involved in the wound response. Standalone classifiers, i.e. J48 (decision tree), kNN (instance-based learner), SMO (support vector machine), multilayer perceptron and RBF network (neural networks) and Naive Bayes (probabilistic method), as well as combinations of classification and feature selection algorithms, such as Information Gain, RELIEF-F, Correlation Feature-based Selection and SVM-based methods, are concurrently assessed, and cross-validation resampling procedures are used to avoid overfitting. This study demonstrates that machine learning methods represent valuable tools for the analysis of UPLC-TOF/MS metabolomic data. Remarkable performance was achieved, and the stability of the models demonstrated their robustness and interpretability potential. The results drew attention to both temporal and spatial metabolic patterns in the context of stress signalling and highlighted relevant biomarkers not evidenced by standard data treatment.
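A minimal scikit-learn sketch of this kind of comparison is given below: several standalone classifiers and one feature-selection + classifier pipeline are scored by 10-fold cross-validation on a synthetic wide data matrix. Mutual-information selection stands in for the information-gain filter, and the classifier set only approximates the WEKA-style schemes named above; all data and settings are assumptions.

    # Hedged sketch: cross-validated comparison of standalone classifiers and a
    # feature-selection + classifier pipeline on stand-in metabolomic intensity data.
    # Mutual-information selection approximates the information-gain filter named above.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.naive_bayes import GaussianNB

    # Stand-in for a wide metabolite-intensity matrix (few samples, many features).
    X, y = make_classification(n_samples=120, n_features=300, n_informative=15, random_state=4)

    schemes = {
        "decision tree": DecisionTreeClassifier(random_state=4),
        "kNN": KNeighborsClassifier(),
        "SVM": SVC(),
        "MLP": MLPClassifier(max_iter=2000, random_state=4),
        "naive Bayes": GaussianNB(),
        "MI selection + SVM": make_pipeline(SelectKBest(mutual_info_classif, k=20), SVC()),
    }
    for name, clf in schemes.items():
        acc = cross_val_score(clf, X, y, cv=10).mean()   # 10-fold CV to limit overfitting
        print(f"{name}: mean accuracy = {acc:.3f}")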

13.
M. Naresh, S. Sikdar, J. Pal. Strain, 2023, 59(5): e12439
A vibration data-based machine learning architecture is designed for structural health monitoring (SHM) of a steel plane frame structure. This architecture uses a Bag-of-Features algorithm that extracts speeded-up robust features (SURF) from the time-frequency scalogram images of the registered vibration data. The discriminative image features are then quantised to a visual vocabulary using K-means clustering. Finally, a support vector machine (SVM) is trained to distinguish the undamaged and multiple damage cases of the frame structure based on the discriminative features. The potential of the machine learning architecture is tested on an unseen dataset that was not used in training, as well as on datasets from entirely new damage cases close to the existing (i.e., trained) damage classes. The results are then compared with those obtained using three other combinations of features and learning algorithms: (i) the histogram of oriented gradients (HOG) feature with SVM, (ii) the SURF feature with k-nearest neighbours (KNN) and (iii) the HOG feature with KNN. In order to examine the robustness of the approach, the study is further extended by considering environmental variabilities along with the localisation and quantification of damage. The experimental results show that the machine learning architecture can effectively classify the undamaged and different joint damage classes with high testing accuracy, which indicates its SHM potential for such frame structures.
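The bag-of-features step can be sketched as follows: local descriptors from each image are quantised against a K-means visual vocabulary, and the resulting word histograms are classified with an SVM. Random 64-dimensional arrays stand in for SURF descriptors of the scalogram images, and all sizes and settings are assumptions rather than values from the study.

    # Hedged sketch of a bag-of-features pipeline: local descriptors are quantised to a
    # K-means visual vocabulary, and per-image histograms are classified with an SVM.
    # Random arrays stand in for SURF descriptors of time-frequency scalogram images.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(5)
    n_images, n_classes, vocab_size = 120, 3, 50
    # Each "image" contributes a variable number of 64-D descriptors (SURF is 64-D).
    descriptors = [rng.normal(loc=c, size=(rng.integers(20, 40), 64))
                   for c in range(n_classes) for _ in range(n_images // n_classes)]
    labels = np.repeat(np.arange(n_classes), n_images // n_classes)

    vocab = KMeans(n_clusters=vocab_size, n_init=10, random_state=5)
    vocab.fit(np.vstack(descriptors))                       # build the visual vocabulary

    def bof_histogram(desc):
        words = vocab.predict(desc)                         # assign descriptors to visual words
        hist = np.bincount(words, minlength=vocab_size).astype(float)
        return hist / hist.sum()

    X = np.array([bof_histogram(d) for d in descriptors])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=5,
                                              stratify=labels)
    clf = SVC(kernel="linear").fit(X_tr, y_tr)
    print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))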

14.
Background While engineering instructional materials and practice problems for pre‐college students are often presented in the context of real‐life situations, college‐level texts are typically written in abstract form. Purpose (Hypothesis) The goal of this study was to jointly examine the impact of contextualizing engineering instruction and varying the number of practice opportunities on pre‐college students' learning and learning perceptions. Design/Method Using a 3 × 2 factorial design, students were randomly assigned to learn about electrical circuit analysis with an instructional program that represented problems in abstract, contextualized, or both forms, either with two practice problems or four practice problems. The abstract problems were devoid of any real‐life context and represented with standard abstract electrical circuit diagrams. The contextualized problems were anchored around real‐life scenarios and represented with life‐like images. The combined contextualized‐abstract condition added abstract circuit diagrams to the contextualized representation. To measure learning, students were given a problem‐solving near‐transfer post‐test. Learning perceptions were measured using a program‐rating survey where students had to rate the instructional program's diagrams, helpfulness, and difficulty. Results Students in the combined contextualized‐abstract condition scored higher on the post‐test, produced better problem representations, and rated the program's diagrams and helpfulness higher than their counterparts. Students who were given two practice problems gave higher program diagram and helpfulness ratings than those given four practice problems. Conclusions These findings suggest that pre‐college engineering instruction should consider anchoring learning in real‐life contexts and providing students with abstract problem representations that can be transferred to a variety of problems.

15.
Analysing historical patterns of artificial intelligence (AI) adoption can inform decisions about AI capability uplift, but research to date has provided a limited view of AI adoption across different fields of research. In this study, we examine the worldwide adoption of AI technology within 333 fields of research during 1960–2021. We do this by using bibliometric analysis of 137 million peer-reviewed publications captured in The Lens database. We define AI using a list of 214 phrases developed by expert working groups at the Organisation for Economic Co-operation and Development (OECD). We found that 3.1 million of the 137 million peer-reviewed research publications during the entire period were AI-related, with a surge in AI adoption across practically all research fields (physical science, natural science, life science, social science, and the arts and humanities) in recent years. The diffusion of AI beyond computer science was early, rapid and widespread. In 1960, 14% of the 333 research fields were related to AI (many in computer science), but this increased to cover over half of all research fields by 1972, over 80% by 1986 and over 98% in current times. We note that AI has historically experienced boom-and-bust cycles: the AI "springs" and "winters". We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.

16.
Additive manufacturing is becoming an increasingly important production technology, mainly driven by the ability to realise extremely complex structures using multiple materials but without assembly or excessive waste. Nevertheless, like any high-precision technology, additive manufacturing is sensitive to interferences during the manufacturing process. These interferences – such as vibrations – might lead to deviations in product quality, manifesting, for instance, as a reduced product lifetime or application issues. This study targets the issue of detecting such interferences during a manufacturing process in an exemplary experimental setup. Collecting data with current sensor technology directly on a 3D printer enables quantitative detection of interferences. The evaluation provides insights into the effectiveness of the realised application-oriented setup, the effort required for equipping a manufacturing system with sensors, and the effort for acquiring and processing the data. These insights are of practical utility for organisations dealing with additive manufacturing: the chosen approach for detecting interferences shows promising results, reaching interference detection rates of up to 100% depending on the applied data processing configuration.

17.
The authors propose an innovative Internet of Things (IoT) based E-commerce business model, Cloud Laundry, for mass-scale laundry services. The model utilises big data analytics, intelligent logistics management, and machine learning techniques. Using GPS and real-time updates of big data, it calculates the best transportation path and updates and re-routes the logistics terminals quickly and simultaneously. Cloud Laundry intelligently and dynamically provides the best laundry solutions based on the current state spaces of the laundry terminals and the user's specifications, and thus offers local hotel customers convenient, efficient, and transparent laundry services. Taking advantage of the rapid development of the big data industry, user interest modelling, and information security and privacy considerations, Cloud Laundry uses smartphone terminal control and big data models to meet customers' security needs. Unlike the traditional laundry industry, cloud laundry companies have higher capital turnover, more liquidity, and stronger profitability. Therefore, this new generation of smart laundry business model could be of interest not only to academic researchers but also to E-commerce entrepreneurs.

18.
In this study, abnormalities in medical images are analysed using three classifiers, and the results are compared. Breast cancer remains a major public health problem among women worldwide. Recently, many algorithms have been developed for the investigation of breast cancer diagnosis through medical imaging. A computer-aided microcalcification detection method is proposed to categorise the nature of breast cancer as either benign or malignant from input mammogram images. The standard mammogram image corpus, the Mammogram Image Analysis Society database, is utilised, and feature extraction is performed using five different wavelet families at level 4 and level 6 decomposition. The work is accomplished with the firefly algorithm (FA), extreme learning machine (ELM) and least-squares-based non-linear regression (LSNLR) classifiers. The performance of the classifiers is compared using benchmark metrics such as the total error rate, specificity, sensitivity, area under the receiver operating characteristic curve, precision, F1 score and the Matthews correlation coefficient. To validate the classifier results, a kappa analysis is included to determine the agreement among the classifiers. The LSNLR classifier attains a 3% to 7% improvement in average accuracy compared with the average classification accuracy of the FA (86.75%) and ELM (90.836%) classifiers.
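A small sketch of the wavelet feature-extraction step is given below, using a level-4 decomposition of a stand-in signal and the relative sub-band energies as features; the FA, ELM and LSNLR classifiers themselves are not standard-library components and are not reproduced here, and the wavelet family and signal are assumptions.

    # Hedged sketch of wavelet-based feature extraction: level-4 decomposition of a
    # stand-in signal and sub-band energy features. The FA, ELM and LSNLR classifiers
    # used in the paper are not standard library components and are not reproduced.
    import numpy as np
    import pywt

    rng = np.random.default_rng(6)
    signal = rng.normal(size=1024)        # stand-in for a mammogram ROI intensity profile

    def wavelet_energy_features(x, wavelet="db4", level=4):
        """Relative energy of each sub-band of a level-`level` DWT."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        energies = np.array([np.sum(c**2) for c in coeffs])
        return energies / energies.sum()

    features = wavelet_energy_features(signal)
    print("sub-band energy features:", np.round(features, 4))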

19.
The SPLASH experiment was designed in 1985 by the CEA to simulate thermal fatigue due to short cooling shocks on steel specimens and is similar to the device reported by Marsh in Ref. [1]. The purpose of this paper is to discuss the mechanical and fatigue analysis of the experiment using results from FEM computations. The lifetime predictions are obtained using a modified dissipated-energy criterion with a maximal pressure term and agree with the experimental observations. The numerical analysis of the mechanical state shows an important evolution of the triaxiality ratio during the loading cycle. Further comparisons and discussions of the fatigue criteria are provided in the second part of the paper (Part II).

20.
The solution of instrumented indentation inverse problems by physically-based models remains a complex, unsolved challenge in metallurgy and materials science. In recent years, Machine Learning (ML) tools have emerged as a feasible and more efficient alternative for extracting complex microstructure-property correlations from instrumented indentation data in advanced materials. On this basis, the main objective of this review article is to summarize the extent to which different ML tools have recently been employed in the analysis of both numerical and experimental data obtained by instrumented indentation testing, using either spherical or sharp indenters, particularly nanoindentation. The impact that using ML could have on better understanding the microstructure-mechanical property-performance relationships of a wide range of materials tested at this length scale is also addressed. The analysis of the recent literature indicates that a combination of advanced nanomechanical/microstructural characterization with finite element simulation and different ML algorithms constitutes a powerful tool to bring ground-breaking innovation in materials science. These research tools can be employed not only for extracting the mechanical properties of both homogeneous and heterogeneous materials at multiple length scales, but could also assist in understanding how these properties change with compositional and microstructural modifications in service. Furthermore, they can be used for the design and synthesis of novel multi-phase materials.
