Similar Documents
20 similar documents retrieved (search time: 109 ms)
1.
Artificial neural network (ANN)-based methods have been extensively investigated for equipment health condition prediction. However, effective condition-based maintenance (CBM) optimization methods utilizing ANN prediction information are currently not available, due to two key challenges: (i) ANN prediction models typically give only a single remaining-life prediction value, and it is hard to quantify the uncertainty associated with the predicted value; (ii) simulation methods are generally used for evaluating the cost of CBM policies, whereas more accurate and efficient numerical methods, which are critical for performing CBM optimization, are not available. In this paper, we propose a CBM optimization approach based on ANN remaining-life prediction information, in which the above-mentioned key challenges are addressed. The CBM policy is defined by a failure probability threshold value. The remaining-life prediction uncertainty is estimated based on ANN lifetime prediction errors on the test set during the ANN training and testing processes. A numerical method is developed to evaluate the cost of the proposed CBM policy more accurately and efficiently. Optimization can be performed to find the optimal failure probability threshold value corresponding to the lowest maintenance cost. The effectiveness of the proposed CBM approach is demonstrated using two simulated degradation data sets and a real-world condition monitoring data set collected from pump bearings. The proposed approach is also compared with benchmark maintenance policies and is found to outperform them. The proposed CBM approach can also be adapted to utilize information obtained using other prognostics methods. Copyright © 2012 John Wiley & Sons, Ltd.
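For readers unfamiliar with this class of policy, the sketch below illustrates the general idea of a failure-probability-threshold CBM policy with a Monte Carlo cost evaluation; it is not the paper's numerical method. The Gaussian prediction-error model, the cost ratio, and the lifetime distribution are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

C_PREVENTIVE, C_FAILURE = 1.0, 5.0   # assumed preventive vs. failure replacement costs
SIGMA_ERR = 15.0                     # assumed std of the ANN remaining-life prediction error

def expected_cost_rate(threshold, n_runs=5_000, life_mean=200.0, life_std=30.0):
    """Monte Carlo estimate of long-run cost per unit time for one failure-probability threshold."""
    total_cost, total_time = 0.0, 0.0
    for _ in range(n_runs):
        true_life = max(1.0, rng.normal(life_mean, life_std))
        predicted_life = true_life + rng.normal(0.0, SIGMA_ERR)   # ANN point prediction with error
        # Replace at the earliest time where P(failure before t | prediction) reaches the
        # threshold, treating the prediction error as Gaussian.
        t_replace = max(1.0, norm.ppf(threshold, loc=predicted_life, scale=SIGMA_ERR))
        if t_replace < true_life:      # preventive replacement happens first
            total_cost += C_PREVENTIVE
            total_time += t_replace
        else:                          # the unit fails before the planned replacement
            total_cost += C_FAILURE
            total_time += true_life
    return total_cost / total_time

thresholds = np.linspace(0.01, 0.5, 25)
costs = [expected_cost_rate(th) for th in thresholds]
print(f"lowest-cost failure-probability threshold ≈ {thresholds[int(np.argmin(costs))]:.3f}")
```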

2.
This paper formulates a model to simultaneously optimize the redundancy and imperfect opportunistic maintenance of a multi-state weighted k-out-of-n system. Unlike existing approaches that consider binary or multi-state elements, our approach considers modular redundancy, in which each module/subsystem is composed of several multi-state components in series. The state of each component is assumed to degrade with use. Therefore, a new condition-based opportunistic maintenance approach using three different thresholds for the component health state is developed. The objective is to determine (1) the minimal-cost k-out-of-n system structure, (2) the optimal imperfect opportunistic maintenance strategy, (3) the optimal maintenance capacity, and (4) the optimal inspection interval, subject to an availability constraint. System availability is defined as the ability to satisfy consumer demand. Based on the three-phase approach, a simulation procedure is used to evaluate the expected multi-state system availability and life-cycle costs. A multi-seed Tabu search heuristic with a suitable neighborhood-generation mechanism is also proposed to solve the formulated problem. An application to the optimal design of a wind farm illustrates the proposed approach, and a sensitivity analysis discusses the influence of the different parameters of the simulation model. Copyright © 2017 John Wiley & Sons, Ltd.
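As a rough illustration of the availability definition used above (demand satisfaction by a weighted multi-state system), the following sketch estimates availability by simulation. The component state probabilities, capacities, and demand level are assumed values; the redundancy and maintenance optimization itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each component has three states (0 = failed, 1 = derated, 2 = full) with assumed probabilities
state_probs = np.array([0.05, 0.15, 0.80])
capacity = np.array([0.0, 0.5, 1.0])      # capacity contributed in each state
n_components, demand_k = 8, 5.0           # assumed system demand, in capacity units

def availability(n_samples=200_000):
    """Probability that the total delivered capacity meets the demand k."""
    states = rng.choice(3, size=(n_samples, n_components), p=state_probs)
    total_capacity = capacity[states].sum(axis=1)
    return float(np.mean(total_capacity >= demand_k))

print(f"estimated multi-state system availability ≈ {availability():.4f}")
```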

3.
A new reliability-based optimal maintenance scheduling method is presented that considers the effect of maintenance in reducing costs. An ordering list of element maintenance effects for various maintenance-interval types is constructed. By means of this ordering list, reliability-based optimal maintenance scheduling is then carried out for simple reliability structures and for composite reliability systems. The properties of the proposed method are discussed, including the evaluation of maintenance cost reduction, the simplification achieved by sacrificing system availability within an allowable margin, and the operation decisions based on the optimal maintenance schedule. The effectiveness of the proposed method is verified through simulations. Copyright © 2005 John Wiley & Sons, Ltd.

4.
Inverse analysis for structural damage identification often involves an optimization process that minimizes the discrepancies between computed and measured responses. The conventional single-objective optimization approach defines the objective function by combining multiple error terms into a single one, which imposes a weaker constraint on the identification problem. A multi-objective approach is proposed that minimizes multiple error terms simultaneously. Its non-domination-based convergence provides a stronger constraint, enabling robust identification of damage with a lower false-negative detection rate. Another merit of the proposed approach is that processing the Pareto-optimal solutions yields a quantified confidence in damage detection. Numerical examples that simulate static testing are provided to compare the proposed approach with the conventional single-objective formulation. Copyright © 2009 John Wiley & Sons, Ltd.
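The non-domination idea at the core of this formulation can be illustrated in a few lines of code; the two error terms and candidate damage patterns below are placeholders rather than the paper's static-testing examples.

```python
import numpy as np

rng = np.random.default_rng(2)
candidates = rng.random((200, 2))   # columns: two error terms for 200 candidate damage patterns

def pareto_front(errors):
    """Indices of non-dominated rows when both error terms are minimised."""
    keep = []
    for i, e in enumerate(errors):
        dominated = np.any(np.all(errors <= e, axis=1) & np.any(errors < e, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(candidates)
print(f"{len(front)} non-dominated candidates out of {len(candidates)}")
```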

5.
The theory of network reliability has been applied to many complicated network structures, such as computer and communication networks, piping systems, electricity networks, and traffic networks. The theory is used to evaluate the operational performance of networks that can be modeled by probabilistic graphs. Although evaluating network reliability is an NP-hard (non-deterministic polynomial-time hard) problem, numerous solutions have been proposed. However, most of them are based on sequential computing, which under-utilizes the benefits of multi-core processor architectures. This paper addresses this limitation by proposing an efficient strategy for calculating the two-terminal (terminal-pair) reliability of a binary-state network using parallel computing. Existing methods are analyzed; then an efficient method for calculating terminal-pair reliability based on logical-probabilistic calculus is proposed, and a parallel version of the proposed algorithm is developed. This is the first study to implement an algorithm for estimating terminal-pair reliability in parallel on multi-core processor architectures. The experimental results show that the proposed algorithm and its parallel version outperform an existing sequential algorithm in terms of execution time. Copyright © 2015 John Wiley & Sons, Ltd.
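To make the terminal-pair reliability notion concrete, the sketch below estimates it for a small binary-state network by Monte Carlo simulation, parallelised with Python's standard library. This is a generic baseline, not the logical-probabilistic algorithm proposed in the paper; the toy topology and edge reliabilities are assumptions.

```python
import multiprocessing as mp
import random
from collections import defaultdict

# Assumed toy network: edge -> probability that the edge works
EDGES = {("s", "a"): 0.9, ("s", "b"): 0.8, ("a", "b"): 0.7,
         ("a", "t"): 0.85, ("b", "t"): 0.9}

def connected(up_edges, src="s", dst="t"):
    """Depth-first search over the surviving (up) edges."""
    adj = defaultdict(list)
    for u, v in up_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {src}, [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def batch(args):
    """One worker's share of Monte Carlo samples."""
    seed, n = args
    rnd = random.Random(seed)
    return sum(connected([e for e, p in EDGES.items() if rnd.random() < p]) for _ in range(n))

if __name__ == "__main__":
    n_total, n_workers = 400_000, 4
    jobs = [(i, n_total // n_workers) for i in range(n_workers)]
    with mp.Pool(n_workers) as pool:
        hits = sum(pool.map(batch, jobs))
    print(f"estimated terminal-pair reliability ≈ {hits / n_total:.4f}")
```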

6.
Because various creative and engineering design criteria must be considered, the optimal design of an engineering system results in a highly constrained multi-objective optimization problem. Most numerical approaches to such optimal design force the problem into a single objective function by introducing additional parameters that are hard to justify, and then solve it with a single-objective optimization method. Because this process differs from how humans design, the resulting design often differs markedly from one produced by a human designer. This paper presents a novel numerical design approach that resembles the human design process. Like the human design process, the approach consists of two steps: (1) search for the solution space of the highly constrained multi-objective optimization problem and (2) derivation of a final design solution from that solution space. A multi-objective gradient-based method with Lagrangian multipliers (MOGM-LM) and a centre-of-gravity method (CoGM) are proposed as numerical methods for the respective steps. The proposed approach was first applied to problems with test functions whose exact solutions are known, and the results demonstrate that it can find robust solutions that conventional numerical design approaches cannot. The approach was then applied to two practical design problems. The successful designs in both examples indicate that the proposed approach can be used for various design problems involving both creative and engineering design criteria. Copyright © 2005 John Wiley & Sons, Ltd.

7.
In today's globally competitive business environment, design engineers are constantly striving to establish new and effective tools and techniques to ensure robust and reliable product design. Robust design (RD) and reliability-based design approaches have shown the potential to deal with variability over the life cycle of a product. This paper explores the possibility of combining both approaches into a single model and proposes a hybrid quality-loss-function-based multi-objective optimization model. The model is unique in that it uses a hybrid form of quality-loss-based objective function, defined in terms of desirable as well as undesirable deviations, to obtain efficient design points with minimum quality loss. The proposed approach attempts to optimize the product design by addressing quality loss, variability, and life-cycle issues simultaneously, combining the reliability-based and RD approaches into a single model with various customer aspirations. The application of the approach is demonstrated using a leaf spring design example. Copyright © 2009 John Wiley & Sons, Ltd.
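As a loose illustration of the quality-loss idea (not the paper's hybrid model), the sketch below minimises an asymmetric Taguchi-style quadratic loss, using different coefficients on the two sides of the target to mimic the distinction between desirable and undesirable deviations. The target, coefficients, and response function are placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

TARGET = 100.0         # target value of the performance characteristic (assumed)
K_UNDESIRABLE = 4.0    # loss coefficient for deviations in the harmful direction
K_DESIRABLE = 1.0      # smaller loss coefficient for deviations in the benign direction

def quality_loss(y):
    dev = y - TARGET
    k = K_UNDESIRABLE if dev < 0 else K_DESIRABLE   # assume falling short is the harmful side
    return k * dev ** 2

def response(x):
    """Placeholder response of a single design variable (stand-in for a leaf-spring property)."""
    return 120.0 - 30.0 * (x - 1.0) ** 2

result = minimize_scalar(lambda x: quality_loss(response(x)), bounds=(0.0, 2.0), method="bounded")
print(f"design variable ≈ {result.x:.3f}, quality loss ≈ {result.fun:.4f}")
```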

8.
9.
Many multi-axial fatigue limit criteria are formalized as a linear combination of a shear stress amplitude and a normal stress. To identify the shear stress amplitude, conventional definitions such as the minimum circumscribed circle (MCC) or minimum circumscribed ellipse (MCE) are in use. Despite computational improvements, deterministic algorithms implementing the MCC/MCE methods are exceptionally time-demanding when applied to the "coiled" random loading paths resulting from in-service multi-axial loadings, and they may also provide insufficiently robust and reliable results. It would therefore be preferable to characterize multi-axial random loadings by statistical re-formulations of the deterministic MCC/MCE methods. Following an early work of Pitoiset et al., this paper presents a statistical re-formulation of the MCE method. Numerical simulations are used to compare both statistical re-formulations with their deterministic counterparts. The observed generally good agreement, with somewhat better performance of the statistical approach, confirms the validity, reliability, and robustness of the proposed formulation.
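The deterministic MCC definition mentioned above can be illustrated directly: the sketch below finds the minimum circumscribed circle of a synthetic 2-D shear stress path by minimising the maximum distance to the path points. The loading path is an assumption, and the statistical re-formulation itself is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 2 * np.pi, 500)
# Assumed non-proportional shear stress path in a 2-D plane (MPa)
path = np.column_stack([100 * np.sin(t), 60 * np.sin(2 * t + 0.5)])

def max_dist(center):
    """Radius of the smallest circle centred at `center` that contains the whole path."""
    return np.max(np.linalg.norm(path - center, axis=1))

res = minimize(max_dist, x0=path.mean(axis=0), method="Nelder-Mead")
print(f"MCC centre ≈ {np.round(res.x, 2)}, shear stress amplitude ≈ {max_dist(res.x):.1f} MPa")
```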

10.
In practice, data often do not follow a specific probability distribution. As a result, a variety of distribution-free control charts have been developed to monitor changes in processes. An existing rank-based multivariate cumulative sum (CUSUM) procedure based on the antirank vector does not quickly detect large shifts in the process mean. In this paper, we develop an improved version of the existing rank-based multivariate CUSUM procedure to overcome this difficulty. Numerical experiments show that the proposed approach dramatically outperforms the existing rank-based multivariate CUSUM procedure in terms of the out-of-control average run length. In addition, the proposed approach resolves a critical problem of the original approach that occurs for simultaneous shifts whose components are all equal but nonzero. We believe that the proposed approach can be used for monitoring real data. Copyright © 2015 John Wiley & Sons, Ltd.
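For context, out-of-control ARL comparisons of this kind are usually obtained by simulation. The sketch below estimates the ARL of Crosier's multivariate CUSUM under an equal, nonzero shift in every component; it is a stand-in for illustration, not the rank-based antirank procedure or its proposed improvement, and k and h are assumed design constants.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_length(shift, dim=3, k=0.5, h=5.5, max_n=10_000):
    """Samples until Crosier's MCUSUM statistic exceeds h, under a constant mean shift."""
    s = np.zeros(dim)
    for n in range(1, max_n + 1):
        x = rng.standard_normal(dim) + shift
        c = np.linalg.norm(s + x)
        s = np.zeros(dim) if c <= k else (s + x) * (1 - k / c)
        if np.linalg.norm(s) > h:
            return n
    return max_n

shift = np.array([1.0, 1.0, 1.0])   # equal, nonzero shift in every component
arl = np.mean([run_length(shift) for _ in range(2_000)])
print(f"estimated out-of-control ARL ≈ {arl:.1f}")
```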

11.
A model-based scheme is proposed for monitoring multiple gamma-distributed variables. The procedure is based on the deviance residual, a likelihood ratio statistic for detecting a mean shift when the shape parameter is assumed unchanged and the input and output variables are related in a specified manner. We discuss the distribution of this statistic and the proposed monitoring scheme. An example involving the advance rate of a drill is used to illustrate the implementation of the deviance residual monitoring scheme. Finally, a simulation study compares the average run length (ARL) performance of the proposed method with that of the standard Shewhart control chart for individuals. Copyright © 2003 John Wiley & Sons, Ltd.
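A minimal sketch of the deviance-residual idea for gamma observations is shown below: the gamma unit deviance yields a residual that can be checked against individuals-chart limits. The fitted means, shape parameter, and shift size are placeholders, not the drill-rate example.

```python
import numpy as np

rng = np.random.default_rng(4)
shape = 4.0                                    # assumed (unchanged) gamma shape parameter
mu_fit = np.full(50, 10.0)                     # fitted means from the regression model (placeholder)
y = rng.gamma(shape, (mu_fit * 1.3) / shape)   # observations with an assumed 30% upward mean shift

def gamma_deviance_residual(y, mu):
    """Signed square root of the gamma unit deviance."""
    unit_dev = 2.0 * ((y - mu) / mu - np.log(y / mu))
    return np.sign(y - mu) * np.sqrt(unit_dev)

# Scale by sqrt(dispersion) = sqrt(1/shape) so the residual is roughly standard normal in control
r = gamma_deviance_residual(y, mu_fit) / np.sqrt(1.0 / shape)
print("samples beyond +/-3 individuals limits:", np.where(np.abs(r) > 3.0)[0])
```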

12.
13.
In nano-structures, the influence of surface effects on material properties is highly important because the surface-to-volume ratio at the nano-scale is much higher than at the macro-scale. In this paper, a novel temperature-dependent multi-scale model is presented, based on the modified boundary Cauchy-Born (MBCB) technique, to model surface, edge, and corner effects in nano-scale materials. The Lagrangian finite element formulation is incorporated into the heat transfer analysis to develop a thermo-mechanical finite element model. The temperature-related Cauchy-Born hypothesis is implemented using the Helmholtz free energy to evaluate the temperature effect at the atomistic level. The thermo-mechanical multi-scale model is applied to determine temperature-related characteristics at the nano-scale. The first and second derivatives of the free energy density are computed using the first Piola-Kirchhoff stress and the tangential stiffness tensor at the macro-scale. The concept of MBCB is introduced to capture surface, edge, and corner effects. The salient point of the MBCB model is the definition of the radial quadrature used at the surface, edge, and corner elements as an indicator of material behavior. The characteristics of the quadrature are derived by interpolating data from the atomic level lying in a circular support around the quadrature point, using a least-squares approach. Finally, numerical examples are modeled using the proposed computational algorithm, and the results are compared with a fully atomistic model to illustrate the performance of the MBCB multi-scale model in the thermo-mechanical analysis of metallic nano-scale devices. Copyright © 2013 John Wiley & Sons, Ltd.

14.
Co-simulation is a prominent method for solving multi-physics problems. Multi-physics simulations using a co-simulation approach have an intrinsic advantage: in contrast to the monolithic approach, they allow well-established and specialized simulation tools for different fields and signals to be combined and reused with minor adaptations. However, the partitioned treatment of the coupled system poses stability and accuracy challenges, which are especially important when several different subsystems form the co-simulation scenario. In this work, we propose a new co-simulation algorithm based on interface Jacobians. It allows for the stable and accurate solution of complex co-simulation scenarios involving several different subsystems. Furthermore, the Interface Jacobian-based Co-Simulation Algorithm is formulated such that it enables parallel execution of the participating subsystems, resulting in a highly efficient procedure. Because the co-simulation scenario is defined in residual form, the algorithm also handles algebraic loops. Copyright © 2014 John Wiley & Sons, Ltd.
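The residual-form coupling idea can be illustrated on two scalar "subsystems": the sketch below drives the interface residual to zero with Newton iterations, using a finite-difference approximation of the interface Jacobian. The subsystem equations are invented for illustration and do not represent the paper's co-simulation scenarios.

```python
import numpy as np

def subsystem_1(u2):
    """Interface output of the first 'subsystem' given the other's output (invented equation)."""
    return 0.5 * np.cos(u2) + 1.0

def subsystem_2(u1):
    """Interface output of the second 'subsystem' (invented equation)."""
    return 0.3 * u1 ** 2 - 0.5

def residual(u):
    """Residual form of the coupling: interface values must reproduce themselves."""
    u1, u2 = u
    return np.array([u1 - subsystem_1(u2), u2 - subsystem_2(u1)])

u = np.zeros(2)
for it in range(20):
    r = residual(u)
    if np.linalg.norm(r) < 1e-12:
        break
    eps, J = 1e-7, np.zeros((2, 2))
    for j in range(2):                       # finite-difference interface Jacobian, column by column
        du = np.zeros(2)
        du[j] = eps
        J[:, j] = (residual(u + du) - r) / eps
    u = u - np.linalg.solve(J, r)

print(f"converged interface values {np.round(u, 6)} after {it} Newton iterations")
```

Because both interface equations are enforced simultaneously through the residual, the algebraic loop between the two subsystems is resolved implicitly, which is the property the residual form is meant to convey.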

15.
Multi-scale problems are often solved by decomposing the problem domain into multiple subdomains, solving them independently using different levels of spatial and temporal refinement, and coupling the subdomain solutions back together to obtain the global solution. Most commonly, finite elements are used for spatial discretization, and finite difference time stepping is used for time integration. Given a finite element mesh for the global problem domain, the number of possible decompositions into subdomains and of possible choices for the associated time steps is exponentially large, and the computational costs associated with different decompositions can vary by orders of magnitude. Finding an optimal decomposition and associated time discretization that minimizes computational cost while maintaining accuracy is nontrivial. Existing mesh partitioning tools, such as METIS, overlook the constraints posed by multi-scale methods and lead to suboptimal partitions with a high performance penalty. We present a multi-level mesh partitioning approach that exploits domain-specific knowledge of multi-scale methods to produce nearly optimal mesh partitions and associated time steps automatically. Results show that, for multi-scale problems, our approach produces decompositions that outperform those produced by state-of-the-art partitioners like METIS, and even those manually constructed by domain experts. Copyright © 2017 John Wiley & Sons, Ltd.

16.
In this paper, we present a continuous-time Markov process-based model for evaluating time-dependent reliability indices of multi-state degraded systems, particularly for automotive subsystems and components subject to minimal repairs and negative repair effects. The minimal repair policy, which restores the system to an "as bad as old" functioning state (the state just before failure), is widely used for automotive system repair because of its low maintenance cost. The current study is distinguished from others in that negative repair effects, such as unpredictable human error during repair work and the effects of propagated failures, are considered in the model. Negative repair effects may transfer the system to a degraded operational state that is worse than before, due to an imperfect repair. In addition, the special condition that a system under repair may be transferred directly to a complete failure state is also considered. Using the continuous-time Markov process approach, we obtain general solutions for the time-dependent probabilities of each system state. We also provide expressions for several reliability measures, including availability, unavailability, reliability, mean lifetime, and mean time to first failure. An illustrative numerical example of the reliability assessment of an electric car battery system is provided. Finally, we use the proposed multi-state system model to model a vehicle sub-frame fatigue degradation process. The proposed model can be applied to many practical systems, especially systems designed with a finite service life.
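A minimal sketch of the underlying computation is shown below: given an assumed generator matrix for a four-state degraded system, the time-dependent state probabilities follow from the matrix exponential, and availability is the probability mass on the operational states. The states and transition rates are illustrative, not the battery-system data from the paper.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = as good as new, 1 = degraded, 2 = badly degraded, 3 = failed.
# Generator matrix (rates per year); each row sums to zero. All values are assumptions.
Q = np.array([
    [-0.60,  0.50,  0.08,  0.02],   # new unit degrades, occasionally fails directly
    [ 0.00, -0.45,  0.35,  0.10],   # degraded unit degrades further or fails
    [ 0.00,  0.20, -0.50,  0.30],   # imperfect repair may improve the state; failure can also follow
    [ 0.00,  0.00,  0.40, -0.40],   # repair from failure returns the unit to a degraded state
])
p0 = np.array([1.0, 0.0, 0.0, 0.0])   # start as good as new
operational = [0, 1, 2]               # states in which the required function can still be delivered

for t in (0.5, 1.0, 2.0, 5.0):
    p_t = p0 @ expm(Q * t)            # time-dependent state probabilities
    print(f"t = {t:>3} yr  probabilities = {np.round(p_t, 3)}  availability = {p_t[operational].sum():.3f}")
```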

17.
The multi-atlas patch-based label fusion (LF) method mainly focuses on measuring patch similarity, i.e., the comparison between the atlas patch and the target patch. To enhance LF performance, the distribution probability of the target tissue can be used during the LF process. Hence, we consider two LF schemes. In the first scheme, we keep the results of the interpolation so that the atlas labels take values between 0 and 1 instead of binary values in the label propagation; in doing so, each atlas can be treated as a probability atlas. In the second scheme, we introduce the distribution probability of the tissue to be segmented into the sparse patch-based LF process. Based on the tissue probability and the sparse patch-based representation, we propose three different LF methods, called LF-Method-1, LF-Method-2, and LF-Method-3. In addition, an automated method for estimating the distribution probability of the tissue is proposed. To evaluate the accuracy of the proposed LF methods, they were compared with the nonlocal patch-based LF method (Nonlocal-PBM), the sparse patch-based LF method (Sparse-PBM), majority voting, similarity and truth estimation for propagated segmentations, and hierarchical multi-atlas LF with multi-scale feature representation and label-specific patch partition (HMAS). Based on our experimental results and quantitative comparison, our methods are promising for magnetic resonance image segmentation. © 2017 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 27, 23-32, 2017
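As a point of reference, the majority-voting baseline mentioned above is simple enough to sketch in a few lines; the atlas count, image shape, and random propagated labels are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
n_atlases, shape = 7, (4, 4, 4)
# Propagated (already registered) binary label maps from each atlas -- synthetic placeholders
atlas_labels = rng.integers(0, 2, size=(n_atlases,) + shape)

def majority_vote(labels):
    """A voxel is labelled foreground when more than half of the atlases agree."""
    return (labels.mean(axis=0) > 0.5).astype(np.uint8)

fused = majority_vote(atlas_labels)
print("foreground voxels in the fused segmentation:", int(fused.sum()))
```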

18.
The solution of problems in computational fluid dynamics (CFD) represents a classical field for the application of advanced numerical methods, and many different approaches have been developed over the years to address CFD applications. Good examples are finite volumes, finite differences (FD), and finite elements (FE), but also newer approaches such as the lattice-Boltzmann (LB) method, smoothed particle hydrodynamics, or the particle finite element method. FD and LB methods on regular grids are known to be superior in terms of raw computing speed, but such regular discretizations represent an important limitation in dealing with complex geometries. Here, we concentrate on unstructured approaches, which are less common in the GPU world. We employ a nonstandard FE approach that leverages an optimized edge-based data structure, allowing a highly parallel implementation. This technique is applied to the convection-diffusion problem, which is often considered a first step towards CFD because of its similarity to the nonconservative form of the Navier-Stokes equations. To this end, an existing highly optimized parallel OpenMP solver is ported to graphics hardware based on the OpenCL platform, and the optimizations performed are discussed in detail. A number of benchmarks show that the GPU-accelerated OpenCL code consistently outperforms the OpenMP version. Copyright © 2011 John Wiley & Sons, Ltd.

19.
It is generally accepted that the additional hardening of materials can largely shorten the multi-axial fatigue life of engineering components. To consider the effects of additional hardening under multi-axial loading, this paper presents a new multi-axial low-cycle fatigue life prediction model based on the critical plane approach. In the new model, while the critical plane is used to calculate the principal equivalent strain, a second plane, the subcritical plane, is also defined to calculate a correction parameter accounting for the effects of additional hardening. The proposed fatigue damage parameter combines the material properties and the angle of the loading orientation with respect to the principal axis, and it can be used directly with the Coffin-Manson equation. Experimental verification and comparison with other traditional models show that the new model has satisfactory reliability and accuracy in multi-axial fatigue life prediction.
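Since the proposed damage parameter is used directly with the Coffin-Manson (strain-life) equation, the sketch below shows how a strain amplitude is converted into a fatigue life by inverting that equation numerically. The material constants are typical textbook values for a steel, not the paper's test data.

```python
import numpy as np
from scipy.optimize import brentq

E = 200e3          # Young's modulus, MPa
sigma_f = 900.0    # fatigue strength coefficient, MPa
b = -0.095         # fatigue strength exponent
eps_f = 0.35       # fatigue ductility coefficient
c = -0.55          # fatigue ductility exponent

def strain_amplitude(Nf):
    """Total strain amplitude predicted by the strain-life curve at life Nf (cycles)."""
    return (sigma_f / E) * (2 * Nf) ** b + eps_f * (2 * Nf) ** c

def life_from_strain(eps_a):
    """Invert the strain-life curve: find Nf such that strain_amplitude(Nf) = eps_a."""
    return brentq(lambda Nf: strain_amplitude(Nf) - eps_a, 1.0, 1e9)

for eps_a in (0.010, 0.005, 0.002):
    print(f"strain amplitude {eps_a:.3f}  ->  Nf ≈ {life_from_strain(eps_a):,.0f} cycles")
```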

20.
Atlas-based segmentation is a high-level segmentation technique that has become a standard paradigm for exploiting prior knowledge in image segmentation. Recent multi-atlas-based methods have provided highly accurate segmentations of different parts of the human body by propagating manual delineations from multiple atlases in a data set to a query subject and fusing them. The female pelvic region is known to be highly variable, which makes the segmentation task difficult. We propose an approach for the segmentation of magnetic resonance imaging (MRI) data called multi-atlas-based segmentation using online machine learning (OML). The proposed approach allows regions that may be affected by cervical cancer to be separated in female pelvic MRI. It is based on an online learning method for the construction of the atlas data set. Experiments demonstrate the higher accuracy of the suggested approach compared with a segmentation technique based on a fixed atlas data set and with a single-atlas-based segmentation technique.
