Similar Literature
 20 similar records found (search time: 15 ms)
1.
Many multi‐axial fatigue limit criteria are formalized as a linear combination of a shear stress amplitude and a normal stress. To identify the shear stress amplitude, appropriate conventional definitions, such as the minimum circumscribed circle (MCC) or minimum circumscribed ellipse (MCE) proposals, are in use. Despite computational improvements, deterministic algorithms implementing the MCC/MCE methods are exceptionally time‐demanding when applied to “coiled” random loading paths resulting from in‐service multi‐axial loadings, and they may also provide insufficiently robust and reliable results. It would then be preferable to characterize multi‐axial random loadings by statistical re‐formulations of the deterministic MCC/MCE methods. Following an early work of Pitoiset et al., this paper presents a statistical re‐formulation for the MCE method. Numerical simulations are used to compare both statistical re‐formulations with their deterministic counterparts. The observed generally good agreement, with somewhat better performance of the statistical approach, confirms the validity, reliability and robustness of the proposed formulation.
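The deterministic MCC convention mentioned above takes the shear stress amplitude as the radius of the smallest circle enclosing the stress path. As an illustrative sketch (not the paper's statistical re‐formulation), the Bădoiu–Clarkson core‐set iteration gives a simple approximation of that radius for a planar path:

```python
import numpy as np

def mcc_radius(points, iters=2000):
    """Approximate the minimum circumscribed circle (MCC) radius of a 2-D
    stress path via the Badoiu-Clarkson iteration: repeatedly move the
    candidate centre a shrinking step toward the farthest path point."""
    pts = np.asarray(points, dtype=float)
    c = pts[0].copy()
    for i in range(1, iters + 1):
        d = np.linalg.norm(pts - c, axis=1)
        far = pts[np.argmax(d)]
        c += (far - c) / (i + 1)  # step 1/(i+1): centre error shrinks as r/sqrt(i)
    return np.linalg.norm(pts - c, axis=1).max()

# A closed "coiled" path sampled on the unit circle: the MCC radius
# (the shear stress amplitude in the MCC convention) should approach 1.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
path = np.column_stack((np.cos(t), np.sin(t)))
tau_a = mcc_radius(path)
```

For highly coiled random paths, exact deterministic MCC solvers repeat this kind of geometric search at every candidate plane, which is where the computational cost the abstract mentions comes from.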

2.
A computational framework for scale‐bridging in multi‐scale simulations is presented. The framework enables seamless combination of at‐scale models into highly dynamic hierarchies to build a multi‐scale model. Its centerpiece is formulated as a standalone module capable of fully asynchronous operation. We assess its feasibility and performance for a two‐scale model applied to two challenging test problems from impact physics. We find that the computational cost associated with using the framework may, as expected, become substantial. However, the framework has the ability to effortlessly combine at‐scale models to render complex multi‐scale models. The main source of the computational inefficiency of the framework is poor load balancing of the lower‐scale model evaluation. We demonstrate that the load balancing can be efficiently addressed by recourse to conventional load‐balancing strategies. Copyright © 2016 John Wiley & Sons, Ltd.

3.
An algorithm is suggested to improve the efficiency of the multi‐level Newton method that is used to solve multi‐physics problems. It accounts for full coupling between the subsystems by using the direct differentiation method rather than error‐prone finite difference calculations, and retains the advantage of greater flexibility over the tightly coupled approaches. Performance of the algorithm is demonstrated by solving a fluid–structure interaction problem. Copyright © 2003 John Wiley & Sons, Ltd.
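The advantage of direct differentiation over finite differences can be seen on a scalar stand‐in for a coupled residual (a hypothetical example, not the paper's fluid–structure system). For R(u, p) = u³ − p = 0, the direct sensitivity du/dp = −(∂R/∂u)⁻¹ ∂R/∂p is exact at the converged state, while a finite difference inherits truncation error from the step size:

```python
def solve_u(p, tol=1e-12):
    # Newton solve of the residual R(u, p) = u**3 - p = 0
    u = max(p, 1.0)
    for _ in range(100):
        r = u**3 - p
        if abs(r) < tol:
            break
        u -= r / (3.0 * u**2)
    return u

p = 8.0
u = solve_u(p)                          # converges to u = 2
# Direct differentiation: du/dp = -(dR/du)^-1 * (dR/dp) = 1 / (3 u^2)
dudp_direct = 1.0 / (3.0 * u**2)
# Forward finite difference with a typical step: truncation error ~ h/2 * u''
h = 1e-3
dudp_fd = (solve_u(p + h) - u) / h
exact = 1.0 / 12.0                      # d/dp p**(1/3) at p = 8
err_direct = abs(dudp_direct - exact)
err_fd = abs(dudp_fd - exact)
```

In a multi‐level Newton setting the same idea applies blockwise: the subsystem Jacobians are differentiated analytically instead of being probed by perturbed re‐solves.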

4.
A case study on preventive maintenance (PM) of a multi‐equipment system is presented in this paper. Each piece of equipment in the system consists of many components/subsystems connected in series. Because of the series structure, opportunistic maintenance (OM) policies are more effective for the components of the equipment. A new OM policy based on the classification of opportunities has been proposed. Various OM policies have been evaluated using simulation modeling, and the new policy has been found to be more effective than the existing OM policies. The impact of this policy on the overall system has also been simulated. Copyright © 2000 John Wiley & Sons, Ltd.

5.
6.
The reliability of a multi‐attribute deteriorating production system is controlled using versatile identical inspection facilities. An attribute state is dichotomous (up, designating proper function, versus down). A product item is conforming if all the system attributes are up when it is produced. When a system attribute is detected as down, it is restored back to an up state. Inspection of an attribute can rely on observations of the system, recently produced items, or both. Inspection policy determines the inspection capacity, the frequency of inspecting each attribute and the inspection schedule. These decisions involve a tradeoff between the cost of inspectors and the loss associated with the proportion of non‐conforming items due to lack of adequate inspection. Three models are introduced, analyzed and solved. In the first model, inspection and restoration are perfect, a product attribute is up (down) when the system attribute is up (down), and restoration is immediate. The assumptions of perfect inspection and restoration are relaxed in the second model. The third model additionally relaxes the assumption of immediate restoration. An efficient heuristic solution scheme is provided for solving these models. Sensitivity of the solution to system parameters is studied. Numerical experiments provide some insights regarding the combined effect of imperfect production, inspection and restoration, in various conditions of inspection and restoration durations. Copyright © 2001 John Wiley & Sons Ltd.

7.
Global/multi‐modal optimization problems arise in many engineering applications. Owing to the existence of multiple minima, it is a challenge to solve the multi‐modal optimization problem and to identify the global minimum especially if efficiency is a concern. In this paper, variants of the multi‐start with clustering strategy are developed and studied for identifying multiple local minima in nonlinear global optimization problems. The study considers the sampling procedure, the use of Hessian information in forming clusters, the technique for cluster analysis and the local search procedure. Variations of multi‐start with clustering are applied to 15 multi‐modal problems. A comparative study focuses on the overall search effectiveness in terms of the number of local searches performed, local minima found and required function evaluations. The performance of these multi‐start clustering algorithms ranges from very efficient to very robust. Copyright © 2002 John Wiley & Sons, Ltd.
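The core idea of multi‐start with clustering is to skip local searches whose start points fall in the basin of an already discovered minimum. A minimal one‐dimensional sketch (illustrative only; the paper's variants use Hessian information and more elaborate cluster analysis) on a function with minima at ±1:

```python
import numpy as np

def f(x):              # two local minima, at x = -1 and x = +1
    return (x**2 - 1.0)**2

def grad(x):
    return 4.0 * x * (x**2 - 1.0)

def local_search(x, lr=0.05, steps=500):
    # plain gradient descent as the local search procedure
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def multistart_clustering(starts, radius=0.3):
    minima, searches = [], 0
    for x0 in starts:
        # clustering step: skip starts inside the cluster of a known minimum
        if any(abs(x0 - m) < radius for m in minima):
            continue
        searches += 1
        m = local_search(x0)
        if not any(abs(m - mm) < 1e-3 for mm in minima):
            minima.append(m)
    return sorted(minima), searches

rng = np.random.default_rng(0)
starts = rng.uniform(-2.0, 2.0, size=30)
minima, n_searches = multistart_clustering(starts)
```

The comparison metrics from the abstract map directly onto this sketch: `n_searches` counts local searches performed, `len(minima)` the local minima found.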

8.
A multi‐material topology optimization scheme is presented. The formulation includes an arbitrary number of phases with different mechanical properties. To ensure that the sum of the volume fractions is unity and to avoid negative phase fractions, an obstacle potential function, which introduces an infinite penalty for negative densities, is utilized. The problem is formulated for nonlinear deformations, and the objective of the optimization is the end displacement. The boundary value problems associated with the optimization problem and the equilibrium equation are solved using the finite element method. To illustrate the possibilities of the method, it is applied to a simple boundary value problem where optimal designs using multiple phases are considered. Copyright © 2015 John Wiley & Sons, Ltd.
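The constraints the obstacle potential enforces are that the phase fractions are non‐negative and sum to one, i.e. they lie on the unit simplex. As an illustrative alternative to the potential (not the paper's method), the standard sorting‐based Euclidean projection onto the simplex enforces the same feasible set:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x_i >= 0, sum_i x_i = 1},
    i.e. onto the set of admissible phase volume fractions."""
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]                 # sort descending
    cssv = np.cumsum(u) - 1.0
    ind = np.arange(1, len(v) + 1)
    rho = ind[u - cssv / ind > 0][-1]    # number of active (positive) entries
    theta = cssv[rho - 1] / rho          # common shift
    return np.maximum(v - theta, 0.0)

# Candidate phase fractions from an unconstrained optimizer update;
# the third entry has gone negative and the sum is not unity:
x = project_to_simplex(np.array([0.6, 0.5, -0.2]))
```

The projection zeroes the infeasible phase and redistributes mass so the fractions again sum to one, which is exactly the behavior the infinite penalty of the obstacle potential induces at the constraint boundary.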

9.
Fusion of multimodal imaging data supports medical experts with ample information for better disease diagnosis and further clinical investigations. Recently, sparse representation (SR)‐based fusion algorithms have been gaining importance for their high performance. Building a compact, discriminative dictionary with reduced computational effort is a major challenge for these algorithms. Addressing this key issue, we propose an adaptive dictionary learning approach for fusion of multimodal medical images. The proposed approach consists of three steps. First, zero‐information patches of the source images are discarded by variance computation. Second, the structural information of the remaining image patches is evaluated using modified spatial frequency (MSF). Finally, a selection rule is employed to separate the useful informative patches of the source images for dictionary learning. At the fusion step, the batch‐OMP algorithm is utilized to estimate the sparse coefficients. A novel fusion rule, which measures the activity level in both the spatial domain and the transform domain, is adopted to reconstruct the fused image from the sparse vectors and the trained dictionary. Experimental results on various medical image pairs and clinical data sets reveal that the proposed fusion algorithm gives better visual quality and competes with existing methodologies both visually and quantitatively.
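The first two selection steps can be sketched with plain NumPy. Note this uses the classical spatial frequency measure, not the modified spatial frequency (MSF) variant named in the abstract, whose exact definition is not given here:

```python
import numpy as np

def spatial_frequency(patch):
    """Classical spatial frequency of a patch: SF = sqrt(RF^2 + CF^2),
    with row/column frequencies from first-order pixel differences."""
    p = patch.astype(float)
    rf = np.sqrt(np.mean(np.diff(p, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(p, axis=0) ** 2))   # column frequency
    return np.hypot(rf, cf)

def select_informative_patches(patches, var_tol=1e-8):
    """Discard (near) zero-variance patches, then rank the rest by
    spatial frequency so the most structured patches train the dictionary."""
    kept = [p for p in patches if p.var() > var_tol]
    return sorted(kept, key=spatial_frequency, reverse=True)

flat = np.zeros((8, 8))                           # uninformative: constant
edge = np.zeros((8, 8)); edge[:, 4:] = 1.0        # strong vertical edge
weak = np.full((8, 8), 0.5) + 0.01 * np.eye(8)    # faint diagonal texture
ranked = select_informative_patches([flat, edge, weak])
```

The constant patch is dropped by the variance test, and the high‐contrast edge patch outranks the faint texture, mirroring the abstract's goal of training the dictionary only on structurally informative patches.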

10.
Multi‐material Eulerian methods were originally developed to solve problems in hypervelocity impact. They have proven to be useful for many other problems involving high‐strain rates such as the dynamic compaction of a powder and high‐speed machining. An implicit formulation has been developed to extend the range of applicability to quasi‐static problems such as hot isostatic pressing (HIP) and other material processing operations. Copyright © 2000 John Wiley & Sons, Ltd.

11.
Software reliability growth models, which are based on nonhomogeneous Poisson processes, are widely adopted tools for describing the stochastic failure behavior and measuring the reliability growth of software systems. Faults in the systems, which eventually cause the failures, are usually connected with each other in complicated ways. Considering a group of networked faults, we propose a new model to examine the reliability of software systems and assess the model's performance on real‐world data sets. Our numerical studies show that the new model, capturing networking effects among faults, fits the failure data well. We also formally study the optimal software release policy using multi‐attribute utility theory (MAUT), considering both the reliability attribute and the cost attribute. We find that, if the networking effects among different layers of faults were ignored by the software testing team, the utility‐maximizing time to release the software package to the market would be much later. A sensitivity analysis is also provided.
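The networked‐fault model itself is not specified in the abstract; as a baseline illustration of the NHPP family it extends, the classic Goel–Okumoto model has mean value function m(t) = a(1 − e^(−bt)), and the probability of no failure in (t, t + x] is exp(−(m(t + x) − m(t))). Parameter values below are arbitrary:

```python
import math

def mean_value(t, a=100.0, b=0.1):
    """Goel-Okumoto NHPP mean value function: expected cumulative failures
    by test time t, with a the total fault content and b the detection rate."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a=100.0, b=0.1):
    """Probability of no failure in (t, t + x] for an NHPP:
    R(x | t) = exp(-(m(t + x) - m(t)))."""
    return math.exp(-(mean_value(t + x, a, b) - mean_value(t, a, b)))

m30 = mean_value(30.0)      # expected faults detected after 30 time units
r = reliability(1.0, 30.0)  # chance the next time unit is failure-free
```

Release policies like the one studied in the paper trade this reliability attribute against testing cost: each additional unit of testing raises R(x | t) but delays release.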

12.
Multi‐scale problems are often solved by decomposing the problem domain into multiple subdomains, solving them independently using different levels of spatial and temporal refinement, and coupling the subdomain solutions back to obtain the global solution. Most commonly, finite elements are used for spatial discretization, and finite difference time stepping is used for time integration. Given a finite element mesh for the global problem domain, the number of possible decompositions into subdomains and the possible choices for associated time steps is exponentially large, and the computational costs associated with different decompositions can vary by orders of magnitude. The problem of finding an optimal decomposition and the associated time discretization that minimizes computational costs while maintaining accuracy is nontrivial. Existing mesh partitioning tools, such as METIS, overlook the constraints posed by multi‐scale methods and lead to suboptimal partitions with a high performance penalty. We present a multi‐level mesh partitioning approach that exploits domain‐specific knowledge of multi‐scale methods to produce nearly optimal mesh partitions and associated time steps automatically. Results show that for multi‐scale problems, our approach produces decompositions that outperform those produced by state‐of‐the‐art partitioners like METIS and even those that are manually constructed by domain experts. Copyright © 2017 John Wiley & Sons, Ltd.

13.
It is generally accepted that the additional hardening of materials can significantly shorten the multi‐axial fatigue life of engineering components. To consider the effects of additional hardening under multi‐axial loading, this paper proposes a new multi‐axial low‐cycle fatigue life prediction model based on the critical plane approach. In the new model, while the critical plane is adopted to calculate the principal equivalent strain, a new plane, the subcritical plane, is also defined to calculate a correction parameter accounting for the effects of additional hardening. The proposed fatigue damage parameter of the new model combines the material properties and the angle of the loading orientation with respect to the principal axis, and can be used directly with the Coffin‐Manson equation. Experimental verification and comparison with other traditional models show that the new model has satisfactory reliability and accuracy in multi‐axial fatigue life prediction.

14.
We develop an asynchronous time integration and coupling method with domain decomposition for linear and non‐linear problems in mechanics. To ensure stability in the time integration and in coupling between domains, we use variational integrators with local Lagrange multipliers to enforce continuity at the domain interfaces. The asynchronous integrator lets each domain step with its own time step, using a smaller time step where required by stability and accuracy constraints and a larger time step where allowed. We show that in practice the time step is limited by accuracy requirements rather than by stability requirements. Copyright © 2008 John Wiley & Sons, Ltd.

15.
Problems of the form Z(σ)u(σ) = f(σ), where Z is a given matrix, f is a given vector, and σ is a circular frequency or circular frequency‐related parameter, arise in many applications including computational structural and fluid dynamics, and computational acoustics and electromagnetics. The straightforward solution of such problems for fine increments of σ is computationally prohibitive, particularly when Z is a large‐scale matrix. This paper discusses an alternative solution approach based on the efficient computation of u and its successive derivatives with respect to σ at a few sample values of this parameter, and the reconstruction of the solution u(σ) in the frequency band of interest using multi‐point Padé approximants. This computational methodology is illustrated with applications from structural dynamics and underwater acoustic scattering. In each case, it is shown to reduce the CPU time required by the straightforward approach to frequency sweep computations by two orders of magnitude. Copyright © 2006 John Wiley & Sons, Ltd.
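Why a Padé approximant beats a truncated Taylor series for frequency sweeps can be shown on a scalar stand‐in for Z(σ)u(σ) = f (a hypothetical example, not the paper's multi‐point scheme): take Z = 1 − σ and f = 1, so u(σ) = 1/(1 − σ). The derivatives at σ = 0 give Taylor coefficients (1, 1, 1), and the single‐point [1/1] Padé approximant built from them recovers the pole that the polynomial misses:

```python
def pade11(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1*s) / (1 + b1*s) matching the
    Taylor coefficients of u(sigma) = c0 + c1*s + c2*s**2 + ..."""
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return lambda s: (a0 + a1 * s) / (1.0 + b1 * s)

# u(sigma) = 1 / (1 - sigma); Taylor coefficients at sigma = 0 are (1, 1, 1)
u_pade = pade11(1.0, 1.0, 1.0)
sigma = 0.9
exact = 1.0 / (1.0 - sigma)
taylor = 1.0 + sigma + sigma**2   # truncated series: poor far from the sample
approx = u_pade(sigma)            # rational form captures the pole structure
```

This is why rational reconstruction from a few sample frequencies can cover a whole band: resonances of Z(σ) appear as poles of u(σ), which rational approximants represent naturally and polynomials cannot.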

16.
In this globally competitive business environment, design engineers are constantly striving to establish new and effective tools and techniques to ensure a robust and reliable product design. Robust design (RD) and reliability‐based design approaches have shown the potential to deal with variability in the life cycle of a product. This paper explores the possibilities of combining both approaches into a single model and proposes a hybrid quality loss function‐based multi‐objective optimization model. The model is unique because it uses a hybrid form of quality loss‐based objective function that is defined in terms of desirable as well as undesirable deviations to obtain efficient design points with minimum quality loss. The proposed approach attempts to optimize the product design by addressing quality loss, variability, and life‐cycle issues simultaneously by combining both reliability‐based and RD approaches into a single model with various customer aspirations. The application of the approach is demonstrated using a leaf spring design example. Copyright © 2009 John Wiley & Sons, Ltd.
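The hybrid loss function in the paper distinguishes desirable from undesirable deviations; its exact form is not given in the abstract. As background, a sketch of the classic Taguchi nominal‐the‐best quadratic loss together with a hypothetical asymmetric variant that weights the undesirable direction more heavily:

```python
def taguchi_loss(y, target, k=1.0):
    """Classic nominal-the-best quadratic quality loss: L = k * (y - m)**2."""
    return k * (y - target) ** 2

def asymmetric_loss(y, target, k_under=1.0, k_over=4.0):
    """Hypothetical asymmetric variant: deviations in the undesirable
    direction (here, overshoot) are penalized more heavily than
    deviations in the desirable direction."""
    d = y - target
    return (k_over if d > 0 else k_under) * d * d

loss_sym = taguchi_loss(10.5, 10.0)        # symmetric: 0.25 either side
loss_over = asymmetric_loss(10.5, 10.0)    # overshoot costs 4x as much
loss_under = asymmetric_loss(9.5, 10.0)    # undershoot keeps the base weight
```

In a multi‐objective setting such as the one described, losses of this kind are evaluated per quality characteristic and traded off against reliability constraints.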

17.
A sub‐cycling algorithm presented by Belytschko and Mullen (Int. J. Numer. Meth. Engng 1978; 12 (10):1575–1586) has been successfully applied in the finite element method. However, the problem of how to apply sub‐cycling to flexible multi‐body dynamics (FMD) still lacks investigation. This paper presents a Newmark‐based sub‐cycling scheme, which is suitable for solving condensed FMD models. Common‐step update formulae and sub‐step update formulae for the sub‐cycling are established based on the original Newmark integration algorithm. Stability of the procedure is validated by means of energy balance checking during the integration process. Numerical examples indicate that the sub‐cycling is able to enhance computational efficiency without significant loss of accuracy. Copyright © 2007 John Wiley & Sons, Ltd.
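The essence of sub‐cycling is that a stiff component takes several small sub‐steps inside each common step, so the soft components need not shrink their step to the stiffest stability limit. A minimal sketch with the explicit central‐difference scheme on two uncoupled oscillators (illustrative only; the paper's scheme is Newmark‐based and the FMD components are coupled through interface updates at common steps):

```python
import math

def central_difference(w, h, steps, x0=1.0):
    """Explicit central-difference integration of x'' = -w**2 * x with
    x(0) = x0, v(0) = 0; conditionally stable for w * h < 2."""
    x_prev = x0
    x = x0 - 0.5 * (h * w) ** 2 * x0   # Taylor start for the first step
    for _ in range(steps - 1):
        x, x_prev = 2.0 * x - x_prev - (h * w) ** 2 * x, x
    return x

T, dt, m = 1.0, 0.05, 16
w_soft, w_stiff = 1.0, 50.0
n = int(T / dt)
# Soft DOF: the large common step dt is stable (w_soft * dt = 0.05 < 2).
x_soft = central_difference(w_soft, dt, n)
# Stiff DOF at the common step: w_stiff * dt = 2.5 > 2, so it blows up.
x_unstable = central_difference(w_stiff, dt, n)
# Sub-cycled stiff DOF: m sub-steps of dt/m restore stability
# (w_stiff * dt/m = 0.15625 < 2) without shrinking the soft DOF's step.
x_stiff = central_difference(w_stiff, dt / m, n * m)
```

Both integrated displacements should track cos(wT), while the non‐sub‐cycled stiff run diverges, which is exactly the efficiency/stability trade the abstract describes.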

18.
Silicon wafers are commonly used materials in the semiconductor manufacturing industry. Their geometric quality directly affects the production cost and yield. Therefore, improvement in the quality of wafers is critical for meeting the current competitive market needs. Conventional summary metrics such as total thickness variation, bow and warp can neither fully reflect the local variability within each wafer nor provide useful insight for root cause diagnosis and quality improvement. The advancement of sensing technology enables two-dimensional (2D) data mapping to characterise the geometric shapes of wafers, which provides more information than summary metrics. The objective of this research is to develop a statistical model to characterise the thickness variation of wafers based on 2D data maps. Specifically, the thickness variation of wafers is decomposed into macro-scale and micro-scale variations, which are modelled as a cubic curve and a first-order intrinsic Gaussian Markov random field, respectively. The models can successfully capture both the macro-scale mean trend and the micro-scale local variation, with important engineering implications for process monitoring, fault diagnosis and run-to-run control. A practical case study from a wafer manufacturing process is performed to show the effectiveness of the proposed methodology.
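The macro/micro decomposition can be sketched on a synthetic one‐dimensional thickness profile: fit the cubic mean trend by least squares and treat the residual as the micro‐scale variation. (Illustrative only: the paper models the micro‐scale term as a first‐order intrinsic Gaussian Markov random field on the full 2D map, not as i.i.d. residuals.)

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic thickness profile across a wafer diameter: a cubic macro-scale
# trend plus small-amplitude micro-scale local variation.
x = np.linspace(-1.0, 1.0, 200)
macro_true = 0.5 - 0.1 * x + 0.05 * x**2 + 0.2 * x**3
thickness = macro_true + 0.01 * rng.standard_normal(x.size)

# Macro-scale component: least-squares cubic fit of the mean trend.
coeffs = np.polyfit(x, thickness, deg=3)   # highest-degree coefficient first
macro_fit = np.polyval(coeffs, x)

# Micro-scale component: what remains after removing the trend.
micro = thickness - macro_fit
```

Separating the two scales is what makes the summary metrics criticised in the abstract recoverable and diagnosable: the trend coefficients capture bow/warp‐like shape, while the residual field exposes local variability.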

19.
When the individual PDFs of closely‐spaced random variables such as natural frequencies of a structure overlap, generation of sample sets by assuming the frequencies to be independent random variables can lead to incorrect sets of frequencies in the sense that the frequencies do not remain as ordered sets. Rejection of such disordered sample sets results in individual density functions that are significantly different from the distributions initially assumed for sampling each random variable. One way to overcome this constraint in the simulation of an ordered set of random variables is to consider them in an implicit manner using a joint PDF. In this paper, we present a formulation for a joint density function that is developed using fundamental probability approaches. The formulation ensures that the sampled random variables always remain as ordered sets and maintain the individual density functions for each variable. Application of the proposed formulation is illustrated for cases with not just two closely‐spaced variables but also for a case with multiple closely‐spaced variables such that the PDFs of more than two random variables overlap with each other. An expression is presented to determine the exact number of terms needed in the formulation. However, it is also illustrated that only two terms are sufficient in most applications even when the exact number of terms needed is very high. Copyright © 2011 John Wiley & Sons, Ltd.
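The problem the abstract opens with is easy to demonstrate numerically: sampling two overlapping frequencies independently and rejecting the disordered pairs visibly distorts the retained marginal. (This sketch shows only the problem; the paper's joint‐PDF formulation that resolves it is not reproduced here.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
# Two closely-spaced natural frequencies with overlapping marginal PDFs.
f1 = rng.normal(10.0, 0.5, n)
f2 = rng.normal(10.4, 0.5, n)

# Naive approach: treat them as independent and reject disordered pairs.
ordered = f1 < f2
acceptance = ordered.mean()        # well below 1: many pairs are discarded
f1_kept = f1[ordered]

# Rejection shifts the retained first-frequency marginal away from the
# N(10, 0.5) distribution that was assumed when sampling it.
bias = 10.0 - f1_kept.mean()
```

The retained `f1` samples are biased low (and `f2` high) because ordering preferentially keeps pairs in which `f1` drew a small value, which is exactly the marginal distortion the joint‐PDF formulation is designed to avoid.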

20.
Computational aspects of a recently developed gradient elasticity model are discussed in this paper. This model includes the (Aifantis) strain gradient term along with two higher‐order acceleration terms (micro‐inertia contributions). It has been demonstrated that the presence of these three gradient terms enables one to capture dispersive wave propagation with great accuracy. In this paper, the discretisation details of this model are thoroughly investigated, including both discretisation in time and in space. Firstly, the critical time step is derived that is relevant for conditionally stable time integrators. Secondly, recommendations on how to choose the numerical parameters, primarily the element size and time step, are given by comparing the dispersion behaviour of the original higher‐order continuum with that of the discretised medium. In so doing, the accuracy of the discretised model can be assessed a priori depending on the selected discretisation parameters for given length‐scales. A set of guidelines can therefore be established to select optimal discretisation parameters that balance computational efficiency and numerical accuracy. These guidelines are then verified numerically by examining the wave propagation in a one‐dimensional bar as well as in a two‐dimensional example. Copyright © 2016 John Wiley & Sons, Ltd.
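For orientation, the baseline critical time step for a conditionally stable explicit integrator on a classical (non‐gradient) 1D bar is the CFL bound Δt_crit = h/c with wave speed c = √(E/ρ); the strain gradient and micro‐inertia terms in the paper's model modify this bound, so the sketch below is only the classical reference point, with assumed steel‐like parameters:

```python
import math

def critical_time_step(element_size, youngs_modulus, density):
    """Classical-continuum CFL estimate for explicit central-difference
    time stepping of a 1-D bar: dt_crit = h / c, c = sqrt(E / rho).
    Gradient-elasticity terms alter this bound; this is the baseline."""
    c = math.sqrt(youngs_modulus / density)   # longitudinal wave speed
    return element_size / c

# Steel-like bar discretised with 1 mm elements (SI units assumed):
dt_crit = critical_time_step(1.0e-3, 210.0e9, 7800.0)
```

Estimates of this kind are what make the a priori accuracy assessment in the abstract possible: for a given length‐scale, the element size fixes the resolvable wavelengths and, through the stability bound, the admissible time step.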
