Similar Documents
20 similar documents were retrieved.
1.
Multivariate statistical process control with artificial contrasts
A multivariate control region can be viewed as a pattern that represents the normal operating conditions of a process. Reference data can then be generated and used to learn the difference between this region and random noise, so that multivariate statistical process control is converted into a supervised learning task. This can dramatically reshape the control region and opens the control problem to a rich collection of supervised learning tools, which provide generalization error estimates that can be used to specify error rates. The effectiveness of such an approach, now easily accomplished with modern computing resources, is demonstrated here. The examples use random forests and a regularized least squares classifier as the learners.
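A minimal sketch of the artificial-contrast idea follows, assuming scikit-learn's RandomForestClassifier as the learner, a synthetic bivariate in-control process, and an illustrative alarm threshold; none of the data, names, or settings come from the paper.

```python
# Sketch: label in-control reference data 0, uniform "artificial contrast" points 1,
# train a random forest, and monitor new observations via their predicted
# out-of-control probability. All values below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# In-control reference data: a correlated bivariate normal process.
n_ref = 500
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
reference = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_ref)

# Artificial contrasts: uniform noise over a box enclosing the reference data.
lo, hi = reference.min(axis=0) - 1.0, reference.max(axis=0) + 1.0
contrast = rng.uniform(lo, hi, size=(n_ref, 2))

X = np.vstack([reference, contrast])
y = np.concatenate([np.zeros(n_ref), np.ones(n_ref)])   # 1 = "outside the control region"

clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X, y)
print(f"OOB estimate of generalization accuracy: {clf.oob_score_:.3f}")

# Monitoring statistic: estimated probability of lying outside the control region.
new_obs = np.array([[0.1, 0.2],     # consistent with the in-control correlation
                    [2.0, -2.0]])   # violates the correlation structure
alarm_threshold = 0.5               # hypothetical; tune to a desired false-alarm rate
scores = clf.predict_proba(new_obs)[:, 1]
for obs, s in zip(new_obs, scores):
    print(obs, f"score={s:.2f}", "ALARM" if s > alarm_threshold else "in control")
```

The out-of-bag score printed above is one example of the generalization error estimate mentioned in the abstract; it can guide the choice of alarm threshold.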

2.
The ability to predict how far a drug will penetrate into the tumour microenvironment within its pharmacokinetic (PK) lifespan would provide valuable information about therapeutic response. As the PK profile is directly related to the route and schedule of drug administration, an in silico tool that can predict the drug administration schedule resulting in optimal drug delivery to tumours would streamline clinical trial design. This paper investigates the application of mathematical and computational modelling techniques to improve our understanding of the fundamental mechanisms underlying drug delivery, and compares the performance of a simple model with more complex approaches. Three models of drug transport are developed, all based on the same drug binding model and parametrized by bespoke in vitro experiments. Their predictions, compared for a 'tumour cord' geometry, are qualitatively and quantitatively similar. We assess the effect of varying the PK profile of the supplied drug, and the binding affinity of the drug to tumour cells, on the concentration of drug reaching cells and the accumulated exposure of cells to drug at arbitrary distances from a supplying blood vessel. This is a contribution towards developing a useful drug transport modelling tool for informing strategies for the treatment of tumour cells that are 'pharmacokinetically resistant' to chemotherapy.
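To make the structure of such a model concrete, here is a deliberately simplified one-dimensional sketch of drug diffusing from a vessel into tissue with reversible cellular binding, solved with an explicit finite-difference scheme. The geometry, parameter values, plasma PK profile, and binding law are assumptions for illustration only and are not the paper's calibrated models.

```python
import numpy as np

# Illustrative, assumed parameters (not from the paper).
D = 1e-6                    # free-drug diffusion coefficient (cm^2/s)
k_on, k_off = 1e-3, 1e-4    # binding / unbinding rates (1/s)
B_max = 1.0                 # cellular binding-site density (arbitrary units)
L, N = 0.02, 41             # 200-micron tissue depth, number of grid nodes
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D        # respects the explicit-scheme stability limit
T = 3600.0                  # one hour of simulated exposure (s)

C = np.zeros(N)             # free drug
B = np.zeros(N)             # bound drug
auc = np.zeros(N)           # accumulated exposure (area under the free-drug curve)
t = 0.0
while t < T:
    C[0] = np.exp(-t / 1800.0)                       # assumed decaying plasma PK profile at the vessel
    lap = np.zeros(N)
    lap[1:-1] = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
    lap[-1] = 2.0 * (C[-2] - C[-1]) / dx**2          # zero-flux condition at the outer boundary
    binding = k_on * C * (B_max - B) - k_off * B     # reversible binding to cells
    C = C + dt * (D * lap - binding)
    B = B + dt * binding
    auc += C * dt
    t += dt

for d_um, i in [(50, 10), (100, 20), (200, 40)]:
    print(f"{d_um:3d} um from vessel: free drug {C[i]:.3f}, exposure (AUC) {auc[i]:.1f}")
```

Varying the decay constant in the assumed PK profile or the binding rates reproduces, at sketch level, the kind of sensitivity study described in the abstract.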

3.
In this article a new algorithm for the optimization of multi-modal, nonlinear, black-box objective functions is introduced. It extends the recently introduced adaptive multi-modal optimization by incorporating surrogate modelling features similar to response surface methods. The resulting algorithm has reduced computational cost and is well suited for the optimization of expensive objective functions. It relies on an adaptive, multi-resolution mesh to obtain an initial estimate of the objective function surface. Local surrogate models are then constructed and used to generate additional trial points around the local minima discovered. The steps of mesh refinement and surrogate modelling continue until convergence is achieved. The algorithm produces progressively more accurate surrogate models, which can be used for post-optimization studies such as sensitivity and tolerance analyses with minimal computational effort. This article demonstrates the effectiveness of the algorithm using comparative optimization of several multi-modal objective functions, and shows an engineering application to the design of a power electronic converter.
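The core idea can be sketched in one dimension with a single refinement pass, assuming a simple test function and plain quadratic least-squares fits; the paper's adaptive multi-resolution mesh and convergence logic are not reproduced here.

```python
# Sketch: sample the objective on a coarse mesh, locate local minima, fit a local
# quadratic response surface around each, and propose its analytic minimizer as a
# new trial point (assumed, simplified variant of surrogate-assisted refinement).
import numpy as np

def objective(x):                         # multi-modal test function (assumed)
    return np.sin(3.0 * x) + 0.1 * x**2

mesh = np.linspace(-4.0, 4.0, 33)         # coarse uniform mesh
f = objective(mesh)

# Interior points lower than both neighbours are local-minimum candidates.
is_min = (f[1:-1] < f[:-2]) & (f[1:-1] < f[2:])
candidates = np.where(is_min)[0] + 1

trial_points = []
for i in candidates:
    xs, fs = mesh[i-1:i+2], f[i-1:i+2]
    a, b, c = np.polyfit(xs, fs, 2)       # local quadratic surrogate through 3 points
    if a > 0:                             # convex fit -> analytic surrogate minimizer
        trial_points.append(-b / (2.0 * a))

for xt in trial_points:
    print(f"trial point {xt: .4f}  objective {objective(xt): .4f}")
```

Keeping the fitted local surrogates is what makes cheap post-optimization sensitivity and tolerance studies possible in this kind of scheme.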

4.
We present a methodical procedure for topology optimization under uncertainty with multiresolution finite element (FE) models. We use our framework in a bifidelity setting where a coarse and a fine mesh, corresponding to low- and high-resolution models, are available. The inexpensive low-resolution model is used to explore the parameter space and approximate the parameterized high-resolution model and its sensitivity, where parameters are considered in both the structural load and the stiffness. We provide error bounds for bifidelity FE approximations and their sensitivities and conduct numerical studies to verify these theoretical estimates. We demonstrate our approach on benchmark compliance minimization problems, where we show a significant reduction in computational cost for expensive problems such as topology optimization under manufacturing variability, reliability-based topology optimization, and three-dimensional topology optimization, while generating designs almost identical to those obtained with a single-resolution mesh. We also compute the parametric von Mises stress for the generated designs via our bifidelity FE approximation and compare the results with standard Monte Carlo simulations. The implementation of our algorithm, which extends the well-known 88-line topology optimization code in MATLAB, is provided.

5.
A parametrized reduced-order modeling methodology for cracked two-dimensional solids is presented, where the parameters correspond to geometric properties of the crack, such as its location and size. The method follows the offline-online paradigm: in the offline, training phase, solutions are obtained for a set of parameter values corresponding to specific crack configurations, and a basis for a lower-dimensional solution space is created. In the online phase, this basis is used to obtain solutions for configurations that do not lie in the training set. The use of the same basis for different crack geometries is made possible by defining a reference configuration and employing mesh morphing to map the reference to different target configurations. To enable application to complex geometries, a mesh morphing technique based on inverse distance weighting is introduced, which increases computational efficiency and allows for special treatment of boundaries. Applications in linear elastic fracture mechanics are considered, with the extended finite element method being used to represent the discontinuous and asymptotic fields.
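A minimal sketch of inverse-distance-weighting mesh morphing is given below, with an assumed control-point setup on a toy square mesh; the paper's boundary treatment and efficiency improvements are not reproduced.

```python
# Sketch: each free node is displaced by a weighted average of prescribed
# control-node displacements, with weights proportional to 1/d^p.
import numpy as np

def idw_morph(free_nodes, control_nodes, control_disp, power=3.0, eps=1e-12):
    """Displace free_nodes (n,2) given control_nodes (m,2) and their displacements (m,2)."""
    d = np.linalg.norm(free_nodes[:, None, :] - control_nodes[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)            # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)     # normalize per free node
    return free_nodes + w @ control_disp

# Reference configuration: unit-square node cloud, one "crack-mouth" control point
# displaced, corners held fixed (all assumed for illustration).
grid = np.linspace(0.0, 1.0, 11)
free_nodes = np.array([[x, y] for x in grid for y in grid])
control_nodes = np.array([[0.5, 0.5], [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
control_disp  = np.array([[0.1, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]])

morphed = idw_morph(free_nodes, control_nodes, control_disp)
print("max nodal displacement:", np.abs(morphed - free_nodes).max())
```

Because the weight matrix depends only on the reference configuration, it can be computed once offline; mapping the reference mesh to a new crack configuration then reduces to a single matrix-vector product.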

6.
Efficiently joining materials with dissimilar mechanical and thermal properties is fundamental to the development of strong and lightweight load-bearing hybrid structures, particularly for aerospace applications. This paper presents a ply-interleaving technique for joining dissimilar composite materials. The load-carrying capacity of such a joint depends strongly on several design parameters, such as the distance between ply terminations, the spatial distribution of ply terminations, and the stiffness and coefficients of thermal expansion of the composites. The effects of these factors on the strength of a quasi-isotropic hybrid carbon/glass fibre composite are investigated using combined experimental, analytical and computational methods. Through fractographic analyses, significant insights are gained into the failure mechanism of the hybrid joints, which are then used to aid the development of predictive models using analytical and high-fidelity computational methods. To characterise the interaction between transverse matrix cracking and delamination, a continuum damage mechanics model and a cohesive zone model are employed. The predictions are found to correlate well with the experimental data. These modelling tools pave the way for optimising hybrid joint concepts, which will enable the structural integration of the dielectric windows required for multifunctional load-bearing antenna aircraft structures.
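As a pointer to what a cohesive zone model encodes, here is a sketch of a generic bilinear traction-separation law with assumed parameter values; it is not the paper's calibrated interface model.

```python
# Sketch of a bilinear cohesive law for delamination: linear elastic response up
# to the onset separation, linear softening to zero traction at the critical
# separation, with the area under the curve equal to the fracture toughness G_c.
import numpy as np

sigma_max = 60.0e6                 # interface strength (Pa), assumed
G_c = 500.0                        # fracture toughness (J/m^2), assumed
delta_0 = 1.0e-6                   # separation at damage onset (m), assumed
delta_f = 2.0 * G_c / sigma_max    # final separation from the triangle area G_c = 0.5*sigma_max*delta_f
K = sigma_max / delta_0            # initial penalty stiffness

def traction(delta):
    """Traction (Pa) for a given opening separation (m), monotonic loading."""
    if delta <= delta_0:
        return K * delta                       # undamaged, linear branch
    if delta < delta_f:
        d = (delta_f * (delta - delta_0)) / (delta * (delta_f - delta_0))  # damage variable in [0, 1]
        return (1.0 - d) * K * delta           # linear softening branch
    return 0.0                                 # fully debonded

for delta in np.linspace(0.0, delta_f, 6):
    print(f"delta = {delta:.2e} m  ->  traction = {traction(delta)/1e6:6.2f} MPa")
```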

7.
In this paper, we present an adaptive level set method for the motion of high-codimension objects (e.g., curves in three dimensions). The method uses only two (or a few fixed) levels of meshes. A uniform coarse mesh is defined over the whole computational domain, and any coarse mesh cell that contains the moving object is further divided into a uniform fine mesh. The coarse-to-fine ratio in the mesh refinement can be adjusted to achieve optimal efficiency. Refinement and coarsening (removing the fine mesh within a coarse grid cell) are performed dynamically during the evolution. In this adaptive method, the computation is localized mostly near the moving objects; thus, the computational cost is significantly reduced compared with using a uniform mesh of the same resolution over the whole domain. The level set equations can be solved on these uniform meshes of different levels directly using standard high-order numerical methods. The method is examined on numerical examples of moving curves and applied to dislocation dynamics simulations. This two-level adaptive method also provides a basis for using locally varying time stepping to further reduce the computational cost. Copyright © 2013 John Wiley & Sons, Ltd.
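A simplified two-dimensional sketch of the flag-and-refine step is shown below, assuming a circle as a stand-in for the moving object; the level-set evolution and dynamic coarsening described in the abstract are not reproduced.

```python
# Sketch: a coarse Cartesian grid covers the domain, and any coarse cell whose
# corner values of the level-set function change sign (i.e. the cell contains the
# interface) is subdivided into a uniform fine sub-grid.
import numpy as np

def phi(x, y):
    """Signed distance to a circle of radius 0.6 (stand-in for the moving object)."""
    return np.hypot(x, y) - 0.6

n_coarse, ratio = 8, 4                        # coarse cells per side, coarse-to-fine ratio
xs = np.linspace(-1.0, 1.0, n_coarse + 1)     # coarse cell edges

refined_cells = []
for i in range(n_coarse):
    for j in range(n_coarse):
        corners = phi(xs[[i, i, i + 1, i + 1]], xs[[j, j + 1, j, j + 1]])
        if corners.min() < 0.0 < corners.max():          # interface crosses this cell
            fine_x = np.linspace(xs[i], xs[i + 1], ratio + 1)
            fine_y = np.linspace(xs[j], xs[j + 1], ratio + 1)
            refined_cells.append((i, j, fine_x, fine_y))

n_fine = len(refined_cells) * ratio**2
print(f"{len(refined_cells)} of {n_coarse**2} coarse cells are refined, adding {n_fine} fine cells")
```

Only the flagged cells carry fine-mesh work, which is the source of the cost reduction claimed in the abstract.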

8.
This paper presents an adaptive refinement strategy, based on hierarchical element subdivision, dedicated to modelling elastoplastic materials in transient dynamics. At each time step, the refinement is automatic and starts with the calculation of the solution on a coarse mesh. An error indicator is then used to control the accuracy of the solution, and a finer localized mesh is created wherever the user-prescribed accuracy is not reached. A new calculation is performed on this new mesh using the non-linear 'Full Approximation Scheme' multigrid strategy. Applying the error indicator and the refinement strategy recursively, the optimal mesh is obtained; this mesh satisfies the error indicator over the whole structure. The multigrid strategy is used for two purposes: first, it optimizes the computational cost of the solution on the finest localized mesh; second, it ensures information transfer among the different hierarchical meshes. A standard time integration scheme is used and the mesh is reassessed at each time step. Copyright © 2010 John Wiley & Sons, Ltd.

9.
A class of preconditioners built around a coarse/fine mesh framework is presented. The proposed method involves the reconstruction of the stiffness equations using a coarse/fine mesh idealization, with relative degrees of freedom derived from the element shape functions. This approach leads naturally to effective preconditioners for iterative solvers that only require a factorization involving the coarse mesh variables. A further extension is the application of the proposed method to super-elements in conjunction with substructuring (domain decomposition) techniques. The derivation of the coarse/fine mesh discretization via transformation matrices allows a straightforward implementation of the proposed techniques (as well as multigrid-type procedures) within an existing finite element system.
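The flavour of such a preconditioner can be sketched on a one-dimensional Poisson model problem, assuming a generic two-level construction (coarse-grid correction plus Jacobi) rather than the paper's shape-function transformation matrices or super-element extension; only the small coarse matrix is factorized.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nc = 31                           # interior coarse nodes
n = 2 * nc + 1                    # interior fine nodes (every other fine node is a coarse node)
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * (n + 1) ** 2
b = np.ones(n)

# Prolongation by linear interpolation; restriction is its transpose.
P = sp.lil_matrix((n, nc))
for j in range(nc):
    P[2 * j + 1, j] = 1.0
    P[2 * j, j] += 0.5
    P[2 * j + 2, j] += 0.5
P = P.tocsc()
A_c = (P.T @ A @ P).tocsc()
solve_coarse = spla.factorized(A_c)        # factorize the small coarse matrix once
inv_diag = 1.0 / A.diagonal()

def apply_M(v):                            # M^{-1} v = coarse correction + Jacobi term
    return P @ solve_coarse(P.T @ v) + inv_diag * v

M = spla.LinearOperator((n, n), matvec=apply_M)

def cg_iterations(Mop):
    count = [0]
    x, info = spla.cg(A, b, M=Mop, callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return count[0]

print("CG iterations, no preconditioner :", cg_iterations(None))
print("CG iterations, two-level precond.:", cg_iterations(M))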

10.
A computational certification framework under limited experimental data is developed. In this approach, a high-fidelity model (HFM) is first calibrated to the limited experimental data. Subsequently, the HFM is employed to train a low-fidelity model (LFM). Finally, the calibrated LFM is utilized for component analysis. The rationale for utilizing the HFM in the initial stage stems from the fact that the constitutive laws of the individual microphases in the HFM are rather simple, so the number of material parameters that need to be identified is smaller than in the LFM. The added complexity of the material models in the LFM is necessary to compensate for the simplified kinematical assumptions made in the LFM and for the smearing of the discrete defect structure. The first-order computational homogenization model, which resolves microstructural details including the structure of defects, is selected as the HFM, whereas the reduced-order homogenization is selected as the LFM. Illustration, verification, and validation of the certification framework are conducted for a ceramic matrix composite material system composed of an 8-harness weave architecture. Blind validation is performed against experimental data to validate the proposed computational certification framework.

11.
Drug side-effects impose massive costs on society and account for nearly one third of drug failures in the drug discovery process. Early identification of potential side-effects is therefore vital to avoid risks and reduce costs. Existing computational methods employ few drug features and predict drug side-effects from either the drug side or the side-effect side separately. In this work, we predict drug side-effects by combining heterogeneous drug features and employing bipartite local models (BLMs), which fuse predictions from both the drug side and the side-effect side. Specifically, we first integrate drug chemical structures, drug-interacting proteins and drug-associated genes into a unified framework to measure a comprehensive similarity between drugs. Then, high-quality and balanced training samples are selected for individual drugs and individual side-effects using the designed balanced sample selection framework, based on the comprehensive drug similarities and the side-effect cosine similarities, respectively. Trained with the corresponding samples, the BLMs first predict the drugs associated with a given side-effect and then predict the side-effects for a given drug. This produces two independent predictions for each putative drug/side-effect association, which are further combined to give a definitive prediction. The performance of the proposed method was evaluated on side-effect prediction for 901 drugs from DrugBank. In particular, we performed 5-fold cross-validation experiments on the 742 characterized drugs and an independent test on the 159 uncharacterized drugs. The results show that side-effect prediction performance is significantly improved owing to the integration of information from the drug chemical, biological and genomic spaces, the proposed sample selection framework, and the implemented BLMs.
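The bipartite-local-model idea can be sketched on toy random data, with a simple similarity-weighted score standing in for the paper's trained learners and its sample selection framework; the DrugBank data and the comprehensive similarity measure are not reproduced.

```python
# Sketch: one prediction from the drug side, one from the side-effect side,
# combined for each putative drug/side-effect pair. Data below are random toys.
import numpy as np

rng = np.random.default_rng(1)
n_drugs, n_effects, n_feat = 30, 12, 20
drug_features = rng.random((n_drugs, n_feat))                  # stand-in for chemical/biological/genomic features
Y = (rng.random((n_drugs, n_effects)) < 0.2).astype(float)     # known drug / side-effect associations (toy)

def cosine_sim(M):
    norm = np.linalg.norm(M, axis=1, keepdims=True)
    norm = np.where(norm == 0.0, 1.0, norm)                    # guard empty profiles
    U = M / norm
    return U @ U.T

S_drug = cosine_sim(drug_features)      # drug similarity (here simply cosine on the toy features)
S_eff = cosine_sim(Y.T)                 # side-effect cosine similarity from association profiles

def blm_score(d, s):
    # Drug-side local model: which drugs are associated with side-effect s?
    w_d, y_d = np.delete(S_drug[d], d), np.delete(Y[:, s], d)
    score_drug = w_d @ y_d / (w_d.sum() + 1e-12)
    # Side-effect-side local model: which side-effects are associated with drug d?
    w_s, y_s = np.delete(S_eff[s], s), np.delete(Y[d, :], s)
    score_eff = w_s @ y_s / (w_s.sum() + 1e-12)
    return 0.5 * (score_drug + score_eff)       # combine the two independent predictions

print("score for drug 0 / side-effect 3:", round(blm_score(0, 3), 3))
```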

12.
Inter-phase momentum coupling for particle flows is usually achieved by means of direct numerical simulation (DNS) or the point source method (PSM). DNS requires the mesh size of the continuous phase to be much smaller than the size of the smallest particle in the system, whereas PSM requires the mesh size of the continuous phase to be much larger than the particle size. For applications where mesh sizes are similar to the particle sizes in the system, neither DNS nor PSM is suitable. In order to overcome this dependence of the mesh on particle size, a two-layer mesh method (TMM) is proposed. TMM uses a coarse mesh to track the movement of particle clouds and a fine mesh for the continuous phase, with mesh interpolation for information exchange between the coarse and fine meshes. Numerical tests of different interpolation methods show that a second-order conservative interpolation scheme yields the most accurate results. Numerical simulations of a fluidized bed show good agreement between the predictions of TMM with a second-order interpolation scheme, the experimental results, and predictions obtained with PSM. Copyright © 2015 John Wiley & Sons, Ltd.
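What "conservative" means for the interpolation step can be sketched in one dimension with overlap-weighted averaging; the paper's second-order scheme and the particle-fluid coupling itself are not reproduced, and the meshes and field below are assumed.

```python
# Sketch: each coarse-cell value is the overlap-weighted average of the fine-cell
# values it covers, so the integral ("mass") carried by the field is preserved.
import numpy as np

def conservative_fine_to_coarse(fine_edges, fine_vals, coarse_edges):
    coarse_vals = np.zeros(len(coarse_edges) - 1)
    for i in range(len(coarse_edges) - 1):
        cl, cr = coarse_edges[i], coarse_edges[i + 1]
        overlap = np.clip(np.minimum(fine_edges[1:], cr) - np.maximum(fine_edges[:-1], cl), 0.0, None)
        coarse_vals[i] = (overlap * fine_vals).sum() / (cr - cl)
    return coarse_vals

fine_edges = np.linspace(0.0, 1.0, 101)                  # 100 fine cells
fine_centres = 0.5 * (fine_edges[:-1] + fine_edges[1:])
fine_vals = np.exp(-((fine_centres - 0.4) / 0.1) ** 2)   # some smooth field
coarse_edges = np.linspace(0.0, 1.0, 13)                 # 12 coarse cells, non-nested

coarse_vals = conservative_fine_to_coarse(fine_edges, fine_vals, coarse_edges)
mass_fine = (np.diff(fine_edges) * fine_vals).sum()
mass_coarse = (np.diff(coarse_edges) * coarse_vals).sum()
print(f"integral on fine mesh {mass_fine:.6f}  on coarse mesh {mass_coarse:.6f}")
```

The two printed integrals match to round-off, which is the defining property of a conservative transfer; the first-order averaging shown here would be replaced by a higher-order reconstruction in a second-order scheme.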

13.
Fractures tend to propagate along paths of least resistance, and models based on homogeneous material properties may not reliably predict the true crack paths, as they cannot capture the nonlinearities and local damage induced by local inhomogeneities. This paper presents a stochastic numerical modelling framework for simulating fracturing in natural heterogeneous materials. Fracture propagation is modelled using Francfort and Marigo's variational theory, and randomness in the material properties is introduced through random field theory. A computational strategy based on a nonlinear dimensionality reduction framework is developed that maps the domain of spatially variable material properties to a low-dimensional space. This strategy allows us to predict the most probable fracture patterns leading to failure using an optimisation algorithm. The reliability and performance of the developed methodology are examined through simulation of experimental case studies and comparison of the predictions with measured data.
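To illustrate how a spatially variable property becomes a low-dimensional set of optimisation variables, here is a sketch using a plain truncated Karhunen-Loeve expansion with an assumed squared-exponential covariance; the paper's nonlinear reduction and fracture simulation are not reproduced.

```python
# Sketch: a handful of KL modes parameterize an entire heterogeneous property map.
import numpy as np

n = 200                                  # 1-D discretization of the domain
x = np.linspace(0.0, 1.0, n)
corr_len, sigma = 0.2, 0.25              # correlation length and std of log-stiffness (assumed)
C = sigma**2 * np.exp(-((x[:, None] - x[None, :]) / corr_len) ** 2)

# Truncated Karhunen-Loeve expansion: keep the modes carrying 95% of the variance.
eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.95)) + 1
print(f"{k} KL modes retain 95% of the variance (down from {n} grid values)")

# One realization of the heterogeneous stiffness field from k standard normals.
rng = np.random.default_rng(0)
xi = rng.standard_normal(k)              # the low-dimensional coordinates an optimiser would search over
log_E = eigvec[:, :k] @ (np.sqrt(eigval[:k]) * xi)
E = 1.0e9 * np.exp(log_E)                # lognormal stiffness field (Pa), assumed
print("stiffness range in this realization:", E.min(), "-", E.max())
```

Searching over the few coefficients `xi` instead of all grid values is what makes an optimisation for the most probable failure pattern tractable in this kind of framework.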

14.
In this paper, two algorithms are developed to determine pickup and delivery point locations for an automated guided vehicle (AGV) system. The first algorithm is applicable to general layout configurations and searches for solutions until a local optimum is reached by comparing the relative locations of pickup and delivery points. The second algorithm generates solutions with a minimal amount of computational time by exploiting the structural features of departmental layouts. Computational results are provided to assess the quality of the solutions from each algorithm.

15.
In this paper, we present an adaptive algorithm to construct response surface approximations of high-fidelity models using a hierarchy of lower-fidelity models. Our algorithm is based on multi-index stochastic collocation and automatically balances physical discretization error and response surface error to construct an approximation of the model outputs. This surrogate can be used for uncertainty quantification (UQ) and sensitivity analysis (SA) at a fraction of the cost of a purely high-fidelity approach. We demonstrate the effectiveness of our algorithm on a canonical test problem from the UQ literature and on a complex multiphysics model that simulates the performance of an integrated nozzle for an unmanned aerospace vehicle. We find that, when the input-output response is sufficiently smooth, our algorithm produces approximations that can be over two orders of magnitude more accurate than single-fidelity approximations for a fixed computational budget.
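The benefit of mixing fidelities can be illustrated with a much simpler additive-correction surrogate than the paper's multi-index stochastic collocation; the models, budget, and polynomial degree below are assumptions chosen only to show the mechanism.

```python
# Sketch: fit a smooth correction to the (high - low) fidelity difference using a
# few high-fidelity runs, then combine it with the cheap low-fidelity model, and
# compare against a single-fidelity surrogate built from the same HF budget.
import numpy as np

def f_hi(x):                      # "expensive" high-fidelity model (assumed)
    return np.sin(8.0 * x) + x**2
def f_lo(x):                      # cheap low-fidelity model: right trend, wrong details (assumed)
    return 0.9 * np.sin(8.0 * x) + x**2 + 0.05 * x

x_hi = np.linspace(0.0, 1.0, 5)   # only 5 affordable high-fidelity runs
delta = np.polynomial.chebyshev.Chebyshev.fit(x_hi, f_hi(x_hi) - f_lo(x_hi), deg=4)

def surrogate(x):                 # low-fidelity model + smooth correction
    return f_lo(x) + delta(x)

x_test = np.linspace(0.0, 1.0, 200)
single_fid = np.polynomial.chebyshev.Chebyshev.fit(x_hi, f_hi(x_hi), deg=4)  # same HF budget
err_mf = np.max(np.abs(surrogate(x_test) - f_hi(x_test)))
err_sf = np.max(np.abs(single_fid(x_test) - f_hi(x_test)))
print(f"max error, multi-fidelity: {err_mf:.2e}   single-fidelity: {err_sf:.2e}")
```

Because the correction is smoother and smaller in amplitude than the high-fidelity response itself, the same handful of expensive runs yields a noticeably more accurate surrogate.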

16.
Nanoparticle (NP)-mediated drug/gene delivery involves phenomena over a broad range of spatial and temporal scales, and the interplay between these phenomena makes the delivery process very complex. In this paper, we identify four key steps in NP-mediated drug/gene delivery: (i) design and synthesis of the delivery vehicle/platform; (ii) microcirculation of the drug carriers (NPs) in the blood flow; (iii) adhesion of NPs to the vessel wall during microcirculation; and (iv) endocytosis and exocytosis of NPs. To elucidate the physical mechanisms underlying these four key steps, we have developed a multiscale computational framework combining all-atomistic simulation, coarse-grained molecular dynamics and the immersed molecular electrokinetic finite element method (IMEFEM). The framework has been demonstrated to successfully capture the binding between nanodiamond, polyethylenimine and small interfering RNA, the margination of NPs in the microcirculation, the adhesion of NPs to the vessel wall under shear flow, and the receptor-mediated endocytosis of NPs. Moreover, the uncertainties in the microcirculation of NPs have been quantified through IMEFEM with a Bayesian updating algorithm. The paper ends with a critical discussion of future opportunities and key challenges in the multiscale modelling of NP-mediated drug/gene delivery. The present multiscale modelling framework can help optimize and design more efficient drug carriers in the future.

17.
Mesh refinement is an important process for achieving good accuracy in computational simulation and analysis. Currently, there is a lack of a high-fidelity refinement algorithm for the accurate modelling of geometry in the absence of a physical geometric model. This paper focuses on using a surface interpolation procedure based on a quartic triangular Bezier patch to approximate the underlying geometry of a mesh and to determine the locations of new subdivision vertices. A robust methodology is used for feature retention and accurate curve fitting at sharp edges and hard vertices, which extends the applicability of the surface fitting procedure to arbitrary geometric configurations. The refinement is based on a new 1:9 subdivision scheme, whose implementation is discussed in detail. Despite its high-order subdivision footprint, computational efficiency is made possible by the effective use of lookup tables. Copyright © 2005 John Wiley & Sons, Ltd.
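For reference, evaluating a quartic triangular Bezier patch at barycentric coordinates is sketched below with an assumed control net; fitting the control points to a mesh and the 1:9 subdivision with lookup tables are not reproduced.

```python
# Sketch: the surface point at barycentric coordinates (u, v, w) is a
# Bernstein-weighted sum of the 15 control points b_ijk with i + j + k = 4.
import numpy as np
from math import factorial

def bernstein(i, j, k, u, v, w, n=4):
    return factorial(n) / (factorial(i) * factorial(j) * factorial(k)) * u**i * v**j * w**k

def make_control_net():
    """15 control points of a quartic patch over a reference triangle (assumed bump shape)."""
    net = {}
    for i in range(5):
        for j in range(5 - i):
            k = 4 - i - j
            x, y = i / 4.0, j / 4.0
            z = 0.5 * x * y * (1.0 - x - y)          # arbitrary interior bulge
            net[(i, j, k)] = np.array([x, y, z])
    return net

def evaluate(net, u, v, w):
    return sum(bernstein(i, j, k, u, v, w) * p for (i, j, k), p in net.items())

net = make_control_net()
print("corner  (1,0,0):", evaluate(net, 1.0, 0.0, 0.0))   # a corner of the patch
print("centroid point :", evaluate(net, 1/3, 1/3, 1/3))   # candidate location for a new subdivision vertex
```

In a subdivision setting, the Bernstein weights for the fixed set of new-vertex barycentric coordinates are constants, which is why precomputed lookup tables make the scheme cheap.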

18.
A rigorous computational framework for the dimensional reduction of discrete, high-fidelity, nonlinear, finite element structural dynamics models is presented. It is based on the pre-computation of solution snapshots, their compression into a reduced-order basis, and the Galerkin projection of the given discrete high-dimensional model onto this basis. To this end, the framework distinguishes between vector-valued displacements and manifold-valued finite rotations. To minimize computational complexity, it also differentiates between the cases of constant and configuration-dependent mass matrices. Like most projection-based nonlinear model reduction methods, however, its computational efficiency hinges not only on the ability of the constructed reduced-order basis to capture the dominant features of the solution of interest but also on the ability of this framework to compute fast and accurate approximations of the projection onto a subspace of tangent matrices and/or force vectors. The computation of the latter approximations is often referred to in the literature as hyper reduction. Hence, this paper also presents the energy-conserving sampling and weighting (ECSW) hyper reduction method for discrete (or semi-discrete), nonlinear, finite element structural dynamics models. Based on mesh sampling and the principle of virtual work, ECSW is natural for finite element computations and preserves an important energetic aspect of the high-dimensional finite element model to be reduced. Equipped with this hyper reduction procedure, the aforementioned Galerkin projection framework is first demonstrated on several academic but challenging problems. Then, its potential for the effective solution of real problems is highlighted with the realistic simulation of the transient response of a vehicle to an underbody blast event. For this problem, the proposed nonlinear model reduction framework reduces the CPU time required by a typical high-dimensional model by up to four orders of magnitude while maintaining a good level of accuracy. Copyright © 2014 John Wiley & Sons, Ltd.
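A toy sketch of the two ingredients named above is given below, assuming random stand-in data: snapshot compression by SVD (POD) and a sparse nonnegative least-squares fit in the spirit of ECSW element weighting; it is not the authors' finite element implementation, and the matrix `G` of element contributions is fabricated for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_dof, n_snap, n_elem, n_train = 400, 30, 120, 10

# (1) Reduced-order basis: compress solution snapshots by SVD (POD).
snapshots = rng.standard_normal((n_dof, 6)) @ rng.standard_normal((6, n_snap))   # ~6 dominant modes
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999)) + 1
V = U[:, :k]                                     # reduced-order basis
print(f"reduced-order basis V has shape {V.shape} ({k} modes retained from {n_snap} snapshots)")

# (2) ECSW-style weights: column e of G stacks the reduced force contribution of
# element e over the training configurations; b is the exact assembled total,
# which corresponds to unit weights. Nonnegative least squares yields a sparse
# nonnegative weight vector because there are fewer constraints than elements.
G = rng.standard_normal((k * n_train, n_elem))   # toy stand-in for element contributions
b = G @ np.ones(n_elem)
weights, residual = nnls(G, b)
print(f"{np.count_nonzero(weights > 1e-8)} of {n_elem} elements carry nonzero weight, "
      f"residual = {residual:.2e}")
```

The sparsity of the weight vector is what allows the reduced model to visit only a small sampled subset of the mesh while approximately preserving the training virtual work.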

19.
Level set methods are becoming an attractive design tool in shape and topology optimization for obtaining efficient and lighter structures. In this paper, a dynamic implicit boundary-based moving superimposed finite element method (s-version FEM or S-FEM) is developed for structural topology optimization using the level set methods, in which the variational interior and exterior boundaries are represented by the zero level set. Both a global mesh and an overlaying local mesh are integrated into the moving S-FEM analysis model. A relatively coarse fixed Eulerian mesh consisting of bilinear rectangular elements is used as the global mesh. The local mesh, consisting of flexible linear triangular elements, is constructed to match the dynamic implicit boundary captured from nodal values of the implicit level set function. In numerical integration using the Gauss quadrature rule, the practical difficulty due to the discontinuities is overcome by the coincidence of the global and local meshes. A double mapping technique is developed to perform the numerical integration for the global and coupling matrices of the overlapped elements with two different coordinate systems. An element killing strategy is presented to reduce the total number of degrees of freedom and improve computational efficiency. A simple constraint handling approach is proposed to perform minimum compliance design with a volume constraint. A physically meaningful and numerically efficient velocity extension method is developed to avoid a complicated PDE-solving procedure. The proposed moving S-FEM is applied to structural topology optimization using the level set methods and shown to be an effective tool for the numerical analysis of linear elasticity topology optimization problems. For the classical elasticity problems in the literature, the present S-FEM achieves numerical results in good agreement with theoretical solutions and/or numerical results from the standard FEM. For the minimum compliance topology optimization problems in structural optimization, the present approach significantly outperforms, as expected, the well-recognized 'ersatz material' approach in the accuracy of the strain field, numerical stability, and representation fidelity, at the expense of increased computational time. It is also shown that the present approach is able to produce structures near the theoretical optimum. It is suggested that the present S-FEM can be a promising tool for shape and topology optimization using the level set methods. Copyright © 2005 John Wiley & Sons, Ltd.

20.
We propose a method for coronary arterial dynamics computation with medical-image-based time-dependent anatomical models. The objective is to improve the computational analysis of coronary arteries for a better understanding of the links between atherosclerosis development and mechanical stimuli such as endothelial wall shear stress and structural stress in the arterial wall. The method has two components. The first is element-based zero-stress (ZS) state estimation, which is an alternative to prestress calculation. The second is a "mixed ZS state" approach, where the ZS states for different elements in the structural mechanics mesh are estimated with reference configurations based on medical images coming from different instants within the cardiac cycle. We demonstrate the robustness of the method in a patient-specific coronary arterial dynamics computation where the motion of a thin strip along the arterial surface and two cut surfaces at the arterial ends is specified to match the motion extracted from the medical images.
