20 similar documents found.
1.
The objective of Latin hypercube sampling is to determine an effective procedure for sampling from a (possibly correlated) multivariate population in order to estimate the distribution function (or at least a significant number of moments) of a complicated function of its variables. The typical application involves a computer-based model in which it is largely impossible to find a way (closed form or numerical) to do the necessary transformation of variables and which is expensive to run in terms of computing resources and time. Classical approaches to hypercube sampling have used sophisticated stratified sampling techniques, but such sampling may provide incorrect measures of the output parameters' variances or covariances due to correlation between the sampled pairs. In this work, we offer a strategy which provides a sampling specification minimizing the sum of the absolute values of the pairwise differences between the true and sampled correlations. We show that optimal plans can be obtained even for small sample sizes. We consider the characteristics of permutation matrices which minimize the sum of correlations between column pairs and then present an effective heuristic for solution. This heuristic generally finds plans which match the correlation structure exactly. When it does not, we provide a hybrid Lagrangian/heuristic method, which empirically has found the optimal solution in all cases tested.
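As a hedged illustration (not the paper's algorithm; all function names are ours), the sketch below generates a basic Latin hypercube plan in pure Python and evaluates the objective the abstract describes: the sum of absolute differences between target and sampled pairwise correlations.

```python
import random

def latin_hypercube(n, d, rng=None):
    """One n-point LHS plan in [0, 1)^d: each variable takes exactly
    one draw from each of n equiprobable strata."""
    rng = rng or random.Random(0)
    cols = []
    for _ in range(d):
        strata = list(range(n))
        rng.shuffle(strata)
        cols.append([(s + rng.random()) / n for s in strata])
    return list(zip(*cols))

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def correlation_error(points, target):
    """Sum of |target - sampled| correlation over all column pairs;
    target[i][j] is the desired correlation of variables i and j."""
    cols = list(zip(*points))
    d = len(cols)
    return sum(abs(target[i][j] - pearson(cols[i], cols[j]))
               for i in range(d) for j in range(i + 1, d))
```

A heuristic in the spirit of the paper would then permute the within-column strata to drive `correlation_error` toward zero.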
2.
A Minimum Bias Latin Hypercube Design
Deterministic engineering design simulators can be too complex to be amenable to direct optimization. An indirect route involves collecting data from the simulator and fitting less complex surrogates (metamodels), which are more readily optimized. However, common statistical experiment plans are not appropriate for data collection from deterministic simulators because of their poor projection properties. Data collection plans based on number-theoretic methods are also inappropriate because they tend to require large sample sizes in order to achieve their desirable properties. We develop a new class of data collection plan, the Minimum Bias Latin Hypercube Design (MBLHD), for sampling from deterministic process simulators. The class represents a compromise between empirical model bias reduction and dispersion of the points within the input variable space. We compare the MBLHD class to previously known classes by several model-independent measures selected from three general families: discrepancies, maximin distance measures, and minimax distance measures. In each case the MBLHD class is at least competitive with the other classes, and in several cases it demonstrates superior performance. We also compare the empirical squared bias of fitted metamodels, approximating a mechanistic model for water flow through a borehole using both kriging and polynomial metamodels. Here again, the performance of the MBLHD class is encouraging.
3.
A. S. Wood, International Journal for Numerical Methods in Engineering, 1986, 23(9):1757-1771
In this paper the enthalpy approach for solving Stefan problems is applied to a three-dimensional situation. A novel and particularly efficient finite-difference scheme is then used to solve the resulting equations. As a starting solution for the numerical algorithm, a three-dimensional result using the heat-balance integral method is obtained for the case of heat flow in two phases with, possibly, different thermal properties.
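The enthalpy formulation is easiest to see in one dimension. The sketch below is our own simplification with unit material properties, not the paper's 3-D scheme: it advances the enthalpy field explicitly and recovers temperature through an enthalpy-temperature relation with a latent-heat plateau.

```python
def temp_from_enthalpy(H, L=1.0):
    """Unit-property enthalpy -> temperature map, melting at T = 0."""
    if H < 0.0:
        return H          # solid
    if H <= L:
        return 0.0        # mushy: latent-heat plateau absorbs L
    return H - L          # liquid

def enthalpy_step(H, dt, dx, L=1.0):
    """One explicit finite-difference step of the 1-D enthalpy method;
    first and last cells act as fixed boundary conditions."""
    T = [temp_from_enthalpy(h, L) for h in H]
    r = dt / dx ** 2
    Hn = H[:]
    for i in range(1, len(H) - 1):
        Hn[i] = H[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    return Hn
```

The phase front never has to be tracked explicitly, which is the point of the enthalpy approach.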
4.
The Imaging Science Journal, 2013, 61(6):467-474
Data hiding techniques can hide a certain amount of secret data in digital content such as images, documents, audio or video. Reversible compressed-image data hiding can losslessly restore the cover image after extracting the secret data from the stego-image. In this paper, we present an efficient reversible image data hiding scheme based on side-match vector quantisation. A mapping concept is useful for this scheme because it converts ternary values into binary. The proposed scheme significantly increases the payload size of a block, and quality analysis showed that it achieves a better peak signal-to-noise ratio than other schemes.
5.
We propose a preconditioning scheme for diagonalizing the large Hamiltonian matrices arising in first-principles plane-wave pseudopotential calculations. The new scheme is based on the Neumann expansion of the inverse of the shifted Hamiltonian, from which only the nonlocal part is omitted. The preconditioner is applied to a gradient vector using the fast Fourier transform technique. In the framework of a Davidson-type diagonalization algorithm, we have found the present preconditioning scheme to be more efficient than the widely accepted diagonal scaling methods.
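In generic notation (a sketch of the idea only; the paper's operator splitting and the omitted nonlocal term are specific to plane-wave Hamiltonians), a Neumann expansion of the inverse of the shifted Hamiltonian, split into a diagonal part D and a remainder R, reads:

```latex
(H - \sigma I)^{-1} = (D + R)^{-1}
                    = \bigl(I + D^{-1}R\bigr)^{-1} D^{-1}
                    \approx \bigl(I - D^{-1}R + (D^{-1}R)^{2} - \cdots\bigr)\, D^{-1},
\qquad D = \operatorname{diag}(H) - \sigma I .
```

Truncating the series after a few terms gives a preconditioner that is cheap to apply to a gradient vector with FFTs.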
6.
The finite element alternating method (FEAM) is an extremely useful and efficient scheme for the accurate calculation of stress intensity factors in complex three-dimensional structures. This approach combines an analytical solution for a crack in an infinite body with a finite element solution for the uncracked component. Previously, a three-dimensional fatigue crack growth algorithm was incorporated into the alternating method for the case of constant amplitude loading. The major accomplishment outlined in this paper is the implementation of an additional numerical scheme for the more difficult case of variable amplitude loading. Test cases, with an emphasis on components taken from the aerospace industry, are used to validate the revised alternating method computer program. Results reveal excellent levels of correlation with existing packages and highlight the suitability of the finite element alternating method for fatigue crack growth analyses in a wide variety of components such as aircraft parts, pipelines, offshore structures, pressure vessels and ships.
7.
An efficient finite element scheme is devised for problems in linear viscoelasticity of solids with a moving boundary. Such problems arise, for example, in the burning process of solid fuel (propellant). Since viscoelastic constitutive behavior is inherently associated with a “memory,” the potential need to store and operate on the entire history of the numerical solution has been a source of concern in computational viscoelasticity. A well-known “memory trick” overcomes this difficulty in the fixed-boundary case. Here the “memory trick” is extended to problems involving moving boundaries. The computational aspects of this extended scheme are discussed, and its performance is demonstrated via a numerical example. In addition, a special numerical integration rule is proposed for the viscoelastic integral, which is more accurate than the commonly-used trapezoidal rule and does not require additional computational effort.
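For a single exponential (Prony) relaxation term, the fixed-boundary "memory trick" reduces to a one-step recurrence on an internal variable, so the full strain history never needs to be stored. A minimal sketch in our own notation, with our own time-integration weight, not the paper's moving-boundary scheme:

```python
import math

def viscoelastic_step(q_prev, deps, dt, E1, tau):
    """Advance the hereditary stress q for kernel E1 * exp(-t / tau)
    by one step, given the strain increment deps over the step."""
    a = math.exp(-dt / tau)
    w = tau / dt * (1.0 - a)   # assumes strain varies linearly in the step
    return a * q_prev + E1 * w * deps

def total_stress(eps, q, E_inf):
    """Equilibrium part plus the recursively updated memory part."""
    return E_inf * eps + q
```

The cost per step is constant, whereas a naive evaluation of the hereditary integral grows with the number of past steps.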
8.
J. Fish, M. Pandheeradi, V. Belsky, International Journal for Numerical Methods in Engineering, 1995, 38(10):1597-1610
We present a new approach for the fast, efficient solution of large systems of non-linear equations arising from finite element discretization. The proposed non-linear solver builds on the advantages of the popular solution methods currently employed, while eliminating most of their undesirable features. It combines the well-known BFGS method with the FAS version of the multigrid method, introduced by Brandt [1], to form a fast, efficient solution method for non-linear problems. We present numerical performance studies that are indicative of the convergence properties as well as the stability of the new method.
9.
A semi-implicit finite element scheme is proposed for two-dimensional tidal flow computations. In the scheme, each term of the governing equations, rather than each dependent variable, is expanded in terms of the unknown nodal values, which helps reduce computer execution time. The friction terms are represented semi-implicitly to improve stability, but this requires no additional computational effort. Test cases for which analytic solutions of the shallow water equations are available are employed to test the proposed scheme, and the results show that it is efficient and stable. A numerical experiment is also included to compare the proposed scheme with another finite element scheme employing Serendipity-type Hermitian cubic basis functions. A numerical model of an actual bay is constructed based on the proposed scheme, and the computed tidal flows bear close resemblance to flows measured in field surveys.
10.
An efficient finite element (FE) scheme to deal with a class of coupled fluid-solid problems is presented. The main ingredients of the methodology are: an accurate Q1/P0 solid element (trilinear in velocities and piecewise-constant, discontinuous pressures); a large-deformation plasticity model; an algorithm to deal with material failure, crack propagation and fragment formation; and a fragment-rigidization methodology to avoid the numerical instabilities that pieces of material flying away from the main solid body may produce. All the mentioned schemes have been fully parallelized and coupled, using a loosely embedded procedure, with a well-established and validated computational fluid dynamics (CFD) code (FEFLO). A computational structural dynamics (CSD) case and a coupled CFD/CSD case are presented and analyzed.
11.
12.
Two techniques are proposed to improve the performance of Latin hypercube sampling. To improve the statistics for each variable, it is proposed that realizations for each variable be obtained as the probabilistic means of equiprobable disjoint intervals in the variable's domain, instead of using the cumulative distribution function directly, as is currently done. To reduce error in the correlations between variables (whether correlated or uncorrelated), it is proposed to apply a single-switch-optimized method to the realizations for one variable at a time, instead of using matrix manipulation, as is the current custom. Limitations of hypercube sampling are discussed, and numerical results involving a simple Poisson process are offered.
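To illustrate the first proposal, the sketch below computes the probabilistic (conditional) means of n equiprobable strata for a unit exponential distribution, where the required integrals have closed forms; these means would then replace the usual inverse-CDF draws. The choice of distribution and the names are ours, not the paper's.

```python
import math

def exp_stratum_means(n):
    """Conditional means of Exp(1) over n equiprobable strata.
    Stratum i covers probabilities [i/n, (i+1)/n)."""
    def G(x):
        # integral of t * exp(-t) dt from 0 to x = 1 - (x + 1) * exp(-x)
        return 1.0 - (x + 1.0) * math.exp(-x)
    # stratum edges via the exponential quantile function
    edges = [-math.log(1.0 - i / n) for i in range(n)] + [float("inf")]
    means = []
    for lo, hi in zip(edges, edges[1:]):
        hi_val = 1.0 if hi == float("inf") else G(hi)
        # conditional mean = (1 / stratum probability) * partial first moment
        means.append(n * (hi_val - G(lo)))
    return means
```

Because the strata are equiprobable, the average of these means reproduces the distribution mean exactly, which is the statistical improvement the abstract aims at.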
13.
Simplified chemical kinetic schemes are a crucial prerequisite for the simulation of complex three-dimensional turbulent flows, and various methods for the generation of reduced mechanisms have been developed in the past. The method of intrinsic low-dimensional manifolds (ILDM), for example, provides a mathematical tool for the automatic simplification of chemical kinetics, but one problem with this method is that the information produced by the mechanism-reduction procedure has to be stored for subsequent use in reacting-flow calculations. In most cases tabulation procedures are used which store the relevant data (such as reduced reaction rates) in terms of the reaction progress variables, followed by table look-up during the reacting-flow calculations. This can result in huge storage requirements for the multi-dimensional tabulation. To overcome this problem, a storage scheme based on orthogonal polynomials is presented. Instead of small tabulation cells with local mesh refinement, the thermochemical state space is divided into a small number of coarse cells, within which polynomial approximations replace the commonly used multi-linear interpolation. This leads to a considerable decrease in the storage needed. The hydrogen-oxygen system is considered as an example: even for this small chemical system, the storage requirement decreases by a factor of 100.
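As a hedged, one-dimensional stand-in for the storage scheme: fit a low-degree Chebyshev polynomial inside a coarse cell and evaluate it with the Clenshaw recurrence, so only deg+1 coefficients are stored instead of a fine lookup table. The chemistry-specific details (progress variables, cell decomposition) are omitted, and the function names are ours.

```python
import math

def cheb_fit(f, a, b, deg):
    """Chebyshev coefficients of f on [a, b] via interpolation at
    Chebyshev nodes (the in-cell polynomial approximation)."""
    N = deg + 1
    nodes = [math.cos(math.pi * (j + 0.5) / N) for j in range(N)]
    fvals = [f(0.5 * (a + b) + 0.5 * (b - a) * x) for x in nodes]
    coeffs = []
    for k in range(N):
        c = 2.0 / N * sum(fvals[j] * math.cos(math.pi * k * (j + 0.5) / N)
                          for j in range(N))
        coeffs.append(c)
    coeffs[0] *= 0.5   # fold the conventional half weight into c0
    return coeffs

def cheb_eval(coeffs, a, b, x):
    """Evaluate the stored polynomial at x using Clenshaw's recurrence."""
    t = (2.0 * x - a - b) / (b - a)   # map x into [-1, 1]
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + c, b1
    return t * b1 - b2 + coeffs[0]
```

A degree-8 fit already reproduces a smooth rate-like function to around machine-level accuracy over a unit cell, which is where the hundredfold storage saving comes from in higher dimensions.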
14.
15.
Space-filling and projective properties are desired features in the design of computer experiments to create global metamodels that replace expensive computer simulations in engineering design. The goal of this article is to develop an efficient and effective sequential Quasi-LHD (Latin hypercube design) sampling method that maintains and balances the two aforementioned properties. The sequential sampling is formulated as an optimization problem, with the objective being the maximin distance, a space-filling criterion, and the constraints based on a set of pre-specified minimum one-dimensional distances to achieve the approximate one-dimensional projective property. Through comparative studies of sampling properties and metamodel accuracy, the new approach is shown to outperform other sequential sampling methods for global metamodelling and to be comparable to the one-stage sampling method while providing more flexibility in a sequential metamodelling procedure.
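A minimal greedy sketch of one sequential step combining the two ingredients (maximin objective, one-dimensional distance constraints); the actual method solves an optimization problem rather than scanning a fixed candidate list, and all names here are ours.

```python
def min_dist(p, pts):
    """Smallest Euclidean distance from point p to any point in pts."""
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in pts)

def pick_next(existing, candidates, min_1d):
    """Greedy sequential step: among candidates whose every 1-D projection
    stays at least min_1d from the existing points (the quasi-LHD
    constraint), pick the one maximizing the minimum distance (maximin)."""
    feasible = [c for c in candidates
                if all(min(abs(c[i] - q[i]) for q in existing) >= min_1d
                       for i in range(len(c)))]
    if not feasible:
        return None
    return max(feasible, key=lambda c: min_dist(c, existing))
```

The constraint rejects points that would collapse onto an existing coordinate in any single dimension, which is what preserves the approximate projective property.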
16.
Y. D. Lee, R. C. McClung, G. G. Chell, Fatigue & Fracture of Engineering Materials & Structures, 2008, 31(11):1004-1016
This paper summarizes the development of an efficient stress intensity factor (SIF) solution scheme applicable to a corner crack (CC) in a rectangular section subjected to arbitrary stressing on the crack plane. A general bivariant weight function (WF) formulation developed previously for a CC in a plate was extended to address a CC at a hole. Two supplemental algorithms were developed to achieve a substantial reduction in the computational time necessary for practical application. The new SIF solution scheme was validated by comparison with more than 180 three-dimensional (3D) boundary element (BE) solutions.
17.
Shah M. Yunus, Peter C. Kohnke, Sunil Saigal, International Journal for Numerical Methods in Engineering, 1989, 28(12):2777-2793
This paper presents an efficient numerical integration scheme for evaluating the matrices (stiffness, mass, stress-stiffness and thermal load) for a doubly curved, multilayered, composite, quadrilateral shell finite element. The element formulation is based on three-dimensional continuum mechanics theory and is applicable to the analysis of thin and moderately thick composite shells. The conventional formulation requires a 2 × 2 × 2 or 2 × 2 × 1 Gauss integration per layer for the calculation of the element matrices; this becomes uneconomical when a large number of layers is used, owing to an excessive amount of computation. The present formulation is based on explicit separation of the thickness variable from the shell-surface-parallel variables. The through-thickness variables, once separated, are combined with the thickness-dependent material properties and integrated separately. The element matrices are computed using the integrated material matrices and only a 2 × 2 spatial Gauss integration scheme. The response results using the present formulation are identical to those obtained using the conventional formulation. For a small number of layers, the present method requires slightly more CPU time; however, for a larger number of layers, numerical data are presented to demonstrate that the present formulation is an order of magnitude more economical than the conventional scheme.
18.
This paper proposes Latin hypercube sampling combined with stratified sampling, a variance reduction technique, to calculate accurate fracture probabilities. With the compound sampling scheme, the number of simulations required is relatively small and the calculation error remains satisfactory.
19.
K. S. Li, Materials and Structures, 1993, 26(9):517-521
A sampling scheme consisting of the following two sampling rules is considered: (i) the sample mean value of the random samples should not be less than a times the specified standard, and (ii) all individual test results should not be less than b times the specified standard. By varying the constants a and b and the sample size, the sampling scheme generates a wide range of operating characteristic (OC) curves. An approximate closed-form solution is proposed for calculating the OC curve of the proposed scheme. Applications of the scheme are also discussed.
Résumé (translated): A sampling procedure is considered that applies the following two rules: (i) the mean value of randomly drawn samples should not be less than a times the official specified value; (ii) all individual test results should not be less than b times the official specified value. The proposed sampling rule has three parameters: the sample size n and the constants a and b. By varying these three parameters, the sampling procedure generates a wide range of operating characteristic curves, and makes it possible to establish an OC curve acceptable to both producer and consumer. It can also be used to produce a new acceptance rule whose OC curve is similar to that of an existing sampling procedure. An approximate solution is proposed for calculating the OC curve of the proposed procedure. Its applications are also discussed.
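The two rules are straightforward to state in code. A sketch with illustrative numbers only (the constants a and b and the specified standard are placeholders; the OC-curve computation itself is not shown):

```python
def accept(sample, spec, a, b):
    """Two-rule acceptance test from the scheme described above.
    Rule (i): sample mean >= a * spec.
    Rule (ii): every individual result >= b * spec."""
    mean = sum(sample) / len(sample)
    return mean >= a * spec and min(sample) >= b * spec
```

The OC curve of the scheme is then the probability of acceptance as a function of the true population mean, for fixed a, b and sample size.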
20.
Monitoring generator oscillations using Prony analysis is an application that is increasingly implemented using phasor measurement unit (PMU) data. Most of the existing literature on power system oscillations using the Prony method reports a fixed sampling window obtained by a systematic iterative approach that takes potential disturbances into account. The fixed sampling technique, although simple to implement, does not optimise the data acquisition time; moreover, sampling at inadequate intervals can lead to errors during Prony analysis. In this study, mathematical formulations are derived to show the effect of small and large sampling intervals. Further, a technique using the condition number is proposed as a quality index for signalling the errors arising from the choice of sampling interval. To facilitate an efficient monitoring rate while still maintaining sufficient accuracy, an adaptive sampling scheme is proposed. The study demonstrates that the proposed scheme can exhibit a faster monitoring rate with acceptable accuracy in practical implementation.
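A toy two-mode illustration of why overly small sampling intervals hurt: the Prony basis columns, built from the modal decay factors, become nearly parallel as the interval shrinks, so the condition number of the fitting matrix blows up. The 2x2 reduction and the Frobenius-norm condition number are our simplifications, not the paper's formulation.

```python
import math

def cond2(a, b, c, d):
    """Frobenius-norm condition number of the 2x2 matrix [[a, b], [c, d]]."""
    det = a * d - b * c
    nA = math.sqrt(a * a + b * b + c * c + d * d)
    return nA * (nA / abs(det))   # ||A^-1||_F = ||A||_F / |det| for 2x2

def prony_condition(dt, d1=0.5, d2=0.6):
    """Condition of a minimal Prony system for two damped modes sampled
    at interval dt; columns are [1, z_i] with z_i = exp(-d_i * dt)."""
    z1, z2 = math.exp(-d1 * dt), math.exp(-d2 * dt)
    return cond2(1.0, 1.0, z1, z2)
```

Monitoring this condition number, as the abstract proposes, flags sampling intervals for which the fit is numerically unreliable.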