20 similar documents retrieved.
1.
The objective of Latin hypercube sampling is to determine an effective procedure for sampling from a (possibly correlated) multivariate population in order to estimate the distribution function (or at least a significant number of moments) of a complicated function of its variables. The typical application involves a computer-based model for which it is largely impossible to find a way (closed form or numerical) to do the necessary transformation of variables and which is expensive to run in terms of computing resources and time. Classical approaches to hypercube sampling have used sophisticated stratified sampling techniques, but such sampling may provide incorrect measures of the output parameters' variances or covariances because of correlation between the sampled pairs. In this work, we offer a strategy which provides a sampling specification minimizing the sum of the absolute values of the pairwise differences between the true and sampled correlations. We show that optimal plans can be obtained even for small sample sizes. We consider the characteristics of permutation matrices which minimize the sum of correlations between column pairs and then present an effective heuristic for solution. This heuristic generally finds plans which match the correlation structure exactly. When it does not, we provide a hybrid Lagrangian/heuristic method, which empirically has found the optimal solution for all cases tested.
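As an illustration of the correlation-matching idea (a minimal sketch, not the authors' heuristic or their Lagrangian refinement), the fragment below builds a plain Latin hypercube sample and applies a simple within-column swap search that accepts a swap only if it reduces the summed absolute deviation from a target correlation matrix; all names and the acceptance rule are illustrative assumptions.

```python
import numpy as np

def lhs(n, k, rng):
    # basic Latin hypercube sample on [0, 1]^k: one random permutation of the n strata per column
    perms = np.column_stack([rng.permutation(n) for _ in range(k)])
    return (perms + rng.random((n, k))) / n

def corr_error(x, target):
    # summed absolute deviation between the sample and target correlation matrices
    return np.abs(np.corrcoef(x, rowvar=False) - target).sum()

def correlation_controlled_lhs(n, k, target_corr, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = lhs(n, k, rng)
    best = corr_error(x, target_corr)
    for _ in range(iters):
        j = rng.integers(k)
        a, b = rng.choice(n, size=2, replace=False)
        x[[a, b], j] = x[[b, a], j]        # swap two entries within one column
        e = corr_error(x, target_corr)
        if e <= best:
            best = e                        # keep improving (or neutral) swaps
        else:
            x[[a, b], j] = x[[b, a], j]     # undo worsening swaps
    return x, best

# example: two variables with a target correlation of 0.5
design, err = correlation_controlled_lhs(20, 2, np.array([[1.0, 0.5], [0.5, 1.0]]))
```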
2.
Sliced Latin hypercube designs (SLHDs) have important applications in designing computer experiments with continuous and categorical factors. However, a randomly generated SLHD can be poor in terms of space-filling, and with the existing construction method, which generates the SLHD column by column using sliced permutation matrices, it is also difficult to search for the optimal SLHD. In this article, we develop a new construction approach that first generates a small Latin hypercube design in each slice and then arranges them together to form the SLHD. The new approach is intuitive and can be easily adapted to generate orthogonal SLHDs and orthogonal array-based SLHDs. More importantly, it enables us to develop general algorithms that can search for the optimal SLHD efficiently.
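A minimal sketch of the slice-first idea described above (a plain random version with no space-filling optimization; the function names and the random assignment of fine levels within each coarse cell are assumptions):

```python
import numpy as np

def small_lhd(m, k, rng):
    # an m-run Latin hypercube design: each column is a permutation of 0..m-1
    return np.column_stack([rng.permutation(m) for _ in range(k)])

def sliced_lhd(m, t, k, seed=0):
    """Build an n = m*t run sliced LHD with t slices: generate an LHD per slice,
    then expand each coarse level into t fine levels shared out among the slices."""
    rng = np.random.default_rng(seed)
    slices = [small_lhd(m, k, rng) for _ in range(t)]
    D = np.empty((m * t, k), dtype=int)
    for j in range(k):
        for level in range(m):
            # the t fine levels inside this coarse cell, one per slice
            fine = rng.permutation(np.arange(level * t, (level + 1) * t))
            for s in range(t):
                row = np.flatnonzero(slices[s][:, j] == level)[0]
                D[s * m + row, j] = fine[s]
    return D  # rows s*m .. (s+1)*m - 1 form slice s; dividing levels by t recovers each slice's LHD
```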
3.
We propose an approach for constructing a new type of design, called a sliced orthogonal array-based Latin hypercube design. This approach exploits a slicing structure of orthogonal arrays with strength two and makes use of sliced random permutations. Such a design achieves one- and two-dimensional uniformity and can be divided into smaller Latin hypercube designs with one-dimensional uniformity. Sampling properties of the proposed designs are derived. Examples are given for illustrating the construction method and corroborating the derived theoretical results. Potential applications of the constructed designs include uncertainty quantification of computer models, computer models with qualitative and quantitative factors, cross-validation and efficient allocation of computing resources. Supplementary materials for this article are available online.
4.
A Minimum Bias Latin Hypercube Design
Deterministic engineering design simulators can be too complex to be amenable to direct optimization. An indirect route involves data collection from the simulator and fitting of less complex surrogates, or metamodels, which are more readily optimized. However, common statistical experiment plans are not appropriate for data collection from deterministic simulators because of their poor projection properties, and data collection plans based on number-theoretic methods tend to require large sample sizes in order to achieve their desirable properties. We develop a new class of data collection plan, the Minimum Bias Latin Hypercube Design (MBLHD), for sampling from deterministic process simulators. The class represents a compromise between empirical model bias reduction and dispersion of the points within the input variable space. We compare the MBLHD class to previously known classes by several model-independent measures selected from three general families: discrepancies, maximin distance measures, and minimax distance measures. In each case the MBLHD class is at least competitive with the other classes, and in several cases it demonstrates superior performance. We also compare the empirical squared bias of fitted metamodels, approximating a mechanistic model for water flow through a borehole with both kriging and polynomial metamodels. Here again, the performance of the MBLHD class is encouraging.
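Two of the model-independent measures mentioned above can be evaluated with standard formulas; the sketch below (assuming a design already scaled to the unit cube, and SciPy for pairwise distances) computes the maximin distance criterion and the centered L2 discrepancy.

```python
import numpy as np
from scipy.spatial.distance import pdist

def maximin_criterion(D):
    # smallest pairwise Euclidean distance between design points (larger is better)
    return pdist(D).min()

def centered_l2_discrepancy(D):
    # standard centered L2 discrepancy for a design D in [0, 1]^k (smaller is better)
    n, k = D.shape
    x = D - 0.5
    term1 = np.prod(1 + 0.5 * np.abs(x) - 0.5 * x**2, axis=1).sum()
    cross = np.prod(1 + 0.5 * np.abs(x[:, None, :]) + 0.5 * np.abs(x[None, :, :])
                    - 0.5 * np.abs(x[:, None, :] - x[None, :, :]), axis=2)
    return np.sqrt((13.0 / 12.0) ** k - (2.0 / n) * term1 + cross.sum() / n**2)
```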
5.
Felipe A. C. Viana, Quality and Reliability Engineering International, 2016, 32(5): 1975-1985
The growing power of computers has enabled techniques created for the design and analysis of simulations to be applied to a large spectrum of problems and to reach a high level of acceptance among practitioners. Generally, when simulations are time consuming, a surrogate model replaces the computer code in further studies (e.g., optimization, sensitivity analysis, etc.). The first step for successful surrogate modeling and statistical analysis is the planning of the input configurations used to exercise the simulation code. Among the strategies devised for computer experiments, Latin hypercube designs have become particularly popular. This paper provides a tutorial on Latin hypercube design of experiments, highlighting potential reasons for its widespread use. The discussion starts with the early developments in optimization of the point selection and goes all the way to the pitfalls of the indiscriminate use of Latin hypercube designs. Final thoughts are given on opportunities for future research. Copyright © 2015 John Wiley & Sons, Ltd.
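As a concrete companion to the tutorial, the short example below generates, scales and checks a Latin hypercube design using SciPy's qmc module (this assumes SciPy >= 1.7 and hypothetical input ranges; it is not code from the paper):

```python
import numpy as np
from scipy.stats import qmc

# 20-point Latin hypercube design for a simulation with 3 inputs
sampler = qmc.LatinHypercube(d=3, seed=42)
unit_design = sampler.random(n=20)              # points in [0, 1)^3, one per stratum per column

# scale to the (hypothetical) physical ranges of the simulation inputs
lower, upper = [0.0, 10.0, 1e-3], [1.0, 50.0, 1e-1]
design = qmc.scale(unit_design, lower, upper)

# simple space-filling diagnostic: discrepancy of the LHD vs. plain Monte Carlo points
mc = np.random.default_rng(42).random((20, 3))
print(qmc.discrepancy(unit_design), qmc.discrepancy(mc))
```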
6.
A dynamic access control mechanism suitable for hardware implementation is proposed. Each file is safeguarded by q lock values, where q is the number of possible access classes. The access right of a user to a given file can be revealed very efficiently. In addition, dynamic operations such as changing a privilege value, insertion or deletion of a user, and insertion or deletion of a file can be done easily. Compared with Wu and Hwang's and Chang's methods, the computation of lock values is simple and the verification of an access request is quite efficient. The storage required is proportional to m·n, which is less than that of the above-mentioned methods, where m is the number of users and n is the number of files.
7.
A. S. Wood, International Journal for Numerical Methods in Engineering, 1986, 23(9): 1757-1771
In this paper the enthalpy approach for solving Stefan problems is applied to a three-dimensional situation. A novel and particularly efficient finite-difference scheme is then used to solve the resulting equations. As a starting solution for the numerical algorithm, a three-dimensional result using the heat-balance integral method is obtained for the case of heat flow in two phases with, possibly, different thermal properties.
8.
The Imaging Science Journal, 2013, 61(6): 467-474
Data hiding techniques can embed a certain amount of secret data into digital content such as images, documents, audio or video. Reversible compressed-image data hiding can losslessly restore the cover image after extracting the secret data from the stego-image. In this paper, we present an efficient reversible image data hiding scheme based on side match vector quantisation. A mapping concept is useful for this scheme because it converts ternary values into binary. The proposed scheme significantly increases the payload size of a block, and quality analysis shows that it achieves a better peak signal-to-noise ratio than other schemes.
9.
An improved stop-and-wait automatic repeat request (SW-ARQ) scheme is proposed for underwater acoustic channels. To improve transmission reliability, each bit in a packet is transmitted N consecutive times. The theoretical throughput of the proposed scheme is optimized with respect to the channel bit signal-to-noise ratio (SNR), the packet length, the number of consecutive transmissions of each bit, and the product of the channel transmission rate and the propagation delay. Compared with the basic SW-ARQ scheme and its improved variants proposed in the literature for underwater acoustic channels, the proposed ARQ scheme significantly improves the throughput performance of underwater acoustic communication systems. Moreover, as the bit SNR increases, the sensitivity of the optimal N to the packet length and to the product of transmission rate and propagation delay decreases greatly.
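A generic textbook-style throughput model for stop-and-wait ARQ with N-fold bit repetition and majority voting over a BPSK link is sketched below; it only illustrates the trade-off being optimized and is not the exact expression derived in the paper.

```python
from math import comb, erfc, sqrt

def ber_bpsk(snr_db):
    # coherent BPSK bit error probability at the given per-bit SNR (in dB)
    snr = 10.0 ** (snr_db / 10.0)
    return 0.5 * erfc(sqrt(snr))

def sw_arq_throughput(snr_db, L, N, a):
    """Normalized throughput when every bit of an L-bit packet is sent N times (N odd)
    and decided by majority vote; a is the round-trip delay (transmission rate times
    two-way propagation delay) expressed in transmission times of the original packet."""
    p = ber_bpsk(snr_db)
    # residual bit error probability after N-fold repetition with majority voting
    p_n = sum(comb(N, k) * p**k * (1 - p) ** (N - k) for k in range(N // 2 + 1, N + 1))
    p_succ = (1 - p_n) ** L
    # each attempt occupies N packet transmission times plus the round-trip delay
    return p_succ / (N + a)

# sweep the repetition factor to find the best N at a given operating point
best_N = max(range(1, 12, 2), key=lambda N: sw_arq_throughput(6.0, 512, N, 4.0))
```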
10.
We propose a preconditioning scheme for diagonalizing large Hamiltonian matrices arising in first-principles plane-wave pseudopotential calculations. The new scheme is based on the Neumann expansion of the inverse of the shifted Hamiltonian, from which only the nonlocal part is omitted. The preconditioner is applied to a gradient vector using the fast Fourier transform technique. In the framework of a Davidson-type diagonalization algorithm, we have found the present preconditioning scheme to be more efficient than widely accepted diagonal scaling methods.
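A rough sketch of how a truncated Neumann expansion can be applied to a residual vector with FFTs is given below; the splitting, the toy one-dimensional grid layout and the small-denominator guard are assumptions, and the paper's preconditioner (which also drops the nonlocal pseudopotential part) differs in detail.

```python
import numpy as np

def neumann_precondition(resid, kin, v_loc_real, sigma, n_terms=3):
    """Apply a truncated Neumann expansion of (T + V_loc - sigma)^(-1) to a residual
    given as plane-wave coefficients on a full 1D grid.  With the splitting
    A = D - R, D = diag(kin - sigma) in G-space and R = -V_loc (diagonal in real
    space), A^(-1) r is approximated by the recursion x_{k+1} = D^(-1)(r + R x_k)."""
    d = kin - sigma
    d = np.where(np.abs(d) < 0.1, 0.1, d)               # guard against tiny denominators
    x = resid / d
    for _ in range(n_terms):
        rx = -np.fft.fft(v_loc_real * np.fft.ifft(x))   # apply R with two FFTs
        x = (resid + rx) / d
    return x
```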
11.
Solid angle sampling is an important variance-reduction technique when solving global illumination problems by Monte Carlo methods. In this paper, we present an efficient solid angle sampling technique, taking a paraboloidal luminaire as a case study. The efficiency of our technique comes from employing a tight bounding volume to approximate the solid angle. Our technique includes three processes. The construction process builds a bounding volume: a convex, frustum-like polyhedron that compromises between tightness and the number of vertices. The projection process approximates the solid angle as a convex spherical polygon on a unit hemisphere. Finally, the triangulation process triangulates the convex spherical polygon into spherical triangles for stratified sampling. We analyzed our technique in Monte Carlo direct lighting and Monte Carlo path tracing rendering algorithms. The results show that our technique provides up to 90% sampling efficiency. The significance of the proposed technique is that solid angle sampling, previously not possible for the paraboloidal luminaire, is now feasible. In addition, the technique is efficient for sampling and is also applicable to other types of luminaires, such as cylindrical and conic luminaires.
12.
An efficient finite element scheme is devised for problems in linear viscoelasticity of solids with a moving boundary. Such problems arise, for example, in the burning process of solid fuel (propellant). Since viscoelastic constitutive behavior is inherently associated with a “memory,” the potential need to store and operate on the entire history of the numerical solution has been a source of concern in computational viscoelasticity. A well-known “memory trick” overcomes this difficulty in the fixed-boundary case. Here the “memory trick” is extended to problems involving moving boundaries. The computational aspects of this extended scheme are discussed, and its performance is demonstrated via a numerical example. In addition, a special numerical integration rule is proposed for the viscoelastic integral, which is more accurate than the commonly-used trapezoidal rule and does not require additional computational effort.
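For context, the fixed-boundary “memory trick” amounts to a per-term recurrence that removes the need to store the full strain history; a minimal single-step sketch for a Prony-series relaxation modulus is given below (the paper's actual contribution, the extension to moving boundaries, is not shown).

```python
import numpy as np

def update_stress(h, d_eps, dt, g, tau, g_inf, eps_new):
    """One step of the standard recursive-convolution update for a relaxation modulus
    G(t) = g_inf + sum_i g[i] * exp(-t / tau[i]); h holds one history variable per
    exponential term, and the strain is assumed to vary linearly over the step."""
    x = dt / tau
    a = np.exp(-x)
    b = g * (1.0 - a) / x
    h_new = a * h + b * d_eps              # each term updated from the previous step only
    sigma = g_inf * eps_new + h_new.sum()
    return sigma, h_new
```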
13.
F. Huq, R. Brannon, L. Graham-Brady, International Journal for Numerical Methods in Engineering, 2016, 105(1): 33-62
A computational scheme is developed for sampling-based evaluation of a function whose inputs are statistically variable. After a general abstract framework is developed, it is applied to initialize and evolve the size and orientation of cracks within a finite domain, such as a finite element or similar subdomain. The finite element is presumed to be too large to explicitly track each of the potentially thousands (or even millions) of individual cracks in the domain. Accordingly, a novel binning scheme is developed that maps the crack data to nodes on a reference grid in probability space. The scheme, which is clearly generalizable to applications involving arbitrary numbers of random variables, is illustrated in the context of planar deformations of a brittle material containing straight cracks. Assuming two random variables describe each crack, the cracks are assigned uniformly random orientations and non-uniformly random sizes. Their data are mapped to a computationally tractable number of nodes on a grid laid out in the unit square of probability space, so that Gauss points on the grid may be used to define an equivalent subpopulation of the cracks. This significantly reduces the computational cost of evaluating ensemble effects of large evolving populations of random variables. Copyright © 2015 John Wiley & Sons, Ltd.
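A schematic version of the binning step (not the authors' exact scheme): each crack's attributes are transformed to probability coordinates through their assumed marginal distributions and the population is accumulated as weights on a grid in the unit square; the sketch bins to cells rather than to grid nodes.

```python
import numpy as np

def bin_cracks_to_grid(sizes, angles, size_cdf, n_edges=11):
    """Map a large crack population, each crack described by (size, orientation),
    to weights on a regular grid in the unit square of probability space.
    size_cdf is the assumed CDF of crack size; orientation is assumed uniform on [0, pi)."""
    u1 = size_cdf(np.asarray(sizes))          # probability coordinate of crack size
    u2 = np.asarray(angles) / np.pi           # probability coordinate of orientation
    edges = np.linspace(0.0, 1.0, n_edges)
    w, _, _ = np.histogram2d(u1, u2, bins=[edges, edges])
    return w / w.sum()                        # fraction of the crack population in each cell
```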
14.
The finite element alternating method (FEAM) is an extremely useful and efficient scheme for the accurate calculation of stress intensity factors in complex three-dimensional structures. The approach combines an analytical solution for a crack in an infinite body with a finite element solution for the uncracked component. Previously, a three-dimensional fatigue crack growth algorithm was incorporated into the alternating method for the case of constant amplitude loading. The major accomplishment outlined in this paper is the implementation of an additional numerical scheme for the more difficult case of variable amplitude loading. Test cases, with an emphasis on components taken from the aerospace industry, are used to validate the revised alternating method computer program. Results reveal excellent levels of correlation with existing packages and highlight the suitability of the finite element alternating method for fatigue crack growth analyses in a wide variety of structures, such as aircraft components, pipelines, offshore structures, pressure vessels and ships.
15.
Rajiv Shenoy, Marilyn Smith, Michael Park, International Journal for Numerical Methods in Engineering, 2015, 101(6): 470-488
A parallel localization scheme is presented to enable solution transfers between unstructured grids. The scheme relies on neighbor walks to reduce the number of candidate elements that are visited to find the enclosing element. An advancing front method allows a subset of nodes to efficiently sweep through the grid, progressively reducing search spaces. The algorithm is parallelized, permitting solution transfers over arbitrary grid decompositions. A hierarchical localization process helps prevent the neighbor walk algorithm from failing when encountering the boundaries of a concave domain by localizing the boundaries before the interior of the domain is localized. Random selections of the next step interrupt cyclic loops that may occur during a neighbor walk. The complexity of the search algorithm is verified over parallel decompositions and is effectively independent of the number of partitions. Copyright © 2014 John Wiley & Sons, Ltd.
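A compact sketch of a neighbor-walk point location in a two-dimensional triangulation, including the random tie-breaking mentioned above, is given below; the mesh data layout is assumed, and the parallel decomposition and hierarchical boundary treatment are not shown.

```python
import numpy as np

def barycentric(tri_pts, p):
    # barycentric coordinates of point p with respect to a triangle given as a (3, 2) array
    T = np.column_stack((tri_pts[1] - tri_pts[0], tri_pts[2] - tri_pts[0]))
    l1, l2 = np.linalg.solve(T, p - tri_pts[0])
    return np.array([1.0 - l1 - l2, l1, l2])

def neighbor_walk(points, tris, neighbors, p, start=0, max_steps=10000, seed=0):
    """Walk from triangle `start` toward the triangle enclosing p.
    neighbors[t, i] is the triangle across the edge opposite local vertex i (-1 on a boundary)."""
    rng = np.random.default_rng(seed)
    t = start
    for _ in range(max_steps):
        lam = barycentric(points[tris[t]], p)
        if (lam >= -1e-12).all():
            return t                          # enclosing element found
        neg = np.flatnonzero(lam < -1e-12)
        i = rng.choice(neg)                   # random choice among violated edges interrupts cycles
        nxt = neighbors[t, i]
        if nxt == -1:
            return -1                         # walked off a (possibly concave) boundary
        t = nxt
    return -1
```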
16.
Enumeration and Evaluation of Small Orthogonal Latin Hypercube Designs for Polynomial Regression Models
Haralambos Evangelaras, Quality and Reliability Engineering International, 2016, 32(7): 2381-2389
In this paper, we construct and evaluate all nonisomorphic Latin hypercube designs with n ≤ 16 runs whose use guarantees that the estimates of the first-order effects are uncorrelated with each other and also uncorrelated with the estimates of the second-order effects in polynomial regression models. The produced designs are evaluated using well-known and popular criteria, and optimal designs are presented for every case studied. An effort has also been made to construct nonisomorphic small Latin hypercubes in which only the estimates of the first-order effects are required to be uncorrelated with each other, and new designs are presented. All the constructed designs, besides their stand-alone properties, are useful for the construction of bigger orthogonal Latin hypercubes with desirable properties, using well-known techniques proposed in the literature. Copyright © 2015 John Wiley & Sons, Ltd.
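The property described above can be verified numerically: center the design, build the quadratic and two-factor interaction columns of a second-order polynomial model, and check that the relevant cross-products vanish. The sketch below uses the usual form of this criterion, which is assumed rather than taken from the paper.

```python
import numpy as np
from itertools import combinations

def check_polynomial_orthogonality(D, tol=1e-9):
    """For an n x k Latin hypercube design D, test whether the centred first-order columns
    are mutually uncorrelated and orthogonal to all squared and two-factor interaction columns."""
    X1 = D - D.mean(axis=0)
    k = D.shape[1]
    X2 = np.column_stack([X1[:, i] * X1[:, j] for i, j in combinations(range(k), 2)]
                         + [X1[:, i] ** 2 for i in range(k)])
    g11 = X1.T @ X1
    first_vs_first = np.abs(g11 - np.diag(np.diag(g11))).max()   # off-diagonal cross-products
    first_vs_second = np.abs(X1.T @ X2).max()
    return first_vs_first < tol, first_vs_second < tol
```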
17.
J. Fish, M. Pandheeradi, V. Belsky, International Journal for Numerical Methods in Engineering, 1995, 38(10): 1597-1610
We present a new approach for the fast, efficient solution of the large systems of non-linear equations arising from finite element discretization. The proposed non-linear solver builds on the advantages of the popular solution methods currently employed, while eliminating most of their undesirable features. It combines the well-known BFGS method with the FAS version of the multigrid method, introduced by Brandt [1], to form a fast, efficient solution method for non-linear problems. We present numerical performance studies that are indicative of the convergence properties as well as the stability of the new method.
18.
A semi-implicit finite element scheme is proposed for two-dimensional tidal flow computations. In the scheme, each term of the governing equations, rather than each dependent variable, is expanded in terms of the unknown nodal values, which helps to reduce computer execution time. The friction terms are represented semi-implicitly to improve stability, but this requires no additional computational effort. Test cases for which analytic solutions of the shallow water equations are available are employed to test the proposed scheme, and the results show that the scheme is efficient and stable. A numerical experiment is also included to compare the proposed scheme with another finite element scheme employing Serendipity-type Hermitian cubic basis functions. A numerical model of an actual bay is constructed based on the proposed scheme, and the computed tidal flows bear close resemblance to flows measured in field surveys.
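A schematic one-node illustration of the semi-implicit friction treatment (a generic shallow-water-style sketch, not the paper's finite element formulation): evaluating the friction coefficient at the old velocity while letting it multiply the new one turns the update into a single division, so stability improves at no extra cost.

```python
import numpy as np

def semi_implicit_friction_update(u, rhs_explicit, dt, cf, depth, g=9.81):
    """Advance a nodal velocity one step with the bottom-friction term semi-implicit.
    Explicit form:      u_new = u + dt * (rhs - g*cf*|u|*u/depth)
    Semi-implicit form: the coefficient g*cf*|u|/depth is frozen at the old velocity
    but multiplies u_new, giving the division below."""
    friction = g * cf * np.abs(u) / depth
    return (u + dt * rhs_explicit) / (1.0 + dt * friction)
```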
19.
An efficient finite element (FE) scheme to deal with a class of coupled fluid-solid problems is presented. The main ingredients of the methodology are: an accurate Q1/P0 solid element (trilinear in velocities with piecewise-constant, discontinuous pressures); a large deformation plasticity model; an algorithm to deal with material failure, crack propagation and fragment formation; and a fragment rigidization methodology to avoid the possible numerical instabilities produced by pieces of material flying away from the main solid body. All the mentioned schemes have been fully parallelized and coupled using a loose-embedded procedure with a well-established and validated computational fluid dynamics (CFD) code (FEFLO). A computational structural dynamics (CSD) case and a coupled CFD/CSD case are presented and analyzed.
20.